“If it has to be jailbroken and side-loaded in order to run unapproved code, and then disposed of when it’s reached its pre-determined end of life, what you’ve got there isn’t a computer, that’s a Smart Device.”

When Steve Jobs debuted the original iPhone it was clearly a very small computer.  There was a CPU, a screen, memory, input, output.  Almost every device we use these days, from televisions to watches to vacuum cleaners, is some sort of computer.  I have smart light bulbs that are technically computers.  I say this because I want to make a distinction between a Personal Computer (PC) and a Smart Device (SD).

A Smart Device is a computer by any technical definition, but there are some things that differentiate a SD from a PC, mainly based on how the technology is intended to be used and maintained by the purchaser. To illustrate the differences, I will compare and contrast my television, my phone, and my laptop.

My television is an LG Smart TV.  It has apps and an app store and it has some sort of operating system on it that is referred to as “firmware”.  Maybe it’s Android, maybe it’s WebOS, maybe it’s something else, honestly I don’t know and I don’t care and I’m not supposed to care.  There is a Netflix app and a Hulu app and the like, so it can do television things.  While it is technically a computer I don’t think of it as a computer and I don’t use it as a general purpose computer and neither would any other normal person.  Other than my ability to choose which channels I watch or which apps I install to tailor the television to my needs, I would never be expected to open the hardware or alter the underlying firmware, I would never “hack” my television.  The product is defined, designed, and controlled by LG and sold as a single-purpose Smart Device with a curated and controlled experience and when it no longer works as you desire, you are intended to dispose of it and buy another one.

Almost everything I just said about my television also applies to my phone.  My current phone is also an LG, coincidentally, but I’ve also owned iPhones and Android-powered phones from other manufacturers.  All of these phones are Smart Devices and, like the television, they have a proprietary hardware design with embedded firmware that is controlled by the manufacturer.  They feature some sort of app store that allows for the installation of new capabilities, but they are sold as dedicated devices, not general purpose computers.  You cannot easily install alternate firmware or execute code that is not distributed via the app store.  You cannot replace or upgrade the internal hardware.  When they fail you are expected to recycle them and buy new ones.

Now, let’s compare that to my laptop.  In this case, it’s a Lenovo Yoga 920 but I also have an older Apple MacBook sitting nearby.  In both cases, the machine comes with an operating system, similar to the firmware on the television or phone, but with one key difference.  I can pick which operating system I would like to use and even install more than one.  My Lenovo is currently running Microsoft Windows 10 but can also boot up into Linux.  The MacBook defaults to macOS but can also boot up to either Windows or Linux.  If I don’t want to configure multiple boot setups, I can run “virtual machines” within host operating systems.  For example, I routinely run an Android virtual machine on my Windows laptop (using BlueStacks) so that I can use certain Android apps that aren’t available on Windows.  There are no fundamental obstacles in place barring me from doing any of these things.  I bought the hardware, I choose what I want to do with it, I do not have to pay Lenovo or Apple for the privilege of using the hardware I own for general purpose computing, whatever that may be.  I bought it once, I own it, and I’m free to alter its behavior.  It’s more than just operating systems, though.  Let’s talk about software and data.

“Jailbreaking was never a concept in the world of personal computers and this illustrates a fundamental difference: you don’t need to break out of a jail that isn’t there.”

In the Smart Device space it is common to hear the term “side loading”.  Side loading is when you put content that is not approved by the SD manufacturer onto the device.  For instance, suppose you purchase an Amazon Kindle and attempt to put a book onto the device that you didn’t get from Amazon.  If a friend sends you an ePub or a PDF and says “you should read this” you can do that on your PC but not on that Kindle.  This is true for software running on Smart Devices as well.  The only way to run an application or a game on a Smart Device is via the approved channel.  Any other code that you attempt to run is side loaded and, depending on the device manufacturer, can even void your warranty and lead to your device being rendered non-functional.  This is why we have the concept of “jailbreaking” Smart Devices.  Hackers around the world have found the imposed limits of handset and tablet manufacturers to be frustrating and arbitrary and have therefore collaborated to free the devices from those constraints and allow them to run unapproved code.  Jailbreaking was never a concept in the world of personal computers and this illustrates a fundamental difference: you don’t need to break out of a jail that isn’t there.  Personal computers have, until the M1 Mac, never been designed to need jailbreaking.  They have never before come with baked-in limits to what code you could run on the hardware you purchased.  This has been the case with almost all the Smart Devices ever sold, but not computers.

The final point is about the hardware itself and whether or not you are intended to be able to upgrade or repair it.  If my TV or phone dies, I will recycle it and buy a new one, but if the PC in my basement refuses to boot up someday, I will repair it.  Every component in the box is replaceable or upgrade-able: the power supply, the video card, the processor, the motherboard itself.  The entire hardware configuration is modular and component based.  This used to be true of virtually every computer built or sold, but in recent years laptops have become less upgrade-able and repairable, a trend led by Apple.  It was commonplace to have a replaceable battery pack until Apple’s quest to make thinner and thinner machines put a stop to that.  Replaceable hard drives and memory were done away with, again by Apple, a few years ago and they started gluing or soldering the components in.  This trend towards laptops that cannot be altered from their purchased state is a choice by Apple to drive sales of new machines rather than allow users to update older machines, a sales decision, not an engineering one. Other manufacturers still offer machines that can be upgraded or fixed when there are hardware failures, but Apple has made the modern MacBook an entirely disposable product and many other manufacturers have followed suit.  Consumers haven’t generally complained too much since most of them didn’t really upgrade, repair, or replace things, but this freedom to alter the configuration of the hardware you own has nonetheless long been one of the defining characteristics of a Personal Computer as opposed to a Smart Device.

This brings us to the new M1 Macs and the final step in the Apple plan to transform the Mac from a Personal Computer into a Smart Device.

A lot of people seem to forget that when the iPhone was originally announced, there was a ton of skepticism.  Nobody thought that people would shell out the money Apple was asking, and Apple themselves were not entirely sure how they would fare in the market.  The most glaring omission from the original iPhone was the App Store.  Apple had no idea how insanely profitable it would be to get a cut of all that sweet app revenue and actually had planned for the iPhone to work entirely with mobile web applications on Safari.  They did not allow for third-party app developers.

When they launched the App Store it was a big deal.  I was one of the early sign-ups, having been developing software professionally for 13 years.  I really loved learning to code for the iPhone.  The code-signing and App Store signup process and all that was a pain in the ass, but I did it.  It was an exciting time but it was by no means without controversy.  Developers HATED the constraints of the App Store model.  If I wanted to write a game and give it to a friend to play on their PC (Windows, Mac or Linux), I could do it, but there was suddenly no way to write software for this new phone without paying Apple a hundred bucks a year, filing a bunch of paperwork with them, and getting their approval of my app?  This was unprecedented.  The first app I ever submitted to the App Store, Virtual Bacon, was rejected by Apple because it didn’t have enough practical use.  Of course it didn’t, it was silly, it was an app to virtually fry bacon on your phone but I was not allowed to share it with the world because Apple didn’t like it.  I couldn’t even put it on the web and let people install it themselves because Apple wouldn’t allow side-loaded apps to run.

Despite all the developers who were rankled by this new way of doing things, Apple counted on the fact that end users wouldn’t care about developers’ feelings, only that their phone was sexy, and Apple was right.  End users didn’t care.  They loved the closed device with the curated experience and made Apple the richest company in tech.  The developers mostly got over the initial shock of learning to develop for such a draconian platform.  The ones who truly wanted freedom just went to Android or the web instead, where there was more of an open road and one didn’t need to pay to play.  Apple, in the meanwhile, started to see the beauty of raking in 30% of every App Store sale for software they didn’t have to code themselves.  That was straight to the bottom line with only the overhead of the approval process, which was partially offset by collecting annual developer dues.  Apple learned that they could get tens of thousands of software developers to pay Apple for the privilege of selling their apps to Apple customers while simultaneously giving up 30% of their own sales revenue to Apple.  Never in the history of computing has a company done so little to earn so much revenue as Apple did with this model.  From a stockholder’s perspective, this was beautiful.  From a small developer’s perspective it was highway robbery.

There was one fly in the ointment for Apple, however.  The Mac.  The Mac had been around since 1984 but had never managed to garner more than about 10% of the PC market, no matter what.  It took very little time for the iDevices and the almighty App Store to overtake the Mac on the Apple balance sheet when it came to the core business of making money for Apple.  Apple launched a Mac App Store but it wasn’t the same.  The Mac App Store was an option, but not the only one.  Software could still be purchased, sold, downloaded, distributed, installed, and executed for the Mac without Apple seeing a dime in revenue.  This had always been the case and it seemed it always would be.  If Apple wanted to make the Mac more profitable, they needed to close that third party software gap, at least for the vast majority of consumers.

They also needed to sell more Macs and there they faced a second problem.  If they couldn’t gain market share, if 10% was the cap, they needed to sell more Macs to those 10% of users.  Study after study showed that Mac users tended to use their computers for much longer than Windows users.  This was trumpeted by Apple as proof that while the upfront cost of purchasing a Mac was higher, the overall Total Cost of Ownership (TCO) was lower.  If I spend 50% more for a piece of hardware up-front but its usable life is three times longer than the competition, my TCO for the more expensive machine is actually lower.  These TCO arguments were great for Mac users arguing with PC users in internet forums, but they didn’t seem to really drive sales.  The same could be said for the “halo effect”, a term that referred to the idea that consumers would buy iDevices and, in turn, decide to replace their Windows machines with Macs.  Remember the old “I’m a Mac/I’m a PC” commercials?  The Switch Campaign?  Apple tried, repeatedly, to expand the Mac user-base, but they could never quite get there.  So, they fell back on plan B.  Make Macs disposable.

The process of altering the Mac laptops to make them harder to upgrade started as early as 2012, but the final straw was in 2016 (https://www.vice.com/en/article/xygmyq/new-macbook-pros-mark-the-end-of-upgradeable-apple-computers) with that year’s MacBook Pro.  The desktop iMac was also redesigned along similar lines to make it nearly impossible for a normal person to do so much as add RAM or replace a failed drive.  The entire Apple computer lineup, with the exception of the insanely expensive Mac Pro desktop machine, was designed to be thrown away.  The one exception, the only remaining modular machine in the Apple lineup, currently has a STARTING price of $6000.  So, while it is technically true that Apple still sold an upgrade-able machine, the vast majority of users, 99% of the Apple consumer base, would never even touch one.

This strategy allowed Apple to stimulate new Mac sales to the same people who currently owned Macs, but not as quickly as Apple would have liked.  It became important to establish a schedule for obsolescence for the Macs just as they had for the iDevices.  Users needed to hit a point at which they had to buy new hardware to run the latest software (even if that point had no real technical rationale).  Apple decided to end the Mac OS X operating system development group and instead converge the iOS and Mac OS dev efforts.  They even renamed Mac OS X to macOS to be more like iOS and watchOS.  Apple expects iPhone users to buy a new device every two years, iPad users more like three, but Mac users were holding on to machines for five to seven years.  That wouldn’t do.  The solution? Keep pushing out updates to macOS and cutting older machines off the supported hardware list.  Again, this strategy worked to drive adoption to newer hardware and stimulate Mac sales, up to a point, but it also had the internal benefit of allowing Apple to avoid any responsibility for backwards compatibility with older hardware and thereby save internal development costs.

There was one final piece to the Mac strategy that is worth noting.  It was certainly going to be controversial when they made the Mac hardware disposable, but they powered through.  It was the same logic by which they removed the headphone jack on their phones, closing another gap in their ultimate control of the user experience.  Both decisions met with initial user resistance but were ultimately copied by competitors.  The final piece was not a case of removing something, the final decision was the choice not to add something: a touchscreen.  Apple was the major innovator and pioneer in the development of touch-friendly computing via iOS.  iOS is, at its heart, nothing more than a touch-optimized version of Mac OS X.  It is striking, then, that Apple is the only major computer company that does not offer a touchscreen laptop or desktop and has no plans to ever do so.  The Lenovo I am using to write this post can convert into a tablet by folding in half and has a very nice touchscreen.  My phone and my iPad are both touchscreen enabled.  My Kobo e-book reader, touchscreen.  My work PC?  Touchscreen.  But the Mac?  Never.  Why?

The obvious reason, again, is revenue.  Simply put, a touchscreen Mac would cannibalize iPad sales.  Rather than do that, Apple opted to develop the iPad into a laptop replacement, even going so far as to recently market the iPad under the tagline “your next computer is not a computer”.  The App Store revenue on the iPad alone probably dwarfs revenue for the entire Mac product line.  Apple figured they didn’t need a touchscreen Mac, they just needed people to replace their Macs with iPads.  For many consumers, this is enough, but there is still this stubborn group of users who want an actual computer.  They still buy MacBook Pros, iMacs, and Mac Minis, and those who are really rich might even buy that $6k machine.  These users balk at the idea of attaching a keyboard to an iPad and pretending it is a general purpose computer.

I get it.  I have an iPad Pro with a keyboard sitting here and a MacBook Pro.  They run almost the same software, they are almost the same machine, but the MacBook can simply do a lot more.  If there were only one litmus test needed to highlight the difference, it would be this: I cannot write software for the iPad by using the iPad.  Let me repeat that.  I cannot create software for an iPad by using an iPad.  In order to create software for an iPad, I need a Mac.  This stands in stark contrast to every personal computer ever made.  Personal computers have always allowed the user to create and compile software on the machine itself.  The freedom to code on a self-contained machine.  The iPad fails that test and the Mac passes it, and for many, myself included, this makes the iPad a Smart Device and the Mac a Personal Computer, even setting aside all the other differences.

So an iPad with a keyboard isn’t a full replacement for a MacBook Pro and can’t be unless iPad users (and iPhone users) can code on their own devices, but they can’t.  If Apple made a touchscreen Mac, enough of their users would prefer it to the iPad+keyboard option that it would eat into iPad sales, so they won’t offer one.  How do they resolve this?  By closing the final gap on the Mac.

I’ve already discussed the fact that the Mac hardware refresh cycle was shortened by a move to disposable hardware and aggressive software updates, how Apple has consistently avoided adding the now industry-standard touchscreen to the Mac to avoid harming iPad sales, and how they have managed to reap massive revenues from the App Store model on iDevices but generally failed to see the same results on the Mac.  The remaining pivot in strategy, after trying all these other avenues, was pretty obvious and I am certainly not the first person who saw it coming.  The final step was to put the Mac in the same “jail” as the iDevices and thereby force all Mac software revenue to come through the Mac App Store.  There was only one problem.  Intel hardware.

The MacBook Pro and the Lenovo Yoga I keep referring to are, in almost every meaningful sense, the exact same machine.  Both are light, modern laptops with metal shells, similar sizes, similar keyboards, SSDs, and Intel processors inside.  They can both run Windows or Linux, and, although Apple has made it difficult to run macOS on non-Apple hardware, both machines can technically run that as well.  There are a few differences.  The Lenovo has a touchscreen and a fingerprint sensor and can convert into a tablet.  It also cost much less than the Mac did.  Both machines, however, are fundamentally the same computing architecture.  Both machines allow me to write code that I can run on them.  Both let me explore and hack and use the computer however I wish.  As long as Apple machines are based on standard Intel hardware there is little that can be done to change this fact.  It was therefore not surprising when Apple announced their intention to take the final step and make their own proprietary processor, the M1.

This was a long time coming.  Apple had to build the facilities to produce chips and also develop the expertise in chip design.  They began with the A-series chips that have powered the iDevices.  With complete control over both the hardware and the software, and free from the hassles of making upgrade-able hardware, Apple could alter and develop the A-series processors in any way they desired with absolutely no consumer impact beyond the usual “buy a new device every couple of years” thing.  Apple could, and often did, even cause apps you already owned and purchased to cease functioning with no warning when it would have been extra work to maintain backwards compatibility with those apps.  They tested the waters and found that consumers got used to several things that were once unimaginable: daily software updates pushed out to devices, loss of ability to downgrade, and the straight-up deletion of apps that the user had bought and paid for without any sort of refund or credit.  This slow boiling of the consumers allowed them to streamline their hardware development and maintain the 18-month cycle and healthy profit margins.  Eventually they needed to move the Mac to this more-profitable business model in order for it to continue to be worth it for them, and after years of experience with the A-series processors, they finally reached the promised land with the M1.  They finally have a proprietary processor that they believe they can sell.

The rationale Apple has pushed for all of these moves?  Ease of use, or security, or thinner and lighter, or faster, or freedom to innovate, all of these reasons for moving the Mac in this direction might be good marketing PR, but they are fundamentally bullshit.  There are ways to develop products that are secure, thin and light, fast, innovative, and all the rest without also being locked down, proprietary, closed or disposable.  Other manufacturers recognize this and Apple of old did as well.  But the iDevice paradigm is such a good business model and you wouldn’t want a free computing experience to get in the way of a good business model, would you?

The M1 Mac will not have a single upgrade-able component and it will be incapable of executing unsigned code.  An individual developer will be capable of self-signing code on their own machine for development purposes, but any distribution of software to anybody else will require that they pay Apple for the privilege, unless they distribute their software as source code and the other user signs and compiles a copy for themselves.  The M1 Mac will be unable to run other operating systems.  Linux and Windows will not be options.  Even if they could somehow be tricked into running on the M1, they would not be usable; it would be a jailbroken device at that point and could be bricked.  An M1 Mac will be a disposable, fully controlled device, not a general purpose computer, no matter how many apps you might have available to run on it.  For most people, this is irrelevant and Apple is counting on that.  Very few people think about the developer community that creates the software they consume or the issues of right to repair and hardware and software platform openness that those developers are passionate about.  Most consumers just want their computer to be a TV with a keyboard and the internet and for these consumers, an M1 Mac will be indistinguishable from whatever other Apple stuff they use today and Apple will make a mint.

Apple has every right to move the Mac to this closed model, but I, as a consumer, have every right to reject the model and opt for freer, more flexible, and less limiting options.  The Mac was once the most powerful Personal Computer on the market, capable of running any OS and almost any code you could imagine with a long life, high end engineering, and upgrade-able components that justified the higher upfront costs, but with the M1 Apple has taken the final step in the gradual and intentional transformation of the Mac into just another Smart Device, that one oddball member of the iPhone family that happens to have a physical keyboard stuck to it.  It’s ceased to be a PC, it’s now the macDevice.

Ironically, I believe this may ultimately lead to touchscreen MacBooks.  Once all sales on the Mac platform are forced through the lucrative App Store gates and third party innovation on the Mac is effectively quashed, Apple may feel freer to allow the Mac to be more iPadesque just as they have allowed the iPad to get more Macish.  With the macDevice as just another form-factor variant of the same basic product, the freedom to blur the lines between iPhone/iPad/macDevice without harming revenues of other product lines may get the basic MacBook hardware design to evolve for the first time in a decade.  Who knows?  I know one thing, I won’t be along for the ride.  The Apple strategy to achieve the macDevice without market rejection has been well-executed and blindingly obvious for many years.  Each step has followed logically from the one before it, and I jumped ship several years ago, one of the earlier Mac faithful who Apple failed to lead down this particular garden path.  I wish to own the things I own, maintain the right to repair and alter those things, and retain the freedom to use them as I see fit for as long as I wish.  If I purchase a product, I do not wish the manufacturer to then dictate the terms under which I can use it or maintain control over how long I can use it.  On more than one occasion, Apple has disabled and deleted software I depended on or enjoyed, while providing no rollback plan and no financial credit, with little or no notification.  For a decade they have removed jacks and ports and hardware options to suit themselves and their business model with every new generation of their products, providing less and less in the way of choice while increasing their grip on users who are too invested in the Apple ecosystem to ever be willing to spend the time and effort to escape.  An Apple consumer can do anything they like with an Apple product unless Apple doesn’t like it and they can only move to another platform by repurchasing music, apps, devices, cables, videos, and peripherals.  The macDevice belongs in this iteration of Apple.  It is the embodiment of Tim Cook Apple.  But don’t call it a Personal Computer.  If it has to be jailbroken and side-loaded in order to run unapproved code, and then disposed of when it’s reached its pre-determined end of life, what you’ve got there isn’t a computer, that’s a Smart Device.

Last week I was pretty depressed.  I didn’t really want to do much of anything.  I lay around the house, played video games, felt like garbage most of the time.  One thing I have learned is that when I am feeling that way it can be beneficial to pick up one of my many languishing projects and attempt to make some sort of progress on it.  This can mean taking a half-built model car and painting a few pieces, or soldering some bits on to a circuit board for a guitar pedal that I never finished making, or maybe performing a small repair on something.  Whatever IT is, I have found that performing some small task that feels like progress can be like blowing a little air on last night’s embers when the fire feels like it’s mostly burned out.  When I’m really low I don’t even want to do that much but if I make myself take the first step, I tend to fall forward to the next step.

This particular week I have been distracting myself from my depressing internal monologue by browsing guitars online. I have several guitars.  I do not need another guitar.  I do not, however, have a Fender Jazzmaster and I nearly convinced myself that I don’t need another guitar, but I do need a Jazzmaster.  Blowing hundreds of dollars on a guitar might distract me a bit further but I am well aware that it won’t actually get me out of my funk.  In my experience, “retail therapy” is an illusion created by capitalism.

No, I didn’t buy a Jazzmaster but I DID remind myself that I have a clone of a Rickenbacker 350 down in the basement that is about 90% complete.  I built the thing but I never painted it or installed the electronics and hardware.  What if, instead of buying a guitar, I brought the “Ryanbacker” one step closer to life?  So, that’s what I did.  I took it out, took stock of what needed to happen next, did a few woodworking and masking tape things to it, and ordered some paint for it from StewMac.  Once I had gotten as far as I could with that I was in the mood to do something else.   I owed my grandma a letter so I sat down at my typewriter and wrote it.  In the letter I mentioned a song I recently recorded so I decided I should burn a CD for her with a few tunes, including that one.  I logged on to the studio computer and before I knew it I was cataloging some old session files and had made some new reference mixes of some old tapes.  And so it went, all day, one thing followed from the one before and instead of going to bed with another day of nothingness, I went to bed feeling pretty OK with how the day went.  It carried over to Sunday.  I woke up in the morning and decided that drawing sounded like an enjoyable thing to do and I drew a picture of Buckley because he happened to be right there, modeling for me.  When I was going through old tapes on the studio computer I discovered a couple of demos for songs I had written/recorded in the late 90’s that I had no recollection of.  One of them was going through my head much of the day and when I woke up this morning it was still there along with a few new musical ideas.  I’ve been pretty starved for musical ideas for quite a while.  So that was nice.

The point of all this is that I had to give myself the initial push off the couch but once I was doing stuff, other stuff just followed, and none of it was life changing but it was all better than another day sitting and being depressed.  I collected together some drawings, paintings, and other things I have made into some albums on Facebook just to remind myself of what I can do when I just DO something and it all helped drag me a couple of degrees further out of the dumps.

When the Ryanbacker paint gets here, I’ll paint it.  I did another drawing this morning before work, this time a pencil sketch self-portrait.  I intend to pick some project today and make it go forward a bit, but I don’t know what it will be.  I’m not particularly in the mood to do any particular thing, but that’s not the point.  Doing the thing will help me be in the mood to do another thing, etc, etc.  That’s the point.  I will forget this again and again and I will remember it again and again until I die, but, it will never stop being true.

I often find myself wondering how people can live in the 21st century, using computers and smart phones, benefiting from modern medicine and transportation, seeing the very fruits of scientific discovery in their daily lives EVERY.  SINGLE.  DAY.  while continuing to deny science and reason.

I once had an infuriating conversation with somebody who believed in the flat earth.  When I asked him if he ever used the GPS on his phone he said that of course he did.  When I explained that the very existence of GPS demonstrated that the world was not flat he proceeded to give his imaginary explanation for how the GPS system “really” worked using cell towers, not satellites.  When I explained to him that a) I am an engineer and I know first hand it doesn’t work that way, b) GPS works even when you are nowhere near a cell tower, and c) the GPS system predates the cell towers in the first place he was completely unmoved.  He had seen a video on YouTube and that was all the evidence he required.  Speaking to a person he knew who had first-hand knowledge of the topic was not as convincing as his “research”.

I asked him why, if the earth was flat, I was able to see the curve of the earth from the airplane I had been in the week before.  Apparently all airplane windows are designed in such a way that they distort your view to give you the illusion of a curved horizon, no matter what altitude or angle you are looking from.  This apparently includes the flat, front-facing windows that the pilots see out of as well.  It’s all a part of the conspiracy to keep the truth hidden.  NASA is responsible.

OK, but NASA is only in the USA.  What about all the other countries with space programs and airlines?  Why has no rogue nation ever told the truth about the fake GPS system and the flat horizon line?  Well, that just proves NASA is part of a global conspiracy, not a local one.

It went on like this.

For hours.

I demonstrated that I had satellite internet, showed him the line of sight to the satellite I was using, and demonstrated the data latency times to show that the information had to travel a great distance from my home to the satellite, unlike the cell tower.  I explained how that signal travel time could be used to measure the distance between a radio and a satellite, how triangulation worked, and how, if you know your distance from three satellites, you can determine your position in three-dimensional space.
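For anyone curious about the geometry I was sketching out for him, here is a rough illustration in Python (strictly speaking this is trilateration, distances rather than angles, and it ignores the receiver clock error that real GPS resolves with a fourth satellite).  The satellite positions and ranges below are made-up numbers, just to show that three distances pin you down to at most two mirror-image points in space:

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Intersect three spheres: satellite positions p1..p3, measured ranges r1..r3."""
    p1, p2, p3 = np.asarray(p1, float), np.asarray(p2, float), np.asarray(p3, float)
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)       # unit vector from sat 1 toward sat 2
    i = np.dot(ex, p3 - p1)
    ey = (p3 - p1) - i * ex
    ey = ey / np.linalg.norm(ey)                   # second unit vector, in the satellite plane
    ez = np.cross(ex, ey)                          # normal to the satellite plane
    d = np.linalg.norm(p2 - p1)
    j = np.dot(ey, p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))     # two mirror-image solutions: +z and -z
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez

# Made-up satellite positions and a receiver at (3, 4, 5); the ranges are exact here,
# so one of the two candidate points recovers the receiver position.
sats = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]
receiver = np.array([3.0, 4.0, 5.0])
ranges = [np.linalg.norm(receiver - np.array(s, float)) for s in sats]
print(trilaterate(*sats, *ranges))
```

Real GPS receivers do the same thing with four or more satellites and a lot of error modeling on top, but the basic idea, distance from timing and position from geometry, is what I was trying to get across.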

This got me nowhere.

I showed him photos I had personally taken of Jupiter and its moons that proved that they existed, they were round, and they were visible to the naked eye.  I demonstrated that the only way they would ALWAYS be round is if they were spheres, because if they were discs, any change in their angle to our planet would change their visible shape.  I pointed out that nobody has ever observed any planetary body showing as anything other than a sphere so there was no reason to believe our planet should be any different.
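To put a number on that argument: a thin disc viewed at a tilt projects to an ellipse whose short axis shrinks with the cosine of the tilt angle, while a sphere projects to a circle from every direction.  A trivial sketch, with arbitrary example angles:

```python
import numpy as np

# Apparent minor/major axis ratio of a thin disc seen at various tilt angles.
# A sphere would read 1.00 at every angle, which is why planets always look round.
for tilt_deg in (0, 30, 60, 85):
    ratio = abs(np.cos(np.radians(tilt_deg)))
    print(f"disc tilted {tilt_deg:2d} degrees -> apparent aspect ratio {ratio:.2f} (sphere: 1.00)")
```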

None of this mattered.  Not even a little.

I showed him, using a lightbulb and a plate, how the flat earth model he believed in would mean that the sun would either ALWAYS be visible in the sky OR would set in the south and rise in the north.  He didn’t deny that the sun rises in the east and sets in the west, nor did he deny that the sun is only visible to half the planet at any given time, but he still clung to his belief even though it was clearly impossible for those facts to work in his model.

Many times in this conversation I said “I really don’t care what you believe and it’s not my job to change your mind, can we please talk about something else?” but he insisted that I needed to see the truth of his belief or else I would be falling for the Great Lie of the round earth and wouldn’t be able to accept the Bible and be saved, and then he would proceed to trot out some other easily debunked “evidence”.

There was no escaping the topic, no possible way to change his mind, and no way for him to see that he was essentially eating an apple while denying the existence of apple trees every time he punched an address into Google Maps.

If he hadn’t been an uncle I hadn’t seen in years I would have just given up on civil conversation and mocked him relentlessly until he at least agreed to talk about something, anything, else.  Or I would have shaken my head and walked away.  I did neither.  It was excruciating.

I’ve replayed this incident many times, trying very hard to imagine how his brain allowed him to hold onto such a patently, provably, demonstrably false idea even as he was having it demonstrated to him.  It was really something and has really taxed my imagination and empathy, while simultaneously giving me a valuable insight into reality-denialism in all its forms.  What is it like to be drowning while denying the existence of water?  How do people wind up like this?  Are they born this way?  Conditioned by religion?  What is it?

I still can’t explain it to my satisfaction.  When I was a religious believer, I believed because I was presented with cherry-picked and distorted evidence that gave me the illusion that my belief system was grounded in observable reality.  Eventually, I gained enough knowledge about reality to accept that my beliefs were fantasies and I stopped holding those beliefs.  I was amenable to reason as a believer and also as an unbeliever, once I had enough facts.  But there are people who are not just unreasonable, but who work like mad to simultaneously convince themselves that they are, in fact, reasonable rather than just accepting that reason has nothing to do with it.  They simply like the idea that there are fairies at the bottom of the garden, and that’s that.

I can respect the “blind faith” people about 3% more than the “here are my carefully selected ‘reasons’” people.  None of this would matter all that terribly much except that anti-reality political and religious organizations have massive amounts of power in the society I live in.  The anti-reality people are destroying the environment, killing tens of thousands of people during a pandemic by resisting science and reason, and have cut me off from any meaningful relationship with my parents and most of my surviving siblings.  I can’t easily let go of the hope/notion that there is some way to get through to them.  I know that is a foolish hope to cling to but it’s like an itch that I can’t stop scratching.

Whatever evolves on this planet to replace our species after these people drive us to extinction, whenever sentience next rears its ugly head after we are dust, I sincerely hope they find a cure.

I’m proud to announce the completion and pending release of a new album.  I have once again joined creative forces with Lemuel “Ace” Herlihy (aka: Michael Heuer) and a new album, entitled Amateurs is the result.  Our last album was called Nininger and was released six years ago.  On that one we each contributed a 35+ minute long ambient/noise piece under an anagram-derived moniker.  I was listed as Tasty Rerun and Michael as Lemuel “Ace” Herlihy.  This year we collaborated much more closely, composing and recording together in a series of studio sessions to create a very different beast.  Also this time out Tasty and Lem have decided to band together under the name Nova Pill Beam.

Mixing and mastering work is underway, release will be some time next month.  You’ve been warned.  🙂

Two blog posts in a row, what??

This morning I finished reading an anthology volume called Great Modern Short Novels or something to that effect. The novellas were:

  1. Lost Horizon (James Hilton)
  2. The Red Pony (John Steinbeck)
  3. The Third Man (Graham Greene)
  4. A Single Pebble (John Hersey)
  5. The Light In The Piazza (Elizabeth Spencer)
  6. Seize the Day (Saul Bellow)
  7. Breakfast at Tiffany’s (Truman Capote)

I had never read any of them and I enjoyed them all.  I had seen the film adaptations of Lost Horizon, The Third Man, and Breakfast at Tiffany’s, but even those held some surprises in the reading.  Breakfast at Tiffany’s specifically is much more modern than the film version would have you believe. 

Among multiple pieces of dialog that I found surprising for 1958 was when Holly Golightly was talking about marriage and said “If I were free to choose from everybody alive, just snap my fingers and say come here you, I wouldn’t pick Jose.  Nehru, he’s nearer the mark.  Wendell Willkie.  I’d settle for [Greta] Garbo any day.  Why not?  A person ought to be able to marry men or women or—listen, if you came to me and said you wanted to hitch up with Man O’ War, I’d respect your feeling.  No, I’m serious.  Love should be allowed.  I’m all for it.  Not that I’ve got a pretty good idea what it is.”

Same sex marriage being casually discussed by a character in a novel in 1958?  And it’s far from the only instance in the book.  On another occasion she suggests that Rusty Trawler should “settle down and play house with a nice fatherly truck driver”. 

That’s not the only dialog that seems more apropos to 2020 than 1958.  You know that part in the movie where she’s trying to get her cat to leave and she tells the cat to “Beat it!” and “Scram!”?  In the book she also tells the cat to “Fuck off!”  Hard for me to picture Audrey Hepburn voicing that dialog in the movie.

Like I said, modern.  The story has problematic elements, but I am just noting that I was a bit surprised by Holly Golightly, despite seeing the film version.  In the book she is barely 19 years old but she’s had eleven lovers (“not counting anything that happened before I was thirteen because, after all, that just doesn’t count”) she talks about dykes and gay marriage and bi-sexuality, drops an F-bomb, happily takes money from sugar daddies, and rather than staying with Paul at the end, she leaves the country and he ends up with the cat.  This is hardly a new revelation (https://www.theparisreview.org/blog/2018/12/21/was-holly-golightly-bisexual/) but it was definitely not the BaT I am familiar with.

I haven’t updated this here blog for a few months because I couldn’t. The admin login was broken and I kept meaning to find time to fix it but failing. Today I am happy to say I figured it out, got the site updated to WordPress 5.3, and switched the theme to the new Twenty Twenty theme. I plan to tweak that a bit, but hey, at least the site is fixed. Woot.

I will never forget the first time I encountered the internet. It was 1994 and I was working at my very first computer programming job at a small sales-lead management company near Minneapolis. I had written a DOS program called EDT that used a modem to dial up to various magazine publishers and download their sales leads over the phone. One day my manager, Michelle, entered my cubicle and handed me a piece of paper and said, “I am not sure how this works, but this publisher says that they want to provide their files over something called the internet. Can EDT do that? I signed us up for an internet service account, these are the instructions to get started with our username and password.”

It was the first time I had ever heard the word “internet”. I took a look at the printed instructions. They were from a dial-up ISP called Skypoint. There was a phone number and instructions to connect with a z-modem terminal program. EDT supported z-modem so I dialed up and connected to the internet for the very first time using the very first piece of software that I wrote at my very first job as a computer programmer. Once I was in their text-based menu system, I managed to follow the directions to download something called Trumpet WinSock, which added support for something called TCP/IP to my Windows 3.1 machine, and I was also able to get a piece of software called Mosaic 0.89, which was a browser for something called the World Wide Web.

It took me the better part of the afternoon, but pretty soon I was able to access the sales leads via something called FTP, I loaded my first web page at skypoint.net and my life was never quite the same. I signed up for a personal Skypoint account almost immediately.

The internet of that era consisted primarily of email, listservs, FTP sites, Gopher servers, a fledgling and quite small Web that was almost entirely text-based, Usenet, IRC chat, and dial-up telnet access for when you just wanted to efficiently access information instead of fiddling around with graphics. The dial-up speed was insanely slow; my modem was only able to connect at 9600 bps, about one sixth of what we would now think of as “dial-up speed”. Windows 95 didn’t exist yet and when it launched it didn’t even include internet access because Bill Gates wasn’t yet sure it was going to be a thing worth doing.

The internet I met in 1994 bore very little resemblance to the internet of 2019. It was global but personal, open yet idiosyncratic, difficult to navigate but immensely rewarding. I would come into the office early just to spend an hour or two exploring. It felt like the beginning of a massive revolution, a cultural shift, that would change everything for the better. I fell in love and for the subsequent 25 years I have stayed online and worked and lived on the cutting edge of internet and computer technology. I have owned many computers, built many websites and web applications, met countless people, and rarely gone more than a day or two without a visit to that virtual electronic universe.

About 10 years ago the internet underwent a profound change with the move to mobile broadband, the centralization of e-commerce, the rise of social media, and the eventual dominance of the online world by the Big Tech companies: Apple, Google, Amazon, Facebook, and (to a much lesser extent) Microsoft. The wild, weird, somewhat chaotic world of the internet I first fell in love with started to be replaced by the corporate internet we all interact with today, and I’m here to say that when that happened, we collectively lost something, and I miss it.

I no longer love the internet. In fact, I kind of hate it.

I grew up in a world that had three television channels on VHF and one or two low-watt local stations that were sometimes watchable on UHF. If you wanted to watch TV, you watched whatever happened to be on those channels. Even when I bought my family’s first VCR with my paper route money, I still had to read the TV listings in the newspaper, circle shows I might want to see, and program the VCR to record them if I didn’t want to miss them. Media types, music, movies, TV shows, books, they were all very different from each other, not simply different types of bit streams, and if you wanted to hear an album or read a book or watch a movie, it was not as simple as firing up Spotify or Netflix. It was incredibly inconvenient and required a lot of planning and intentionality. Back when I first encountered the internet, I was thrilled by its promise to create exactly this world we now have. The ability to read all the books, watch all the movies, hear all the music, it seemed like such a great idea, and it really was, but now that we are here, I find that it is not without a price, and the price is impact. It turns out that when everything is convenient and available, nothing seems all that terribly valuable or interesting and distraction becomes a serious concern, as does complacency.

Everything, from the works of Marcel Proust to a cat chasing a laser pointer, melts into a sort of stew of sameness. People seem less interesting when you just see what they post on Instagram. To paraphrase a bard from the 1980’s, it feels like there’s 57 channels and nothing’s on. When I go online today, instead of a sense of wonder and curiosity, I feel a vague disgust and boredom, and this makes me sad. Everybody performing for each other, myself included, every click and site visit tracked for SEO and marketing purposes, ultra-intrusive ads, and an endless stream of trivia. This is not the internet I fell in love with.

I’ve recently decided to try to do something about this but I have yet to find what I feel has been lost. In the last couple of years I have made several major changes to my computer habits. I replaced my iPhone with an Android phone, shut all the notifications off, and started using a web browser that blocks all trackers. I have implemented rules for myself for social media usage, limited how often and to what extent I engage in Twitter, Instagram, and the dreaded Facebook. I’ve even gone so far as to rehabilitate a few old computers that don’t have WIFI or modern web browsers so I can use computers to do things like writing and music production without the temptation to lose hours of my life to memes and viral videos, news and gossip, and the rest of the modern digital stream of endless distraction. I’ve blacklisted some websites to remind myself to keep away from them and all of this just seems to make me feel a little more resentful.

I don’t like feeling like I have to be on defense every time I go online. I don’t like the default assumption that I should always be available to respond to tweets, texts, IMs, or even phone calls. It’s not like I want to be some sort of Luddite, not at all. I love the speed and power of modern computers and I love having access to the world’s media on-call, and I probably have more technology in my backpack on a daily basis than most normal people have in their homes, but I’m really struggling to enjoy what we have collectively created. What was once special is now common, and what was once empowering now feels like a sink for time, attention, and energy with very little reward. Where once computers made me feel more creative, they now feel more like they are trying to seduce me into mindless consumption and the amount of work required to regulate the intrusiveness of the technology is tiring and disheartening.

I think there is a philosophical difference in the way computers were designed, envisioned, and used during the initial phase of personal computing and the role they play in modern life and, personally, I’ve found that the earlier ethos fits my personality better than the current one.

Take Apple, for example. When the original Macintosh computer was being designed, the vision Steve Jobs had was “a bicycle for the mind.” Maybe not the most obvious metaphor, sure, but I always liked it. A computer was a tool that allowed the user the power to create things and do things that would otherwise have been beyond their reach in the way that a bicycle would expand the power of our personal mobility, but unlike a car, would not replace it. In high school I used to go to Kinko’s to use the desktop publishing Macs to create and print inserts and labels for tapes to distribute music recorded by my brother and me, which we then sold at school. Later, when video editing became possible thanks to the early iMacs with Firewire/DV cameras, I made short films and even wrote a few screenplays with the hope of shooting a low budget feature. Digital audio workstation software allowed Rhett and me to record albums of much higher sound quality and complexity than would have ever been possible with earlier tape machines. New machines gave us higher quality graphics, faster and easier manipulation of video and audio and images, and more storage and connectivity to cameras and audio equipment. The early internet gave us a platform to promote our work to random strangers all over the world. For years Apple seemed to be focused on making better and better tools for creativity, until they decided the real money was in apps and phones and watches and titanium credit cards. Where once they focused on empowering the user to create, the current focus of Apple is creating a sticky consumer experience, and they are very good at it, but the difference in the product is striking.

It is actually hard to get anything useful done on a computer that is constantly pushing notifications and updates at you, that has embedded social media sharing in all the programs, that has limited ports for connecting to other devices, or limited storage that encourages using their Cloud, or “consumer” grade applications that are so dumbed down that your biggest creative decisions are which filters to apply or which canned beats to loop.

This train of thought has caused me to start considering alternatives and I’ve found a couple and surprised even myself. I started thinking about, of all things, knives. I have a pocket knife that I use for one thing or another almost every day. It’s a single blade with a wooden handle. Nothing fancy. I used to carry a Swiss Army knife with a bunch of tools built in. A corkscrew, a screwdriver, a toothpick, a pair of scissors, but 99% of the time I discovered that I didn’t need a crappy version of all of those tools, I just needed a good version of a small knife, so I switched. I chose specialization over versatility. I thought about what that philosophy would mean for writing. If a modern computer is the ultimate digital Swiss Army knife, what would a single purpose writing computer need? The answer was: not very much. A clear screen, a great keyboard, the ability to save text in a format that could be edited and potentially printed or published with modern tools, and most importantly, no distractions. I found something called the Freewrite, a Kickstarter-launched device with an e-ink screen, WIFI, and a kick-ass mechanical keyboard, but it was rather expensive. As I was contemplating it I realized I already had experience with a computer that was phenomenal for writing and little else, and it was inexpensive to boot. I remembered that I had once owned a 1991 Apple Macintosh Powerbook 170, a chunky, black and white, extremely primitive laptop, and that I had written tens of thousands of words with it before “upgrading”. A few months later, I’ve reacquired and refurbished a few old Powerbooks, all predating the turn of the century, and it has worked. I’ve taken to writing again and it’s fantastic. Trains of thought stay on their tracks, distractions disappear, and writing is fun again. I’m writing this on a 1997 Powerbook 3400c using a copy of Word that is so old that the built-in dictionary doesn’t know the word “internet”.

This computer can connect to a network. I can download and install old applications on it, but it can’t surf the modern web. It has a full-size keyboard with desktop-style keys unlike a modern MacBook with its low-profile chiclet-style keys. The screen is clear and crisp, more than enough for text. There are no push notifications and there hasn’t been a software update available since Bill Clinton was the president. As a tool for focusing, in a contemplative manner, on forming thoughts into sentences and paragraphs, it’s nearly perfect, but it is not social, and its multimedia capabilities are laughable.

After I write this, I will transfer the file to my modern laptop, run it through spellcheck and do some editing and proofreading and then I will post it on my blog. It is almost a dead certainty that I wouldn’t have written anything this long if I was working on my modern machine. I just don’t have that kind of self-control.

This experience of using a single-purpose computer to do a focused task has been a sort of revelation. I have found that my writing productivity has jumped so spectacularly with the switch to this older style machine that it caused me to consider taking a similar approach to two other areas of interest: audio production and video editing.

When it comes to audio production, going back to an old computer seems to me like an absurd idea. When I want to record music I just want to focus on the performance. A vocal part, a drum track, a bass line. I don’t want to fuss with storage limits or old technology. So, I considered what I would need. First, I wanted the ability to record at least eight tracks of audio simultaneously with very high sound quality. Second, I wanted to be able to easily transfer a recording to a computer for editing, mixing, and mastering. Third, I wanted to be limited to audio recording, again with no distractions. This led me to purchase the first dedicated recording machine I’ve owned in over 20 years, a 32-track all-in-one digital studio, which was a few hundred bucks, not much more than what I paid for a cassette four-track in the 90’s. I have had a couple of sessions with it and again, it has been an extremely rewarding experience. It’s been many years since I’ve been able to just plug in and hit record. I usually spend 45 minutes getting things set up, I get prompted to install updates to my recording software, and I lose my momentum and the session fizzles out. With a dedicated machine, I have none of that. Unless it breaks, I can just turn it on and work.

After I finish tracking a song, I just take the memory card out, connect it to my computer, transfer the files and edit and mix. My god I’ve missed working like that.

I don’t think I’ll ever go back to the Swiss Army knife.

Video editing will be a bit more of a challenge, but I have a plan for that as well. Shooting video is the easy part. I have a digital SLR Nikon camera as well as a few other HD camera options sitting around. They aren’t 4K, but, that’s OK. I will use a relatively modern computer for editing video, it would be silly not to, but I don’t intend to work on my mobile, connected, laptop. No, I am taking a desktop machine, an older Mac, and setting it up with an extra monitor, a copy of Final Cut Pro, and no internet connection unless I choose to plug it in. When I want to cut something together, I will shoot the footage, load it up on that machine, and edit it, away from the digital noise. When it’s ready I will export it and share it with the world. That’s the only thing I will use that computer for and as long as it can do that task well enough for my needs, that’s all it will do.

What I’m sacrificing in convenience, I’m making up for in focus. That’s the deal, and it’s a fair trade.

The modern internet is soul killing. Modern computers are so infested with it that they make focused creativity challenging, but it’s a solvable problem and I look forward to a year of enjoying computers and their role in creativity again, even if it means I spend less time online. The old internet is gone anyhow, the new one is boring and intrusive, I might as well get back to riding bicycles.

One more thing…

I started this whole post talking about the demise of the old, weird, internet and the rise of corporate consumer computing, but I’ve only discussed solutions for restoring my sanity in creative domains. What about the actual internet? Well, they may be less popular than they once were, but many of the old ways of doing things online still technically exist and new ways are being invented. Blogs are still out there, even if Medium has hijacked them. People are still out there, even if Facebook has turned them all into an endless feed of reality programming. There is a fledgling movement to reclaim the internet and make it personal again. There is a website called Indie Web that I have recently discovered that encourages the development of ways to share things online without Amazon, Facebook, or Google, and I plan to get mindful on that front as well. In the next few months I hope to start running my own web server again, on my home connection, with my own domain, and take control of my online presence, data, and profile. It’s not enough to not be tracked while shopping Amazon for vegan jerky, I want to host and control my own words, images, music, and videos. It will take some work, but that’s nothing new. It was a lot of work to establish NuclearGopher.com and distribute music 20 years ago and very little has been done to make being independent of mega-corporations easier in the last decade, but it’s work worth doing and work that I know how to do. I hate the internet as I’m currently experiencing it, but I haven’t given up on the idea of being connected to the wide, weird, world, no matter what the monetizers and influencers want me to do. I’m looking forward to making this interesting again, I hope others do the same.

I am a person who listens to new music.  At all ages, at all stages in my life, I have made a point of exploring current music, new releases, the latest things.  I try not to discriminate by genre or popularity level when exploring new music because, hey, you never know.  Every couple of weeks I bring up Spotify (and before that, Rhapsody) and I hit play on all the new releases, seeking treasure.  I will give most songs anywhere from 20-60 seconds to provoke a positive or negative response before moving on; the ones I don’t mind get listened to all the way through, and the ones I really like get added to my Library.

I have discovered that I like some Taylor Swift, and some Lorde.  I have discovered that I like tUnE-yArDs and Yaeji.  I have discovered Young Fathers and M.I.A. and Thao Nguyen and the Get Down Stay Down and plenty of other artists over the years listening like this.  Trying to keep an open mind, attempting to browse without pre-judgment.  If I hadn’t done this I probably would never have heard Babymetal, and that would have been a tragedy.

I have other avenues for exploring new sounds.  Magazine articles, websites, a couple of great podcasts, but those all have an editorial filter and I usually just prefer to let music tell me if I like it or not.  Put it into my ears, that’s all I ask.

Unfortunately, particularly over the last 10 years, this process has become increasingly less profitable, and it’s not because I’ve fossilized or started making a fetish of “the good old days”.  No, the fundamental reason it has become harder and harder is the rise of auto-tune on EVERYTHING.

Here’s an analogy.  Let’s say you like ketchup.  Ketchup is good.  I like ketchup on french fries like any normal guy.  Ketchup is harmless enough.  You can even use it as the primary ingredient in a homemade barbecue sauce.  Yum.  But now let’s say that you go to a restaurant where ketchup is already smothered on every single food item.  It is mixed into the coffee and iced tea.  It is on your shrimp, your salad, your steak, your carrots, your olives, your coleslaw, in your clam chowder, on your pizza, on your buffalo wings, drenching your pita, on your burrito, your sushi, your waffles.  The plates are sticky with it, the table tops slathered in it, the floor slick with it, just gallons of ketchup everywhere.  The booths are covered in it.  The walls and floors and ceiling are red.  There is a ketchup fountain.  The windows drip with ketchup.  As you take your seat in your booth and feel ketchup soaking into your underwear, how do you feel about ketchup at that point?  Will you ever be able to eat ketchup again or will the onslaught of ketchup put you off of it for life?

This, dear reader, is autotune.  This is what it does to the menu of modern music.  It takes every song from every artist in every genre on every subject and slathers them all in ketchup.  Whatever other flavors they may contain, there’s the fucking ketchup.  Rocky road ice cream?  Ketchup on top.  Strawberry crepes?  Don’t forget the ketchup.  Apple pie?  Ketchup please.  And don’t forget to put some ketchup on the side.

Now, I just read a long and impassioned defense of the artistic merits of auto-tune the other day (https://pitchfork.com/features/article/how-auto-tune-revolutionized-the-sound-of-popular-music/) and I believe the author made many good points that would be defensible if the effect weren’t literally fucking everywhere.  If it were occasionally used in moderation, it wouldn’t be the ketchup buffet.  It’s just an effect, among many effects.  I don’t personally worry about auto-tune as some sort of purity test, some sort of “kids these days can’t sing” bullshit.  Come on, recording studios have ALWAYS been about using the technology of the day to create artificial performances that are “better than life”.  Trust me.  Even Elvis had slap-back reverb.  No, it’s not a philosophical thing or some sort of elitist judgment that people who use auto-tune aren’t “real artists”.  These are people with feelings making music.  I respect that.  I want to like it.  I want to support it.  I even defend their right to like ketchup.  They are exploiting a technology to create a sound that expresses what they want to express; I respect that.  The only problem is that I, personally, hate the sound of it and I can always hear it when it’s present, and it’s ALWAYS PRESENT NOW.  When it’s used sparingly (e.g., to evoke the dehumanized voice of a robot or something) for the purpose of some sort of abstract sci-fi thing for a track or two (I’m looking at you, Radiohead), it feels appropriate, so it doesn’t bother me.  I love electronic shit.  It’s just the modern vocoder, and O Superman wouldn’t be the same without the robot voice.  But I try to listen to the new releases and it’s literally on Every.  Single.  Song.  In.  Every.  Genre. and it makes them all sound like SHIT.

It’s a ketchup buffet. 

I can’t fucking stand it.  Any song, in any genre, can be ruined by this ridiculous, shitty, absurd, stupid fucking effect no matter how talented the artist.  An otherwise good beat, good melody, interesting set of lyrics, moving production, interesting song structure, moving chord progressions, can be present and sitting there, shining in the sun, making you feel something, and then it’s just demolished by the presence of this horrific-sounding effect.  I don’t care if somebody can sing or not.  I don’t care if they are in tune.  I just want to hear real music by real people, but AT has become the defining sound of our era, and I have TRIED to learn to tolerate it, but it just sounds so so so so so so so so so so so so so so so so so BAD.  It has ruined over a decade’s worth of music and it just keeps happening.  It’s on every goddamn song and I just can’t listen to the effect for more than about 20 seconds without getting angry, and so vast swathes of music are off limits to me, and as a music lover it’s really, really disappointing.  I can think of no other musical fad(?) or direction in all of musical history that has been this fingernails-on-a-chalkboard irritating.  I don’t love all music ever made, but I can listen to jazz, classical, rock, hip hop, rap, R&B, world music, reggae, spoken word, experimental, ambient, heavy metal, electronica, country, blues, you name it, and find artists I love, songs I love, sounds I love.  But not with AT vocals.  AT vocals just ruin 99% of anything they touch.  I’m sure I am missing out on music that I would otherwise consider to be brilliant.  I am sure there are many artists I would really have loved and admired if they had worked in the pre-autotune era, but I can’t make it through a single track, let alone a record.  I can’t hear the “genius” because I can’t get past the gallons of ketchup drowning every syllable of every word.

I have no power to influence the tastes of the general populace, who seem to have decided this sounds good.  People like mullets, Red Bull, Donald Trump, and plenty of other tacky, gross, weird, and unpalatable things, but if you can suggest some modern music that isn’t infected with this godawful corruption, I would be open to pointers.  Any genre, any flavor, as long as it’s not smothered in ketchup.

I read an article today, “The Cost of Living in Mark Zuckerberg’s Internet Empire” from The Ringer, and a few things jumped out at me:

“I believe that almost everyone actually hates almost every interaction with almost every algorithm online”

and

“I miss the human internet with an intensity that borders on homesickness”

These two statements most definitely reflect my feelings.  I’ve been struggling to really describe what happened to the internet, but I think it’s accurate to say that the internet as experienced by the majority of users has effectively transitioned from the “human internet”, powered by millions of individuals and entities, into the “corporate internet”, in which the Big Five (Apple, Amazon, Google, Facebook, and Microsoft) have become so pervasive that it is effectively impossible to use the internet without them.  This, in turn, has led to a world in which the priorities of those companies define the landscape for everybody else.

Is it even possible to live in the modern world without interacting with these companies?  I suppose you could avoid using digital technology and you could attempt to only frequent venues that similarly avoid digital technology, but it is likely impossible, short of living off the grid and growing your own food and building your own tools and furniture.

But online, well, the internet would no longer be functional for much of anything if you attempted to boycott any of these companies.  I’m not really talking about a boycott, I am talking about how much more interesting the internet was before it went into orbit around these particular stars.  Today’s internet is simply not all that interesting, not all that engaging.  It’s shallow, vapid, cheap, stupid, and ugly.  Almost everyone hates how it works and almost everyone uses it incessantly anyhow because of social networking, apps, virality, and the rise of the internet as a surveillance marketing platform rather than an information sharing platform.  

It’s fairly easy to live without Apple, mostly.  Microsoft too.  You can buy a Linux-powered laptop and a dumb-phone…  er, I mean feature phone, and at the very least you are not on their platforms, directly.  You are likely still using their products in other ways, but let’s be honest here, the two oldest tech giants made their bones by selling computer platforms, not monetizing consumer engagement, and they are the least threatening of the Frightful Five.  20 years ago, when the internet was wilder, you still probably accessed it using a computer powered by an operating system from one of the two, or Linux, or OS/2, or something else, but it really didn’t change the experience of the web.  That was just personal preference.

But then along came Google, then Amazon, and then Facebook, and each one, more than the last, made you, the consumer, into their product.  They no longer wanted to sell you a product; they wanted to sell YOU to somebody else who wanted to sell you a product.  As all the ad revenue was pulled into their orbits, newspapers and magazines and websites and everything else online were pulled and bent out of shape, homogenized, centralized.  Less interesting.  More boring.  Shallower.

Why?  Because, quite simply, the internet is people and the things they create and share.  People tend to create and share only in places and ways that other people are likely to see, hear, or read, and more and more that means on Facebook or Instagram, and when they do self-host, the only way people will find it (if at all) is via Google search.  The internet may have vast swaths of information, but the number of independent gateways to that information is now quite small.

This new corporate internet is nothing like the fun it was before, and it’s a pity.  Wishing we had the old one back isn’t nostalgia; it’s just missing something that was special and interesting, like local radio stations before Clear Channel.  The rest of the internet exists, but it’s more like a ghost town: old link sites that go nowhere, unmaintained blogs, and other detritus.  All the traffic went to the corporate web, and since the internet is just people, and people are the internet, if all the people spend 99% of their time on the same handful of platforms, then those effectively become all there is.

It wasn’t supposed to be like this.  The internet was supposed to be a radically decentralized, democratized, and free platform and it still is…  kinda…  technically…  but it really isn’t.  It is still possible to set up a computer in your own house, run your own stuff on it, use exclusively open-source software, and even live without a Facebook account or ever shopping at Amazon, but since so few people actually do any of this, you will be missing out on where most of human culture takes place these days.  The baby pictures you won’t see, events you won’t hear about, and interactions you won’t have will leave you on an island.  You’ll pay more money to buy things you will have more trouble locating.

There are movements underway to try to claw back some of the freedoms we have surrendered.  One in particular, https://indieweb.org/, seems really focused on providing alternative solutions to the problems we are currently solving via the corporate web, but as Kashmir Hill’s amazing experiment at Gizmodo demonstrates (https://gizmodo.com/c/goodbye-big-five), you really can’t boycott them in the sense of never using their services.  The internet just doesn’t really work anymore without these companies.  They own the damn thing.  The best the rest of us can do is work to demonetize ourselves, get out of their funnels.  We will never wind up back in the pre-corporate web, and sometimes I find myself just wishing to ditch the web for whatever comes next.  Maybe the Dat network (https://www.linuxjournal.com/content/beaker-decentralized-read-write-browser) or something like it, but, well, that’s just another ghost town…  for now.  I don’t have a solution, maybe there isn’t one, but I miss the human internet.  It was so much more enjoyable…

 

I just finished Orwell: Ignorance is Strength and it got me thinking about all the games I’ve actually managed to complete or beat in my life. It’s not a long list:

  • GORF (Commodore VIC-20 version)
  • Taipan! (Apple ][e version)
  • Space Quest I/II/III
  • King’s Quest IV
  • Myst
  • Every Monkey Island game (including the Telltale ones)
  • Resident Evil 4
  • The Legend of Zelda:
    • Original NES
    • Link’s Awakening
    • The Ocarina of Time
    • A Link to the Past
    • The Wind Waker
  • Zork (Text Adventure)
  • The Hitchhiker’s Guide to the Galaxy (Text Adventure)
  • Bureaucracy (Text Adventure)
  • Stories Untold
  • Oxenfree
  • Limbo
  • The Wolf Among Us
  • Jurassic Park: The Game
  • Batman: Arkham Asylum
  • Gabriel Knight: Sins of the Fathers
  • Portal
  • Portal 2

There are also a few games that you can’t “beat” per se, but that I’ve played so much that I have mastered them. Sports and racing sims:

  • Madden Football series (have completed seasons in every version since 1999)
  • Forza series
  • F1 series