My first exposure to AI was “bottomless pit supervisor” memes, blatantly absurd and just my type of humor. It all seemed so silly and inconsequential that I genuinely thought it was a marketing gimmick, or at most a new algorithm that had superseded Markov chains, and nothing more. I spent the better part of a day looking at variations of generated text, trying to see what else the new models could make. I was impressed, but the novelty wore off quickly and I eventually forgot about it.
Fast forward two years and suddenly we have AI that threatens to replace the majority of human coders, the entire entertainment industry, all knowledge work, and the pretty plumber too. Humans are obsolete! Hail the machines! Yet when I look at the current state of the art, all I see is a souped-up greentext machine, while the discourse has drifted somewhere far away. What happened?
“Artificial intelligence” as a phrase covers too many things and implies a bit too much. When people talk about AI, it never seems to be something I can put my finger on. It feels ephemeral and ill-defined despite the widespread usage and impressive technical ability. That says more about how people talk about “AI” than it does about the technology itself. This page covers a large portion of my criticisms of the discourse that makes it so hard to see what AI really is.
MACHINE LEARNING
When people talk about AI in computing, they usually mean solving fuzzy problems that are easy for people and hard for computers, typically with neural nets or some variation thereof. In a more general sense, AI is any algorithm that has become so byzantine that it can no longer be articulated. YouTube recommendations are AI-powered, driving assistance is AI, being forced to identify traffic lights for other people is AI, outsourcing your grocery checkout to India is AI.
What about the here and now? I am fairly certain that most of the current hype is about a specific subset of AI that we can call “generative artificial intelligence”. This is the technology of the last year or two that makes the things we consider human – art, music, stories, homework. It isn’t a particularly new way of doing things: previously we had classification models that took images and spat out labels, now we have models that take some labels and spit out an image. Impressive, but you’ll have to go somewhere else for a full technical breakdown.
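To make the direction-flip concrete, here is a rough sketch of both directions using the Hugging Face transformers and diffusers libraries; the checkpoint names are just stock examples, and the details will vary with whatever model you actually run:

    # Rough sketch of the direction-flip, assuming the Hugging Face
    # `transformers` and `diffusers` libraries with stock checkpoints.
    from transformers import pipeline
    from diffusers import StableDiffusionPipeline

    # Old direction: image in, label out.
    classifier = pipeline("image-classification")
    print(classifier("cat.jpg"))  # e.g. [{'label': 'tabby', 'score': 0.93}, ...]

    # New direction: label (prompt) in, image out.
    generator = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5"
    )
    generator("a tabby cat, oil painting").images[0].save("cat_out.png")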
MARKETING HAZE
We can talk about “AI” and its historical place in the cultural zeitgeist, where it has acted as a stand-in for so many distinctly human elements. In 2001: A Space Odyssey, HAL is single-minded and ruthlessly efficient; Star Trek has Data as “nearly” human; The Terminator gives us killing machines. AI is always a distinct and pointed reflection of human traits or fears within the piece of media itself. Then again, that statement is so general it might as well apply to all media I watch. Let’s try to narrow it down a bit.
There are two facets of AI that seem to hold sway over its public perception: AI as a reskinned version of “Pinocchio, Become Human”, or AI as a killing machine. Both pull the same neat little trick: by discussing the implications of making a killing machine, you implicitly assume that the killing machine can be made, or already has been. Do you remember all those ethical dilemmas about who a self-driving car should run over? We still don’t have self-driving cars. As far as I am aware, we don’t debate how an AI in Terminator regalia would fill out a spreadsheet or wait in line to buy toilet paper. The AI tech we currently have consists of custom-built tools that replace labor-intensive human work, often indistinguishable from ordinary software in any meaningful sense.
AI is a marketing term, a new buzzword, and I am surprised that the buzzword means anything at all to the people who want to buy products. AI is this futuristic thing: impossible machines that tap into our cultural history to give a false impression of technical achievement, of rigor and impartiality. Machine learning is leveraged to solve “harder” problems, and slapping “AI-enabled” onto a piece of equipment lends it an aura of capability that the software probably doesn’t have. Code is typically a black box, so for anyone not familiar with the inner workings of a piece of software, it doesn’t really matter.
PERHAPS ANOTHER CRYPTO FAD
So, is generative AI just another buzzword to be touted as part of the new decentralized internet? It fits right in with Web3, NFTs, cryptocurrency, and libertarian dreams: solving impractical problems, groundbreaking without disrupting anything. AI could be the future, built on new protocols and blockchain, a technological miracle, shiny and chrome. I personally take issue with this idea – machine learning as a broad category of technology has already solved real problems in healthcare, accessibility, security, finance, and a slew of other industries. There is no fatal flaw in the technology that dooms it to the ultimate destiny of becoming part of a cyberpunk-aestheticized corporatocratic dystopia… but it does seem to be headed there.
Technology has always served as a replacement for human labor, and it is not a stretch to note that the shift from replacing skilled labor in manufacturing to replacing skilled knowledge-based labor has already begun in our current system. Nothing has changed, with the exception of a “wow” factor. Previously, the problems that machine learning could solve were limited to inaccessible technical fields. A language model that can answer questions is an impressive tech demo to customers, albeit with fewer capabilities than coherent speech would imply. I personally cannot imagine being impressed by writing an entire op-ed by hitting the middle button of my turbocharged autocomplete bar. After all, that would be a very specific time for me to get to work with you and I will be there at minimum has to be and I can get you and and and.
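For anyone who never played the middle-button game: the gag is that greedy autocomplete always picks the single most likely next word, and promptly walks itself into a loop. A toy sketch of the idea, with a corpus and seed word I made up purely for illustration:

    # Toy "middle button" autocomplete: always pick the most frequent
    # next word from bigram counts. Corpus and seed are invented.
    from collections import Counter, defaultdict

    corpus = ("i can get you and i will be there at work with you "
              "and i can get to work and i will be there").split()

    next_word = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        next_word[a][b] += 1

    word, output = "i", ["i"]
    for _ in range(15):
        if not next_word[word]:
            break
        word = next_word[word].most_common(1)[0][0]  # the "middle button"
        output.append(word)

    print(" ".join(output))  # degenerates into "i can get you and i can..."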
It is hard to say that AI is not dystopian when it is almost deliberately anti-humanist. The current targets of generative AI are writers and artists, and these industries were not picked at random. Images and text are readily available to most technology companies, which makes them easy to gather data for and train models on. Perhaps more obviously, it is simply safer to automate generating things that will not immediately bring the authorities knocking for accidentally killing people. As it stands, the focus on these industries exemplifies AI’s current weakness – unreliability. Generative AI is prone to hallucination, with no reliable way of detecting or correcting such errors.
I don’t think that machine learning as a field should be lumped in alongside technologies that have yet to solve real problems, but generative AI still shares characteristics of vaporware, similar to the original promises of cryptocurrencies. While the technical demos are impressive and the results are often “good enough” for lazily examined artwork, it is too unreliable for work where specificity of language or artistic direction is needed. I can make glorified clip-art of questionable royalty status, but I have yet to see any material disruption in knowledge-based work, or a path to transition generative AI into artificial general intelligence.
POISON THE WELL
Yet all of these criticisms can be waved away on the grounds that technology has a tendency to improve exponentially – who is to say the same will not happen with AI? The increases in capability we have seen over the last several years indicate that AI as a technology is rapidly getting better. Even if it can’t tell the truth right now, as long as we have more data to feed the model, surely it will get reliable enough in the future. Cue nuclear fusion, stage left, ten years out.
So, what about using AI right now? It’s good enough to write listicles and insert affiliate links for content farms that do no research and make nothing new. It even has a nice homogenizing effect on the standards of corporate content that already populates the internet. Now all we need to do to make AI better is gather more data from the internet to train our models. How about an AI-generated article about incest? Whoops.
Using current AI has accidentally polluted the datasets that we use to train AI in the first place. Any time we use generative AI to create something, the data it produces is similar, but not identical, to human data. These AI outputs have several key differences from human expression that often propagate into strange speech patterns, general incoherence, or broken fingers. Perhaps it is possible to improve worse models by training them on the output of better models, but the improvement is capped at the level of the better model. In other words, AI content is poison to AI models: anything created by AI and posted to the internet limits the capabilities of future models in a way we cannot currently fix. This would not be an issue if we had a reliable mechanism for determining whether something was created by AI, but we don’t – and if we did, there wouldn’t be much reason to gather so much data to train our models in the first place.
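The feedback loop is easy to simulate in miniature. This toy version is entirely my own construction: the “model” is just a Gaussian fit, and a mode-seeking sampling step stands in for the way generators favor typical output. Each generation trains only on the previous generation’s output, and the tails of the distribution vanish for good:

    # Toy model-collapse simulation. Each generation fits a Gaussian to
    # the previous generation's output, then samples from it while
    # keeping only the most "typical" 80% (mimicking low-temperature,
    # mode-seeking generation). Watch the spread decay: lost tails never
    # come back.
    import random
    import statistics

    data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # gen 0: human data

    for generation in range(1, 9):
        mu, sigma = statistics.fmean(data), statistics.stdev(data)
        data = sorted((random.gauss(mu, sigma) for _ in range(1250)),
                      key=lambda x: abs(x - mu))[:1000]
        print(f"gen {generation}: stdev = {statistics.stdev(data):.3f}")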
WHAT COPYRIGHT?
It really doesn’t help that the discussion around AI keeps veering sharply into ethical questions while a lot of tech companies have adopted a model of “if you break the laws quickly enough, you can make a profit before anyone can sue you”, à la Bird. I can’t make definitive claims about how AI companies obtained the data they trained on, but their models reproduce copyrighted content to a suspicious degree, and the historical evidence suggests they scraped that data wholesale rather than licensing it.
The New York Times agrees with me.
THE FARAWAY FUTURE IS CLOSER, JUST BARELY
I remain skeptical about AI as a matter of course. Nothing in the state of the art indicates to me that it is possible to create a general intelligence with the technology we currently have. AI’s current capabilities are astounding, but I doubt that generative AI models will become the incredible things that OpenAI and Microsoft promise they will be. Progress has stagnated and all the low-hanging problems are gone. The amount of compute and energy required is increasing exponentially, and the human corpus is nearing exhaustion, which together indicate that performance will level off.
Generative AI will likely find a niche in human-machine interface problems: taking reservations, getting a drive-through order, providing summaries, retrieving and outputting data from private data sets (RAG, retrieval-augmented generation). I doubt it will become a central part of business offerings given the current nature of AI – it can do your homework for five dollars, but it can only take over mundane tasks with little consequence for failure.
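For the curious, the RAG pattern is less exotic than the acronym suggests: fetch whatever private text looks relevant, then paste it into the prompt. A bare-bones sketch follows; `ask_llm` is a hypothetical stand-in for whatever completion API you actually call, and the keyword-overlap retrieval is a deliberate oversimplification of the vector search real systems use:

    # Bare-bones RAG sketch. `ask_llm` is a hypothetical placeholder for
    # a real completion API; retrieval here is crude keyword overlap,
    # where real systems use embedding-based vector search.
    def ask_llm(prompt: str) -> str:
        # Placeholder: wire up your actual model call here.
        return "[model answer grounded in the context above]"

    def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
        words = set(query.lower().split())
        scored = sorted(docs, key=lambda d: len(words & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

    def answer(query: str, docs: list[str]) -> str:
        context = "\n".join(retrieve(query, docs))
        prompt = (f"Answer using only this context:\n{context}\n\n"
                  f"Q: {query}\nA:")
        return ask_llm(prompt)

    docs = ["Store hours: 9am-5pm weekdays.", "Returns accepted within 30 days."]
    print(answer("When are you open?", docs))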
This is just another step in automating away the things that we can automate – while generative AI is currently unable to replace knowledge work, it acts as an alternative to Google for small, well-documented tasks. Chatbots, while generally unhelpful, represent a tangible way of either obsoleting or siphoning off a portion of Google’s internet real estate. Everything else, the hype and glamor, is just a distraction from the real money maker – data and advertising.
If we want something like the singularity to happen, it isn’t going to come from increasing the amount of computing time we let a model run for, or from feeding it ever-increasing amounts of poisoned gibberish. The next step is increasing the effectiveness of a model by building architectural differences into the model itself. A petri dish can play Pong, but being good at zero-shot tasks might require that current models borrow architectural cues from human brains as well, not just mimic a puddle of neurons.
Edit (6/17/24): Generative AI fails at taking drive-through orders. This is a bit surprising, considering it ought to be low-hanging fruit, especially when the orders are supposed to be consistent, structured, and easily verified by the customer.