The AI made me do it.
Statement on the use of AI
There’s much that can and should be said on this topic, but I’d like to dwell for a moment on the practical dimension of engaging with technology that utilizes LLMs in one way or another…
To begin, in the context of growing discussions about “AI,” it is critical to have clarity in terminology. Where it’s appropriate, I prefer the term Large Language Models (LLMs) over AI. This isn’t only a matter of semantics, even though I’m aware that the battle over terminology may already be lost. I’ll sometimes grudgingly call it AI for the sake of clarity, because few seem to agree with me on this, but I hope it’s understood what I actually mean.
There’s a larger issue about the vagueness of the term “intelligence” that I dealt with on Modern Mythology. More prosaically, LLMs represent a specific subset of technology, whereas “AI” is a broad, unclear, and often misused term. The distinction highlights the importance of human agency in utilizing these technologies.
“Artificial Intelligence” can be used to shift agency. Arguments like “The AI made me do it” obscure the reality that these systems are tools, controlled based on human decisions. It’s like a next-level version of the selective use of the passive voice in news headlines.
The reported use of these systems to select air strike targets is perhaps the most egregious type of ‘agency laundering,’ whereby humans can select from the options provided by an LLM, but there is essentially no one to blame at the end of that chain. But there are many others, such as when executives or HR make the claim that AI drove them to make layoffs. It only “drove” them in the exact same sense that capitalism itself is a form of AI. (This is an idea explored by countless authors prior to the current AI craze, from Douglas Rushkoff and William Gibson to Vernor Vinge and Kevin Kelly.)
Technology makes a convenient scapegoat. The tendency to offload the responsibility of leaders onto an oracle or deity is hardly a new development in society, and it is one we would do well to be particularly wary of, if we had any sense.
Misdirecting blame towards AI for corporate decisions, such as layoffs or algorithm-based hiring, overlooks the humans making these choices. If you’ve wondered why so many corporations that stand to profit from AI seem to goad on discussions of “evil AI” and “AI run amok” as the chief risk here, this is almost certainly part of a reverse-psychology PR strategy. We should not be surprised at the results of misuse, nor should we attribute those results to the agency of this technology, as that is not a thing that it has.
Here I'd like to speak more practically about how I currently look at my own use, as a freelancer, as an artist, and in those lucky cases where the two intersect:
An iterative process rather than a single solution. Both as a freelancer and a multi-medium artist, AI is integrated as a component of a broader creative process. For example, I can’t imagine asking an LLM to “write a book for me.” I would ask it to help organize some rambling early notes on a subject into an unsorted list, to assist in de-essing an audio track or picking out trouble frequencies, to voice a story that I wrote, and so on. I always check its outputs, and seek to do something with those outputs afterwards that involves human eyes, ears, or hands. (Whenever relevant to the end use.)
This process is tailored to suit the specific intent, medium, and end-goal of each project. For tiny independent projects like The Fallen Cycle, it can be a force multiplier, if used judiciously.
Scale. Available budget is a major factor in calculating “good” from “bad” use. Morally, I look very differently on a billion-dollar corporation using LLMs to cut creative teams than a band who have collectively pooled $1000 to make an album, though when it comes to creativity I’m more inclined to ask “...but was what they did with it any good?”
Don’t undercut your own authority. I don’t use AI outputs blindly, but instead as a directable automation tool within domains I've long worked in. If we fully automate our workflow without engagement, subpar results should be expected.
Which is to say that if you automate your entire “pipeline," simply turn the machine on and walk away, don't be surprised if all that comes out the other end is shit.
Case-by-case basis. The context of use is an important factor in determining what sort of process is appropriate. Making “Batman eatsa the spaghetti” memes is a very different use case than doing an art commission or designing a book cover, and those are specific to anything from the desires and budget of a client to the range of options in a particular style.
Results aren’t all the same. There are and will be many examples of this technology being used to produce substandard results. We should all endeavor to make sure that anything we put out is of equal or greater quality than anything we put out before AIs entered the picture.
There are other ethical issues one might raise about the specific models or training data used in one case or another. Some of these are valid concerns, although they are in no way different from the scrutiny we might apply to these corporations in any other domain: Microsoft, Amazon, Apple, Meta, etc. More pressure can and should be applied to how corporations use this technology, but not only this technology. It’s quite possible to be critical of Apple’s corporate practices (for example) and still own an iPhone, though where exactly that line is drawn is something we’ve each got to wrestle with and come to our own conclusions about.
Even if you don’t work with ChatGPT or MidJourney or Stable Diffusion, LLMs are a part of Illustrator, of Photoshop, of the chip inside your computer. Looking forward, as more individuals recognize the ubiquitous nature of AI in daily life, there will likely be a growing acknowledgment that the applications and users of this technology are diverse and not uniformly effective.
Some uses a boon, others a curse.
So far as that goes, this is not unique to AI but applies to all technological advancements. A shovel can also be used to dig a foundation, or a grave, or to put someone in one.