In what has - unsurprisingly! - become a bit of a tradition, I bring you yet another blog post motivated by Really Bad Twitter Discourse. I'm well aware that this is a very heated topic for some people, and that's no surprise - art is the financial lifeline for many, just as programming is for me - so it's understandable that this debate is so emotionally driven. In light of all this, I wanted to spend some time putting my thoughts to paper, airing a perspective that I see far less often, and one that I think people could benefit from considering more.
But before we get into all of that, let's take a step back and think about something entirely different for a second.
That's right, bay-bee. This is my blog, so you get to experience the most jarring transitions between ideas that my severely neurodivergent mind can come up with. It'll make sense soon though, I promise!
Most people who are technologically inclined happen to be big fans of science fiction - think Blade Runner, Dune, and the like - so this feels like a natural place to start. Whether sci-fi or otherwise, I'm willing to bet that at some point you've watched (or read!) something you really, really loved. Most likely, that experience left you wondering why nothing else really felt the same, or why everything else seemed so shallow in comparison. And well, why is that?
There are lots of things that make a good narrative - let alone a good sci-fi narrative! - but I have a hunch that I think explains a lot of what makes some sci-fi so captivating:
Good sci-fi is speculative fiction. Good sci-fi is not about a new and fantastical gizmo or a dreadful and torturous doohickey - not directly. Good sci-fi is about who owns the gizmos and the doohickeys, and how they're used - or abused! - in everyday life. Again, it's speculative; it's commentary on how things could look given some piece of technology or what-have-you, not flippant fetishism for the technology itself.
So, with that in mind, let's go back to the topic of generative AI - creative AI specifically. I think the term "AI" is a bit of a misnomer here, but it's what a lot of people will understand things like Stable Diffusion, DALL-E and Midjourney to be, so it's what I'll use going forward. Considering the more speculative lens of sci-fi, what are some of the worst-case scenarios that we can dream up?
Well, a human isn't really the one doing the drawing or painting anymore, so that's a good start. A company could use generative AI to cut down on costs, and replace all of its workers with AI processes of some sort. In fact, this has already happened. There are a great many stories of this happening elsewhere too, but a lot of them don't grapple with artists specifically. But nonetheless, it's scary stuff!
So, okay, there's that. What else? These AI models need to get trained somehow, so maybe that's another worst-case scenario to take a look at. There's already been a lot of discussion about a lack of consent from the artists who've had their work used to train these models. Likewise, people have raised concerns about "AI art theft" - cases of AI regurgitating art from its training dataset nearly verbatim - prompting people to worry about copyright, fair use, and the like. So this is another worst-case scenario: a company could prompt generative AI to produce art for a project, and then unwittingly use somebody else's work without attribution or royalties.
This is already a major concern for some other AI tools, like GitHub Copilot; one that has prompted a slew of litigation and mass public vitriol to boot.
So that's definitely a worst-case scenario, but maybe we can think of one more. The last one that comes to mind is a bit more personal, and less directly impactful, but perhaps a worst-case scenario on more of a philosophical level for some people: the assertion that - since there's no real artistic input - AI art has no creative merit, and anything produced with it must be bland and soulless in comparison. And well, that one seems reasonable enough, and it's certainly worrying to think about. So it'll do!
So, to recap, we've come up with three (3) things that Really Suck, and they're all things that generative AI has something to do with, in one way or another:

1. Companies using generative AI to displace artists and cut labour costs.
2. Models being trained on artists' work without consent, raising concerns about attribution and copyright.
3. The assertion that AI art has no artistic merit, and that anything produced with it is bland and soulless.
I've thought about all three of these things for a long time, and I've thought about them very seriously. They're genuine issues and criticisms, and like all such issues, it's important that they're aired. After all, where do people go once AI has deemed them no longer necessary? What happens to these people? What happens to me once AI becomes better at reasoning, and my programming knowledge becomes less valuable?
Again, it's scary stuff! It's something that I imagine we'll see discussed a lot more in the next few years, as the issues become more pressing and more people become affected by them.
Now, with all of that metaphorical scaffolding done, let's jump ship for a second, and talk about something else for a little bit.
Let's talk about political systems. A particularly famous exchange between Benjamin Franklin and Elizabeth Willing Powel consisted of Powel asking Franklin about the government of the United States, to which Franklin responded with "a republic, if you can keep it." And, at first, that doesn't really make much sense. How the hell do you keep a republic, anyway - are you supposed to feed it twice daily, clean its litter box and call it a good girl? Or are you supposed to take it on long walks in the mid-afternoon sun, racing after lone birds in the hopes of someday catching them?
The answer, of course, is neither of these things. It's a comment about governance, the way we structure it, and how power influences it. One very interesting observation - and a really important takeaway about governance as a whole - is that a system is only as good as the people who uphold it; or, in other words, a chain is only as strong as its weakest link.
This is an observation that you can very quickly apply to a lot of governance structures adjacent to socialism or communism, both theoretically and practically. I think most people will agree that working for the good of everyone else, and having the government provide for you in return, sounds really great in theory. You wouldn't have to be concerned about income or poverty, and you could live a relatively utopian life without worrying about being made redundant or laid off because your company wanted to save some money on the side.

In practice, systems like these have fallen apart in the past for one reason or another - occasionally quite spectacularly! - and there's a lot of public distrust for them as a result. This is especially true in the USA, where a very extreme breed of pro-capitalist rhetoric is rampant; there's a blatant distrust of governance systems based on the idea of doing things "for the greater good", which mostly seems to stem from fears of totalitarian dictatorships, poverty, starvation, and the like. These fears aren't entirely unjustified, of course; these things have been major issues with such governments in the past, and one can imagine they'll remain so for the foreseeable future.
But if we look into it a little bit more, what can we glean from this? The answer is, ultimately, that a system with generally positive influence can be misused and weaponized to instead hold a negative influence over others, often because of some secondary goal, such as the accumulation of power or wealth. But this misuse is not inherent to the system; rather, it is a product of the concentration of power in one or more individuals, and the application of that power in an inappropriate manner.
So, where does that leave us?
Let's look at those three things that Really Suck again, and think about them a little more. After all, are these issues really a property of generative AI, or are they a property of its application? A property of its misuse within a framework we ourselves established, and one borne of a republic we couldn't keep?
Like I said earlier, I've thought about this for a long time, and I've thought about these issues very seriously. I've talked to a lot of people about these issues; what they mean for me, and what they mean for everybody else.
And after all of this, I've come to realize that pointing the finger at generative AI is seriously wrong. Not because these issues aren't real issues, or because of some belief that these trespasses against everyday people can just be swept under the rug; no, I've come to realize that the ghosts in this machine are far more vivacious, far more insidious, and far larger than generative AI alone. Pointing the finger at generative AI - as if it's some sort of "isolated incident" - neglects the real crux of the issue, and is principally what allows it to remain unsolved.
So. Those three things:

1. Generative AI being used to displace artists and cut labour costs.
2. Models being trained without consent, and the attendant attribution and copyright worries.
3. The claim that AI art has no artistic merit.
Let's start with numero uno then, shall we?
A particularly famous saying dictates that when one is given a hammer, everything begins to look like a nail. If you were to give somebody an awfully large hammer, and they were to mistake another person for a nail, striking them over the head in the process, where does the original fault lie? Though someone was harmed, such a thing could never have happened if that awfully large hammer had just remained in your uncle's garden shed; by exercising better judgment, you could have avoided this issue wholesale! After all, it's oft repeated that an ounce of prevention is worth a pound of cure.
So, it follows, then, that a critique of generative AI based on its application is not actually a critique of generative AI itself. It is a critique of the system that allowed such application in the first place. In this case, the real critique is of the nature of capitalism; principally, its desire for costs to go down, and profits to go up.
Capitalism, after all, is a system that is not only modelled on endless growth, but predicated on it. Look at Wall Street and the way investment works: those with large pools of wealth will spare you a crumb, but only if you can promise them growth in return.
Though scientific and technological advancements have the potential to vastly improve human life, it's all too common for them to be nothing but tools for making costs go down and profits go up. Take, for example, the printing press - though it made books and literature significantly more accessible, it displaced many a scribe, and allowed those in the business of distributing literature to do so significantly faster, and at a dramatically lower cost of labour.
Generative AI being misappropriated to displace artists is, by all accounts, the same story; it's no fault of generative AI itself.
On to the second issue: consent, attribution, and copyright. Much of what was mentioned above also applies here, and I'd prefer not to repeat myself, but I do want to talk about attribution specifically. Attribution is something I consider to be vastly important, especially as a matter of respect. To take somebody's work and use or expand upon it without crediting the original author is incredibly disingenuous, and belittles all of the time put into that work.
Copyright law and the notion of intellectual property are very specifically capitalist takes on attribution; they espouse that attribution must be given in the form of royalties, or that the original work may not be used/altered at all. Both of these are strategies that exist to guarantee income for artists, and more importantly, the companies that employ those artists.
So, yes, people are right to be concerned about attribution and royalties. Again, these are genuine issues! But they are issues only within the framework of capitalism, and they stem from the very corporate desire to make art less accessible, so that more income can be gleaned from its scarcity.
The third issue is a bit of an "odd one out". It's not predicated on the same sort of financial dangers as the first two, but it's something I see thrown around a lot and want to address. In particular, there's the claim that using generative AI doesn't make you an artist, because you're not doing the painting yourself; you're just writing a prompt instead. There's often a second part to this claim - that "writing words doesn't make you an artist" - which I also see thrown around a lot.
Over time, I've come to the conclusion that, ultimately, art is but a piece of work that another can appreciate. Perhaps it's a piece of work that can be appreciated for its beauty, or for the excellence of its craft, or for the emotion it conveys. But it being art is not decided strictly by the presence of one or all of these things; they certainly amplify the appreciation you may feel for a piece of work, but that's the extent of it. These things, in and of themselves, do not decide what is and is not art.
With that said, let's get into that second claim from above, that "writing words doesn't make you an artist".
For one, this is... demonstrably false. Those who write songs, those who write poetry, and those who write novels are all considered artists of some form. It genuinely baffles me that people so often make a statement that is this absurdly rife with counter-examples. So, in light of that, I think what people really mean when they say this is that "writing a prompt for an AI doesn't make you an artist." And - guess what! - there's actually a much more interesting lede buried here: the assertion that writing a prompt for a generative AI makes you less of an artist than, say, a lyricist. After all, both of you are still "just" writing words, despite the fact that the outcome is radically different. What makes the lyricist so much better than the person using Stable Diffusion or the like?
A pretty common argument here is that, well, the process is more involved for the lyricist, so that gives their work more artistic merit. Much like debating whether group A is more or less oppressed than group B, debating whether art piece A has more artistic merit than art piece B is quite frankly just... really silly. You're comparing apples to oranges - not because one genuinely has more artistic merit than the other, but because the processes and established expectations behind them differ so drastically. You cannot feasibly compare the artistic merit of a sculpture with that of a painting, nor the artistic merit of a piece of digital art with that of a knitted sweater. It follows, then, that you cannot compare the artistic merit of a traditional painting with that of a piece of generative art.
That's because these things are fundamentally different categories and processes! A crucial part of this whole debate that I think a lot of people miss is that nobody sane is trying to say that the processes involved in traditional art and generative AI art are actually the same. They're completely different!
(And, quite frankly, "artistic merit" is a relatively bogus concept - one that is mostly relative to your own level of appreciation for the work - but that's a whole other can of worms.)
When you stop looking at generative AI as being inherently bad, and instead as "bad in specific contexts" - such as when it's used to exploit or mistreat artists - some more interesting properties crop up. Something I haven't seen discussed all too much is that generative AI is an accessibility tool, and this is actually what underpins a lot of people's fears about the technology: because it makes the production of art more accessible, it devalues the labour of other artists, and displaces them in the process. And that fucking sucks! Again, though, this is a Capitalism Thing. It's no direct fault of greater accessibility, but rather of how that accessibility is wielded.
So with that established, let's turn our heads slightly. Generative AI as an accessibility tool kicks ass. It rocks that somebody who is blind, or has arthritis, or cerebral palsy - or anything like that! - can more easily produce art that can be appreciated by others without suffering for literal years trying to fit in with everybody else's definition of "making art". Just because other people have suffered to get to a certain point in the past, because they literally had no other option at the time, does not mean that things should stay that way. There is absolutely zero obligation for a group of people to suffer in the future just because they suffered a hell of a lot before.
It's honestly bonkers to me that this part of generative AI isn't being celebrated more, because imagine - for what is maybe the first time in your life, you can translate what's in your head into a piece of art without harming yourself. I would genuinely be ecstatic in this sort of scenario, and it's fucking awesome that technology has gotten to a point where this is possible.
That mostly concludes all of my ramblings for this blog post. I'm sorry to say that, unfortunately, I have no grand conclusions to make here, other than "maybe we should think about things a little more" and "maybe we could be nicer to each other". My main goal, really, was for this to provoke some sort of internal monologue, so that maybe you might be able to see things with a slightly different framing. If you liked this blog post, I'm really glad! If you didn't, well, you can let me know! I'm always open to discussion, and I'm always trying to do better.
I've been blessed to be around a lot of very mature individuals who've talked about this sort of thing at length, and who have shown me that it's absolutely not as black-and-white as it's often portrayed. Special thanks go out to a certain green platonic solid, a certain enby friend of mine, and whoever it was on twitter that pointed out that a lot of anti-AI sentiment is inherently reactionary. Your input has changed my perspective significantly, and I think I'm a lot better off for it.
I'd also like to thank this blog post for putting something I've long thought about (with regards to power and governance) into a form that's a bit easier to digest. It served as significant inspiration for the structuring of a large portion of this blog post.
And, of course, thanks to you, dear reader, for sticking around until the end!