You Can and Should Write Better than AI
80 years ago, Orwell warned us about slop and clickbait.
In 1946, George Orwell wrote an essay called Politics and the English Language which often gets mentioned for its six “rules” of good writing. Back when I was first trying my hand at writing, those rules rewired my thinking about how to write well. Over the years, however, I’d mostly forgotten the specifics of the essay, including what it had to do with politics. Re-reading it now, the specifics (and even the six rules) are less relevant than I imagined, but the more fundamental insights about what separates good writing from bad apply shockingly well to the written (and spoken) word in the age of social media – including pointed warnings that could have been written today about AI.
It got me thinking more deeply about the purpose of writing in our society, and the power we have as both writer and reader. Good writing isn’t about following rules of correctness, but working out how to create thoughts, images, emotions and ideas in other minds. When we don’t respect the importance of that process, it can be dangerous for both writer and reader. Despite huge shifts in writing style since 1946, the dangers have remained, including the potential to distort political thought and opinion that so concerned Orwell.
Today’s chatbot AI systems pose a new danger, because the technology itself is effectively designed to produce the most dangerous kind of bad writing that Orwell feared.
The Rules Change But the Game is the Same
The stated purpose of Orwell’s essay was to critique and correct what he saw as a degradation of the English language. He believed that this degradation of language was the cause of a dangerous degradation of clear political thought on issues of global importance. His target was the academic elite of political discourse that he saw concealing weak ideas behind an overwrought, impressive-seeming writing style that was in vogue at the time.
The particulars of this argument about the degradation of English have not aged well. The writing style he criticizes has long since gone out of fashion (nobody is scoring points these days by using unnecessary Latin), and it was always an elitist stretch to call this academic style of writing the English Language. Even the idea that a language itself might be in need of repair, and that this could be accomplished through the “conscious action” of learned men, ignores how the evolution of language nearly always happens in reality. See linguist John McWhorter’s many excellent arguments about how language is always evolving organically in order to meet the needs of culture.
Orwell’s main point, however, is a more timeless one. He believes that writing well is worth pursuing as a writer, and that recognizing bad writing is worthwhile as a reader, not just as a matter of aesthetics but as a necessary service to yourself and to society. His insights into what makes for good writing and the dangers of bad writing apply straight through to today’s era of social media and AI.
What exactly separates “good” from “bad”? When we say “bad writing”, we tend to think about the sorts of mistakes that get you red marks on a term paper, but that isn’t what’s important to Orwell:
It has nothing to do with correct grammar and syntax, which are of no importance so long as one makes one’s meaning clear ...
Making your meaning clear, that’s the critical point. It seems obvious, but writing does something incredible when done well. It takes thoughts from the mind of a writer and puts them into the minds of many readers. Good writing does that clearly and concisely, grabbing the attention of the reader and rewarding it with a blossoming of sharply-defined new thoughts in their mind.
Good writing isn’t created automatically by the formal grammar and syntax of language, but by how the writer uses it. It is a matter of writing style. Regardless of what you write, for what purpose, or who your audience is, the fundamental principle of good writing style is universal: Use the diverse tools of language, as you find them, to hold the attention of your readers and effectively put your thoughts into their minds.
Language itself constantly evolves, as does the cultural context of your audience, and so the best way to use language to hold their attention and put your ideas into their minds must also evolve. How you write should depend on what you’re writing and who your audience is. That means that the specifics of good writing are never fixed or universal, and they certainly aren’t about only using “proper” language (although it has its place); sometimes clocking her tea is just the clearest, most concise way of making your point.
It’s tempting to seek out rules we can follow when writing, but specific rules of writing style will have a particular purpose, medium, and audience in mind (think newspaper style guides) and so they may not necessarily apply to what you’re writing – and they will always have an expiration date.
So while Orwell’s six rules were an insightful snapshot of using language well in the academic world of 1946, some are starting to feel a bit creaky 80 years on and a world apart. Some of the more general pieces of advice, however, are timeless. One of the best is to avoid metaphors and figures of speech that are so over-used that they’ve lost their impact and don’t evoke our essential point in the minds of readers. Better to put in the effort as a writer to invent something fresh and vibrant.
As another example, Strunk & White’s famous The Elements of Style, originally from 1920, is also a classic for writers. Even its section titles continually echo in the mind, like “Use definite, specific, concrete language” and “Omit needless words”. But 100 years later, some of the specific advice doesn’t quite hit like it used to unless you’re working for a particularly stodgy newspaper.
We can pick up tips and tricks, but there’s no shortcut. To write well, we have to put in our own thought and care for the topic and our audience.
The Miracle and the Mission of Good Writing
It’s a miracle of human biology that our minds can perform a kind of collaborative data compression when we write and speak. We choose sequences of words to capture our meaning with the expectation that other minds will be able to unpack and expand what we’ve written or spoken into pure thought. The compression is collaborative because it only works when the minds of readers contain concepts, images, and emotions that the writer can evoke through the right choice of word. When that’s true, the writer can play games in a reader’s mind, building up new thoughts from more elemental ones like building blocks.
In a way, writing is similar to composing a musical score, with the minds of readers supplying the orchestra. The composer knows that orchestras generally have strings and horns and percussion that are similar, and they can compress the music in their minds into simple marks on a page for each type of instrument. Players can then expand those marks back into the combined music of a symphony. (Of course the players provide personal touches, and creative writers often consider their writing a collaborative work with the reader’s imagination.)
Good writing takes maximum advantage of these miraculous capabilities of the mind, and an understanding of the audience, to flow your ideas smoothly through the compression/expansion process. You can think of it as a problem to solve: what words will you choose to beam the idea in your head into your readers’ heads like a pink laser of pure information? Are you compressing your thoughts in a way that other people’s minds, each unique in the experiences that have shaped how they interpret language, can re-expand them easily? Can you invoke concepts already in their minds to paint a picture more quickly? Can you proofread from their point of view, and do the words effortlessly pour out meaning and imagery as you go, or do you have to stop and squeeze them to extract anything satisfying?
Orwell describes one way to go about it:
When you think of a concrete object, you think wordlessly, and then, if you want to describe the thing you have been visualising, you probably hunt about till you find the exact words that seem to fit it. When you think of something abstract you are more inclined to use words from the start, and unless you make a conscious effort to prevent it, the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning. Probably it is better to put off using words as long as possible and get one’s meanings as clear as one can through pictures and sensations. Afterward one can choose – not simply accept – the phrases that will best cover the meaning, and then switch round and decide what impression one’s words are likely to make on another person. This last effort of the mind cuts out all stale or mixed images, all prefabricated phrases, needless repetitions, and humbug and vagueness generally.
When the words are well-chosen, a description of a scene leaves a vibrant image in the reader’s mind like a memory of a real place. An explanation of a scientific theory gives the reader a clarity of understanding as if they had come to the conclusions from their own observations. A story can evoke emotions and insights as if the reader had been through the events of its characters. The reality of a well-written book or article is only slightly less incredible than a neural interface in the back of your neck that can teach you kung fu.
No matter what we write, it’s our responsibility to try to solve that problem as well as we can. Rules and guidelines from Orwell or Strunk & White are useful references for solving that problem, but ultimately we have to know our audience and choose our words based on what works. What you had in mind or what you meant doesn’t matter if it comes out garbled and confused on the other side.
The process of solving that problem well also has an additional important benefit. For you to work out what words will re-expand in other minds as crisp vibrant ideas, you first need to have those ideas crisp and vibrant in your own mind. Good writing helps create clear thought for the writer. It’s the source of the old saying that you don’t really understand a topic well until you teach it. If your ideas are vague, what you write can at best convey vagueness.
The Danger of Bad Writing
To understand how writing can go wrong, I want to separate two essential parts of good writing: being clear and being compelling.
Clear writing uses words that accurately convey thoughts to a reader.
Compelling writing uses words that captivate the reader and hold their interest.
Good writing is both clear and compelling. When one of them is lost, we get into trouble.
Writing that is clear but not compelling may be so dry, verbose, or difficult to follow that nobody will hang in there to extract the ideas. Think of a textbook written without passion for the subject, with long (but perfectly factual) paragraphs that you have to read ten times for anything to sink in. Or maybe it’s something written in language or lingo that is just too unfamiliar to the reader. The juice is there, but the reader has to really squeeze to get it, so they may miss your meaning or not bother trying.
Writing that is compelling but not clear may be captivating at first, but falls apart if the reader stops to think about it. It may be comfortingly accessible, welcoming, and speak to people just like them – but it goes no deeper. Think about marketing slogans, trash self-help books and seminars, or common sense that turns out to be false. It might get the clicks, signups, and impulse purchases, but there’s no substance.
In both types of bad writing, good clear ideas fail to be transmitted from one mind to another.
Also in both cases, the writing may not seem obviously bad. It may be perfectly grammatical and even skillfully written. As a result, if you don’t detect the signs of bad writing, you may be inclined to take it at face value, even if you’re a little hazy on the details. For example, clear-but-uncompelling writing may come off as dry but written by seemingly smart people, so maybe you just skip to the end and accept the conclusions. Compelling-but-unclear writing may seem clever, exciting, relevant, and insightful; you may stay engaged, come to be impressed by the writer and their point of view, and fail to notice that they’re talking nonsense.
That means that bad writing isn’t just ineffective, it’s potentially dangerous.
It’s dangerous for the reader because it can use language that seems smart, familiar, or impassioned, but actually conceals lazy thinking, obscuring of inconvenient facts, a lack of cohesive ideas, or outright lies. Bad writing can still give the outward appearance of containing meaning and thought – it can even be beautiful – while being ultimately hollow. We can be easily misled and manipulated when we don’t recognize bad writing that uses words designed to be compelling without actually being clear.
It’s also dangerous for the writer because you might not realize that your own ideas are weak or nebulous if you have the skill to spin out compelling language that feels good, grabs the attention of your audience, and generates immediate positive feedback. Maybe you’re really in tune with your audience or know just what kind of writing style has really worked for other writers. It feels like you’re writing good stuff, but ultimately it falls flat in other minds. It becomes clickbait rather than a miraculous tool to share thought between minds.
Bad Political Writing Then and Now
Those dangers are the crux of Orwell’s argument in Politics and the English Language about the degradation of language in political writing. While the specifics have dated, his description of the fundamental danger of bad writing is useful to understand even today.
One of his great fears was “orthodoxy” – of not having ideas of one’s own but rather only trying to fit in with the group. In Orwell’s time, the orthodoxy he was concerned about was fashionable but intellectually lazy support of communism in academia. Books and papers used language that mashed together all of the outward indicators of research, historical basis, and deep thought, but really said little more than “I’m in the smart-guy club too!” The dense academic language was compelling to a receptive academic in-group, but was very far from clear and concealed the lack of any strong rational argument.
Today political orthodoxy looks much different. It’s the party line, the echo chamber, the meme. Influential orthodox writers are now much more likely to be social media influencers, echoing variations on a narrative that speaks to a particular in-group that will like and share based on how instantly compelling it is, even when it’s not just unclear but often absurd or false. We’ve obliterated the elite of yesteryear, and replaced it with thousands of influencers saying “This must be right because look at my follower count and how many other people are saying it!”
The danger is this: Then as now, orthodoxy means people believe things not because of clear and convincing supporting ideas, but because other people believe them. Orthodox writing skips past thoughtful argument and jumps straight to conclusion. A conclusion without the clear thought to support it is like a shell, and bad writing can be just that: a thin and sometimes beautiful shell surrounding emptiness. Writers whose primary concern is orthodoxy choose words not to convey the strength of an idea, but to echo the language used by the group. Orthodox writing doesn’t try to gather readers in support of good ideas, it polarizes readers into in- and out-groups who agree or disagree with foregone conclusions. And historically when political conclusions are foregone rather than based in reality, the really dangerous stuff starts to happen.
As readers, we can detect the kind of writing style that tends to be used when orthodoxy rather than thoughtfulness is the motivation. Orwell describes the warning signs:
As soon as certain topics are raised, the concrete melts into the abstract and no one seems able to think of turns of speech that are not hackneyed: prose consists less and less of words chosen for the sake of their meaning, and more and more of phrases tacked together like the sections of a prefabricated hen-house.
He goes on to describe how this applies not just to orthodoxy-driven writing, but also speech.
Orthodoxy, of whatever colour, seems to demand a lifeless, imitative style. [...] A speaker who uses that kind of phraseology has gone some distance toward turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved as it would be if he were choosing his words for himself. [...] And this reduced state of consciousness, if not indispensable, is at any rate favourable to political conformity.
Sound like a political influencer? Or for that matter, a TikTok trend chaser or a corporate middle manager talking about circling back to the go-forward actions?
My favorite advice from Orwell about how to detect dangerously bad writing is this:
[Bad writing] at its worst does not consist in picking out words for the sake of their meaning and inventing images in order to make the meaning clearer. It consists in gumming together long strips of words which have already been set in order by someone else, and making the results presentable by sheer humbug. The attraction of this way of writing is that it is easy.
And that brings us neatly on to the topic of writing with AI.
AI Always Writes Correctly but Never Writes Well
The Orwell quote just above made me do a double-take when I re-read the essay recently, because this...
... gumming together long strips of words which have already been set in order by someone else, and making the results presentable by sheer humbug.
... is a remarkably concise and accurate description of today’s chatbot AI systems. That is literally how the technology works. And clearly the reason people are using it is because, as Orwell says, it is easy.
There is a longer story for another day about how today’s popular AI chatbots, LLMs or “large language models”, are built (if you want to jump straight in, this video from Andrej Karpathy is excellent). But for the purposes of the topic at hand, here is the short form.
The company creating the LLM collects as much written text as possible (the entire internet, all books, etc.) and feeds it into a special “training” algorithm. Across the written output from millions of humans, the algorithm pays attention only to the orderings that words have been put into, and specifically how common various word orderings are. We should expect certain patterns to emerge. For example, grammar itself tends to make specific word orderings much more common because humans tend to choose and order their words grammatically. Standard turns of phrase will make certain orderings of words appear frequently. Different vocabulary and patterns of words will tend to be used in the context of different topics. Common questions will tend to be followed by common answers.
The truly clever thing about LLMs is how they efficiently capture and store those common relationships between words, including extremely long sequences of words. The stored distillation of those word relationships is the “model” of the LLM. Once you have the trained model, it can be used to generate new text. If you give the LLM some input text as a starting point (typically a combination of your “prompt” and a large amount of additional hidden text the AI company feeds in), it will then generate output text that would be likely to follow based on all of its cleverly distilled word-sequence data.
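The idea of generating text purely from observed word orderings can be illustrated with a deliberately tiny sketch. What follows is not how a real LLM works internally (real models use neural networks trained on vast corpora, not simple lookup tables), but it shows the underlying principle in miniature: “training” tallies which word has followed which in existing text, and “generation” mindlessly chains together words that someone else already set in order. The training sentence and starting word here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy "training" text standing in for the vast corpus a real LLM sees.
training_text = (
    "good writing is clear and compelling and "
    "bad writing is compelling but not clear"
)

# "Training": record which words have followed each word, and how often
# (duplicates in the list make frequent successors more likely to be picked).
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generation": start from a prompt word and repeatedly emit a word that
# actually followed the current word somewhere in the training text.
random.seed(0)  # fixed seed so the toy example is repeatable
word = "writing"
output = [word]
for _ in range(8):
    if word not in follows:  # dead end: no recorded successor
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
```

Every word pair in the output already appeared somewhere in the training text; the program never chooses a word for its meaning, only for its statistical likelihood of following the previous one. Scale the table up by trillions of words and replace it with a neural network, and you have the essence of the “gumming together” Orwell described.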
Unsurprisingly, that output text is perfectly grammatical, uses common turns of phrase correctly, draws on things people have written in a topic-specific way, and can reproduce correct answers to common questions. In fact the training text is so unthinkably vast that the LLM’s likely word sequence data encodes even more subtle things. Correct logical conclusions, insightful commentary, and relevant emotional sensitivities are generated in output text simply because those things are more likely in the writings of humans than text that is irrational, absurd, or insensitive.
The result is that using a chatbot feels a whole lot like interacting with a thinking human that knows the rules of grammar, has learned common turns of phrase, knows about a staggering array of topics, and can answer questions with what seems to be logical thought and insight, all with a human touch. But what it’s really doing under the hood is literally gumming together long strips of words which have already been set in order by someone else, and making the results presentable by sheer humbug.
LLMs produce exactly the kind of bad writing that Orwell railed against, but written by a mindless word-ordering machine rather than an intellectually lazy human writer. Like orthodox writing, LLM output provides conclusions without thought. LLMs are fundamentally incapable of motivation, reason, understanding, or care, but they are marvelously capable of generating sequences of words that give the impression of those things.
In short, LLMs are machines explicitly designed to produce writing that is often compelling on the surface, but is only clear at conveying ideas by statistical accident. When an input prompt fits into patterns that match enough human-created writing, the thoughts in the minds of the original writers may barely survive the trip through the training process plus the LLM’s mindless gumming-together process, to be recovered in the mind of the reader. But because the LLM is only choosing likely combinations of words rather than choosing words to compress its own thought (which it does not have), the thoughts that happen to be patched together by those words are often muddled, vacuous, or simply factually incorrect. And no matter how much thought you put into your own prompt, the LLM has no mind to understand those words and the intention behind them; it only receives the prompt as relationships of words that influence the likelihood of the output words to follow.
This makes LLM AI not just a particularly bad tool for writing, but a particularly dangerous one.
As readers, we should reject writing produced by AI. It is at best a waste of your time, teasing and leading you on with compelling language that gives an extremely convincing outward appearance of insight while only delivering it by accident. If it wasn’t worth a person’s time to write something themselves to get the point across clearly, it isn’t worth your time to read it.
Not only that, the dangers of AI-produced writing are substantial if we don’t actively reject it as a society. LLMs are excellent at producing a bottomless supply of compelling orthodox content that doesn’t need to convey meaning to achieve its purpose. It only needs to do what LLMs excel at: using the right words to generate endless tacked-together variations of existing messages, the kinds of messages that draw people into polarized in-groups and reassure them by overwhelming repetition into accepting foregone conclusions, no matter how irrational.
For writers, the best advice comes from Orwell:
What is above all needed is to let the meaning choose the word, and not the other way about. In prose, the worst thing one can do with words is to surrender to them.
By outsourcing your writing to AI, you lose the purpose of writing. You surrender to a sequence of words chosen not by your meaning but by mindless mathematical likelihood, virtually guaranteeing that your thoughts will not be conveyed to other minds effectively while lulling you into believing the opposite. And if your writing doesn’t need to convey any real thought, why does it need to be written in the first place? I would argue you’re only increasing society’s horrible poverty of attention.
By not putting in the work of choosing the best possible words to convey your thoughts, you are also robbed of the valuable process of forcing yourself to clarify those thoughts for yourself and others. AI’s command of language will ironically drag you away from insightful, vibrant, meaningful prose, toward hollow imitation, lifeless word salad, and clickbait.
An LLM may write skillfully and quickly, but it will never write as well as you about your own thoughts, ideas, stories, beliefs, and feelings. While AI has some legitimately good uses, writing is not one of them, no matter how good that writing may seem on the surface. You owe it to yourself and your readers to choose your own words.
