68 Comments

I absolutely agree. It’s one thing to use basic tools like spellcheck or word count, another altogether to ask for and take tone or phrasing suggestions from AI. I have pretty low tolerance for AI when it comes to any art form! The point is to hear from and connect with a person, like you said in this piece.

My hope is that the true art will end up feeling so much more human and real in comparison that we’ll all recognize it instantly and be able to weed out the rest.


This is one of the more exciting possibilities, isn't it--the potential for certain communities and contexts and genres to develop even more beautifully due to the contrast with AI-dominated sectors.

I hope this will not cause further division in our society, though.


Interesting – when I first saw ads for Grammarly about 5-6 years ago, they were always in the context of writing an email to your boss. More recently, I have managed AI-generated marketing content at my "real" job. I have no experience with the more personal or academic uses of AI, so I'll leave it at "I agree with you" there.

In the content marketing sphere, there is a constant competition to rank higher than the "bad" AI-generated content and the "bad" human-written content with "good" AI-generated-content-that-is-then-edited-by-a-human-to-sound-more-human. This is, of course, because the Google algorithm has an incredibly technical set of standards for what content it ranks higher, which changes periodically and is almost impossible for a human to absorb and apply to every piece of copy they write. Marketers already use tools galore to check for "readability" and proper distribution of keyword phrases; having an AI write the entire text was the obvious next step. One of the big problems with this is that AI often creates fake statistics to back up its statements and links unrelated data to back it up, which will suffice for Google's algorithm criteria but is...false. It's a very depressing spiral, and most of the "helpful articles" that pop up on Google when you ask it a question are probably a weird cyborg of human and AI.
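
(For anyone wondering what those "readability" and keyword-distribution checks actually measure, here is a minimal, purely illustrative sketch in Python. The function names, thresholds, and sample text are my own assumptions for the sake of the example, not the logic of any particular SEO tool.)

```python
import re

def keyword_density(text, phrase):
    # How often a target keyword phrase appears relative to the total word count.
    words = re.findall(r"[A-Za-z']+", text.lower())
    hits = len(re.findall(re.escape(phrase.lower()), text.lower()))
    return hits / max(len(words), 1)

def average_sentence_length(text):
    # Rough split on sentence-ending punctuation; real tools are more careful.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / max(len(sentences), 1)

draft = "Our widget saves time. Our widget is affordable. Buy our widget today."
print(keyword_density(draft, "our widget"))   # 0.25 -- the "keyword density"
print(average_sentence_length(draft))         # 4.0 words per sentence
```

Real SEO and readability checkers layer many more signals on top, but they are all, in the end, variations on counting.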

I am curious what you think of tools like the Hemingway editor (basic version, not the paid AI stuff), which points out overly wordy sentences, overuse of the passive voice, etc. https://hemingwayapp.com/


I am really interested in your whole second paragraph here. So many problems created by a tech company's dominant algorithm, which in turn seem to require AI solutions, which then lead to more and worse problems. This makes me think, in the academic world, of the position of the student who does not want to use AI but will probably be less successful, in terms of grades, if she does not use it while in a context where many other students do.

I had not seen Hemingway, but that's interesting. It looks kind of like a supercharged spell-check, which can proofread intensively but does not fix the problem for you. I'm not as disturbed by this, although I still think having a person proofread for you is a better option (kind of like with spellcheck itself). I do see how it could be a learning tool for someone who is trying to rapidly improve their writing ability outside of a school setting or in a school setting under the guidance of a teacher.

I'm tempted to say, though -- how long before Hemingway becomes like Grammarly? It's the overall dominance of AI that is concerning to me. Some tools are certainly useful and morally neutral or even positive, but there does not seem to be any stopping AI once we admit it into individual sectors. So while Hemingway could work well for individual people or even individual classes, I wonder if it's like smartphones or social media -- because they have one or two real, good aspects/uses, we accept all the horrible things about them (because we so like the one good thing).


I think a lot of this comes down to the question of how much we can fundamentally trust data (which lies at the root of all artificial intelligence) to tell us what good writing is. Again, in the marketing world, long sentences and complex constructions can only be used sparingly or people will close the tab. That is presumably the data at the root of the complex readability scores that the Hemingway app or search engine optimization software uses. One AI tool I used for work even recommended keeping ALL paragraphs to no more than three sentences. The standards are obviously very different in the academic sphere or academic-adjacent publications -- though most academics and undergraduate writers could probably use a little bit of work on readability, the goal is not to keep a potential customer from closing the tab.

Do the AI tools know that not every reader is looking for a quick-fix solution at a sixth-grade reading level? The data certainly won’t tell them that. I wonder how much these AI tools are forcing what could be good, rich, complex literature into the paradigms of online content marketing blogs.
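
(To make "sixth-grade reading level" concrete: many of these tools rest on decades-old formulas such as the Flesch-Kincaid grade level, which reduces a text to sentence length and syllable counts. A rough Python sketch follows; the syllable counter is a crude vowel-group heuristic of my own, not what any particular tool actually ships.)

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels. Real tools use pronunciation dictionaries.
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def flesch_kincaid_grade(text):
    # Published formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

print(flesch_kincaid_grade("The cat sat on the mat. It was warm."))
# About -2: short words and short sentences score below first grade.
```

A tool optimizing for that single number will always push toward shorter sentences and simpler words, regardless of what the writing is for.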


I’m sorry there are a few typos in there, I had to dictate while wrestling the toddler who woke up at 5 🤪


Toddlers are all about keeping you grounded in real life!


Well said. AI makes choices, and those choices affect the experiences and expectations of human beings. What about the "shoulds" in reading and writing? If AI's shoulds are, for example, "This should be as comfortable as possible to read," but an artist or teacher's shoulds are more like, "This should raise new ideas in a way that challenges the reader to think," we are going to miss out on the second. And that is a problem.

Of course, humans instruct AI on how to make choices. But if AI is based on marketing/keep-the-tab-open spheres and so becomes pervasive in the reading and writing experience, the teacher or artist's perspective will become less and less acceptable.


It occurs to me that this situation sounds like a power grab.


I think you really hit the nail on the head with this one. Algorithms can never replace human thought.

The other thing that drives me nuts about this kind of use of AI is that it's a perfect example of just how lazy we have become as a society. Instead of putting in hard work and effort to become more persuasive and to develop a skill set (and growing as people in the process), our society wants to find the quickest fix possible to achieve the best product. No respect is given to the importance of the writing process itself. If AI is writing your essays, how will you ever know what you actually think and how to articulate it to other people?

We have become so results-driven that we have completely skewed what the purpose of education even is. Students are not assigned essays so that they can boost their GPAs, for crying out loud.


Right. Yet I think we need to remember that we live in a lot of different worlds -- academically, personally, professionally, socially, etc. In sectors where money is the main object, concerns about AI's effect on the mind and the person don't hold as much weight. If it *is* mainly about production in those sectors, and you can't produce as efficiently without AI, you are going to fall behind. If using AI is "best practices," then...


True!

I think there is probably going to be divergence into two camps--I want to use the phrase "parallel polis" here but I'm not sure it fits. The first camp would be the mainstream, screen-heavy, bread-and-circuses, morally and intellectually bankrupt, AI-supported one. (In many ways I think we're already here, sadly.) The parallel polis would be the space where nobody uses AI (or uses it in very limited capacities), resists ubiquitous tech usage in daily life, and seeks deeper thinking and more intellectual engagement. Or maybe it will look more like 'in the world but not of the world' than a separate polis. People who resist the use of AI will have to start banding together; you're right that not using AI will put people at a disadvantage, and I hope there will be places for those who avoid AI to be able to succeed.


I see this possibility, too. But I would add that it's hard to know what will divide or unite people at any given point. Many people who share my faith beliefs and practices, for example, do not share my concerns about AI and digital, wireless tech more generally. And vice versa. Ideally, the distinctive constellation of beliefs and practices held by each person or each family helps us find common ground with all; but on the other hand, the relative weight of one particular, sometimes unexpected thing can divide us in the most striking and unpredictable ways.


Thank you. Trying to curate an AI-free archive of formational content over at renovare.org, but the commitment you speak of and its costs are real. I was recently pressured to use ChatGPT to modernize an antique classical text, and so many web-based tools are integrating AI to the point that you have to painstakingly opt out rather than opt in. 😭


Just as an example, these “decks” I create as previews for each article or excerpt we share could be outsourced to AI, and someone suggested I do that to save time. Now, I don’t know that each deck I write is a gem of wisdom, but my aim is to listen to the piece of writing and to God and discern how to invite people into the writing. AI cannot discern, and even if it gives a super-accurate summary of the piece, it will lack beauty and therefore lack truth.


Wow, Grace. Thank you for your determination to opt out of AI in your work. You make excellent points; can AI really discern, really evaluate? AI is antithetical to the Platonic triad.

Does it drive us farther away from God? Create another veil between us and him?

Also, thank you for articulating that you have to "painstakingly opt out rather than opt in." This is the right phrasing for what I was trying to get at in my example of a student or young worker who not only has to deliberately choose not to use AI, but will lose out in some ways from not using it.

When you have to opt out from something, it is a sign that the choice is about to disappear from the range of normal, that what was once an option is now becoming a presumption.

It is becoming common sense to use AI.


Yes! Exactly what you said -- the presumption is that one will use it to succeed or be efficient. And just to clarify: I refused to use it to modernize the text and said I’d quit before I ever did that. The ultimatum wasn’t well received but was accepted. So I just wanted to affirm what you are saying -- the decision may have career-impacting fallout… but this is always the case when following Christ as best we can and applying our theology to every aspect of our lives and work.


I admire the stand you took and am also grateful for it. I suspect that we will see writing, publishing, discourse, etc. diverging into two directions, something like what I mentioned below Greta's comment above: one that seeks an ever-deeper authenticity and attachment to virtue, and one that focuses on production and even manipulation. I hope this doesn't happen, but I do see some opportunities in it for the growth of life-giving contexts even beyond what we have now.


Full agreement here. Grammar is a wonderful thing to learn and use.


Ain't it, tho? Grammer is so well.


I oppose AI but use Grammarly. I only use it as a proofreader, however—I don’t take its suggestions for rewrites and frequently ignore even the grammar suggestions, if I’m trying to do something a little odd. (It had a field day with my short story about a Southern hick ghost exterminator.)

I don’t think Grammarly and ChatGPT are necessarily on the same field—at least, as far as I’m aware, you have to actually write something with Grammarly before you can do anything with it. ChatGPT doesn’t even need that.

Students and new writers should definitely avoid the rewrite capabilities, though, that’s for sure. Everyone needs to develop their own voice. That’s honestly another reason I only use it for proofreading—when I tried the “suggest a rewrite”, I found it completely changed my voice. No thanks!


It sounds like you're using Grammarly like the Hemingway app that Carolyn mentions above, which is definitely less problematic.

When I went to check out Grammarly once again to make sure I was accurate about it, I discovered that it will generate drafts for you -- I don't know if you have to feed it some of your own writing first or not, but it seems that there's a pretty slim margin between what it does and what ChatGPT does. On the other hand, though, Grammarly can be used for its less intrusive, less generative features, as you do, and ChatGPT is mostly generative (in my understanding).


Ah, I didn’t know it would generate drafts for you! I’ve either not done enough poking around its system, or that’s a new feature. That definitely makes it more problematic. I’ll have to check out Hemingway instead.


It will at least generate sections for you -- you can direct it to "write an introduction," for example. It's possible it doesn't generate entire drafts? Maybe I overstated this? But related AI does, of course. And generating part of a draft is very near...

Anyway, the latest level of generative features may be new-ish; I was not expecting that level of generation when I went to look at it again before drafting the post, having understood it to be something less generative!

But the website is unapologetic; it bills its generative features as, well, features. Which they indeed are to many users.

Plus, the rewriting is so total, when you do ask it to rewrite. The examples on the website are clear about this!


Okay, I went back and double-checked, and Grammarly will indeed generate entire drafts for you. Yikes!


Yikes! Yup, time to find a new proofreading tool. I appreciate you bringing this to my attention!


As a high school teacher, I truly struggle with this daily. My school actually pays for Grammarly for all students in an attempt to give them access to timely feedback, yet they still cannot write. Most of my students will not bother to use it. The new standards do not deduct points from student writing as long as errors do not interfere with meaning. Over the last few years I have seen an overwhelming trend of degenerating writing skills. At this point, students do not use any capitalization, consistently misuse pronouns, abandon punctuation, and write in fragments. I teach 185 students daily. One essay assignment takes me literally weeks to grade because of how much time it takes to give meaningful feedback, and by the time they get it back it has become pointless. Now, they gleefully submit AI-generated work, and they do not even understand what it says. In their minds, they “did the work” and should get credit. I have given many lectures warning them that they are becoming intellectual slaves and they don’t even know it. My five-year-old granddaughter (who is being homeschooled) has a better vocabulary than most of them and asks more astute questions. I do not know what the answer is. I’m exhausted.


Oh, Tamara. How exhausting, indeed. My children are homeschooled and my husband and I teach at the college level, so my understanding of how writing is going in normative K-12 schools is based only on research, not experience. I am so distressed to hear what you are experiencing.

I deeply understand about the impossibility of commenting on, grading or even just reading 185 essays in anything like a timely manner. I also understand about the sense of entitlement: "You have the product, now where's my A?"

I don't know the answers, but thank you for trying.


Goodness, so this whole Grammarly thing is news to me. I'm apparently out of the loop on this stuff so catching up with this essay and the comments has been sobering and kinda freaky.

I did teach middle school English/language arts briefly a few years ago. Even then, a majority of the students couldn't read properly at grade level... and that had implications for everything else one does in a language arts class! Such a depressing state of things. It seems these trickle-down effects of bypassing good literacy and the writing that comes with it are taking hold quicker than I thought (well, actually, I can believe it). It's sad and exhausting (and that's part of the reason I couldn't do it anymore).


The academic losses from the pandemic were enormous. I think it's worse in schools in this way than it has ever been before.

In addition, many educators frankly no longer believe that writing or reading are essential skills in our economy, so they don't think teaching them beyond the most basic level is important. I heard this from a teacher once when I asked for advice about helping a child form letters in a more efficient way: "We don't really worry about that anymore, because people mainly just type." (I'm not trying to teacher-blame here; these are major disagreements between educators who value these skills not only for themselves, but also for what they do cognitively, and those who are focused on education as merely economic preparation.)

I wonder how common the use of Grammarly or whatever AI along these lines is on Substack. I saw a few mentions, as I said. I think the not knowing is a significant part of the problem. It's not disclosed.

Did you see Anneliese's comment below about the blog she reads that had a sudden uncanny shift in voice?


Also, though... they don't teach typing anymore. Because hunt-and-peck or thumb typing is enough on a smartphone.


I did! That's wild, and I see from these comments it will increasingly become the norm for disciplines and purposes across the board.


AI makes me so grumpy. But, even aside from all the issues, I think the primary issue with Grammarly is that bypassing the work needed to edit and become more clear is actually bypassing the THINKING that is needed to gain clarity. Writing is one of the best ways to actually understand something, and there’s something really important that happens in being able to articulate an idea in words. So the idea that we can just outsource that means that we’re also ultimately outsourcing our own thinking. As a mom and a homeschool teacher I really want my kids to have to struggle through that process as students, even if they might eventually use AI in limited capacities.

I would also feel similarly to you re: AI on Substack. A blog I’ve been reading casually for years recently had a few bizarre posts that I’m 99% sure had some of the copy written by AI. It was for a product review, which rubbed me even more the wrong way! I really wanted to ask what was going on, but didn’t. It just makes the ground feel a bit more shaky. Am I talking with a real person, or not? You want to believe that the “voice” you read really is that person, and when that assumption is shaken it’s very unnerving. In my mind it’s really a breach of trust. If you’re going to use AI, it needs to be cited, you know? But the way things are going, I see it as much more likely that the assumption will be that AI was used, and we’ll have to be explicit in stating that we don’t use it.


Right, it's so disturbing to not know whose voice you are reading. This is part of what makes me call this kind of AI "impersonation;" it poses as a person.

Your point about thinking is excellent. When the goal is a product, though, this emphasis on what is good or not good for persons, truth, or beauty fails to convince, because those things are not really values in that context, or at least not to the degree a craftsman, artist, or simply a spiritual person might desire them to be. It's depressing.


Yea — same thing with AI “music”. Abhorrent to any musician, but the average ear might not even notice because we have no education about what comprises actual excellence in art…


Have you heard about the "uncanny valley" idea? Chris was telling me about it the other day. Basically, if I am expressing it correctly, there's this spot where something can land when it is alllllmost like something real, but not quite. It feels uncanny. It feels off to us, but we don't know why. I feel that way about lots of AI, but I'm worried that the uncanny valley part will disappear and it will become indistinguishable...


Ah yes. That sounds like what I’m describing with that blogger! Especially if you’re familiar with someone’s real voice/work. But I have the same fear.


This: "Writing is one of the best ways to actually understand something, and there’s something really important that happens in being able to articulate an idea in words."


In your Current article, you write:

". . . if I were in a Medieval-style guild for writers, I can’t imagine letting an apprentice pose as a master or even a journeyman simply because he had found a way to impersonate one, to produce something that poses as masterful work.>

In fact, this is EXACTLY how masters operated in medieval guilds -- they had a large workshop, with many assistants who did the bulk of the work. The master, for all his technical skill, was ultimately the CEO, and in order to be financially viable could not afford to do all the work personally. Also, in many cases studios would bring in specialists to handle particular tasks -- Rubens supposedly had a "cloud guy" who just painted clouds.

By the way, this sentence is at best clunky, and might even qualify as ungrammatical. The first clause uses a subjunctive verb form: ". . . if I WERE in a Medieval guild . . .", but the second clause uses an indicative form: ". . . I CAN'T imagine . . .". Ouch! It should be, "If I WERE in a Medieval guild, I COULDN'T imagine . . ."


Yep, it's just a blog post, and it's not perfect, and it's not perfect because I'm human. I do make mistakes! I'd rather make mistakes than use Grammarly! That's the point.

Part of my argument is that there is a difference between working with people and working with AI. I disagree that these things are "exactly" the same, whether that word is put in all-caps or not. I also disagree with the CEO framing; the guild was a very different kind of set-up, and it was certainly one in which the master had both personal mastery and oversight of others in his craft.


I used Grammarly when I was in college (class of '20), and when I used it, it would simply correct spelling errors and suggest using fewer commas or maybe breaking a sentence into two. Most of the time I would ignore anything but the spelling suggestions. I stopped using it after graduation and have pulled it up a few times over the last couple of years and find it completely unnecessary and unusable. They went all in on the AI stuff back in 2022, and now it should probably be banned on college campuses.


Yes, I remember that when it was first advertised it was not as extensive as it is now.


“And insofar as writing is a craft, if I were in a Medieval-style guild for writers, I can’t imagine letting an apprentice pose as a master or even a journeyman simply because he had found a way to impersonate one, to produce something that poses as masterful work.

Guildmasters don’t produce—they create.”

Amen, amen!

(Also, it really annoyed me in the early years of Grammarly when someone who was reading my draft would run it through the program and inform me of its corrections. And it was WRONG. It wasn’t even a good bot!)


Kinda makes you want to shout "I don't care," right?


Right!


Also, it's frustrating because having a good human editor is such an amazing experience. That collaboration is so fruitful! Why on earth would we want to lose that?


Yes! I think it’s hard to imagine, but once someone has actually experienced what a *good* editor can do for a piece of writing, it’s hard not to want it.


I do use Grammarly, and this conversation is timely as my annual "premium" subscription renews in ten days. I ignore most of its suggestions to rewrite sentences and ALL of its suggestions for word choice. "An average reader might not understand the word capricious." (Well, I guess I don't write for the average reader, pal.) I do like Grammarly overall, though, and use it as a sort of guide to proofreading more closely. Am I wrong to use Grammarly to proof? Am I a horrible, inauthentic writer? I don't think so, but I'm probably paying for more capability than I would ever use. Before I renew, I will give MS Word's free tools another try (and by the way, aren't those AI-powered as well? what's the diff?). On another note, I used Hemingway once while briefly working in the sweatshops of blog writing for a tech company -- it gave me a serious ick feeling, the way it wanted to simplify language -- it targets an eighth-grade reader. That, and the low pay, is why I no longer contract for blogs.


Thank you for your comments. I'm interested in your perspective.

Yes, I really have no familiarity with Hemingway; I suspect I'd get an "ick" factor there, too, but I don't know.

I hope you didn't feel judged by my essay. I tried to be clear that I was not offering a moral judgment on writers who use Grammarly, but I realize that the essay as a whole is what you might call a "spicy" take, so I'm sorry if it hurt you personally in any way.

I personally am concerned about AI and writing generally -- including things as basic as spell-check. I remember that when using spell-check became the norm, back when I was in high school, my teachers were very concerned about it. Few people are concerned now, and it pales in comparison to ChatGPT, for example. So in that way, I would say that these AI-powered tools are on the same spectrum. However, they differ in degree and also in the level at which they have the capacity to impersonate in a deceitful way, the degree to which they are tempting, and the degree to which they may cause harm (whether to writer or reader).

So I would put all of these in the same category, but would address each with nuance. I also think that many of the tools on the less-concerning end of the spectrum could be used by individuals in reasonable ways, as it sounds like you yourself do. I think overall, however, they are having a serious negative effect both educationally and on the craft of writing and communication.


I’ve never used Grammarly — actually hadn’t heard of it until a writers retreat last August. I was wildly uncomfortable with how okay some of my fellow writers were with the use of generative AI, Grammarly and ChatGPT both.

Yet another upstream swim 🫠


I think part of my reaction to learning about the use of Grammarly on Substack is that I haven't heard this conversation. I was presuming that most people who were trying to be writers--as opposed to just communicating for marketing or technical work--would not be using AI regularly. I was naive, I guess, as I was wrong about this. I'd like to hear more conversation about it overall, and especially about the question of whether we need to disclose that we are using AI beyond spell-check or autocorrect, just as we seem to agree that we should disclose affiliate links or paid promotions.


Just catching up on these great comments.

I think about this often as a homeschooling mother. One of my biggest motivations is to teach my kids to write without these types of AI "aids". My kids aren't even online that much, but we have to have the discussion of how algorithms, AI tech, and other apps actually take away our brains' ability to think. We want our kids to learn the rules to games, even if the app they play them on automatically applies them. We want them to know how to write properly even if in ten years no one will write their own emails but will just blink them into existence. If we're "educating" people with the use of this kind of technology, that's a bigger problem than the technology itself.


I agree. Educating young people into technology without also educating them into basic independent skills of thought and action seems foolhardy to me, too. The argument is that this is just basic foresight -- "They won't need any of those skills!" -- but I think it actually lacks foresight.


I use it to catch spelling errors. I don't take advice from it, but I am secretly pleased when occasionally I see a message like this, "We couldn't have done it better." :-)


I absolutely agree. Back before Grammarly went full AI, I noticed that my freshman comp students who used it tended to unthinkingly follow its suggestions for "better words," which were not, in fact, better choices. I've never used it, and when my school-supplied version of MS Word had it embedded I had my techy husband turn it off. No, thanks.

Some commenters have asked about spellcheck. I do think it's different in degree (and very useful!). But it has the same tendency to make us more unthinking writers. I've tried to be deliberate about noticing words I tend to misspell and to commit the proper spellings to memory, so that I'm learning and not just leaning on the technology. I imagine that's the best way to use any supplementary writing tech.

(I note here that "SupplEmentary" does not have an "i" as I initially typed lol).


Your story about your freshman comp students is intriguing. They are saying, "Give me a better word" in the same way that they go to Google and say, "Give me an answer to my question." There is little discernment in either case, so from a growth/learning point of view, it is of considerably less value to them than saying, "I wonder if I can think of a better word, or if I can talk to a friend about it," or, "I wonder how I can think through the answer to my question, or talk about it with others, or find the best resources and evidence regarding it."

This connects in my mind with the reduction of browsing at libraries and bookstores in favor of just doing online searches; we miss so much!
