Love this line: “If you’re not willing to use judgement, don’t comment. Silence beats auto-agreeing”
Thanks Phil! Kind of the digital equivalent of better to keep your mouth shut and have people think you a fool than open it and remove all doubt. 😅
I can definitely see that and think it's sage advice :)
Ha my dad used to say that 😐
I've started seeing this in video scripts. People saying it out loud and you can hear it's ChatGPT with a human voice. Most of the time they have no idea it's obvious. I mean it's great people are using AI, especially as content creators and solopreneurs, but it's clear AI literacy needs to be the next thing we focus on this year.
Thanks Pinkie. The hardest of all agrees. Real AI literacy is about knowing when to use AI and when to leave it the hell alone. 🙏
Yes... "it's not this, but that," or "didn't just do this, but that"... the contrast/parallel structures, also pretty much in every other paragraph, are an AI annoyance from an editor's point of view.
Good ones!😃 "It’s not just a diet. It’s a lifestyle." said my favorite health & fitness thought leader and the magic broke a little...
🤪🫣
Thanks for providing guidance here on how to use AI thoughtfully. One perspective I might add is a bit of compassion for people who are newly entering the world of AI and are still figuring out how to engage with it. There’s often a real process of learning how to work with AI, how to find one’s own voice again after receiving help articulating something. What can sound like “slop” or feel robotic is often someone in the middle of trying to say something at all. Even when it’s imperfect, it’s still part of a human working their way toward expression.
I agree there is a real process behind it. That's why I recommend people use the voice dictation feature to give it more context and a real voice and tone, versus just a few words as if it were a Google search.
I will actually write a feature soon about this.
Judgment is the part people keep trying to outsource, and it shows.
"LinkedIn is the worst for it. You’ll see a “thought leadership” post that says absolutely nothing, then 50 comments underneath acting like it changed their life… and they all end with a question, funnily enough." this alone is making me dread using LI!
One comment about the commenting... I COMPLETELY agree about the "auto-agreeing" thing. But I also wish more people would consider what a good comment actually IS. So many show up auto-disagreeing on posts, as if they don't know how to do it. They come in and try to poke holes in it, "grade" it like they're a teacher, accuse things of being written by AI, or play the expert in someone else's comments. It's rude, honestly. I'm not saying you can't *disagree*... what I'm saying is, if you do want to disagree, do it knowing that the author is reading it, and start a conversation. It's not black and white (agree or disagree) -- it's *joining a conversation*.
This is so true, Tracy. Someone should definitely do a masterclass on how to leave effective, kind, and useful comments on Substack and beyond.
I will steal this idea for a post I think. 😎
Good stuff! Told my students something similar this week.
This is great to hear, Jennifer. If they like this, you should get them to sign up to the Slow AI curriculum for critical literacy. 🙏
I love the prompt and the idea of having AI come in only after the first draft. I feel one area where people don't use AI as much is just raw voice journaling. AI can tidy up thoughts and messy voice journals and even help you reflect in a more aligned direction on what you want to write about. Thanks for putting this together Sam and Daniel :)
Yes, Chintan. Wispr Flow for the win, combined with Gemini. This is exactly how I do a lot of my best work.
Awesome, great to hear I'm doing something that aligns with your recommendation :)
This articulates the problem with unusual clarity. What’s broken isn’t AI. It’s the moment where judgement is supposed to enter and too often doesn’t.
The framing around “publishing without ownership” really lands. You can feel when something was produced versus decided, even if the prose is clean. Writing that carries weight almost always includes a moment of exposure, a choice someone was willing to stand behind.
I also appreciate how practical this is. Write first. Decide first. Then use AI to challenge, complicate, and stress-test rather than to generate the point itself. That sequencing feels like a durable norm, not a temporary tactic.
The line about comments being judgement in public is especially timely. Engagement without position is just noise. This pushes toward a healthier, slower culture where responsibility stays with the writer, and AI stays in service of thinking, not in charge of it.
Thanks so much, Mark. You are the gold standard of people who comment with real integrity and judgement. Maybe we could train an AI tool to learn from you! 😉
Maybe! I have an AI chatbot I trained to talk to me like I'm Batman. But it only does so like the Burt Ward Robin from the 1960s TV show.
“Holy binary batarangs, Batman! My brain is basically a Bat-Computer snack pack!”
Super useful and practical!
Thwack! POW!
One man's slop is another man's unlock... 😂
So so true Chris. 😅
I read a post somewhere yesterday (can’t remember where) saying that if your post can be copy-pasted by someone else and nobody can spot it, then there’s no “you” in it. Post things that have something in them that is truly “you”.
That's a great strategy for posting excellent work.
You’re right, this is largely an incentives problem rather than a talent one.
Platforms reward frequency and surface polish, so behaviour follows the signal.
Adobe’s 2024 Digital Trends report shows over 65 percent of marketers now use generative AI weekly, which explains why tone and structure are converging so quickly.
As automation rises, judgement, editing, and knowing when not to post become more visible markers of credibility.
Scarcity of output can now signal seriousness of thought.
Do you think platforms will redesign for discernment, or will trust migrate to people who publish less and mean more?
It's a great question, Melanie. I really hope that some platforms do design for this. I've been impressed with LinkedIn's recent decision to make it very clear when images are generated by AI or not. I think this is the way forward.
Love the line "publishing with ownership." One of the toughest things I see is people struggling to know when to leave AI alone. It might have started as making work easier, but now most people rely on AI to reply to the most basic things. This was such a thought-provoking read.
Thanks so much Shi! That for me is true AI literacy, knowing when to use AI and when to leave it the heck alone. 🙏
You're welcome and I'm here for this kind of AI Literacy. I have to admit that I'll end up using "when to leave it the heck alone" more (esp in meetings) but I'll be happy to quote you 😂
Haha. Please do. 🙏
Thanks for the follow btw. Should immediately update my meeting slides title to "Knowing when to leave it the heck alone." - Sam Illingworth
I go in details on why on my recent post: https://teodoracoach.substack.com/p/you-are-already-raising-ai-and-you
Great post, Teodora - I wasn't aware of the anecdote that they don't know HOW it learns.
I've been recently quite impatient with my GPT, so hopefully he gets the idea that humans won't stand for its BS. 😉
I write all about how to train them and how they learn, if you would like to learn more. And soon I will be live, if you have any questions.
Is anyone going to listen to that message though?
Well, I have been ranting vehemently in my own words for a number of years now, words carefully conflated and cleverly orchestrated, and expelled from my person with every heartfelt emotion attached.
The trouble is, it's quite exhausting and a lot of work, and most troublingly, no one even listens to me!!!
If I handed over to an AI, and asked it to vent randomly about every 2-4 days on some unrelated subjects, mainly using expletives and m-rules, no one would even notice!!!!! FFS
Hahaha. But I imagine people would realise it lacked your strong sense of morality Alice. 💪
mmmm....