How is the author complaining about the quality of their own writing while admitting to not even bothering reading what they wrote, let alone editing it?
(Also, why would using an LLM-based grammar checker trigger an AI writing detector? Did it end up rewriting substantial parts of the original submission?)
And that's, I think, a valid choice; you can choose to use all the tools and make something grammatically and stylistically as close to perfect as possible, but who would want to read something so dry? That's for formal writing, and blog posts are not formal.
Not reading what you write smells more like laziness.
Same thing for spell checks, grammar checks, and even AI usage. If you use things lazily, the result will be lazy as well.
Instead of asking an AI tool to write your thoughts in your place, you can write them yourself and ask it to critique your text: instruct it not to rewrite anything, only to give you an overall picture of clarity, sentiment, etc.
But that of course would require more work. Asking ChatGPT to produce a text based on a lazily written, bullet point list of brainfarts is probably easier.
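The "critique, don't rewrite" workflow described above can be sketched as a request builder. This is a minimal illustration, not anything from the original comment: the helper name and prompt wording are invented, and the message format assumes the common OpenAI-style chat schema.

```python
# Sketch of a critique-only request: the system prompt forbids rewriting
# and asks only for an overall assessment. Prompt text and helper name
# are illustrative assumptions, not a documented API workflow.

CRITIQUE_SYSTEM_PROMPT = (
    "You are an editor. Do NOT rewrite or rephrase any part of the text. "
    "Only give an overall assessment of clarity, sentiment, and structure, "
    "and point out passages the author should revisit."
)

def build_critique_request(draft: str, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-style request that asks for criticism only."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": CRITIQUE_SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
    }

request = build_critique_request("My draft blog post goes here.")
```

The point is in the system prompt: the model is told to assess, not to produce replacement text, so the author keeps doing the actual editing.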
An LLM can't really do that. It can help you produce a correct sentence where you struggle to create your own, but it does not have the capability to do what you suggest.
Plus, "lazy" would actually be just using AI to edit the writing.
> you can choose to use all the tools and make something gramatically and stylistically as close to perfect, but who would want to read something as dry
If it is dry, then it is not stylistically perfect. By definition, dry writing is just imperfect writing. Stylistically perfect writing does not have to be dry, and usually isn't.
What happens here is that people say "stylistically perfect" when they mean "followed bad stylistic advice".
I do not mean this comment as a kick against AI. It is very good for some things and less good for others. What annoys me is someone calling the output superior while actually complaining about it being inferior.
Hey, maybe that LLM needs to be used differently to achieve actually good writing results.
The problem is that it has a pretty high false positive rate. Maybe it thinks it's AI because there are absolutely no spelling mistakes. Or maybe you're French and you use Latin-root words in English that are considered "too smart" for the average writer.
And the problem is that people run those tools, see "80% chance to be written by AI", and instead of treating the remaining 20% as enough uncertainty to withhold judgment, will assume it's definitely written by AI.
Grammarly has recently started rewriting whole paragraphs. I have been having to reject more and more of these "prompts", whereas in the past I would accept them almost by default because they actually were grammar checks.
Personally, I would recommend they simply use any old editor with spellchecking enabled. That suffices for most writing where you just want to keep your own voice. To me, the red squiggly line just means that I should edit that word myself. In the rare case where I'm stumped on the spelling I'll look at the suggested edit, of course, but never as a matter of course.
Computers, digital text, and digital distribution have made writing and thought cheap. And as we are surely all aware, humans rarely value that which is cheap, whether in money or in effort and the qualities that follow from it. What people seem reluctant, or perhaps unable, to acknowledge is that predating the current AI slop was what could be called human slop: low-quality, low-effort, careless output that was cheap, regardless of whether AI slop now outperforms it.
That is why you are justified in pointing out that even in a post complaining about AI slop, the human has apparently abandoned what was common practice until just recently: using basic spellcheckers, reviewing what was written, and deliberately practicing the art and skill of writing, grammar, and sentence structure.
No one is perfect, and that is part of what makes anything human: somewhat inexplicable, random variation. However, it takes a certain refinement before unique human character becomes a positive quality rather than just humans being sloppy ... human slop.
Just like hand made items are popular for their imperfections.
eg: https://ids.si.edu/ids/deliveryService?id=SAAM-2011.6_1
from: https://americanart.si.edu/artwork/mandara-79001 https://www.museumofglass.org/ltlg
Personally, I'll just continue to use my own voice. I try to correct spelling and grammar mistakes, and proof-read my writing before posting.
It's not perfect, and my writing can at times be idiosyncratic, but it's my voice and it's all I've got left.
But don't be mistaken in thinking that those mistakes make it better; they just make it mine.
I want real humans giving real human opinions, not AI giving its best guess at the most "rewarding" weighted opinion.
> "..but maybe it's a good thing that most of us don't allow this technology to reframe our thoughts."
No, you're not the only one experiencing this; I had the same concerns as you: with every new thought, every new creation, I had to ask the AI's opinion, as if I were no longer able to judge or decide without consulting the AI (...just to be safe, you never know...).
The only way to regain your creative ability is to write down your thoughts yourself, read, reread, rewrite, correct, express your opinion...
What AI can't do is convey emotions.
You're trading ability and competence for convenience.
As English is not my first language, I do run into the problem that the line between "fix my clumsy sentence" and "rewrite my thought" is very thin. The same goes for writing a "boring" technical explanation versus more approachable content. I get pushback for both.
For years, even before LLMs, there have been trends of varied popularity to, for lack of a better word, regress - intentionally omitting capitalization, punctuation, or other important details which convey meaning. I rejected those, and likewise I reject the call to omit the emdash or otherwise alter my own manner of speaking - a manner cultivated through 30+ years of reading and writing English text.
If content is intellectually lacking, call that out, but I am absolutely sick of people calling out writing because they "think it's LLM-written". I'm sick of review tools giving false positives and calling students' work "AI written" because they used eloquent words instead of Up Goer Five[0] vocabulary.
I am just as afraid of a society where we all dumb ourselves down to not appear as machines as I am of one where machine-generated spam overtakes all human messaging.
That should leave you with media sources like the NYT and your local library, which seems healthier to me. And maybe it might encourage a new type of forum to emerge with some decentralized vetting that you are human, like verifying yourself by inputting the random hash posted outside the local maker space.
I hope editorial departments everywhere are taking careful notes on the Ars Technica fiasco. I agree there's room for some kind of quick "verified human" checkmark. It would at least give readers the ability to filter quickly, and eliminate all the spurious "this sounds like vibeslop" accusations.
It does not resemble that. It is usually grammatically correct writing, but it is also pretty ineffective: bad writing with good grammar.
The structured thing you mention is the one that bugs me most. I genuinely think that most human writing would be improved by having more of the "signposts" that LLMs overuse. Headings, context-setting sentences, bullet points where appropriate, etc. I was doing "list of bullet points with boldfaced intro for each one" before the LLMs were. But because the LLMs are saturating their writing with it, we'll all learn to take it as a sign of glib superficiality and inauthenticity, and typical good human writing will start avoiding everything of that kind, and therefore get that little bit harder to read. Alas.
1. There was a lot of slop pre-AI. In fact I’d say the majority of published writing was bad, formulaic, and just written to manipulate your emotions. So in some sense, I don’t really think pre-AI slop had more value. It’s just cheaper to make now.
2. AI has prompted me to study more off-beat writers that followed the rules of language a little less frequently. This includes a lot of people from circa 1890-1970, when experimenting with form was really in vogue.
3. Which brings me to my third point, which is that no matter how much the AI actually knows about writing, the person prompting it is limited by their own education and knowledge of writers. You can’t say, “make me a post in the style of Burroughs” if you don’t know who Burroughs was, or what his writing style was. So in a sense there is an increased importance to being educated about writing itself. Without it you’re limited in your ability to use AIs to write stuff and in your awareness of how much your non-AI written work is influenced by AI writing.
I get that the mainstream ones have been RLHF'd to death, but surely there must be others that are capable?
This is called Hemingway because he was apparently good at communicating efficiently, which made him a popular author.
First of all, they will make substantive changes you didn’t intend. The meaning will get changed, errors will be introduced. Tone will be off, and as the author says, your voice will disappear. There is no single “correct” way to write something. And voice and tone are conveyed with grammatical and usage variation. Don’t give that up to a robotic average.
Secondly, you will never improve, or even maintain, your own writing skills if you don’t actively engage with the suggested changes. You also won’t fully realize half the purpose of writing, which is to understand the topic better yourself. Doing the work of editing your piece will help you understand the subject even better. If you just let the machine “fix” your errors, you’ll become a worse writer and less of an expert over time.
I never passed off any AI writing as my own. I would feel utterly awful. Also, I love tweaking words until they sound perfect.
The number of people who just nonchalantly admit that AI writes their messages is honestly scaring me.
What it is going to be is a 'Slop Decade' - a much better label if you insist on having one.
"Save during the summers and you'll make it through the winters".
Several subreddits became AI slop submission repositories and their human engagement dwindled. Some subreddits that were inundated with AI slop implemented policies that ban it, and it seems to work well.
Strict no-slop policies work, and surprisingly, so do rules that require AI submissions to be tagged as AI. Forcing slop slingers to tag their slop does a good job of discouraging it; it turns out that admitting your slop is slop is embarrassing or something.
Or maybe there'll be the elite enjoying the world, while the rest of us have to work manual labor. But at least it'll be AI systems ensuring our compliance!