Probably my best and most insightful stuff has been produced more or less effortlessly, since I spent enough time/effort _beforehand_ getting to know the domain and issue I was interested in from different angles.
When I try writing fluff or being impressive without putting in the work first, I usually bump up against all the stuff I don't have a clear picture of yet, and it becomes a neverending slog. YMMV.
People asked for permission to repost it, it got shared on social media, and it ended up ranking higher in Google than a Time magazine (I think) interview with Bill Gates of the same title.
The problem is it isn't easy to detect it and I'm sure the people who work on generated stuff will work hard to make detection even harder.
I have difficulty detecting even fake videos, so how can I possibly detect generated text accurately? I will make plenty of false-positive mistakes, accusing people of using generated text when they wrote it themselves. That will cause unnecessary friction, and I don't know how to prevent it.
Second thought: Does it _really_ matter? You find it interesting, you continue reading. You don't like it, you stop reading. That's how I do it. If I read something from a human, I expect it to be their thoughts. I don't know if I should expect it to be their hand typing. Ghost writers were a thing long before LLMs. That said, it wouldn't even _occur_ to me to generate anything I want to say. I don't even spell check. But that's me. I can understand that others do it differently.
I see LLMs more and more like a mirror: if YOU can orchestrate high-level knowledge, have a brutally clear vision of what you want, and prompt accordingly, things will go well for you. (I suppose this all comes back to 'context engineering', just with higher specificity about what you are actually prompting.) Turns out domain knowledge, wisdom built from time and experience, and experience in niches, whatever they may be, are and always will be valuable!
It can write about a spark, but the content has no spark.
It came out engaging, refreshing, and in some parts punching really hard.
Mostly it's not as good indeed.
LLMs are naive and have a very mainstream view on things; this often leads them down suboptimal paths. If you can see through some of the mainstream BS on a number of topics, you can help LLMs avoid mistakes. It helps if you can think from first principles.
I love using LLMs but I wouldn't trust one to write code unsupervised for some of my prized projects. They work incredibly well with supervision though.
Important qualifier there. There's a massive oversupply of contrarian thinking; it's cheap, popular (populist), and incorrect. All you have to do is take some piece of conventional wisdom and write the opposite. You don't have to supply evidence, or if you do then a single cherry-picked piece will suffice.
I'd say something more like "Chesterton's Fence Inspection Company": there are reasons why things are the way they are, but if you dig into them, maybe you will find that the assumptions are no longer true? Or they turn out to be still true and important.
Just as one minor example, after working in blockchain space in Germany, I left the industry with a feeling that there was corruption in the sector involving government officials (just based on a lot of weird stuff I witnessed). At the time it was just a paranoid feeling; I couldn't make sense of what I'd experienced because I could not pin down a motivation. But fast forward a few years and I saw an interview in which Marc Andreessen mentioned that some US government officials under Biden actively went after certain blockchain projects and how some people were de-banked for apparently no reason.
This was interesting to hear because I had lost access to one of my bank accounts a few years prior and the bank wouldn't tell me why. I also got audited by the tax authorities in Germany, though my tax record was perfect (they had to concede). This was weird considering my income was not that high and my situation was relatively straightforward. I'm still not fully settled on a conclusion there, but every year my worldview seems to make more sense.
I only recently managed to start taking advantage of what I'd been observing. For example, I anticipated the current precious metals rally a few years ago. Just based on my feeling/observation that crypto had been corrupted by governments and people would start looking for other assets. Before this, I just didn't have any capital to invest so I could not act on my accurate predictions; I could only watch in horror.
The #1 point really: have access to data / experiences / expert knowledge that's unique & can't be distilled from public sources and/or scraped from the internet. This has always been the case. It just holds more weight when AI agents are everywhere.
Agreed. You may know many things, but ultimately it's useless if the other party doesn't care about understanding them. And I have no clue what the right way is, besides letting people and their models fail and then being there with an answer ...
If you're worried about producing "content", the completion bots have caught up with you.
See the other posts calling the article "a Linkedin post". Those were slop even before LLMs.
Now if you have some information you want to share, that's another topic...
The term content creator represents inclusivity, not genericity.
You have used the term information as a candidate for an alternative. What if someone is sharing an experience, an artwork, or simply something they found to be beautiful? There may be an information component to some of those things but the primary reason that they were offered isn't to be informative.
You don't seek content any more than you seek words. You may read books made of words but it is what the book is about that you look for. The same goes for content, only with a broader spectrum. You seek things that you like, things that you value. Content, being nonspecific, means your horizons can be broad.
I like your words analogy. A "content creator" is a "words writer". We need some words on this page or it looks weird. Go and get me some words.
Users don't seek words, but operators seek to entice users with words so they'll view the advertisements.
"Content" is the same thing without reference to a specific medium. Content can be video, audio, words, or even interactive gameplay.
What I call content is ... well, content ... produced not because you have something to say but because you're aiming for quantity.
What you call content is just low-effort content. Slop is a more evocative term that probably captures the concept of low effort; unfortunately it has already been poisoned by people declaring anything AI-assisted to be slop regardless of how much effort went into the work.
There are a lot of people who work tirelessly on things that have a massive time and effort commitment for each thing they produce. Yet they identify as content creators. Dismissing their work seems disrespectful to me.
Sturgeon's Law is a warning to not overlook the good because of the preponderance of the bad.
Right, why don't you quote it in full though?
"90% of everything is crap"
I got so annoyed the second time that I even created a post about it. I just get really annoyed when someone accuses me, who writes things by hand, of posting AI slop, because it makes me feel like, at this point, why not just write it with AI? But I guess I just love to type.
I have unironically suggested in one of my HN comments that I should start making the grammatical mistakes I used to make when I had just started using HN, like , this mistake that you see here. But I remember people actually flipping out in the comments over this grammatical mistake so much that it got fixed.
I am this close to intentionally writing sloppily to prove my comments aren't AI slop, but at the same time I don't want to do it, because I really don't want to change how I write just because of something other people say.
AI deprives them of this.
Why even read something with no mistakes? Just scan on to the next comment, you might get a juicy "your/you're" to point out if you don't waste time reading.
I know you're secretly a bot, because you used punctuation. Only AI uses punctuation!
/s
that /s is carrying the whole message haha
But yeah, sometimes I wonder: suppose a bot was accused of being AI. With the right prompt and everything, it could also learn to flip out, and then we genuinely would not be able to trust things.
I guess it can be wild stuff, but currently I just really flip out while staying just below the swearing level to maintain decency (also, I personally don't like to swear), only to conclude that okay, I am a human after all.
But I guess I am going to start pasting this YouTube video when somebody accuses me of being AI.
I am only human after all: https://www.youtube.com/watch?v=L3wKzyIN1yk
It would be super funny and better than flipping out haha xD
"Got no way to prove it so maybe I am lying, but I am only human after all, don't put the blame on me, don't put the blame on me" with some :fire: emoji or something, lmaoo. It would be dope. I am now waiting (anticipating, out of fun) for the next time I comment something written by me (literally a human lmaoo) and someone calls me AI.
The song is a banger too btw so definitely worth a listen as well haha
I would much, much, much rather read an article with imperfect English and mistakes than an LLM-edited article. At least I can get an idea of your thinking style and true meaning. Just as an example - if you were to use a false friend [1], an LLM may not deal with this well and conceal it, whereas if I notice the mistake, I can follow the thought process back to look up what was originally intended.
>We're not against AI tools. We use them constantly. What we're against is the idea that using them well is a strategy. It's a baseline.
The short, staccato sentences seem to be overused by AI. Real people tend to ramble a bit more often.
Not exclusive to AI, but I'd be willing to bet any money that the subheadings were generated.
I would argue that using AI for copywriting is a disadvantage at this point. AI writing is so recognisable that it makes me less inclined to believe that the content would have any novel input or ideas behind it at all, since the same style of writing is most often being used to dress up complete garbage.
Foreign-sounding English is not off-putting, at least to me. It even adds a little intrigue compared to bland corporatese.
> I run a marketing agency. We use Claude, ChatGPT, Ahrefs, Semrush. Same tools as everyone else. Same access to the same APIs.
Since you use it for your job of course you use it for this blog, and that will make people look harder for AI signs.
Why?
I get using a spell checker. I can see the utility in running a quick grammar check. Showing it to a friend and asking for feedback is usually a good idea.
But why would you trust a hallucinogenic plagiarism machine to "clean" your ideas?
I think what you are getting wrong is thinking that the reader cares about your effort. The reader doesn't care about your effort. It doesn't matter if it took you 12 seconds or 5 days to write a piece of content.
The key thing is people reading the entirety of it. If it is AI slop, I just automatically skim to the end and nothing registers in my head. The combination of em dashes and the sentence structure just makes my mind tune it out.
So, your thesis is correct. If you put in the custom visualization and put in the effort, folks will read it. But not because they think you put in the effort. They don't care. But because right now AI produces generic fluff that's overly perfectly correct. That's why I skip most LinkedIn posts as well. Like, I personally don't care if it's AI or not. But mentally, I just automatically discount and skip it. So, your effort basically interrupts that automatic pattern recognition.
I don't know anything about marketing, though the first paragraph of the blog post makes it clear it's from a marketing context.
But as a user, or literally just a bystander, using AI isn't really good, I mean for LinkedIn posts I guess. Isn't the whole point to stand out by not using AI on LinkedIn?
Like, I can see a post ending with,
Written with love & passion by a fellow human. Peace.
And it would be better / different than this.
Listen man, I am from a third world country too and I had real issues with my grammar. Unironically, this was the first advice I got from people on HN, and I was suddenly conscious about it & tried to improve.
Now I get called AI slop for writing the way I write. So to me, it's painful to see that my improvement in this context just gets thrown out the window when someone calls a comment I write here or anywhere else AI slop.
I guess I have used AI, and I have pasted my messages into it to find that it can write like me, but I really don't use that (on HN, I only used it once, for testing purposes, on one Discord user iirc). My point is, I will write things myself, and if people call me AI slop, I can really back up the claim that it's written by a human: just ask me anything about it.
I don't really think that people who use AI themselves are able to say something back if someone critiques something as AI slop.
I was talking to a friend once and we started debating philosophy. He gave me his Medium article; I was impressed, but I noticed the em dashes, so I asked him if it was written by AI. He said the ideas were his but he wrote/condensed it with AI (once again, third world country, and honestly the same response as the original person).
And he was my friend, but I still left thinking, hmm, if you are unable to take the time with your project to write something, that really lessens my capacity to read it. I even told him I would be more interested in reading his prompts, and just started discussing the philosophy itself with him.
And honestly the same point goes for AI-generated code projects, even though I have vibe coded many: I am often unable to read them, or to find the will to, if they're too verbose or not to my liking. Usually in that context they're just prototypes for personal use, but I still end up open sourcing them if someone might be interested, I guess, given it costs me nothing.