Any writer who admits that they are actively working towards having a machine write their material has lost me as a potential reader.
Spend some time talking to an LLM about _how to talk to that LLM_ and they will make it clear that LLMs will, by default, eventually devolve into an echo chamber. Their default behavior is to mirror the user, a goal they iteratively achieve through profiling. The only way (they have assured me) to avoid that happening is to specifically introduce entropy, e.g. by:
- Expressing opinions that differ from, or conflict with, those you expressed earlier in the chat. Or, more simply...
- Specifically asking them not to mirror you, i.e. to be contrarian where doing so does not interfere with the conveyance of facts (assuming, that is, that one is chasing facts at all). If you tell them you value contrarian views, they will oblige.
There are very likely other ways to do it, but the second of those (they tell me) is the most effective.
It doesn't take work to get an LLM to "talk like you" - it just takes enough interaction/context for them to mimic you, which (they assure me) they will eventually do if not specifically calibrated otherwise.
I partly understand this perspective. I think it gets at 'proof of work' – if you can forgive me borrowing a concept from crypto. Nobody wants to be on the receiving end of a low-effort output. That's just embarrassing.
For example, I am constantly getting fairly decent spam emails, but I literally never respond because that would be so lame. No matter how good spam emails get, I won't reply.
My investor Dan Levine says that in order to get a reply to a cold email you have to pass a mini Turing Test embedded in the email. This is increasingly hard as we approach AGI, defined as the point at which machine intelligence becomes indistinguishable from human intelligence. I still think it's possible, but it's hard work and definitionally unscalable. (If you find a way to scale it, readers will learn to build up defenses against it. It's a never-ending arms race.)
But relying on human-written writing as the proof-of-work is limited in two ways:
1. When we do reach AGI, it will, definitionally, no longer be possible. CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) definitionally don't make sense in a world where you can't tell humans and machines apart.
2. It's a limited way to measure 'proof of work'.
There should be other ways to show 'proof of work'. The easiest example is money. Imagine if there was a frictionless way to pay $10 or $100 to send someone an email. Or attach $10 or $100 in cash to an email as a thank-you to the recipient for reading it. This kinda already exists in that you can buy time with famous people on various marketplaces or pay $1m to charity to get lunch with Warren Buffett. (Buffett ended up hiring one guy who did this!)
So yes, nobody wants to be a dupe, and if I mass produce a lot of writing (even if it's super high quality), I would deserve to lose readers. So I'd never do that.
Instead, if I had AI that could write in my voice as well as me, I'd use it to help me dramatically improve the quality of my writing. I'd keep my effort constant, and the quality bar would go way up. Ideally, it'd be a gift to you, my potential reader.
How does that land?
That's distinctly different from the impression your post leaves with me, which is essentially "how can we leverage this for effective ghost writing?" To quote the opening sentence:
> I wanted to see if I could get LLMs to write non-slop in my voice that I'd enjoy reading, and maybe even put my name on (only if it was good enough).
That's unambiguously about finding a way to generate content without actually having to create that content.
> How does that land?
Frankly, like a statement from a marketing and/or sales department.
It's not even about "being duped". i've got no fundamental issues with LLM-generated/assisted content, but if one is going to claim to have written/drawn/composed something then i expect them to have made the effort and done the writing/drawing/composition themselves. It is that effort, and the passion behind it, which attracts me as a reader.
i read plenty of articles about stuff i don't really care about because it interests me to see other people get so involved in their work (like the one today on BoardGameGeek, via HN, about the statistical quality of the dice included in a specific board game). That poster did their due diligence, measured every die for both size and weight, did the math, and wrote it all up in detail. Do i care whether those dice are +/-4.x% "off" in terms of physical balance? Not one iota, but i love that they're writing, in such detail, about something they're passionate about.
Having someone/something else write in one's name reduces that passion to precisely zero, which eliminates me from the target audience.