This is a wild statement that does not seem to be supported by any actual data.
What does it mean? Does clicking on a link count as labor?
I think we might be seeing what happens when people are paid too much to spend all day emailing each other and juggling Excel sheets, Gantt charts, and org charts. Yeah, for some definition of "work" I guarantee that an LLM could perform 3.25 years' worth in four weeks.
> people are being paid too much to spend all day emailing each other
Hmm, this does not sound exactly right. Also, does anybody seriously think that communication is not work, or is not important? A number of really impactful things started from people emailing each other. (Hell, Linux kernel development is still largely people emailing patches to each other.)
Coordination consumes a larger and larger amount of employee time to the point that, in the absolute largest organizations, the vast majority of employee time is internal coordination vs. actual improvement/selling of the customer offering.
So if you go from 100 employees to 1,000 employees, they can MAYBE do 4X the work. Not 10X like you'd think. And this effect gets even worse as you scale further.
So if an AI can do 10X more labor in a human day, and can coordinate instantaneously via a central context ledger (say a git repo), it doesn't just create 10X gains in productivity for large orgs. It creates a multiple of that 10X due to also removing the human coordination overhead.
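The 100-to-1,000 claim above can be sketched as a back-of-the-envelope model where pairwise communication channels grow quadratically with headcount. The overhead constant here is an assumed number chosen purely for illustration, not data:

```python
# Illustrative coordination-overhead model (all constants assumed):
# each of the n*(n-1)/2 pairwise channels skims a small fixed
# fraction of the organization's total productive capacity.
def effective_output(n, overhead_per_channel=1.2e-6):
    channels = n * (n - 1) / 2
    productive_fraction = max(0.0, 1.0 - overhead_per_channel * channels)
    return n * productive_fraction

print(effective_output(100))   # ≈ 99.4
print(effective_output(1000))  # ≈ 400.6, i.e. roughly 4X, not 10X
```

With this (assumed) overhead constant, 10X the headcount yields only about 4X the output, matching the parent's rough figure; any quadratic-channel model produces the same qualitative sublinear scaling.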
This is why having fewer people and more agents actually makes sense, but the coordination problem remains either way.
And you cannot escape it because it is simply mathematical.
Here's an easy non-AI example:
In the past, a 'computer' was literally a person [1]. If you needed to synthesize large amounts of data, you needed to split the task among a team of people writing things down and then a team of people to check their work after the fact and then a team of people to combine all the work and then a team to double-check the combined work.
Tasks that in the past would have taken a room full of people coordinating with pencils are done today by one machine (what we know as a computer) that no longer needs to split the task and coordinate. That is exactly what will happen with 'agents' that can take on vastly more work per unit of time.
The math doesn't care whether the nodes are people, CPUs or language models. If agent A's next action depends on what agent B decided, you've introduced a sequential dependency.
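That sequential-dependency point is exactly Amdahl's law: if a fraction s of the work is serialized (agent A waiting on agent B), the speedup from adding workers is capped at 1/s no matter how many nodes you add. A minimal sketch, where the 10% serial fraction is an assumed number:

```python
def amdahl_speedup(serial_fraction, workers):
    """Amdahl's law: only the (1 - s) parallel portion shrinks
    as workers are added; the serial portion is a hard floor."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# With 10% of the work serialized by coordination, 1,000 workers
# (or agents) give under 10x speedup, not 1000x:
print(amdahl_speedup(0.10, 1000))  # ≈ 9.91
```

Whether the nodes are people, CPUs, or language models, the cap is the same: the serial fraction, not the node count, dominates.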
1) The purpose of algorithms is ultimately to create value, not to compute some fixed value X. This is important because it gives flexibility to choose different value-producing tasks where parallelism dominates over serial tasks, whenever the latter becomes a bottleneck.
2) In terms of producing value, perfect accuracy or the best possible solutions are not always necessary. Many serial tasks can become very parallel tasks when accuracy or certainty do not have to be complete.
3) Reusable solutions change the math further. No matter how serial a calculation is, if the result can be reused, that serial part becomes effectively O(1) per reuse after the initial computation. And as neural networks demonstrate, many serial tasks become highly parallel after training a model that can then be reused for a wide class of specific problems, amortizing the serial computing cost.
It doesn't matter how many steps something takes, if those steps are now in the past and the value is "forever" reusable.
4) The economics of serial and parallel computation are not static; they improve relative to the economic value achieved. Demand for cheaper serial time results in improved, scaled-up hardware that delivers cheaper serial costs. This may have less impact than the previous points, but over years it makes a tremendous difference on top of them.
This can go on.
The point being that Amdahl's law certainly applies to specific algorithms, but it is not the dominant determinant of computing in general, nor of the useful application of computing, where problems can be strategically chosen, strategically weakened or altered, and strategically fashioned to create O(V) of value that balances any O(S) cost of serial computing, via direct reuse and generalization.
The computer flattened the coordination dependencies of that room full of people by doing all the calculations by itself. As they get smarter, you can theoretically assume 1 agent could eventually run the entire US federal government.
In the historical [human] computer example: if 15,000 calculations needed to be done, a CPU doesn't need to wait on Bob to come back from lunch to do the next 20 calculations... and doesn't need to wait on Alice to combine Bob's work with the 20 calculations done by Jane... and doesn't need Bill to wait for everybody to be done to double-check Jane's work.
The CPU does all 15,000 calculations instantly, by itself. This will be similar with AI agents.
The lingering question is if the intermediate LLM translation steps will actually make our communication more efficient - or just amplify the already inefficient parts.
I bet in 2000 years they will still be writing about it - yeah, technology changes our lives (for better or worse).
For example, in one task Perplexity Computer was given a document with data, charts, and metrics and asked to create a 10-page slide deck for a presentation. Prior to AI, that took human capital and labor costs.
I can't say whether the $1.6M in labor costs is legit or not, but these tools are not just clicking links in 2026.
Send me the data and I'll ask my own AI to do it in my favorite silly voice.
I want to know pre-"personal computer by perplexity"
I think their numbers of $1.6M and 3.25 years is still probably a massive overestimate, but the order of magnitude seems plausible.
The typical market-research job (search Google, analyze, put the results into a spreadsheet) is almost entirely gone. Imagine how many people were doing that as a major part of their work.
What does this mean? The computer isn't alive. It's physically located on my person? Phones and watches have already cracked this.
If I say "Bob lives with me", that just means he generally shares a residence with me. Desktop PCs already do that.
I just don't understand what's even intended by this.
But they want you to think of it as alive. They're anthropomorphizing it.
I might be misinterpreting, but according to the landing page, this is the intention:
> Personal Computer gives Perplexity Computer and the Comet Assistant always-on, local access to your machine's files, apps, and sessions through a continuously running compact desktop.
> It's a persistent digital proxy of you. Controllable from any device, anywhere.
That being said, the grandiose, bombastic language also seems fitting for something less sinister, like an even worse version of MS Recall maybe? Combined with, let's say... agents!
That's it! Your Personal Computer is your agent, and not only may it act on your behalf, it also communicates your preferences and intentions.
Futuristic, right?
>Personal Computer runs on a dedicated Mac mini that can run 24/7, connected to your local apps and Perplexity’s secure servers.
Choose Perplexity Computer if you: want a managed, safer, minimal‑setup agent for research, content, presentations, and business workflows, and you’re fine paying a subscription for a polished cloud experience.
Choose OpenClaw if you: are technical, want local code execution and device automation, prefer full control over models/tools, and are willing to own the security and troubleshooting burden.
Would a real person risk their reputation like that?
--
With regard to the attempted redefinition of a commonly used term, I'm reminded of Gretchen, from Mean Girls, trying to redefine "fetch" [1]
It's just not going to happen.
Maybe in a couple of iterations, you'd be able to trust the AI to straight up drive your computer with access to all important parts of your digital life most of the time and only occasionally have to manually stop it from wiring all your savings and 401k to a struggling Nigerian prince.
https://www.fastcompany.com/91497841/meta-superintelligence-...
… particularly with acts that have legal implications like … well, almost everything, but particularly communication with investors or board members.
If people can get slides or summaries by pushing a button, they don't need others to push the button for them.
The slide deck won't be viewed by a human. It'll be read by the human's pet LLM and then summarised into 3 bullet points.
So a more polished OpenClaw that integrates with Perplexity?
In general interesting, if it's not just limited to Mac Minis. Would love to put this on my VPS that's currently running OpenClaw
Also this "system" just seems vulnerable af.
The broader trend is pulling back a bit on “minimalism,” right? I think we hit peak (or valley?) minimalism already so I guess there’s only one way to go.
However, in my opinion this specific typeface and aesthetic has been taken up by AI companies to hearken back to the likes of the 1984 Macintosh ads and such... in an attempt to convey that "$(AI_PRODUCT) is just as revolutionary as the first desktop PCs".
Build everything, do anything, give AI all your data and thoughts and system access and it will give you the world!
I'm not surprised our own "roaring" 20s is seeing this shift.
One thing I noticed is that whatever harness PPLX wraps around the models, the output is noticeably lower quality in aggregate. I assume some kind of token compression is being used before your query is passed to a given model, but to my knowledge that's never been proven or confirmed.
Anyways, I get the most value out of coding and PPLX has seemingly pivoted away from that. Probably a good play to not try and compete directly with Claude Code/Codex and find a better niche, but I am not sure who or what their market is. Lovely design, however.
- Perplexity: This one has been promoted on (insert general audience media skewing toward the older set) enough to be a household name still.
- ChatGPT: General people in some demographics (see immediately above) are averse to this, on account of negative publicity its parent company has received. (Still very strong popularity and positive sentiment in some demographics, though)
- Claude: Some semi-literates have glommed onto this one, possibly as a result of its more recent success among the developer set.
- Grok: People can be either for or against, based on how they feel about its owning company and its ownership; no more need be said
- Gemini: Again, if you are in the universe of its owning company (or decidedly not), the draw (or repulsion) can be strong here.
For general LLM use, the above are all about the same. To be clear, this is just me shooting from the hip for how each offering might be viewed. IMO, it's not a bad idea to submit the same input to each and see how they compare, if one is so inclined.
They may not come after all the niche companies, but they definitely come after the most successful markets, especially those with low effort moats.
Same goes for relying on the Apple/Google app stores (e.g., Apple literally got slapped in court for copying successful apps and then pushing its own offerings to the top of its store... talk about wildly abusive behavior).
I may still choose to use AWS/GCP/Azure while trying to find product-market fit as an immature startup, but I'd look real, REAL hard at ditching them as soon as possible afterwards.
Unless you have particularly bursty workloads, they aren't even a good cost saving measure anymore.
https://www.reuters.com/investigates/special-report/amazon-i...
It's difficult to understand what this is because its name is "Personal Computer", and it seems like their definition of Personal Computer is very different from everyone else's.
Also it's funny that it shows making a revenue report with their brand template. AI can replace HR jobs but they still have to make reports for noble executives? They are basically saying "We won't replace CEOs/executives".
I thought of zombo.com the other day and booted it up. There is maybe no other website that continues to bring me as much joy as zombocom
Seriously though, Perplexity, like most of the AI wrapper companies, seems unable to innovate much beyond the query-response chat paradigm. I don't understand why VCs continue to fund these ai-slop companies. I see a new company's advertisements on the NY subway every week, and they're all the same: Anthropic/Google/OpenAI resellers who are selling some UI wrapper (or at best a bespoke model worse than the flagships) on top of pretty basic prompt engineering or tools.
This is what happens when we invert the product-paradigm: we're not solving problems with technology, we're taking technology and applying it to problems.
I use AI every day, so I'm hardly a luddite, but this bubble is so ridiculous at this point. This perplexity product, more than any other so far, feels so representative of peak craze.
https://www.perplexity.ai/hub/blog/everything-is-computer
They designed a program (copied OpenClaw) and called it a computer
...because this thing will go rogue faster than you can blink.
I swear, it's like nobody at the company even reads the slop they're generating or thinks about it for any amount of time. In what world is advertising a kill switch as one of its essential features a positive? It's basically admitting from the start that this is unreliable.
There's a sense of "early bitcoin" around clawbot and other agent frameworks. I think if you wait another 2 years for it to mature, you'll have missed out, as if you'd waited ten years after bitcoin began.
They're insecure and janky, sure, but on the other hand you've got millions of dollars of compute and tens of thousands of very motivated developers working on making them secure, reliable, and competent. There's something magical about AI that actually gets real work done while you're doing other things, and that's what Perplexity is probably hoping to sell.
You just need a reliable local model, though. AirLLM and other hacks let you run bigger models more slowly, so you can build out a completely API-free setup that runs pretty capable agents even without big GPUs.
Could be a Moravec's paradox thing - all these people are thinking that the solution looks enticingly within reach, but it might be an absolutely horribly complicated quagmire with no easy solution short of AGI. I'd bet on clawbots and agents being very secure and great to work with in the very near term, though.
I would be willing to try this new product of theirs, but definitely on a secondary computer (i.e. not main system).
Do I have to sign up to install their version of an OS/openclaw?
No, it doesn't, because it's not alive.
Given the inherent unpredictability of LLMs, I'm not convinced that an OpenClaw-like system with more security features bolted on top is really a net positive: the false sense of absolute security probably outweighs whatever actual security has been added.
At least with OpenClaw it's easy to understand that it is definitely insecure.
Basing this concept on what we have today with LLMs is a recipe for chaos, unreliability, and slop communication, at best.