We are living in a totally bonkers time.
(If you are a VP at Amazon, yes, I'll consider acquisition offers. I'm also working on an enterprise version of this with additional features.)
Show HN here: https://news.ycombinator.com/item?id=48151287
This is the same thing.
"I used to get my work done on time and leave"
This sounds like you just wanted to get your work done and not foster any work relationships. This is fine, but you will not get promoted this way (as you've seen).
Moving up in a company is 30% work and 70% networking/being likeable/noticed.
I stopped that nonsense years ago. I work for myself now as a consultant. If I work more, I get paid more.
There are other reasons why the bad behavior gets rewarded. If the management is incompetent, they genuinely focus on the optics and not on the actual work. If they are competent, they understand that the people who stay behind unnecessarily or come in over the weekends are more exploitable in the long run. And if the people in management are themselves the kind who stay behind unnecessarily, having a team full of people who do the same rewards them as well.
While it may be true that it's pretty standard, I'm convinced that any organization that relies more on face time and friendships than on actual skill is absolutely toxic.
It's ultimately a combination. A pretty good software developer who is friendly and pleasant is, in most organizations, going to get promoted over the grumbling angry software developer who is brilliant but everyone hates talking to. A lot of this has to do with most work at more senior levels being communication.
Additional story points completed per week, versus token-dollar spent, or some such combo would seem more sane.
But maybe they aren’t really tracking productivity, so tracking tokens is all they have? … I dunno which part of that is dumber.
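To make the "points per token-dollar" idea concrete, here's a back-of-the-envelope sketch. The function name and numbers are purely illustrative, not any real company's formula:

```python
# Hypothetical sketch of the "saner combo" metric suggested above:
# delivered story points per dollar of AI token spend.

def points_per_token_dollar(story_points: float, token_dollars: float) -> float:
    """Story points completed per dollar of token spend."""
    if token_dollars <= 0:
        return float("inf")  # no AI spend at all reads as infinitely efficient
    return story_points / token_dollars
```

Under this metric, two engineers shipping the same ten points while spending $5 vs $500 in tokens score 2.0 and 0.02 respectively; ranking on token burn alone would order them the other way around.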
Edit: I think it may have been from Capers Jones's _Programming Productivity_[1]. Published in 1986, based on research covering the prior 30 years(!) or so. We have known that bad incentives specifically distort the performance of programming teams for a long time.
1 - https://archive.org/details/programmingprodu0000jone/page/n1...
Things that rhyme with this have indeed been happening at the biggest names.
It's their money. They want to do stupid things? So be it.
That's it? I've seen people that are consistently putting out four PRs per day. I don't/can't even code review them. So much of what we do is now just rubber-stamping PRs. We were even told that we shouldn't be writing code by hand anymore.
It's a tractors on farms kind of moment.
The Wright Brothers couldn't cross the Atlantic in their first flier and plenty of subsequent designs crashed and burned (literally). But now air travel is commonplace. Same will happen with AI, we just have to get past these early pains.
Some companies might just have been scammed by the marketing that told them that AI would make all their employees 10,000x more productive and save them billions and when that didn't happen the assumption was that it's because employees weren't using the magical AI as often as they should be.
Other companies, especially those working on their own AI products, might want employees to use AI as much as possible because they hope it will provide them with the training data they'll need to eventually replace most or all of those employees with the AI. Punishing workers who refuse to train their AI replacement might make sense to them because even though it's costly right now they expect the savings down the road to be much much greater.
We have always been living in bonkers times.
Turns out the price I saw in the booking portal isn’t actually what Amazon paid. It’s kinda more like a rack rate listing. But then there’s all kinds of discounting/cash back that happens on the backend based on the amount of travel booked each month.
And the fact that it is an industry-wide meme at this point makes bright red flashing lights and klaxons go off on my mind that a catastrophic reckoning can't be too far. There's not enough money in the world to keep this up for too long.
So if AI screws something up and re-writes it and then screws it up again, needing another re-write, that counted as more positive than if it was done correctly, and simply, the first time.
It’s more like bragging about compiler cycles spent.
I'm confused how anyone could believe it isn't an enhancer, unless they have refused to use any of the technologies.
Negative 2000 Lines of Code
One reason it works out like that for travel funding is that it’s often the ‘use it or lose it’ kind of funding. If you do not use all of the funds allotted, you can’t ask for more and could realistically get less.
Incompetent use of a coding agent, or just general shenanigans, can burn tokens all day but it's not going to get tickets done.
Just looking at the work output - how many story points, tickets, how many new bugs are opened, etc. has not become any less relevant a metric for productivity with AI. If you're a skilled and proper user of AI those numbers would be changing in the right direction, compared to before you had it.
If some guy decides to spend a bunch of money bringing AI tools into the company things might get very uncomfortable for him if they're seeing zero return on that investment. He's sure not going to get recognition and a massive bonus for it. If on the other hand, he can put some numbers in a spreadsheet or powerpoint showing that employees are using AI all the time and profits are up again this quarter, maybe he can take some credit for that or at least keep his boss or the company's shareholders from questioning the wisdom of dumping so much cash into those AI products.
I'm actually a little curious about how long it has been. Bad managers have always prioritized irrelevant metrics, of course, but I have a feeling (backed by no data, just vibes) that management in general crossed a point of no return as soon as "data-driven" became a cross-industry buzzword.
Like, I vaguely remember a time when consumer interactions didn't always come with a request to fill out a survey (with the results getting turned into a number and fed into a dashboard somewhere). And then that changed, and now everything must be turned into a number and that number must go up.
Incentivizing people who are already using AI to use as many tokens as possible does seem a little crazy, though.
Users attest to higher productivity and point to material but intermediate factors like token use, generated lines of code, PR counts, etc., but there doesn't seem to be a convincing revolution in the quantity or quality of mature software being delivered.
Combine those puzzling impressions of outcomes with a sense, for many, that they don't have a personal problem that warrants a new tool, and you end up with a pretty earnest and defensible indifference.
To get holdout engineers using AI, the industry needs to be focused on demonstrating relatable workflow improvements and practical improvements to finished work product. Instead, policies like token use incentives just rely on luring them into pulling the slot machine handle with the expectation that once they do, they'll join the cadre of other converts who justify their transition with subjective improvements and intermediate metrics.
So now introduce AI and then tell every developer that they need to be 20% more effective. 20% of what?
Among skeptics, I've only seen people won over by using it themselves, because when they use AI for their own work, they invest the time to review the code, understand it, and assess its quality by their own standards. That's how people learn to trust AI coding assistance.
Others will use AI, and it will make your life miserable. You need to know enough about AI to be able to fight back.
The experience: one employee, self-selected, assigned themselves the task of configuring integration with a MySQL HA deployment. They produced a mountain of code in a short month (we are talking close to a hundred thousand lines of Python code). And they decided to go with Oracle's tools, instead of Galera...
Everything this employee produces is, quite obviously, AI-generated. Also, in the initial stages, they worked on their project completely alone: no reviews. To give some sense of the size of this insanity: one of the configuration scripts I'm working with now is 9K+ LoC of Python that's supposed to run from `mysqlsh`. About half of it is module-level variables.
It will take many months to restructure this "prototype" by hand. It's a pain to read and to navigate. The GitLab UI has a perceivable lag just trying to display the script, forget about diffs. I will absolutely need AI to try to make sense of it (I'm not allowed to fix it). And if it ever comes to fixing it, I can't imagine that being done without automation of some sort.
Unfortunately, AI generates problems that, sometimes, only AI can fix. :(
I have. My conclusion is... humans are deeply irrational when it comes to rapid change.
Egg or olive oil prices spike, humans oust an entire government.
The rate of immigration spikes, humans throw them into camps and break useful treaties.
Most of the resistance I've observed amongst engineers is resistance to change generally.
And then digging in when challenged.
Nah, software engineers were always butterflies fluttering from one language or framework to the Next Hot Thing. Change was part of the job, if you didn't keep up you fell behind and atrophied.
Resistance to AI is, I think, more because it is seen as an existential threat, or because it's something whose ultimate long-term outcome is still undefined. It's going to be either a benefit or a hazard, and we don't yet know whether we'll need Bladerunners to rein it in.
Most engineers I've known are enthusiastic when given the opportunity to play around with a new toy. What they don't like is anything being forced on them. There's nothing irrational about that. They've often invested a lot of time into optimizing their workflows.
I've also found that if something actually makes their work easier, you will never have to twist their arm to make them use it. They'll apply it everywhere it helps. They'll even try using it in places and in ways it was never intended for. If they're digging in, you likely haven't made a very compelling case for your changes.
Not just coding, but things like "here is my team's mandate, go through all my company's Slack channels, Linear tasks, Notion pages, and recent merges in git, and summarize any work other teams are doing that intersects with my team's work."
That'll burn a lot of tokens.
Set that up to run once or twice a week and give a report.
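A minimal sketch of what that scheduled report might look like. `run_agent` and the source names are assumptions standing in for whatever agent API and tools a team actually has:

```python
# Hypothetical weekly cross-team report job. Only build_report_prompt is
# concrete; the agent call itself is injected, standing in for any real API.

SOURCES = ["Slack channels", "Linear tasks", "Notion pages", "recent git merges"]

def build_report_prompt(team_mandate: str, sources=SOURCES) -> str:
    """Assemble the standing prompt the scheduled job sends to the agent."""
    return (
        f"Here is my team's mandate: {team_mandate}\n"
        f"Go through {', '.join(sources)} and summarize any work other "
        f"teams are doing that intersects with my team's work."
    )

def run_weekly_report(team_mandate: str, run_agent) -> str:
    """run_agent is any callable that takes a prompt and returns text."""
    return run_agent(build_report_prompt(team_mandate))
```

Schedule it with cron (e.g. `0 9 * * MON`) and post the result to a channel; a single run over all those sources will indeed chew through a lot of tokens.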
Two years ago everyone would have told you that 'impact' was the way to measure people, and been aghast at tracking inputs like hours. Say what you will, but at least showing up at 8 didn't cost the company money. Today I see people spending time and money vibe coding tools in search of a problem, just to spend tokens and demonstrate that they're on board with the singularity.
I had a manager like this once. He didn't last very long, but it was without a doubt the most fun six months of my career.
Managers love metrics. Bad managers particularly love metrics. Tokens used was almost the obvious bad metric that was going to be used.
I would argue that tokens used has actually exposed a useful metric: any manager who focused on this, demanded this or ranked based on this should be fired, for being a bad manager.
[1]: https://evan-soohoo.medium.com/did-elon-musk-really-fire-peo...
Good manager (to good engineer): "can you please churn some code to update your LoC metric so I don't have to give you a worse rating?"
I'm sorry but any manager who just claims they're a passive victim of company-wide mandates is a lazy and bad manager.
No, I'm not talking about the engineer who can point to significant contributions outside of code: writing technical specs, leading architecture discussions, etc. I'm talking about the ones who say they're just coding, but are actually not working at all.
TL;DR LoC and commit count etc can be used only to flag for review likely cases of quiet quitting.
I worked for an international (mothership in the UK, later acquired by the US) company, which had... sort of a similar policy.
So, the (mothership) company acquired a lot of satellite companies, all in the banking business. All over the world. Then they figured out their CEO was corrupt; he got in trouble with the law and got kicked out. While they were waiting for the new "real" CEO to step in, they let an "interim" CEO take his place.
The new (interim) CEO didn't seem to have a clue about the business she was supposed to run, nor did she care. She knew her time was running out, and she figured she'd spend it traveling the world and partaking in fine dining in every corner of the world the company's tentacles could reach. But, to make it seem more plausible, she created a sort of "experience exchange" policy, which sent random troupes of select individuals from different branches of the company to "exchange experience" with another similarly randomly assembled troupe. Of course, the company picked up the bill for lodging and dining.
Our inconsequential branch in Israel saw a pilgrimage of high-ranking banking managers from all over the world, but, mostly the wealthier parts of it. Some didn't even bother to show up in the office though, and proceeded straight to the banquet hall of the most expensive hotel on the Tel Aviv beach.
To be fair though, the interim CEO got the boot even before her time was supposed to end, but it was serendipitously close to the acquisition by the US company, and so she was let go as part of a "restructuring" and "optimization"... but it was a crazy year!
Note that this has beaten capitalism: making rational choices to increase earnings has lost to the AI dream.
Also, don't forget that their datacenters will burn our electricity and boil our rivers at rates much cheaper than what we are billed in our homes. So while you're happy generating mountains of AI slop, somewhere there is a datacenter boiling a river.
I'd compare this to a new patented formula of water that nobody asked for, where the patent owners are trying to replace the whole water supply with their crap before we wake up.
>For example, IBFAN claims that Nestlé distributes free formula samples to hospitals and maternity wards; after leaving the hospital, the formula is no longer free, but because the supplementation has interfered with lactation, the family must continue to buy the formula.
That's basically how it seems to be with AI. Just replace "spent X fighting terrorism" with "spent X implementing AI workflows" or "invested X in AI" or whatever. Nobody actually knows or cares just how far the dollars are going.
At one point seemingly out of nowhere he pointed out on his screen share "Look at how many tokens I've used this month. I run so much Opus." It was a number that was offensively large.
I remember thinking "That's a really odd flex, this crap is so expensive the fact that you use so much should be a red flag"
He demonstrated a number of Claude Code use cases he had to manage and tweak AWS infrastructure that made me, the old greybeard sysadmin older than the internet think "You've used AI to do something that was a single command."
So this story makes sense. They were being encouraged to just blast away at it six plus months ago.
But if you hit "tab" it'll claim that as an AI-edited line, LOL.
(A lot of the rest of it is stuff I could already have been doing just as fast if I'd ever bothered to learn to use multiple cursors, learned vim navigation, or set up some macros—I never did because my getting-code-on-the-screen speed without those has never been slow enough to hold anything up, in practice)
Probably there is no dichotomy going on and it depends on multiple factors, but it seems so weird to see reports that are so different between each other.
If you are making extremely specific, high quality products over a long time window and your founders are deeply experienced in that field of engineering, then no, you don't need agentic engineering and probably want very little llm code in general (outside of some boilerplate, internal toolings, etc).
This is work related. So you can't expect everyone to have the same input demands or output expectations.
> Probably there is no dichotomy
It's literally staring you in the face.
As time passes and the layers of abstraction pile up, later generations won't understand the underlying layers of the abstraction. This is a huge weakness in our systems development -- and a huge potential attack surface for adversaries.
Yes, and that’s a good thing! This is in fact where a lot of AI value lies. You don’t need to know that command anymore - knowing the functional contract is now sufficient to perform the requisite work duties. This is huge!
Of course I lose about as much time as I save to its fuck-ups, so I'd still have been better off learning to actually use a text editor properly. Though (as I mentioned in a another post) part of why I've never done that in 25ish years of writing code for pay is that my code-writing speed has never been too slow for any of the businesses I've worked in, i.e. other things move slowly enough it never mattered.
I find it hard to read "You can do things without knowing things" as a positive improvement in work, society, life, anywhere
It's hard to tell anymore because I have encountered people who genuinely do seem to think that disliking AI is gatekeeping somehow
I use the shit out of opencode to do things as a force multiplier, not as a way to keep me from knowing what its doing.
The point at which we're optimizing for "we don't need to know that anymore" is the point at which everything blows up, because agentic work is not fully deterministic, models hallucinate even simple things.
Blindly relying on your agent weapon of choice to just do the right thing because you didn't take the time to understand how the lego fits together is an actual problem.
I have a pretty good sense of the quality of work my coworkers output, where they tend to struggle, where they're talented, what level of review is required, what I should double check, etc.
By contrast LLMs are more like picking a contractor out of a hat. Even with good guardrails the quality and types of issues vary wildly prompt to prompt.
Quite frankly it was embarrassing. We've had tools for static analysis for ages. Use them.
Someone with better knowledge could work 100x faster using 100x fewer resources. They did it the slow, expensive way but at least didn't have to think? Odd flex.
If AI breaks production this way, you just tell AI to fix it! And look, now you've consumed tokens twice. Think on that and I'll see you at the end-of-year performance review.
This reminds me of the story of how the USSR nearly made whales extinct to meet a quota for whale meat that nobody wanted to eat.
How are we sliding face first into “snowpiercer but dumber”?
The problem with not burning tokens is that when you don't meet the performance KPIs, you get labelled a luddite and off you go, even before the job gets taken over by AI.
I do agree with the sentiment, that and war mongers destroying the planet.
I see it a lot and assumed it was concern trolling from plastic manufacturers or libertarians funded by them but you seem genuine.
Have you just fallen for that concern trolling? Grown so cynical that nothing matters anymore? I don't understand the intention if you have a genuine desire to improve society.
What would we be doing differently in a world where we were still using plastic straws? Would that have freed up enough mental energy for a revolution? Would people be blowing up private jets while sipping their diet coke?
I don't mind paper straws (though I don't really use straws in general) but it is frustrating trying really hard to be sustainable and then hearing that Amazon is encouraging people to use as much computing power as possible.
One of the real problems of greenwashing is that it's trying to sell an idea that with just a tiny, almost unnoticeable change to lifestyle, you can keep doing what you're doing and still have the peace of mind that you're not doing anything bad for the environment. Plastic recycling falls into this category--oh, just recycle this thing instead of throwing it away, that means there's no more guilt to be had over the environmental costs of plastic production (meanwhile ignoring the fact that plastic recycling is largely nonviable and so all of that goes straight to the waste stream anyways.)
The hope is that in the alternative world, instead of praising companies for taking what are ultimately only token steps towards environmental stewardship, we'd actually castigate them harder and get them to take real steps to improving the environmental aftereffects of their activities.
Nah, one gets a cocktail with paper straw and feels like they are doing their part saving the planet.
Plastic straws aren't hard to find, by the way.
Reactionaries are often arguing against good things, which makes it difficult for them to directly attack them.
So they develop consistent techniques to attack them from oblique angles:
> Hirschman describes the reactionary theses thus:
> According to the Perversity Thesis, any purposive action to improve some feature of the political, social, or economic status quo only serves, perversely, to exacerbate the very condition one wishes to remedy (compare: Unintended consequences).[4]
> The Futility Thesis holds that attempts at social transformation will be unavailing, that they will fail to "make a dent" in the problem, and the motives of those who keep attempting futile reforms are suspect.[5][6]
> The Jeopardy Thesis states that the risk of the proposed change is too great as it imperils some previous, precious accomplishment.[6][7]
> He characterizes these theses as "rhetorics of intransigence" that do not further constructive debate.[8] Moreover, he says they turn optimism about social advancement into pessimism.[9]
The futility (and perversity) ones are what I think of when people are angry about straws on the internet.
I just don't understand how complaining endlessly about straws leads to solving any of the bigger problems. Each of which could be dismissed in the same way.
The USSR barely accounted for 15% of the world's catch (with Japan as the leader).
> that nobody wanted to eat
unsubstantiated.
Agreed for the USSR, but I think the person you replied to is misremembering the country; I believe they are thinking of Japan. I heard it recently on Stuff You Should Know, which usually does a good job of researching its stories, and it sounds like the claim is substantiated and literally true, if a bit more complex than presented.
https://podcasts.apple.com/us/podcast/save-the-whales/id2789... https://theworld.org/dispatch/news/regions/asia-pacific/japa... https://theworld.org/stories/2019/04/16/whaling-japan-2 https://japantoday.com/category/national/75-of-meat-from-jap... https://www.traffic.org/site/assets/files/3994/whale_meat_tr...
Luckily I work in app management and I know they can only see the last date used so if I just put in one query per day I'm good.
But I'm so sick and tired of this AI hype :(
Now, they might be; they've certainly used silly metrics in the past (LoC, commit count, etc.) without ever fully acknowledging it. But I don't believe that it's as simple as more tokens = more better.
We have token tracking dashboards that leadership is looking at. I know because they show us in these manager meetings. Haven't opened them to everyone yet as some kind of leaderboard, so at least that's nice.
Lots of rumors token spend will be involved in perf reviews. Leadership denies it... but then holds more meetings telling us how important it is to increase our token spend and discussing inadequacies from the token spend dashboards.
I wish I was kidding, but they really are pushing increased token usage. Like I said, we push back. When we push back they acknowledge it's a bad metric and lately have started to add qualifiers about how we don't want to burn tokens unnecessarily and in fact we should be looking to use tokens more efficiently.
And then in the next meeting we are once again talking about how to encourage our teams to use more tokens.
The goal is to increase AI usage of course, but the only metric they track to measure progress on that goal is token usage. Also endless presentations of vibed tools that we never hear about again after a week. Get a lot of those too.
People in FAANG likely worked hard to get in there or lucked out or some combination of both. I feel like my soul would be crushed if I hacked away at Leetcode for months on end just to babysit and gaslight some algorithm into asymptotically following my instructions.
Overall I would say most are exploring the new tools while waiting for the madness to subside. Work in $BIGCORP for long enough and you get used to leadership being out of touch with the work on the ground.
Engineers in $BIGCORP jobs are by and large not the hacker types anymore btw.
The problem explodes at any company that puts up a token use leaderboard or hints that they might do layoffs for engineers that refuse to use AI tools. This triggers a race to use as many tokens as possible to stay ahead.
Anecdotally, the problem is worst among devs who read a lot of social media. Twitter, Threads, Mastodon, LinkedIn, and others are filled with recycled viral stories about companies going AI-native and firing people who don't use enough AI. Anxieties are high right now so nervous developers see this and think they must burn tokens faster than their peers to avoid an inevitable culling.
I'm kidding, of course... but human stupidity is infinite, so...
When it comes to dollars, it's hard to know what "value" even means.
Stuff that could be easily done as shell scripts gets asked how could we make an agent out of it.
Not sure if this is still the case, I rarely use PowerPoint.
Apparently the github one is more useful for its target audience.
Big companies have thousands of leaders. Many good, many bad.
[1] https://locusmag.com/feature/cory-doctorow-full-employment/
This isn't like that, as it isn't funded through taxes. This is private companies experimenting with their money, and risking downstream cost increases that may cause people to go elsewhere, as they do when they try anything new.
This is much better than just funding people regardless of productivity through forced taxes.
[0] https://nintil.com/the-soviet-union-achieving-full-employmen...
Either way, I don't know what this has to do with Amazon getting workers to use AI more, which is what my comment was addressing.
This is simply not true, especially when you consider the massive amounts of government support so many parts of this "experiment with their own money" are getting. As a Utah resident it's extremely evident in how forcefully they're pushing through what will be one of the largest datacenters in the world despite near-universal disapproval from the citizens.
I don't think USSR poverty rates surpassed those of the Tsarist Russia that preceded it. To their credit, I think ideological competition between the capitalist and communist blocs was part of what allowed the improvement of workers' living conditions in capitalist countries after WWII. Fear of revolution kept the one-percenters from taking all the productivity gains in that period. They had to share some to keep the guillotines away. As soon as things went south in the USSR, from the 70s onwards, and capitalism took over the whole world, lacking any viable extant competition, we reverted to the old norm: workers have been denied their share of the productivity gains since then, and here we are now. A regime premised on free competition was undermined by a lack of competition to itself.
They're using tokens for pointless stuff right now in order to figure out use cases where it helps. You can't do that without also learning where it doesn't help.
My company is doing the same thing.
There's definitely some pressure from managers when they hear about N00% productivity boosts in internal presentations, but where I am at they would figure out if you were making up tasks rather than working pretty quickly and the pressure comes from aggressive deadlines and a shift from the yearly OP1 process to a more agile one.
One person I've talked to has someone in their org who is running GasTown and chews through tokens 24/7. They don't contribute very much, but they're comfortably in the #1 spot.
But the thing is, the problem is the person, not the technology. He was already like this before LLMs. He would "refactor" repos into smaller repos, and all of a sudden all of the code has his name. If you just skim, it looks like he built a huge chunk of the codebase in the company. He also has a history of saying no to stuff I want to do, then doing it himself. He nitpicks my PRs to no end (or straight up says he doesn't think I should do that thing) and then turns around and implements it himself. He doesn't copy-paste my code, but he does re-implement the same ideas that he just said no to after my PR was open. Very smart guy, very dishonest. But he's good at being dishonest. If you ask him about it he says "oh, I just thought that this way would be more organized" or something like that. From the outside you could make the argument that one way is better than the other (for reasons I would claim are irrelevant), so it's not obvious that he's being dishonest. But since I see 100% of what he does, it's entirely clear to me that this is a pattern.
EDIT: just remembered another one. One time I asked him for a specific week of holidays. He didn't say "no", but he did mention that we're under a lot of pressure to deliver The Thing, and asked if I would delay my holidays. I said "No, I'm not going to delay them", so he approved it. Then when the time came around, he took holidays the same week. On this one I didn't challenge him; I already know him well enough to know the truth, which is that he's not ashamed to ask of others things that he would never accept himself.
I asked Alexa (on the Amazon web page) about it and it couldn't tell me which carrier had the items or why they were delayed, directed me to a non-existent phone number, and then denied it had done so. The customer service bot I was eventually redirected to was even worse, and started telling me that the items would be delivered both tomorrow and by May 27 in the same message. Finally I got human intervention, who said the items would arrive tomorrow and that the delivery status had been updated, but the order page still says they're arriving at the end of next week.
Of course at some point the 'benefit' is outweighed by the 'negatives', e.g. people making up work. Tokens used is about as useful a measure of productivity as 'hours in office'.
EDIT: My use-case still have relatively low token usage though lol
Add a pre-commit hook to re-create the diagrams on every commit (in case anything changed, of course), that way you can really burn tokens and look good to management.
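In the same tongue-in-cheek spirit, that hook is only a few lines. `make-diagrams` is a made-up CLI standing in for whatever diagram generator you'd actually call:

```python
#!/usr/bin/env python3
# Satirical .git/hooks/pre-commit sketch: regenerate the diagrams on every
# commit, needed or not. "make-diagrams" is hypothetical; swap in your tool.
import subprocess

def hook_commands(diagram_dir="docs/diagrams"):
    """The commands the hook runs, in order."""
    return [
        ["make-diagrams", "--out", diagram_dir],  # burn the tokens here
        ["git", "add", diagram_dir],              # stage the "fresh" output
    ]

def run_hook():
    """Entry point; return nonzero to abort the commit."""
    for cmd in hook_commands():
        if subprocess.call(cmd) != 0:
            return 1
    return 0
```

Drop it in `.git/hooks/pre-commit`, make it executable, and every commit looks great on the token dashboard.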
Have heard very similar stories to what the article describes. There were also outright revolts from tech folks being forced to use Amazon’s own shit self-built AI vs Claude Code and other top-tier products.
Given Amazon’s early start with Echo and Alexa they should have absolutely dominated this AI revolution but have been scrambling in a panic ever since ChatGPT showed up on scene and always seem two steps behind the market.
It all paints a picture inside Amazon of clueless leaders at the top and mobs of others below them just gaming the system so a silly dashboard looks green. “Day 2” has arrived.
that and make sure the tools are actually up, treating amazon internal users as real customers.
it's hard to stay excited about the tools when they can be down for a week because kiro launched.
Slack will start serving porn next.
Like I tell my kids: If every experiment you do succeeds, you aren't trying hard enough.
2. this may be ok. A good way to learn a piece of software or tool or process is to play with it. We learn lots of general knowledge through play and experimentation. Heck, we get better at musical instruments by playing them.
Mandates are kind of dumb in many ways. But they will force the issue of discovering whether anything useful can come from AI other than coding.
Every time I see "not... but..." I suspect an AI article. Not sure if this is the case here.
I think the company realizes this and is actively trying to avoid this, since for the new tools there isn't a leaderboard.
I've chosen the wrong profession.
Choosing to wait for the PIP instead, if $EMPLOYER goes this way. Tell me the work I'm not doing and how pieces of ~~flair~~, sorry, tokens might help. Or don't, I don't care.
For companies doing this there is no 'justify the expenditure'. Employees are being praised for high expenditure, regardless of actual outcome.
Leadership see the problem as 'people resisting AI'. Embracing AI is seen as the solution, and token usage is seen as the measure of success.
I'm also hesitant to 'go for the gold' because it only means more B2B monopoly money, juiced stats, or expectation. Or, God forbid, become the resident Token Expert. That praise you mention is exactly what I don't want!
Anthropic sends .gguf and a claude-serve binary?
No, they don’t.
If you can't change your company, change your company!
If I own part of a company, and I spend money on their goods, and as a result their revenues climb and consequently my valuation does too - then my firm value will be higher.
This would also explain the gung-ho approach. Some pretty devious financial engineering akin to arbitrage
This is analogous to measuring productivity by LoC output.
True, but it looks like productivity to people whose own productivity is measured by how busy their subordinates appear to be.
I wonder when we'll see our first "My startup went bankrupt on AI use" post. Amazon is being dumb but at least they can afford it.
Burn resources at all costs to appear productive and use proxy metrics to measure success.
Fire productive employees to ensure we have resources to fund the proxy metrics.
AI slop fool’s gold is the product.
But, I can't figure out how to put my job back into command mode :-(
This is an early symptom of the future devaluation of the skill of developing software. The value is going down because there is too little software development work for the number of people who can currently do it.
there's so much work available that teams try to avoid taking stuff on as much as possible.
the bottleneck to building more is almost certainly the cross-team coordination
likely the best place to add agents, too. an llm TPM would be a super handy tool to scale amazon productivity, rather than coding agents.
I believe there has to be some downward pressure on these executives to take these decisions but I would like to know where it's coming from exactly and what's the logic behind them. Is it some big institution like Blackrock which has leverage on many of these companies? That's always been my bet but I never knew for sure.
Tokens is just yet another proxy for business value.
The problem they face is that if everybody is judged by business value in dollars, crappy managers are the first to go.
The original (third reich): "Wheels must roll for victory!"
It will end in the same manner.