--Mark Gurman, Bloomberg https://x.com/tbpn/status/2016911797656367199
Probably smart time to rent and not buy if they plan on buying in a downturn.
I don't think that's part of their decision making. Liquid Glass moved most things around for seemingly little more than novelty, and that's not the first time.
They have done this before, release something large early in anticipation of a major shift and iron out issues before the shift happens. Liquid Glass started off a little janky but they appear to have been ironing out initial issues with each update.
That doesn't change the fact that I can hardly read some of the user interface in Apple Music for example.
It's not that the idea is bad, but it's badly executed.
Nobody asked for a phone with fake buttons and a fragile wrap around screen.
Nobody asked for the UI to drastically change at random.
I wish smartphone companies would treat their products like they were completed devices with no innovation required. They are fully mature.
Instead, work on making them actually improved in ways that matter rather than trying to find “the next big thing.”
Be more like Toyota and less like Tesla.
Liquid Glass was Apple’s logo change moment
The best is ChatGPT voice mode. It understands non-English words and accents amazingly well, and even though the model isn't the full-fledged one, I can have deep conversations with it for an hour without it missing a beat.
Does any voice assistant do this right now? Genuine question, I don't actually know. It sounds useful as long as it's not invasive.
Alexa+ does, but I don't use it for anything except kitchen timers and home automation triggers, so I can't speak to how well it works in a longer conversation.
Zoom's meeting notes feature excels at this; Google Meet is terrible at it. Meet mishears our company name about 90% of the time; various attendee names are a coin toss.
* "this" being: context consideration in speech-to-text/transcription.
And though I have the feature enabled that should cause it to ask ChatGPT about things it can't answer, that works even less frequently.
But even if all of these things were true, the stuff on your phone you would expect to be exposed to the model as available tool calls isn't. So its efficacy is very limited.
(edit: iPhone 16 Pro Max, if anyone is curious)
There seems to be something about how intents get triggered by Shortcuts on iOS that feels flaky to me. Whenever some app suggests a shortcut (most recently Starbucks promoted a shortcut that orders your "usual"), the success rate when I tap it is <50%.
It's possible it's uniquely worse on my device, since I haven't done a "clean install" (vs. letting the device-upgrade flow copy over) in like a decade. But I'm not up for dealing with the pain of setting up from scratch just to find out it's bad on a fresh profile, either.
Things that Sam Altman would prefer people not say lol
Just looked it up in my order history: I went from an "Echo Show 5 (1st Gen, 2019 release)" to an "Amazon Echo Show 8 (newest model)".
Whether I should have needed to upgrade is a separate question, but, yeah.
My preference, however, is for a voice-control UX just like the one I've had with my Amazon Echo and "classic" Alexa for the 10 years I've been using it. I'd describe it as a "voice-driven command line," just like your OS's CLI shell, which makes its interactions predictable, even if it means I need to "know" what commands are valid in a given context. We all need predictability and reliability when it comes to home-automation integrations.
...but computer interaction with an LLM / transformer-driven "AI agent" is anything but predictable. When Amazon opted everyone into Alexa+, I agreed to give it a go and see if it really made things better or not, and it did not. I opted out of Alexa+ and went back to something actually reliable.
Seems like an agent given 20-30 tool calls like "read_sms", "matter_command", and "send_email" would be able to work out what to do for things like "set the house to 72° and text Laura that I did it."
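For what it's worth, a minimal sketch of that dispatch pattern; the tool names, the canned call_llm stub, and the plan format are all made up for illustration, not any particular vendor's API:

    # Hypothetical sketch of a tool-dispatching voice agent.
    import json

    def matter_command(device: str, value: str) -> str:
        return f"set {device} to {value}"      # would talk to a Matter bridge

    def send_sms(to: str, body: str) -> str:
        return f"texted {to}: {body}"          # would hand off to a messaging service

    TOOLS = {"matter_command": matter_command, "send_sms": send_sms}

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call; returns a canned plan for the demo.
        return json.dumps({"tool": "matter_command",
                           "args": {"device": "thermostat", "value": "72F"}})

    def handle_voice_message(text: str) -> str:
        # Ask the model to pick one tool and its arguments, then dispatch.
        plan = json.loads(call_llm(f"Pick a tool from {list(TOOLS)} for: {text!r}"))
        return TOOLS[plan["tool"]](**plan["args"])

    print(handle_voice_message("set the house to 72 and text Laura that I did it"))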
Incidentally, a major headline in the news this past week was about a coding agent that wiped its company's entire system, including backups, which the company's staffers were confident was utterly impossible (it didn't have any access to that system), and yet somehow it did[1]. (The TL;DR is that the agent randomly came across an unprotected, God-tier admin API key/token saved to a personal text file in a filesystem it had read access to.) If an agent can do that with only read-only access to a company's routine/everyday storage area, then there's no way I'm giving it the ability to deactivate my house's fire alarms and security cameras via Google Home/Matter/Thread/HomeKit/X10/OhFfsNotAnotherCloudBasedAutomationScheme.
[1] https://www.theregister.com/2026/04/27/cursoropus_agent_snuf...
The HN thread about that case was much more "why are you putting your prod keys in random text files," along with the observation that the state of the art in prompt engineering, putting DON'T FUCKING DO THE BAD THING in the prompt, just makes the agent more desperate to get stuff done.
Putting limits at the harness level would do just fine: one LLM call, one tool call per voice message. You don't have to give it a ton of turns.
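A sketch of what that harness cap could look like, assuming the llm callable returns a dict of requested steps (all names here are hypothetical):

    # Illustrative harness-level limit: regardless of how many steps the
    # model asks for, execute at most one tool call per voice message.
    def run_turn(utterance, llm, tools, max_tool_calls=1):
        plan = llm(utterance)                 # exactly one LLM call
        done = 0
        for step in plan.get("steps", []):    # the model may request several...
            if done >= max_tool_calls:        # ...but the harness stops here
                break
            tools[step["tool"]](**step["args"])
            done += 1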
I'm on the iPhone 16 now.
You could have tried Alexa+ at the start, when it was shitty compared to plain Alexa, and maybe it's better now. But equally, none of the people who comment that it is "amazing" in its current iteration qualify their statements with experience comparing and contrasting the old version with the new one. That makes them seem either unqualified to say how much "better" it is than the old version, or at worst shills (paid or not). The most charitable read is that they are comparing, e.g., day-one Alexa+ against the current Alexa+ without any comparison to the original Alexa.
... which is to say that it really feels like there are no clear conclusions that could be drawn from all of this.
Also, one of my first interactions with this Alexa+ thing was “how long is it until 8:45am”, one of only a few commands I use it for to work out how much sleep I’m getting, and it proceeded to ask me what the current time was… I immediately turned it off after that
Aren't hallucinations part of GenAI? I would assume that "AI" voice recognition doesn't have that baked in, but I'm not working in either of those spaces so maybe I'm missing the details. So many things are being looped into the "AI" umbrella that would have just been called machine learning or pattern recognition a decade ago (e.g. "facial recognition" vs "AI" at a time when "AI" also means chatbots like ChatGPT).
I've had enough bad experiences with products that never got better, or just got worse (Exhibit A: Windows 11). Like most primates, I am capable of learning, and I've learned that once a consumer product/service goes bad there's little hope of a turn-around. I accept that you're telling me that it's gotten better, but of the people I know IRL who also use an Echo, none of them have told me that Alexa+ is worth trying, let alone committing to.
Yes, it's on me for not giving Alexa+ a second chance, but I'm not willing to give Alexa+ a second chance because, as a technology product/service customer, I just don't feel respected by the industry I work for (...lol); if Amazon, Microsoft, Google, et al won't respect me, why should I venture outside my comfort-zone for... what benefit, exactly?
I'm not telling you this. I'm basically saying that with Alexa/Alexa+ and with Google's Gemini vs. Google Now(?), I've seen many posts like this, where someone complains about the AI version, but then other posts come in and claim how much better it is. Even for things like Claude Code you get people complaining about how many mistakes it makes, and then people coming in and saying it's because they are "doing it wrong." Either "Claude has improved by 10x in the last 6 months. It's so amazing! If you used it a year or so ago it doesn't even compare!" or "You aren't using the most expensive tier of Claude, which increases context and thinking abilities that are hobbled in the cheaper versions!"
I never really see a comparison on the same level and it sounds like people talking past each other or some people having legitimate complaints and then others coming in to shill for a product.
I'm not in any way implying that "You should totally try this out now that they fixed everything" or anything of the sort. I even stated that I don't use any of these tools, and I was commenting as something more akin to an "outsider."
On Windows 10, the Photos app package is about ~140MB on my computer. A good chunk of that is because the package includes a lot of dependencies - including platform deps that I'd expect would be part of the UWP runtime in the OS - kinda like how since the introduction of Swift/UIKit/etc in iOS the IPA packages all bundle their platform dependencies, even though they're demonstrably redundant, because UIKit isn't an OS-provided framework anymore... I'm not up-to-date in the iOS dev scene so I'm unsure why Apple went with that approach.
The new Alexa powered by an LLM is objectively better than the previous Alexa in a few ways. This much was apparent from day one, and it has only gotten smoother.
1. It can reliably execute direct or vague-ish commands: "play X movie in app Y" or "play X show", and it can infer that X movie is only available in app Z, so it should use that.
2. Speech recognition seems better (fewer instances of 5x round trips)
3. Conversational with multi-turn: my wife can have a back-and-forth clarifying a topic.
4. Seems to understand intent a bit better. (user asked A so they are probably thinking about B)
Those may seem small but they were a tremendous source of annoyance for her -- and thus for me -- "Alexa is not listening, do something!"
...how does that work, exactly? (or rather: what's the context here?); there's no possible way for an Alexa+-powered Amazon Echo to control my AppleTV or interface with VLC on my desktop.
I ruined multiple dinners with timers that didn't work (with a time/labor cost).
I had to get out of bed in the freezing cold to turn the lights out. It's easy to hit the lights when I go to bed, but it's annoying having the tool fail and getting back out.
Music stuff didn't work well because I used YouTube Music, not Spotify.
Those were my 3 use cases for Google voice, and it failed them all enough that I just stopped using it altogether. Who cares if it works today if in another month they just change something and break it again? They've shown it's not a tool to use for tool things; it's a "gee wow" thing. I don't need to be impressed. I need my food not burnt.
I do like Gemini better than Assistant, even though it's not quite there yet. But that's just a matter of time, because they actually designed it from the ground up to be a drop-in replacement for Assistant.
But for one-on-one, it is a really outstanding experience. Especially since they tamped down the way-over-the-top humanisms.
The first problem is that it's just slow. If I want it to turn off some light, it takes a long time before responding.
But yeah, the failure to do basic tasks. I have a routine that I used to have it run (controls several devices at once). Now:
10-20% of the time it runs it.
60% of the time it says it's running it but it doesn't do anything.
20-30% of the time it says it can't do it unless I opt in to invasive permissions. And when I opted into them, it still failed about a third of the time. So I opted out again.
Man, I hate touch screens. And I hate Android Auto. My previous car had an aftermarket Bluetooth system (radio, etc). It was way, way better than Android Auto or any entertainment system I've seen in any car.
I have never had trouble setting timers with either.
It is much better today than 3 months ago.
But timers and smart home actions are definitely less reliable and sometimes take absurdly long to respond (like 20-30 seconds p99).
To give you an example, I was having coffee the other morning while unloading the dishwasher and asked the speaker if today was a good day to apply weed and feed on my lawn. This was not possible with the old assistant and was useful to me.
And now if I want to use Gemini on my phone I have to replace Assistant. Nah, I'll keep Assistant, thanks, and just have a shortcut to load Gemini in the browser.
Except the browser experience is so fucking buggy, constant reloads needed..
WhisprFlow produces much better speech-to-text for long text-messaging-by-voice (dictation/transcription) than Apple's speech-to-text does. Whisper models in general seem to do a lot better than most built-into-the-OS/app models. Which is interesting, because there's nothing stopping them from just using Whisper models.
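For reference, running a Whisper model locally takes a few lines with the open-source openai-whisper package (the model size and file name below are arbitrary examples):

    # pip install openai-whisper   (ffmpeg must also be installed)
    import whisper

    model = whisper.load_model("base")           # tiny / base / small / medium / large
    result = model.transcribe("voice_memo.m4a")  # any audio format ffmpeg can decode
    print(result["text"])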
I love MacWhisper personally. Also, Gumroad is a fantastic app-distribution platform that fits my personal values.
https://goodsnooze.gumroad.com/l/macwhisper
As far as the "decision tree" side goes... there's not much that can be done about that now. Agents still go too "off the rails" to be productionized out to the billions of smartphones of the world. I'm working on voice-controlled, agentic-with-rails AI features for my HomeAssistant, because Alexa / Google Home suck. But that's a hobby project, and rogue AI actions only affect me, not billions of customers.
Still love not having google's paws all over my data, though, so not going back.
Any of the Whisper-based apps on the App Store.
(It misunderstands my wife from California all the time, though.)
So if you buy Apple products based on that value proposition it’s a big problem for Apple if they can’t seem to keep their brand-promise in this area.
https://blog.google/company-news/inside-google/company-annou...
Be careful what you wish for.
ChatGPT's voice model has a great user experience and seems like it is seamlessly integrated into the chat, but it's actually a far smaller and dumber model. @husk.irl on Instagram has videos displaying how dumb and undiscerning it is.
People were wowed by the magic at one point, but it's faded. Apple avoids those things, and the limitations haven't been solved.
You have to remember all of the AI companies are making cash bonfires. People aren't going to stop buying iPhones because Siri can only do what it does now.
If Apple focuses on hardware and skips the pay-for-inference bubble they'll come out the other side with the best consumer hardware everybody already has for local inference which is going to eat the whole industry's lunch.
Nvidia is going to have a hard time convincing people they need to buy $1000 LLM inference hardware. Apple isn't going to have a hard time convincing people to buy the next generation of phone/tablet/laptop.
The 2010s were marked by Intel's lazy product lineup: year after year pumping out rehashes of older products, iterating on top of their 14nm lithography with increasingly minor improvements to its architecture, until AMD overcame them. In the process, Apple's partnership with Intel became a liability it had to solve, and the push for the unified ARM architecture was no small feat.
If you ask me I don't think it's justified to degrade the user experience for the sake of focusing on this. It's a trillion dollar company, and has been for a while. Sure it could have tackled both, but what do I know.
In any case I think it explains really well why Siri feels so abandoned.
Intel is already being evaluated to fab Apple's entry level chips, if they can meet performance, energy efficiency, and production targets.
With fabs, other companies can still compete, but you absolutely require a partner with deep pockets to place big orders, since the costs have grown exponentially.
It's the CPUs they have built for their own purposes, which is next-level hardware independence.
Money can often just be one part of the equation.
To do things well you also need: available and capable technical resources, suitable facilities, available and capable leadership and management (engaged at the right level of the business), and a clear vision of what you're trying to achieve or working towards.
Given how Apple appears to operate, I wonder if a strong desire for senior management control/oversight over major developments means they (artificially) limit how many concurrent large-scale things they can work on at any given time?
Maybe not, but that'd be my guess.
I didn't imply it; it's explicit in my comment. It's what their actions show. Their updates make their systems worse and worse, Tim Cook is out, and Siri is in shambles. It might have been something else, but I'm willing to give it the benefit of the doubt, because the alternative is just sheer stupidity.
There's no way they couldn't do a better Siri. For some reason, they just ... won't.
Classic homework assignment: The Mythical Man-Month and related essays.
If Apple can't harness the potential of the currently overfilled labor pool, that indicates a systemic issue within Apple. The entire raison d'être of management structures within a business is to increase the efficiency of capital to drive productive forces. If they cannot do that, it would indicate an extremely problematic competency crisis within Apple's management organization.
This kind of failure when you are a company with the valuation of a first world country's GDP should be raising alarm bells in any rational person's mind.
They have great kernel, driver, and low-level engineering, but the stack above that has a lot of questionable stuff.
Some parts of their software stack, higher up than the kernel, are actually pretty great. There's a lot of really brilliant stuff in their system frameworks, and in SwiftUI, Cocoa, and UIKit. I've been using Linux at home recently, and I find myself missing some of it.
But, on the flip side, suddenly you hit maddening bugs, crashes, or terrible developer-experience papercuts. And, of course, there's the App Store, which is just evil. For my next app I'm just going to go notarization-only, and see how that goes...
Do they? Is Linus' rant about porting git to OSX now obsolete? At least, unlike MS with ReFS, they managed their HFS+ -> APFS migration.
Apple Intelligence is a placeholder and a toe in the water.
Unless you're implying something else?
> People end up thinking Apple invented something because they tend to make the first usable version
I think we can all agree that the original iPhone is the conceptual progenitor of virtually every phone that’s mattered in the market since it was released.
Smartphones prior to it have essentially zero descendants. For all intents and purposes they effectively did invent the smartphone. Hell “smartphones” as a distinct market all but don’t even exist any more. They’re barely even “phones” at this point. And this entire arc of development points directly back to the original iPhone release.
But at the same time.... I had been doing nearly everything the iPhone could do in terms of raw functionality (plus plenty of stuff that took 1+ years to land on iPhones) on multiple different Windows Mobile and Palm smartphones pre-iPhone.
Saying pre-iPhone smartphones don't count because "ugly nerdphone with gross keyboard" is just as ridiculous as a "iPhone was overhyped and no better than existing smartphones" claim.
Apple created a device category within smartphones that then consumed and became what we now think of as a "smartphone" after iPhone and Android together strangled the first movers.
Like, the famous Steve Jobs "an iPod, a phone, an internet communicator" line was just listing standard smartphone features by that point. More or less the definition of a smartphone in fact.
Meanwhile the majority of people on earth own one or multiple devices that are more or less clones of the original iPhone, only faster, larger, thinner, and exponentially more capable.
Because by the time the iPhone came out, everyone I knew had a BlackBerry or some internet-connected Nokia slider, and the iPhone was significantly less capable than either of those. And yeah, both Nokia and BlackBerry screwed the pooch. But again, pretending that smartphones didn't exist or that they were a "blip in the market" is intellectually dishonest, or, like I said, based on what you read, not on actual lived experience. Unless you lived in the US, iPhones were a curiosity for years, a status symbol.
And in this particular war it's even worse: the "winner" will actually just be the "biggest loser," contrary to a traditional war.
Really not true both in real wars and in tech wars. There's no evidence to support this claim.
Android only exists as the dominant mobile platform because it went to full scale war with Apple when the iPhone launched. Those that didn't take part and came after the battle have like <1% market share and Apple and Google are printing money from the cut to their app stores.
Apple doesn't take part in the AI race because whichever AI wins the war in the end, it'll have to be on their App Store to reach users, so Apple wins regardless, due to their App Store monopoly. AIs are no threat to their phone, laptop, and App Store business.
But Google can't afford not to take part in this race because AIs are a threat to their search and ads business.
Same with real wars: the US is the world superpower because it got involved in WW2 even though it didn't have to. Same with Russia and Ukraine: provided they don't wipe each other out scorched-earth, their militaries will be the most advanced on the planet in the modern drone warfare they invented, and after the war is over every other military on the planet will be paying them for their gear and expertise, which they already are.
Anthropic probably couldn't give the uptime guarantees that Google can, right?
If you have terms that conflict with theirs, they aren’t very flexible. Anthropic can be similarly difficult, and their needs from a business perspective probably don’t align with Siri. I would imagine that Google has a more flexible/long term approach to absorbing some risk in a revenue share arrangement than anthropic who generally wants cash.
Anthropic's only purpose is to juice whatever KPIs are going to increase their IPO market cap.
The last sentence doesn't make that much sense to me though. An agreement with Apple to be the lead AI partner would likely juice the IPO a great deal. The financial details wouldn't matter much for the IPO (as the initial financial commitments are going to be small but the halo effect would be real - I think it would in the market anyway).
I think Anthropic has real commitment to their way of doing things which can cause short term issues (and hurt the IPO). And they seem willing to keep those values rather than just making deals to pump the IPO. As you say Apple also sticks to their way of doing things even if it frustrates their partners.
I think not being the lead partner with Apple may well be good for Anthropic long term. But if all you cared about was the IPO just agreeing to Apple's terms likely would have been the best option.
These SpaceX, Anthropic and Open AI possible IPOs are so extreme it is hard to make judgements about them; so maybe there are Anthropic IPO issues to an Apple agreement that I don't appreciate.
It's a weird market and these companies want global domination. TBH, I don't have the knowledge or context to understand how to think in that mode and what the real facts are.
I wouldn't put much stock in the deeply held principles of Anthropic (or Apple for that matter). That's an appeal to emotion. I love the product, but they're happy to randomly rug-pull the product and how it works, both in the publicly available products and other contexts. It's just another company.
Apple is far from perfect but that doesn't mean they don't have a position (say privacy) that they care about and give a lot of weight to when making decisions. But as a huge company they also have many competing priorities.
Caring about privacy or potential abuses of LLM/AI services does not mean that a huge company is going to perform in those areas the way those who want maximum privacy would want.
I do also believe Apple's marketing reasons for promoting their focus on privacy make sense. And even if they don't do as much as I would want, they do make a big difference (on privacy), it seems to me. And I believe that long term there is big value for Apple in building systems that stop users' private data from being abused.
The US government in 2026 is openly and cravenly corrupt, and I don't believe anything at face value. The story about the targeting may be real and material, or backwards engineered to fit the reality. OpenAI is aligned with Larry Ellison and Oracle, and given the favor granted to them by the government, I'd look to that relationship first.
https://daringfireball.net/linked/2025/12/01/gurman-pooh-poo...
Obviously, _what_ someone chooses to leak can still benefit them, even if it's true. You can be selective about what information you share.
This is the important point.
Sending their internal code, documentation, secret tokens, etc. to Anthropic would be completely irresponsible.
But if they are running the models on their own servers, why not!
Yuck. A lot of those replies have LLM smells. Do people love being a hollow puppet for LLMs to fill in? Have people lost their identity?
I feel the same. The quality of both submissions and discussions has considerably decreased. It is still the best general-purpose "aggregator" I know of, but it is not what it was. It is becoming more and more flavor-of-the-month hype and boring groupthink.
HN was great due to the breadth of unique, interesting, nerdy topics, most of which I would have never come across on my own; and the insightful thought-provoking commentary, often by insiders with unique insights and perspectives.
Now it is just the same LLM agentic coding harness hype cycle astroturfing 100x engineer 37k LoC/day BS I could get from Reddit or LinkedIn or Twitter or anywhere else.
The moderators are still doing a fantastic job though! I feel like that is the last big differentiator from just being orange Reddit.
Both the really old timey graybeard techies and the green haired alternative techie communities are reducing in numbers.
There is a market for buying and selling "aged" Hacker News accounts ($3 to $15 for ~500 points) and upvotes/downvotes.
By purchasing just ~300 karma points, founders can unlock an uplift of tens of thousands of dollars in visibility on the home page (clients and investors).
So the LLM comments are not here just for fun, they are clearly farming points.
Ironically, it also increases actual human engagement. This way, the day Y Combinator wants to announce something, they already have a bigger audience than if there were low engagement.
Like the shilling you mentioned, these bots can push downvotes and flag competitors' services.
Essentially the same as on Reddit. If you have incentive, you have a market.
I think I give out about 1 updoot a year. Good to know I've been starving them.
That said, the social media feeds are so trash filled that I avoid them; it's extremely depressing opening up an incognito youtube and seeing what Google thinks will monetize well for an average consumer.
arse
The title refers to most machinery being a "centaur," meaning a thinking human is carried by the machine doing the heavy lifting, while the goal of AI companies is to replace high value work with the opposite. They want to turn people into meat appendages that serve unthinking machines.
To the first question, the answer is yes: most people live their lives mindlessly, with or without LLMs (think of every idiot you knew 20 years ago throwing in punch lines from "Friends" to sound "funny"). To the second question: most people have a twisted view of identity. It is supposed to mean something identifying you uniquely, but to most people it means identifying you as a member of a large group (nationality, political view, religion, the major music genre you like). So now, when every proverbial Dick, Tom, and Harry uses LLMs to generate Confluence content with shiny emojis, what are the proverbial Emily or John to do? Of course they will adopt this new identity; it's who people are now: shallow, hollow puppets for LLMs to fill in.

And to think of the irony: Mother Nature perfected this super-efficient, low-energy, and highly capable thinking machine that each and every one of us holds in their skull. It already put us on the Moon once, before we even had a semblance of a functioning computer! And we choose to throw it away, for fucking what? Verbal diarrhea and pain-inducing coloured walls of text?
All so some retarded antisocial VC-funded "AI founder" can call themselves a tech visionary?
The loss of identity is, IMO, this: people are being given horrible, harmful options for their meaning, health, and wellbeing, and so we get a general sense of most people being lost. Lost in identity, as you asked, though I think it's more than that. In my initiatory work with men (being initiated, not initiating others), we learn that part of the breakdown for most people is being given harmful identity frameworks of dependency and reliance on others. In the initiatory process we learned an identity of service beyond ourselves, through deep embodiment, exercise, and practice, beyond just an intellectual grokking of it. Edit: this is what we used to have throughout human history, but today, as is described in those works, most people have only what would be called pseudo-initiations (marriage, school graduations, children, and work changes), which do not meaningfully contribute to meaning, contribution, or purpose.
What most of us have today is what the AI companies want us to believe: we will give you the money to live (though of course, when you're truly dependent on others, and they see no purpose or value in you and even your entertainment value has gone, why would they keep you around?).
I look at all those files the same way as IDE configuration cruft--it's workstation-specific configuration that shouldn't even go into source control. I would .gitignore all of those files. Is this not what is done in industry?
EDIT: Wow, thanks for all the replies. Very eye-opening to see what's happening outside of my hobby-experimentation with the technology. I was coming at it with the assumption that 1-2 out of 20 people on the team were using CLAUDE.md, so why have it in source control. But if all 20 people are using it, I can see the benefits. This reply chain has really opened my eyes, thank you HN.
I tend to include a well-documented justfile, so between the README and that, common commands are covered. If there's a style guide, it should be its own file, or summarized in the README.
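Something in that spirit (the recipes below are made up for illustration):

    # justfile: documented entry points for common tasks

    # list all available recipes
    default:
        @just --list

    # run the test suite
    test:
        npm test

    # format the whole codebase
    fmt:
        npx prettier --write .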
If Claude is making errors I tend to just update my global Claude file, but I haven’t updated it in 6 months — only to disable Claude signatures on generated commit messages.
Most agents use the README as storage for EVERYTHING related to the project by default, which is annoying for humans who just want to figure out (a) what the thing does and (b) how to install it. Then you start reading and there's some intricate documentation on how data flows through the application, etc.
If you're only working on your personal projects with no collaborators, a global Claude file is just fine. Per-project files are more for things that are specific to that project.
Otherwise it's like leaving vim dotfiles in the repo or something.
It's critical that it's part of the source code.
They often describe:
- Overall architecture
- Repository layout
- Processes to use
- Things not to do: code styles to avoid, libraries to not use, etc.
While they’re primarily documenting these things for an agent, the information is similarly useful to a human.
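A skeleton of such a file (the contents below are invented for illustration) might look like:

    # CLAUDE.md

    ## Architecture
    HTTP API in services/api; background jobs in services/worker.

    ## Repository layout
    src/ holds application code; tests/ mirrors src/.

    ## Process
    Run the linter and the test suite before committing.

    ## Don't
    - Don't add new dependencies without discussion.
    - Avoid hand-rolled date parsing; use the shared helpers.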
The number one reason is: you are on a 10-dev team and it just doesn't make sense for everyone to waste their token budget creating separate instances of this file, which also requires ingesting the whole repo... That is 50-60% of it.
The other bit is that you have a review pipeline hooked into CI/CD, and it is the easiest way to tell the bot how to review your code.
I used to be a purist about IDE configurations, but if everyone isn't on the same page about formatting and stuff like that you see a lot of file churn as things move around.
I would have said the same thing about the .github/ folder, but I've had to add things to it to prevent Copilot from thinking bad patterns in existing code are actually good patterns that should be repeated.
It makes more sense when your communication between teammates is constrained to the repository, because your other communication channels are already saturated. They're meta concerns that really have nowhere to go outside the repository without getting lost.
IMO that is what automated static analysis jobs are for. Let me configure my IDE how I want.
VS Code is one notorious offender in that realm; it will try to commit settings.json even if your gitignores are set up to ignore all the other cruft.
In general, the question of what should go in the source folder is a bit of a mess. Source code, README, and LICENSE make enough sense, but what about files describing project governance or CI configuration logic? Or files that make the forge you're using render the repository in a certain way (for example, bug-tracker templates)? Those are all cruft insofar as they have nothing to do with code, but it's generally agreed that you're supposed to commit them, maybe in a dot-folder if necessary.
Version control everything (inputs)
The idea of having to repeat something to your computer is ridiculous.
If you want private stuff you use CLAUDE.local.md
Also it looks like there's a compilation step to these files, which is interesting. The raw file was included, not the environment specific file.
And tests, linter configuration, docs...
If tools or LLMs can help them with it, then that's fine, but there should always be at least two humans involved: one making changes, one verifying. And if something like this happens, they're both culpable. Not that they should be blamed for it per se, but the process and their way of working should be reviewed.
No, AI code review doesn’t help. Claude can’t even give me correct line numbers 80% of the time, literally just makes them up, and more than half of it is false positive BS anyway.
Our brain is designed to fill in gaps; it's why memory is so blurry when it comes to reciting the facts of what we saw at a trial.
It's why you could swear you saw "x" in the production software you were about to push. But it really comes down to expectations - and those expectations help reduce cognitive load/increase cognitive efficiency (resource usage).
So as more and more people get used to using AI, you will see these mistakes occur more frequently. Because it's how our brains work.
I'm not sure why. It just doesn't feel very Apple-like.
Like doing long division by hand instead of trusting a calculator.
It is no secret that Apple has an enormous R&D budget.
It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then collaborate to bring together the final products.
So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives. Apple is an enormous multinational company, it is unlikely they have zero-AI on-site.
What is guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide. Old-school engineering is too important for Apple.
I'm sure journalists and Anthropic would love to have you believe otherwise, but I think we need to keep our feet on the ground here and accept the reality is more old-school.
After all, as others have pointed out here already... while the rest of Silicon Valley has been shoveling truckloads of cash at AI, Apple has been patiently sitting, watching the bandwagon trundle along the rails.
Having worked there, I can say this is a perfect description of the organization, in my experience.
> So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives.
> What is almost guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide.
100% agree
The research surveyed 121,000 developers across 450+ companies. A striking 92.6% of them use an AI coding assistant at least once a month, and roughly 75% use one weekly.
It's weird to believe that large corporations should be ashamed to use AI. It's a standard engineering practice; otherwise it's like refusing autocomplete because autocomplete is not right 100% of the time.
My point is:
Apple's customers aren't ready for that.
They don't understand the nuance.
And they don't think they should pay for a computer to do the work instead of a human, because "computers work for free".
You say this with such confidence. Do you have some inside source with enough access that you can be that certain?
You can include project/team-based md files in your repo and exclude env/system md files (e.g., from your home directory, which includes your personal coding instructions).
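In practice that's a one-line .gitignore entry: CLAUDE.md stays committed and shared, while the personal file stays local:

    # .gitignore
    # personal agent instructions stay out of the repo
    CLAUDE.local.md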
So yeah.. nothingburger.
Seems like at some point most of the actual humans just gave up on replying.
Had some issues with my monitor: it apparently saw the connection to my Mac Mini, but the Mac Mini displayed black; it had apparently somehow gotten out of sync with my monitor. Sleeping the display controller, then waking it, solved it.
Gathered a bunch of data, wanting to submit a report, since I've been an Apple Developer Program member since like two days ago, and I wanna be a good c̶u̶s̶t̶o̶m̶e̶r̶ user, so I opened up Feedback Assistant.
It asks me for my email; I input it and press enter. A password input appears, but keyboard focus doesn't move there automatically. I know it's such a tiny nitpick, practically, but tiny shit like this makes it so obvious that not a single person actually tried this UX. 10-15 years ago, Apple would never release something that wasn't perfect, but now these UX rough edges are absolutely everywhere across the OS.
I ended up not logging in at all, wrote my fix into a tiny fix-display.swift file which I'll run when it happens instead.
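(I don't know what's in their Swift file, but the same sleep-then-wake toggle can be approximated with two built-in macOS commands, if anyone wants the idea as a sketch:

    # put the display to sleep...
    pmset displaysleepnow
    # ...then wake it by asserting user activity for a second
    sleep 2; caffeinate -u -t 1

Whether this matches their actual fix is an assumption on my part.)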