Incredible article, a lot to unpack here, but I found this particular offhand tidbit interesting. It does seem like any attempt at tech industry regulation over the past decade or two (that isn't somewhat in the interests of big tech anyway, i.e. age verification and so on) has been either overly vague, or overly specific, leading to easy workarounds.
It seems like a microcosm of a wider trend in regulation; the disconnect between intentions and results. On the rare occasions that consumer-friendly legislation does go through, there is no working mechanism for evaluating its effectiveness and refining the rules as quickly as big corporations can adapt to them. I like how the article frames this, of how the regulations are targeting the wrong thing, how they're defined by the problem rather than the desired end state.
For more thoughts along these lines I'd highly recommend checking out Jennifer Pahlka's blog Eating Policy: https://www.eatingpolicy.com/
The demand for AI is currently overwhelming. As in, can't build data centers and GPUs melting overwhelming, companies growing 3x in a month while already at multi-billion revenues.
The models get better and better, while Chinese open source falls further and further behind American companies. The productivity gains are, at this point, obvious. The best talent works (or wants to work) in America and is compensated obscene amounts, the most capital flows through America, and this is still by far the best place in the world to start a technology business.
I think American technology was on the decline for the past few years before LLMs, but for the foreseeable future as long as American companies control the talent flywheel I think the new world of tech is going to be much more American than before.
> Chinese open source is falling further and further behind American companies
This is simply not true?
Just like Chinese EVs and Chinese renewables eventually beat the West, I have little doubt that China can eventually pull ahead, but I think it is probably accurate to say that China is currently still behind (how far is hard to say) because of the slight technology handicap imposed by the US.
Hardware capacity is a separate issue entirely.
> have consistently been keeping up with (albeit a few steps behind)
I mean, this sentence is self-contradictory, no?
> Hardware capacity is a separate issue entirely.
It seems like hardware capabilities are at the very heart of both training and inference, which is why Nvidia and TSMC are hitting record income and capitalization. Feels like divorcing hardware from the equation is discounting a big part of winning this race.

As others have pointed out, no, not at all. For specifics, see the chart from this link posted by another commenter: https://hai.stanford.edu/news/inside-the-ai-index-12-takeawa... . If anything, Chinese models are closing the gap with US models, not falling behind.
> Feels like divorcing hardware from the equation is discounting a big part of winning this race.
Depends on what race you're talking about. When it comes to "who has the most powerful models", I'd argue it's actually not really that significant - China obviously has the power to train good models.
By benchmarks, the Chinese models are ahead of where the proprietary US models were ... something like 6 or 12 months ago. And all the benchmarks are a bit fuzzy anyway on whether a small gap is trivial or significant. The Chinese aren't having any problems keeping up on model quality. The gap isn't going to lead to any difference that matters unless the US pulls a rabbit out of its hat.
Plus, dollar-for-performance they might be leading in practice; it is hard to compete with self-hosted.
Otherwise, no one would need to buy from Nvidia or contract with TSMC.
This depends on how many proprietary APIs are in the way of the model itself.
But if you run your own models then you're not subject to anybody's whims anymore. You have full control of how your software works and what it does.
At some point, though, the balance could tip. It's impossible to say, and it'd be irresponsible to try to predict it, but there isn't any reason English is natively superior, any more than French was 150 years ago, or Latin 600 years ago. But it's a major advantage the US has that isn't acknowledged often enough.
1. English became the lingua franca right when the world really became globalized. So everyone from Europe to Asia to Africa has wanted to learn English as a second language for decades. So even if American power went away, I still don't see English falling from its perch. I often say it's really hard for Americans to learn another language because if you go to another country hoping to learn that language, so often you'll find many/most people just want to speak to you in English.
2. The only other power I could see surpassing the US in the mid term is China (and that's in no way guaranteed), but the Chinese language (Mandarin), and especially Chinese writing is inherently more difficult for foreigners to learn. I'd also argue the Chinese writing system is inherently more poorly suited to the digital world.
Russian is commonly viewed as a difficult language, but it became a regional lingua franca in their sphere of influence. The only reason we aren't speaking Russian is because they lost the Cold War.
I do agree that Mandarin speakers might become more open to Pinyin if more foreigners started learning the language. I'd also point out that English and Romance speakers find Mandarin difficult. For Mandarin speakers, is their own spoken language actually difficult for them? They might find English to be a difficult language.
Mandarin eliminates all of these problems. The tones and characters are difficult, sure, but questions and answers being grammatically identical along with consistent pinyin is a lifesaver.
If you're using pinyin it's already easier than English.
It’s an interesting question: for how long will it remain important to know multiple languages in the age of LLMs? Of course, it’s better to know foreign language(s) — no doubt about that — but for day-to-day work, unless you’re living abroad, it seems that their practical utility will slowly decrease. And speech-to-speech translation will likely continue to improve as well.
- The culture is, I think, the root of the flywheel. The entrepreneurship and competitive intensity is unlike anywhere else I've lived (not an American). It's okay to go bankrupt. It's okay to fail multiple times and burn millions in VC money, in fact it's encouraged! Take a break and raise another round and go again, VCs like second time founders. In my home country having one business go under is the worst thing imaginable.
- The capital markets: even YC (one of the lower-tier accelerators by now) gives you $500k for 7%, sometimes pre-revenue. That is an absurd proposition elsewhere
- Surrounding yourself with top talent raises the ceiling for what you think is possible and accelerates your career really fast. It's inspiring for me to be around so many smart and successful people.
Older people here in Northern Europe often seem to speak English quite well, in France less so.
It isn't a moat. My partner's written English surpasses mine, and it is her third language.
It is money.
Specifically, right now, petro-dollars. For a while before that, it was pounds.
The writer is asking how much longer that will remain true.
Actually, there is. English is unusual among major languages in its ability to incorporate loan words and features of other languages. This is in part due to its history as a merger of some 10k French (thus, Latinate) words into an otherwise Germanic language. But it's also due to the unfortunate history of the British empire, followed by American hegemony, which spread English to many other cultures who freely adapted it.
Whether this is enough to justify a continuing status as "the international language" is obviously debatable. But English is different from almost all other human languages, not because it is better, but because it is just ... more
Because most grammatical markers in English are isolated prepositions, there are no problems caused by phonetic mismatches with the words they attach to, as happens in languages where a borrowed noun must fit into a declension pattern, which can produce phonetically awkward words.
While English is indeed the easiest of the European languages to borrow new words into, one could easily construct an artificial language that would be even better than English in this respect, and which would remedy various problems of English: the necessity of learning a written form and a spoken form separately for every word, the many semantic ambiguities that do not exist in other languages, the difficulty of expressing certain nuances with the existing modal verbs, and the verbose constructions required for certain tenses, moods, and voices.
Thus English does not really have any technical advantages. Its moat is the inertia created by its widespread present use, which will prevent any other language from replacing it, regardless of how much simpler and better that language might be.
This is actually another strength of English, not a weakness.
> one can easily construct an artificial language that would be even better than English from this point of view,
The history of artificial languages (Esperanto in particular) is not encouraging in this regard.
> Thus English does not really have any technical advantages.
I wasn't really trying to suggest a technical advantage, but rather a cultural one. English users, as a worldwide bloc, are incredibly open to loan words, modified grammar, and even whole new vocabulary. All of this happens in other languages too, but the culture of English makes, or allows, it to happen much faster and much more broadly.
But this advantage is vanishing. While automated translation is still not good enough for someone fluent in English to tolerate, it's already more than good enough for everyone else, and the progress has been insane over the past few years.
I don't think English speakers are going to have any edge moving forward.
There are applied AI cos making $100-400M+ within just a few years of incorporation; does that count as financial gain?
Academia is currently 6-12mo behind the frontier of the industry due to secrecy and publication times, so any "long term" study, even for a year, would be out of date on arrival
The question is not whether companies are investing in AI, it's whether they're getting anything in return. Or whether execs are just as anxious and confused about the story being sold as everyone else, taking the ludicrous amount of capital being put behind it as evidence that there's a "there" there, and hopping on the train out of pure FOMO and hedging, regardless of whether they're actually getting anything out of it.
If we start to see spend go down because projects fail and companies run the ROI calculation and determine it's not worth it, then I'll stand corrected and happily admit it.
code wrappers - cursor (special case), lovable, replit
part model part applied - perplexity, 11labs, cartesia, suno
applied branches of model labs - codex, claude code, deployment cos & fde teams
ai roll ups - thrive, longlake, some stealth ones
applied - cognition, sierra, fin, harvey, legora, glean
part data part applied - scale
Margins vary, but many of these companies' revenues are already a chunk higher than what was last publicly reported.
Wouldn't be surprising to see some of them 2-5x revenue in the next few years.
Anecdotally, I'd wager that the modest/incremental but real gains from boring, daily application pale in comparison to the wasted cycles on terrible ideas, disrupted roadmaps due to poor business decision-making, and the uncritical injection of insane, LLM-generated bullshit into official business documents (fake KPIs for unmeasurable outcomes, references to nonsensical or non-existent processes, data-driven decisions backed by hallucinated data, etc.).
I'm deeply skeptical that organizations will see real, lasting gains. I think they'll see some acceleration of copy/paste-adjacent workflows and gains in non-work like generating slide templates, but that's about the limit of it.
As prices rise to meet actual cost, I shudder to think about the idiotic, reactionary ripples it will send through corporate leadership, with everyone scrambling to evade responsibility at the same time and blaming their tech teams for failing to deliver on bullshit/impossible AI initiatives.
TL;DR: yeah, I'd also like to see some real numbers.
> The demand for AI is currently overwhelming. As in, can't build data centers and GPUs melting overwhelming, companies growing 3x in a month while already at multi-billion revenues.
This isn't a sign of a successful, sustainable business; it's what a bubble looks like. Between the aggressive marketing (including astroturfing!) that LLM companies are engaged in, the perceived stock market advantage companies can gain by shoving LLMs into their offering, and the missile-gap-style approach that many businesses are taking around this, this centre cannot possibly hold.

> The models get better and better, Chinese open source is falling further and further behind American companies
American companies are, to be fair, flouting safety and ignoring the wider social impacts of this technology, and both the US federal and state governments seem more than willing to go with the flow on that, probably at least partly because of a recognition that the LLM industry is propping up a significant part of the US economy.

> The productivity gains are, at this point, obvious
They are, emphatically, not. For me and my peers (most of us individual contributors in software -- and emphatically, those of us working at companies that haven't fully leaned into vibe coding), our jobs have become babysitting Claude agents and spending most of our time cleaning up their messes and doing code review. Short-term, sure, this might lead to some productivity gains, but long-term, this is going to lead to mass burnout.

> The best talent works (or wants to work) in America and get compensated obscene amounts, the most capital flows through America, this is still by far the best place to start a technology business in the world
Unfortunately, the US is in the midst of cracking down on immigration, and the international perception of the country is increasingly that it is an unsafe one.

> I think American technology was on the decline for the past few years before LLMs, but for the foreseeable future as long as American companies control the talent flywheel I think the new world of tech is going to be much more American than before.
What I see in the US's LLM-backed economy is what I see in many businesses in this same economy, increasingly: the blanket of AI is being used to paper over serious, systemic issues in the organization, but this clearly won't hold. In a world where we have an ounce of responsibility for what we produce, and where customers care about the quality (notably, quality as in correctness) of what's being delivered, this will eventually collapse.

I think it's obvious that demand is overwhelming supply right now. I agree that we don't know how much of the demand is due to perception, perverse incentives, or poor management, and how much of the demand is 'real'. I personally believe that the demand is mostly real and will continue to go up, but I don't have a crystal ball.
I also acknowledge that the productivity gains are highly dependent on your specific company's implementation and the work that you're doing. I think the role of a technical IC (which I am as well) is going to be managing fleets of agents, and many people who aren't suited to that type of work will leave the industry (and many people who are will join).
I generally agree with you on the points about American politics, I don't think the way they are cracking down on immigration is very wise.
As for correctness - it's a nontrivial problem to deploy AI in prod that works and doesn't blow up over millions of runs. Hence the initial value has accrued to the intelligence layer (labs), but the bulk of the remaining value will accrue to the applied layer, in my opinion.
Our demand for compute and software is infinite, but our price sensitivity is also high.
When developers say that LLMs make them more productive, you need to keep in mind that this is what they’re automating: dysfunction, tampering as a design strategy, superstition-driven coding, and software whose quality genuinely doesn’t matter, all in an environment where rigour is completely absent.
They are right. LLMs make work that doesn’t matter easier – it’s all monopolies, subscriptions, VCs, and lock-in anyway – in an industry that doesn’t care, where the only thing that’s measured is some bullshit productivity measure that’s completely disconnected from outcomes.
...
One group thinks this will make the world ten times richer. The other thinks it’ll be a catastrophe.
(from an earlier post, https://www.baldurbjarnason.com/2026/the-two-worlds-of-progr...)
I personally disagree with that worldview. (I read the article and the guy's tone is lowkey salty)
The reality is it's insanely hard to convince people (*especially* consumers, *especially* technical consumers) to pay up to use software. Anyone who has tried to sell software as a startup knows: customers are laser-focused on outcomes and value, and anything that raises an eyebrow means you're toast.
Ofc there are perverse incentives and I think those are bad
My 2cts
The industry is in an extremely bimodal situation, which drives most of that rot.
You have the startups and small businesses who can't get businesses or customers to pay up. And you have the SaaS giants, who already have their customers and can charge whatever they want.
And this is where the "rotten software industry" and doubts about AI feasibility intersect: Both of these business archetypes lack a clear use case for AI.
If you're small, congratulations: you can now spend thousands a month on tokens and still have $0 of revenue. AI doesn't really help you "catch up" to customer expectations, as now you're also having to compete with the myriad of slop-shops and in-house AI software development.
If you're a giant, well... why bother? Why give OpenAI or Anthropic a million dollars in tokens? They don't need to make the software better, nor do they need any "AI efficiency" to do layoffs.
My view is they both have a clear use case for AI, because every business has a use case for more intelligence on tap. Enterprises big and small already shell out billions upon billions for AI so I'm not sure how your premise holds
In fact AI has resulted in more startups than ever starting to take market share from the incumbent software companies (and the market has started to price that in)
This is the part I would contest.
Obviously there's some disagreement about how much AI is actually meaningfully intelligent for any given task, but even outside of that:
Turning "intelligence" or staff skill into revenue is not automatic or trivial. You can hire the smartest process engineer on the planet, but if your assembly line has no major inefficiencies, they just can't do anything. They can build you a 2nd assembly line but if there's not the market demand to buy that much product, it's pointless.
For software development: "More code" does not really translate into "more revenue".
If you are a SaaS giant, you don't need "More code". You're not in the business of doing software development. You're in the business of rent seeking. No need to replace people with AI, you can just fire them and replace them with nobody.
And if you're a small firm, code isn't your USP. Everyone has code. Good god especially now with AI, every dumbass' startup can have a trillion lines of code. As has oft been observed (long before AI), the code is a liability. There's not really much efficiency to be had by using AI, because the devs are already minimally writing code.
By your logic, shouldn't these enterprise's cash flow be expanding due to AI instead of shrinking?
They all do, but for small companies it won't be a benefit, it will be table stakes. It will also not increase revenue for them, it will reduce it because more competitors will be introduced, and customers won't be able to easily differentiate the true slop from the expert-guided and curated slop. The only alternative will be to become more of a slop shop, i.e. replace expensive programmers with cheaper AI, lowering your quality. Or to shut down.
For big companies who have always had terrible quality that didn't matter at all to their bottom line, of course it's a good investment. They can fire programmers. Do buybacks.
So the solution is to reduce the cost to zero, instead of competing to provide the best outcome and highest value?
That results in the winners providing insane value to both customers and equity holders
But we already know the US doesn't: the AI competition is largely Chinese talent vs Chinese talent that the Chinese government allows to work in the West. They control a plurality of the global AI talent pipeline and can cut it off at any time, as has already happened in reverse for Western semiconductor talent in the PRC. That leverage applies to many other sectors.
Simple law of large numbers: the PRC produces comparable numbers of STEM graduates to the rest of the world combined, so the best talent going forward will be Chinese... with little English fluency. English was deprioritized from mandatory a few years ago in the PRC, so the smartest kids, with access to the most modern corpus of research in the most productive academic system, are going to be locked behind Mandarin in the future.
Western models are not getting better in proportion to the massive compute advantage predicted, even during a period when the compute gap vs the PRC is expanding. And better in which ways? There are entire industrial sectors where US models can't get better vs the PRC, for the simple reason that the industrial chains do not exist in the US (or at scale in the West as a whole). Throwing $$$ at half the problem is severe misallocation, but the groupthink in the $$$ group probably feels like everything is peak because of valuations and FOMO investments, while digital companies figure out how to integrate AI to write better newsletters and some PRC dark factory goes brrrrt. A little hyperbolic, but you get the point.
There's also something to be said for the fact that the PRC could cut off the talent pipeline to US AI at any time, but hasn't... nor is it losing its mind over the AGI threat complex. They see an absurd amount of $$$ being dumped into Western AI and ask themselves: why stop this hyper-financialized capital bonfire?
Which companies are growing, the ones mining for gold or the ones selling the shovels?
Wait until they charge the real price; if I sold a dollar for 10ct I'd also have a lot of demand.
I'm burning billions of tokens on chatgpt "deepresearch Pro extended" for things I wouldn't even bother googling, the second I have to pay even 2x the price I won't use that anymore
If the LLM was GPT-1, most people wouldn't even use it for free. So clearly there's another axis here?
Until their finances are open, you can't trust anything they say.
Real AI is being suppressed and it seems that it will not be allowed to exist in the mainstream, especially in the US.
Prior restraint is going to put a damper on American state of the art for the foreseeable future.
https://thezvi.substack.com/p/the-ai-ad-hoc-prior-restraint-...
In the longer term, companies won’t be able to build AI infrastructure fast enough to keep up. The construction capacity isn’t there. The hardware production capacity isn’t there. Raw materials, energy, water—not enough of any of it. The supply chain is a fragile, grotesque joke.
> as long as American companies control the talent flywheel
The companies are eating their seed corn. Senior devs are going to age out and there won’t be enough juniors coming up the ranks to replace them. The oncoming demographic crisis multiplies this problem.
Americans decided to sabotage their own public education system for generations. They were able to bridge the gap with foreign undergrad/grad students for a while but that well has been poisoned, probably for good.
I'm sad that America is making it more difficult for foreign talent to come in. But with the flip-flops between D/R in the White House, it's really hard to predict what immigration will look like even 5 years from now.
1. People really voted for getting violent criminals out, in which case there is going to be a massive backlash against the current policies.
2. People are really convinced that immigrants are making their lives worse, in which case as things actually get worse with the lack of immigration, they will probably double down. Politicians can keep using immigrants as a scapegoat, and fascism here we come!
And that's just in the US; the rest of the world is using Chinese models as well. Which means these models get far more collaboration from the global research community, being developed in the open. They will set the standards for how APIs work. And they will be what everyone uses going forward.
The closed approach simply can't compete with that. The same way Linux destroyed Windows on servers, open AI models will destroy proprietary solutions as well.
For most use cases, you don't actually need frontier performance either. Customization, cost, and data sovereignty are far bigger practical concerns. If you can run your own model on prem and tune it to exactly what you need, then you're both saving money and getting better quality output.
It's also worth noting that tooling can go a long way toward improving the quality of output from the models, and this is very much an under-explored area right now. For example, the ATLAS agentic harness does a clever trick where it gets the model to generate multiple candidates, then uses a second lightweight model as a heuristic to score them, keeping the promising ones. This drastically improves coding capability.
https://github.com/itigges22/ATLAS
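The generate-then-rerank trick described above can be sketched generically in a few lines. This is a minimal best-of-n pattern, not ATLAS's actual implementation; `toy_generate` and `toy_score` are stand-ins I've made up for the main model and the lightweight scoring model:

```python
import itertools
from typing import Callable, List, Tuple

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 5,
              keep: int = 2) -> List[Tuple[float, str]]:
    """Sample n candidates from the main model, score each with a cheap
    heuristic model, and keep only the top `keep`, highest score first."""
    candidates = [generate(prompt) for _ in range(n)]
    ranked = sorted(((score(prompt, c), c) for c in candidates), reverse=True)
    return ranked[:keep]

# Toy stand-ins: the "generator" emits progressively longer answers,
# the "scorer" prefers shorter ones.
_counter = itertools.count()

def toy_generate(prompt: str) -> str:
    return prompt + " " + "x" * next(_counter)

def toy_score(prompt: str, completion: str) -> float:
    return -float(len(completion))

top = best_of_n("fix the bug", toy_generate, toy_score, n=4, keep=2)
# top now holds the two shortest candidates, best-scored first
```

In practice `generate` would sample the main model at nonzero temperature and `score` would call a small reward or heuristic model; the selection logic stays the same, which is why a weak scorer can still lift a strong generator.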
There's also a paper along similar lines discussing how using a harness to force a project structure also allows it to work on much larger projects successfully.
https://arxiv.org/abs/2509.16198
So, I don't think that raw power of the model is even the most important part at this point. We can squeeze a lot more juice out of smaller models we can run locally by using them more effectively.
We're basically in the mainframe era of this tech, but the pendulum always swings to tech getting more optimized and moving to edge devices over time. And I think we're already starting to see this happen with local models becoming good enough to do real work.
And there are also occasional statements like the one by Airbnb here disclosing what they use https://www.bloomberg.com/news/articles/2025-10-21/airbnb-ce...
"Chinese models are what pretty much every AI company in the US is using now" - just untrue. you think people inside Cursor use composer for most of their work? haha
The talent at the labs far surpasses the global research community; it's just not comparable.
I'm not saying I prefer it this way, I want open source to do well but it's just not happening at the current pace
The idea that the talent in the US surpasses the global research community is laughable. China already tops the world in artificial intelligence publications. https://www.science.org/content/article/china-tops-world-art...
China also has a population of 1.4 billion people, and an excellent education system. Pretty much all top universities are Chinese. https://www.nature.com/nature-index/institution-outputs/gene...
And let's not forget that top AI researchers from US are now fleeing to China. https://www.scmp.com/news/china/science/article/3353398/lead...
Not denying that China is a close #2 btw.
And specifically to AI, practically all major innovation that's been published and is used in the wild comes from Chinese companies. Before DeepSeek, everybody just assumed you needed a gigantic data centre to train models. Qwen is showing that you can get near-frontier quality on your desktop. Nothing of the sort is coming out of the US.
And frankly when you look at the recent report from Stanford, it's embarrassing af for the US. Look at the chart on how much money is going into AI in US relative to China, and then at the chart showing how there's practically no difference in quality of the models. The only thing the US is ahead in is burning through capital like there's no tomorrow.
https://hai.stanford.edu/news/inside-the-ai-index-12-takeawa...
The article is delusional. In particular, these claims:
- The Iran war is over.
- Iran has "won" the war.
- The US has lost influence with Asian allies.
- The petrodollar is over.
- The US economy is weaker due to billionaires and the stock market.
It's especially laughable given the recent diplomacy with China.
I also predict a secular government is running Iran before the fall...
Iran was at a peak of power when its proxies pulled off the biggest attack against Israel in decades. Three years later, most of the leadership involved is dead, Iran's power is at a nadir, and Israel has re-established itself as the dominant power.
That Iran isn't totally incapable of fighting back and hasn't capitulated isn't much of a feather in their cap.
Technology wise the West has a string of victories. SpaceX, AI, Waymo, and Apple are leaps and bounds ahead of any Chinese competitors.
Nothing has changed. The US honest to goodness lost a war 50 years ago and continued to dominate. Not to mention Iraq/Afghanistan. Iran being something less than a perfect and clean victory doesn't fundamentally change anything.
This lack of consideration will lead to significantly less favorable trading for all of the businesses you listed, regardless of their current prowess.
Fundamentally nothing has changed about the world or the relationship between the US and its allies. Once Trump is replaced by someone closer to European social values and less of an asshole the temperature will change. Just like it did from Bush to Obama.
Technology has politics, and it often serves to reproduce terrible modes of operation instead of something that could be described as "good progress" for humanity. The renewable energy landscape is the best example of a space that has had to fight against the old world's financial interests, even in the face of obvious monetary and technological supremacy.
The software world unfortunately has followed adtech + social media companies' operational structures, and we lost decades of "good progress" to attention-funded software.
I have a feeling this is why very few novel companies are springing up from this LLM shift: the relationship of a) lines of code b) solving problems to achieve progress c) getting paid for it has been decoupled for so long, because attention has been the main currency online.
Unsurprisingly, the Chinese technology market leap is fueled by a focus towards the "physical" (raw materials, manufacturing) and it's no surprise that a highly educated population is beating many Western economies in the electronics market (from small gadgets all the way to cars and energy). It's not impossible to try catching up by educating our people to reorient money to industry that brings "good progress", instead of industry that brings virtual money in the form of stocks or tech that mainly serves vices and/or entertainment.
this is the sad part for me.
I remember when computers did things FOR you.
now you have to do this careful calculus where you balance what you get vs what you give up.
We just finished watching a 90s Dennis Potter TV series, Lipstick on your Collar. Strange and mannered, and about in part the preparation for Suez at the end of empire, by an elderly leadership that hadn't realised that the British empire was already done (and at a time when the young were only interested in America, the new power). More stupidity than malice there. What we're getting today looks like both.
That many people don't know what a file is, is most probably down to the very explicit war of one company, namely Apple, on the very concept of a file. And I fully agree that it is a terrible idea that makes people completely forget that what they're handling is actually a computer that could be doing so much more than what Apple allows them to do.
Anybody have any idea what diagnostic shapes he's talking about?
Web version here, if you want to see what it's like https://psytests.org/arc/ssten.html
I call the Hormuz crisis the biggest strategic blunder in US history and it's not even close. It's such a blunder it will probably be written about in history books as the end of the post-1945 era. It's not lost on people that the US would rather let the world burn than split with its attack dog in the region, even slightly. We're also seeing that, as the author notes, a tiny power can strategically defeat a military that over $1 trillion a year is spent on.
The author rightly points out the lawlessness of everything going on and the destruction of trust in financial markets. All of this is correct. But I don't think the author really identifies the reasons for the push for AI: labor displacement and wage suppression. Or, to put it another way, further wealth concentration into the hands of the "oligarchs". I guess it's another version of "whatever our oligarchs want to steal this month, they get."
This crisis created billions of arms sales which is a success for some, especially as it made the other scandal go away.
And now there's evidence that Epstein was behind the prosecution of Swartz. He knew the man was onto something.
The authoritarianism is only more obvious. No one bothers to hide it. The social irresponsibility ramps up and up. Genocide in Burma? The cost of social connection. The cost of freedom.
At some point, it all breaks. No one knows what happens next. Models smooth reality, but reality, at some point, detests smoothness enough to become pointed.
This has been so overwhelmingly obvious in third-world countries (viz. India's "non-alignment" foreign policy), but Europe, Canada, Japan, and Australia still didn't fully get it: the concept of a "rules-based world order" is just a layer of makeup over "American Imperialism". Americans make rules the same way Tony Soprano made rules: strictly for self-advantage. We should be thankful to Trump for finally wiping off that makeup.
True, Mark Carney explained that in Davos. But I am not sure Canadians got it.
> Typescript, Visual Studio Code, GitHub, npm, and so much more exist primarily because Microsoft executives believe this will lead to more business for Azure and other Microsoft offerings.
I don't think it's a conspiracy theory to think Microsoft releases its tools with the intention of people using its paid platforms/services. But the original person I replied to definitely thinks the author of the blog post is implying something insidious, which they don't seem to be.
Also, never trust Microsoft.
You may find it easier to function in modern society without having such a strictly literal view of language. Idioms and metaphors do exist.