I've only ever worked at startups before, but HashiCorp itself left that category when it IPO'd. Each phase is definitely different, but then again I don't want to go back to roadmapping on a ridiculously small whiteboard in a terrible sub-leased office and building release binaries on my laptop. That was fun once, but I'm ready for a new phase in my own life. I've heard the horror stories of being acquired by IBM, but I've also heard from people who have reveled in the resources and opportunities. I'm hoping for the best for Nomad, our users, and our team. I'd like to think there's room in the world for multiple schedulers, and if not, it won't be for lack of trying.
Every IBM product I've ever used is universally reviled by every person I've met who also had to use it, without exaggeration in the slightest. If anything, I'm understating it: I make a significant premium on my salary because I'm one of the few people willing to put up with it.
My only expectation here is that I'll finally start weaning myself off terraform, I guess.
During my time at IBM and at other companies a decade ago, I can name examples of this:
* Lotus Notes instead of Microsoft Office.
* Lotus Sametime Connect instead of... well Microsoft's instant messengers suck (MSN, Lync, Skype, Teams)... maybe Slack is one of the few tolerable ones?
* Rational Team Concert instead of Git or even Subversion.
* Rational ClearCase instead of Git ( https://stackoverflow.com/questions/1074580/clearcase-advant... ).
* Using a green-screen terminal emulator on a Windows PC to connect to a mainframe to fill out weekly timesheets for payroll, instead of a web app or something.
I'll concede that I like the Eclipse IDE a lot for Java, which was originally developed at IBM. I don't think the IDE is good for other programming languages or non-programming things like team communication and task management.
I've seen a lot of failed projects for data entry apps because the experienced workers tend to prefer the terminals over the web apps. Usually the requirement for the new frontend is driven by management rather than the workers.
Which is understandable to me as a programmer. If it's a task that I'm familiar with, I can often work much more quickly in a terminal than I can with a GUI. Assuming that this is different for non-programmers, or that they are all scared of TUIs, is often a mistake. The green screens also tend to have fantastic tab navigation and other keyboard navigation functionality that I almost never see in web apps (I'm not sure why as I'm not a front end developer, but maybe somebody else could explain that).
I'll defend green screens all day long. Lots of people like them and I like them.
Everything else you listed I would agree with you about being terrible and mostly hated though.
Back in maybe 2005, in our ~60-person family business, I had the pleasure of watching an accountant use our bespoke payroll system. That was a DOS-based app, running on an old Pentium 1 system.
She was absolutely flying through the TUI. F2, type some numbers, Enter, F5 and so on and so on, at an absolutely blistering speed. Data entry took single-digit seconds.
When that was changed to a web app a few years later, the same action took 30 seconds, maybe a minute.
Bonus: a few years later, after we had to close shop and I moved on, I was onboarding a new web dev. When I told him about some development-related scripts in our codebase, he refused to touch the CLI. Said that CLIs are way too complicated and obsolete, and expecting people to learn that is out of touch. And he mostly got away with that, and I had to work around it.
I keep thinking about that. A mere 10 years before, it was within the accepted norm for an accountant to drive a TUI. Inevitable, even. And now, I couldn't even get a "programmer" to execute some scripts. Unbelievable.
The best part was that it was entirely keyboard driven. If you can touch type, you can just read the paper and type away. The job was mind numbing, but the software itself was great.
Consistency is a thing. Old Windows apps often followed a style guide to some degree; that was lost with the web (which is also hard, since style guides differ between systems like Windows and Mac), and the web never came as close as mainframe terminal apps, where function keys had global effects.
All of the major platforms have a HIG that tells developers how to maximize the experience for users. Webapps have dozens of ways to do things like “search”. Those who never developed for a platform with a HIG do not value it and keep reinventing everything.
A few years later in college I worked there again and by that point they'd transitioned to a much slower GUI that basically just wrapped the underlying green screen system. The learning curve was slightly better, but it wasn't nearly as fast.
Purpose-built mainframe-based TUIs were amazing. We lost a lot in pursuit of colored pixels.
Despite its obvious downsides, for people who do regular form input and editing, it's often better than the flavor of the day web framework IMO
I mean, I wouldn't choose to use it, but I get it
At the end of the day most people are lazy and most things, including (especially?) things done for work, are low quality. So you end up with the default more often than not.
Maybe someone has examples of web apps made for also a high skill ceiling?
I've heard Linear, Superhuman does something like that while maintaining a nice interface, but I've never used those
then, every couple of years, a startup tries to carve out a niche by making a product that caters to power users and makes efficiency a priority. those power users adopt it and start to recommend it to other regular users. this usually also tends to work quite well because even regular users are smarter than expected, especially when motivated. thus the product grows, the startup grows and voila, a tech giant buys it.
now one of the tech giant's managers gets the task to improve profits and figures out the way to do this is to increase the user base by making the product easier to use. UX enshittification ensues, the power users start looking out for the next niche product and the cycle starts anew.
rule of thumb: if the manager says "my grandma who never used a computer before in her life must be able to use it", abandon ship.
- Confirm this is correct? (Yes=F1, No=F2)
- Would you like to make any changes? (Yes=F1, No=F2)
And maybe sometimes flip the yes/no F-key assignments as well.
In theory this was done to force users to read the question and pay attention to what they were doing; in practice, users just memorized the key sequences.
/s
Post-web and post-9/11, where web browser UI has infested everything, we are now in a Cambrian explosion of crayon-eating UI design.
It seems our priorities have been confused by important things like 'Hi George. I just noticed, that for the admin panels in our app, the background colours of various controls get the wrong shade of '#DEADBF' when loading on the newest version of Safari, can you figure out why that happens?'. 'Oh, and the new framework for making smushed shadows on drop-downs seems to have increased our app's startup time on page transitions from 3.7 seconds to 9.2 seconds, is there any way we can alleviate that, maybe by installing some more middleware and a new js framework npm module? I heard vite should be really good, if you can get rid of those parts where we rely on webpack?'
Agree! Back in 2005, I was involved in a project to build a web front end as a replacement for the 'green screen' IBM terminal UI connecting to AS400 (IIRC). All users hated the web frontend with passion, and to this day, I do not see web tech that could compete in terms of data entry speed, responsiveness, and productivity. I still think about this a lot when building stuff these days. I'm hoping one day I'll find an excuse to try textualize.io or something like this for the next project :)
The fact that someone who has been doing it for years can do it faster is obvious, and pretty irrelevant.
Take someone who has never used either, and they'll enter data on the web app much faster.
You don't see keyboard nav in most web apps for similar reasons. First-time users won't know about it, there's no standard beyond what's built-in the browser (tab to next input, that kind of thing), and 90% of your users will never sit through a tutorial or onboarding flow, or read the documentation.
I would agree with you if we were talking about a customer facing webpage or something. But an app for say an accountant? That should be a TUI or as fast as a TUI. The workers are literally hired to get over the learning curve and become fast with the app, so it's not as big a concern if first-use is more difficult. You aren't trying to sell them a product and drive higher percentage click through.
I 100% agree with you for applications for say online shopping. Those should prioritize new user experience over long time user efficiency probably.
But yeah, some elements of that list have convinced me to steer very clear of any products from that company
We were given old Macs running Classic to run Notes so we had two computers. One being MacOSX. Notes was the biggest pile of crap I’ve ever had to use. With one exception…
On the OSX box we were happily running svn until we were forced to use some IBM command-line system for source control. To add insult to injury, the server was in Texas and we were in Boca Raton (old PC factory as it happens). The network was slow.
It had so many command-line options that a guy wrote a Tcl script for it.
Adding to that was the local IBM lan was token ring and we were Ethernet. That was fun.
I have no idea how/why IBM of all places developed or sold this software but it badly needs to die in a fire.
Database technology which would seem outdated in 1994 with a UI and admin management tools to match.
I expect it to still be used in aviation or army-related domains, maybe pharma.
It works great for Python and C++, honestly. If you're a solo dev, Mylyn does a great job of syncing with your in-code todo list and issue tracker, but it's not as smooth as the IDE side.
However, its Git implementation is something else. It makes Git understandable and allows this knowledge to bleed back to the git CLI. This is why I've been using it for 20+ years now.
My favourite Sametime feature within Pidgin was, well, tabs (I can't remember if the Windows client had tabs as well..?), which was revolutionary for an IM client in 2005.
But my secret actual favourite feature was the setting which automatically opened an IM window /tab when the other person merely clicked on your name on their side (because the Sametime protocol immediately establishes a socket connection), so you could freak them out by saying hello even before they'd sent their initial message.
https://trends.google.com/trends/explore?date=all&q=terrafor...
https://trends.google.com/trends/explore?date=all&q=terrafor...
That being said, it'll be interesting to see if it's still a rounding error 2 years from now.
But with a very IBM move, and with some tunnel vision, they got triggered by the few people who abused the RedHat license model and rug-pulled everyone; most importantly universities, HPC/research centers and other (mostly research) datacenters which were able to sew their own garments without effort.
Now we have Alma, which is a clone of CentOS stream, and Rocky which tries to be bug to bug compatible with RHEL. It's not a nice state.
They damaged their reputation, goodwill and most importantly the ecosystem severely just to earn some more monies, because number and monies matter more than everything else for IBM.
Remember. When you combine any company with IBM, you get IBM.
Alma is not a clone of CentOS Stream. You can use Alma just like you were using CentOS. It's really no different than before except for who's doing the work.
I agree that communication was bad. But why do you believe that Red Hat isn't able to screw up on their own?
I'll kindly disagree on this with you. Reading the blog post titled "The Future of AlmaLinux is Bright", located at [0]:
> After much discussion, the AlmaLinux OS Foundation board today has decided to drop the aim to be 1:1 with RHEL. AlmaLinux OS will instead aim to be binary compatible with RHEL.
> The most remarkable potential impact of the change is that we will no longer be held to the line of “bug-for-bug compatibility” with Red Hat, and that means that we can now accept bug fixes outside of Red Hat’s release cycle.
> We will also start asking anyone who reports bugs in AlmaLinux OS to attempt to test and replicate the problem in CentOS Stream as well, so we can focus our energy on correcting it in the right place.
So, it's just an ABI compatible derivative distro now. Not Bug to Bug compatible like old CentOS and current RockyLinux.
TL;DR: Alma Linux is not a RHEL clone. It's a derivative, mostly pulling from CentOS Stream.
> I agree that communication was bad. But why do you believe that Red Hat isn't able to screw up on their own?
The absorption and the "Rebranding and Repositioning" of CentOS were both done after the IBM acquisition. RedHat is not a company anymore. It's a department under IBM.
Make no mistake. No hard feelings towards IBM and RedHat here. They are corporations. I'm angry to be rug-pulled because we have been affected directly.
Lastly, in the words of Bryan Cantrill:
> You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end.
You're wrong. CentOS Stream was announced September/October 2019, too close to the IBM announcement to be an IBM decision; it had been in the works for quite some time before, and in fact this all started in 2014 when Red Hat acquihired CentOS.
From 2014 to ~2020 you were under the impression that nothing had changed, but Red Hat had never cared about CentOS-the-free-RHEL. All that Red Hat cared about was CentOS as the basis for developing their other products (e.g. OpenStack and OpenShift), and when Red Hat came up with CentOS Stream as a better way to do that, Red Hat did not need CentOS Linux anymore.
Anyhow, I've been through that and other stuff as an employee, and I'm pretty sure Red Hat is more than able to occasionally fuck up on its own, without any need for interference from IBM.
Underneath it all, compatibility is what matters. At AlmaLinux we still target RHEL minor versions and will continue to do so. We're a clone in the sense of full compatibility but a derivative in the sense that we can do some extra things now. This is far, far better for users and also lets us actually contribute upstream and have more of a mutually beneficial relationship with RH versus just taking.
Sometimes the hardware or the software you run requires exact versions of the packages with some specific behavior to work correctly. These include drivers' parts on both kernel and userland, some specific application which requires a very specific version of a library, so on and so forth.
I for one, can use Alma for 99% of the time instead of the old CentOS, but it's not always possible, if you're running cutting edge datacenter hardware. And when you run that hardware as a research center, this small distinction cuts a lot deeper.
Otherwise, taking the LEAPP and migrating to Alma or Rocky for that matter is a no-brainer for an experienced group of admins. But when the computer says no, there's no arguing with that.
Basically the goal is still to fit the exact situation you just brought up. I'm not aware of this ever not being the case; if it weren't the case for some reason, then we have a problem we need to fix.
All of the extra stuff we do, patch, etc. is with exactly what you just stated in mind.
As I said, in some cases Rocky is a better CentOS replacement than Alma is.
But to be crystal clear, I do not discount Alma as a distribution or belittle the effort behind it. Derivative, clone or from scratch, keeping a distro alive is a tremendous amount of work. I did it, and know it.
It's just me selecting the tools depending on a suitability score, and pragmatism. Not beef, not fanaticism, nothing in that vein.
Let us know if you have any issues!
I'd also argue that CentOS classic was only mostly bug-for-bug compatible, though probably close enough for most. It shared sources but used a different (complex) build system, as I understand it.
Not anymore. I just use the latest ubuntu LTS and call it a day.
IBM/RedHat was soo predictably short sighted on this.
In general, while RHEL is obviously still an important revenue source, there's also a lot of focus on OpenShift going forward, which has done a pretty good job of covering (and more) the inevitable RHEL declines moving forward.
They are completely different products just reusing branding to confuse what people are asking for.
RHEL Developer is closer, as a no-support, no-cost version of RHEL, but you still have the deal with the licence song and dance.
CentOS gave folks a free version that let you run some dev environments that mostly mirrors prod, without worrying about licences or support. CentOS stream doesn't do this out of principle. It's upstream.
RHEL is the enterprise gold standard.
Fedora is a lot of the pipeline for it, which itself has become an incredible server and desktop platform.
All the work with Open shift, backstage, podman / qubelet, etc.
They're going to be fine, from my graybeard position.
Fedora is the upstream for RHEL.
You are going to see RHEL transition to bootc: https://docs.fedoraproject.org/en-US/bootc/
Get with the times, fellow gray beard: https://github.com/redhat-cop/redhat-image-mode-demo
---
* What is RHEL Image Mode?
RHEL Image mode is a new approach for operating system deployment that enables users to create, deploy and manage Red Hat Enterprise Linux as a bootc container image.
This approach simplifies operations across the enterprise, allowing developers, operations teams and solution providers to use the same container-native tools and techniques to manage everything from applications to the underlying OS.
* How is RHEL Image Mode different?
Due to the container-oriented nature, RHEL Image mode opens up to a unification and standardization of OS management and deployment, allowing the integration with existing CI/CD workflows and/or GitOps, reducing complexity.
RHEL Image mode also helps increasing security as the content, updates and patches are predictable and atomic, preventing manual modification of core services, packages and applications for a guaranteed consistency at scale.

---
Show me on this doll where RHEL touched you.
Apart from that, in terms of keeping RHEL relevant, most of the attention is on making it easier to operate fleets at scale rather than the OS itself. Red Hat Insights, Image Builder, services in general, etc.
Those are the key things that would keep it competitive against Ubuntu, Debian, Alma, Oracle etc.
Of course I can’t speak for all the teams, but all new projects are going out on Kubernetes and we don’t care about RHEL at all; typically it’s Alpine or Debian base images
Not a product, but a service: is Red Hat Linux a counter example?
I worked for a company acquired by IBM, and we held hope like you are doing, but it was only a matter of time before the benefit cuts, layoffs, and death of the pre-existing culture.
Your best bet is to quit right after the acquisition and hope they give you a big retention package to stay. These things are pretty common to ease acquisition transitions and the packages can be massive, easily six figures. Then when the package pays out you can leave for good.
none of that has happened for us at Red Hat. Other than the one round of layoffs which occurred at the time that basically every tech company everywhere was doing much larger layoffs, that was pretty much it and there's no reason to think our layoffs wouldn't have been much greater at that time if we were not under the IBM umbrella.
Besides that, I don't even remember when we were acquired; absolutely nothing has changed for us in engineering. We have the same co-workers, still using all Red Hat email / intranets / IT, etc., there's still a healthy promotions pipeline, all of that. I don't even know anyone from the IBM side. We had heard all the horror stories of other companies IBM acquired but for whatever reason, it's not been that way at all for us, at least in the engineering group.
We had a really fun time where the classic s-word was thrown around... "s y n e r g y". Some of the folks I got to meet across the aisle had a pretty strong pre-2010 mindset. Even around opinions of the acquisition, thinking it was just another case of SOP for the business and we'd be fully integrated Soon™.
The key thing people need to remember about the Red Hat acquisition is that it was purely for expertise and personnel. Red Hat has no (or very little) IP. It's not like IBM was snatching them up to take advantage of patents or whatnot. It's in their best interest to do as little as possible to poke the bear that is RH engineering because if there was ever a large scale exodus, IBM would be holding the world's largest $34B sack of excrement we've seen. All of the value in the acquisition is the engineering talent and customer relationships Red Hat has, not the products themselves. The power of open source development!
It's heartening to hear that your experience in engineering has been positive (or neutral?) so far. Sales saw some massive churn because that's an area IBM did have a heavier impact in. There were some fairly ridiculous expectations set for year-over-year, completely dismissing previous results and obvious upcoming trends. Lost a lot of good reps over that...
Oh the “synergy” rocket chat channel we had back then…
Things have been changing, for sure. So has the industry. So have our customers. By and large, Red Hatters on the ground have fought hard to preserve the culture. I have many friends across Red Hat, many that transitioned to IBM (Storage, some Middleware). Folks still love being a part of Red Hat.
On the topic of ridiculous expectations…there’s some. But Red Hatters generally figure out how to do ridiculous things like run the internet on open source software.
FWIW, the change at Red Hat has always been hard to separate between the forces of IBM and the reality of changing leadership. In a lot of ways those are intertwined because some of the new leadership came from IBM. Whatever change there was happened relatively gradually over many years.
I agree with you FWIW. The company also basically doubled in size from 2019 to 2023. It's very hard to grow like that and experience zero changes. And COVID happened shortly after so that also throws a wrench into the comparisons.
The point is, it's hard to point to any particular decisions or changes I disliked and say "IBM did that"
Cormier and Hicks have their strengths. Hicks in particular seems to care about cultural shifts and also seems adept at identifying key times and places to invest in engineering efforts.
The folks we have imported from IBM are hiring folks that are attempting to make Red Hat more aggressive, efficient, innovative. Some bets are paying off. More are to be decided soon. These kinds of bets and changes haven’t been for everyone.
Longtime Red Hatter here. Most of any challenges I see at Red Hat around culture I attribute to this rapid growth. In some ways it's surprising how well so many relatively new hires seem to internalize the company's traditional values.
We thought the same thing at VMware until Hock moved WITH THE QUICKNESS to jack up prices and RIF a ton of engineering staff.
That said, I'm in tech sales at the Hat now, and IBM is definitely around, but it's still a cool company that tries hard to treat their people right.
They also care A LOT about being an open-source company. Most of my onboarding was dedicated to this, and sales treats it seriously.
I know there are horror stories around this acquisition and lots of predictions about what will happen, but only time will tell. At a minimum, it has been a delight to use the Hashicorp software stack along with the approach they brought to our engineering workflow (remember Vagrant?). These innovations and approaches aren't going away.
I used it literally this year to create a test double of the NUC that runs my home automation stack. I also used Packer to configure Flatcar and create the qcow2 that Vagrant consumes.
Vagrant is still the best tool for creating a general purpose VM on your machine. It got kind-of forgotten in the containers and Kubernetes hype, but it still gets the job done. Packer is also the best tool for creating VM images that got buried for the same reasons.
The datacenter is coming back, though. IBM would be smart to invest in these tools as loss leaders to TFE and Vault and monetize the providers, IMO.
There are worse companies to get bought by, but if you've only ever worked at startups then you're not likely to enjoy what this becomes.
When IBM acquired that company, after a few weeks, this guy had a meeting with new engineering people. The very first meeting, they changed things for him. Instead of a single winding road of development, they wrote out a large spreadsheet. The rows were the distinguishable parts of his large'ish and clever architecture; the columns were assignments. They essentially dismantled his world into parts, for a team to implement. He was distraught. He didn't think like that. They did not discuss it, it was the marching orders. He quit shortly afterwards, which might have been optimal for IBM.
One could argue that to deliver his best continuously he adapted to changing circumstances and left.
Regardless of the general sentiment, hoping for the best outcome for all of you.
I know we aren't perfect. In fact it's my turn this week, and I've utterly failed at keeping up with triage!!
We get tremendous joy and value from engaging with our community. Thanks for your patience (with me in particular!)
I really love these stories. Our customers constantly shock me with how large of clusters they're able to manage with just a few people. >10k nodes by <10 people isn't uncommon although we are not good at rigorously collecting data on this.
Nomad is far from perfect, but we really strive to ease the pains of managing your compute substrate.
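For anyone who hasn't looked at it, the whole interface is a single HCL job spec; a minimal sketch looks roughly like the following (the job name, image, port, and counts are illustrative assumptions, not anything from this thread):

```hcl
job "web" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 3

    network {
      # Static host port for simplicity; dynamic ports are the more common choice.
      port "http" {
        static = 8080
      }
    }

    task "app" {
      driver = "docker"

      config {
        image = "nginx:1.27"
        ports = ["http"]
      }

      resources {
        cpu    = 200 # MHz
        memory = 256 # MB
      }
    }
  }
}
```

`nomad job run web.nomad.hcl` submits it, and the scheduler handles placement, restarts, and rolling updates from there.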
No matter what they tell you, your day to day will not improve. For my area, it was mostly business as usual, but a net decrease in comp because IBM's ESPP is trash.
As you know, the layoffs happened around the same time as the rest of the industry's layoffs (fashion firing), so I don't feel like they had a significant effect on the culture.
I am fully remote though, and have been for 15 years.
A lot of what was communicated during the acquisition process was how IBM was going to super power Red Hat and help Red Hat grow into an even larger entity, and how Red Hat actually needed IBM to survive.
Got promoted twice, once between teams. Went to multiple countries, and I was in product security (a cost center) at the time.
> No matter what they tell you, your day to day will not improve.
I am down to a 40 hour work week, down from 50-60.
> Not long after I left, they laid off a bunch of staff.
I don't see any difference; corporate has laid off many different groups. I guess the only difference is that the mothership laid off people in the US, so it hits closer to your home. I've seen layoffs in Support in the Philippines, Documentation in Brisbane, and most of Singapore GLS/GPS.
The USA has always been insulated from the layoffs because execs find it harder to lay off people you meet in the corridors.
> A lot of what was communicated during the acquisition process was how IBM was going to super power Red Hat and help Red Hat grow into an even larger entity, and how Red Hat actually needed IBM to survive.
Maybe this was in the US, most of the IBM briefing/cheerleading meetings were not held during the APAC timezone so I skipped them and prioritized high value work. I have always assumed that if I needed to know something my manager would tell me.
If I was getting let go, there's nothing I could do about it if I was already doing the work I was supposed to be doing.
I'm not going anywhere! My grammar isn't always the best, so I apologize for the confusion.
Then they did the license change, which didn't reflect well on them.
Now it's being sold to IBM, which is essentially a consulting company trying to pivot to mostly undifferentiated software offerings. So I guess Hashicorp is basically over.
I suspect the various forks will be used for a while.
There have been lifecycle rules in place for as long as I can remember to prevent stuff like this. I'm not sure this is a "problem" unique to terraform.
Terraform's provider model is fundamentally broken. You cannot spin up a k8s server and then subsequently use the k8s modules to configure the server in the same workspace. You need a different workspace to import the outputs. The net result was we had like 5 workspaces which really should have been one or two.
A seemingly inconsequential change in one of the predecessor workspaces could absolutely wreck the later resources in the latter workspaces.
It's very easy in such a scenario to trigger a delete and replace, and for larger changes, you have to inspect the plan very, very carefully. The other pain point was I found most of my colleagues going "IDK, this is what worked in non-prod" whilst plans were actively destroying and recreating things, as long as the plan looked like it would execute and create whatever little thing they were working on, the downstream consequences didn't matter (I realize this is not a shortcoming of the tool itself).
It’s fair to complain that terraform requires weird areas of expertise that aren’t that intuitive and take a little bit of a learning curve, but it’s not really fair to complain that it should prevent bad practices and inexperience from causing the issues they typically do.
https://registry.terraform.io/providers/hashicorp/kubernetes...
> The most reliable way to configure the Kubernetes provider is to ensure that the cluster itself and the Kubernetes provider resources can be managed with separate apply operations. Data-sources can be used to convey values between the two stages as needed.
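Concretely, that two-stage pattern looks roughly like the sketch below. This is an assumed EKS setup with made-up names: the cluster is created in one workspace/apply, and a second workspace configures the Kubernetes provider from data sources.

```hcl
# Stage 1 (its own workspace/apply): create the cluster.
resource "aws_eks_cluster" "main" {
  name     = "example"
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# Stage 2 (a separate workspace/apply): look the cluster up via data
# sources and configure the Kubernetes provider from the results.
data "aws_eks_cluster" "main" {
  name = "example"
}

data "aws_eks_cluster_auth" "main" {
  name = "example"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.main.token
}
```

Putting both stages in one root module is exactly what the provider docs warn against, which matches the experience described upthread.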
There are shortcomings in the kubernetes provider as well that make wanting to maintain that in one state file a nonstarter for me.
That's what I expected lifecycle.prevent_destroy to do when I first saw it, but indeed it does not.
That said the failure mode is also a bit more than "a badly reviewed PR". It's:
* reviewing and approving a PR that is removing a resource
* approving a run that explicitly states how many resources are going to be destroyed, and lists them
* (or having your runs auto-approve)
I've long theorised the actual problem here is that in 99% of cases everything is fine, and so people develop a form of review fatigue and muscle memory for approving things without actually reviewing them critically.
like, what happens if you forget to free a pointer in c? sorry for snark but there are an unbelievable number of things to complain about in tf, never heard this one.
Assuming you mean 'forget' to free malloc'd space referenced by at least one pointer, that's an easy one .. it's reclaimed by the OS when the process ends.
Whether that's a bad thing or not really depends on context - there are entire suites of interlocked processing pipelines built around the notion of allocating required resources, throughputting data, and terminating on completion - no free()'s
Of course you're going to hurt yourself. If you didn't put lifecycle blocks on your production resources, you weren't organizationally mature enough to be using Terraform in production. Take an associate Terraform course, this specific topic is covered in it.
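For reference, the guardrail in question is a single lifecycle argument on the resource. A minimal sketch, with an illustrative resource type and names and required arguments trimmed:

```hcl
resource "aws_db_instance" "prod" {
  identifier     = "prod-primary"
  engine         = "postgres"
  instance_class = "db.m6g.large"
  # (other required arguments omitted for brevity)

  lifecycle {
    # Any plan that would destroy this resource fails with an error
    # instead of silently scheduling a destroy-and-recreate.
    prevent_destroy = true
  }
}
```

The caveat discussed nearby still applies: if the whole resource block is deleted from the code, the lifecycle block goes with it, which is why the sibling comments point at Sentinel/OPA/cloud-side locks for that case.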
I think the only way to avoid accidentally destroying a resource is to refer to it somewhere else, like in a depends_on array. At least that would block the plan.
Azure Locks (which you can also manage with Terraform), Open Policy Agent, Sentinel rules, etc. will prevent a destroy even if you remove the definition from your Terraform codebase. Again, if you're not operationally mature enough, the problem isn't the tool, it's you.
No, it's code for "don't build a load bearing bridge if you don't understand structural engineering."
> It's fine to point out that that's a suboptimal design for a tool.
This isn't "suboptimal" though. If you delete a stored procedure in your RDBMS and it causes an outage, it's not because SQL/PostgreSQL is suboptimal. Similarly if you accidentally delete files from your file system, it's not because file systems are "suboptimal". It's because you weren't operationally mature enough to have proper testing and backups in place.
$ terraform console
> var.whatever
"its value"
> whatever_resource.foo.whatever_attr
"its value"
If you mean somehow printing things when the configuration is being applied... I think you just need to understand that it's neither a procedural language (it's declarative) nor general-purpose (it's infrastructure configuration).

Plus, there are many times I don't want to have to use the REPL. Maybe I'm in CI or something. The fact that I cannot easily iterate over the values of locals and variables to see what they are in, say, some nested list or object, and just print out the values as I'm going along for the things Terraform does know, is just crappy design.
Mutation via hook?
So no. Terraform has the information internally in many cases. There’s just no easy way to print it out.
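Two workarounds I'm aware of, since there's no print statement: expose the value as an output so it lands in plan/apply logs (and therefore in CI), or pipe an expression into `terraform console` non-interactively. The locals below are made up for illustration:

```hcl
locals {
  subnets = {
    a = "10.0.1.0/24"
    b = "10.0.2.0/24"
  }
}

# Outputs are printed after plan/apply, so they show up in CI logs
# without an interactive REPL session.
output "debug_subnets" {
  value = local.subnets
}
```

`echo 'local.subnets' | terraform console` evaluates a single expression and exits, which also works in a pipeline. It's still a workaround rather than real introspection, which I think is the point being made.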
I find this to be a very strange criticism and is probably indicative of a poor workflow or CI/CD system if anything.
No serious organization with any scale is going to have the only thing standing between them and a production database deletion being two over tired engineers rubber stamping each other's code changes.
Bad config pushes to prod do happen and they can cause outages like the 2024 CrowdStrike outage. You don't want a tool that takes a minor (but significant) error and turns it into a catastrophic one because of poorly thought out semantics. It's better to just start with a tool that requires at least two engineers to explicitly sign off on deletion.
Still not envisioning what your expected solution would look like.
You check the plan, you assess the actions it will take, and you either abandon the plan or apply it. You can't roll back destructive changes in cloud environments; that's not possible today for obvious reasons.
What IaC does provide is a mechanism to quickly rebuild if you do accidentally wipe out resources.
I've worked in environments were our entire fleet was wiped out by AWS (thousands of VMs) and we were able to rebuild in hours because of IaC.
> No serious organization with any scale is going to have the only thing standing between them and a production database deletion being two over tired engineers rubber stamping each other's code changes.
Most "serious organizations" either have policy as code (Sentinel) or are running Terraform with credentials/roles that have reduced capabilities.
> Bad config pushes to prod do happen and they can cause outages like the 2024 Cloudstrike outage. You don't want a tool that takes a minor (but significant) error and turns it into a catastrophic one because of poorly thought out semantics. It's better to just start with a tool that requires at least two engineers to explicitly sign off on deletion.
This is a criticism of all software/infrastructure deployments with no guardrails. There's nothing stopping you from having two engineers sign off on a TF plan. You can absolutely build that system on top of whatever pipeline you are running.
* Funny you mention databases because that's one of the few AWS resources that can be guarded in TF directly - https://registry.terraform.io/providers/hashicorp/aws/latest...
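Presumably that's referring to RDS-style deletion protection, which is enforced by the AWS API itself rather than by Terraform state. A sketch with assumed names and most required arguments trimmed:

```hcl
resource "aws_db_instance" "orders" {
  identifier        = "orders-prod"
  engine            = "mysql"
  instance_class    = "db.t3.medium"
  allocated_storage = 50
  # (other required arguments omitted for brevity)

  # The delete API call itself is rejected until this is flipped off,
  # regardless of what the Terraform plan says, so it survives the
  # resource definition being removed from the codebase.
  deletion_protection = true
}
```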
Except for not "feeling" secure, the only thing everyone wants is a Windows AD file share with ACLs.
Just no one realises this: all the Vault on disk encryption and unsealing stuff is irrelevant - it's solving a problem handled at an entirely different level.
Actually for me, the company I was at that IBM purchased was on the verge of folding, so in that case, IBM saved our jobs and I was there for many years.
Now, we are actively hiring for numerous positions.
Personally, I am not planning to stay much longer. I had hoped that our corp structure would be similar to RedHat, but it seems that they intend to fully integrate us into the IBM mothership.
End of an era.
---
[1]: https://blog.webb.page/2018-01-11-why-the-job-search-sucks.t...
I'm so sorry that happened to you :( I hope you found somewhere else that filled you with excitement.
Two months ago a founder reached out to me, gave me a coding project, I completed it (and got paid!), spoke with his co-founder, and then...nothing. At least I got paid but man, YOU reached out to ME. I don't get it.
I wonder if there is some kind of legal liability involved with sharing feedback with potential candidates? I know this is sometimes the case for referrals/what can and cannot be said in a reference check, etc.
I ended up having to move out of my hometown (Boston) to stay with my wife's friend's family and now we live in CA. I have a delicious loquat tree in my backyard so things worked out, haha!
> filled to the gills with folks who spent a decade or more at IBM before landing at Red Hat
Was this true before the acquisition?

Still broadly correct.
If never profitable (or terrible return on equity), why would you call the layoffs "arbitrary"? It seems pretty reasonable to me.
1. hope you can sucker someone into buying the company
2. keep the VC $ flowing and continue growing, then loop to # 1
3. worst case, need to start making a profit and hope you can survive until # 1. If #1 does not happen, pray(?).
During this time, the founders are pulling in a great salary.
Q: What do you get when you cross Apple and IBM?
A: IBM.
But then the joke was on me when I finally worked for a company owned by Apple and IBM at the same time, and experienced it first hand!
I gave Lou Gerstner a DreamScape [4] demo involving an animated disembodied spinning bouncing eyeball, who commented "That's a bit too right-brained for me." I replied "Oh no, I should have used the other eyeball!"
Later when Sun was shopping itself around, there were rumors that IBM might buy it, so the joke would still apply to them, but it would have been a more dignified death than Oracle ending up lawnmowering [5] Sun, sigh.
Now that Apple's 15 times bigger than IBM, I bet the joke still applies, giving Apple a great reason NOT to merge with IBM.
[1] https://en.wikipedia.org/wiki/Kaleida_Labs
[2] https://en.wikipedia.org/wiki/Taligent
[3] https://en.wikipedia.org/wiki/AIM_alliance
I'm a heavy user of Terraform and Vault products. Both do not belong to this era. Also worked for a startup acquired and dumped by IBM.
So do you find Terraform and Vault good or bad? (sorry, not a native English speaker and I had trouble parsing the sentence)
Secrets whatever your cloud provider has (Google secrets manager etc).
Crossplane is excellent but you need to understand CRDs and kubectl at what I'd consider an intermediate level to really grok it, whereas Terraform's CLI is almost fool-proof.
Relying on cloud key vaults is expensive and locks you in. Vault and Consul can run anywhere, even in your toaster. They also support those same KMS. Also, dead easy TUI and GUI with Vault Enterprise
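To illustrate "can run anywhere": a minimal self-hosted Vault server is a single HCL config file, and the "support those same KMS" part is the optional seal stanza. The paths, addresses, and key alias below are placeholder assumptions:

```hcl
# Local raft storage, so no external database or cloud dependency is required.
storage "raft" {
  path    = "/var/lib/vault/data"
  node_id = "vault-1"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault/tls/vault.crt"
  tls_key_file  = "/etc/vault/tls/vault.key"
}

# Optional: auto-unseal with a cloud KMS instead of manual Shamir key shares.
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal" # placeholder key alias
}

api_addr     = "https://vault.example.internal:8200"
cluster_addr = "https://vault.example.internal:8201"
ui           = true
```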
What, in this era, replaces provisioning cloudy stuff that doesn't require heaps of YAML or a bootstrap Kubernetes cluster for operators to run within?
Some of this is obvious (linux and mainframes aren't a bad combo). Some of it I'm a bit surprised by (openshift revenue seems strong).
Probably already basically returned purchase price in revenue and much more than purchase price in market cap.
A noticeable thing is that in most of these types of plays, the home page has stacked toolbars / marketing / popups / announcements from the parent company and their branding everywhere (IBM XXX powered by Redhat)... yet I see very little IBM logo or corporate pop-up policy jank on redhat.com.
People who worked at companies acquired by IBM and could not afford going anywhere else.
A mixture of both will be involved from now on in decision making regarding your platform formation core products.
And Hashicorp are experts in HCL so I am sure they will love it.
I only correct you because it's an even bigger indictment of Notes that IBM switched off of it.
I knew the company had lost the plot at that point.
It feels quite ridiculous, especially if you are managing "soft" resources like IAM roles via Terraform / Pulumi. At least with real resources (say, RDS instances), one can argue that Terraform / Pulumi pricing is a small percentage of the cloud bill. But IAM roles are not charged for in cloud, and there are so many of them (especially if you use IaC to create a very elaborate scheme).
The kind of customers it is good to have.
Because filtering out price sensitive customers is a sound business strategy.
As a rule of thumb, solve any problem your customer might have. Except not having money.
Business have made a killing in China and India for a reason, after all.
+ For what it is worth, the just-one-percent-of-all-Chinese approach is historically a poor business strategy.
+ As you point out, targeting price sensitive customers puts you in competition with Walmart and Amazon. Not only that but you are competing for their worst customers.
> you're not fighting every startup on the planet for the same handful of clients
Not having access to good clients/customers suggests the business idea might not be viable. Chasing money from people without the wherewithal or will to pay, does not make your business idea viable.
But again it is a rule of thumb.
The only point I was trying to get across is that even "bad" customers are still customers, and that there's still a lot of money to be made meeting people's needs doing the work others don't want to do. I feel like this applies from the bottom of the socioeconomic ladder all the way to the top - that's all. Perhaps I should've made that clearer, and that's on me.
An unsolicited side note: I think the bristling to this post was because of the language you were using. Talking about the poor as if they were to be discarded made you look a bit as if you have no empathy, which might not be fair to you. I get it - business require being hard-hearted if you want to get ahead because if you don't make tough decisions, someone else will - but it probably wasn't your best look, you know?
The context was Hashicorp pricing for a web service, I was not talking about the poor.
Not being able to afford a B2B service is not an injustice.
> there's still a lot of money to be made meeting people's needs doing the work others don't want to do. I feel like this applies from the bottom of the socioeconomic ladder
Are you betting your breakfast on walking your talking?
> even "bad" customers are still customers
That’s why I don’t recommend going out to find them. They tax your ability to provide high quality. You will have enough problems without trying to get lava from a turnip.
> it probably wasn't your best look, you know
For better or worse, it’s not going to keep me up grieving on long winter nights.
It doesn't seem to be good for the customers or the people using the software or the people contributing to the open source code. It also doesn't seem to have been good for the investors, looking at the other comments.
A good customer makes it easier to stay in business.
For fixed price customers that means paying a premium over time and materials.
If a customer pays more under time and materials pricing, they were not a good customer because they were making it harder to stay in business.
Although Terrateam is more tightly integrated with a VCS provider.
Disclaimer: I co-founded Terrateam.
Which is to say strong sustainable products need both.
... but ffs don't let the entire company use enterprise as a reason to ignore practitioner feature requests.
However, to play devil's advocate, the number of Terraform resources is a (slightly weak) predictor for resource consumption. Every resource necessitates API calls that consume compute resources. So, if you're offering a "cloud" service that executes Terraform, it's probably a decent way to scale costs appropriately.
I hate it, though. It's user-hostile and forces people to adopt anti-patterns to limit costs.
The previous pricing model, per workspace, did the same. Pricing models are often based on "value received", and therefore often can be worked around with anti-patterns (e.g. you pay for Microsoft 365 per user, so you can have users share the same account to lower costs).
In that world, I think it'd make more sense to charge per run-time second of performing an operation. I understand the argument you are making but the issue is you get charged even if you never touch that resource again via an operation.
It might make sense if TFC did something, anything, with those resources between operations to like...manage them. But...
That would make sense if you paid per API call to any of the cloud providers.
- Computes a list of resources and their expected state (where computation is generally proportional to the number of resources).
- Synchronizes the remote state by looking up each of these resources (where network ingress/egress is proportional to the number of resources).
- Compares the expected state to the remote state (again, where computation is generally proportional to the number of resources).
- Executes API calls to make the remote state match the expected state (again, where network ingress/egress is proportional to the number of resources).
- Stores the new state (where space is most certainly proportional to the number of resources)
This is a bit simplified, but my point is that in each of the five operations, the number of resources can be used as a predictor for the consumed compute resources (network/cpu/memory/disk). A customer with 10k resources is necessarily going to consume more compute resources than one with 10 resources.
And then at the end, as you said, it "stores the new state", which is basically a big JSON file. 10 resources? 1M resources? I'll leave you to work out how much it probably costs to save a JSON file of that size somewhere like S3 ;)
The previous "per apply" based model penalized early stage companies when your infrastructure is rapidly evolving, and discouraged splitting state into smaller workspaces/making smaller iterative changes.
Charging by RUM more closely aligns the pricing to the scale/complexity of the infrastructure being managed which makes more sense to me.
That said, it has tempted me to move management of more resources into Kubernetes (via Crossplane / Config Connector)
There were runtime limits IIRC but there was nothing stopping Hashicorp offering a “per user” fixed rate plan at several hundred dollars per month to enterprises for the same service.
The various clients I’ve worked for who used TF would have lapped this up. RUM (or the equally opaque “call us” - we won’t answer! - enterprise pricing that preceded it) not so much.
Not great for investors, but insiders benefitted a lot!
I’m not passing judgment as to whether that’s “good” or “bad.” It simply is.
They will get capital losses.
That's not perfect.
At least in the startup narrative that circulates on HN, most early employees at a company with that kind of IPO would hope to have a lottery-like financial windfall. Now their upside is that if they manage to get lucky a second time, they get to offset their winnings? :/
I don't.
But the IRS will let you pay your taxes that way.
It obviously depends how much equity vs income you're talking about.
My doubt in the value of the company was that I've been using Terraform for years in Enterprise settings and never needed to pay the company for anything.
Running a few products. Quoted $1MM or so over 3 years for support. I was able to say no and saved six figures each month.
My kind of operations work VERY well in highly regulated industries because I'm meticulous about regulations. Just because I can do it quickly doesn't mean I do it poorly, and I don't appreciate the assumption that I'm a "cowboy" being reckless.
And it would appear to me you are underinformed about how much migration off of VMware is really happening in these highly regulated industries. There's a tremendous amount of low-profile engineering going on to migrate away. No one is tipping off Broadcom because they don't want BCom to try turning the screws even more.
OpenTofu[0] is the OSS fork though.
[0]: https://github.com/opentofu/opentofu
Disclaimer: involved with OpenTofu
Retail will always be holding the bag. This is known.
They didn't with HashiCorp certainly. Bought some but not too much, and it was part of a housecleaning a few years back (which I'm glad I did).
lmfao what the fuck? The source they reference: https://www.idc.com/getdoc.jsp?containerId=US51953724
These clowns want $2500 goddamned american dollars for the privilege of reading their bloviations on this topic, which i absolutely will not pay.
You know it's bad when the only people making money on this crap are management consultants.
Thinking back to 2014 using vagrant to develop services locally on my laptop I never would have imagined them getting swallowed up by big blue as some bizarre "AI" play. Shit is getting real weird around here.
You aren’t the target market for their “bloviations” - they are targeted at executives, and it isn’t like the executive pays this out of their own pocket, there is a budget and it comes out of the budget. Plus these reports generally aren’t aimed at technical people with significant pre-existing understanding of the field, their audience is more “I’m expected to make decisions about this topic but they didn’t cover it in my MBA”, or even “I need some convincing-sounding talking points to put in my slides for the board meeting, and if I cite an outside analyst they can’t argue with that”
Commonly with these reports a company buys a copy and then it can be freely shared within the company. Also $2,500 is likely just the list price and if you are a regular customer you’ll get a discount, or even find you’ve already paid for this report as part of some kind of subscription
Who might not have much of an engineering team, or not one with relevant expertise… and why should they trust the vendor’s engineering team? If they are about to sign a contract for $$$, being able to find support for it in an independent analyst report can psychologically help a lot in the sales cycle
While the most useful reports for sales are those which directly compare products, like Gartner Magic Quadrant or Forrester Wave - a powerful tool if you come out on top - these kind of more background reports can help if the sales challenge is less “whose product should I buy?” and more “do I even need one of these products? should we be investing money in this?”
My bills have been paid by working for vendors, where I have seen how sales and marketing use their reports in action. I have seen the amount of effort engineering and product management put in to try to present the best possible vision of their product and its future potential to these analysts. (I've never been personally directly involved in any of those activities though, I've just observed them from the margins.)
But, it isn't like the vendors have a huge amount of choice – if you refuse to engage with the analysts and utilise their reports in your sales cycle, what happens when your competitors do?
Hopefully they do the right thing and hand hashicorp over to Redhat so they can open source the shit out of it. So they can do things like make OpenTofu the proper upstream for it, etc.
"Modern digital businesses need to be able to adapt to changing end-user demand, and since feature flags decouple release from deployment, it provides a good solution for improving software development velocity and business agility," said Jim Mercer, program vice president of IDC Software Development DevOps and DevSecOps. "Further, feature flags can help derisk releases, enable product experimentation, and allow for targeting and personalizing end-user experiences."
Wait. What? This reminds me of the trope of the "wikipedia citation" in high school and college.. that move was worth at most a C+. Are you seriously saying these fucks actually seriously cite this bullshit? In this day and age where even crowdsourced wiki articles seem "credible"? What the actual fuck? I hate this shit.
After the haze of the LLM bubble passes, I hope startups have an exit strategy other than "we'll just get 0.01% of users to pay 6+ figures for support" or "ads".
Good tech deserves a good business model such that it can endure for the long term.
I met some great people along the way that I'm glad to have gotten the opportunity to work with. Godspeed all!
this sounds like corporate AI slop
(Asking for a friend).
In any case, make sure to reach out via the website chat widget / email / demo form, we’re happy to help!
The migration from Terraform to OpenTofu is pretty seamless right now, and documented in the OpenTofu docs[2].
[0]: https://github.com/spacelift-io/spacelift-migration-kit
[1]: https://spacelift.io/blog/how-to-migrate-from-terraform-clou...
[2]: https://opentofu.org/docs/intro/migration/
Disclaimer: work at Spacelift
If given the chance, just take the exit rather than trying to integrate into IBM.
As someone working at Red Hat since before the acquisition, this does not match my experience of "the Red Hat treatment" even a little bit.
I don't doubt that they've handled acquisitions badly in the past but they did a decent job leaving us alone.
For engineering almost no difference other than switching to Slack.
That said, I think a playbook in HCL would be worlds better than the absolutely staggering amount of nonsense needed to quote Jinja2 out of yaml
I would also accept them just moving to the GitHub Actions style of ${{ or ${%, which would for sure be less disruptive, and (AIUI) could even be opt-in by just promoting the `#jinja2:variable_start_string:'${{', variable_end_string:'}}'` directive up into playbook files, not just .j2 files
https://docs.ansible.com/ansible/11/collections/ansible/buil...
For simple use cases, sure, but you could also just use AWS ECS or a similar cloud tool for an even easier experience.
Most of my issues with it aren't related to the scale though. I wasn't involved in the operations of the cluster (though I did hear many "fun" stories from that team), I was just a user of Nomad trying to run a few thousand stateful allocs. Without custom resources and custom controllers, managing stateful services was a pain in the ass. Critical bugs would also often take years to get fixed. I had lots of fun getting paged in the middle of the night because 2 allocs would suddenly decide they now have the same index (https://github.com/hashicorp/nomad/issues/10727)
I know, it's really sad. Kubernetes won because of mindshare and hype and 500,000 CNCF consulting firms selling their own rubbish to "finally make k8s easy to use".
We were ready to invest in HC tools, but they were so damn brittle once we actually got past the smooth apple-like marketing and actually used them. Plenty of odd, not-well-documented behavior, random crashes, even the clusters required a bunch of manual steps to recover from. A major reason to even have a cluster is the self-healing, something tools like MongoDB did right 10+ years ago. Yet we had to manually edit a peers.json file and do all this garbage half the time when our clusters kept dying. It was infuriating. I kept insisting my devops guy had to be wrong when he told me that was the way it was done. I just couldn't believe that anything in 2021 required manual editing of JSON files when we have a million different self-discovery mechanisms (whether it's cloud metadata, or mDNS, etc). But he was 100% right, much to my disbelief.
So we ultimately pulled the plug after months of HC stuff running our QA system, because I just didn't feel comfortable pushing it to production given all the random crashes and behavior issues. And I feel vindicated.
I think if their stuff was more solid, I would 100% be happy to pay for it for our use cases. I thought generally that their ideas and levels of abstraction felt "right"