Peter Gassner converted Veeva (pharmaceuticals software) to a public benefit corporation a few years ago. Veeva is probably 10x smaller and mostly captured by big pharma customers who don't want to compete on software infrastructure and want one place to stay in (FDA) compliance. So in that case the stakeholders have a pretty clear and distinct voice and mission.
In the absence of dominant stakeholders, it's unclear who would govern OpenAI as a public benefit corporation. At a minimum, it's a moral hazard and an invitation to bureaucracy and politics.
I'd be interested in OpenAI developing super clear and transparent customer feedback channels. That could improve AI, give people an incentive to participate in model improvement (instead of reflexively protecting their data), and act as a forcing function against governance gaming. Amazon/Bezos has strong experience in managing by metrics, so it's a shame OpenAI has committed itself to Azure/Microsoft.
Another alternative is the M-form, where subsidiaries are coordinated but live or die based on their own performance rather than cross-subsidies (as seen recently in Google's reorganization into Alphabet). But the history there isn't particularly good for innovation, and even profit lags except in industries with stable markets (Berkshire-style).
> Amazon/Bezos has strong experience in managing by metrics, so it's a shame OpenAI has committed itself to Azure/Microsoft.
If they are a customer/investor, as opposed to an internal leadership team, it seems preferable to have someone with comparable resources who is more permissive/hands-off.
> However, it has proved popular with AI companies. Both Anthropic, the maker of the Claude AI tool, and Musk’s xAI are PBCs. One person close to xAI said this meant its probability of being sued was reduced if it did not act in accordance with the shareholders’ interests.
[...]
> “It is intentionally a way for incumbent management and directors to entrench themselves,” he said. “If you can convey the idea to people that you are a good enterprise, a morally safe enterprise and that comes with very little constraints, that has to be tempting to entrepreneurs.”
But I guess these things go both ways, and the road to hell is paved with good intentions.
We just can't have any nice things.
Yes, a big benefit of a PBC is that you're not beholden only to increasing shareholder value - that's obviously inherent to the PBC model, and I think it could be used for both "good and bad". At the same time, PBCs need to state what their "public benefit" is, and (my understanding is that) they have a fiduciary responsibility to others besides just shareholders. Thus, I imagine they could also be sued by people claiming they aren't fulfilling their public benefit mission. Point being, PBCs have pros and cons, and it will take some court cases to find out where the line truly lies.
But as with all entities, there is nothing inherent about them that would make the individuals within them act differently than they would in a simple C-Corporation.
It all comes down to the bylaws.
An astonishing justification proffered for OpenAI's attempt to remove itself from being controlled by a non-profit entity. A PBC might be better than a regular c-corp, but it is not better than a non-profit. OpenAI is pursuing this arrangement in order to grant Sam Altman more control and enable fundraising; the PBC thing is a way to fob off those concerned by exactly the wrong things (i.e. that Sam Altman might be incorrectly removed from power by external stakeholders, rather than, uh, being correctly removed from power by internal stakeholders).
And up to 60% in tax deductions for the for-profit entity and executives, which carry forward for 5 years when the unused portion exceeds the current year's Modified Adjusted Gross Income limit
Here is the default reality:
A) They can do this with any "60%" non-profit ("60%" refers to the category of maximum tax deduction)
B) They won't be in control of the assets after they donate
C) But they'll be losing money they otherwise would have kept, in excess of the tax savings
But all of these are surmountable by already having your own non-profit with the same board members, or aligned board members:
A) OpenAI is a 60% non-profit that they control, and so
B) they'll still be in control of the assets they donate, as if they never moved them
C) They can donate illiquid things that could never have been converted to dollars anyway. For example, those PPUs? Or perhaps just some cloud compute credits? Shares, or membership interests, of an organization, as long as a market value can be pointed to. A bunch of GPUs?
An illustrative example: the new for-profit OpenAI entity sold shares for $6bn at a $157bn valuation, meaning roughly 4% of the shares were exchanged for dollars - and the other 96% of the shares are said to be worth exactly the same, instantly and indefinitely into the future. You could donate 1% of the shares to the non-profit OpenAI, and that's a $1.57bn tax deduction against whatever income you have that year. And if you don't have enough income to offset, it keeps rolling forward.
In a 60% non-profit, donated assets can only offset up to 30% of your income, and cash donations can offset an additional 30%; alternatively, cash alone can offset up to 60%. Since donating cash is suboptimal, do 30% in illiquid, appreciated assets and 30% in cash. The cash can also come from borrowed funds - not seen as optimal, but it can be strategic.
Once again, all the donated assets are in a non-profit you still control, while obtaining the tax benefits.
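The arithmetic described above can be sketched as a toy calculation. This is a hypothetical illustration: the annual income figure, the 30% appreciated-asset limit, and the 5-year carryforward mirror the comment's claims, and none of the numbers represent anyone's actual finances.

```python
# Hypothetical illustration of the share-donation arithmetic described above.
# All figures are assumptions for the sake of the example.

VALUATION = 157e9        # implied company valuation ($157bn)
CASH_RAISED = 6e9        # shares actually sold for cash ($6bn)
DONATED_FRACTION = 0.01  # fraction of shares donated to the non-profit

sold_fraction = CASH_RAISED / VALUATION   # ~3.8% of shares exchanged for dollars
deduction = DONATED_FRACTION * VALUATION  # paper deduction from donating 1%

# Simplified carryforward: appreciated assets can offset up to 30% of income
# per year, with the unused deduction rolling forward for up to 5 more years.
INCOME_PER_YEAR = 1e9    # assumed taxable income, for illustration only
ASSET_LIMIT = 0.30

remaining = deduction
used_per_year = []
for year in range(1, 7):  # donation year plus 5 carryforward years
    if remaining <= 0:
        break
    used = min(remaining, ASSET_LIMIT * INCOME_PER_YEAR)
    used_per_year.append(used)
    remaining -= used

print(f"shares sold for cash: {sold_fraction:.1%}")
print(f"deduction from 1% donation: ${deduction / 1e9:.2f}bn")
print(f"offset used per year ($bn): {[round(u / 1e9, 2) for u in used_per_year]}")
```

With these assumed numbers, a $1.57bn deduction against $1bn of annual income takes six years to consume: $0.3bn per year for five years, then the final $0.07bn - exactly the rolling-forward behavior the comment describes.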
How is this useful? Many ways. The non-profit has financial ammunition and firepower. It can pump investments you also own by purchasing them or getting involved with them. The regulations curbing this are quite flexible: the ownership-percentage threshold between you and the non-profit for violating self-dealing rules is fairly high, and even when violated you have something like 1-3 years to get back under those thresholds - and even that only covers shareholdings. It can generate revenue for things you like, buy more GPUs, burn more energy on compute from your organization, and make external investors enamored.
It will also be doing its stated mission: research. Honestly, the stated mission is exactly what I would do if I were also interested in doing all of the above.
The longest preparation step is creating the non-profit and getting it approved. They already have that, so the rest is just pressing play.
And it wasn't ever thus, either. Dodge v. Ford Motor Co. really screwed things up.
Contrary to what a lot of other commenters are saying, the shareholder primacy rule is relatively recent (basically since the 80s), and before that there was broad recognition that corporate boards could balance the interests of various stakeholders. This essay also argues that Dodge v Ford Motor (which was in the Michigan Supreme Court) actually had little practical impact - it was cases like Unocal and Revlon in the Delaware courts that had a much bigger impact on actual board behavior in the US.
I'd also question whether there is actually going to be a meaningful check on any of OpenAI's actions, compared to what there would be for a for-profit corporation. I'd bet no.
The amazing thing is that the pursuit of personal enrichment, when regulated in moderation, tends to create a system that is in the public’s benefit. So we allow companies that are self interested because that, at scale, actually supports the public benefit more than requiring all companies to do specific things mandated as for the public benefit.
Tends is doing a lot of heavy lifting here.
The model of "Let's have the worst people do whatever they want, for the most selfish possible reasons, and hope for the best" has also produced a horrific amount of suffering. Crazy amounts of socialized, externalized losses get hand-waved away.
Nope, it's also 27th in social mobility[2].
[1] https://en.wikipedia.org/wiki/List_of_countries_by_Human_Dev...
[2] https://en.wikipedia.org/wiki/Global_Social_Mobility_Index
I sure hear a lot of faux-market fascists telling me how to live these days.
Yes, fully socialist countries seem to be doing even worse, but so many of the good parts of my employment contract came from the New Deal and its wild success shows in everything from the infrastructure to the demographic charts that are still defined by echoes of the baby boom! It's no wonder the businessmen got together and tried to coup FDR and have spent the last 50 years trying to wind back these policies. We don't need a Glorious Revolution but we do need another Roosevelt.
This is a profoundly unserious framing. Suffering is the default state of man, and our current system has done far more to eradicate said suffering than any other.
The historical alternative system was voluntarily abandoned by all of its impoverished practitioners in favor of the current system.
The current "alternative" to our system is just our system with high taxes.
Just take the L.
What an awful and depressing view to hold. I hope one day you find a small piece of happiness in your life.
Things used to be worse, and our default state is “suffering” so we should give up, stop here and lick the boots of the current, obviously flawed system? That doesn’t even make sense as an argument.
This seems like gaslighting. Of all the takes, "You were meant to suffer, so if I made you suffer and then I did stuff to reduce the suffering you should be grateful" is certainly one of them.
Capitalism is good at avoiding boondoggles but it's a sucker for asset pumps. We don't have to pay heavy taxes to support a gigantic network of high-speed trains to nowhere but we do have to pay monopoly rents on land, two-sided markets, Metcalfe-law networks, and so on.
Would the Coca-Cola company be allowed to exist in this world? Are professional sports teams good for society?
Coca-Cola could easily be a PBC, so could pro sports teams. It just gives you the freedom to shut down short-sighted activist shareholders who can't see past next quarter. It doesn't require you to be Mother Teresa, it just doesn't let anyone punish you if you decide that's what you want to be.
This would sidestep the problem of:
> but what about all the stuff in the middle
The phrase "requiring benefit" is doing a huge amount of heavy lifting in this statement by assuming benefit is something that can easily be agreed upon, or that it's not already taken into account.
Since it will obviously be difficult to agree on this, what if we just vote? I'd propose a system where each of us gets a certain amount of tokens, and then we use our tokens to vote on which products are beneficial. Those companies that make beneficial products will receive the most tokens. They can distribute some of them to their employees and shareholders. The companies that don't provide enough benefit will naturally go under as they fail to accumulate enough tokens to attract employees. We can also set up systems to ensure that everyone receives a minimum number of tokens, and also so that certain people don't accumulate so many tokens as to give them too much control over society.
I believe this system will work well for rewarding companies which produce benefits and punishing those who don't and look forward to it being implemented.
Perhaps we could also list 5 that we think are having a positive impact. The assets of the bad ones could be auctioned off to the good ones, and the money from that auction could go toward severance for anyone whose job was dissolved.
Yes occasionally corporate owners and managers do bad things, and escape serious financial consequences due to the liability shield. So what. That is an acceptable consequence considering all of the other benefits.
This seems like a very murky structure. Probably fine in good times but how would it operate in practice when things are going badly?
A net worth of $150 billion is quite an achievement.
And not too long ago he was touting that he was in it for the good of humanity, and "proved" it by taking only a $60k salary with no equity.
A true paragon of honesty and straight-dealing.
https://www.reuters.com/technology/artificial-intelligence/o...
Source: I made it up
> ChatGPT maker considers largely untested company model to protect chief executive Sam Altman from outside interference
Or maybe the motivation is some other technical maneuver.
Or maybe the motivation is PR/optics.
Is there anyone who trusts OpenAI to be honest about their intentions, at this point?