During the process, there was a bidding war. They said “make your prime offer” so, knowing he was a mathematician, we made an offer that was a prime number :-)
So neat to see him be recognized for his work.
EDIT: the example programs for their book are available in Common Lisp and Python. http://incompleteideas.net/book/the-book-2nd.html
Also, Moore's law has become a self-fulfilling prophecy. Now more than ever, AI is putting a lot of demand on computational power, to the point that it is driving chip makers to create specialized hardware for it. It's becoming a flywheel.
Say you have a module written in VHDL or Verilog and it is passing regressions and everyone is happy. But as the author, you know the code is kind of a mess and you want to refactor the logic. Yes, you can make your edits and then run a few thousand directed tests and random regressions and hope that any error you might have made will be detected. Or you can use formal verification and prove that the two versions of your source code are functionally identical. And the kicker is it often takes minutes to formally prove it, vs hundreds to thousands of CPU hours to run a regression suite.
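A toy sketch of the idea in Python (purely illustrative: the two functions below are made up, and real equivalence checkers work symbolically with BDD/SAT engines rather than by enumerating inputs):

  from itertools import product

  def original(a, b, c):
      # "messy but working" version: a 2-to-1 mux with c as the select, bit 0 only
      return (a & ~c | b & c) & 1

  def refactored(a, b, c):
      # cleaned-up version the author believes is equivalent
      return b & 1 if c else a & 1

  def equivalent(f, g, n_inputs=3):
      """Return a counterexample input tuple, or None if f == g on every input."""
      for bits in product((0, 1), repeat=n_inputs):
          if f(*bits) != g(*bits):
              return bits
      return None

  print(equivalent(original, refactored))  # None -> functionally identical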
At some point the source code is mapped from an RTL language to gates, and later those gates get mapped to a mask set. The software to do that is complex and can have bugs. The fix is to extract the netlist from the masks and then formally verify that the extracted netlist matches the original RTL source code.
If your code has assertions (and it should), formal verification can be used to find counterexamples that disprove the assertion.
But there are limitations. Often the logic is too complex and the proof is bounded: it can show that from some initial state no counterexample can be found in, say, 18 cycles, but there might be a bug that takes at least 20 cycles to expose. Or it might find counterexamples that arise only in illegal situations, so you have to manually add constraints to tell it which input sequences are legal (which often requires modeling the behavior of the module, and that modeling can itself have bugs...).
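To make the "bounded" part concrete, here is a made-up miniature in Python: enumerate every input sequence up to a depth bound from the reset state and look for an assertion violation. (Real tools do this symbolically; the saturating counter and the property are invented for the example.)

  from itertools import product

  def step(state, inp):
      # made-up design under test: a counter that increments when inp == 1 and saturates at 7
      return min(state + inp, 7)

  def assertion(state):
      # property under check: the counter never reaches its saturation value
      return state < 7

  def bounded_check(depth):
      """Return a violating input sequence, or None if none exists within `depth` cycles."""
      for seq in product((0, 1), repeat=depth):
          state = 0  # reset state
          for inp in seq:
              state = step(state, inp)
              if not assertion(state):
                  return seq
      return None

  print(bounded_check(6))  # None: no violation reachable within 6 cycles
  print(bounded_check(8))  # finds a counterexample: this bug needs at least 7 cycles to expose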
The formal verifiers that I'm familiar with are really a collection of heuristic algorithms and a driver which tries various approaches for a certain amount of time before switching to a different algorithm to see if that one can crack the nut. Often, when a certain part of the design can be proven equivalent, it aids in making further progress, so it is an iterative thing, not a simple "try each one in turn". The frustrating thing is you can run formal on a module and it will prove there are no violations with a bounded depth of, say, 32 cycles. A week later a new release of your formal tool comes out with bug fixes and enhancements. Great! And now that module might have a proof depth of 22 cycles, even though nothing changed in the design.
My worst fear, which is already happening because it works-ish, is vague/fuzzy systems becoming the software, because they're so human-like and we don't have anything else. It's a terrible idea, but of course we are in a hurry.
That's the approach behind Max Tegmark and Steven Omohundro's "Provably Safe AGI":
https://arxiv.org/abs/2309.01933
https://www.youtube.com/watch?v=YhMwkk6uOK8
However, there are issues. How do you even begin to formalize concepts like human well-being?
Oh agreed! But with AI we might(!) have the luxury to create different types of brains: logically correct brains for space flight, building structures (or at least the calculations), taxes, accounting, physics, math, etc., and brains with feelings for many other things. Have those cooperate.
ps. thanks for the links!
This is what I consider the limit of the human mind: we have to start with a few assumptions we can't "prove" to build even a formal logic system which we then use to build all the other provably correct systems, but we still add other axioms to make them work.
It's hard for me to even think how AI can help with that.
https://en.m.wikipedia.org/wiki/Quis_custodiet_ipsos_custode...
Excerpt of the first few paragraphs; sorry about any wrong formatting, links becoming plain text, etc. I just pasted it as is:
Quis custodiet ipsos custodes? is a Latin phrase found in the Satires (Satire VI, lines 347–348), a work of the 1st–2nd century Roman poet Juvenal. It may be translated as "Who will guard the guards themselves?" or "Who will watch the watchmen?".
The original context deals with the problem of ensuring marital fidelity, though the phrase is now commonly used more generally to refer to the problem of controlling the actions of persons in positions of power, an issue discussed by Plato in the Republic. It is not clear whether the phrase was written by Juvenal, or whether the passage in which it appears was interpolated into his works.
The phrase, as it is normally quoted in Latin, comes from the Satires of Juvenal, the 1st–2nd century Roman satirist. Although in its modern usage the phrase has wide-reaching applications to concepts such as tyrannical governments, uncontrollably oppressive dictatorships, and police or judicial corruption and overreach, in context within Juvenal's poem it refers to the impossibility of enforcing moral behaviour on women when the enforcers (custodes) are corruptible (Satire 6, 346–348):
audio quid ueteres olim moneatis amici, "pone seram, cohibe." sed quis custodiet ipsos custodes? cauta est et ab illis incipit uxor.
I hear always the admonishment of my friends: "Bolt her in, constrain her!" But who will watch the watchmen? The wife plans ahead and begins with them!
Who will take custody of the custodians?
no comprendere tu commentum
but
apologia unneeded est
Apologia not uh in the realm of consideration, marginally insightful because shitty latin marginally enjoyable
The downside is that you will sometimes not get the optimizations that you want. But this is sort of already the case, even with human-made optimization algorithms.
But if you do AI research with the idea that by teaching machines how to do X, we might also be able to gain insight into how people do X, then ever more complex statistical setups will be of limited value.
Note that I'm not taking either point of view here. I just want to point out that perhaps a more nuanced approach might be called for here.
At the very least we know consistent language and vision abilities don't require lived experience. That is huge in itself; it was unexpected.
I don't think that's true. A good chunk of the progress made in the last few years has been driven by investing thousands of man-hours asking people "Our LLM failed at answering X. How would you answer this question?". So there's definitely some "lived experience by proxy" going on.
I was there, at the moment when pattern matching for vision started to die. It wasn't a complete loss, though; what we learned from that time is still useful in other places today.
Best lesson for me: I vowed never to be the person opposed to new approaches that work.
I think you'll be surprised at how hard that will be to do. The reason many people feel that way is that (a) they've become an expert (often recognized) in the old approach, and (b) they make significant money (or gain something else) from it.
At the end of the day, when a new approach greatly encroaches on your way of life, you'll likely push back. Just think about the technology you feel you derive the most benefit from today. Then imagine that tomorrow someone created something marginally better at its core task, but from which you no longer reap any of the rewards.
Game programs like AlphaGo and AlphaZero (chess) are all brute force at core, using MCTS (Monte Carlo Tree Search) to project potential branching game continuations many moves ahead. Where the intelligence/heuristics come into play is in pruning away unpromising branches from this expanding tree to keep the search space under control; this is done with a board evaluation function that assesses the strength of a given board position and decides whether it is worth continuing to evaluate that line of play.
In Deep Blue (the old IBM "chess computer" that beat Kasparov) the board evaluation function was hand-written using human chess expertise. In modern neural-net based engines such as AlphaGo and AlphaZero, the board evaluation function is learnt, either from human games and/or from self-play, by learning which positions lead to winning outcomes.
So, not just brute force, but that (MCTS) is still the core of the algorithm.
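For the curious, a bare-bones sketch of that skeleton in Python, using single-pile Nim as a stand-in game (the game, constants, and random rollout policy here are just for illustration). AlphaGo/AlphaZero keep the same select/expand/simulate/backpropagate loop but replace the random rollout with a learned value network and bias the selection step with a learned policy network:

  import math, random

  def moves(n):  # legal moves from a pile of n stones: take 1, 2 or 3
      return [m for m in (1, 2, 3) if m <= n]

  class Node:
      def __init__(self, stones, parent=None):
          self.stones, self.parent = stones, parent
          self.children, self.visits, self.wins = {}, 0, 0.0

  def uct_select(node, c=1.4):
      # pick the child with the best exploitation + exploration score
      return max(node.children.values(),
                 key=lambda ch: ch.wins / ch.visits
                 + c * math.sqrt(math.log(node.visits) / ch.visits))

  def rollout(stones):
      """Random playout; return 1.0 if the player to move from this state wins."""
      player, winner = 0, 1          # an empty pile means the player to move has already lost
      while stones:
          stones -= random.choice(moves(stones))
          if stones == 0:
              winner = player        # whoever takes the last stone wins
          player ^= 1
      return 1.0 if winner == 0 else 0.0

  def mcts(root_stones, iters=2000):
      root = Node(root_stones)
      for _ in range(iters):
          node = root
          # 1. selection: descend through fully expanded nodes via UCT
          while node.stones and len(node.children) == len(moves(node.stones)):
              node = uct_select(node)
          # 2. expansion: add one unexplored child
          if node.stones:
              m = random.choice([m for m in moves(node.stones) if m not in node.children])
              node.children[m] = Node(node.stones - m, node)
              node = node.children[m]
          # 3. simulation: random playout (this is what a learned value function replaces)
          result = 1.0 - rollout(node.stones)   # value for the player who moved into `node`
          # 4. backpropagation: flip the result at each level up the tree
          while node:
              node.visits += 1
              node.wins += result
              result = 1.0 - result
              node = node.parent
      return max(root.children, key=lambda m: root.children[m].visits)

  print(mcts(10))  # optimal move is 2 (leave a multiple of 4); MCTS usually finds it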
I've personally viewed well over a hundred thousand rollouts in my training as a chess bot =P
What do you call 2500 years of human game play if not brute force? Cultural evolution took 300K years, quite a lot of resources if you ask me.
A human grandmaster might calculate 20-ply ahead, but only for a very limited number of lines, unlike a computer engine that may evaluate millions of positions for each move.
Pattern matching vs search (brute force) is a trade off in games like Chess and Go, and humans and MCTS-based engines are at opposite ends of the spectrum.
What do you call the attraction of bodies if not love? What is an insect if not a little human?
No, not really. From the paper:
>> Also important was the use of learning by self play to learn a value function (as it was in many other games and even in chess, although learning did not play a big role in the 1997 program that first beat a world champion). Learning by self play, and learning in general, is like search in that it enables massive computation to be brought to bear.
The important notion here, imho, is "learning by self play": the required heuristics emerge out of that; they are not programmed in.
The goal of DeepBlue was to beat the human with a machine, nothing more.
While the quest for deeper understanding motivates a lot of research, most AI (read: modern DL) research is not about understanding human intelligence, but about automating things we could not do before. (Understanding human intelligence is nowadays a different field.)
> most AI (read: modern DL) research is not about understanding human intelligence, but about automating things we could not do before.
Yes, and that's a bad thing. I don't care if shopping site recommendations are 82% accurate rather than 78%, or w/e. We've traded an attempt at answering an immensely important question for a fidget spinner.
> Understanding human intelligence is nowadays a different field
And what would that be?
For example, there are clever ways of rewarding all the steps of a reasoning process to train a network to "think". But DeepSeek found these don't work as well as much simpler yes/no feedback on examples of reasoning.
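A toy contrast between the two schemes, on a hypothetical four-step reasoning trace (every name and number below is invented):

  steps = ["restate the problem", "set up the equation", "solve it", "state the answer"]

  # Scheme 1: a process reward model scores every intermediate step.
  process_rewards = [0.8, 0.6, 0.9, 1.0]     # one (made-up) score per step

  # Scheme 2: only check whether the final answer is correct and give one
  # scalar for the whole trace; every step shares the same signal, and the
  # model has to discover good intermediate steps on its own.
  final_answer_correct = True
  outcome_reward = 1.0 if final_answer_correct else 0.0
  per_step_signal = [outcome_reward] * len(steps)

  print(list(zip(steps, process_rewards)))
  print(list(zip(steps, per_step_signal)))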
[1] There's a lot of confusing naming. For example, due to its historic ties with behavioural psychology, there are a bunch of things called "eligibility traces" and so on. Also, even more than the usual "obscurity through notation" seen in all of math and AI, early RL literature has particularly bad notation. You'd see the same letter mean completely different things (sometimes even opposite things!) in two different papers.
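For anyone wondering what an eligibility trace actually is, here is a minimal tabular TD(lambda) sketch on the classic 5-state random-walk toy problem (the constants are arbitrary). The trace e[s] marks how much credit each recently visited state gets when a TD error arrives:

  import random

  n_states, alpha, gamma, lam = 5, 0.1, 1.0, 0.9
  V = [0.0] * n_states   # state-value estimates

  for episode in range(1000):
      e = [0.0] * n_states           # eligibility traces, reset each episode
      s = n_states // 2              # start in the middle state
      while True:
          s_next = s + random.choice((-1, 1))
          r = 1.0 if s_next == n_states else 0.0     # reward only for exiting on the right
          terminal = s_next < 0 or s_next >= n_states
          v_next = 0.0 if terminal else V[s_next]
          delta = r + gamma * v_next - V[s]          # TD error
          e[s] += 1.0                                # accumulating trace for the current state
          for i in range(n_states):
              V[i] += alpha * delta * e[i]           # credit every eligible state
              e[i] *= gamma * lam                    # traces decay each step
          if terminal:
              break
          s = s_next

  print([round(v, 2) for v in V])  # drifts toward the true values 1/6, 2/6, ..., 5/6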
It’s silly and dangerous. Because you don’t like thing A and they said/did thing A, all of their lofty accomplishments get nullified by anyone. And worst of all, the internet gives your opinion the same weight as that of someone else (or the rest of us) who knows a lot about thing B that could change the world. From a strictly professional capacity.
This works me up because this is what’s dividing people right now, at a much larger scale.
I wish you well.
This has nothing to do with his personal life. He made these comments in a professional capacity at an industry AI conference... The rest of your comment is a total non sequitur.
>And worst of all, the internet gives your opinion the same weight as that of someone else (or the rest of us) who knows a lot about thing B that could change the world. From a strictly professional capacity.
I've worked professionally in the ML field for 7 years so don't try some appeal to authority bs on me. Geoff Hinton, Yoshua Bengio, Demis Hassabis, Dario Amodei and countless other leaders in the field all recognize and highlight the possible dangers of this technology.
I do agree that there is some level of inherent safety issue with such technologies, but look at the atomic bomb vs. fission reactors, etc.: history paves a way through positivity.
Just because someone had an idea that eventually turned out to have some evil branch far from the root idea doesn't mean they started with the evil idea in the first place, or, worse, that someone else wouldn't have.
Sutton and everyone else who has advanced the field deserve condemnation IMO, not awards.
I don't think it's a question of whether their achievements are nullified, but as you mention, how to weight the opinions of various people. Personally, I think both a Turing award for technical achievement and a view that humanity ought to be replaced are relevant in evaluating someone's opinions on AI policy, and we shouldn't forget the latter because of the former.
(Also, this isn't about Sutton's personal life - that's a pretty bad strawman.)
Repressive laws on open AI/models—giving elites total control in the name of safety?
And this alternative perspective from the cult should disqualify someone from a Turing Award despite their achievements?
Reminds me of a quote from Jean Cocteau, of which I could not find the exact words, but which roughly says that if the public knew what thoughts geniuses can have, it would be more terrified than admiring.
In the talk, he says it will lead to an era of prosperity for humanity, though without humanity being in sole control of its destiny. His conclusion slide (at 12:33) literally has the bullet point "the best hope for a long-term future for humanity". That is the opposite of your claim that he "doesn't care if humans all die".
If I plan for my succession, I don't hope nor expect my daughter will murder me. I'm hoping for a long retirement in good health after which I will quietly pass in my sleep, knowing I left her as well as I could in a symbiotic relationship with the universe.
That seems to be a harsh and misleading framing of his position. My own reading is that he believes it is inevitable that humans will be replaced by transhumans. That seems more like wild sci-fi utopianism than ill-will. It doesn't seem like a reason to avoid celebrating his academic achievements.
Edit: especially since I think your implied claim that Sutton would actively want everyone to die seems very much unfounded.
If "we" don't build it, someone else will.
It's not just one Youtube video, it's a repeatedly expressed view:
https://x.com/RichardSSutton/status/1575619655778983936
Valuing technological advance for its own sake "beyond good and bad" is an admirably clear statement of how a lot of researchers operate, but that's the best I can say for it.
This talk isn't that. There are no substantive arguments for why we should embrace this future, and his representation of the opposing side isn't in good faith either; instead he chose to present straw-man versions of its arguments.
He concludes with "A successful succession offers [...] the best hope for a long-term future for humanity." How this can possibly be true when AI succession necessarily includes replacement eludes me. He does mention transhumanism on a slide, but it seems extremely unlikely that he's actually talking about that, and the whole succession spiel is just unfortunate wording.
To me robots are just as cool.
How is AI going to make its own chips and energy? The supply chain for AI hardware is long and fragile. AGI will have an interest in maintaining peace for this reason.
And why would it replace us? Our thoughts are like food for AI. Our bodies are very efficient and mobile; biology will certainly be an option for AGI at some point.
OK, so do you support laws preventing chip manufacturers and energy providers from becoming reliant on AI?
Pay naive humans to take care of those things while it has to, then disassemble the atoms in their bodies into raw materials for robots/datacenters once that is no longer necessary.
a timeless classic that I still highly recommend reading today!
I wish a lot more games actually ended up using RL, the place where all of this started in the first place. That would be really cool!
Shows he has integrity and is not a careerist focused on prestige and money above all else.
He gave up his US citizenship years ago but he explains some of the reasons why he left. I'll also say that the AI research coming out of Canada is pretty great as well so I think it makes sense to do research there.
Great people and cheap cost of living, but man do I not miss the city turning into brown sludge every winter.
From that perspective, location still matters if you want to maximise impact.
Whereas the introductory book Grokking Deep Learning walks you through implementing your own PyTorch, has a portion about RL near the end, and then has a follow-up book on RL; it is trivial to end up with your own from-scratch model and framework playing tic-tac-toe or snake, even without any math skills beyond multiplication.
This happens without just smacking the reader with the modified Bellman equation, a bunch of chain rule applied backwards, and padded paragraphs intended to sell additional versions to universities.
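As a rough sketch of the kind of from-scratch exercise meant here (not the book's actual code), tabular Q-learning for tic-tac-toe against a random opponent fits in a few dozen lines:

  import random
  from collections import defaultdict

  LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

  def winner(board):
      for a, b, c in LINES:
          if board[a] != " " and board[a] == board[b] == board[c]:
              return board[a]
      return "draw" if " " not in board else None

  def legal(board):
      return [i for i, cell in enumerate(board) if cell == " "]

  Q = defaultdict(float)                       # (board, move) -> value for the learner (X)
  alpha, gamma, epsilon = 0.3, 0.9, 0.1

  def choose(board):
      if random.random() < epsilon:            # explore
          return random.choice(legal(board))
      return max(legal(board), key=lambda m: Q[(board, m)])

  for episode in range(50_000):
      board = " " * 9
      while True:
          state = board
          move = choose(board)                 # X, the learner, moves
          board = board[:move] + "X" + board[move + 1:]
          result = winner(board)
          if result is None:                   # the random opponent (O) replies
              o = random.choice(legal(board))
              board = board[:o] + "O" + board[o + 1:]
              result = winner(board)
          if result is not None:               # terminal: win / loss / draw
              reward = {"X": 1.0, "O": -1.0, "draw": 0.0}[result]
              Q[(state, move)] += alpha * (reward - Q[(state, move)])
              break
          # Q-learning backup: bootstrap from the best move available in the next state
          best_next = max(Q[(board, m)] for m in legal(board))
          Q[(state, move)] += alpha * (gamma * best_next - Q[(state, move)])

  print(max(legal(" " * 9), key=lambda m: Q[(" " * 9, m)]))  # learned greedy opening move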
> The ACM A.M. Turing Award, often referred to as the "Nobel Prize in Computing," carries a $1 million prize with financial support provided by Google, Inc.
Good on Google, but there will be questions about whether their mere sponsorship in any way influences the awards.
If ACM wanted, could it not raise $1m prize money from non-profits/trusts without much hassle?