I graduated high school in the early 2000s and graduated college with a major in computer science and a minor in math. My goal is 5-8 more classes for a second degree in math (major).
Wish me luck!
[0] Study guide: https://course1.winona.edu/bperatt/M311S25/Tests/Test%202/te... Course: https://course1.winona.edu/bperatt/M311S25/Administrative/M3...
It was insane how much better the courses were in the community college. Tiny class of 15. $300 or something. Amazing professor that you could ask questions to like you could in high school. Normal 20-30 question textbook homework where you just work basic problems and build confidence that you know the material.
Meanwhile UT was the opposite. I think I paid $1400/class/semester (and that's a bargain). Lecture halls where you couldn't possibly ask a question. Weird math/physics homework that was like 3-5 super hard questions that I often couldn't figure out; demoralizing. Often a TA who could barely speak English. It's actually quite insulting.
I sometimes think about enrolling in a local college for fun, the experience was that good.
Had this experience at an elite uni as well for math courses. At the time I felt like it pushed me to really grow, and it was absolutely necessary to do well in that specific course (tests often had questions that ~required you to know how to do all the uber-hard homework problems), but I wonder what the research actually says about this sort of homework vs your more standard variety.
I have a vivid memory of one of the questions on a final being basically “sketch the outline of this important thing we studied”. I couldn’t do it. I took the class but didn’t see the forest for the trees.
Later I met people who talked about things with each other, including the big picture. That’s the community I was missing when I took the class solo.
In retrospect, I could have gotten something more out of those problems that I thought were so hard.
Is the course about learning the material at hand, or laying the foundation for graduate-level courses in the same subject? Is it about teaching the most efficient method, or about getting a student used to deriving equations when there's no plug-and-play formula?
I'm sure we can draw similar parallels between csci college courses, big tech interviews, and professional software development. Even though it's all the same pipeline, each stage/stakeholder has different goals, motivations, etc... If you're having a discussion about the pros and cons of an approach, you have to make sure the goals are aligned else you'll just be talking past each other.
I found out I can't stretch my brain to truly understand the fundamentals, so I stopped after a bachelors and don't use my degree at all. I don't mind. It takes truly special people to push the limits, and a lot of not so special people to keep the world running for them.
From what I remember, the university course also had some rote exercises for homework so it isn’t like everyone is only focusing on working the trickier exercises.
This also reminds me of the story Donald Knuth has around working every exercise in the book for a calculus class.
Which would be a non-issue if the US simply had single-payer or universal healthcare.
The only negative for me was that the students were pretty checked out.
I didn't put a lot of thought into where I went to school but if I could do it over again this is something I would have considered when I applied. The school I ended up at did not have many serious students. It was a night and day difference taking courses with even one or two students who were similarly engaged with the material, but most of those students ended up transferring to better schools after a year or two.
You also run into the issue later on that the people you went to school with wash out of industry (or never work in it to begin with) at much higher rates in comparison to those who went to more serious schools.
UT is research focused. Depending on the department, they make the professors teach classes, which is often not aligned with their interests at all. Sometimes I think they are actively trying for bad reviews from students to discourage the university from making them take on a course load.
Whereas at big universities the professors really don't much want to teach; they want to accomplish scientific breakthroughs themselves.
a lot of big universities have people there for research. there is money to be made, grants to be given, and degrees to be minted. and you can feel that too.
source: got out of the military and went to one, then the other.
The tech school considered it a boast that it had more graduate students than undergrad. It was clear where the professors' emphasis was. I recognize the lecture halls where you couldn't ask questions, and the barely-anglophone instructors. (Everyone in the EE department, in particular, seemed to come "fresh off the boat" from China bringing precious little English knowledge with them. The prof for my introductory EE course mumbled on top of it.)
Then I went to state school. Ho-lee shit. Complete difference. The bad profs were incompetent chucklefucks who couldn't cut it in real academia. The good profs actually cared about teaching undergrads.
I learned a lot about choosing a college -- a few years and a few tens of thousands of dollars too late.
I hope people don't take away only the negative side of the article (brain slows down) but also the positive side: the brain gets better with usage. It's uncomfortable. I can churn out programs as complex as programs I've already written, and go to review meetings and planning meetings, without much effort. But solving PDEs reasonably quickly and accurately, I cannot, or have not without a great deal of practise. It's uncomfortable in some weird mental-but-physical sense. But I'm sharper in everything else I do.
One interesting thing about software as career followed by math classes is that there's no compiler - you can type any janky thought into LaTeX and if you don't detect that it's bogus, nothing will, until you show it to a professor.
Also, the information density of maths notation is way higher than that of (good) code. We want code to be readable by someone who doesn't know it; a lot of math seems to be readable only when you're already about 80% familiar with all the prereqs. So there's no just skimming and then hitting compile/test/run (whatever validation you do). It's typing letter by letter and taking the mental effort to actually see and decipher each letter (at least for me at my current stage; I'm trying to do novel research, but my demonstrated understanding of the details of the previous research is embarrassingly low).
Also, weirdly, I still have the same fear of professors that I did as a young person. I manage it better with my decades of maturity (really) but it is still a part of my social interactions.
No one - young or old - does well in math without a great deal of practice :-)
The formal proof community is very interested in exactly this problem! It's not my specialty, but I believe that Lean (https://en.wikipedia.org/wiki/Lean_(proof_assistant)) is one of the very active communities.
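For anyone curious what "a compiler for math" looks like in practice, here is a minimal Lean 4 sketch (purely illustrative, not from the comment above): a bogus step simply fails to type-check, which is exactly the feedback loop the earlier comment said LaTeX lacks.

```lean
-- Lean acts as the missing "compiler" for proofs: a wrong
-- statement or an invalid proof step fails to type-check.
example : 2 + 2 = 4 := rfl            -- holds by computation

-- A claim discharged by citing a library lemma:
example (n : Nat) : n + 0 = n := Nat.add_zero n
```

Changing `4` to `5` in the first line makes Lean reject the file, the same way a compiler rejects a type error.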
It's funny, at the end of each lecture I just want to yell... "NO! Don't stop! I must see how this ends!"
Very similar to when I stop our children's movie and tell them to go take a bath.
I will look into Lean, which is mentioned here.
I recently turned 40 myself and I'm working through their Foundations courses (made to help adults catch up) before tackling the Machine Learning and other uni courses.
Math Academy does what every good application or service does. Make things convenient. That's it. No juggling heavy books or multiple tabs of PDFs. Each problem comes with a detailed solution, so getting one wrong doesn't mean looking around on the internet for a hint about your mistake (this was the pre-ChatGPT era of course, when not getting something correct meant putting down MathJax on stackexchange).
> better than just prompting ChatGPT/Claude/etc
The convenience means you are doing the most important part of learning maths with the most ease: problem solving and practice. That is something an LLM will not be able to do for you. For me, solving problems is pretty much the only way to mostly wrap my head around a topic.
I say mostly because LLMs are amazing at complementing Math Academy. Any time I hit a conceptual snag, I run off to ChatGPT to get more clarity. And it works great.
So in my opinion, Math Academy alone is pretty good. Even great for school level maths I'd say. Coupled with ChatGPT the package becomes a pretty solid teaching medium.
Their marketing website leaves a lot to be desired (a perk since they are all math nerds focused on the product), but here are two references on their site that explain their approach:
- https://mathacademy.com/how-it-works
- https://mathacademy.com/pedagogy
They also did a really good interview last week that goes in depth about their process with Dr. Alex Smith (Director of Curriculum) and Justin Skycak (Director of Analytics) from Math Academy: https://chalkandtalkpodcast.podbean.com/e/math-academy-optim...
Anything in the soft sciences, or biology/organic chemistry, or comp sci. I know there are a lot of courses for the latter especially, but I'm looking for accredited ones.
What I didn't like about the content is that I often had questions about it, but there was no one to ask. Whoever wrote that material was no longer around. It's a frustrating feeling when you can't really trust that what you're studying is factually correct, or isn't misleading.
I assume AI will bring a huge improvement in this respect.
at least, i think i heard alt samman say so.
you plebs and proles better shell out the $50 a month, increasing by $10 per day, to keep dis honest billionaires able to keep on buying deir multi-million dollar yachts and personal jets.
be grateful for the valuable crumbs we toss to you, serfs.
Years of martial arts ingrained that sense of being a life-long learner. I was taught the mantra of "Progress comes to those who train" and "Practice makes permanent" and even though those phrases were focused on learning to beat someone up, I've carried them on into other parts of my life.
This YouTube playlist was invaluable for me: https://youtube.com/playlist?list=PLmM_3MA2HWpYYo7QExaRvor_u...
I'm doing something similar: I just turned 50 and have been taking graduate ML classes where I work (at Carnegie Mellon). When I finish the graduate certificate program in generative AI and LLMs that I am enrolled in, I will be only two semesters away from earning a full masters degree.
> My goal is 5-8 more classes for a second degree in math (major).
Why not get a masters degree?
Edit: answered here: https://news.ycombinator.com/item?id=43282629
> Wish me luck!
You don't need it. :)
Me too. High five!
> My goal is 5-8 more classes for a second degree in math (major).
But why? Wouldn't it make more sense to go for a master in computer science? Are you going to use it for work. Otherwise, aren't you going to "lose it" anyways? Also, is your job paying for the degree or are you paying out of pocket?
Maybe it's standard in lots of places, but I've mostly seen study guides where they just list a ton of topics and that's it.
The main thing is there are no surprises or tricks. The exams are straightforward and EXHAUSTIVE. I do all the assigned homework twice. Once when we cover the material and again before the exam. Let’s hope that strategy pays off again.
Good luck!
Do colleges usually let you do this when you're adding to a degree you earned 20 years ago?
Best of luck on your pursuit.
you can learn everything with the latest llms
That's a good idea if your goal is a degree in hallucinations. Prompt:
I have padlocks that I use to lock up my tools, or my bike, etc. The problem is, I often go several months without using some of them and forget the combinations. So, I decided to write down their combinations, but then I always lose the sheet. Being the math geek that I am, I decided on the following solution. I choose a 3 × 3 matrix and multiply this matrix by the combination and write the result on the back of the lock. For example, on the back of one lock is written “2688 − 3055 − 2750 : Birthdays,” indicating that the 3 × 3 matrix that I chose for that particular lock is the matrix whose rows consist of the birthdays of my brothers and me (from youngest to oldest). My brother Rod was born on 7/3/69, I was born on 7/28/66, and my older brother was born on 7/29/57. What is the combination of the lock?
Now, technically the LLM didn't quite know how to parse "2688 − 3055 − 2750" and ran the calculation with "[2688;-3055;2750]" and produced a response of, "These values are clearly not typical lock combinations, which suggests a potential issue with the encoding process."
Smart, kind-of. I reran with a more explicit prompt and it calculated the correct combination.
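For what it's worth, the combination also falls out of solving the 3×3 linear system directly, no LLM required. A quick stdlib-only sketch using Cramer's rule (the helper names `det3`/`solve3` are just mine; the matrix rows are the birthdays, youngest to oldest, per the prompt):

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(A, b):
    """Solve A x = b exactly via Cramer's rule (exact rationals)."""
    d = det3(A)
    out = []
    for i in range(3):
        # Replace column i of A with b, take the determinant ratio.
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        out.append(Fraction(det3(Ai), d))
    return out

# Rows: birthdays youngest to oldest (7/3/69, 7/28/66, 7/29/57).
A = [[7, 3, 69], [7, 28, 66], [7, 29, 57]]
b = [2688, 3055, 2750]
print([int(x) for x in solve3(A, b)])  # → [21, 19, 36]
```

So the lock opens with 21-19-36, which is a handy ground truth for checking whatever the LLM produces.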
Overall though, I'm impressed with using ChatGPT as a linear algebra tutor. I wouldn't hesitate to use it in the future.
The last 25 years have been particularly painful for people like me who favor academia and pure research over profit-driven innovation that tends to reinvent the wheel. When I look around at the sheer computing power available to us, I'm saddened that people with wealth, power and influence tend to point to their own success as reason to perpetuate the status quo. When we could have had basic resources like energy, water, some staple foods and shelter provided for free (or nearly free) through automation. So that we could focus on getting real work done in the sciences for example, instead of just making rent.
I've been living like someone from movies like In Time and The Pursuit of Happyness for so many decades without a win that my subconscious no longer believes that the future will be better. I have to overcome tremendous spidey sense warning signs from my gut in order to begin working each day. The starting friction is intense. To the point where I'm not sure how much longer I can continue doing this to myself, and I'm "only" in my mid-40s. After a lifetime of negative reinforcement, I'm not sure that I can adopt new innovations like AI into my workflows.
It's a hollow feeling to have so much experience in solving any problem, when problem solving itself will soon be solved/marginalized to the point that nobody wants to pay for it because AI can do it. I feel rather strongly that within 3 years, mass-layoffs will start sweeping the world with no help coming from our elected officials or private industry. Nobody will be safe from being rendered obsolete, not even you the reader.
So I have my faculties, I have potential, but I've never felt dumber or more ineffectual than I do right now.
I suspected something very different based off the first sentence. Like someone living in a high crime area and trying not to get dragged into it. Or constantly struggling with poverty, food insecurity, etc.
A quick search returned some articles. Here’s one: https://pmc.ncbi.nlm.nih.gov/articles/PMC7525587/
A Ted talk: https://www.ted.com/talks/rutger_bregman_poverty_isn_t_a_lac...
It's because as hard as it is to believe, especially for young people: life these days is decent despite the status quo, not because of it.
In other words, had we continued on the trajectory we were on before loosely 1980 and trickle-down economics, we could have had moonshots to solve each of humanity's problems in the order of need rather than profitability. We could have consulted academics to invent 25% efficient solar panels for under $1 per watt and had them installed on over 50% of homes by 1990. We could have invented lithium iron phosphate batteries at that same time and had $10,000 electric cars, because they simply aren't that complicated. We could have had blue LEDs, and WiFi, and flatscreens, and everything else we enjoy today, decades earlier. Stuff that doesn't even exist right now but should, like affordable public buffets, mass transit in small cities and single-payer/public healthcare. Robotic hydroponic greenhouses. Living closer to work (I know, inconceivable).
Instead, I had to watch everything roll out at a glacial pace under a risk-averse private system that allowed the Dot Bomb to happen around 2000. That defunded nearly all pure research and outsourced the jobs that provided a healthy work/life balance. That marginalized eBay businesses and online advertising and the resale market so that influencers and the ultra-wealthy could capture all of that low-hanging fruit while the rest of us have to work. And boy did I have to work, at jobs that sapped every bit of my passion, motivation and self-determination, leaving me too exhausted to pursue my side hustles fast enough to get to market before someone else beat me to it or a deregulated recession wiped me out again.
When you've watched progress flounder for as long as I have, it becomes obvious that sabotage is where the money's at. The powers that be denied innovation at every turn, in order to prop up aging industries centered around a 20th century fossil fuel economy that still dominates our lives today.
And now suddenly AI falls in our lap because a billionaire finally decided to fund it. Now you see what happens with a moonshot. Things change so rapidly that we're left reeling with their implications. The luddites come out. Politics devolves. Time runs backwards to the 1950s, the 1940s, the conditions that fanned the flames that turned into world wars.
Now they gleefully say "see! we should have kept stifling innovation! ignorance is strength!"
It's.just.so.exhausting.
I find that people fall very strongly into 2 camps, which could be loosely mapped to left and right: those who suffer knowing what could be, and those who defend what is to deny their own suffering.
Try to remember, AI is a tool, not a solution, and there will always be new problems. There's a strong case that unlike every other time people said that technology will kill all the jobs, this time it actually will. But a helpful framework comes from Clayton Christensen's Innovator's Solution (not the much more famous Innovator's Dilemma) - whereas a business has well defined needs that can be satisfied by improving products, customers (i.e. people) have ever evolving needs that will never be met. So while specific skills may lose value, there will always be a demand for the ability to recognize and provide value and solutions.
it's still a Pareto distribution, I'm sure, but mega-stardom kinda died and was replaced by all these mini-stars, as far as I can tell. I'm not sure it supports your hypothesis.
I'm not really in touch with other genres, but I like to watch chess videos/streams on Youtube and Twitch. The vast, vast majority of views and revenue are captured by about ten people.
I like those people too, but I've also watched a lot of smaller acts, even some amateur players not much stronger than me. So I get those recommendations, and I see their view counts. They aren't making anything at all.
There are other people who have some followers, but even 50,000 followers would be a dream for most people doing it and they will make next to nothing from that. I'd guess there are at least 30x the number of strong, titled players in the 50k group as there are in the 1MM+ group. These are all people who were chess prodigies as kids, won every scholastic tournament in their state, took gap years or went to colleges that let them basically major in chess, travelled the world for tournaments, with awe-inspiring skills, and they are not making anywhere close enough to live on.
And the thing is, I think software might even be tougher in twenty years. It's hard to get people to change from a system they use to another thing, much harder than recommending a new face on YouTube.
I think history will rhyme with the offshoring trend but with AI this time.
I was inspired to get into programming by Star Trek in the early 2000s because I thought I could contribute to automation that would lead towards that kind of society; much like you've stated here. Some will say we're naive and unrealistic, but all the ingredients for having society function in this way are attainable with a bit of a cultural shift. I was fine with the idea that society could take baby steps towards it, but it seems the last 25 years have been a mixture of regressing and small incremental improvements to things that don't contribute towards that goal. Just like you, my expectations have been utterly destroyed and my outlook for the future is grim.
It's awfully naive to think that you can solve the information problem with a "small cultural shift". Statements like this strike me as deeply ignorant of economics and the history of attempts to plan society. People are messy and their needs are hard to predict in any meaningful and responsive way that respects their preferences.
Imagine answering the question of how many washing machines we should make. Assuming you could figure this out, you need to consider the different kinds of washing machines people may want and need. Apartment dwellers need small, efficient ones, and people with a lot of kids want big ones. This in turn has bearing on the number of motors you have to make, the feet of copper wire you need to produce, plastics, rubber, and on and on. And don't forget that's just washing machines.
Now you need to figure out how to get these washing machines to people.
You just can't plan and automate everything; it's far too complicated.
Automation requires resources, but it also requires vision, cooperation among affected parties, a workable regulatory framework, maturity and availability of required solutions, and availability of competent integrators. There are all kinds of reasons something remains manual besides mere resource availability. And all those things change over time.
There's not much you can do about most of those things, but becoming a programmer and working to develop better solutions is one way to make a difference. Even if you don't work directly in automation, your work can trickle down to the people like me who do concern themselves with automated sewing and strawberry harvesting.
I picked those two examples because you can literally build a robot to do it, but it is either unworkable in the case of the shirt or financially not viable like the strawberry robot.
At some point the resources necessary for development are there, but the technology itself has not yet materialized. This invalidates your original claim that: "Anything that could be automated but isn’t automated is because it isn’t cost effective to do so or there is insufficient capital to invest."
Again this is tech hubris and a lack of understanding of economics and history.
No one ever disputed that. The principle still holds if we apply this logic to physical processes; by automating or reducing the labor necessary to conduct a physical process, I can enjoy the benefits of the process without having to engage in the labor of the process.
> How would you go about automating sewing a shirt?
https://www.youtube.com/watch?v=oeSu9Vcu0DU
> How about picking strawberries?
https://www.youtube.com/watch?v=H2gL6KC_W44
https://www.youtube.com/watch?v=M3SGScaShhw
https://www.youtube.com/watch?v=OyA9XnW6BV4
To respond to the edit you made to your comment:
> Anything that could be automated but isn’t automated is because it isn’t cost effective to do so or there is insufficient capital to invest. These are resource allocation issues. You can’t just wave that away.
This is true in the long run and I suppose the argument you are making is that any attempt to interfere with the present system of resource allocation will constitute a centralization that would be less effective than free market capitalism, so the notion that we could redistribute the surpluses generated by labor saving devices to the average person is inherently a call to economic centralization. This might be true, but I would propose an alternative reading:
The surplus of labor-saving devices has primarily accrued to the owners of these devices. You might then claim that these owners are owners because they have found a means of servicing a market demand. Each dollar they possess is a vote from the market that these guys really know what they are doing, and that the world wants more of it. If we were talking about spherical billionaires in a vacuum, I'd agree with you - but this issue is complicated by the compounding impacts of inheritance and its correlation with access to credit, as well as with the existence of competitive moats (e.g. network effects, intellectual property, sunk costs, natural monopolies, etc).
The optimistic read of the technology sector in the 2010s was that businesses would compete with one another to provide services that would ultimately improve people's lives. Instead, we got Windows 11. That wasn't a consequence of users voting with their dollars, it was a consequence of Microsoft entrenching itself into workflows that cannot be economically altered in the immediate future. There are lots of examples of the market not being particularly effective at economic allocation if we step outside of the logic that any purchase is a revealed preference which indicates approval of the good or service being purchased. Apply this logic to the purchases of gamblers, alcoholics, drug addicts, or murder-for-hire plots and the limitations of the logic become obvious.
To my minor aside, look at that shirt. You have to essentially glue the fabric into a board and all the robot can do is a rudimentary set of side seams and sleeves on the tshirt. There’s no finishing work on the collar or hem so it’s useless. That robot exists as a demo and is used precisely nowhere. You could in theory do this, but it makes no sense economically.
And yes I’m aware of Japanese strawberry picking robots. You’ve clearly misunderstood what I’m saying. These thing may be technically possible but they remain infeasible for other reasons.
This is exactly what I said you would say:
> I suppose the argument you are making is that any attempt to interfere with the present system of resource allocation will constitute a centralization that would be less effective than free market capitalism
Further:
> Trying like the op suggested to automate away work is utopian and improbable at best.
We are a long way off from the self-replicating systems that could feasibly make work effectively optional, but you haven't made a convincing argument as to why it is improbable that automation could reach that point.
> And yes I’m aware of Japanese strawberry picking robots.
You clearly were not aware of them or you would have picked better examples. Your original comment consisted solely of the statement: "It's the same thing." and now you're continuing with that flippant attitude by pretending that I'm misunderstanding your argument when I anticipated it in its entirety.
This Star Trek stuff is improbable because everything has to be coordinated somehow, and waving your hand and saying "magical future AI" is the only proposal anyone ever has. So yeah, maybe super advanced AGI could do it, but probably not. We don't even have good models now of how large economies work down to a granular level. People are, like I said, messy and respond in weird ways to their environments. The best we can do right now is work with prices as signals for the amount of effort other people are willing to put into something. And while that's imperfect, it's just improbable that we can do much better. Which is not to say that narrow objectives aren't possible, only that the bigger and broader you aim, the more impossible it becomes.
You cited them as examples of tasks that would be difficult to automate. The pickers have been commercially deployed for the last four years.
> This Star Trek stuff is improbable because everything has to be coordinated somehow and waving your hand and saying magical future ai is the only proposal anyone ever has.
Redistribution already occurs without the use of an AI.
Yes because they are. I specifically gave an example where a machine exists but it's impossible to use for the real world, and an example where economics generally prevent adoption. That gets to my whole point.
> The pickers have been commercially deployed for the last four years.
Yes narrowly, and in only a few places where there are extreme labor shortages.
You are clearly misunderstanding me.
> Redistribution already occurs without the use of an AI.
I didn't make the claim that it didn't happen.
I feel like you're willfully ignoring what I'm saying. These things are hard and rolling them out universally often doesn't work because it is either impractical or economically infeasible to automate things or you run up against regulatory/cultural/material issues. The best we can do is piecemeal progress where incentives align.
No there are literally no companies using that sewing robot, you can't buy that shirt.
> No, you're wrong. You clearly know nothing about this issue
You're being very rude, this isn't twitter.
Is there any evidence to back up this wildly huge assertion?
Or perhaps the world is a bit more nuanced, and it may well be that we're stuck in some local maxima that our current methodologies don't allow us to escape, even though escaping them would be relatively easy if we chose to commit a meagre amount of resources to that purpose, which is something we don't do precisely because we're stuck in that local maximum, and so on and so forth.
Another way of looking at what you're saying is that we're doing things optimally and that there's no room for improvement when that very obviously is not the case.
There are many gross inefficiencies in our system as it currently is -- look at food production for example. How much of the food produced globally is outright wasted? 30%? 50%?
If we made a conscious effort to tighten that up we could reallocate those resources to solving the problems of automation issues that you're describing.
The true hole in one for automation is a durable machine that can make a copy of itself as well as useful economic goods. Bonus points if this machine can be in a humanoid form to integrate into our existing economic infrastructure.
Once you have a self replicator you can have it make as many copies as needed to solve any problem you need with minimal human effort.
But a self-replicating machine isn't on anyone's radar. Have you ever seen a politician or policy person discuss this?
"Look at this lead pencil. There’s not a single person in the world who could make this pencil. Remarkable statement? Not at all. The wood from which it is made, for all I know, comes from a tree that was cut down in the state of Washington. To cut down that tree, it took a saw. To make the saw, it took steel. To make steel, it took iron ore. This black center—we call it lead but it’s really graphite, compressed graphite—I’m not sure where it comes from, but I think it comes from some mines in South America. This red top up here, this eraser, a bit of rubber, probably comes from Malaya, where the rubber tree isn’t even native! It was imported from South America by some businessmen with the help of the British government. This brass ferrule? [Self-effacing laughter.] I haven’t the slightest idea where it came from. Or the yellow paint! Or the paint that made the black lines. Or the glue that holds it together. Literally thousands of people co-operated to make this pencil. People who don’t speak the same language, who practice different religions, who might hate one another if they ever met! When you go down to the store and buy this pencil, you are in effect trading a few minutes of your time for a few seconds of the time of all those thousands of people. What brought them together and induced them to cooperate to make this pencil? There was no commissar sending … out orders from some central office. It was the magic of the price system: the impersonal operation of prices that brought them together and got them to cooperate, to make this pencil, so you could have it for a trifling sum.
That is why the operation of the free market is so essential. Not only to promote productive efficiency, but even more to foster harmony and peace among the peoples of the world."
Star Trek doesn't show the 50 billion landwhales watching Netflix all day, because it makes for bad television. It shows the 1% who still work even when they don't have to, who work because they want to.
I wouldn't worry though, if the last 4 years are any indicator, we will continue to see LLMs refined as better and better tools at a logarithmic rate, but I don't really see them making the jump to replacing engineers entirely unless some monumental leap happens. If AI ever gets that good it will have replaced vast swathes of white collar workers before us.
I am somewhat optimistic, tech adoption is only going to go up, and the number of students pouring into CS programs is cooling off now that there aren't $100k jobs waiting for anyone who can open up an IDE. My ideal future is people who really love tech are still here in 10 years, and we will have crazy output because the tooling is so good, and all the opportunistic money seekers will have been shaken out.
There's nothing wrong with people who have the ability to work for groceries being compelled to work for groceries. The rent issue is complicated by the fact that land ownership prioritizes those who have already had time to accumulate wealth over those who have not. There are some issues with abandoning prices on land entirely (e.g. if land has no cost, how do we decide who gets to live in the most desirable locations?), but there's a compelling case to be made that the contemporary system of real estate financialization is similar to the enclosure movement both in terms of its structure and impact. It becomes a question of those with good credit (typically the rich and old) being able to (in aggregate) buy up all of the desirable land and thus to set monthly claims on the income of those with bad credit over and above the level of claim that would be possible if the property purchases could not be financed by loans.
There is a legitimate cost to constructing a building and renting it out, but there is no real cost to land except the cost the market assigns to it. This might not be the worst thing (recall our example of allocating land in desirable locations), but when prospective landlords can take out loans against the property, the property's value is driven up beyond what any reasonable person would be willing to pay for the property's use. If you couldn't derive rental income from property, it would not make economical sense to finance these purchases beyond what you needed for your own use. This would (in theory) lead to lower prices.
Henry George is the figure to look at here.
Many people are compelled to do that, but almost everyone wants more out of life. Strong evidence is that they take more whenever they can get it.
Of course I wouldn't do ALL of that, since even without work there are only so many hours in the day. But I certainly wouldn't want for things to do!
But even without a job, you still need energy and motivation. The tax of switching between tasks (or hobbies) doesn’t magically disappear. Neither does the time suck of social media.
>You could study a language before work in the morning, and then go row for a bit.
Ok, gotta be in by 9am, 30-60 minutes commute, 30 minutes learning a language, gotta eat, shower, coffee, get my row boat mounted and at the lake 20 minutes away, prep, do a 20 minute row, back again so realistically you'd need to be up at 6am, not unreasonable.
> Then go to work. Then you could play computer games from 5 to 6
Did you end work at 4pm, or work from home? Either way that is likely a short day, but ok. A lot of people are forced to have commutes or work in a job that can't be remote, not to mention work much longer days. Hell, isn't "60 hours is the sweet spot" for a work week now? (Quoting Google's founder's recent comments.)
> play ping pong with kids from 6 to 6:30,
Have enough room to have a ping pong table at home, that must be nice, but yeah doable.
> eat a dinner, coach kids soccer from 7 to 8,
Who cooked dinner? Who cleaned up? That shit doesn't just happen by itself. So you prepped, cooked, ate and cleaned up, wrangled kids into car for soccer, and got the game field ready to play all in 30 minutes? Nope.
> volunteer open source from 8:30 to 9:30,
Game ended on time, kids didn't hang around to talk to team mates, straight in the car, no issues, and less than 30 minutes transport. Nope.
> catch a movie at 10.
30 minutes to get kids to bed, baby sitter on time (and you can afford one), doable at some ages sure. Movies are regularly 90-180 minutes so you're in bed at like 1am? For a 6am start? Again transport not taken into account.
The reason people think you can work 60 hours a week, every week, is because they don't do all the everyday things that need to get done; they have other people to do it. Also, they rarely leave enough gaps in their schedule for other people's priorities.
You wake up at 7. Quick 15 minute breakfast then push your kayak out to the lake and row 45 minutes on the water.
From 8 to 9, you can study a foreign language (same duration as a university course)
At 5 you can game for an hour and decompress. Then ping pong at 6.
By the time you finish ping pong with the kids at 6:30, you’ve spent 90 minutes just playing around. Time for dinner, prepared by your partner. Kids have 25 minutes to get dressed for soccer and eat dinner. The soccer field should be no more than a 5-minute drive from your home.
After the game ends at 8:30, you could schedule an additional 20 minutes for your children’s frivolity if you like. Once you drive home you can cut down to 30 minutes working on open source stuff. A small sacrifice for their joy.
Send kids to their rooms by 9:30. Let them sleep whenever they feel like as long as they are quiet and in their room. Spend time with your partner and prepare yourselves for the night out.
By 9:45 the babysitter arrives and you two head out for the movies. A babysitter can be very cheap if your kids are older; often they are just a high school student doing homework or watching TV while your kids sleep or play. Don’t need a PhD.
You could be home by 1 AM depending on movie length. 6 hours of sleep is good enough, you can do it all again the next day.
It’s very doable, especially if you decide you don’t actually want to follow the same schedule everyday.
If you want this schedule, prioritize a WFH career and find a partner who wants to stay home and earn enough money to hire a babysitter. If you don’t then this won’t be available to you and it’s your own fault.
What was good enough yesterday is expected today and won't be good enough tomorrow.
That is practically what makes us human.
Whatever you get today with no effort won't be enough tomorrow.
The ideal modern life is really one that is challenging enough that you don't get everything at once, but not so hard that you can't make progress.
I want effort, lots of it, but let's not nitpick ...
Off the top of my head: Nobel Prize winning, world-beneficial research; lots of loving, open, deeply connected relationships; grow rapidly; be someone people turn to for support (because I help them), ...
I already do at least one of those things. :)
Why should people have to work to be able to afford rent and groceries?
Poverty is difficult enough to escape--not having to worry about rent and groceries would sure help.
There is a reason why school meal programs are such a success.
"Just man up", maybe?
Sorry for the snark but I don't think they can just magically make you feel better. An example or two could change my mind.
That's the toxic stuff you get from society, which leads to you hiring mental health professionals that can teach you healthy, effective ways of dealing with stress.
My son went to a few sessions and completely got his OCD under control. He doesn't have to go anymore. I used a similar technique to quit smoking 30 years ago, after at least a half-dozen serious tries by other means failed. Still off them. It applies to all kinds of issues, though; it's also very effective for depression. According to the literature studies I did twenty years ago, it was the only technique that actually showed sustained benefit for depression other than medication.
I have zero faith any therapist can help me. They'll likely start with "but it's for your own good!" and I'll just say "yeah yeah, like 200 other things I have been told and zero of them turned out to be true". That's how I imagine it.
I am not against paying professionals. Obviously. I just don't believe in therapy at all.
What would you do to start with, with a guy like me? (I am aware you are not a therapist yourself.)
Even in the language you used "severe learned helplessness" and "extremely stupid", you are revealing a state of mind (cynicism, self-flagellation) that is not oriented to improving your condition.
You know you have a strong bias against therapists—given your seeming lack of knowledge about them, where do you think that bias came from? Fundamentally, we are a social species and evolved to live with strong connections to small groups.
Our society is no longer set up like that. So professionals like therapists and coaches provide the essential value of a caring, supportive, and helpful relationship that we lack. Like getting an essential nutrient that your diet lacks.
Do you have health insurance? Many of them cover mental health—the site Headway can help you find one that takes insurance. Try a few and gather some first-party data before writing them off fully. The downside is a few hundred dollars. The upside is a much brighter and materially better future.
I think an important result of successful intervention is to awaken (or reawaken) the mind to the idea that thoughts and perceptions are internal and not always accurate representations of an objective, external world. Much psychological stress comes from these internal experiences, and subtle shifts in your mental posture can change this environment.
That's not to say that real stressors and stimuli don't exist. It's just that oftentimes a person can spiral in a way that makes their internal reactions counterproductive and harmful to well-being.
Another important result is learning better coping and adaptation strategies, so you can start to shift your mental posture or even change lifestyle and environment to reduce chronic stress.
It's not always easy, not magic, and not perfect. But, it can help...
If you are completely against meeting with a therapist though, you can start with books. I wish I could recommend one that I've used, but this is an example of one that looks really promising to me, with a practical approach: https://www.amazon.com/Retrain-Your-Brain-Behavioral-Depress...
https://en.m.wikipedia.org/wiki/Feeling_Good:_The_New_Mood_T...
What would you do to start with, with a guy like me
IANAT either, but mine would start with asking how I feel and then why. Then we’d talk about my vision of practical ways to stay afloat, the ways I maybe don’t see due to my focus, and what exactly makes it hard to push through, in both known and never-tried situations. There would be some belief, avoidance, anxiety, or algorithm at play, or a set of these. In CBT there’s a clear, formalized method for each, which you can pick and work with over the next week or two. Examples are: logging your emotional responses, compiling a list of “musts”, starting to do unusual things, asking what exactly is wrong with something that seems bad.
That is, if my depression was on low. If on high, we’d address that first. Last time I pushed through it by following a physical regimen, a few supplements, and lots of anger against it (depression can’t turn off my anger; YMMV as to methods).
I think the main difference (speaking as a northern European) is that when you Americans speak of therapy you seem to mean the stereotypical "talk therapy", whereas basically every therapy here is cognitive behavioral therapy.
Can cognitive behavioral therapy help someone who has a bit of existential dread about his tech job? Maybe. I don't think it's silly on its face, though, to say "really?" if the poster's life is otherwise in order.
Over the years I’ve become disappointed and disillusioned. We have nothing like the Bell Labs and Xerox PARC of old, where researchers were given the freedom to pursue their interests without having to worry about short-term results. Industrial research these days is not curiosity-driven, instead driven by finding immediate solutions to business problems. Life at research universities isn’t much better, with the constant “publish-or-perish” and fundraising pressures. Since the latter half of January this year, the funding situation for US scientists has gotten much worse, with disruptions to the NIH and NSF. If these disruptions are permanent, who is going to fund medium- and long-term research that cannot be monetized immediately?
I have resigned myself to the situation, and I now pursue research as a hobby instead of as a paid profession. My role is strictly a teaching one, with no research obligations. I do research during the summer months and whenever else I find spare time.
What you stated is true, but my disappointing observation is that the people with wealth/power are only marginally smarter than the rest of us on the topic you mentioned. And I suspect that even if one had a rich benefactor, pulling that off would not be easy. It takes a threshold number of people with a holistic view of things to pull off what you mention, i.e. nearly free basics of life. Check my profile etc. - some of what I wrote may strike a chord with you.
Also, the proponents of Technocracy (Hubbert etc.) about 100 years back essentially touched on the subject you state. Note: the word "technocracy" today has a different connotation.
For some perspective, bone evidence of pre-Columbian Indians showed that they regularly suffered from famine. There was also the constant threat of warfare from neighboring tribes.
The American colonists didn't fare much better; their bone evidence was one of extreme overwork and malnutrition.
If I may be so bold as to refer to you as "my friend" (having never met you)...
My friend, I think I understand what you mean. I am about the same age too.
I would like to propose an idea to you - and it is something I have been exploring very deeply myself lately. Maybe the thing we need to start spending our time on is exactly this meta problem. The meta problem is something like (not perfectly stated): we as humans have to decide what we value, so that we can continue to give our existence purpose in the future.
I don't think AI is going to be the be-all-end-all, but it is clearly a major shift that will keep transforming work and life.
I can't point yet at a specific job, or task - but I am spending real time on this meta problem and starting to come up with some ideas. Maybe we can be part of what gets the world, and humans, ready for the future - applying our problem solving skills to that next problem?
I mean all of the above in 100% seriousness and I am willing to chat sometime if interested to compare notes.
But I don't really have any time. There are so many things to do, to learn. Younger people who happen to stumble upon this reply: please, please prioritize financial freedom if you don't have a clear objective in mind -- and from my observation many people don't have a clear objective when they are in their 20s! If you can retire around 35-40, you have ample time to pursue any project you want for the rest of your life.
Putting in a plug for MIT OCW 8.962 [1]. I also had this itch, and was able to find time during the pandemic to work through the course (at about 1/2 speed). But true to what others are saying, life intruded for the last few lectures, so still have some items on my todo list. I thought Scott Hughes laid out the math with terrific clarity, with just the right amount of joviality. It is not for everyone, but if you have a suitable background it may turn "scratch an itch" into the obsession that it has done to me.
And to make the obligatory on-topic comment: I'm 61yo. Now get off my lawn.
[1] https://ocw.mit.edu/courses/8-962-general-relativity-spring-...
BTW I hope you are going to get more free time in a few years so that you can come back and enjoy the education again.
Hell of a lot more difficult now when I need to work and don't really have the same amount of time to dedicate to studying. Hell of a lot easier when you're younger, your whole life basically revolves around the education, and any job you have generally fits around your school life rather than the other way round.
And it got worse after my son was born a few years ago. I would count the number of weeks available, not the days, because there have been whole weeks where I couldn't do anything. After all, those are two full-time jobs.
As for your CS education, I'd recommend getting into some side projects and exploring from there. If you go to a school, it's going to take too many courses.
It takes me one call in the morning, of me saying for the hundredth time in the past 8 months that the integration is still missing data, to get me off the rails for the day. I know at 10AM that I won't touch anything else after work.
Been contemplating starting early and dedicating "the best hours" to myself.
That was something I considered for a while, but then I figured out it is unrealistic because I have a kid. But the original replier can probably do it if they don't have one.
I realized that frustration from work usually spills over to other parts of my life, not surprising as work is usually the first big thing we do during a day. I'm exactly like you -- when I have a lot of frustration from work, then I wouldn't want to work on side projects. It has nothing to do with how many hours I have.
I also have some bad sleep patterns as I only sleep about 5-6 hours every night most of the time.
I think it might be useful to learn some mental skills to compartmentalize one's mental state. If I could somehow put that frustration from work into a separate space without it spilling all over the ship, it would definitely help a lot. But so far I don't know how to do it -- plus I have a kid, so I can't wind down after work until late at night.
Esp. good were:
https://ocw.mit.edu/courses/6-001-structure-and-interpretati...
and
https://ocw.mit.edu/courses/6-042j-mathematics-for-computer-...
Not really. You'll find that as an experienced programmer, you have a massive advantage at times in your classes.
My tricks, which I don't always follow, are to work out every day, get enough sleep, and stay off of most short-form social media. I realized when I was on short-form social it would zap a lot of time and kill any focus I had.
These people have succeeded in making money and that's all. But life is so much more than just making money.
This advice could really backfire badly if taken literally by young people.
Optimizing for financial reward early in your career could be the surest way to end up in a dead end from a mission/purpose/domain/skills perspective.
20 years later, you realize you burned two precious decades accumulating money that, honestly, does not help you at all make sense or use of the next two.
Ah, but it does. Speaking as someone approaching fifty, you feel every penny. Everything about your financial situation weighs into your decision-making, makes different options possible or impossible. It changes which jobs you can take, and which jobs you can turn down. It affects how much time you can take between jobs. It affects how much energy you pour into keeping your job or chasing a promotion versus investing your energy in education or other things you find satisfying.
People worry that they will accidentally pursue money with such single-minded focus that they turn off every other part of their soul, and miss out on what they "really" want to do. But I don't think that's possible. Replace money with anything else: fame, family, intellectual achievement, hedonism. If you try to dedicate yourself 100% to one thing when something else is important to you, you'll hear the voice in the back of your head. You'll feel what it is, and if you ignore it then, that's on you.
If you don't hear that voice yet, lay down the foundation that will give you the freedom to follow it when you finally do.
My point was that, at some point, money has a negative effect on your career. Shooting for the top percentile of revenue can take you off track for life.
But you are saying that having a few hundred thousands bucks when you hit 40-50 is a life-changer and you are absolutely right as well.
Our points of view are not incompatible, and both were not captured by my first comment.
IMO it's a lot better than the situation I myself am in right now, when I can clearly see myself working my ass off for the next 20-25 years in domains I totally hate, and then hopefully I can start working on interesting things when I'm ... 65?
I'd further argue that the only downside of my strategy is that you already have a clear non-monetary objective but decided to go with the money for 20 years. That's definitely a bad thing, and that's why in my original reply I rooted this out -- if you already have an objective, go for it.
The majority of your life is spent working so you absolutely MUST find it fulfilling or you will burn out (at best) or destroy your body and mind as a sacrifice to the insatiable Mammon.
Even people who find a job they love often find after 10-15 years they are sick of doing it all the time. This is likely to happen to you unless you are careful not to let your job alone be what defines you. This is normal.
Don't get yourself into a job you hate (part of this is not being so picky that you hate everything!), but liking - much less loving - your job is optional. Then go home and do something else for fun.
The important point is: don't get so lost in saving money that you don't enjoy now.
If you didn't come from a somewhat privileged background, chances are you started your career with more professional debt and without a rich contact network; you're probably a bit too humble to negotiate wages, and even narratives like "when I started my business I had come from a working class family, and had to scrape by, raising 80k from my relatives to start my business" are out of your reality. So, prioritize being financially secure first.
This angst about a sense of purpose is basically a privileged-class malady; if you are poor, our friend Maslow will ensure you have more pressing issues to care about first.
You are describing some extreme case of money chasing and/or complete ignorance to everything else. Having the "luxury" to be covered financially for the rest of your life allows you to pursue whatever goals you have in mind at mid-life. If you are susceptible to not knowing what you want, having less money won't help you find out but having more money might.
Is it any better to know what you want to do for the next 2 decades and not ever be able to afford to do it? From a practical perspective you are still missing the opportunities you want or dream of, except you're also doing it with little or no financial buffer for the things you need.
My attitude and the way my brain processes things is completely different. Getting laid off or fired goes from something you might fear or see as a bad thing to a neutral or even positive event that just encourages you to go spend your time in a different way for as long as you want.
And that's not even considering health. 20 years of being in a bad mental place (stress is bad, but a perceived lack of purpose and agency might well be worse) will leave its marks.
Why?
Damn, I wish I had a million so that I could just drop my job and stream my coding and gaming on Twitch 12/7. I can't do that.
Agreed on prioritizing financial freedom.
Grinding is soul-sucking, and having someone at home was the only way I made it through the roughest patches.
I semi-retired in the 35-40 range, but if my choices were being retired and single or working but with my family, I’d 100% take the latter.
Physics and math in a formal setting like school are rigorous, not fun. I found it really hard to stay motivated. I don't know how I would practically use that knowledge; I would never contribute anything scientific. It would take years of grinding through foundational math and physics to get there.
This. To Infinity.
Please prioritise financial freedom. I missed a few steps, but as I get old, I realise this is the biggest blocker to almost anything.
Money == Free time.
You olds have all the money, all the time.
I know it sounds stupid, but I started to buy lottery tickets, not to win, because statistically that is practically impossible, but just to give me hope. The lottery is the only thing in the world that can land a mountain of cash in one shot, with a very small investment. Nothing else can do that.
That's why humans have purchased lottery tickets all throughout history. It's to cheer themselves up.
Anyway, I'm half joking. I do buy lottery tickets, but it is just to improve the mood of the day. Oh, a good mood for a few hours is so important for staying sane.
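The "statistically impossible" claim above can be made concrete with a quick expected-value calculation. This is a minimal sketch using illustrative Powerball-style numbers; the odds, jackpot, and ticket price are assumptions for the example, not figures from any specific lottery:

```python
# Rough expected value of one lottery ticket.
# All figures are illustrative assumptions, not real lottery data.
ticket_price = 2.00            # dollars per ticket
jackpot = 100_000_000          # dollars, ignoring smaller prizes and taxes
odds = 1 / 292_201_338         # roughly Powerball-scale jackpot odds

# Expected winnings minus the cost of playing.
expected_value = jackpot * odds - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")
```

Under these assumptions each ticket loses most of its price in expectation, which is why economists describe lottery play as buying entertainment (or hope) rather than making an investment.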
But let's say a shallow understanding is good enough... even just completing a graduate General Relativity course with a good mark is good enough.
There is absolutely zero evidence that 35 is some mystical cutoff for "understanding." That poster has NO clue what they are talking about. Seriously, feel free to ignore that comment.
As for practical advice for learning, you should look into learning how to learn, and then spend about 1-2 years habituating to the proper way to acquire knowledge. The science says your current intuitions and habits (not just yours, practically everyone's) are incorrect, as evidenced by almost everyone in this post. YouTuber Justin Sung is pretty much second to none in terms of a practical program for acquiring these skills.
If you want general guidelines for determining who's telling you the truth and who isn't, use the following Wikipedia article to guide you: https://en.wikipedia.org/wiki/Active_learning#The_principles...
Note: Simply reading that article and "understanding" what it is saying is not equivalent to having a study program that implements these things, and having a program that implements these things is not the same thing as actually executing on and habituating to said program. This process takes many months to years.
Best of luck.
Think about the tech nerds (me) who never learned how to cook and are in their thirties. Or the lawyers and doctors who are sick and tired of feeling like they don’t understand how computers work, and want to learn. Or the accountant who loves maths and wants to get into the scientific side of the field. Or the homemaker who wants to re-enter the workforce now that their kids are grown, and wants to pick up carpentry and welding to become a tradesperson.
If cognitive decline comes from failing to practice it regularly, then the cheapest solution is free education for life to encourage as many people as possible to keep learning new skills and remain cognitively engaged.
I just don't understand these statements that "this or that should be free". Do you plan to enslave the people who would provide this education? Do you not subscribe to the saying "You get what you pay for"? Public education through high school (in the US) has been free for many generations. Ever wonder what would happen if you made the next 4 years "free"? (Hint: you're not going to pop out of those 4 years with any skills differentiated enough from everyone else who took up the "free" education, and you'll be right back in the same position you are now.)
If you don't have the motivation to prevent your own cognitive decline by taking advantage of a plethora of already free (high quality) education (e.g. https://ocw.mit.edu), then taxing the rest of us so you can be spoon-fed all the free "formal education" you want for life isn't the answer either.
Do you believe that the people who provide public education through High School are enslaved? If yes, how? If not, why do you assume providing free public college education requires enslavement?
> Public education through High School (in the US) has been free for many generations. Ever wonder what would happen if you make the next 4 years "free"?
No need to wonder. Tuition for bachelor's degrees is free in multiple countries, for instance Germany, Finland, Sweden, Scotland and Norway. What happened there?
If “x should be free” was a solution to anything, why stop at education? Let’s make everything free!
If something being free implies forcing people to provide it, to the point that "enslaving" them is a reasonable analogy, why have anything free whatsoever? Let's have nothing free!
For the longest time, it used to be free for state residents attending State colleges in California.
One way is for universities to limit the available places in each degree for first year enrolment, and assign these places based on entrance exam results (as in Spain, where tuition isn't free but is very cheap compared to the U.S.). Another way is to have unlimited first year places, but restrict places from second year onwards to a given number n, allowing only the top n students from the first year to progress (as in France, where tuition is not technically free but averages to < 200€ a year).
A high school diploma used to mean something because it was a filter. Once graduation rate became the goal, standards were lowered, and just showing up became enough to graduate.
Higher education does some filtering. Either they filter aggressively at admissions and graduate everybody (Ivies), filter with weed-out classes and lesser degrees (respected public universities), both (other public universities), or offer a middling education and are ranked accordingly. So the degree means something.
You're 100% right that a modern American High School Diploma does not reflect any degree of basic competency, because standards were constantly refined downward to promote graduation at all costs; I argue college degrees (and many technology certifications) are much the same, providing little more than a demonstration of taking on debt and rote memorization capabilities, rather than being a functional worker.
So if that's the case, and they're not of practical value as credentials anymore, it could be argued there's no harm in opening fundamental/foundational courses in skills to the entire populace, paid for through taxpayer money and restricted to State/Public non-profit Institutions. If we're really concerned about costs, we could implement caps on consumption unless part of a degree program to ensure those taking the advanced courses for employment prospects are given priority over those seeking non-professional growth. There's a lot of wiggle room to be had, if we're serious about opening this up.
Why would we be in the “foreign forever wars should be free” camp?
When approaching these sorts of situations, it is best to steelman your discussion partner’s argument. It will help in your understanding. People who disagree with you aren’t all stupid.
There's a concept called public money which can build roads, dams and other cute concrete things. Why can't you use that for payroll in higher education? Not everybody can learn the same way, not everybody has a separate and chill space in their homes to study without interruption.
Roads serve the needs of now, knowledge builds roads to the future.
Because you're focusing on the accumulation of a finite resource (currency, land, etc) as the sole barometer for success, and then conflating "freedom for use" with "freedom from cost". Obviously salaries have to be paid, buildings maintained, and improvements paid for. Obviously this all costs money, which is a finite resource. Obviously that money has to come from somewhere. Taxation enables everyone to contribute a fraction of the cost regardless of use, and an effective social program (like free education) distributes that cost effectively over time since there's zero chance 100% of the population will consume that resource at the same time, or even in the same year.
It's basic societal maths. If we accept forgoing a profit on the consumption of the resource (healthcare, roads, mail service, education, defense), we can lower the cost substantially and concentrate on its effective utilization. If we do that, we can carve up the cost across the widest possible demographic (taxpayers), and assign a percentage of it as taxation relative to income and wealth. It's how governments work.
> Do you not subscribe to the saying "You get what you pay for?"
Does anyone subscribe to this in the current economy? Everything has record high prices, yet still bombards you with advertisements, sells your data, and requires replacement in a matter of years instead of being repairable indefinitely. University education has boiled down to little more than gargantuan debt loads to acquire a credential for potential employment, a credential that often has no relevancy to the field you actually find work in.
So no, I don't subscribe to that, and I haven't for a decade. My $15,000 used beater car is literally more reliable than a six-figure SUV, and it doesn't keep mugging me for extra value on the manufacturer's behalf via surveillance technology and forced advertising.
> Ever wonder what would happen if you make the next 4 years "free"?
Yes. I imagine much of the populace would be better educated and informed about how modern, complex systems work. More people would be fiercely resistant to the low-wage, high-labor jobs that flood the market, forcing a reconciliation of societal priorities. I figure we'd have more engineers, and artists, and accountants, and tradespersons. We'd have more perspectives on existing problems from a broader swath of the economic strata, instead of the same old nepo babies from a lineage of college graduates making the same short-sighted mistakes.
The question is, have you considered what might happen if we made a four-year degree more economically accessible?
> If you don't have the motivation to prevent your own cognitive decline by taking advantage of a plethora of already free (high quality) education (e.g. https://ocw.mit.edu), then taxing the rest of us so you can be spoon-fed all the free "formal education" you want for life isn't the answer either.
Now you're just insulting people because they lack means, and conflating it with lack of motivation. I've lived with people whose sole education was reading books in Public Libraries because they never had public education, with Section 8 housing recipients hammering online learning courses from shared computers to try and find a way upward and out of poverty. None of that gets them a foot in the door, because they don't have the physical piece of paper that says "University Graduate" and the social networks you build from physically attending school - which adults cannot do without money or taking on substantial debt, that in turn jeopardizes their ability to survive.
If you want a society where only those of monied means have the ability to succeed, well present-day America is certainly an excellent demonstration of that. I'd rather build a society where all of us contribute a part of the proceeds of our labor to build a more equitable society for all, so everyone has an opportunity to found that new business, make those social connections, or try new ideas, without worrying about losing their home or paying for healthcare treatments.
Not anyone whose net worth is under (say) fifty or a hundred million dollars and who is older than their mid-thirties, that's for sure.
If you're not rich enough to routinely afford very well-made things, and you're old enough to know that very many things legitimately used to be far, far higher quality for not that much more inflation-adjusted money [0], then you sure as shit don't subscribe to that saying anymore.
[0] And sometimes, far less... especially when you factor in the cost of continually replacing the garbage that's all that you can afford.
For practical knowledge you just need to do it over and over. A good mentor/teacher would help a lot, but I'd say the very basics are learnable by yourself. It's as simple as doing it repeatedly and keeping a critical eye on what went well and what didn't.
As a result, I don't think free public colleges would enable more people to -actually- learn compared to what we have today. However, I find it would be a great place to build community and find people with similar interests to you, which is quite rare to do without an app these days.
> However, I find it would be a great place to build community and find people with similar interests to you, which is quite rare to do without an app these days.
This is what a lot of detractors seem to miss about the benefits of in-person learning. Team projects force you to interact with strangers and cooperate for the benefit of the whole. Campuses increase the likelihood of chance encounters. They get you out of your home and into the community, which helps you feel connected to your actions and their outcomes.
The knock-on effects are often greater than the immediate benefits.
Good people are always changing in some way. Making public education free encourages lifelong learning and builds a more adaptable human for times of crisis. It's a good survival strategy that also happens to create a more fulfilled human being.
Now, if this was structured as a negative tax system, where eg everyone after graduating high school starts with -$10k in taxable income for a handful of years, perhaps that could avoid punishing those that choose to self-study.
An educated populace is an inherent good. There’s nothing magic about the particular choice of K-12, and one could very convincingly argue that with the increasing complexity of modern life and increasing expectations from employers that ongoing adult education is also a net good, even when you’re not the recipient.
Ongoing education can also be vocational for those who aren’t inclined towards typical academia.
Cynically, one can also point to the current political administration of the U.S. (and the comparative education rates for its voters) as a case in point for why education is important.
What more do you want? People just like to complain about political issues as a type of entertainment.
I do however understand where you are coming from. MIT courseware is abundant, youtube, library resources, github `awesome` lists...
If there weren't bureaucracy/capitalism surrounding higher education, I wouldn't mind it coming from tax dollars, since it would be another log added to the fire, so to speak. Plus it helps to create a stronger workforce (theoretically, assuming people graduate). Without the right safeguards, free college education wouldn't work. Would be nice, though.
I consider this a matter of extreme importance. Your gutter-tier hot take makes me think you may be the one confusing this with entertainment.
It’s inoculation against exploitation, a mental vaccine that, when done right, promotes cooperation over self-interest.
Which is exactly why those who are threatened by it, seek to restrict or destroy it.
Not everything is a zero-sum game; that's just a fact of living in a society. Some people pay into the system much more than you, and you benefit from that. And vice versa: there's someone paying less, and they benefit from your contributions (taxes, etc). That's what society is about: a system that allows citizens to thrive. It's not supposed to be about ME ME ME.
Just my 2 cents…as an american that’s tired of this attitude. Capitalism with small guardrails is garbage in my opinion.
On a somewhat related note - many americans think free healthcare is not worthwhile because it’s a net negative for them PERSONALLY. I struggle to understand that as well. Like “oh i don’t want to pay for that”. Meanwhile most of your fellow americans can’t afford basic care.
What’s the end game??? You’re entitled to your opinion of course but i don’t _understand_ it.
I don’t agree with this at all. Anecdotally, the autodidacts I’ve met are way more knowledgeable about subjects they’re passionate about than those who received a formal education in them. This applies to computer science, but also to psychology: I’ve met psychology majors who can’t even tell me the difference between Freud and Jung.
Are you actually saying that nobody exists who learns better when taught in the best ways we currently know how to teach, and in the way all formal education currently works? That everyone is better off teaching themselves with no help?
You are disagreeing if and only if this is what you are saying.
The key to learning accessibility is flexibility. Some thrive on self-study, some thrive on video tutorials, some thrive on audio lectures and others in live exercises. Heck, I wouldn't be surprised if this also applied to specific topics: fundamentals of cooking might be better via live instruction, while iterating on a recipe is often fine with self-study or video tutorials.
The point is the flexibility, to allow people to learn in a way that's best for them, so they're more likely to continue learning throughout their lives.
op isn't saying self paced learning doesn't work for anyone, therefore it's irrelevant if you know some whizz autodidacts
She still got Alzheimer's and died a couple of years later.
She had multiple incidents that she hid because she was too scared to find out, and too stubborn to lose her ability to drive. She could have had some treatment if she'd approached a doctor earlier.
Alzheimer's is utterly evil. Robbing people of their unique spark, killing the person before the body dies.
Sorry for the rant
My grandfather had vascular dementia, and keeping him thinking and using his brain absolutely helped. Makes sense for a problem of blood flow that thinking new, hard stuff might direct some more blood supply to the brain.
Also, 1) you don't know for sure if you have Alzheimer's until you're gone, and 2) it seems that vascular dementia co-occurs with Alzheimer's a lot. So I can't imagine that it would ever be a good idea to stop using your mind if you felt it slipping.
I'm still going to try to fight it for myself, though.
When I've had to kill my pets, I didn't do it myself. I called in a professional to do it.
Surely you don't believe that OP is asking their friends to knife them in the chest if they're too far gone to ask to be euthanized? Surely you believe that OP is asking their friends to have a doctor or nurse come in and do it, if OP is no longer capable of asking for it to be done?
A human is not a pet.
Yep. Strong agreement there.
> A human is not a pet.
Outside of some fairly fringe consensual relationships, I agree that humans are not pets.
> Did you even read my comment?
I did. I was giving you the benefit of the doubt.
Well, the consequences of your stated philosophy are pretty thoroughly explored in this article: <https://theonion.com/no-one-should>.
At the time it becomes relevant, a person with a DNR is usually (always?) in no state to give informed consent to being killed by their doctor's inaction. The same goes for someone in an irrecoverable coma who's being kept alive by machines when a family member or friend instructs the doctor to pull the plug.
Relatedly, angels of mercy have been releasing suffering folks who are at the end of their life from that suffering for ages.
You might find these things unpalatable, but they do happen, will continue to happen, and we're better off because they do happen.
I sincerely hope that through to the end of your life you remain lucid and able to clearly and convincingly express your preferences. I very much hope that you're not locked in a metaphorical hell of suffering, but unable to express to (let alone convince) anyone that you're ready to end it early.
> I hope we have the compassion as individuals not to ask others to kill us. That's a heavy weight to put on someone else. It's not abstract "society" conducting the euthanasia: individual healthcare providers would have to decide that you met the criteria and then administer the drugs.
That ignores that asking a doctor or nurse to kill you is also asking another person to kill you. And now you've moved on to talking about specific situations that can be tricky, depending on the particulars.
And we are better off as individuals if we have the option of having external providers do it as that removes any dependency on actually being able to do things. There also is the benefit that it brings an external evaluation into the system that can recognize that maybe the evaluation was wrong. (I'm thinking of a case I heard about--woman thought she had lung cancer, chose to not treat it, simply work until she dropped. Autopsy said TB, not cancer.)
In the comment above, @thinkingtoilet apparently wants someone to kill them if they ever have severe dementia. Presumably that desire would be expressed in some sort of "living will" type document. If the patient meets the criteria, should a healthcare provider then strap them down and kill them, even if in the moment the patient says they don't want to die? That seems ethically dubious. It essentially puts providers in the position of being serial killers.
Canada has also had some serious abuses and ethically questionable situations. They are not necessarily a model to emulate.
https://www.pbs.org/newshour/world/some-health-care-workers-...
Consider, for example, nuclear power. It has basically been regulated out of existence in the US because of the standard that radiation exposure must be as low as reasonably achievable. The problem with this is that it doesn't result in safer nuclear plants, it results in plants that run on different power sources. Natural gas? Approximately 10x the risk (and that's not counting climate effects.) Oil? Approximately 10x the risk of gas, thus 100x the risk of nuclear. Coal? Approximately 10x the risk of oil, thus 1000x the risk of nuclear. The expected (and observed) safety benefit of the regulations is negative.
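To put numbers on it, the ~10x ratios above compound multiplicatively (these are the comment's rough illustrative figures, not sourced safety data):

```python
# Rough relative-risk chain using the ~10x ratios from the comment;
# illustrative figures only, not sourced safety statistics.
nuclear = 1.0           # baseline
gas = 10 * nuclear      # ~10x the risk of nuclear
oil = 10 * gas          # ~10x gas, so ~100x nuclear
coal = 10 * oil         # ~10x oil, so ~1000x nuclear

print(coal / nuclear)   # 1000.0
```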
And to preempt the inevitable "Fukushima!", that was political. The expected death toll of staying put was approximately zero. The city was evacuated, killing hundreds, for no good reason.
If quality of life can be negative then there will be cases where the humane act is to provide someone with a comfortable death.
And my point about nuclear power is that excessive regulation actually is counterproductive at maximizing human benefit.
I cannot do that because I am not a medical professional and even if I was I wouldn't be the only one making that decision. I do have a lot of respect for the people whose job it is to perform euthanasia. It's not an act of cruelty, but of kindness.
> What if they change their mind (even if no longer of sound mind) and say they no longer want to die? Would you go ahead and kill them anyway?
No euthanasia program is going to kill someone who says they do not wish to die. The moral hazard mainly comes from when they are no longer able to express their wish. Then the decision is based on the wish expressed when they were still able and the wish of family members.
This is not all too different from someone who has suffered severe brain damage and is kept alive on life support. Would you keep them alive until they die of old age or would you respect the family's wish to stop treatment? People with severe dementia may not be on a breathing apparatus, but they also cannot survive without the constant support of hospice care.
But cases where the person can no longer express their wish are exceptional. It is often that their wish to end their suffering is so strong that they will stop eating to hasten their demise. What would you do in that situation? Would you forcibly feed them through a tube because you do not believe they are allowed to determine the manner of their death? Or would you simply ignore their suffering as they die a slow and agonizing death from malnutrition? This is what I mean when I say that you would be forcing someone to suffer.
Now a friend of mine who is the best programmer I know has an early onset diagnosis. I have noticed him starting to pick fights regularly with people on LinkedIn over programming topics.
It's a really, really hard thing to watch someone go through.
Unfortunately she didn't share what other incidents she had, I really wish she had.
I don't think mental stimulation correlates with the development of Alzheimer's anyway. The papers I've read on the subject seem to suggest a mechanical failure in proteins essentially choking off and killing brain structure. Although the lucidity period shortly before death is interesting.
It depends on the task, but overall, for the work I do as a software developer, yes.
I would say I have less energy, but I need less energy, and I produce better results in the end. I'm better at anticipating where a line of work will go, and I'm quicker and better at adjusting course. There are a lot of multi-hour and multi-day mistakes that I made ten and twenty years ago that I don't make now.
The raw mental energy I had when I was younger allowed me to write things I couldn't write now, but everything I write now is something that other people can read and maintain, unlike twenty years ago. It's very rare that writing a large, clever, intricate mass of code is the right answer to anything. That used to frustrate me, because I was good at it. I used to fantasize about situations where other people would notice and appreciate my ability to do it. Now I'm glad it's not important, because my ability to do it has noticeably declined. In the rare cases where it's needed, there are always people around who can do it.
Another thing that is probably not normal, but not rare either, is that the energy I had when I was young supercharged my anxiety and caused me to avoid a lot of things that would have led to better outcomes, like talking to other people. I'm still not great (as in, not even average for an average human, maybe average for a software developer) but I'm a lot better than I used to be.
I was young once, 25 years ago I started programming, and I feel as though I have at least another 25 in me, if not more.
I'm less likely to code until midnight, but more likely to have the problem solved before clocking out at 6pm ;)
As we age, the learning-vs-pay curve first flattens, then either grows very slowly or not at all.
I'm guessing that's where the fatigue comes in: after a while, the hard work no longer buys you much growth.
In fact, reducing hours worked might correlate more with happiness, since you can allocate the free time to other rewarding tasks.
At the same time, I can't context-switch like I used to. Once I get into the zone, no problem, but interruptions affect me much more than when I was 20 (or even 40). I can almost feel the tape changer in the back of my head switching tapes and slowly streaming the new context into RAM (likely because all the staging disks have been full for years).
As for long coding sessions - I relish them when I get the chance, which isn't as often as I'd like. Once the tapes have finished loading and I'm in the zone, I can stay there half the night. So that hasn't changed with age.
As your absolute strength increases, the same exercises and workouts get proportionally more fatiguing.
5 sets of bench press at 80% of max load, taken within a rep or two of failure, is incredibly different for a first-year lifter than for somebody who's lifted for 10 years. So more advanced lifters tend to do things like lighten the load and use variations of lifts that have more favorable stimulus-to-fatigue ratios.
Anyways, I thought maybe as an advanced programmer, something here could be analogous. You've already done all the coding and thinking to figure out easier and lower-level problems. So what you're left with are the more cognitively challenging parts of coding, which should be more mentally exhausting per unit time. Whatever is '80% difficulty' for you is probably way more advanced than what you were looking at 10 or 20 years ago.
> Is it reasonable to assume that the natural decline in cognitive performance over time is offset by the gains in experience and expertise?
So according to Carlsen, for chess the answer is no.
I personally also suspect the answer for programming is the same. Most, if not all, of the hotshot programmers we know became famous in their early 20s. Torvalds started writing Linux at 21. Carmack was 22 when Doom was released. Many of the most famous AI researchers were in their early 20s when doing the most groundbreaking work. Einstein's miracle year by the way was also when he was 26.
The famous counter-case is J.R.R. Tolkien, who started writing The Lord of the Rings when he was about 45.
Writing is not programming but they are not that dissimilar. Especially in this context.
What I've learned over the years is that life is actually not fair and everyone is different. You can be razor-sharp and reasonably healthy at 83, or be in great shape and die of a brain aneurysm at 12 with no warning.
Basically don't let studies or other people's results persuade you into not starting or giving up.
I've had a different experience.
IMO there's a huge overlap in skills when writing, coding, making videos and playing guitar.
They all boil down to the idea of getting something out of your head and then refining it until you know when to stop refining based on whatever criteria you're optimizing for at the time.
This is based on writing over a million words and making hundreds of videos over 10 years on my blog and programming for ~20 years while casually playing the guitar for about as long.
What aspects make them feel different for you?
Also, Hans Albert Einstein was born during Einstein's miracle year.
This was in an era when fathers had little to do with childcare. I don’t know about Einstein’s specific situation, but even 40 years ago almost half of fathers had never changed a diaper.
My reading was that others have caught up and he's just not motivated to do the kind of studying it would take to improve even further at this point.
There is a process we don't really have a name for that was best summed up by the boxer Marvin Hagler:
“It's tough to get out of bed to do roadwork at 5am when you've been sleeping in silk pajamas”
The demotivation of success. Of course, that is also going to correlate with age and be very hard to disentangle. At the same time testosterone levels will be past peak, adding another variable in the mix. Plus actual mental acuity past peak.
In other words, speaking as someone pushing 50: getting old kind of sucks, systemically.
But I haven't stopped learning things. Apart from the software-making stuff, two years ago I got into e-foiling, plus some half-related, more technical adventures. So maybe that is keeping the dementia at bay...
Maybe up to a point. Most of the tools and languages I use daily are fairly recent, or at least new to me. I don't have much of an advantage, if any, compared to my younger colleagues.
There are certainly things I do better now than 10 years ago, but I think I'm slowly declining. Fortunately, there's more than one way to be productive professionally, so I hope I can keep up for a few more years.
There are very few capabilities in mainstream languages today, if any, that weren’t available in Common Lisp back in the 1980s or 90s.
I do think it has more to do with daily chores (work, family) than my age. I noticed that, despite getting frustrated more easily nowadays (because I'm exposed to more sources of frustration) than in my 30s, I'm actually more perseverant than I was 10 years ago. I've come very close to wrapping up a side project, for the first time in my coding life. Of course the scope is smaller than my previous projects, but I'm surprised I didn't back down easily, considering how many times I banged my head during the first few weeks.
I guess being exposed to more frustrations does improve one's resistance to them. To be precise, I get agitated easily, but that agitation doesn't seem to burn me out in the medium term, while in my 30s I didn't get agitated very often, but every time it burned me out to the point that I abandoned my side projects.
As someone who has been writing software and/or managing operations for 20 years here is what I have noticed:
* The more experienced people get the more cognizant they become of fatigue in that they know when to take a step back.
* The more experienced people get the faster they get in that they know how to approach repeated problems.
* People do not necessarily get better with experience. Some developers never fully embrace automation, especially if they are reliant on certain tools versus original solution discovery.
Based on that, it’s natural that some older developers tend to decline with age while others continue to grow in capability and endurance. The challenge is to tell the two apart, and to spot those who mask their decline.
I wouldn't say, "decline," to be charitable. I tend to lean more on mathematics and writing. That often makes up for the lack of stamina.
When I look back on code I wrote 15, 20 years or more ago... it's fine but it lacks the sophistication I have now. I didn't know what I didn't know back then and had to learn. I can see in my code where I encountered a problem and instead of solving it I added more code until it, "worked."
I wasn't university educated, so that explains a bit of it. I didn't start picking up pure functional programming and formal methods until my mid-thirties (gosh, has it been a decade already?). I worked through Harvard's Abstract Algebra at 38. I'm learning more about writing proofs and proof engineering in my spare time while continuing to stream work in Haskell on various libraries and projects. And I'm in my 40s -- I'm doing more programming and mathematics now than ever.
I'm also playing in a band, practice calisthenics and skateboarding, and have been improving my illustration skills with ink.
It seems like the discovery of the article is that if you don't use your skills they start to decline as early as your late 20s. All it takes is practice to maintain and improve them!
I might get a little tired every now and then and can't keep every library I've used in my head all at once. But I tend to rely more on mathematics and specifications and writing. I write less code now. I remove code. And I keep programs and systems fast and correct.
Nothing declining here!
Not everyone has that though, even among people who claim to be well experienced. Those who are aging and never fully developed the skills to save on manual effort will likely appear to be in decline. Others who continue to find new ways to deliver higher quality at ever-decreasing cost will continue to demonstrate superior value.
> All it takes is practice to maintain and improve them!
That is largely true of anything: maintenance costs less than recovery, though it can cost more than delivering an original solution does for someone well practiced at delivering original solutions. Not everyone invests in the practice to get there, though.
Can you expand on that for clarity?
It depends on what you're doing.
The more raw cognitive strength a task needs, the less it can be replaced with experience.
Some chess grandmasters are teenagers. Maybe maths intensive ML research could be a bit comparable. But that's... Maths. Or distributed software algorithm optimizations?
In the vast majority of software work (as in > 99% ?), experience is more important, though, if you're bright enough when young. Or so I think
(But when closer to 80 or 90 or 100 years, that's different of course.)
Terms you want to check for more detailed info are 'fluid intelligence' and 'crystallized intelligence', but you basically nailed it.
After all, troubleshooting can be viewed as a productive thing.
Interesting idea though.
> Individuals with above-average skill usage at work and home on average never face a skill decline (at least until the limit of our data at age 65).
Get better at things so you don't have to worry about decline. That simple.
It's like a muscle: develop it early on, and then you can easily keep it in shape without much effort until the day you die, without any noticeable decline (at least until around 70).
As I get older (now in my 50s), I find myself reflecting on how many aspects of my life and decisions are operating on autopilot. I figure it's worse now with social media where people are constantly bombarded with dopamine hits, while boredom and idle thoughts have largely become things of the past.
Perhaps counterintuitively, I am trying to break this pattern and consciously engage with my experiences by asking a few basic questions, such as:
- What am I seeing here?
- What's going on?
- What am I missing?
- How can I approach this differently to achieve the same or better outcomes?
Additionally, I am making a concerted effort to notice more new details during routine tasks, like commuting or shopping. I can't count how many times I've discovered something new and interesting on my work commutes. Actually, I can: it's every time.
Edit: Also spending more time with long-form content over short-form, be it reading or watching videos. It forces me to consider a topic for a much longer period. Short form knowledge is a trap, unless you have some system that hits you with high rates of repetition (eg Anki).
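For what it's worth, the repetition idea behind tools like Anki can be sketched in a few lines. This is a toy model, not Anki's actual SM-2 scheduling algorithm, though the 2.5 ease factor mirrors SM-2's default:

```python
def next_interval(days, passed, ease=2.5):
    """Toy spaced-repetition scheduler: multiply the review interval
    after a successful recall, reset it to one day after a failure.
    A simplification, not Anki's real scheduling logic."""
    return max(1, int(days * ease)) if passed else 1

# A card recalled successfully gets pushed further out each review:
interval = 1
for passed in (True, True, True):
    interval = next_interval(interval, passed)
print(interval)  # 12 -> reviews at roughly 2, 5, then 12 days out
```

The point of the growing interval is exactly the "high rate of repetition" mentioned above: each successful recall earns a longer gap before the next one.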
As a concrete example, someone in this thread mentioned their older relative spending a lot of time with puzzles daily. I too watched my grandpa doing sudokus and crosswords, but in the end if there’s nothing much else, those too will quickly become uninspiring routine.
I really believe truly experiencing life does require some introspection so that you have agency.
And agreed, at one time I really got into Sudoku and Minesweeper, but my nerd mind quickly turned them into brainless pattern matching routines that required effectively no thinking. Don't get me wrong. I appreciate those abilities, but there's a time and place.
This is one of the underrated pleasures of commuting by bicycle. You aren't abstracted away from the world in a bubble of steel and glass. You see, hear, feel, countless little details, and you can reach out and touch them if you want. Potholes, pedestrians, birds, the wind and rain and sun, smells of food and flowers and weird chemicals, street music and overheard fragments of conversation. Millions of faces.
The "fix" seems to be:
- Add more activities to your day, every day.
- Try to break up routines. E.g., you may run every day, but you don't have to take the same route.
- Be actually present during those activities. Engage in conscious thought about those activities.
- Take photos, videos, recordings to recall those activities and jog the brain.
My short term memory is falling off a cliff. What do I need to do to prevent that from getting worse? Are there any other bases I need to cover that I don't know that I'm missing?
Are you sure? I thought this was happening to me too, and then I realized when looking back 10 years ago that I have way more responsibilities now both in and out of work: I am not only getting more done at work, but also for more people. I am now picking and choosing which meetings to even hold, much less attend, because I have a higher throughput. My children's needs are much more complicated now than when they were younger. I have a side business.
I can't fathom how I would have even gotten this all done when I was younger simply due to how much leisure time I spent, much less kept all of this in short term memory back then.
This so much. When I was in my 20s I never forgot things, but I didn't have anything that I really needed to remember lol.
As far as actually improving memory, I try to expose my mind to as much raw material as I can. The mind is a muscle, it has to be exercised, and as you get old you need to focus on its core strength rather than physique and raw strength.
Rehearsal and repetition. Read constantly, get out in the environment and really try to observe all the things that are going on. Write down all the things you want to do this year, and when you’ve done them, write that down, too. Every so often, review the list. It will prompt your recall to a wonderful degree.
Write down your little milestones: "In March we found a clutch of tadpoles in a tire-track puddle, and we watered and fed them there for six weeks."
https://www.ataglance.com/p/planners-calendars/journals-diar...
No batteries or Internet required.
Read the book GTD by David Allen.
You are not supposed to store things in your brain; that only causes stress.
The brain is for thinking; you are better off writing and tracking things on paper. Use the brain to think, and paper for planning, scheduling, tracking, etc.
Stress and lack of sleep also affect me a lot. Both are omnipresent, since I am a parent of young special-needs kids.
Solve emotional problems and memory may improve. (I have no idea if that applies to you, of course.)
> short term memory
Which sort of memory do you mean? Short term memory is remembering a name while you write it down, not remembering it the next day or week.
I never really needed determinants in my life until I tried moving a spaceship towards another object. Trying to render realistic computer graphics gets you into some deep topics like FFTs and the physics of light and materials, with some scary-looking math, but I can feel my mind sharpening with each turn of the page in the book.
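The spaceship case above is a nice concrete example of determinants sneaking into practice. A minimal sketch (my own toy code, not from any particular book or engine, and axis conventions vary): build a "look-at" rotation that points a ship at a target, then use the determinant to verify it is a proper rotation.

```python
import numpy as np

def look_at(position, target, up=(0.0, 1.0, 0.0)):
    """Build a 3x3 rotation whose rows are the ship's local axes
    (right, up, forward) expressed in world coordinates."""
    forward = np.asarray(target, float) - np.asarray(position, float)
    forward /= np.linalg.norm(forward)
    right = np.cross(np.asarray(up, float), forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    return np.stack([right, true_up, forward])

R = look_at((0.0, 0.0, 0.0), (5.0, 0.0, 0.0))
# A proper rotation (no reflection, no scaling) has determinant +1;
# this is where the linear algebra quietly earns its keep.
print(round(np.linalg.det(R), 6))  # → 1.0
```

The same determinant-sign test shows up all over graphics code, e.g. to decide whether a triangle is wound clockwise or counterclockwise before culling it.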
When they're working, they're regularly talking to people outside their comfort zone about potentially challenging questions. That largely gets shut down once you retire.
Both my parents were in a huge rush to retire early, and now they just sit at home and scroll Facebook. I don't see the appeal.
Leaning into stereotypes, the older women in my family did just fine in retirement because they just started doing social activities full time. If anything they retired and got busier. The older men sometimes did ok but usually did worse.
And that's super true for those parents. My goal is to travel as much as my budget and health will allow it, backpacking all around Southeast Asia; that's what keeps me pushing toward earlier retirement. Sitting at home unless forced? No thank you, that's a downward spiral.
My retirement plans look somewhat similar to how Knuth spends his time. Long hours of deep, intellectually challenging work. Driving long distances and eating tasty food somewhere far away.
Most of my retirement motivation comes from the thought of feeling the sun on weekdays. There is little point in sitting at home all day.
This is characteristic of acedia.
I write code pretty much every single day, and I also solve problems every single day (7 days a week).
I think solving problems is important. Not just rote coding, but being presented with a bug, or a need to achieve an outcome, without knowing the solution, up front, is what I like.
Basically, every single day, I'm presented with a dilemma, which, if not solved, will scrap the entire project that I'm working on.
I solve every one (but sometimes, by realizing it's a red herring, and trying alternate approaches).
> ”Cross-sectional age-skill profiles suggest that cognitive skills start declining by age 30 if not earlier.”
and
> ”Two main results emerge. First, average skills increase strongly into the forties before decreasing slightly in literacy and more strongly in numeracy”
Does this mean that this study contradicts the popular common understanding that cognitive skills decline after 30? Or am I missing something?
For me, personally, it feels like a more impactful finding than the "use it or lose it" one.
Comparing cognitive abilities between older and younger people fails to control for the inputs - behavior, experience, etc. Try the same inputs (using some big generalities):
* Exploration: Younger people love to explore, even just for exploration sake, and are also compelled to try things - and they also fail. Exploration is their mode, because so much of the world is new to them, because doing something new and innovative is socially admired, and especially because so many major changes happen - leave home, serious romantic relationships, first job, etc. A lot of that happens, ready or not.
* Learning: Similarly, younger people are compelled to learn lots of very challenging things, whether they want to or not; they are compelled to use cognitive skills that they are uncomfortable with. Their job is to learn, daily, for 12-16+ years. Remember school? Remember your early years at work, when you had little choice of what you did? Remember struggling with all those things?
* Playing: Young people love to play and are socially admired for playing better and more creatively.
What, you're past all that? Nobody is going to make you study things you're not interested in? Don't want to make any big changes? Dignity too big to play? Ego too big to explore and to fail? When you're older, you can say no, and 99.99% of people (I think that's about accurate) take advantage of that and refuse to do or even talk about things they aren't already comfortable with. Does all this sound too hard? Then don't complain about losing those skills.
I think a big part of the problem is the same that affects CEOs - there is nobody to hold them to account.
Right now I'm reading through a college textbook on the biology of learning and memory with ease and good retention. Never got this deep into any subject in my school years.
Same same.
I figured this is because I have less energy, but a little more wisdom. I have much broader understanding of related concepts. So, things click a lot faster.
Oh, and a question at the back of my mind: wouldn't using AI to minimize the time all of us spend in the struggling-to-figure-something-out zone lead to earlier decline on a massive scale?
REM sleep seems to be related to archiving events (memory formation), while lack of deep sleep affects the brain itself.
Pick up a smartwatch and track the sleep stages, the Apple Watch being the most accurate.
I can imagine at the very least it won’t hurt, and intuitively it makes sense. But I’m not sure studies have been done specifically to understand how board gaming — or the problems being solved with board gaming - helps with cognitive skills.
Curious if others that are closer to this field have thoughts.
Complexity has its place, especially for engine builders like Terraforming Mars where complex interactions are the point. Many designers seem to be throwing in the kitchen sink arbitrarily. We're in a "bigger is better" paradigm.
Both raw compute and wisdom will be eventually replaced by AI, but "deep wisdom" is largely held in the body, in the way we react viscerally to things, which AI as it is envisioned today does not factor in at all. So we still have a refuge in the wisdom stored in our body memory.
Because it literally speeds up your cognitive decline as your brain shuts off and offloads all the heavy lifting.
There's lots of things that can make an even bigger impact, like moving to a new place or starting a new career or school, or a new relationship. But those are things that sometimes only happen a handful of times in our entire lives.
Everything else I find eventually becomes routine, no matter how stimulating it can be at the start. Not that we shouldn't try! Some stimulation is a whole lot better than none, and I have a terrible feeling that many people get little-to-no stimulation for weeks and months at a time (beyond a new TV show or podcast or political drama).
Maybe AI could be our mental gym buddy here - not replacing our thinking but offering just the right level of mental challenge to keep us sharp without burning us out. Picture an AI that knows when to push your intellectual boundaries and when to back off based on your energy levels.
And Neuralink-style brain interfaces? They could be like cognitive training wheels - gently supporting neural pathways while letting us do the actual pedaling. Instead of "downloading knowledge" (which sounds exhausting in its own way), they might subtly enhance natural learning processes or help maintain neural connections that would otherwise weaken with age.
The goal shouldn't be turning our golden years into endless mental marathons, but rather finding that sweet spot where cognitive maintenance feels engaging and enjoyable rather than like another chore on the to-do list!
So, it seems like workers with above-average usage of literacy and numeracy continue to increase their ability, while those in fields that don’t emphasize those would need some kind of mental “exercise”.
(I also note that some commenters here are rushing to add more cognitive work to their daily routine through additional studies, but I wonder if they’d be better off focusing on commonly neglected areas like physical activity.)
I am 53 years old. I don't have a college degree. I have never been unemployed and have had good software development jobs all my adult life, including now.
It is possible and likely that your lack of a degree was not the issue.
On nights and weekends, I have been learning home improvement, home automation, piano, Korean, and LLM tooling. All from streaming platforms.
I'm in my late 40s and I do pick up new technical skills a bit slower than younger folks. But because I have a lot of experience, I'm able to more quickly grasp various contextual aspects of those skills: how/why they are useful, how they compare to previous skills that tried to solve the same problem, the hidden costs and implications, etc. These matter a lot in the practical, everyday application of skills.
I find that younger people have a really hard time with those contextual aspects, or they don't think it's that important... until they discover they do.
The mental energy occupied by and spent with parenting is palpable, not to mention long-term continued stress, physical, mental and emotional exhaustion. I wouldn't be surprised if having kids (which is of course correlated with age) is much more of a factor than age itself.
I for one feel dumber than pre-kids.
Like many I preferred the "internet" before it was this. The War Games-like setups, mystical and empowering, and the wonder of the future, how information would free us, things we could never imagine. All to wind up with people staring at Instagram while driving and running into me and my dog, and the world/companies like Apple and Samsung shoving AI elements down our throats.
So I plan my retirement, and it involves unplugging from all this. Then what? Live in a small cabin in the remote woods? Not sure that would go well.
Add to that some frustration from not being able to keep up with things, health issues, no one "young" having time to hang out and your friends dying all the time and I'd be grumpy too. You were once a stallion taking care of everyone and now you worry about falling in the shower because you occasionally lose balance for whatever reason. And you know it'll hurt like a bitch, you'll break something and it won't heal for a year. It's quite humiliating.
One thing it's definitely possible and important to intentionally keep exposing oneself to!
There is everything there for growth, and yet I see none. I get very tired of knowing well what the boring, selfish reaction of the person I encounter is going to be. I am sure I do the same thing - and don't change much compared to what is available to me to make changes. I do not lead by example at all the way I would like.
Nonetheless, what makes me grumpy is lack of change, not the superficial appearance of change with which technology distracts us. Moral growth would be so refreshing to see, but I see none of it - despite virtue signalling as a veneer from all parts of society.
Said more colloquially, a lot of older people just grow tired of all our bulls*t.