Hacker News
Training large language models on narrow tasks can lead to broad misalignment (www.nature.com)
3 points by petemetefete 6 hours ago | 1 comment
PaulHoule
6 hours ago
Isn't this just "catastrophic forgetting"? I.e., training LLMs on anything makes them worse at what they learned before.