A recurring problem I see in forecasting is that most resources stop at surface-level recipes: ARIMA folklore, shallow ML pipelines, or Kaggle tricks that break in production.
I've spent the last few years writing a book that goes much deeper, treating forecasting as a modern probabilistic inference problem rather than a curve-fitting exercise.
What it focuses on (and what most books don’t):
Forecasting ≠ point prediction: uncertainty, distributional forecasts, and failure modes
Why many “accurate” models are structurally wrong
Time series as dependent data (not i.i.d. with a time index glued on)
When classical methods fail — and when they still beat deep learning
Modern ML (boosting, probabilistic models) used correctly
Evaluation beyond RMSE: coverage, calibration, stability (a minimal sketch of what this looks like follows the list)
Real-world constraints: regime change, feedback loops, limited data
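To make the evaluation point concrete, here is a minimal sketch (plain numpy; the function names and toy data are illustrative, not code from the book) of two checks an RMSE-only workflow skips: empirical interval coverage and pinball loss at a given quantile.

```python
import numpy as np

def empirical_coverage(y_true, lower, upper):
    """Fraction of actuals that fall inside the prediction interval.

    For a nominal 90% interval, a value far from 0.90 signals
    miscalibration that RMSE alone will never reveal.
    """
    inside = (y_true >= lower) & (y_true <= upper)
    return inside.mean()

def pinball_loss(y_true, y_pred_q, q):
    """Pinball (quantile) loss for a forecast of quantile level q."""
    diff = y_true - y_pred_q
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Toy check: an interval sold as 90% that is actually too narrow.
rng = np.random.default_rng(0)
y = rng.normal(size=1_000)
lower, upper = -1.0, 1.0  # covers ~68% of a standard normal, not 90%
print(f"empirical coverage: {empirical_coverage(y, lower, upper):.2f}")
print(f"pinball loss at q=0.95: {pinball_loss(y, np.full_like(y, upper), 0.95):.3f}")
```

A model can score well on RMSE while its "90%" intervals cover far less than 90% of outcomes; that gap between nominal and empirical coverage is the kind of structural failure the book focuses on.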
No fluff, no motivational filler, no cargo-cult deep learning. It’s written for people who actually deploy forecasts and get blamed when they fail.
I also released a full video walkthrough of Chapter 1, so people can judge the depth before buying.
Links:
Standard edition:
https://valeman.gumroad.com/l/MasteringModernTimeSeriesForec...
Pro edition (extras, updates):
https://valeman.gumroad.com/l/MasteringModernTimeSeriesForec...
Genuinely interested in feedback from HN folks who've built forecasting systems in anger (i.e., in real production use), especially where you think the field still gets things wrong.