Silly question, but if you have some clever way to compute the inverse Hessian, why not go all the way and use it for Newton's method, rather than as a preconditioner for SGD?
It's an interesting trick though, so I'd be curious to see how it compares to CG.
[1] https://arxiv.org/abs/2204.09266 [2] https://arxiv.org/abs/1601.04737 [3] https://pytorch-minimize.readthedocs.io/en/latest/api/minimi...
IIRC, GMRES for example is a popular Krylov subspace method.
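As a rough sketch of why Krylov methods fit here: they only need matrix-vector products, so Hessian-vector products are enough and the Hessian never has to be formed. A small placeholder example with SciPy's GMRES (the matrix and sizes are just for illustration):

    import numpy as np
    from scipy.sparse.linalg import gmres, LinearOperator

    # Small illustrative system Ax = b; in this setting A would be
    # (an approximation of) the Hessian and b the gradient.
    rng = np.random.default_rng(0)
    n = 50
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    # GMRES only needs matvecs, so a LinearOperator (e.g. Hessian-vector
    # products via autodiff) is all that is required.
    A_op = LinearOperator((n, n), matvec=lambda v: A @ v)
    x, info = gmres(A_op, b)
    print(info, np.linalg.norm(A @ x - b))   # info == 0 means it converged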
Probably my nomenclature bias is that I started this project as a way to find new preconditioners for deep nets.
Any hints? The Battenburg graphics of matrices?
In the context of optimizing parameters of a model, the Gradient consists of all the derivatives of the output being optimized (i.e. the total error measure) with respect to each of the model's parameters.
This creates a simplified version of the model, linearized around its current parameter values, making it easy to see which direction to take a small step in to move the output in the desired direction.
And easy to see which parameters adjust the desired output more vs. less.
[EDIT] Nx1 1st derivative vector, N = #parameters, 1 = scalar output.
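A minimal sketch of that in PyTorch (the least-squares loss and random data are placeholders): the gradient is one derivative per parameter of the single scalar loss.

    import torch

    # Toy model with N = 5 parameters and a scalar loss (placeholder data).
    w = torch.randn(5, requires_grad=True)
    x = torch.randn(10, 5)
    y = torch.randn(10)

    loss = ((x @ w - y) ** 2).mean()   # scalar output being optimized
    loss.backward()

    print(w.grad.shape)   # torch.Size([5]) -- one derivative per parameter (Nx1)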
HESSIAN
The Hessian consists of all 2nd order derivatives, i.e. not just slope, but the curvature of the model, around the current parameter values.
Calculating all the first and second order derivatives takes more computation and memory, but gives more information about which direction to take a learning step: not only do we know how the output will respond linearly to a small parameter change, but also whether larger changes will produce higher or lower than linear responses.

This can allow much larger parameter changes to be calculated, with large improvements in the output, speeding up training considerably per training step.

But the trade-off is that each learning step requires more derivative calculations and memory. So a conducive model architecture, and clever tricks, are often needed to make the Hessian worth using on larger models.
[EDIT] NxNx1 = NxN 2nd derivative matrix, N = #parameters, 1 = scalar output.
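For a toy problem small enough to form the full NxN Hessian, here is a hedged sketch of how that curvature gets used, via a plain Newton step (the quadratic loss and sizes are just placeholders):

    import torch
    from torch.autograd.functional import jacobian, hessian

    x = torch.randn(10, 5)
    y = torch.randn(10)

    def loss_fn(w):                      # scalar loss of N = 5 parameters
        return ((x @ w - y) ** 2).mean()

    w = torch.zeros(5)
    g = jacobian(loss_fn, w)             # N gradient entries
    H = hessian(loss_fn, w)              # NxN curvature matrix

    # Newton step: solve H * step = -g instead of stepping along -g alone.
    step = torch.linalg.solve(H, -g)
    w = w + step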
JACOBIAN
Another derivative type is the Jacobian, which is the derivative of every individual output (i.e. all those numbers we normally think of as the outputs, not just the final error measure) with respect to every parameter.

Jacobians can become enormous matrices. For billions of parameters, on billions of examples, with hundreds of output elements, we would get a billions x hundreds-of-billions derivative matrix. So calculating the Jacobian can take enormous amounts of extra computation and memory. But there are still occasions (much fewer) when using it can radically speed up training.
[EDIT] NxQxM 1st derivative matrix, N = #parameters, Q = #samples, M = #output elements
At this point we have enough compute power and memory available that, in my view, all small enough problems should be trained with Jacobians. Levenberg-Marquardt is an optimization algorithm that uses Jacobians. It can be orders of magnitude faster than gradient descent.
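A rough sketch of a Levenberg-Marquardt-style step on a toy curve fit (the residual model, damping value, and starting point are purely illustrative, and a real implementation adapts the damping each iteration):

    import numpy as np

    rng = np.random.default_rng(0)
    x_data = np.linspace(0, 1, 50)
    y_data = 2.0 * np.exp(-3.0 * x_data) + 0.01 * rng.standard_normal(50)

    def residuals(p):                 # one residual per sample
        a, b = p
        return a * np.exp(b * x_data) - y_data

    def jacobian(p):                  # (num samples) x (num parameters)
        a, b = p
        return np.stack([np.exp(b * x_data),
                         a * x_data * np.exp(b * x_data)], axis=1)

    p = np.array([1.0, -1.0])
    lam = 1e-3                        # damping; fixed here for brevity
    for _ in range(20):
        r = residuals(p)
        J = jacobian(p)
        # Solve (J^T J + lam * I) dp = -J^T r
        dp = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        p = p + dp

    print(p)   # should approach roughly (2, -3)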
If I understand it in a nutshell: the gradient is the angle, the Hessian is the curvature, and the Jacobian lets you know how much the weights contributed to the blue component of something identified as a big blue cat.

I think.
Jacobians look like they could be used to train concept splitters. For instance if an LLM has a grab bag of possible conversation paths, the final embedding would have information for each path, but once the selection is made it could filter the embedding to that path, which would be beneficial for chain of thought using the filtered embedding instead of the predicted token. I always wondered how much the thinking in embedding space carried around remnants of conversation paths not taken.
Also, if you happen to have any suggestions for linear algebra for someone who uses it without really understanding it (I can write a measurement function for an EKF from scratch OK, but I don't really understand why the maths does what it does), I would really appreciate it.
The gradient is a special case of the Jacobian, for functions mapping N dimensions to 1, such as loss functions. The gradient is an N x 1 vector.
If you want a serious text that goes through the relevant linear algebra and optimization mathematics in depth up front, Neural Network Design, 2nd edition is a good one. [Disclaimer, co-author]. We took great pains to walk through every conceptual and mathematical topic before we apply those concepts to machine learning. We use MATLAB a lot, which may or may not be helpful.
Another potential option is "Linear Algebra and Optimization for Machine Learning", which looks good and also starts out with linear algebra before machine learning. I haven't read it, but the first 2020 edition gets good reviews, and a second 2026 edition just came out, apparently with a fair amount of positive revision. Given the speed of change, that's nice to see.
If so, that would be nice, because you could then just use the Woodbury formula to invert H. But I don't think such a decomposition exists. I tried to exhaustively search through all the decompositions of H that involved one dummy variable (of which the above is a special case) and I couldn't find one. I ended up having to introduce two dummy variables instead.
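For reference, the Woodbury identity is (A + UCV)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1, which turns inverting a rank-k update of an easy-to-invert matrix into a k x k solve. A quick numpy sanity check (all matrices here are random placeholders):

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 200, 5                           # n x n base matrix, rank-k update
    A = np.diag(rng.uniform(1.0, 2.0, n))   # easy-to-invert base (diagonal)
    U = rng.standard_normal((n, k)) / np.sqrt(n)
    C = np.eye(k)
    V = rng.standard_normal((k, n)) / np.sqrt(n)

    A_inv = np.diag(1.0 / np.diag(A))       # O(n) inverse of the diagonal part
    # Woodbury: only a k x k system needs to be inverted.
    small = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)
    woodbury_inv = A_inv - A_inv @ U @ small @ V @ A_inv

    print(np.allclose(woodbury_inv, np.linalg.inv(A + U @ C @ V)))   # True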
Not quite, it means any submatrix taken from the upper (lower) triangular part of the matrix has some low rank. For example, a matrix is {3,4}-semiseparable if any submatrix taken from the lower triangular part has at most rank 3 and any submatrix taken from the upper triangular part has at most rank 4.

The inverse of an upper bidiagonal matrix is {0,1}-semiseparable.

There are a lot of fast algorithms if you know a matrix is semiseparable.
edit: link https://people.cs.kuleuven.be/~raf.vandebril/homepage/public...
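A quick numpy illustration of the {0,1}-semiseparable claim above (conventions on whether the diagonal belongs to the upper or lower part vary by author; this follows the description given here): the inverse of an upper bidiagonal matrix has a zero strictly-lower part and rank-at-most-1 submatrices in its upper triangular part.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    # Upper bidiagonal matrix: nonzero diagonal and superdiagonal only.
    B = np.diag(rng.uniform(1.0, 2.0, n)) + np.diag(rng.uniform(1.0, 2.0, n - 1), 1)
    T = np.linalg.inv(B)              # upper triangular

    # Any submatrix lying entirely in the upper triangular part is contained
    # in one of the maximal blocks T[:i+1, i:], so checking those suffices.
    upper_ok = all(np.linalg.matrix_rank(T[:i + 1, i:]) <= 1 for i in range(n))
    # The strictly lower part is zero, i.e. every submatrix there has rank 0.
    lower_ok = np.allclose(np.tril(T, -1), 0)
    print(upper_ok, lower_ok)         # True True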
I need to firm up my intuition on this first before I can say anything clever, but I agree it's worth thinking about!