The assertion that bad methods which stumble onto a good result get reinforced, while good methods that fail to confirm a hypothesis are discarded from training, sounds like an extremely important pattern to watch out for: not just with tech like AI, but also in life.