2 points by galaxyLogic 2 hours ago | 1 comment
  • jqpabc123 2 hours ago
    Ok, let me take a wild guess here: this has something to do with the fact that LLMs are really language-prediction engines based on probability, not logic?