Per Microsoft's definition in https://github.com/microsoft/edgeai-for-beginners/blob/main/...:
> EdgeAI represents a paradigm shift in artificial intelligence deployment, bringing AI capabilities directly to edge devices rather than relying solely on cloud-based processing. This approach enables AI models to run locally on devices with limited computational resources, providing real-time inference capabilities without requiring constant internet connectivity.
(This isn't necessarily just Microsoft's definition - https://www.redhat.com/en/topics/edge-computing/what-is-edge... from 2023 defines edge computing as on-device as well, and is cited in https://en.wikipedia.org/wiki/Edge_computing#cite_note-35)
I suppose that the definition "edge is anything except a central data center" is consistent between these two approaches, and there's overlap in needing reliable ways to deploy code to less-trusted/less-centrally-controlled environments... but it certainly muddies the techniques involved.
At this rate of term overloading, the next thing you know we'll be using the word "edgy" to describe teenagers or something...
As an example, the control system network is air-gapped, so to use ML for instrument control or similar, the model needs to run on some type of "edge" compute device inside the production network; all of the inferencing would need to happen locally (i.e. not in the cloud).
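To make the "runs locally" part concrete, here's a minimal sketch of what that inferencing could look like on such a device, assuming a model already exported to ONNX and the onnxruntime package available inside the production network (the file name, input name, and input shape are just placeholders, not anything from the course):

```python
import numpy as np
import onnxruntime as ort

# CPUExecutionProvider keeps everything on the local device: no network calls.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
sample = np.random.rand(1, 8).astype(np.float32)  # stand-in for a sensor reading

outputs = session.run(None, {input_name: sample})
print(outputs[0])
```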
IoT is "edge".
The only place I've seen "edge" used otherwise is in delivery of large files, e.g. ISP-colocated video delivery.
For certain things this will be able to go as far as the device, if you're only ever operating on data the user fully owns; other things will still need data centers, just decentralised and closer to the user via fancier architectures à la the Cloudflare model.
But the modules that compare the different model families are quite good. As are the remaining modules that are "How to deploy to $platform 101", including Microsoft's, of course ;)
Not that I have a better resource at hand for quantization/compression _for beginners_, and I am probably a bad judge of how beginner-friendly Song Han's TinyML course was...
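For anyone wondering what quantization even buys you on these devices, here's a toy sketch of symmetric int8 weight quantization in plain NumPy (not how any particular framework implements it, just the idea of trading a little precision for a roughly 4x smaller footprint):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: map float32 weights onto int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for use at inference time."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)  # pretend layer weights
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"stored at ~1/4 the size, max abs error: {err:.4f}")
```

Real toolchains (and courses like Han's) layer calibration, per-channel scales, and quantization-aware training on top of this, which is exactly where the "for beginners" part gets hard.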
Thank you for any response!
https://github.com/microsoft/edgeai-for-beginners/blob/main/...
Edit: seems like it's like that in most languages lol, at least those with a Latin script
This is a course on how to use Microsoft compute to maximise their profits.
> Welcome to EdgeAI for Beginners – your comprehensive...
Em dash and the word "comprehensive", nearly 100% proof the document was written by AI.
I use AI daily for my job, so I am not against its use, but recently if I detect some prose is written by AI it's hard for me to finish it. The written word is supposed to be a window into someone's thoughts, and it feels almost like a broken social contract to substitute an AI's "thoughts" here instead.
AI generated prose should be labeled as such, it's the decent thing to do.
Is it so hard to believe that there are some people in the world capable of hitting option + “-“ on their keyboard (or simply letting their editor do it for them)?
I am guessing you are one of those people who used em dashes before LLMs came out and are now bitter they are an indicator of LLMs. If that's the case, I am sorry for the situation you find yourself in.
Especially given that there are so many linguistic tics one could pick on instead! “Not x, but y”, the bullseye emoji, etc. But instead they get hung up on a typographic character that is actually widely used, presumably because they assume it only occurs on professionals’ keyboards and nobody would take enough care to use it in casual contexts.
I've been wondering why LLMs seem to prefer the em dash over the en dash, as I feel like the en dash (or hyphen) is used more frequently in modern text.
So:
* fragment a—fragment b (em dash, no space) = traditional
* fragment a — fragment b (em dash with spaces) = modern
* fragment a -- fragment b (two hyphens) = acceptable sub when you can’t get a proper em to render
But en-dashes are for numeric ranges…
What's up with the green checks, red Xs, rockets, and other stupid emoji in AI slop? Is it an artifact from the cheapest place to do RLHF?
I have no proof, sorry.
The decent thing to do is to prefix the slop with the prompt, so humans don't waste their time reading it.
It’s also documentation for an AI product, so I’d kinda expect them to be eating their own dogfood here.