48 points by schappim 16 days ago | 2 comments
  • malshe 15 days ago
    Looks really nice. I plan to try it out this weekend. I'm not familiar with all the Core ML models. Where can I find a list of model names to try out?
  • dsrtslnd23 15 days ago
    Does this handle conversion and quantization from PyTorch? Or is it strictly for running existing Core ML files?
    • schappim 15 days ago
      Nope, but Apple released the Python library "coremltools"[1], which can do the conversion. It supports conversion from PyTorch, TF/TF Lite, ONNX, etc.

      1. https://pypi.org/project/coremltools/
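
      For example, a minimal conversion sketch (the model, input shape, and optional quantization step are illustrative placeholders, not something this repo provides):

      ```python
      # Sketch: PyTorch -> Core ML with coremltools, plus optional
      # 8-bit weight quantization via the coremltools 7+ optimize API.
      import torch
      import torchvision
      import coremltools as ct

      # coremltools converts TorchScript, so trace the model first.
      model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
      example = torch.rand(1, 3, 224, 224)
      traced = torch.jit.trace(model, example)

      # Convert to an ML Program (.mlpackage), the modern Core ML format.
      mlmodel = ct.convert(
          traced,
          inputs=[ct.TensorType(name="image", shape=example.shape)],
          convert_to="mlprogram",
      )

      # Optional: linear 8-bit weight quantization.
      import coremltools.optimize.coreml as cto
      config = cto.OptimizationConfig(
          global_config=cto.OpLinearQuantizerConfig(mode="linear_symmetric")
      )
      mlmodel = cto.linear_quantize_weights(mlmodel, config)

      mlmodel.save("MobileNetV2.mlpackage")
      ```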

      • pzo 15 days ago
        > It supports the conversion of PyTorch, TF/TF Lite, ONNX

        I think it doesn't support TF Lite (only TF SavedModels), and ONNX hasn't been supported for quite a while now, sadly.

        As for the repo, I like it. I actually had to convert a few models yesterday and today, and this would have been useful. I see you use Swift instead of coremltools, so that's great: it should have less overhead for benchmarking.

        Some ideas:

        1) Would love to have this also as an agent skill

        2) It would be good if we could parse the Xcode performance report file and print it in a human-readable format (to pass to an AI); Gemini Pro struggled for a while to figure out the JSON format. A rough sketch follows below.
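
        A schema-agnostic sketch for 2): since the report format is undocumented, it assumes no field names and just flattens the JSON into greppable key paths:

        ```python
        # Sketch: flatten an Xcode performance report (or any JSON file)
        # into human-readable "key.path = value" lines. No schema assumed.
        import json
        import sys

        def flatten(node, path=""):
            # Recursively walk dicts and lists, yielding dotted key paths.
            if isinstance(node, dict):
                for key, value in node.items():
                    yield from flatten(value, f"{path}.{key}" if path else key)
            elif isinstance(node, list):
                for i, value in enumerate(node):
                    yield from flatten(value, f"{path}[{i}]")
            else:
                yield f"{path} = {node}"

        if __name__ == "__main__":
            with open(sys.argv[1]) as f:
                report = json.load(f)
            for line in flatten(report):
                print(line)
        ```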

        • storystarling 15 days ago
          The JSON output makes it easy to wrap as a tool for frameworks like LangGraph, but I would be worried about the latency. Since it is a CLI, you are likely reloading the whole model for every invocation. That overhead is significant compared to a persistent service where the model stays loaded in memory.
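
          For comparison, a sketch of the persistent approach using coremltools' own runtime bindings (the model path and input name are placeholders matching the conversion sketch above):

          ```python
          # Sketch: keep a Core ML model resident so the load cost is paid once.
          # Uses coremltools' macOS runtime bindings; path/name are placeholders.
          import numpy as np
          import coremltools as ct

          model = ct.models.MLModel("MobileNetV2.mlpackage")  # loaded once

          def predict(image: np.ndarray) -> dict:
              # "image" must match the model's declared input name.
              return model.predict({"image": image})

          # Subsequent calls reuse the in-memory model; a one-shot CLI
          # would reload and recompile the model on every invocation.
          for _ in range(3):
              out = predict(np.random.rand(1, 3, 224, 224).astype(np.float32))
          ```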