6 points by nathan-barry 4 hours ago | 3 comments
  • alexkranias 4 hours ago
    Hey, it's Alex, one of the creators of this project. We thought it would be fun to experiment with this odd mix of real, physical media and reality-altering diffusion models. I'm very happy with the results lol!

    The entire thing can be built for around $35-40.

    Here's our parts list:

    - Raspberry Pi Zero 2 W, for its low power draw, Wi-Fi, decent memory (512 MB), and low cost ($15), plus an Arducam camera module
    - Cheap TTL AliExpress thermal printer, with sticker thermal paper
    - Rotary encoder for switching modes, a push button for the shutter, and a tactile switch for power
    - I2C OLED display for showing our modes and settings
    - Two 18650 batteries and a 3 A, 5 V UPS board, giving good battery life and regulation, plus a 1000 µF capacitor to guard against current spikes from the printer
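
    For anyone curious how the controls hang together in software, here's a rough sketch of the input handling (pin numbers and mode names are hypothetical, and gpiozero here is just for illustration):

      # Hypothetical input handling for the mode dial and shutter.
      from signal import pause
      from gpiozero import Button, RotaryEncoder

      MODES = ["passthrough", "anime", "watercolor", "sketch"]  # placeholders
      encoder = RotaryEncoder(a=17, b=18, max_steps=0)  # mode dial
      shutter = Button(27)                              # shutter push button

      def on_rotate():
          # In the real build this would redraw the I2C OLED, not print.
          print("mode:", MODES[encoder.steps % len(MODES)])

      def on_shutter():
          print("capture -> diffusion -> thermal print")

      encoder.when_rotated = on_rotate
      shutter.when_pressed = on_shutter
      pause()  # block forever, waiting for GPIO events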

    We have a longer write-up on our Devpost: https://devpost.com/software/diffuji?ref_content=user-portfo...

    We submitted this to TreeHacks 2026 and ended up winning the Most Creative grand prize :)

  • vunderba 3 hours ago
    Nice. I’ve seen a couple of these - Instagen [1] is probably one of the more recent ones - but I’d love to see Diffuji stand out by adding some kind of BYOK support, or by proxying requests to a home machine with a powerful enough GPU (rough sketch of what I mean below).

    [1] - https://www.instagram.com/instagen.camera
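
    To sketch the proxy option: the camera just POSTs each frame to a small img2img endpoint on a box you control (host, route, and fields below are all hypothetical):

      # Hypothetical camera-side call to a self-hosted img2img server.
      import requests

      def diffuse_remote(image_path, style):
          with open(image_path, "rb") as f:
              resp = requests.post(
                  "http://homebox.local:8000/img2img",  # your home GPU box
                  files={"image": f},
                  data={"prompt": style, "strength": "0.6"},
                  timeout=120,  # diffusion can take a while
              )
          resp.raise_for_status()
          return resp.content  # stylized bytes, ready for the thermal printer

      stylized = diffuse_remote("capture.jpg", "watercolor illustration")
      with open("stylized.png", "wb") as out:
          out.write(stylized)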

  • randomgermanguy 4 hours ago
    Cool idea, but kinda sad that it has to go through a cloud provider. I feel like there's a real possibility of making this fully local with an accelerator board (Coral TPU or something)? The longer wait time surely isn't an issue, considering how many people still use Polaroids.
    • whackamadoodle 2 hours ago
      We were looking to add on-device styles on the Raspberry Pi itself to keep the device cost low, though a Coral TPU would make this easier. The OnnxStream library appears to be able to do SD1.5 generation in 10 minutes on a Pi Zero, so with some optimization and a reduced image resolution, img2img may be possible on the Pi in ~1 minute. We were also looking at style transfer models, which are much more lightweight and could run fast on a Pi (https://github.com/tyui592/AdaIN_Pytorch/tree/master); a minimal sketch of the core op is below. Eventually our goal is to make this both on-device and relatively cheap.
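
      The AdaIN op itself is tiny, which is part of why it's appealing for the Pi (illustrative PyTorch, not the exact code from that repo):

        # Minimal AdaIN (Huang & Belongie 2017): renormalize content
        # features to the style features' per-channel mean and std.
        import torch

        def adain(content, style, eps=1e-5):
            # content, style: (N, C, H, W) feature maps from an encoder
            c_mean = content.mean(dim=(2, 3), keepdim=True)
            c_std = content.std(dim=(2, 3), keepdim=True) + eps
            s_mean = style.mean(dim=(2, 3), keepdim=True)
            s_std = style.std(dim=(2, 3), keepdim=True) + eps
            return s_std * (content - c_mean) / c_std + s_mean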
    • alexkranias 4 hours ago
      We were looking into OnnxStream (https://github.com/vitoplantamura/OnnxStream) and modifying it to support img2img. We got pretty close, but yeah, the capability of running diffusion models on a Raspberry Pi is quite limited lol.

      Alternatively we could use compute from your iPhone, but that adds a dependency on external hardware that I don't quite like. We could use a Jetson, but then the power draw is quite high. I agree with you that on-device inference is the holy grail; the best approach is something we're still figuring out.
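
      For context, the standard img2img flow we'd be replicating inside OnnxStream looks roughly like this, sketched with the Python diffusers library on a desktop GPU (illustrative, not our actual pipeline):

        # SD1.5 img2img: encode photo to latents, noise partway, denoise.
        import torch
        from diffusers import StableDiffusionImg2ImgPipeline
        from PIL import Image

        pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        photo = Image.open("capture.jpg").convert("RGB").resize((512, 512))
        # strength = how far the latents are noised: 0 returns the input,
        # 1 is effectively txt2img; mid values restyle but keep the scene.
        out = pipe(prompt="watercolor illustration", image=photo,
                   strength=0.55, guidance_scale=7.0).images[0]
        out.save("stylized.png")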