Also, the developers are super active in their Discord and are fast to fix bugs. Love this project.
about: https://firebender.com/about
terms: https://firebender.com/tos
privacy: https://firebender.com/privacy
> Some features may use a custom model like Apply, Autocomplete, Agent, so these proxy settings won’t be relevant, and your code will be sent to Firebender servers for processing.
Which suggests there's always some cloud component? How usable is the plugin with a fully local setup?
Right now, requests go directly from the plugin to your proxy if you have it configured (i.e. if you set up a clean VPC/VPN network environment with no outbound requests besides Anthropic): chat, cmdk, and agent will all work. We're still working on the DevX here and need someone to work closely with us on it. Enterprise is also pushing us to make this more friendly.
But there are massive downsides: we use some custom models and hosting infrastructure to speed things up, and you lose those in this setup. Code edits, for example, will take much longer.
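For the "requests go directly to your proxy" part, here's a minimal JVM-side sketch of what routing all of a plugin's traffic through a user-configured proxy looks like; the host and port are hypothetical placeholders, not Firebender's actual settings:

```kotlin
import java.net.InetSocketAddress
import java.net.ProxySelector
import java.net.http.HttpClient

// Every request made with this client goes through the configured proxy,
// so all outbound traffic is confined to one audited hop.
fun clientVia(proxyHost: String, proxyPort: Int): HttpClient =
    HttpClient.newBuilder()
        .proxy(ProxySelector.of(InetSocketAddress(proxyHost, proxyPort)))
        .build()

fun main() {
    // "proxy.internal.example" is a made-up hostname for illustration.
    val client = clientVia("proxy.internal.example", 8080)
    check(client.proxy().isPresent)
}
```

Pairing this with firewall rules that block all other egress is what gives the "no outbound requests besides your proxy" guarantee.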
For fully local LLMs, we just need to set up a unified API client, but there aren't any good Kotlin ones, and I'm scrambling to write one from scratch. It's very annoying how the Anthropic/OpenAI/etc. APIs each have their own nuances, and all the "open source" gateways are cloud-hosted. I don't think people will want to host a gateway locally; the best experience is just to put your keys/base URL in settings, which could point at localhost:3000.
I can solidify this option with stronger guarantees.
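To make the "keys/base URL in settings" idea concrete, here's a minimal sketch of a unified client in Kotlin that targets an OpenAI-compatible endpoint; the names (`ChatConfig`, `buildBody`) and the localhost:3000 gateway are illustrative assumptions, not Firebender's actual code:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical settings object: only the base URL and key vary per provider.
data class ChatConfig(val baseUrl: String, val apiKey: String, val model: String)

// OpenAI-style /v1/chat/completions body. Anthropic's API differs (top-level
// "system" field, required "max_tokens"), which is exactly the kind of
// per-provider nuance a unified client has to paper over.
fun buildBody(model: String, userMessage: String): String =
    """{"model":"$model","messages":[{"role":"user","content":"$userMessage"}]}"""

fun send(cfg: ChatConfig, userMessage: String): HttpResponse<String> {
    val request = HttpRequest.newBuilder()
        .uri(URI.create("${cfg.baseUrl}/v1/chat/completions"))
        .header("Authorization", "Bearer ${cfg.apiKey}")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(buildBody(cfg.model, userMessage)))
        .build()
    return HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
}

fun main() {
    // Pointing baseUrl at a local OpenAI-compatible server (e.g. localhost:3000)
    // keeps every request on your machine.
    val cfg = ChatConfig("http://localhost:3000", "local-key", "llama3")
    println(buildBody(cfg.model, "hello"))
}
```

With this shape, "fully local" is just a base URL swap in settings rather than a separately hosted gateway.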
Separately, we're working on SOC 2 at the moment and should have Type 1 soon, with Type 2 pending the observation period.
I know trusting my word is difficult because I'm a random person on the internet, but we do NOT store your code data or use it to improve our product in any way (like training models).
The same is true for IntelliJ and Kotlin/Java support.