Some before/after experiments with editing images using Kontext:
https://specularrealms.com/ai-transcripts/experiments-with-f...
If the claimed model is trained from scratch, the new model will have zero capability (it will basically generate gibberish words or noise). If it is a derivative of the suspected model, it will do something sensible.
It is a bit more interesting for diffusion models, because you can fine-tune to a different objective, which makes this investigation harder, but not impossible.
Additionally, certain prompts will produce nonsensical but specific outputs known only to BFL.
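One simple version of this check can be done directly on the weights rather than the outputs: a fine-tune drifts only slightly from its base checkpoint, so its flattened parameter vectors stay highly correlated with the original, while an independently trained model's weights are essentially uncorrelated. A minimal sketch, with made-up toy vectors standing in for real checkpoint tensors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flattened weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical flattened weights; real checks would iterate over
# matching tensors in two checkpoints.
base      = [0.12, -0.34, 0.56, 0.78]
finetuned = [0.11, -0.33, 0.57, 0.80]  # small drift: likely a derivative
scratch   = [0.91, 0.02, -0.44, 0.13]  # uncorrelated: trained independently

print(cosine_similarity(base, finetuned))  # close to 1.0
print(cosine_similarity(base, scratch))    # close to 0.0
```

This only works when the architectures line up tensor-for-tensor; the output-based probes described above (gibberish vs. sensible generations, or prompts with known quirky outputs) apply even when they don't.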
I'm actually all for open training, but I think it's only fair that you treat the model the way you treated the life's work of others.
Model weights are not copyrightable creative works, no matter how much various companies wish they were.
At least, they're not copyrightable until either legislatures extend the list of what's copyrightable, or courts definitively show their willingness to reinterpret the words in the existing definitions far outside their established legal meanings, their established meanings in common speech, and any sane analogy to those meanings.
Yes, I am aware that collections and databases are copyrightable. Models don't have the elements required for a copyrightable collection or database. I'm also aware that software is copyrightable. Models don't have the elements required for copyrightable software. They just flat out aren't works of authorship in any way. How much effort goes into creating them is irrelevant; that's not part of what defines a copyrightable work.
I was at the top of the list ... pitched it poorly. That night I made a party game to practice: https://pitchanary.com/
The rules might need some work.
It's no stretch to say that in the hackathons I won, all the projects were janky, and in the hackathons I lost, all the products worked well and did exactly what I said.
I really want an AI to jam with on a canvas rather than to just have it generate the final results.
I have been hoping someone would pick up on the time series forecasting innovations in the LLM space, combine them with data from e.g. the Google quick draw dataset, and turn that into a real-time “painting partner” experience, kind of like chatting with an LLM through brush strokes.
Now that BFL has released a dev model, I'd love to see a Kontext plugin for Krita, given that it already has one for Stable Diffusion!
This was pessimistic: native support landed today, with a workflow and a pointer to an alternate fp8 model download for people who can't run the full fp16 checkpoint.
https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux...
Tomorrow… like 4GB if you have an hour.
There's an FP8 version that's the default in the ComfyUI template that shipped with Kontext support; I've seen reports of it running in 12GB or less, and I'm running it at this moment in 16GB.