Eventually, this led me to writing my own indie book on generative art with Go: https://p5v.gumroad.com/l/generative-art-in-golang, which led me to a talk I gave on GopherCon Europe: https://youtu.be/NtBTNllI_LY?si=GMePA3CfVQZJq2O7
These were great times, but I think the book is not worth buying anymore. Sadly, AI-generated imagery sort of killed the mojo of algorithmic art for me, and I've been trying to get back to it for the last few years.
That is quite wonderful.
> Sadly, AI-generated imagery sort of killed the mojo of algorithmic art for me,
I am surprised you did not specifically mention Nake's provocative writing "There Should Be No Computer-Art" : https://dam.org/museum/essays_ui/essays/there-should-be-no-c...
His argument is still 100% relevant in the age of AI.
I wasn't unhappy with some of the results, but it was an interesting and frustrating struggle.
https://www.flickr.com/photos/32832718@N00/17951484570/in/ph... https://www.flickr.com/photos/32832718@N00/19868350512/in/ph... https://www.flickr.com/photos/32832718@N00/17952106385/in/ph...
You can push AI in the same way and end up in some unusual spaces, but the quality often degrades when you get there.
https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fr...
https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fo...
I wrote a JS canvas library[1] partly because existing libraries of the time (2013) didn't do what I wanted a canvas library to do: things like animated gradients and patterns, etc. I'm still working on the library today, so that's 12+ years of my spare time gone!
Generative art - such as challenges like Genuary[2] - is a key tool for giving me ideas on how to develop the library further. I keep CodePens of some of my better efforts[3] around as a set of extra tests to check for breaking changes as I fiddle with the library.
[1] - https://github.com/KaliedaRik/Scrawl-canvas
[2] - https://genuary.art/
I used to (and occasionally still do) make generative art and found this too! Although I'm not really sure why - I still love good generative art and don't really consume any AI generated art intentionally.
I think possibly one of the main things that happened was a lot of online generative art communities got flooded first by NFTs, and then AI generated art. I find it a lot harder to reliably find other people's generative art these days.
I only realized later that it had a name, but ever since I first got my hands on a computer, I've been making generative graphics with code one way or another. First in GW-BASIC, later in Flash, and more recently in JavaScript.
In my view generative art is art created by an autonomous system. The trick is to balance randomness with intent and to create clever algorithms with maximum visual impact. Nowadays the term "generative" is more associated with AI, so to differentiate it from AI art, I call it Algorithmic Art now. I wrote a piece about it: https://monokai.com/articles/algorithmic-art-as-a-subset-of-...
And I know NFTs are frowned upon here, but I've always found Algorithmic Art a perfect match for NFTs. Where before I created autonomous systems that ephemerally generated different outputs each run, now you can store specific outputs on the blockchain by just storing the PRNG seed. There are platforms that facilitate this, like Artblocks or Fxhash.
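The seed trick works because a generative piece can be a pure function of its PRNG seed. Here's a minimal sketch of the idea using mulberry32, a tiny seedable PRNG that's popular in generative art circles (not any particular platform's implementation):

```javascript
// mulberry32: a tiny seedable PRNG often used in generative art.
// Storing only the 32-bit seed is enough to reproduce the whole piece.
function mulberry32(seed) {
  return function () {
    seed |= 0;
    seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// A "piece" is then a pure function of the seed: same seed, same artwork.
function piece(seed, n = 5) {
  const rng = mulberry32(seed);
  return Array.from({ length: n }, () => rng());
}

const a = piece(42);
const b = piece(42);
console.log(a.every((v, i) => v === b[i])); // true: identical output
```

The important property is determinism: as long as every random decision in the system flows through the seeded PRNG, a single stored integer pins down the entire output.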
Correct is "generative images". Or "kitsch".
Using a blockchain, you can store a specific output of your generative system in a way that's definitive and collaboratively agreed upon. If you believe in collecting / trading digital assets, that's a prerequisite for algorithmic systems.
One is of course allowed to care about proof of ownership and the method used to achieve it if they like :). I just didn't follow the response in the context of the question of how it's different from doing the same thing without the blockchain.
Unrelated: Kickass you're the Monokai author - I still use that today! Have you ever posted a retrospective about Monokai?
Regarding Monokai, I’ve written some history about it here: https://monokai.pro/history :)
For example, it's just as easy - and more accessible - to put the code and example configurations with output in a github repository.
E.g., are the artists who worked on Flow (2024) no longer artists because the resulting images are generated rather than drawn? Most people would disagree and put forth a very different definition, given that they were already credited as the artists on the piece before I asked the question.
Even the arguments in the courts about AI, which is a very different kind of "generated" output, stuck to showing that the outputs can't be copyrighted, rather than arguing about whether the outputs were still art.
It's what I doodle with to generate images using a stack-based program per pixel. Every character is a stack operation; you have 50 characters to make something special.
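To make the idea concrete, here's a minimal sketch of such a per-pixel stack machine. The opcodes below are my own invention, not the commenter's actual character set:

```javascript
// Each character of the program is one stack operation, evaluated per pixel.
function runProgram(program, x, y) {
  const stack = [];
  for (const op of program) {
    switch (op) {
      case "x": stack.push(x); break;                       // push normalized x
      case "y": stack.push(y); break;                       // push normalized y
      case "+": stack.push(stack.pop() + stack.pop()); break;
      case "*": stack.push(stack.pop() * stack.pop()); break;
      case "s": stack.push(Math.sin(stack.pop() * Math.PI * 2)); break;
      case "d": stack.push(stack[stack.length - 1]); break; // duplicate top
      default: break;                                       // ignore unknowns
    }
  }
  return stack.pop() ?? 0; // final value becomes the pixel's brightness
}

// Evaluate the program once per pixel to fill a tiny grayscale image.
function render(program, size = 4) {
  const img = [];
  for (let j = 0; j < size; j++)
    for (let i = 0; i < size; i++)
      img.push(runProgram(program, i / size, j / size));
  return img;
}

console.log(render("xy*s", 4).length); // 16 values, one per pixel
```

With a character budget of 50, the fun is in how much visual structure you can squeeze out of a handful of stack operations.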
Mine is also pixel coloring at the lowest level. I have a shading kernel on the GPU doing the low-level work, mainly applying colors recursively like a fractal. I got sick of writing shader code, so I made a high-level language that supports math operations in concise expressions and compiles to shader code for the GPU. The main thing is that it supports functions. That lets me reuse code and build up abstractions. E.g., once I got the "ring" pattern settled, it's defined as a function and I can use it in other places, combine it with other functions, and have it called by other functions.
One of these days when I get some time, I'll formalize it and publish it.
I'm not sure art is still meant to be a widely shared experience, and smarter people than me should tackle this idea.
I'm glad people are interested in art discourse and exploring art in general. Art is a very personal thing. Different people see art in different ways, yet some themes recur time after time.
I got my insight into art from music and why people love it so much. Music and songs are basically repeatable patterns with slight variations along multiple dimensions: pitch, beat, tone, rhyme, lyrics, etc. The human mind is a super pattern-processing machine, as part of our evolutionary survival traits. Pattern brings structure, abstraction, and comfort. But strictly repetitive patterns bore the mind. Humans love patterns, but with variation and imperfection.
The human mind is also very good at filling in the missing pieces of a pattern, again from our evolutionary survival traits. Our ancestors could look at the tail of an animal and fill in the blank that it was a tiger hidden behind a big rock. The filling in of missing pieces comes from experience and learning. It really is the original generative AI.
I believe the variation and imperfection in patterns trigger the mind's fill-in-the-blank function, which triggers its generative function, which can run wild generating a wide range of imagination. That's why art can draw different reactions from different people: each person has their own life experience and thus a different generated result.
I think art, at the most basic level, is patterns with variation, imperfection, and blanks. Computer-generated art thus needs to fulfill at least that basic requirement to be called art.
For me (and many others), the “how” of art is just as important as the “what”, if not more important. There are installations that reflect this, many of which are interactive and allow the observer to become part of the art itself.
And if you extend the definition of “generative”, it can include many other methods, like swinging a paint can with a hole in the bottom over an empty canvas to create random patterns based on pendulum movement. I, like many others, recognize the amount of creativity and effort that goes into this type of “generative” art, especially in comparison to others. I also appreciate the creativity and complexity of the grandparent's generative system.
I started out in all the usual ways, inspired by Daniel Shiffman: making generative art first using Processing, then p5.js, and now mostly creating art by writing shaders. Recently, after being laid off from my job, I took my obsession further and released my very first mobile app - https://www.photogenesis.app - as an homage to generative art.
It's an app that applies various generative effects/techniques to your photos, letting you turn your photos into art (not using AI). I'm really proud of it and if you've been in the generative art space for a while you'll instantly recognise many of the techniques I use (circle packing, line walkers, mosaic grid patterns, marching squares, voronoi tessellation, etc.) pretty much directly inspired by various Coding Train videos.
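For anyone new to one of those techniques: circle packing is simpler than it sounds. Here's a generic rejection-sampling version (not the app's actual code, which isn't public) that repeatedly tries random centers and keeps a circle only if it fits:

```javascript
// Pack non-overlapping circles into a width x height canvas.
// rng is any function returning values in [0, 1), e.g. Math.random.
function packCircles(rng, width, height, tries = 500, rMin = 2, rMax = 20) {
  const circles = [];
  for (let t = 0; t < tries; t++) {
    const x = rng() * width, y = rng() * height;
    // Largest radius that stays inside the canvas...
    let r = Math.min(rMax, x, y, width - x, height - y);
    // ...and doesn't overlap any existing circle.
    for (const c of circles) {
      r = Math.min(r, Math.hypot(x - c.x, y - c.y) - c.r);
    }
    if (r >= rMin) circles.push({ x, y, r });
  }
  return circles; // draw each {x, y, r} with canvas arc()
}

const circles = packCircles(Math.random, 200, 200);
console.log(circles.length > 0); // some circles were placed
```

Because each new radius is clamped to the distance to every existing circle minus that circle's radius, no two circles can overlap by construction. Variants (packing inside a mask, sizing circles by image brightness) are what turn this into a photo effect.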
I love the generative art space and plan to spend a lot more time doing things in this area (as long as I can afford it) :-)
I find this to be a key insight. I've been working on a black-and-white film app for a while now (it's on my website in profile if you're curious), and in the early stages I spent time poring over academic papers that claim to build an actual physical model of how silver halide emulsions react to light.
I quickly realized this was a dead end because 1) they were horribly inefficient (it's not uncommon for photographers to have 50-100MP photos these days, and I don't want my emulator to take several minutes to preview/export a full image), and 2) the results didn't even look that good or close to actual film in the end (sometimes to the point where I wondered whether the authors had actually looked at real film, rather than getting lost in their own physical/mathematical model of how film "should" behave).
Forgetting the physics for a moment, and focusing instead on what things look and feel like, and on how that can be closely approximated with a real-time computer graphics approach, yielded far better results.
Of course the physics can sometimes shed some light on why something is missing from your results, and give you vocabulary for the mechanics of it, but that doesn't mean you should try to emulate it accurately.
I read this interview with spktra/Josh Fagin and how he worked on digitally recreating how light scatters through animation cels, which creates a certain effect that is missing from digital animation - and it was validating to read a similar insight:
"The key isn’t simulating the science perfectly, but training your eye to recognize the character of analog light through film, so you can recreate the feeling of it."
He showed some techniques. I think someone asked a question about the best way, but the presenter got a little ranty and basically said the way that looks best to your eye is the best way.
And as you point out, at capture time you can use color filters to affect the image; processing too can lead to fairly different results based on what developer you use.
This is in contrast to color film, which I find to be much more rigid and narrow in how it’s meant to look and be processed; one could argue there’s much less range for interpretation from negative to final image (especially so with slide film, which completely falls apart if it’s ever so slightly over/under exposed).
But it's still useful to have some of those effects catalogued and easily accessible as presets. Photoshop doesn't quite do that, which on the one hand makes it hard for beginners to get a good look, but also leaves some space for those who want to go deeper to get more creative.
Both written by the same guy who wrote the Janet for Mortals book, about the Janet language, which supports both those sites.
I've really wanted to see if I could combine those tools to make generative art inspired by Arabic art. Does anyone know of any projects doing that? There is a lot of crossover between modern generative art and ancient Arabic art.
https://web.archive.org/web/20140701114342/http://www.cgl.uw...
https://web.archive.org/web/20180426122308/http://www.wozzec...
Of course the topic is still alive to some extent, but the above 2 "dead" homepages remain some of the best entry points I've found overall.
The subtitle is "Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation".
It's a wonderful book to own as a physical copy, and the contents are very inspiring for generative art.
[0] https://www.amazon.com/Computational-Beauty-Nature-Explorati...
I used it to create art: basically taking animal photos and using the DNA sequence from that animal to recreate the photo using the four letters. (I did four passes using different-size letters and layered them in Gimp.) People seem to like them, and they got into an art:science show.
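One pass of that idea is a simple mapping from sequence position to grid cell. A rough sketch (the commenter layered four passes in Gimp; the function and photo-sampler here are hypothetical):

```javascript
// Lay the DNA sequence onto a grid, one base per cell, coloring each
// letter from the photo at that cell's position.
// photo: function (u, v) -> color string, with u/v in [0, 1].
function letterPass(sequence, photo, cols, rows) {
  const cells = [];
  let i = 0;
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      cells.push({
        letter: sequence[i++ % sequence.length], // wrap around the sequence
        color: photo((x + 0.5) / cols, (y + 0.5) / rows),
      });
    }
  }
  return cells; // draw each {letter, color} with canvas fillText
}

// Tiny demo with a fake horizontal-gradient "photo":
const cells = letterPass("ACGT", (u, v) => `rgb(${Math.round(u * 255)},0,0)`, 4, 4);
console.log(cells.length);                        // 16 cells
console.log(cells.map((c) => c.letter).join("")); // ACGTACGTACGTACGT
```

Running the same pass four times at different grid resolutions and compositing the layers gives the depth effect described above.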
The Coding Train has a lot of videos on using p5.js. Some of them are more sophisticated than the childish iconography suggests. It's pretty fun.
One major truth discovered:
Art is always in the eye of the beholder.
I like to think of fine art as a subjective human expression to stir emotion.
I wrote an application called Axo which is a graph-based programming tool that outputs paths to be converted into HP-GL, SVG, or G-Code (we've used this for engraving with a laser cutter). Axo is an homage to Reaktor, Patternodes, ArtMatic & Max, all applications I have enjoyed using.
Here's a gallery of some of the bits I am happiest with so far: https://axo.mattmower.com/projects/mausart/gallery
At some point I want to pair the virtual image with a picture of the real-life plot. Some of the plots look great on nice paper.
I use it to generate a new wallpaper every time my computer is booted.
though really i just mean changing wallpapers, like https://satelliteeyes.tomtaylor.co.uk/
my thing is a collection of public city webcams from around my state, that get applied to various displays on a cron. i add noise, saturation, etc. sometimes two displays get such a wallpaper that i find the combination lovely
( this is a top/bottom setup https://i.imgur.com/Xc8RdnW.jpeg )
now, this isn't really generative. but i've imagined and toyed with it. splotches of greyscale, or flip that and only splotches of color
i've imagined embedding secrets, whatever that means
https://www.youtube.com/watch?v=HmV_r_s6JgE&list=PLE1355A8B1...
I think there are newer versions of this book, though I haven't tried to find them. It's a hefty coffee table book as-is.
https://v1.benbarry.com/project/f8-conference
I ended up learning enough processing to mimic a lot of what he did with the connected dots. It was great.
I'd love to get back into this sometime! Gotta have a reason though...
TouchDesigner is more popular and, I suppose, declarative, but vvvv is more general-purpose and closer to the Processing workflow. It's a very weird tool I've used for everything from MIDI instruments, live installations, escape rooms, and VJ rigs to, well, proc art.
Autodesk funded some really interesting projects.