Neither new nor unique. It's been done, many times. The classic is Kinoautomat, 1967.[1]
Much video game design revolves around how to keep to the plot while giving the user some freedom. If the user is locked to a path, the game is called a "track ride". If the user can do whatever they want, it's an open-world game. Resolving that dichotomy is hard, but has been done successfully many times. GTA V is a good example.
> Most games in the Dragon's Lair series are interactive films where the player controls Dirk the Daring, in a quest to save Princess Daphne. The game presents predetermined animated scenes, and the player must select a direction on the joystick or press the action button in order to clear each quick time event, with different full motion video segments showing the outcome.[10] A perfect run of the 1983 arcade game with no deaths lasts no more than 12 minutes. In total, the game has 22 minutes or 50,000 frames of animated footage, including individual death scenes and game over screens.
https://en.wikipedia.org/wiki/Dragon%27s_Lair
> If the user is locked to a path, the game is called a "track ride". If the user can do whatever they want, it's an open-world game. Resolving that dichotomy is hard
Actual generative AI (as opposed to what's in the OP) holds promise for solving this conflict by acting as the storyteller in place of the game designer. I'm curious to know what's happening in this space.
I played with a piece of music software of his many years ago now, mid 90s it was. Can't remember the name of it, but it was an early attempt at a sort of generative music.
Simple concept - choose a few basic settings like BPM, then drag some instruments (represented as blobby icons) into a 2D box which represents the 'soundscape'. The horizontal axis of the soundscape represented displacement on the stereo channels, the vertical represented volume.
Each instrument would play a part in the music. I can't remember if they were samples or used the old MIDI synth stuff, or how it was decided what notes they would play. The instrument icons were mobile in the box and would move around, bouncing off the walls, shifting from left to right speaker while disappearing and reappearing in the music that was generated.
The idea was that the music was infinite and unique.
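From that description, the core mechanic sounds simple enough to sketch. Here's a minimal, purely speculative Python reconstruction (none of this is the actual software; the class, names, and ranges are all invented): horizontal position maps to stereo pan, vertical position maps to volume, and the icons bounce off the walls of the box.

```python
# Speculative sketch of the "soundscape" described above; not the real
# program, just an illustration of the pan/volume mapping.

class InstrumentBlob:
    """An instrument icon bouncing around a 1.0 x 1.0 soundscape box."""

    def __init__(self, name, x, y, dx, dy):
        self.name = name
        self.x, self.y = x, y      # position inside the box
        self.dx, self.dy = dx, dy  # velocity per tick

    def step(self):
        # Move one tick, bouncing off the walls of the box.
        self.x += self.dx
        self.y += self.dy
        if not 0.0 <= self.x <= 1.0:
            self.dx = -self.dx
            self.x = min(max(self.x, 0.0), 1.0)
        if not 0.0 <= self.y <= 1.0:
            self.dy = -self.dy
            self.y = min(max(self.y, 0.0), 1.0)

    def mix_params(self):
        # Horizontal position -> stereo pan (-1 = hard left, +1 = hard right);
        # vertical position -> volume (0 = silent, 1 = full).
        return self.x * 2.0 - 1.0, self.y
```

A driver loop would presumably step each blob once per beat at the chosen BPM and hand the (pan, volume) pair to whatever synth backend renders the notes.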
Simple idea, fun to play with, I wonder if anyone got much more than an "Oh, neat" and five minutes tinkering out of it though...
I bring it up because even though that was 30 years ago, it seems to be on-theme with this project.
(Edit - https://en.wikipedia.org/wiki/Koan_(program) - turns out not to have been his creation, but he used it to publish music and wrote about it)
They made it clear that it's not AI. They could have been clearer on what it is instead, but the impression I got is that it's procedural generation with a good old PRNG.
It's not AI, and it's not just random clips either. The opening and the ending are always the same. Some parts, like the segment on Oblique Strategies and how they shaped the recording of Bowie's Moss Garden, were played (almost?) every time. The bit showing the evolution of U2's Pride from a yodeling demo to the finished song played about half the time. Same with the one about his Windows 95 startup sound. I think the later screenings had more examples of generative pixel/glitch art (each of which, presumably, was unique).
My feeling was that segments were divided into categories and/or tags; their selection was like a chef's menu at a restaurant, where you don't know what you'll get, but you can expect that some kind of dessert is always at the end.
Also, I don't think it's exactly true that each performance is different. Between segments, there's some computer output scrolling by, and it includes the filename of the next clip. At the start of the movie, the (fake?) filename includes the venue and the date. I didn't stick around for two consecutive shows, but I think they're identical throughout the day. The 24h streams were different, of course.
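If that guess is right, the assembly logic could be as simple as this sketch (entirely speculative; the filenames, weights, and seeding scheme are all invented for illustration): seed a PRNG with venue and date so every show that day is identical, keep fixed bookends, and include each middle segment with some probability.

```python
# Purely speculative reconstruction of how a screening might be assembled.
import random

OPENING, ENDING = "opening.mov", "ending.mov"  # fixed bookends

# (clip, probability of inclusion) - all names and weights invented.
POOL = [
    ("oblique_strategies.mov", 1.0),   # played (almost?) every time
    ("u2_pride_demo.mov",      0.5),   # played about half the time
    ("win95_startup_sound.mov", 0.5),
]

def build_program(venue: str, date: str) -> list[str]:
    # Same venue + date -> same seed -> identical shows all day.
    rng = random.Random(f"{venue}-{date}")
    middle = [clip for clip, p in POOL if rng.random() < p]
    rng.shuffle(middle)
    return [OPENING, *middle, ENDING]

print(build_program("BFI-London", "2024-07-13"))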
Likewise, I'm glad Eno found a way to fund 500 hours of digitization of ephemera, but again, it needs to be curated, not put into an ffmpeg script.
I loved the Endless Eight, personally, but I watched them one a day after the event was over. Having read the LNs, I knew what was coming as soon as I got a whiff from the internet of the first episode, so I just held off, waited until it was over, and enjoyed.
Depending on where I am, I get something different from this film.
This is a much smaller flaw than the failures of intelligence in generative systems. (E.g.: "make the visuals for the movie M as if created by director D" - which can result in a formal exercise without the depth that director D would have brought.)
The sequencing of the edit is, of course, an artistic process that expresses an intelligent intention - a deliberate, grounded choice.
Also, artists need time to develop. Sturgeon's law isn't just for artists, you know. Every artist knows that you generally have to create a lot of crap to get good. So when people give out about crap art, they're really just telling on themselves: "I don't create".
Finally, for fun, try comparing fossil fuel subsidies to artist subsidies some time. "But we need fossil fuel" - try going a week without any art; no music, no movies, no games.
We've been making art for a really long time. We've been smearing colors onto surfaces for at least tens of thousands of years, and carving patterns into rocks and shells for at least hundreds of thousands of years.
Everywhere you find human communities, you find some kind of art. We must need it for something; if we didn't, it wouldn't be so ubiquitous.
Like Bandersnatch, or the 20 other Netflix interactive titles (most gone now - https://www.netflix.com/browse/genre/2869704)
They are shit. People either want a shared experience that doesn't require thinking, or full interactivity all the way up to an actual computer game; the in-between is garbage.