1. Affordability: A lot of people, especially in third-world countries, are very poor and can't afford to buy hardware to run Turbobloat.
2. e-Waste: Producing computer chips is very bad for the environment. If modern software weren't Turbobloated, you would buy new hardware only when the previous hardware broke and wasn't repairable.
3. Not putting up with Turbobloat: Why spend money on another computer if you already have one that works perfectly fine? Just because of someone else's turbobloat? You could buy 1000 cans of Dr. Pepper instead.
Took the words from my mouth. What a great project. Please keep posting your progress.
Still, higher resolutions were not just invented because of Turbobloat.
This was just a joke from the site that I actually took seriously!
There is no 800x600 limit.
I assume you use a refrigerator and not a hole in the ground with ice. Have you been manipulated into giving money to Big Appliance?
Some people were teenagers when that was the best you could get, so I'm guessing they see it as a "good old days" baseline that they can be principled about while indulging their nostalgia.
First off, I want to say you can totally have a design ethos that covers game engines as much as irrigation systems -- Lee Felsenstein explicitly cited Ivan Illich's notion of 'convivial technology' as an influence on his modems. And Illich mostly talked about bicycles.
What I see in this project is a specific kind of appropriate technology -- 'toaster compatibility' -- mixed with conscious adoption of old methods and aesthetics to serve and signal that end. Which is cool, IMO.
HTMX uses similar techniques in trying to 'bring back' hypermedia and reduce dependencies, although I think they're after a different kind of simplicity. And of course, their Hypermedia Systems book makes similar nods to 90s-software aesthetics: https://hypermedia.systems/
Still, for a simple game, limiting to 800x600 for performance and dev reasons - why not? But it does mean I see no use case for it myself.
This paper [1] has some discussion of testing differences between 16 °C, 25 °C, and 31 °C ambient exhaust conditions. The difference under testing is actually fairly significant: roughly 0.35, 0.70, and 1.05 kWh per 24 h at 16 °C, 25 °C, and 31 °C respectively. Refrigerators in the experiments were kept at ~5 °C (approx. 600 tests).
[1] https://d1wqtxts1xzle7.cloudfront.net/82169783/j.ijrefrig.20...
Of course this might still be micro-optimization from a rural Africa point of view. And a part of the reason for running the fridge is still just convention and convenience.
Also their AI upscaling makes it look like the guy is wearing foundation and makes it hard to take seriously lol.
Terrible
Of course after some point a higher rendering resolution starts giving diminishing returns if the resolution for the source material isn't also increased.
Except different companies sell different things. This is like the conspiracy that women's pants don't have pockets to sell more purses.
Oh my god, this explains everything!
(btw, I recently learned that the 9/11 inside-job conspiracy has evolved. Nowadays the standard theory is that there weren't even planes in the first place, just bombs and smoke.)
That is what I would assume, but so far I haven't found a reason explaining the limit. It might also just be that way because the author likes it that way.
I fear the turbobloat is still with us.
If you're sincere about that comparison then I think you're missing the point.
Being able to run something on fifteen year old machines is still plenty anti-turbobloat. And I suspect the 2010 requirement has more to do with the fact that it's pretty difficult to debug software for 1990s hardware that you don't have (or lack proper emulation for).
And if you go back far enough, you reach a tipping point where supporting old hardware can get in the way of something running on new hardware, especially if we're talking about games, unless you're really careful about what you're doing and test on real hardware all the time. Not very realistic for a one-person side project.
From context, I interpret it to be ‘graphics tech I don’t like’, but I’m not sure what counts as turbobloat.
If you're making a game that needs those features, obviously you'll need to bloat up. If you're not, maybe this SDK will be enough and be fast and small as well.
Of course for entertainment it’s difficult to judge, especially when you may have more fun on an old gameboy than a brand new 1000W gaming PC.
This is doing a lot of heavy lifting in this sentence.
What you're talking about is called the embodied energy of a product[0]. In the case of electronic hardware it is pretty staggeringly high if I'm not mistaken.
Just want to say this line was great, very Terry Pratchett. Feels like something Sam Vimes would think during a particularly complex investigation. I love it and hope you keep it moving forward.
Haven't gotten a chance to mess around with it, but I have some ideas for my AI projects that might be able to really utilize it.
The project looks awesome though.
"Temporal" to mean that at any given slice of time during a running application all objects have a signature that matches a type.
Yet most programming languages only allow compile-time analysis, and "runtime" is treated as one monolithic "we can't know anything about types at this point".
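A loose illustration of what I mean, sketched in C++ (my own toy example, and typeid is about as far as the language itself will take you):

    #include <iostream>
    #include <memory>
    #include <typeinfo>
    #include <vector>

    struct Shape { virtual ~Shape() = default; };
    struct Circle : Shape {};
    struct Square : Shape {};

    int main() {
        // At this particular slice of time, every object in the container has
        // one concrete dynamic type, even though the static type is just Shape.
        std::vector<std::unique_ptr<Shape>> shapes;
        shapes.push_back(std::make_unique<Circle>());
        shapes.push_back(std::make_unique<Square>());

        for (const auto& s : shapes) {
            // Prints the implementation-defined name of each object's runtime type.
            std::cout << typeid(*s).name() << '\n';
        }
    }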
And that Vetinari’s entity component system might seem complicated, but it works, damnit, and it makes the city function.
(I'm just glad someone got the reference)
Well, except for Detritus
Is the problem here that using a nodal editor encourages/incentivizes you, through its UX, to assign properties and relationships to e.g. a `Vector` of `Finger`s — but then you can't actually write code that makes the `Vector<Finger>` do anything, because it is just a "collection of things" in the end, not its own "type of thing" that can have its own behavior?
And does "everything is an Entity, just write code" mean that there's no UX layer that encourages `Vector<Finger>` over just creating a Hand class that can hold your Fingers and give the hand itself its own state/behavior?
Or, alternately, does that mean that rather than instantiating "nodes" that represent "instances of a thing that are themselves still types to be further instantiated, but that are pre-wired to have specific values for static members, and specific types or objects [implicitly actually factories] for relationship members" (which is... type currying, kind of?), you instead are expected to just subclass your Entity subclass to further refine it?
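(To make concrete what I mean by the Hand-vs-`Vector<Finger>` distinction, here is a rough sketch with hypothetical types, nothing from the actual SDK:)

    #include <array>

    // Hypothetical types, purely to illustrate the distinction.
    struct Finger {
        float curl = 0.0f;
    };

    // A bare std::vector<Finger> would be just "a collection of things":
    // it can hold the fingers, but it has no state or behavior of its own.

    // A Hand is "a type of thing": it owns its Fingers *and* carries its
    // own state and behavior.
    struct Hand {
        std::array<Finger, 5> fingers;
        bool is_clenched = false;

        void Clench() {
            for (auto& f : fingers) f.curl = 1.0f;
            is_clenched = true;
        }
    };

    int main() {
        Hand hand;
        hand.Clench();  // the behavior lives on the Hand, not on the container
    }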
I was initially picturing a DAW VST node graph, where nodes are all effectively top-level-peer specifications to build top-level-peer actors; and the connections between nodes represent dataflow relationships that should be established between the actors.
But is this behavior actually more like:
• a browser DOM, where the nodes (DOM elements) themselves have types — with live behavior that depends on their types and statically-configured attribute values — but where this behavior only comes into play when a node is parented into a live "document" (where you can build nodes or entire subtrees outside of the document, hold onto them + manipulate them, and then attach/detach them to instantaneously activate/inactivate them); where all nodes are containers for child nodes whether they like it or not; but where node types are free to decide what their children "mean" — i.e. whether the children participate in the document as they expect (like nodes under an HTML <div> tag), or whether they are passivated, acting only as private information for the parent node to consume/reference (like nodes under an HTML <picture> tag, or under a Shadow DOM shadow-root)?
• the node graph acting as something like an AST in a Lisp, where a tree-walker component "executes" the graph by recognizing nodes as macro functions, and calling those functions, passing in their parsed-but-not-evaluated "raw" child-subtree ASTs, expecting to get typed entities back in return?
• or something else, that I don't even have a mental model for?
Is it useful not to know where the boundaries are? Sounds like it can become a nightmare.
“Also when creating things with nodes, you have to go back and forth between node GUI and code.”
You can see Godot’s Node/GDScript setup as a bit of a response to this argument. Or, they try to make the “going back and forth” as seamless and integrated as possible with things like the $ operator and autocomplete.
That said, I do think at the end of the day, the “thing is a thing” mindset ultimately prevails, as you have to ship a game.
Trying to wrap my head around using scenes vs. nodes in something simple like a 2D platformer.
Platforms:
My thinking: I'm gonna be using a ton of platforms, so it'd make sense to abstract the nodes that make up a platform to a scene, so I can easily instance in a bunch.
Maybe I'm already jumping the gun here? Maybe having a ton of an object (set of nodes) doesn't instantly mean it'd be better off as a scene?
Still, scenes seem instinctually like a good idea because it lets me easily instance in copies, but it becomes obvious fast that you lose flexibility.
So I make a scene, add a staticbody, sprite, and collision shape. I adjust the collision shape to match the image. Ideally at this point, I could just easily resize the parent static body object to make the platform whatever size I want. This would in theory properly resize the sprite and collision shape.
But I am aware it's not a good/supported idea to scale a collision shape indirectly; instead, you're supposed to directly change its extents or size. So you have to do stuff based on the fact that this thing is not actually just one thing, but several things.
This seems like a bad idea, but maybe one way I could use scenes for platforms is to add them to my level scene and make each one have editable children. Problem with this is I'd need to make every shape resource unique, and I have to do it every time I add a platform. This same problem will occur if I try duplicating sets of nodes (not scenes) that represent platforms, too. Need to make each shape unique. That said, this is easier than using scenes + editable children.
Ultimately the ‘right’ way forward seems to be tilemaps, but I wanted to understand this from a principles perspective. The simple, intuitive thing (to me) does not seem possible.
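In pseudo-C++ (not Godot's actual API, just my own sketch), the "several things" problem I keep hitting looks roughly like this:

    // Resizing the platform can't just scale a parent transform; it has to
    // touch each child's own size property separately.
    struct Sprite       { float width = 64, height = 16; };
    struct CollisionBox { float extent_x = 32, extent_y = 8; };

    struct Platform {
        Sprite sprite;
        CollisionBox collider;

        // Scaling a parent node would distort the collider, so each child
        // is resized explicitly instead.
        void Resize(float width, float height) {
            sprite.width = width;
            sprite.height = height;
            collider.extent_x = width / 2;   // extents are half-sizes
            collider.extent_y = height / 2;
        }
    };

    int main() {
        Platform platform;
        platform.Resize(128.0f, 32.0f);  // one call, but two children updated by hand
    }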
When I ask questions about this kind of stuff, 9/10 times the suggestion is to do it in a paradigmatic way that one might only learn after spending a lot of time with an engine or asking the specific question, rather than what I would think is a way that makes dumb sense.
A lot of 2D game engines are near frictionless because they're just "write and save" style simple, and Blender Game Engine was actually great about translating this to a UI, and more importantly a UI dealing with 3D, since every object in the viewport could just have its own little code block attached to it just by clicking it. It was no different in function than saving the .py file in a new folder, really. This method Unity "pioneered" of everything having to be part of a giant tree in the asset manager is such a slog and makes keeping track of anything during iteration a nightmare. I still prototype in BGE sometimes because every other 3D engine sprawls too quickly and has so many unnecessary steps.
If somebody could just write a text-only "write and save" style editor like LOVE2D but for 3D (and support it for longer than two months) that would be amazing.
I watch clickbait Godot tutorials on YouTube on 2x speed in my spare time. When I stumble into a problem that I suspect has been solved before, like your resizeable platform problem, I go to YouTube and see if I can find a reference. For your case, I think you're looking to create a Tool, maybe. You'd need to define your platform as a programmatically sized node using either tile maps or that texture thing that lets you define the corners, the fill texture, and size from there.
But if it were me I'd lift the code for the platform out of the node that sizes it. Then you can just hand edit each platform, and link the platform to the controlling node (or whatever relation you see fit to use).
It's important to focus on the game over the infinite ways you could structure code in an environment of such high flexibility. At least coding your game with bitmaps in C you can't get lost in trifles, you'll just spend more time reimplementing and understanding the basics. See raylib.
When a game team is successful, it can often stem from having picked tooling and workflows that enabled them to be productive enough and avoid enough pitfalls. That’s going to change from project to project and team to team.
I think my best bet is to apply the same mentality you're describing to larger projects, like you're saying. As long as I don't get too sloppy, refactoring will be a necessary effort when I actually hit issues that stall my progress.
So the project I just looked at had three types of platforms, as far as I could tell:
The level was made up primarily of a tile map. It had its own collision set in the resource per tile and represents the most copy-cut type platforms you're likely to see.
Then there was a static body tile, which had a polygon2d shape, used to create an irregular platform that would have been more painful (maybe near impossible) to make in the tile map.
Finally, there were two moving platforms that were instanced in as scenes.
So the big revelation for me today is that I need to not get hung up on doing any one conceptual thing any one way. Any (seemingly minor) difference in fundamentals about what that thing is or does may lead to another basic node type being the best thing to use. I need to not be afraid of making use of more varied tools, even if things feel like they should all just be the same simple thing in my head.
Sharing behaviors, or making things look or act a little bit like some other thing, becomes an absolute nightmare, if not outright impossible, with "a thing is a thing."
There's a reason graph-based systems or ECS are basically the cornerstone of every modern engine: because they work and are necessary.
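A minimal sketch of why composition wins here (my own toy example, not any particular engine's actual API):

    #include <iostream>
    #include <memory>
    #include <vector>

    // Toy component composition: behaviors are attached to entities rather
    // than baked into a class hierarchy, so unrelated things can share a
    // behavior without sharing an ancestor.
    struct Component {
        virtual ~Component() = default;
        virtual void Update() = 0;
    };

    struct Glowing : Component {
        void Update() override { std::cout << "glow\n"; }
    };

    struct Bouncy : Component {
        void Update() override { std::cout << "bounce\n"; }
    };

    struct Entity {
        std::vector<std::unique_ptr<Component>> components;
        void Update() {
            for (auto& c : components) c->Update();
        }
    };

    int main() {
        Entity mushroom, trampoline;
        mushroom.components.push_back(std::make_unique<Glowing>());
        trampoline.components.push_back(std::make_unique<Bouncy>());
        // Same behavior shared by an unrelated entity, no common base class needed.
        trampoline.components.push_back(std::make_unique<Glowing>());

        mushroom.Update();
        trampoline.Update();
    }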
The Half-Life and Morrowind engines are in a unique situation where they're put together by enthusiastic programmers who are paid to develop stuff they think is cool. You end up with minimal engines and great tech, suited to the needs of professional game developers.
This seems like something that sits in between a raylib and a Unity. I haven't used it, but I worry that it doesn't do enough to appeal to amateur programmers, yet does too much to appeal to the kind of programmer who wants a smaller engine. I could be very wrong though; I hope to be very wrong. Seems like the performance here is very nice and it's very well put together. There's definitely a wave of developers coming out frustrated from Unity right now. As the nostalgia cycle moves to the 2000s, there's a very real demand to play and create games that are no more graphically complex than Half-Life 2.
Anyway, great project. Great web design. Documentation is written in a nice voice.
When you're designing both you can take advantage of features you add but also avoid the ones you can't do well - or even change the art style to "fit" the engine - pixelated angular mobs fit Minecraft quite well, but once they start getting more and more detailed you're in an "uncanny valley" where they look worse and more dated than Minecraft - until you finally have enough polygons to render something decent.
My argument was mainly about these more generalized engines, like raylib, 'Tramway', or Source.
"At work if we want to experiment with a new idea I have to assembly a team, and spend at least a month before we have something we can work with. Meanwhile, at home, I can make a whole Doom campaign in one evening."
(quoting from memory, sorry)
https://store.steampowered.com/curator/42392172-GZDoom-Games...
That is to say, I don't think people are using Unity because they were mistaught by complexity loving professors.
I wonder if this kind of architecture might also be a pretty good approach. The fact that they were able to port the game to another engine within a day is pretty impressive.
I've never played the game, but my understanding is that Slay the Spire largely impresses on a design and artistic front, not a technical one. Its engine requirements were not based on feature set or code quality, but on what the developers knew. So they probably picked Unity because it was ubiquitous. Education starts the problem, and then devs who need something common they can hire for continue the problem. I don't blame devs for this; it's the right choice to make and obviously Slay the Spire is great, but I am saying that this is a force that drives down the quality of game engines.
Being ubiquitous was part of the decision, yes, because it means there are many high quality plugins instantly integratable which is a huge time-saver.
You are correct: I definitely agree that not all gamedevs should be making stuff from scratch, but I also think that Unity is a little too much. There's a good middle ground somewhere slightly above raylib.
My argument is that the promotion of engines that live near this middle ground is blocked by education: people who want to be able to sell long courses to the people who look up "how to make a video game."
But also features this brief comment on game libraries:
"More than a Game Library: Having worked in SDL and LWJGL I’d like a bit more handholding. A few in-house APIs for loading/unloading resources, font stuff, and display handling please. I don’t want to write those; I want to make games!"
and some words on LibGDX specifically:
"The reason we chose LibGDX for Slay the Spire was because it could do PC, Mac, and Linux. Yes, it runs in a JavaVM and it has all sorts of problems but it’s write once, run anywhere amirite? No. It don’t run on consoles and Mac and Windows updates constantly break it. "
All that said, I still love LibGDX, raylib, löve, and if I was going to make a game I'd use one of those because I think they're more fun. But I'm also not doing this professionally, on a deadline, and with a requirement to work on consoles
But yeah, the more cards there are on the screen at once, the lower the framerate gets—very noticeably so when you're e.g. looking at the view of your entire deck, or when you draw several cards at once. I just assumed it was some inefficiency that was entirely unnoticeable on a high-end PC (250+ FPS, no problem) but was very apparent on the Switch. I never would've guessed there would be something far crazier at play than I imagined!
Look at what Epic Games did with Fortnite. They killed a game with a competitive scene that ran smooth, all for turbobloat graphics and skins.
There is a similar phenomenon with ArcGIS.
Very cool project. And the website design is A+
I feel like this is only true for people who happened to luck out with slightly overpowered hardware in very specific time periods.
As someone who used pretty average hardware in the Windows 98/2000/XP era as a teenager, even a low-end modern laptop with an SSD running Windows 10/11/KDE/Gnome/whatever is massively more responsive, even running supposedly bloated webapps like VS Code or Slack.
I had this one: https://www.jaruzel.com/blog/amiga-500--fun-with-storage. The official Commodore one was much uglier and, from memory, only 20MB.
So, what does it mean? Just "very bloated"?
Edit: Reading around on the website and seeing more terms like "Hyperrealistic physics simulation" makes me believe it just means "very bloated".
If you gave it to me in a cleanroom and told me I had to share my honest opinion, I'd say it was repeating universally agreeable things, and hitching it to some sort of solo endeavor to wed together a couple old 3D engines, with a lack of technical clarity, or even prose clarity beyond "I will be better than the others."
I assume given the other reactions that I'm missing something, because I don't know 3D engines, and it'd be odd to have universally positive responses just because it repeats old chestnuts.
Certain vintage hardware had a "turbo" button to unleash the full speed of the newer CPUs. The designers blind to the horrors of induced demand.
This seems to be an increasingly common point of view among those of a certain age.
It is definitely the case that the art of a certain sort of texture mapping has been lost. The example I go back to is Ikaruga, where the backgrounds are simply way better than they have any right to be, especially a very simple forest effect early on. Some of the PS2 era train simulators also manage this.
The problem is these all fall apart when you have a strong directional light source like the sun pointed at shiny objects, and the player moves around. If you want to do overcast environments with zero dynamic objects though you totally could bypass a lot of modern hacks.
Seriously, the plot of Silent Hill was invented to justify optimization hacks: you have a permanent foggy space called "fog space" to make it easier to manage objects on screen, and the remake instead stupidly wastes a ton of processing trying to make realistic (instead of supernatural-looking) fog.
The point about Lumen stands though. Baked lighting would have been much better in this case.
The ‘art’ of making stuff look good has not been lost at all. It’s just very unevenly distributed.
When a team has good model makers and good texture artists and good animators and good visual programming, it looks great, whether it’s built in Unreal or Unity or a bespoke engine or whatever.
There are a lot of technically polished Unity titles that get knocked because they look like very well rendered plasticine, for want of a better description.
For example, there was an argument on here not too long ago where various people pushing the “old graphics were better” (simplification) did not understand or care that the older titles had such limited lighting models.
In the games industry I recall a lot of private argument on the subject of whether the art teams will ever understand physically based models, and this was one of the major motivations for a lot of rigs to photograph things and make materials automatically (in AAA since like 2012). The now-widespread adoption of the Disney model, because it is understandable, has contributed to a bizarre uniformity in how things look that I do think some find repulsive.
Edit to add: I am not sure this is a new phenomenon. Go back to the first showing of Wind Waker for possibly the most notorious reaction.
Ironically, as we've gotten hardware with more VRAM and higher bus speeds, we've decided to go with bigger textures instead of more of them. The same with normal mapping: instead of using normal mapping alongside more subdivided models, we've just decided that normal maps are obsolete and physically modelling all the details is the technologically forward way. Less pointy spheres is one thing, but physically modelling all the cracks and scrapes on the sphere is just stupid and computationally wasteful.
This right here is precisely what I alluded to in another reply as the motivator for generating meshes and PBR materials from controlled photography. Basically you now have enough parameters per texel, which interact in distinctly unintuitive ways, that authoring them is a nightmare, hence people resorting to what you describe.
Even a "2D" game like Factorio has amazing polish difference between original release, 1.0, and today.
(This can very obviously be seen with modded games, because the modded assets often are "usable" but don't look anywhere near as polished as the main game.)
I've also wanted to run HL2 in DirectX 6 as well on period correct GPUs. Specifically a TNT2 Ultra and a Voodoo 5 5500 I have laying around. I just haven't gotten around to it.
> Also when creating things with nodes, you have to go back and forth between node GUI and code.
> All of the mainstream engines have a monolithic game editor. It doesn't matter how many features you use from it, you still have to wait 10 minutes for all of them to load in.
These notes really resonated; the debug loop even with Godot, using minimal fancy features, felt a lot slower than other contexts I've programmed in. Multiple editors working around a single data file spec is also a cool idea! Having found that a unified IDE makes it easier for different developers to create merge conflicts, I could see how editors with a more specific purpose might also help developers in different roles limit the scope and nature of their changes. Keen to see how the engine progresses!
Managed to contribute my bit from an underpowered netbook.
I had never written a line of C# before, but I'll be damned if I'm going to concede TDD from the CLI. I knew it could be done, and I made it work. Everybody thought I was crazy, though, and none of the sponsors' DevRel were any help.
And, of course, the biggest point of friction for us that weekend was that our beefiest machine still had to boot and reboot the damned Unity IDE for a thousand years! Incredible the fetters some folks tolerate.
I like the C++ principle of paying only for what you use.
The RPG engine was just an example of why it may not be such a universal thing; I'm not saying it's bad - but clearly you think that is not "bloat", whereas to some it might be. So it's maybe better to head this off at the pass and just write a little paragraph with some examples of bloat you have observed in other engines that you have consciously avoided in Tramway.
I'm in the latter camp and want to thank you for your "Getting Started" Page. The teapot appeared and I understood things I did not think I would understand. I do not have time to finish your tutorial at the moment (due to only having 30 whole minutes for lunch), but I want to, which says more about how entertaining and accessible it is than anything.
”Design patterns used 82%.
When all of the patterns get used, I will delete the project and rewrite it in Rust. With no OOP.”
Nobody argues that FTL, Minecraft, Baba Is You, Stardew Valley, RuneScape, or Dwarf Fortress are not high enough resolution.
Do you plan to create some videos showing the process of setting up a basic example?
I also have my own engine, although it needs some refurbishment. I've never quite found the time to polish it to a point where it can be sold. It also runs on tiny old devices, although if you limit yourself to desktop hardware, that means anything from the last 30 years or so. It also has a design that allows it to load enormous (i.e. universe-scale) data by streaming, with a most often imperceptible loading time... on the iPhone 4, in about 200ms you are in an interactive state which could be "in game".
Unity and Unreal are top-tier garbage that don't deserve our money and time. The bigger practical reason to use them is that people have experience and the plugin and extension ecosystems are rich and filled with battle tested and useful stuff.
Bespoke big-company engines are often terrible too. Starfield contains less real-world data than my universe app, but somehow looks uglier and needs a modern PC to run at all. Mine runs on an iPhone 4, looks nicer, and puts you in the world in the first 200ms... you might think it's not comparable, but it absolutely is; all of the same techniques could be applied to get exactly the same quality of result with all their stacks and stacks of art and custom data - and they could have a richer bunch of real-world data to go with it!
Both are effectively magical sandboxes where platform support is someone else's problem.
Unity is still pretty great, but it's chained to a company that has no real business plan for sustainability.
Unreal is okay, but developers aren't using it right. For any bigger project you should customize the engine for your needs. Or at the very least spend some time to optimize.
But we need to ship and we need to ship now.
Blame the developers not the tools.
unreal is fucking awful, it's a masterclass in how not to make:
* components
* hierarchies
* visual scripting
* networking
* editors
* geometry
* rendering
* culling
* in-game ui
* editor ui
* copy-paste
* kinematics
* physics integration
* plugin support
* build system
it's just a tower of mistakes i learned not to make before i dared to even enter the industry
it is fantastically and incredibly bad.
unity is a bit similar but they add c# complexity to the mix, and in the beginning that was a much bigger disaster, especially going with mono. .NET was an enormous misstep by microsoft and remains so; although it improves over time, they could have just not gotten it so incredibly wrong to start with.
i could go on.
i definitely blame the developers. of the terrible tools, that is. i couldn't make things that badly at most points in my career, including the super early days in some cases.
they are also hard to fix because of the staggering depth of the badness.
if you would like more specifics feel free to poke, it's more about not typing a wall of text than the cognitive load of knowing better, which is around zero.
oh... and the garbage collection is garbage that enables incompetents to make more garbage. never needed or wanted it. i had one hard memory leak to deal with in my life in native code. and a fucking zillion in their shit fest.
EDIT: i shit you not, it has not learned my first lessons from being an 8 year old trying to draw mandelbrot sets in qbasic.
Both Unity and Unreal have cost billions to make.
Godot is cool, but GDScript isn't fun (in general I hate learning a programming language for a single framework; Dart is the last time I do that) and C# support is still iffy. Godot tries to do everything Unity can, but can't do it all particularly well. The community is also a cult.
I've tried Godot like 3 times and it always feels like janky Unity.
During the Unity drama every single game dev post on Reddit would get a bunch of comments saying you should switch to Godot.
An open source game engine that doesn't accept PRs and is basically run by 3 people.
Neat.
Personally my dream engine would be Haxe + an editor + docs + Web Assembly/Native/Mobile support.
But engines are very hard and expensive to make. For my current project, it's so text heavy I realized I'm better off just using React/HTML/CSS.
The game is meant to be played in a website, but it's going to be open source so you can run it locally if you wish.
It seemed to work fine, but I did have some issues with the Direct3D 9 renderer. The renderer works fine on other computers, so I have no idea if it's a driver bug (Intel tends to have buggy drivers) or if it's a bug on my part.
The biggest problem with using old hardware is drivers. Older drivers will only work on older operating systems and it's difficult to find C++20 compilers that will work on them.
How is the wasm support? My main issue with Godot was large bundle sizes and slow load times. (GameMaker kicks its ass on both, but I never got the hang of it.)
The WebAssembly builds seem to work fine. A basic project takes up around 20MB and takes a couple of seconds to load in, so it's not great, but then again I haven't done any optimizations for this.
What is blocking this from high resolutions, and dynamic or smooth lighting? The former is free, and you can do the latter in Vulkan/DX/Metal/OpenGL etc. using a minimal vertex and fragment shader pair.
That bit about 24-bit color and 800x600 resolutions was mostly meant to be a fun nod to promotional text that you could find on the backs of old game boxes.
The default renderer for the engine is meant to emulate what you could achieve with a graphics card that has a fixed-function graphics pipeline.
I'll do a more modern renderer later; for now I am mostly focusing on the engine architecture, tools, and workflows.
Is this a reference to Inscryption?
By the way, to see a great example of how a modern game can be made using the classic Half-Life engine, look at the fan-made game Half-Life: Echoes [1].
It actually looks pretty decent, and the gameplay is top notch.
> I am not reinventing the wheel, I am disrupting the wheel industry.
I am laughing out loud
https://racenis.github.io/tram-sdk/patterns.html
Love it.
It's announced, and the name is fine, so it'll stick :)
void Entity::Yeet() {
    yeetery.push_back(this);  // add this entity to the yeetery
}
I am also in the early days of writing a very primitive 2.5D raycasting engine [0] (think Wolfenstein 3D) and have just got to texture mapping. Very fun.
It's open source and written in C; a pretty small and easy-to-follow codebase so far.
[0]- https://github.com/con-dog/2.5D-raycasting-engine/blob/maste...
The demo(s) should be linked from the page so that HN can complain that the game is too hard.
https://racenis.itch.io/sulas-glaaze
https://racenis.itch.io/froggy-garden
It runs well in Firefox on my low end laptop.
Could you add

    @media (prefers-reduced-motion) {
      .animated {
        display: none;
      }
    }

to the page, please? no_gifs.css is alright, but I need to visit the page (and run JavaScript) before I can find and click it, and by that point the damage is done.

This is evidence of a great moment in modern indie game dev: the power of fun and simple prototyping.
> Everyone always says that you "shouldn't create an open-world RPG", but that's just because they have never tried using the Trawmay SDK.
Love it <3
It's a practical way to bring global illumination to the masses without real-time ray tracing.
Using a modern engine seems overkill
Hope some initial tutorials become available. I’ll gladly contribute some but I need a little guide to get started.
10/10 choice of model and animation, this website is amazing.
You've obviously put a lot of effort into this, but I'm always lost at how people publish something open source and forget to actually put a license on there. Since it's now technically closed source, hypothetically if you become a monk in the woods next week, no one else can fork your code.
The license is MIT. Thanks for noticing.