If it's someone else's project, they have full authority to decide what is and isn't an issue. With a large enough project, you're going to get bad actors, people who don't read error messages, and just plain unreasonable people. Throw in people using AI for dubious purposes like CVE inflation, and it's even worse.
One of my pet peeves that I will never understand.
I do not expect users to understand what an error means, but I absolutely expect them to tell me what the error says. I try to understand things from the perspective of a non-technical user, but I cannot fathom why even a non-technical user would think that they don't need to include the contents of an error message when seeking help regarding the error. Instead, it's "When I do X, I get an error".
Maybe I have too much faith in people. I've seen even software engineers become absolutely blind when dealing with errors. Ten years ago, as a tester, I filed a bug ticket with explicit steps that resulted in a "broken pipe error". The engineer closed the ticket as "Can Not Reproduce" with a comment saying "I can't complete your steps because I'm getting a 'broken pipe error'".
He even checked "thing A" and "thing B", which "looked fine", but it still "didn't work". A and B had absolutely nothing to do with each other (they solve completely different problems).
I had to ask multiple times what exactly he was trying to do and what exactly he was experiencing.
I've even had "web devs" shout that there must be some kind of "network problem" between their workstation and some web server, because they were getting an HTTP 403 error.
So, yeah. Regular users? I honestly have 0 expectations from them. They just observe that the software doesn't do what they expect and they'll complain.
When debugging stuff with the devs at our work, I tend to overexplain as much as I can, because often there’s some deep link between systems that I don’t understand, but they do.
I’m a pretty firm believer in “no stupid questions (or comments)”, because often going in a strange direction that the devs assure me isn’t the problem, actually turns out to be the problem (maybe thing A actually has some connection to thing B in a very abstract way!).
I think just offering a different perspective or theory can help us all solve the problem faster, so sometimes it’s worth pulling that thread, even if it seems worthless in the moment.
Maybe I’m just lucky that my engineering colleagues are very patient with me (and maybe less lucky that some of our systems are so deeply intertwined), but I do hope they have more than zero expectations from me, as we mean well and just want to support where we can, knowing full well that y’all are leagues ahead in the smarts department.
In Azure "private networking", many components still have a public IP and public dns record associated with the hostname of the given service, which clients may try to connect to if they aren't set up right.
That IP will respond with a 403 error if they try to connect to it. So Azure is indirectly training people that 403 potentially IS a "network issue"... (like their laptop is not connected to VPN, or Private DNS isn't set up right, or traffic isn't being routed correctly or some such).
Yeah, I get that's just plain silly, but it's IaaS/SaaS magic cloud abstraction, and that's just the way Microsoft does things.
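If you're stuck debugging this, a quick sanity check is to look at what the service hostname actually resolves to from the client. A minimal Python sketch (the hostname is a made-up placeholder, and treating RFC 1918 addresses as "private" is just an assumption about how your Private DNS is set up):

    import ipaddress
    import socket

    # Hypothetical Azure service hostname -- substitute your own.
    host = "mystorageacct.blob.core.windows.net"

    # Resolve the name the same way a client would.
    infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    ips = sorted({info[4][0] for info in infos})

    for ip in ips:
        kind = "private" if ipaddress.ip_address(ip).is_private else "PUBLIC"
        print(f"{host} -> {ip} ({kind})")

    # If you expected a private endpoint and see a public IP here, the
    # Private DNS / VPN / routing isn't set up right -- which is exactly
    # when that public endpoint answers with a 403.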
You are not describing a network issue. You're sending requests that by design the origin servers refuse to authorize. This is basic HTTP.
https://datatracker.ietf.org/doc/html/rfc7231#page-59
The origin servers could also return 404 in this use case, but 403 is more informative and easier to troubleshoot, because it means "yeah, your request to this resource could be fine, but it's failing some precondition".
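The distinction is easy to demonstrate: a 403 is a well-formed HTTP response, which means DNS, TCP, TLS, and the request/response exchange all worked; the server just refused to authorize you. A minimal sketch using Python's standard library (the URL is a placeholder):

    import urllib.error
    import urllib.request

    url = "https://example.com/protected"  # placeholder URL

    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print("OK:", resp.status)
    except urllib.error.HTTPError as e:
        # The server answered, so the network path is fine.
        print(f"Server refused the request: HTTP {e.code}")
    except urllib.error.URLError as e:
        # No HTTP response at all: *this* is a network problem
        # (DNS failure, connection refused, timeout, ...).
        print(f"Network-level failure: {e.reason}")

Note that HTTPError has to be caught before URLError, since it's a subclass; only the second branch is anything like a "network problem".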
It's not math, logic, or anything like that. It's the actual ability to read, exactly, without adding or removing anything.
I'm not sure I agree.
Reason?
The old adage "handle errors gracefully".
The "gracefully" part, by definition means taking into account the UX.
Ergo "gracefully" does not mean spitting out either (a) a meaningless generic message or (b) A bunch of incomprehensible tech-speak.
Your error should provide (a) a user-friendly plain-English description and (b) an error ID that you can then cross-reference (e.g. you know "error 42" means the database connection is foobar because the password is wrong).
During your support interaction you can then guide the user through uploading logs or whatever. Preferably through an "upload to support" button you've already carefully coded into your app.
Even if your app is targeting a techie audience, it's the same ethos.
If there is a possibility a techie could solve the problem themselves (e.g. by RTFM or checking the config file), then the onus is on you to provide a suitably meaningful error message to help them on their troubleshooting journey.
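To make that concrete, here's a minimal sketch of the pattern (the catalog, "error 42", and the messages are all invented for illustration):

    # Hypothetical error catalog: a stable ID for support to cross-reference,
    # plus a plain-English message for the user.
    ERROR_CATALOG = {
        42: "We couldn't connect to the database. Please contact support "
            "and mention error 42.",
    }

    class AppError(Exception):
        def __init__(self, code: int, detail: str):
            self.code = code      # stable ID, safe to show the user
            self.detail = detail  # tech-speak, goes only to the log
            super().__init__(ERROR_CATALOG.get(code, "Unexpected error."))

    try:
        raise AppError(42, "auth failed for db user 'app': bad password")
    except AppError as e:
        print(f"Error {e.code}: {e}")  # user-facing: friendly text + ID
        # logger.error("E%d: %s", e.code, e.detail)  # support-facing detail

The point of the split is that the user only ever has to relay "error 42", while the gory detail is already waiting for you in the logs.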
20 years ago, I worked the self-checkout registers in retail. I'd have people scan an item (with the obvious audible "BEEP"), and then stand there confused about what to do next. The machine is telling them "Please place the item in the bag" and they'd tell me they don't know what to do. I'd say "What's the machine telling you?" "'Please place the item in the bag'" "Okay, then place the item in the bag" "Oh, okay"
It's like they don't understand words if a computer is saying them. But if they're coming from a human, they understand just fine, even if it's the exact same words.
"Incorrect password. You may have made a mistake entering it. Please try entering it again." "I don't know what that means, I'm going to call up tech support and just say I'm getting an error when I try to log in."
I see this pretty often. These aren't even what you'd call typical users, in theory. They are people doing a technical job who were hired against technical requirements; an application will spit out a well-written error message in a domain they should be professionals in, and their brain turns off. And yeah, it ends up in a call to me where I state the same thing and they figure the problem out.
I really don't get it.
I've seen this with GNSS-assisted driving, with automated driving, and with aircraft autopilot. The automation earns unwarranted trust, we lose context, training fades; and when something disengages and throws us back in control, the avalanche of context and responsibility is overwhelming, compounded by the lack of context about the previous intermediate steps.
One of the most worrying dangers of automation is this trust (even by supposedly knowledgeable technicians), and the transition out of "the machine is perfect": when it hands you back the helm on a failure, an inability to trust the machine again.
The way to avoid entering this state seems to be to stay deeply engaged with the inputs and decisions of the system (read: "automation should be like Iron Man, not like Ultron") and to have a deep understanding of the moving parts, the critical design decisions of the system, and traces/visualizations/checklists of the intermediate steps.
I don't know where the corpus of research about this is (probably in safety-engineering research tomes), but it crystallized for me when comparing the crew reactions and behaviour in the Rio-Paris Air France crash and the Qantas A380 accident in Singapore.
For the first one, amongst many, many other errors (be it crew management, accounting for the weather...) and problematic sensor behaviour, the transcript tells a harrowing story of a crew no longer trusting their aircraft after recovering from a sensor failure (that failure ejecting them from autopilot and handing them back mostly full control), ignoring their training and many of the genuine alarms the aircraft was rightly blaring at them.
In the second case, a crew that tries to piece together what capabilities they still have after a massive engine failure (an explosion) wrecked most of the other systems with shrapnel, and that stays enough in the loop to recognize when the overwhelmed system is giving wrong instructions (telling them to transfer fuel from the unaffected tanks into destroyed, leaking ones).
Human factor studies are often fascinating.
Also arguably the users are kind of right. An error indicates that a program has violated its invariants, which may lead to undefined behavior. Any output from a program after entering the realm of undefined behavior SHOULD be mistrusted, including error messages.
Even when the error message was clearly within my expertise, it took a surprisingly long time to switch from one mental activity, "pay bills", to another, "investigate technical problem". And you have to throw away all your short-term memory to switch to the other task. So all the rumors about "stupid" users are a direct consequence of how the human mind works.
99% of the population have no idea what "Header size exceeded" means, so it absolutely is about understanding the message, if the devs expect people to read the error.
But I WOULD expect the user, when sending a message to support, to say they're getting a "Header size exceeded" error, rather than just say "an error".
In that case (and even sometimes in the more "graceful" cases), we don't always expect the user to know what an error message means.
I have given instructions to repeat, but more slowly, and people will still click through errors without a chance to read. I have asked people to go step by step and pause after every step so we can look at what's going on, and they will treat "do thing and close resulting error" as a single step, pausing only after having closed the error.
The only explanation I have that I can understand is that closing errors and popups is a reflex for many people, such that they don't even register doing it. I don't know if this is true or if people would agree with it.
I've seen this with programmers at all levels of seniority. I've seen it with technically capable non-programmers. I've seen it with non-technical people just trying to use some piece of software.
The only thing that's ever been effective for me is to coach people to copy all text and take screenshots of literally everything that is happening on their screen (too many narrow screenshots obscure useful context, so I ask for whole-screen screenshots only). Some people do well with this. Some never seem to put any effort into the communication.
No, my mom is not eidetic, and no, she's not going to upload a photo of her living room.
Totally agree with you, though, when the full error message is at least capable of being copied to the clipboard.
Could the manufacturer solve this in a better way? Probably, but that won't solve the issue the customer has now.
Jokes aside, "upload a photo of her living room" was meant to highlight the ridiculousness of the UX. I believe the designer of that flow had an OKR to decrease the number of reported bugs.
Worse still, just “it doesn’t work” without even any steps.
I sometimes gave those users an analogy like going to the doctor or a mechanic and not providing enough information, but I don’t think it worked.
Patient: My foot hurts.
Wife: Which part of it?
Patient: It all hurts.
Wife: Does your heel hurt?
Patient: No.
Wife: Does your arch hurt?
Patient: No.
Wife: Do your toes hurt?
Patient: This one does.
Wife: Does anything but that one toe hurt?
Patient: No.
Wife: puts on a brave smile
As far as I'm aware, most large open GitHub projects use tags for that kind of classification. Would you consider that too clunky?
Absolutely. It's a patch that can achieve a similar result, but it's a patch indeed. A major feature of every ticketing system, if not "the" major feature, is the ticket flow. Which should be opinionated. Customizable with the owner's opinion, but opinionated nonetheless. Using labels to cover missing areas in that flow is a clunky patch, in my book.
It all stems from the fact that all issues are in this one large pool rather than there being a completely separate list with already vetted stuff that nobody else can write into.
With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead.
Translation: sure, you can make this work by piling automation on top. But that doesn't make it a good system to begin with, nor will it produce a robust result. I'd really rather have a better foundation to start with. The rebuke to your comment is right in your comment: "other ticket systems do this by…"
The ticket system does it. As in, it has it built-in and/or well integrated. If GitHub had the same level of integration that other ticket systems achieve with their automation, this'd be a non-issue. But it doesn't, and it's a huge problem.
P.S.: I hate to break it to you, but "I hate to break it to you, but" is quite poor form.
P.S. I didn't ask
I guess it probably leads to higher quality issue descriptions at least, but otherwise this seems pretty dumb and user-hostile.
On repos I maintain, I use an “untriaged” label for issues and I convert questions to discussions at issue triage time.
Speaking for another large open GitHub project:
Absofuckinglutely yes.
I cannot overstate how bad this workflow is. There seems to be a development now in other platforms becoming more popular (gitlab, forgejo/codeberg, etc.) and I hope to god that it either forces GitHub to improve this pile of expletive or makes these "alternate" platforms not be so alternate anymore so we can move off.
All of this is possible on GitHub Issues and is in fact done by many projects; by this metric I don't see how GitHub Issues is any different from, say, Jira. In both cases, as you mentioned, someone needs to triage those issues, which would, of course, be the developers as well. Nothing gained, nothing lost.
Especially with the new features added last year (parent tickets, better boolean search, etc.), although I'm not sure if you need to opt in to get those.
In fact, it's become our primary issue tracker at work.
Well, that’s a paraphrase, but I remember reading that rough idea on their blog years ago, and it strikes me as perfectly fine for many kinds of projects.
Unfortunately there is no such magic bullet for trawling through bug reports from users, but pushing more work out to the reporter can be reasonably effective at avoiding that kind of time wasting. Require that the reporters communicate responsively, that they test things promptly, that they provide reproducers and exact recipes for reproduction. Ask that they run git bisect / creduce / debug options / etc. Proactively close out bugs or mark them appropriately if reporters don't do the work.
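For the bisect part in particular, a reproducer script makes it nearly hands-off for the reporter. A hypothetical sketch usable with `git bisect run` (the build command and the repro command are placeholders for whatever the project actually uses):

    #!/usr/bin/env python3
    # repro.py -- exit 0 = good commit, 1 = bad commit, 125 = skip.
    # Usage: git bisect start <bad> <good>; git bisect run python3 repro.py
    import subprocess
    import sys

    # Placeholder build step; skip commits that don't even build.
    if subprocess.run(["make", "-j8"]).returncode != 0:
        sys.exit(125)

    # Placeholder reproduction step: non-zero exit means the bug is present.
    result = subprocess.run(["./app", "--run-repro-case"])
    sys.exit(1 if result.returncode != 0 else 0)

Once a reporter has something like this, "please bisect" stops being a big ask.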
It's simply a great idea. The mindset should be 'understand what's happening', not 'this is the software's fault'.
The discussion area also serves as a convenient explanation/exploration of the surrounding issues that is easy to find. It reduces the maintainer's workload and should be the default.
> Yeah but a good issue tracker should be able to help you filter that stuff out.
Agreed. This highlights GitHub's issue management system being inadequate.
(Note: I'm the creator/lead of Ghostty)
The downside is that "Facebookization" created a trend where people expect everything to be obvious and achievable in a minimal number of clicks, without configuring anything.
Now "LLMization" will push the trend forward. If I can make a video with Sora by typing what I want in the box, why would I need to click around or type some arcane configuration for a tool?
I don't think it is bad in general; it is only bad for specialist software that you cannot use without deeper understanding, but the expectation is still there.
Then people expect accounting software to be just: log in, click one or two buttons, done.
That's just a stupid limitation, and not even a technical one. You could happily send GBs over email. You could also easily filter the allowed attachment size by sender on the recipient side, because by the time the attachment size is communicated, both pieces of information have already been provided.
Commenting on things is one of the features (to be distinguished from UX/UI) I talked about.
it is a UI designed to be hard to use
1) UI = a clearly documented way to configure all features and make the software work exactly how you want.
2) UI = load a web page and try to do the thing you wanted to do (in this case communicate with some specific people).
FB is clearly terrible at 1 but pretty alright at 2.
IME, people cannot even articulate what they want when they know what they want, let alone when they don’t even understand what they want in the first place.
but has not graduated to issue worthy status
I want to clarify though that there isn't a known widespread "memory leak issue." You didn't say "widespread", but just in case that is taken by anyone else. :) To clarify, there are a few challenges here:
1. The report at hand seems to affect a very limited number of users (given the lack of reports and information about them). There are lots of X meme posts showing Ghostty in the macOS "Force Close" window using a massive amount of RAM, but that isn't directly useful because that window also reports all the RAM _child processes_ are using (e.g. if you run a command in your shell that consumes 100 GB of RAM, macOS reports it as Ghostty using 100 GB of RAM). And the window by itself also doesn't tell us what you were doing in Ghostty. It farms good engagement, though.
2. We've run Ghostty on Linux under Valgrind in a variety of configurations (the full GUI), we run all of Ghostty's unit tests under Valgrind in CI for every commit, and we've run Ghostty on macOS with the Xcode Instruments leak checker in a variety of configurations and we haven't yet been able to find any leaks. Both of these run fully clean. So, the "easy" tools can't find it.
3. Following point 1 and 2, no maintainer familiar with the codebase has ever seen leaky behavior. Some of us run a build of Ghostty, working full time in a terminal, for weeks, and memory is stable.
4. Our Discord has ~30K users, and within it, we only have one active user who periodically gets a large memory issue. They haven't been able to narrow this down to any specific reproduction and they aren't familiar enough with the codebase to debug it themselves, unfortunately. They're trying!
To be clear, I 100% believe that there is some kind of leak affecting some specific configuration of users. That's why the discussion is open and we're soliciting input. I even spent about an hour today on the latest feedback (posted earlier today) trying to use that information to narrow it down. No dice, yet.
If anyone has more info, we'd love to find this. :)
> To be clear, I 100% believe that there is some kind of leak affecting some specific configuration of users
In this case it seems you believe a bug exists, but it isn't sufficiently well-understood and actionable to graduate to the bug tracker.
But the threshold of well-understood and actionable is fuzzy and subjective. Most bugs, in my experience, start with some amount of investigative work, and are actionable in the sense that some concrete steps would further the investigation, but full understanding is not achieved until very late in the game, around the time I am prototyping a fix.
Similarly the line between bug and feature request is often unclear. If the product breaks in specific configuration X, is it a bug, or a request to add support for configuration X?
I find it easier to have a single place for issue discussion at all stages of understanding or actionability, so that we don't have to worry about distinctions like this that feel a bit arbitrary.
Both are valid, and it makes sense to be clear about what the teams view is
I think the confusion of bug tracking with work tracking comes out of the bad old days where we didn't write tests and we shipped large globs of changes all at once. In that world, people spent months putting bugs in, so it makes sense they'd need a database to track them all after the release. Bugs were the majority of the work.
But I think a team with good practices that ships early and often can spend a lot more time on adding value. In which case, jamming everything into a jumped-up bug tracker is the wrong approach.
For bug reports, always using issues for everything also requires you to decide how long an issue should stay open before it's closed out if it can't be reproduced (if you're trying to keep a clean issue list). That can lead to discussion fragmentation: new reports start coming in that need to be filed, but not just anyone can manage issue states, so a new issue gets created.
From a practical standpoint, they have 40 pages of open discussion in the project and 6 pages of open issues, so I get where they're coming from. The GH issue tracker is less than stellar.
macOS's Instruments tool only checks for leaks when it can track allocations, and it is limited to ~256 stack depth. For recursive calls or very deep stacks (Emacs), some allocations aren't tracked, and only after setting the malloc history flags [0] did I start seeing some results (and leaks).
Another place I'm investigating (for Emacs) is that the AppKit lifecycle doesn't actually align with the Emacs lifecycle, so leaks happen on the AppKit side and have ZERO to do with the application. It seems that problem manifests mostly on high-end setups (multiple HiDPI displays with high variable refresh rates, a powerful chip, etc.).
Probably nothing you haven't investigated yet, but it is similar to the ghost (pun intended) I've been looking for.
[0]: https://developer.apple.com/library/archive/documentation/Pe...
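For anyone wanting to try the same: a sketch of launching a target with those malloc history flags set, via Python (the binary path is a placeholder, and exactly which variables you need may vary by tool and macOS version):

    import os
    import subprocess

    env = dict(os.environ)
    # macOS malloc debugging: record allocation backtraces so leak tools
    # can attribute allocations even in deep or recursive call stacks.
    env["MallocStackLogging"] = "1"
    env["MallocStackLoggingNoCompact"] = "1"  # keep the full history

    # Placeholder target binary.
    subprocess.run(["/Applications/Emacs.app/Contents/MacOS/Emacs"], env=env)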
Memory usage is not really difficult to debug usually, tbh.
For me, only Rust compilation necessitates more RAM. But, I assume devs just do RAM heavy dev work on a server over ssh.
In the SWE world, dev servers are a luxury that you don't get in most companies, and most people use their laptops as workstations. Depending on your workflow, you might well have a bunch of VMs/containers running.
Even outside of SWE world, people have plenty of use for more than 8GiB of RAM. Large Photoshop documents with loads of layers, a DAW with a bazillion plugins and samples, anything involving 4k video are all workloads that would struggle running on such a small RAM allowance.
Of course, being developer laptops, they all come with 16 gigs of RAM. In contrast, the remote VMs where we do all of the actual work are limited to 4GiB unless we get manager and IT approval for more.
Our company just went with the "server in the basement" approach, with every employee having a user account (no VM or docker separation, just normal file permissions). Sure, it sounds like the 80s, but it works really well. Remote access with wireguard, uptime similar to or better than cloud; sharing the same beefy CPUs works well and gives good utilization. Running jobs that need hundreds of GB of RAM isn't an issue as long as you respect others' needs too and don't hog the RAM all day. And in amortized cost per employee it's dirt cheap. I only wish we had more GPUs.
It doesn’t work when you’re developing on a large database, since it won’t fit. Database (and data warehouse) development has been held back from modern practices just for this reason.
A real shame, as running local docker/podman for postgres was fine when you could just run the commands.
Large corp gotta large corp?
My guess is that providing the ability to pull containers means you can run code that they haven't explicitly given permission for, and the laptop scanning tools can't hijack them?
In enterprise, we get shared servers with constant connection issues, performance problems, and full disks.
Alternatively we can use Windows VMs in Azure, with network attached storage where "git log" can take a full minute. And that's apparently the strategic solution.
Not to mention that in Azure 8 CPUs gets you four physical cores of a previous gen server CPU. To anyone working with 4 CPUs or 2 physical cores: good luck.
Sure it is bloated, but it is the stack we have for local development.
This assumption is wrong. I compile stuff directly on my laptop, and so do a lot of other people.
Also, even if nobody ran compilers locally, there is still stuff like rustc, clangd, etc. which take lots of RAM.
If instead bookmarks worked like tab saving does, I would be happy to get rid of a few hundred tabs. Have them save the page and state like the tab saving mechanism does. Have some way to remind me of them after a week or month or so.
Combine that with a search function that can search contents as well as titles, and I'm changing habits ASAP.
I do this mostly for blog posts etc I might not get around to reading for weeks or months from now, and don't want them to disappear in the meantime.
Everything else is either a pinned tab (<5) or a bookmark (themselves shared when necessary on e.g a Slack canvas so the whole team has easy access, not just me).
While browsing, the rest of my tabs are transient and don't really grow. I even mostly use private browsing for research, and only bookmark (or otherwise save) pages I deem to be of high quality. I might have a private window with multiple tabs for a given task, but it is quickly reduced to the minimum necessary pages, and then the whole private window is thrown away once the initial source-material gathering is done. This lets me turn off address-bar search engines and instead search only saved history and bookmarks.
I often see colleagues with the same many browser windows of many tabs each open struggling to find what they need, and ponder their methods.
Anyway, just strikes me as odd that the browsers have the functionality right there, it's just not used to its full potential.
Then there's all the basic stuff: email and calendar are tabs in my browser, not standalone applications. Ditto the ticket I'm working on.
I think the real issue is that browsers need some lightweight "sleep" mechanism that sits somewhere between a live tab and just keeping the source in cache.
And if you are lucky, the content will still be there the next time.
It’s kind of humorous that everyone interpreted the comment as complaining about Chrome. For all I know, it’s justified in using that much memory, or it’s the crappy websites I’m required to use for work with absurdly large heaps.
I really just meant that at least for work I need more than 8GB of RAM.
Why do you assume that? It's nice to do things locally sometimes. Maybe even while having a browser open. It doesn't take much to go over 8 GB.
It's a life of luxury, I tell you.
Your second link looks like an X user trying to start a flamewar; the rest of the replies are hidden to me.
I reported the issue in discussions some time ago, but had no reaction/response.
I was able to reproduce the leak consistently. Finally, I took all the reports I'd made, the Ghostty sources, and Claude Code, and tried to fix it.
For the first couple of weeks there were no leaks at all; now it has started again, but at only 1/10 the rate it was before.
https://github.com/ghostty-org/ghostty/discussions/9786 There are some logs and a Claude Code review md file that might be useful.
Hope it will help someone investigate further.
For one, it duplicates the effort of checking for prior reports. I might try 5–6 sets of keywords, but now I have to do so across 2 separate trackers.
Tickets cannot be moved between trackers, so instead folks resort to duplicating them and moving discussions… which is entirely opaque if you’re following up via email: you won’t get any more notifications and your future replies are silently discarded.
As a maintainer, having two trackers per project never made sense to me, so I’ve disabled discussion everywhere.
This is mostly a criticism of how GitHub implemented this feature, not of the decision taken here.
The benefit is that all users who just ask for help, assistance, or are unable to install or use the software now have a place to ask.
You shouldn't create an issue just because you get an error when installing, but it might be beneficial to still ask for help.
If it is indeed a bug, then create a ticket, linking to the discussion.
In practice, too many issues are user errors.
You can convert an issue to a discussion and vice versa, so no duplication is needed and your notification should be preserved.
Or do you mean something else?
    site:https://github.com/org/repo key words

So if I'm triaging a new issue, often it'll show up in the results as well.
If you spend more time closing issues than creating them manually from discussions, the math adds up.
As a maintainer, if you want to be able to tell real issues from non-issue discussions, you still have to read them (triage). That's what takes the time.
I don't see how transforming a discussion into an issue is less effort than the other way around. Both are a click.
GitHub's issues and discussions seem like the same feature to me (almost identical UI with different naming).
The only potential benefit I can see is that discussions have a top-level upvote count.
imo almost all issues are real, including "non-issue" - i think you mean non-bug - "discussions." for example it is meaningful that discussions show a potential documentation feature, and products like "a terminal" are complete when their features are authored and also fully documented or discoverable (so intuitive as to not require documentation).
99% of the audience of github projects are other developers, not non-programmer end users. it is almost always wrong to think of issues as not real, every open source maintainer who gets hung up on wanting a category of issues narrower than the ones needed to make their product succeed winds up delegating their product development to a team of professionals and loses control (for an example that I know well: ComfyUI).
The math is even better if you just ignore all issues and close them after two weeks for being stale!
Wish this was /s but it isn't.
How is this not trivially solved via a "ready-to-be-worked-on" tag?
Compared to that, this system has been a huge success. It has its own problems, but it's directionally better.
(also, what is "huge success" in methods of organizing issues?)
Bookmark this (and if your browser supports shortcuts, it can be as easy to open as remembering to type a single char):
https://github.com/ghostty-org/ghostty/issues?q=is%3Aissue%2...
1. The barrier to mislabel is too low. There is no confirmation to remove labels. There is no email notification on label change. We've had "accepted" issues accidentally lose their accepted label and enter the quagmire of thousands of unconfirmed issues. It's lost. In this new approach, every issue is critical and you can't do this. You can accidentally _close_ it, but that sends an email notification. This sounds dumb, but it happens, usually due to keyboard shortcuts.
2. The psychological impact of the "open issue count" has real consequences despite being meaningless on its own. People will see a project with 1K+ issues and think "oh this is a buggy hell hole" when 950 of those issues are untriaged, unaccepted, 3rd party issues, etc.
My practical experience with #2 was Terraform ~5 years ago (when I last worked on it, can't speak to the current state). We had something like 1,800 open issues and someone on Twitter decided to farm engagement and dunk on it and use that as an example of how broken it is. It forced me to call for a feature freeze and a full all-hands triage. We ultimately discovered there were ~5 crashing bugs, ~50 or so core bugs, ~100 bugs in providers we control, and the rest were 3rd party provider bugs (which we accepted in our issue tracker at the time) or unaccepted/undesigned features or unconfirmed bugs (no reproduction).
With the new approach, these are far enough away that it gets rid of this issue completely.
3. The back-and-forth process of confirming a bug or designing and accepting a feature produces a lot of noise that is difficult to hide within an issue. You can definitely update the original post but then there might be 100 comments below that you have to individually hide or write tooling to hide, because ongoing status update discussions may still be valuable.
This is also particularly relevant in today's era of AI where well written GH issues and curated comments produce excellent context for an agent to plan and execute. But, if you don't like AI you can ignore that and the point is still valid... for people!
By separating out the design + accept into two separate posts, it _forces_ you to rewrite the core post and shifts the discussion from design to progress. I've found it much cleaner and I'm very happy about this.
4. Flat threads don't work well for issue discussion. You even see this in traditional OSS that uses mailing lists (see LKML): they form a tree of responses! Issues are flat. It's annoying. Discussions are threaded! And that is very helpful for chasing down separate chains of thought, or reproductions, or possibly unrelated issues or topics.
Once an issue is accepted, the flat threads work _fine_. I'd still prefer a tree, but it's a much smaller issue. :)
-----------
Okay I'm going to stop there. I have more, many more, but I hope this gives you enough for you to empathize a bit that there are some practical issues, and this is something I've both thought of critically for and tried for over a decade.
There's a handful of people in this thread who are throwing around words like "just" or "trivially" or just implying how obvious a simple solution looks without perhaps accepting that I've been triaging and working on GH issues in large open projects full-time non-stop for the last 15 years. I've tried it, I promise!
This is completely a failure of GitHub's product suite and as I noted in another comment I'm not _happy_ I have to do this. I don't think discussions are _good_. They're just the _least bad_ right now, unfortunately.
Fully agree with this; as a beginner in the space I get nervous when I see a project having a thousand open issues since 2018.
I definitely think splitting discussion and issues is a good idea for that reason alone.
Very often in those infamous bugs that have been open for years, with hundreds of "me too" comments, there are gems with workarounds or reproductions, unfortunately hidden somewhere under 4 iterations of "click to load 8 more comments", making them difficult to find. This generates even more "anyone know how to solve this" spam, further adding to the difficulty of finding the good posts.
technically, messages are messages. this approach is no more than grouping messages into different forums. it could also all be under discussions, with a sub-forum for issues, one for features, one for other topics, etc., and then there would need to be a permission system for each sub-forum.
so all this does is create two spheres of access for users and developers. and that's the point.
in the end it's really a matter of taste and preference.
Is it really that hard to open a discussion?
An additional benefit of that is that a user whose discussion leads to a real issue being created will feel like they're genuinely being listened to. That creates a good customer experience, which is good for your brand's reputation. It's a positive experience. Closing non-issues in the tracker is a negative experience.
Definitely discussing things could also happen in the issue tracker, and some <Actionable> tag could be used to mark issues that are ready to work upon. But I suspect that Discussions are better suited for, well, discussions, while the facilities of the issue tracker can then be used by maintainers / contributors.
I find this separation pretty smart.
"""Unlike some other projects, Ghostty does not use the issue tracker for discussion or feature requests. Instead, we use GitHub discussions for that. Once a discussion reaches a point where a well-understood, actionable item is identified, it is moved to the issue tracker. This pattern makes it easier for maintainers or contributors to find issues to work on since every issue is ready to be worked on.
This approach is based on years of experience maintaining open source projects and observing that 80-90% of what users think are bugs are either misunderstandings, environmental problems, or configuration errors by the users themselves.[...]"""
The real miss here is that there isn't a way on GitHub to only allow maintainers to create issues, instead we are left with these subpar workarounds.
[1]: https://github.com/LGUG2Z/komorebi/blob/master/.github/workf...
In particular, when I maintain an open source project I am generally short on time, so I need to move quickly. I actually don't mind issue discussions on my projects, but people cannot expect me to invest a lot of time into managing all of those; whether something is a discussion or an issue directly is not so important, but I know that some project owners don't like issues that remain open for years. It is kind of a difference in philosophy here.
One trade off is that I am not so likely to get involved in such a project. I may start a discussion, but in general I am very chaotic and may never follow up on discussions I started, simply due to lack of time, too many things to do, forgetting too much too (I do keep notes locally, but these files just keep on growing!).
Above, the word _simply_ conveys a lot of meaning. This sentence, when considered alone, might be seen to imply that all trade-offs are, in a sense, OK, because they are all sort of a matter of taste. This doesn't mesh with my understanding of the world. I frame it this way: for a given objective, some trade-offs are better than others.
Put in reverse, when I see a project making certain trade-offs, I don't assume those trade-offs are in service of some clearly defined objective. Often I see people and organizations mired in trade-offs that are inertial and/or unconsidered.
There is another interesting angle to consider: framing as a question it would be: «When building a product or running a project, how do I make sense of a huge variety of trade-offs?» For that, exploring the Pareto frontier can be a useful method (see [1]) because it reduces the combinatorial explosion.
In the case of Ghostty, I think its values are indeed better served by this GitHub process (which designates an issue as a clear actionable task derived from a discussion).
I'm not so sure. I think this sort of discussion mostly falls within the realm of bike shedding. I'll explain why.
There's such a thing as a ticket life cycle. Ticketing flows typically feature a triage/reproduction stage. Just because someone creates an issue that doesn't necessarily mean the issue exists or isn't already tracked somewhere else, or that the ticket has all the necessary and sufficient information to troubleshoot an issue. When a ticket is created, the first step is to have someone look at it and check if there's something to it. This happens even when tickets are created by internal stakeholders, such as QAs.
GitHub supports ticket labels, and the default set already covers these scenarios.
https://docs.github.com/en/issues/using-labels-and-milestone...
To me this discussion sounds like a project decided to update their workflow to move triage out of tickets and into a separate board. That's fine, it's the exact same thing but with a slightly more complex process. But it's the same thing.
1. We often say 'should' too easily. The post isn't making such a claim, is it? I would shift away from saying 'should' toward saying: start somewhere that works for your project, gather feedback and evidence, and adjust thoughtfully. You'll end up in a place that feels authentic.
2. If anything, I would prefer the default be random. Then projects end up being natural experiments. See [1]
3. At a meta level, this reminds me of Brian:
> Brian: Look, you've got it all wrong! You don't need to follow me. You don't need to follow anybody! You've got to think for yourselves! You're all individuals!
> Crowd: Yes! We're all individuals!
> Brian: You're all different!
> Crowd: Yes, we are all different!
> Man in crowd: I'm not...
> Crowd: Shhh!
*DO NOT OPEN A NEW ISSUE. PLEASE USE THE DISCUSSIONS SECTION.*
*I DIDN'T READ THE ABOVE LINE. PLEASE CLOSE THIS ISSUE.*
There are absolutely bugs that get reported, either in functionality or documentation, but requiring a level of triage in Discussions before promoting them up to Issues is a great way to keep things more actionable for folks wanting to come in and contribute fixes that the maintainers agree are needed.
1. Ask a high-quality LLM in research mode to gather empirical statistics on how different GitHub projects are setup.
2. Put human eyes on the data you find, look for patterns, see what is interesting. (I recommend reading on approaches that promote transparency about the order in which you collect data, form hypotheses, etc.)
3. Put on your anthropologist hat and do open-ended interviews with project maintainers.
And so on.
That being said, as long as you still have the discussion tab, auto-deleting all issues by default is not a big deal.
Somehow the distinction of just adding a tag / using filters doesn't communicate the cultural/process distinction in the same way.
Whereas if it goes via a Discussion first, the back and forth happens elsewhere.
Arguably a separate issue could still do this, but it being a discussion sets the expectation better.
> Arguably a separate issue could still do this, but it being a discussion sets the expectation better.
People do that all the time in bug trackers.
IRL every dev issue tracker needs a front-end bozo filter to handle the low-hanging fruit and the misunderstandings and the failures to RTFM and the cases of PEBCAK.
Issue trackers should be used exclusively for earmarking and tracking the progress of actionable items. This is somewhat similar to the integration between email clients and task managers, like how it's done in Gmail, Zoho, etc. You read the message first. If it requires an action from your side, create a task from it and link them.
There are other projects that do this too. A good example is the 'mise' project. Sourcehut projects use this workflow almost exclusively, since it's the default by design. I think sourcehut had it before GitHub did. What I would like to see is better integration between discussions/messages and task/issue lists on all these platforms.
Do I ever make mistakes?
No. It’s the users who are wrong.
> Do I ever make mistakes?
> No. It’s the users who are wrong.
This is a textbook example of being uncharitable. Framing matters a lot! If you frame something in an uncharitable way, you are likely to "lock in" that view and discount other ones. Mitchell is not saying «users are wrong to give feedback», he is merely saying «the usual conventions are not ideal for this project». Don't confuse the two.
It is clear to me that Mitchell is giving his answer to this question: «what process gives the best results for this OSS project?». He has adjusted the feedback process in a way that he thinks will give better results. This is a consequentialist framing of how to best serve the users of Ghostty, which I think is a useful lens.
Most people by default see "user got something wrong" and respond "rtfm" or "you don't understand" or "don't make mistakes".
The vast majority of people using Ghostty are not stupid. If they misunderstood something or made a mistake, it's highly likely that it could have been avoided with changes to improve Ghostty.
>> Do I ever make mistakes?
>> No. It’s the users who are wrong.
> I disagree. He's just trying to educate these guys about usability.
I invite you to reconsider for these reasons: (1) Have you seen people "just trying to educate" in an uncharitable way? Many people have. Such cases of 'education' may involve paternalism and/or assuming the other person is ignorant. For example, both can manifest in the phenomenon of "mansplaining". There are more tells, also: (2) The commenter doesn't ask questions; (3) The commenter doesn't steel-man the other position; (4) The commenter uses a mocking tone. (To be fair, I've done such things in the past, but I'm striving to do much less of it.)
> The vast majority of people using Ghostty are not stupid.
No one is claiming this. Individual intelligence is not the same as «how people behave in system S compared to system S'». In other words, people do 'stupid' things all too often -- just hop in an automobile and watch our collective behavior.* Not to mention riots and mobs.
In the case of some issue tracking systems, "not-stupid" people can do things that are counter productive.
So, if project leaders have the ability, dedication, sincerity, rationale, and motivation to experiment with different systems, I say GO FOR IT. Experiment. Take a risk. Refusing to experiment is often worse.
Some people forget a key underlying principle of 'agility'† : start somewhere, gather feedback, be rational, experiment, and see where it takes you. If you see two different teams in different circumstances doing things the same way, you might take a closer look: one or both might have rigid processes that have stopped learning.
> If they misunderstood something or made a mistake, it's highly likely that it could have been avoided with changes to improve Ghostty.
Maybe, but that sounds like a tall claim. Remember that the Ghostty team _did in-fact_ make changes to their _process_ based on reflection.
I put much more confidence in the Ghostty team, who has shown signs of thoughtfulness, to make careful and wise decisions than someone with no skin in the game (a vast majority of the people here), especially uncharitable ones.
* I try not to 'blame' individuals or groups -- most human responses are statistically predictable and often even sensible and maybe even justifiable from a narrow point of view. If anything, I ascribe more importance to the design of roads, automobiles, and the cultural pressures we face.
† Too many forms of 'agility' forget the notion of recursive self-improvement. They instead get mired in ceremony.
So to me it's easy to believe that a user expects something to work a certain way, does minimal or no research about it, and goes directly to reporting a bug when in reality it's intended behavior.
"Slop drives me crazy and it feels like 95+% of bug reports, but man, AI code analysis is getting really good. There are users out there reporting bugs that don't know ANYTHING about our stack, but are great AI drivers and producing some high quality issue reports.
This person (linked below) was experiencing Ghostty crashes and took it upon themselves to use AI to write a python script that can decode our crash files, match them up with our dsym files, and analyze the codebase for attempting to find the root cause, and extracted that into an Agent Skill.
They then came into Discord, warned us they don't know Zig at all, don't know macOS dev at all, don't know terminals at all, and that they used AI, but that they thought critically about the issues and believed they were real and asked if we'd accept them. I took a look at one, was impressed, and said send them all.
This fixed 4 real crashing cases that I was able to manually verify and write a fix for from someone who -- on paper -- had no fucking clue what they were talking about. And yet, they drove an AI with expert skill.
I want to call out that in addition to driving AI with expert skill, they navigated the terrain with expert skill as well. They didn't just toss slop up on our repo. They came to Discord as a human, reached out as a human, and talked to other humans about what they've done. They were careful and thoughtful about the process.
People like this give me hope for what is possible. But it really, really depends on high quality people like this. Most today -- to continue the analogy -- are unfortunately driving like a teenager who has only driven toy go-karts."
"Examples: https://github.com/ghostty-org/ghostty/discussions?discussio... "
The current "issues" system works fine for most small-medium projects and even many large projects. Any project who looks for a more "serious" solution would have its own Jira/bug tracker system, and you can find plenty of them.
There doesn't seem to be enough of a separation between the concepts of "issues" and "discussions" to support separating them into two features.
Given that discussions seem more general, it seems like the right path forward would be to have only discussions. Sub-features of issues could be added to discussions.
However, GitHub has no option to restrict issue creation to contributors only.
These folks do what we do: they have an issue template called "do not use this". Big whoop. People blow through those all day, so we're clicking "convert to discussion" all day.
Github please add this feature!
I finally moved on to the official GitHub app on mobile, but before that I used fasthub and other clients that had no idea about issue templates.
GitHub really needs to add permissions to issues, so that users can't create issues without the template; any kind of failure in creation is a sign that you're doing something wrong. The ability to add tags to issues when creating via CLI would also be helpful.
Then a GitHub Actions workflow runs on new issues, and any that have that label get auto-closed.
The idea is that folks with Triage+ can remove that label when creating an Issue, but not external contributors - might be worth giving that a go?
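A sketch of what that auto-close step could look like against the GitHub REST API (owner, repo, label name, and comment text are all placeholders; a real workflow would take the token from secrets):

    import os
    import requests

    OWNER, REPO = "example-org", "example-repo"  # placeholders
    LABEL = "needs-triage-gate"                  # placeholder label name
    headers = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }
    base = f"https://api.github.com/repos/{OWNER}/{REPO}/issues"

    # Find open issues that still carry the gate label...
    issues = requests.get(base, headers=headers,
                          params={"labels": LABEL, "state": "open"}).json()
    for issue in issues:
        if "pull_request" in issue:  # this endpoint also returns PRs
            continue
        n = issue["number"]
        # ...point the reporter at Discussions, then close as not planned.
        requests.post(f"{base}/{n}/comments", headers=headers,
                      json={"body": "Please open a Discussion instead."})
        requests.patch(f"{base}/{n}", headers=headers,
                       json={"state": "closed", "state_reason": "not_planned"})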
it would be way better if there was only one way for them to get their content in, in the first place
(I'm thinking of getting some data and words together to look at how this has helped us over the last ~18 months)
It looks great. As mentally easy to process as Jira tasks. Or even better, because it was written by a good "PM", which is not always the case commercially.
Edit: after reading the contributors doc, it seems that feature requests are discussions which should help. Unreproducible bugs, too; although I would wager that a lot of users believe they can reproduce bugs but in fact can't consistently, or believe their feature request is a bug.
It seems this approach is better but still requires someone to sort through the discussions before they're moved to the cleaner issues pile.
One big pile with filters, or a chaotic pile and a clean pile. That seems to be the end result of this, unless I'm missing something.
Tell him Victoria referred you.
The term "discussion" does not have anything to do with this.
I understand these are all github choices of terms, and it should also be reframed properly by github.
And then there are developers who idly complain about normal participation in the work of issues and the coordination of testing and feedback, because it sends them a notification that they turned on. Unconstructive bitching drives users and collaborators away. They could solve their notification problem rather than impose a burden and emotional bullshit on everyone else.
Just the first thing that popped into my head reading the reasoning. I think it makes a lot of sense to do it like this. Especially for a product which is cross platform that emulates / replaces other known products and on top has extensive configuration options. I also switched over from kitty a couple of weeks back and really like it.
When I have a clear "Issue" which I've already researched, it's a bit of friction, but it doesn't seem like any more work to dump exactly the same text into a Discussion... and yea. Issues becoming a dumping ground is a real issue. This seems like a reasonable strategy / experiment.
Personally, I use GH Issues for my own work, but there’s very few issues, so it’s not a burden. I’m a non-fan of JIRA.
I have seen GH Issues turn into Reddit-like flamefests (every now and then, someone posts a particularly entertaining one, here). Not my idea of productive work.
So this makes me think the developer here just doesn't like the idea of issues being reported on his project.
Who does this project actually serve? The "users", or someone else?
If I'm getting overwhelmed with hundreds of issues per week about some confusion around installation or use, I think those issues are completely justified. Something should probably be fixed if the happy path is this obscure. Pushing this reality into another bucket is not the solution for me.
It's one of those explanations that sounds very plausible on paper, but if you look at real-world issues it just doesn't hold: users will ask questions that are clearly answered in the first paragraph of the readme, en masse.
Yeah but people justifiably don't exhaustively read documentation. If people are getting confused because they didn't read some bit of documentation - even the first paragraph of the readme - then you shouldn't just dismiss them as stupid and bask in your superior documentation-reading abilities. You should think about how to resolve that confusion in a way that they would actually see it.
It's hard to explain how to do that without a concrete example, but it usually is possible. It's also usually more work than just replying RTFM, but you should at least be aware that you are choosing not to bother.
I think a concrete example would help here. Let me find one from this repo...
Ok after looking through about 20 discussions I was actually unable to find a single one that was a misunderstanding or misconfiguration on the user's part. They appear to all be real bugs (or feature requests), and very high quality ones at that.
So I think their assertion that 80–90% of what people think are bugs are actually not bugs is total and utter bullshit.
That's kind of unrelated to what we were discussing though; misunderstandings due to poor usability do happen but I guess we can't easily find examples in Ghostty.
> Any Discussion which clearly identifies a problem in Ghostty and can be confirmed or reproduced will be converted to an Issue by a maintainer
I can see why there is trepidation and guard rails around giving them the key to your office planner.
This includes both our open source project not giving the public access, and our entirely closed-source internal projects not giving other developers within the company write access.
This could be useful if not used for enshittification, where you have to get past the chatbot to reach anybody useful.
But, I am super lazy.
At a high level - the audience of discussions is the community at large, the audience of issues is the maintainers.
What Ghostty is doing with a dedicated category for issue triage should work just fine, despite it being an additional hop.
It's not that he has some inner urge to contribute in some way, he just encountered a bug while using the software and wants to report it. The alternative isn't coding — it's no contribution at all.
Unless you only ever work on projects that you have full absolute control over (unlikely if you have a job) then yes they absolutely are.
You can clone the other person’s apartment for free and do whatever you like, though. Just don’t barge into someone else’s apartment and demand they treat you, a stranger, as if it were yours.
Owning a project is counter-productive for QA. If it’s your project, you know where to click and where to not click.
OTOH, you don’t need to know anything about a project to conclude that a crash with access violation, or hang with 100% CPU usage, are clearly bugs.
Of course anyone can make a mistake. Maybe you prefer the 'discussions' route because then it's seemingly only possible for a project's own devs to make a mistake in creating an issue.