1. Should conference "service" be something we expect of postdocs (and even PhD candidates) rather than established experts?
> Often, as a result, the PC is staffed by junior, ambitious academics intent on filling their résumés. Note that it does not matter for these résumés whether the person did a good or bad job as a referee! [...] I very much doubt that the submissions of Einstein, Curie, Planck, and such to the Solvay conferences were assessed by postdocs. Top conferences should be the responsibility of the established leaders in the field.
2. Should programme chairs strive to maintain exclusivity of their conference track, or look for important ideas that deserve to be communicated?
> As a simple example, consider a paper that introduces a new concept, but does not completely work out its implications and has a number of imperfections. In the careerist view, it is normal to reject it as not ready for full endorsement. In the scientific view, the question for the program committee (PC) becomes: is the idea important enough to warrant publication even if it still has rough edges? The answer may well be yes. [...] Since top conferences boast of their high rejection rates, typically 80% to 90%, referees must look for reasons to reject the papers in their pile rather than arguments for accepting them.
3. Is computer science suffering from a focus on orthopraxy rather than scientific method?
> What threatens to make conferences irrelevant is a specific case of the general phenomenon of bureaucratization of science. Some of the bureaucratization process is inevitable: research no longer involves a few thousand elite members in a dozen countries (as it did before the mid-1900s), but is a global academic and industry business drawing in enormous amounts of money and millions of players for whom a publication is not just an opportunity to share their latest results, but a career step.
What do you think about these?
When I was doing my PhD ~15 years ago, I noticed that I rarely cited work that appeared in the top conferences of the subfield. Those conferences covered such a wide range of topics that often only one or two papers were in the same sub-subfield as me. And even those were often not directly relevant to my work.
But then there were small specialized conferences that had plenty of interesting papers every year. I left CS for another field a decade ago, but I still regularly attend some of those conferences and review for them. The papers published in them are still interesting and relevant to my work.
Another issue I have observed is that, exactly as the article highlights, conference reviewing has become about rejecting papers rather than accepting them: there are too many decent papers! This is perverse, because it may take only one negative review from an unfamiliar reviewer to get a paper rejected. And even if your paper is considered decent, it will often be necessary for at least one reviewer to "champion" it. This means that if your paper happens to be reviewed by one of the "top researchers" in your field, that person not championing your paper is a death sentence. And if for some reason this researcher does not like your work, you can only pray that they will not review your paper. But of course, reviewers get to bid on the papers they want to review, so this person is very likely to review your paper every time.
All of this means that in our modern conference system, the role of a few loud gatekeepers is greatly exaggerated, and they get to shape much of the output of "top conferences". This is really more of a human problem, but the lack of hierarchy in academia means that those people can get their way without major opposition for a long time...
Note of course that this is *highly* dependent on the field you work in. There are still many fields of CS with very reasonable communities.
(And contrary to the joke in the article, even your own work becomes uninspired when you ship it to those conferences. You can’t afford to be quirky or interesting.)
Fortunately every field has a fourth or fifth-tier conference that isn’t on this list (or a specialized topic conference that the rankings folks don’t care about), and those still serve the purposes that conferences were made for. You just might not be able to convince a ranking-obsessed administrator that your work has any value if you publish there.
And what's particularly frustrating is that many organizers try to combat this by writing calls for papers saying they "particularly encourage" submissions that are interdisciplinary, or focused on less fashionable topics, etc. It's good that they are trying to change things, but I think the main effect in practice is to encourage people to spend their time writing papers that have little chance of being accepted.
This issue isn't at all unique to computer science, though. Try publishing a paper in a top economics journal as an outsider!
[0] https://blog.cr.yp.to/20240612-bibkeys.html, previously discussed on HN here [1].
There's a slight (but only slight) irony in your use of the HackerNews convention for handling multiple links without breaking up the body of the main text. In this short-form medium it works great. I see someone made this same point in the thread you linked, at https://news.ycombinator.com/item?id=40673426
The other side of all these academic brownie points via papers (and reviews, which have become "brownie points for gatekeeping") is that most academic software is not only unmaintained but actually unusable. It rarely even compiles, and when it does, there is no --help, no good defaults, no README, and no way to maintain it. It is single-use software, and its singular use is to write the paper. Any other use case is almost frowned upon.
One of the worst parts of academic software is that if you rewrite it in a way that's actually usable and extensible, you can't publish that -- it's not new ("research") work. And not only will you have to cite the person who wrote the first useless version forever, they will claim credit for it if your tool actually takes off.
BTW, there are academics who don't follow this trend. I am glad that in my field (SAT), some of the best, e.g. Armin Biere and Randal Bryant, are not like this at all. Their software is insanely nice and they fix bugs many, many years after release. Notice that they are also incredibly good engineers.
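To make the "no --help, no good defaults" complaint concrete, here is a minimal sketch of the baseline a research tool could provide. This is my own illustration, not code from any paper or repo; the "solver" program name and its flags are invented for the example.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Even a throwaway research tool can document itself: argparse
    # gives you --help for free and makes every default explicit.
    parser = argparse.ArgumentParser(
        prog="solver",
        description="Solve the benchmark instance described in INPUT.",
    )
    parser.add_argument("input", help="path to the instance file")
    parser.add_argument("--seed", type=int, default=42,
                        help="RNG seed (default: %(default)s)")
    parser.add_argument("--timeout", type=float, default=60.0,
                        help="per-instance timeout in seconds (default: %(default)s)")
    return parser

# Parsing a sample command line; an unknown flag or a missing INPUT
# produces a readable usage message instead of a stack trace.
args = build_parser().parse_args(["bench.cnf", "--seed", "7"])
print(args.input, args.seed, args.timeout)
```

Ten lines of boilerplate, and the next person (often your future self) can discover every knob by running `solver --help`.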
There is little incentive for legible writing. You can write in a simple, down-to-earth manner and risk having someone object, "Your work is isomorphic to X, done years ago." Your counter-objection, "Well, X is so incomprehensible that no one had any chance of understanding it", will fall flat. Better not to risk it: write in an obtuse form and withhold the source code.
When I read HN discussions, it seems that in the USA, simple, clear explanations are something to be cherished. In Germany, on the other hand, the academic ideal is to use terms that describe things as precisely and exactly as possible.
In other words:
- In the USA, "clarity" means describing things as simply and understandably (for less educated people) as possible.
- In Germany (in particular in humanities academia), "clarity" means describing things in the most precise terms possible ('clear' in the sense of 'minimal risk of misunderstanding'). You may assume that the reader is either sufficiently educated to know these specialist terms or will make a serious effort to learn them.
Thus, the expectations of what "clarity" means in academic texts are very different between these countries.
Examples here: https://github.com/search?q=repo%3AIBM%2FFedMA%20hard&type=c...
These hard-coded parts are not easily adjustable.
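I haven't audited that repo line by line, but the general anti-pattern looks like this hypothetical sketch (the function names and numbers are invented for illustration): constants baked into the body of the code versus the same knobs surfaced as parameters with documented defaults.

```python
# Hypothetical anti-pattern: magic numbers buried in the training loop,
# so changing them means editing (and forking) the source.
def train_hardcoded(data):
    lr = 0.01      # why 0.01? no one knows; good luck sweeping it
    epochs = 50    # ditto
    return {"lr": lr, "epochs": epochs, "n": len(data)}

# The adjustable version: identical behavior by default, but every knob
# is a parameter that callers can override without touching the source.
def train(data, lr: float = 0.01, epochs: int = 50):
    return {"lr": lr, "epochs": epochs, "n": len(data)}

print(train([1, 2, 3], lr=0.1, epochs=5))
```

The refactor costs a few minutes, yet it is the difference between code that others can build on and code that only ever reproduces one table in one paper.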
This is a shock to many of our leaders, who were writing 8-bit assembly to do similar things. They commonly threw away all the work of the last version, since it only took them a few months to rewrite it for the exact features they needed. (Having written it just a year ago made the rewrite much faster, and the limitations of 8-bit machines made it worth rewriting, since they had to remove one feature to add a new one.)
And frankly, CS without those skills is useless, for the reason I was replying to. If they cannot write readable, maintainable code, they need to get a different job.
And there are already loads of advice out there about how to organize and manage huge code bases. It's usually consultancy based for anything but the broadest ideas though because it's so specific to the particular situation. An architecture is as much defined by the application it's built for as it is by what people would like to impose on it for convenience.
Meanwhile, the work at the main conference of AAAI or ICML was much farther along, and the value of having it presented at a conference, rather than in a journal, was minimal. The conventional wisdom was, "the talk is just an advertisement for the paper."
This doesn't always happen; there are many historical examples where the first publication of an idea or technology goes unnoticed, often for many years, until someone better-positioned re-invents the same thing.
> An interesting case in software engineering is dismissal for lack of “evaluation.” It would be, of course, ridiculous to deny the benefits that the emphasis on systematic empirical measurement has brought to software engineering in the last three decades. But it has become difficult today to publish conceptual work not yet backed by systematic quantitative studies.
struck a chord with me. The top-tier CS systems conferences for me (OSDI and SOSP) have gotten to the point where you basically have to be writing the paper about the system you built at a FAANG that serves 1B users daily to get accepted.
It's hard for a novel idea and first-cut implementation to compete with systems built over many years with a team of a dozen software engineers. Obviously, those big systems deserve tons of credit and it's amazing that Big Tech publishes those papers! Credit to them. But it's also the case that novel ideas with an implementation that hasn't seen 1B users yet still have value.
I suppose the argument is that workshops serve that purpose of novel ideas with unproven implementation. There's some truth to that, but as the article highlights, the full conference papers are the real currency.
Any conference that announces itself as being proudly diverse and inclusive will have to have some difficult board meetings this year. It's not just the US, there's several countries in Europe that need a closer look at too. I hear Canada and the Nordic countries are fairly safe.
The whole "show up to conferences internationally to network and put attendance on your CV" thing is also not great for people looking after children, among others.
In practice if you want discussion and citation for your cryptography paper, it has to go on IACR eprint at some point. Being published in CRYPTO is still a major endorsement, but not the way people actually get hold of a copy these days.
It seems to get a lot more attention now that people from a different type of country are getting affected.
> Any conference that announces itself as being proudly diverse and inclusive will have to have some difficult board meetings this year. It's not just the US, there's several countries in Europe that need a closer look at too. I hear Canada and the Nordic countries are fairly safe.
Do you mean safe for individuals or a choice of venue?
In the UK (which is the country I know best), individuals are fine once they get a visa, but it's not a safe choice in terms of planning, because the granting of visas to people from certain countries is unpredictable (so people you expect to be able to attend might not be allowed to).
The UK right now is also trying to figure out who can use what bathrooms. I don't understand the details myself.
https://www.supremecourt.uk/cases/judgments/uksc-2024-0042
The toilets thing getting the publicity is more a matter of the media being obsessed with that aspect of it. Obviously there are implications for that, but also for many other things.
As far as I know, the judgment affects much wider issues than just the women's "quota" in boardrooms in Scotland. It seems like a lot of major employers and their lawyers are trying to understand the implications and promising they'll have an update for everyone "soon".
[1] https://www.gov.uk/government/organisations/equality-and-hum... [2] https://www.equalityhumanrights.com/media-centre/interim-upd...
The Supreme Court ruling will also ensure that males are not present in women's prisons, women's hospital wards, women's sports, domestic violence refuges for women, and many other spaces designated as being single-sex.
It also confirms that, in law, sexual orientation is defined in terms of sex. One of the intervenors in the case included lesbian groups who were concerned that legal recognition of the rights of lesbian women would be rendered meaningless if heterosexual males could simply identify as such.
As for bathrooms, they are one of the few facilities for which access is based on trust. The activists who insist that they're going to use opposite-sex bathrooms regardless of what the law says are confirming that they can't be trusted to respect boundaries and stay out. Which says a lot really.
Are you saying that it's fair to extend such an exception?
That would change the dynamic. Some might object: "psychohistory!" What are we doing here anyway? And, what are we meant to be doing? Where is the "philosophy of computer science"?
1. Authors who just reviewed the paper and did not do anything substantial
2. Papers that do not ship with working code
3. Papers that are meaningless
https://open.substack.com/pub/mnky9800n/p/how-to-format-code...
The author really should have recognized this, as it serves his point about careerism and brownie points.
The idea that being forced into a citation style stifles innovation is hilarious, especially coming from a computer scientist - formal systems are all we do. It’s not so hard, is it? Use a citation manager and have them generated for you!
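For anyone without a citation manager handy, key generation really is mechanical. Here is a toy sketch; the surname-year-word format is a made-up convention for illustration, not the scheme from the linked post or any particular tool.

```python
import re
import unicodedata

def bib_key(author_surname: str, year: int, title: str) -> str:
    """Toy citation-key generator: surname + year + first significant
    title word, lowercased and ASCII-folded. The exact format is an
    invented convention, not a standard."""
    def fold(s: str) -> str:
        # Strip accents (NFKD decomposition), then keep only a-z0-9.
        ascii_s = unicodedata.normalize("NFKD", s).encode("ascii", "ignore").decode()
        return re.sub(r"[^a-z0-9]", "", ascii_s.lower())

    stopwords = {"a", "an", "the", "on", "of", "in", "for", "and"}
    words = [w for w in re.findall(r"[A-Za-z]+", title)
             if w.lower() not in stopwords]
    first = fold(words[0]) if words else "untitled"
    return f"{fold(author_surname)}{year}{first}"

print(bib_key("Gödel", 1931, "On Formally Undecidable Propositions"))
```

A few lines of string munging, applied uniformly by the tool, and nobody has to hand-invent keys or argue about them in review.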
Honestly though, I find much better luck in the workshops. They don't really have the same reach as the main conference, and the specificity of the topics means that the reviewers are usually much more focused on content rather than checking boxes. They want to make a good workshop for the attendees, so it's far more important for workshops to focus on actual content rather than the resume building activities. The LIVE workshop for instance doesn't even really require a paper: https://2024.splashcon.org/home/live-2024#Call-for-Submissio...
Other workshops require just an extended abstract. Or maybe a short paper that doesn't have to be archived. I find these venues easier to get into, easier to present at, and easier to have a good discussion with the attendees.
2) Record and post all of the conference talks. ACM is making great strides here, but many, many conferences aren't even recorded.
3) Reduce the paywalls and open up access to all information. Again, I really want to support the ACM here, but the conferences that aren't broadcast for free (and also aren't recorded) are also the ones that are hella expensive and require attending in person (making them 5x as expensive with travel and lodging). Provide a happy medium: $300 for remote attendance? Delay publishing the raw recordings by 12 weeks?
Collaboration mediums should scale with the size of the population. If the venue stays the same but you 10x the input, lots of great people and great research are going to fall through the cracks.