Recently I was made aware by colleagues of a publication by authors of a new agent-based modeling toolkit in a different, hipper programming language. They compared their system to others, including mine, and made kind of a big checklist of who's better in what, and no surprise, theirs came out on top. But digging deeper, it quickly became clear that they didn't understand how to run my software correctly; and in many other places they bent over backwards to cherry-pick, and made a lot of bold and completely wrong claims. Correcting the record would place their software far below mine.
Mind you, I'm VERY happy to see newer toolkits which are better than mine -- I wrote this thing over 20 years ago after all, and have since moved on. I wasn't inclined to demand a correction, but several colleagues insisted I do so. After a lot of back-and-forth however, it became clear that the journal's editor was too embarrassed and didn't want to require a retraction or revision. And the authors kept coming up with excuses for their errors. So the journal quietly dropped the complaint.
I'm afraid that this is very common.
I was an undergraduate at the University of Maryland when you were a graduate student there in the mid nineties. A lot of what you had to say shaped the way I think about computer science. Thank you.
I recommended that the journal not publish the paper, and gave them a solid list of improvements the authors should make before re-submitting. The journal agreed with me, and rejected the paper.
A couple of months later, I saw it had been published unchanged in a different journal. It wasn't even a lower-quality journal; if I recall correctly, the impact factor was actually higher than the original one's.
I despair of the scientific process.
This is one of the reasons you should never accept a single publication at face value. But this isn’t a bug — it’s part of the algorithm. It’s just that most muggles don’t know how science actually works. Once you read enough papers in an area, you get a good sense of where the norm of the distribution of knowledge lies, and if some flashy new result comes over the transom, you might be curious, but you’re not going to accept it without a lot more evidence.
This situation is different, because it’s a case where an extremely popular bit of accepted wisdom is both wrong, and the system itself appears to be unwilling to acknowledge the error.
How sad. Admitting and correcting a mistake may feel difficult, but it makes you credible.
As a reader, I would have much greater trust in a journal that solicited criticism and readily published corrections and retractions when warranted.
> We should distinguish the person from the deed. We all know good people who do bad things
> They were just in situations where it was easier to do the bad thing than the good thing
I can't believe I just read that. What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing?"
In this case, it seems not owning up to the issues is the bad part. That's a choice they made. Actually, multiple choices at different times, it seems. If you keep choosing the easy path instead of the path that is right for those that depend on you, it's easier for me to just label you a bad person.
I happen to agree that labeling them as villains wouldn’t have been helpful to this story, but they didn’t do that.
> It obscures the root causes of why the bad things are happening, and stands in the way of effective remedy.
There’s a toxic idea built into this statement: It implies that the real root cause is external to the people and therefore the solution must be a systemic change.
This hits a nerve for me because I’ve seen this specific mindset used to avoid removing obviously problematic people, instead always searching for a “root cause” that required us all to ignore the obvious human choices at the center of the problem.
Like blameless postmortems taken to a comical extreme, where one person is always doing something careless that causes problems and we all have to brainstorm a way to pretend that the system failed, not the person who continues to cause us problems.
Well, I'd argue the system failed in that the bad person was not removed. The root cause is then a bad hiring decision and bad management of problematic people. You can do a blameless postmortem guiding a change in policy which ends in some people getting fired.
This is just a proxy for "the person is bad" then. There's no need to invoke a system. Who can possibly trace back all the things that could or couldn't have been spotted at interview stage or in probation? Who cares, when the end result is "fire the person" or, probably, "promote the person".
In theory maybe, but in my experience the blameless postmortem culture gets taken to such an extreme that even when one person is consistently, undeniably to blame for causing problems we have to spend years pretending it’s a system failure instead. I think engineers like the idea that you can engineer enough rules, policies, and guardrails that it’s impossible to do anything but the right thing.
This can create a feedback loop where the bad players realize they can get away with a lot because if they get caught they just blame the system for letting them do the bad thing. It can also foster an environment where it’s expected that anything that is allowed to happen is implicitly okay to do, because the blameless postmortem culture assigns blame on the faceless system rather than the individuals doing the actions.
Not necessarily, although certainly people sometimes fall into that trap. When dealing with a system you need to fix the system. Ejecting a single problematic person doesn't fix the underlying problem - how did that person get in the door in the first place? If they weren't problematic when they arrived, does that mean there were corrosive elements in the environment that led to the change?
When a person who is a cog within a larger machine fails that is more or less by definition also an instance of the system failing.
Of course individual intent is also important. If Joe dropped the production database intentionally then in addition to asking "how the hell did someone like him end up in this role in the first place" you will also want to eject him from the organization (or at least from that role). But focusing on individual intent is going to cloud the process and the systemic fix is much more important than any individual one.
There's also a (meta) systemic angle to the above. Not everyone involved in carrying out the process will be equally mature, objective, and deliberate (by which I mean that unfortunately any organization is likely to contain at least a few fairly toxic people). If people jump to conclusions or go on a witch hunt that can constitute a serious systemic dysfunction in and of itself. Rigidly adhering to a blameless procedure is a way to guard against that while still working towards the necessary systemic changes.
> Rigidly adhering to a blameless procedure is a way to guard against that

is answered by:
> any organization is likely to contain at least a few fairly toxic people
It’s a good thing to take a look at where the process went wrong, but that’s literally just a postmortem. Going fully into blameless postmortems adds the precondition that you can’t blame people; you are obligated to transform the obvious into a problem with some process or policy.
Anyone who has hired at scale will eventually encounter an employee who seems lovely in interviews but turns out to be toxic and problematic in the job. The most toxic person I ever worked with, who culminated in dozens of peers quitting the company before he was caught red handed sabotaging company work, was actually one of the nicest and most compassionate people during interviews and when you initially met him. He, of course, was a big proponent of blameless postmortems and his toxicity thrived under blameless culture for longer than it should have.
It could also well be that Joe did the same thing at his last employer, someone in hiring happened to catch wind of it, a disorganized or understaffed process resulted in the ball somehow getting dropped, and here you are.
1) the immediate action _is more important immediately_ than the systemic change. We should focus on maximizing our actual "fixing": letting a toxic element continue to poison you while you waste time wondering how you got there is counterproductive. It is important to focus on the systemic change, but only once you have removed the person who will otherwise destroy the organization/kill us all.
2) I forgot. Sorry
If Joe dropped the production database and you're uncertain about his intentions then perhaps it would be a good idea to do the bare minimum by reducing his access privileges for the time being. No more than that though.
Whereas if you're reasonably certain that there was no intentional foul play involved then focusing on the individual from the outset isn't likely to improve the eventual outcome (rather it seems to me quite likely to be detrimental).
This is exactly the toxicity I’ve experienced with blameless postmortem culture:
Hiring is never perfect. It’s impossible to identify every problematic person at the interview stage.
Sometimes, it really is the person’s own fault. Doing mental gymnastics to assume the system caused the person to become toxic is just a coping mechanism to avoid acknowledging that some people really are problematic and it’s nobody’s fault but their own.
I'm not saying you shouldn't eventually arrive at the conclusion you're suggesting. I'm saying that it's extremely important not to start there and not to use the possibility of arriving there as an excuse to shirk asking difficult questions about the inner workings and performance of the broader organization.
> Doing mental gymnastics to assume the system caused the person to become toxic
No, don't assume. Ask if it did. "No that does not appear to be the case" can sometimes be a perfectly reasonable conclusion to arrive at but it should never be an excuse to avoid confronting uncomfortable realities.
1) Basic morality (good vs evil) with total agency ascribed to the individual
2) Basic systems (good vs bad), with total agency ascribed to the system and people treated as perfectly rational machines (where most of the comments here seem to sit)
3) Blended system and morality, or "Systemic Morality": agency can be system-based or individual-based, and morality can be good or bad. This is the single largest rung, because there's a lot to digest here, and it's where a lot of folks get stuck on one ("you can't blame people for making rational decisions in a bad system") or the other ("you can't fault systems designed by fallible humans"). It's why there's a lot of "that's just the way things are" useless attitudes at present, because folks don't want to climb higher than this rung lest they risk becoming accountable for their decisions to themselves and others.
4) "Comprehensive Morality": an action is net good or bad because of the system and the human. A good human in a bad system is more likely to make bad choices via adherence to systemic rules, just as a bad human in a good system is likely to find and exploit weaknesses in said system for personal gain. You cannot ascribe blame to one or the other, but rather acknowledge both separately and together. Think "Good Place" logic, with all of its caveats (good people in bad systems overwhelmingly make things worse by acting in good faith towards bad outcomes) and strengths (predictability of the masses at scale).
5) "Historical Morality": a system or person is net good or bad because of repeated patterns of behaviors within the limitations (incentives/disincentives) of the environment. A person who routinely exploits the good faith of others and the existing incentive structure of a system purely for personal enrichment is a bad person; a system that repeatedly and deliberately incentivizes the exploitation of its members to drive negative outcomes is a bad system. Individual acts or outcomes are less important than patterns of behavior and results. Humans struggle with this one because we live moment-to-moment, and we ultimately dread being held to account for past actions we can no longer change or undo. Yet it's because of that degree of accountability - that you can and will be held to account for past harms, even in problematic systems - that we have the rule of law, and civilization as a result.
Like a lot of the commenters here, I sat squarely in the third rung for years before realizing that I wasn't actually smart, but instead incredibly ignorant and entitled by refusing to truly evaluate root causes of systemic or personal issues and address them accordingly. It's not enough to merely identify a given cause and call it a day, you have to do something to change or address it to reduce the future likelihood of negative behaviors and outcomes; it's how I can rationalize not necessarily faulting a homeless person in a system that fails to address underlying causes of homelessness and people incentivized not to show empathy or compassion towards them, but also rationalize vilifying the wealthy classes who, despite having infinite access to wealth and knowledge, willfully and repeatedly choose to harm others instead of improving things.
Villainy and Heroism can be useful labels that don't necessarily simplify or ignorantly abstract the greater picture, and I'd like to think any critically-thinking human can understand when someone is using those terms from the first rung of the ladder versus the top rung.
Labelling a person as bad has predictive power - you should expect them to do bad acts again.
It might be preferable to instead label them as “a person with a consistent history of bad acts, draw your own conclusion, but we are all capable of both sin and redemption and who knows what the future holds”. I’d just call them a bad person.
That said, I do think we are often too quick to label people as bad based one bad act.
1. Who is responsible for adding guardrails to ensure all papers coming in are thoroughly checked & reviewed?
2. Who reviews these papers? Shouldn’t they own responsibility for accuracy?
3. How are we going to ensure this is not repeated by others?
I've read multiple times that a large percentage of the crime comes from a small group of people. Jail them, and the overall crime rate drops by that percentage.
Personally, I do believe that there are benefits to labelling others as villains if a certain threshold is met. It reduces cognitive strain by allowing us to blanket-label all of their acts as evil [0] (though with the drawback of occasionally mislabelling acts of good as evil), allowing us to prioritise more important things in life than the actions of what we call villains.
[0]: https://en.wikipedia.org/wiki/Halo_effect#The_reverse_halo_e...
If you were in their exact life circumstance and environment you would do the same thing. You aren’t going to magically sidestep cause and effect.
The act itself is bad.
The human performing the act was misguided.
I view people as inherently perfect, while their views of life, themselves, and their current situations are potentially misguided.
Eg, like a diamond covered in shit.
Just like it’s possible for a diamond to be uncovered and polished, the human is capable of acquiring a truer perspective and more aligned set of behaviors - redemption. Everyone is capable of redemption so nobody is inherently bad. Thinking otherwise may be convenient but is ultimately misguided too.
So the act and the person are separate.
Granted, we need to protect society from such misguidedness, so we have laws, punishments, etc.
But it’s about protecting us from bad behavior, not labeling the individual as bad.
I don't buy that for a moment. It presumes people do not have choices.
The difference between a man and an animal is a man has honor. Each of us gets to choose if we are a man or an animal.
The act itself, of saying something other than the truth, is always more complex than saying the truth. ← It took more words to describe the act in that very sentence. Because there are two ideas, the truth and not the truth. If the two things match, you have a single idea. Simple.
Speaking personally, if someone's very first contact with me is a lie, they are to be avoided and disregarded. I don't even care what "kind of person" they are. In my world, they're instantly declared worthless. It works pretty well. I could of course be wrong, but I don't think I'm missing out on any rich life experiences by avoiding obvious liars. And getting to the root cause of their stuff or rehabilitating them is not a priority for me; that's their own job. They might amaze me tomorrow, who knows. But it's called judgment for a reason. Such is life in the high-pressure world of impressing rdiddly.
If we equate being bad to being ignorant, then those people are ignorant/bad (with the implication that if people knew better, they wouldn't do bad things)
I'm sure I'm over simplifying something, looking forward to reading responses.
These failures aren’t on that list because they require active intent.
Both can be pursued without immediately jumping to defending a crime
Blameless post-mortems are critical for fixing the errors that allowed an incident to happen.
It's not processes that can be fixed, it's just humans being stupid.
Both views are maximalistic.
Negative consequences and money always work!
God gave us free will to choose good or evil in various circumstances. We need to recognize that in our assessments. We must reward good choices and address bad ones (eg the study authors'). We should also change environments to promote good and oppose evil so the pressures are pushing in the right direction.
Unfortunately academia as a pursuit has never had a larger headcount and the incentives to engage in misconduct have likely never been higher (and appear to be steadily increasing).
Surely the public discourse over the past decades has been steadily moving from substantive towards labeling each other villains, not the other way around.
When the opposition is called evil it's not because logic dictates it must be evil, it's called evil for the same reason it's called ugly, unintelligent, weak, cowardly and every other sort of derogatory adjective under the sun.
These accusations have little to do with how often people consider others things such as "ugly" or "weak", it's just signaling.
On the one hand, it is possible to become judgmental, habitually jumping to unwarranted and even unfair conclusions about the moral character of another person. On the other, we can habitually externalize the “root causes” instead of recognizing the vice and bad choices of the other.
The latter (externalization) is obvious when people habitually blame “systems” to rationalize misbehavior. This is the same logic that underpins the fantastically silly and flawed belief that under the “right system”, misbehavior would simply evaporate and utopia would be achieved. Sure, pathological systems can create perverse incentives, even ones that put extraordinary pressure on people, but moral character is not just some deterministic mechanical response to incentive. Murder doesn’t become okay because you had a “hard life”, for example. And even under “perfect conditions”, people would misbehave. In fact, they may even misbehave more in certain ways (think of the pathologies characteristic of the materially prosperous first world).
So, yes, we ought to condemn acts, we ought to be charitable, but we should also recognize human vice and the need for justice. Justly determined responsibility should affect someone’s reputation. In some cases, it would even be harmful to society not to harm the reputations of certain people.
It's a paradox. We know for an absolute fact that changing the underlying system matters massively, but we must continue to acknowledge individual choice, because the system of consequences and, as importantly, the system of shame keep those who wouldn't otherwise act morally in check. So we punish the person who was probably lead-poisoned the same as any other, despite knowing that we are partially at fault for the system that led to their misbehavior.
People are on average both bad and stupid, and don't function without a framework of consequences and expectations in which they expect to suffer and feel shame. They didn't make a mistake; they stood in front of all their professional colleagues and published what they knew were effectively lies. The fact that they can publish lies, and others are happy to build on those lies, indicates the whole community is a cancer. The fact that the community rejects calls for correction indicates it has metastasized, and, at least as far as that particular community goes, the patient is dead and there is nothing left to save.
They ought to be properly ridiculed and anyone who has published obvious trash should have any public funds yanked and become ineligible for life. People should watch their public ruin and consider their own future action.
If you consider the sheer amount of science that has turned out to be outright fraud in the last decade this is a crisis.
This is effectively denying the existence of bad actors.
We can introspect into the exact motives behind bad behaviour once the paper is retracted. Until then, there is ongoing harm to public science.
For example, you assume that guy trying to cut the line is a horrible person and a megalomaniac because you've seen this a thousand times. He really may be that, or maybe he's having an extraordinarily stressful day, or maybe he's just not integrated with the values of your society ("cutting the line is bad, no matter what") or anything else, BUT none of that really helps you think clearly. You just get angry and maybe raise your voice when you're warning him, because "you know" he won't understand otherwise. So now you've left your values too, because you are busy fighting a stereotype.
IMHO, the correct course of action is assuming good faith even with bad actions, and even with persistent bad actions, and thinking about the productive things you can do to change the outcome, or deciding that you cannot do anything.
You can perhaps warn the guy, and then if he ignores you, you can even go to security or pick another hill to die on.
I'm not saying that I can do this myself. I fail a lot, especially when driving. It doesn't mean I'm not working on it.
Turns out that calling someone on their bullshit can be a perfectly productive thing to do, it not only deals with that specific incident, but also promotes a culture in which it's fine to keep each other accountable.
It's also important to recognize that there are a lot of situations where calling someone out isn't going to have any (useful) effect. In such cases any impulsive behavior that disrupts the environment becomes a net negative.
It's also important to base your actions on what's at hand, not teaching a lesson to "those people".
It's fine and even good to assume good faith, extend your understanding, and listen to the reasons someone has done harm - in a context where the problem was already redressed and the wrongdoer is labelled.
This is not that. This is someone publishing a false paper, deceiving multiple rounds of reviewers, manipulating evidence, knowingly and for personal gain. And they still haven't faced any consequences for it.
I don't really know how to bridge the moral gap with this sort of viewpoint, honestly. It's like you're telling me to sympathise with the arsonist whilst he's still running around with gasoline
That wasn't how I read it. Neither sympathize nor sit around doing nothing. Figure out what you can do that's productive. Yelling at the arsonist while he continues to burn more things down isn't going to be useful.
Assuming good faith tends to be an important thing to start with if the goal is an objective assessment. Of course you should be open to an eventual determination of bad faith. But if you start from an assumption of bad faith your judgment will almost certainly be clouded and thus there is a very real possibility that you will miss useful courses of action.
The above is on an individual level. From an organizational perspective if participants know that a process could result in a bad faith determination against them they are much more likely to actively resist the process. So it can be useful to provide a guarantee that won't happen (at least to some extent) in order to ensure that you can reliably get to the bottom of things. This is what we see in the aviation world and it seems to work extremely well.
I mean, do not put the others into any stereotype. Assume nothing? Maybe that sounds better. Just look at the hand you are dealt and objectively think what to do.
If there is an arsonist, do you deal with that a-hole yourself, call the police, or first try to get your loved ones to safety?
Getting mad at the arsonist doesn't help.
Academics that refuse to reply to people trying to replicate their work need to be instantly and publicly fired, tenure or no. This isn't going to happen, so the right thing to do is for the vast majority of practitioners to just ignore academia whilst politically campaigning for the zeroing of government research grants. The system is unsaveable.
It's still up! Maybe the answer to building a resilient system lies in why it is still up.
This is true though, and one of those awkward times where good ideals like science and critical feedback brush up against potentially ugly human things like pride and ego.
I read a quote recently, and I don't like it, but it's stuck with me because it feels like it's dancing around the same awkward truth:
"tact is the art of make a point without making an enemy"
I guess part of being human is accepting that we're all human and will occasionally fail to be a perfect human.
Sometimes we'll make mistakes in conducting research. Sometimes we'll make mistakes in handling mistakes we or others made. Sometimes these mistakes will chain together to create situations like the post describes.
Making mistakes is easy - it's such a part of being human we often don't even notice we do it. Learning you've made a mistake is the hard part, and correcting that mistake is often even harder. Providing critical feedback, as necessary as it might be, typically involves putting someone else through hardship. I think we should all be at least slightly afraid and apprehensive of doing that, even if it's for a greater good.
Whatever happens, avoid direct confrontation at all costs.
On the other hand, it sounds like this workplace has weak leadership - have you considered leaving for some place better? If the manager can’t do their job enough to give you decent feedback and stop a guy giving 10 min stand ups, LEAVE.
Reasons for not leaving? Ok, then don’t be a victim. Tell yourself you’re staying despite the management and focus on the positive.
A blameless organization can work, so long as people within it police themselves. As a society this does not happen, thus making people more steadfast in their anti-social behavior
This actually doesn't surprise much. I've seen a lot of variety in the ethical standards that people will publicly espouse.
Yes, the complicity is normal. No the complicity isn't right.
The banality of evil.
> Vonnegut is not, I believe, talking about mere inauthenticity. He is talking about engaging in activities which do not agree with what we ourselves feel are our own core morals while telling ourselves, “This is not who I really am. I am just going along with this on the outside to get by.” Vonnegut’s message is that the separation I just described between how we act externally and who we really are is imaginary.
https://thewisdomdaily.com/mother-night-we-are-what-we-prete...
1) errors happen, basically accidents.
2) errors are made, wrong or unexpected result for different intention.
3) errors are caused, the error case is the intended outcome. This is where "bad people" dwell.
Knowing and keeping silent about 1) or 2) makes any error a 3). I think we are at 2) in TFA. This needs to be addressed, most obviously through system change, especially if actors seem to act rationally within the system (as the authors do) with broken outcomes.
These people are terrible at their job, perhaps a bit malicious too. They may be great people as friends and colleagues.
When the good thing is easier to do and they still knowingly pick the bad one for the love of the game?
The whole "bad vs good person" framing is probably not a very robust framework, never thought about it much, so if that's your position you might well be right. But it's not a consideration that escaped me, I reasoned under the same lens the person above did on intention.
> If we systematically tie bad deeds to bad people, then surely those people we love and know to be good are incapable of what they're being accused.
A strong claim that needs to be supported, and actually the question whose nuances are being discussed in this thread.
Anyone can do a bad deed.
Anyone can also be a good person to someone else.
If a bad deed automatically makes a bad person, those who recognize the person as good have a harder time reconciling the two realities. Simple.
Also, is the point recognizing bad people or getting rid of bad science. Like I said, choose your victories.
For starters, the bar should be way higher than accusations from a random person.
For me, there's a red flag in the story: publishing reviews and criticism of other papers is utterly mundane in academia. Some Nobel laureates even authored papers rejecting established theories. The very nature of peer review involves challenging claims.
So where is the author's paper featuring commentaries and letters, subjecting the author's own criticism to peer review?
Other than just the label being difficult to apply, these factors also make the argument over who is a "bad person" not really productive, and I will put those sorts of caveats into my writing because I just don't want to waste my time arguing the point. Like, what does "bad person" even mean, and is it even consistent across people? I think it makes a lot more sense to use clearer labels for which we have a lot more evidence, like "untrustworthy scientist" (which you might think inherently makes a bad person, or not).
/s
I have a relative who lives in Memphis, Tennessee. A few years ago some guy got out of prison, went to a fellow's home to buy a car, shot the car owner dead, stole the car and drove it around until he got killed by the police.
One of the neighbors said, I kid you not, "he's a good kid"
But there is a concern which goes beyond the "they" here. Actually, "they" could just as well not exist, and the whole narrative in the article could be some LLM hallucination; we are still training ourselves in how we respond to this or that behavior we can observe, and influencing how we will act in the future.
If we go with the easy path of labeling people as the root cause, that's the habit we are forging for ourselves. We are missing the opportunity to hone our sense of nuance and critical thought about the wider context, which might be a better starting point for tackling the underlying issue.
Of course, name-and-shame is still there in the rhetorical toolbox, and everyone and their dog is able to use it, even when rage and despair are all that remain in control of one's mouth. Using it with relevant parsimony, however, is not going to happen from mere reactive habits.
Of course doing so is not free and it takes time. A paper represents at least months of work in data collection, analysis, writing, and editing, though. A tarball seems like a relatively small amount of effort to provide a huge increase in confidence in the result.
On my side-project todo list, I have an idea for a scientific service that overlays a "trust" network over the citation graph. Papers that uncritically cite other work that contains well-known issues should get tagged as "potentially tainted". Authors and institutions that accumulate too many of such sketchy works should be labeled equally. Over time this would provide an additional useful signal vs. just raw citation numbers. You could also look for citation rings and tag them. I think that could be quite useful but requires a bit of work.
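To make the idea concrete, here's a minimal sketch of the tainting pass, assuming papers are plain string IDs, citations arrive as an edge list, and a hand-curated seed set of known-problematic papers (all names below are hypothetical):

```python
from collections import defaultdict, deque

# Hypothetical seed set: paper IDs with well-documented, uncorrected issues.
KNOWN_ISSUES = {"paper:smith2011", "paper:jones2014"}

def taint(citations, seeds):
    """Flag every paper that directly or transitively cites a seed paper.

    citations: iterable of (citing_id, cited_id) edges.
    Returns the set of 'potentially tainted' paper IDs.
    """
    cited_by = defaultdict(set)      # cited paper -> papers that cite it
    for citing, cited in citations:
        cited_by[cited].add(citing)

    tainted, queue = set(seeds), deque(seeds)
    while queue:                     # breadth-first walk "up" the graph
        for citer in cited_by[queue.popleft()]:
            if citer not in tainted:
                tainted.add(citer)
                queue.append(citer)
    return tainted - set(seeds)      # report the citers, not the seeds

edges = [("paper:doe2016", "paper:smith2011"),
         ("paper:lee2019", "paper:doe2016")]
print(taint(edges, KNOWN_ISSUES))   # {'paper:doe2016', 'paper:lee2019'}
```

The hard part, of course, is distinguishing critical citations from approving ones before propagating; otherwise a paper that debunks a flawed result gets tainted by the very work it criticizes.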
The idea failed a simple sanity check: just going to Google Scholar, doing a generic search and reading randomly selected papers from within the past 15 years or so. It turned out most of them were bogus in some obvious way. A lot of ideas for science reform take as axiomatic that the bad stuff is rare and just needs to be filtered out. Once you engage with some field's literatures in a systematic way, it becomes clear that it's more like searching for diamonds in the rough than filtering out occasional corruption.
But at that point you wonder, why bother? There is no alchemical algorithm that can convert intellectual lead into gold. If a field is 90% bogus then it just shouldn't be engaged with at all.
1) Anyone publishes anything they want, whenever they want, as much or as little as they want. Publishing does not say anything about your quality as a researcher, since anyone can do it.
2) Being published doesn't mean it's right, or even credible. No one is filtering the stream, so there's no cachet to being published.
We then let memetic evolution run its course. This is the system that got us Newton, Einstein, Darwin, Mendeleev, Euler, etc. It works, but it's slow, sometimes ugly to watch, and hard to game so some people would much rather use the "Approved by A Council of Peers" nonsense we're presently mired in.
Still I'm skeptical about any sort of system trying to figure out 'trust'. There's too much on the line for researchers/students/... to the point where anything will eventually be gamed. Just too many people trying to get into the system (and getting in is the most important part).
The system ends up promoting an even more conservative culture. What might start great will end up with groups and institutions being even more protective of 'their truths' to avoid getting tainted.
Don't think there's any system which can avoid these sort of things, people were talking about this before WW1, globalisation just put it in overdrive.
When you added it up, most of the hard parts were Engineering, and a bit of Econ. You would really struggle to work through tough questions in engineering, spend a lot of time on economic theory, and then read the management stuff like you were reading a newspaper.
Management you could spot a mile away as being soft. There's certainly some interesting ideas, but even as students we could smell it was lacking something. It's just a bit too much like a History Channel documentary. Entertaining, certainly, but it felt like false enlightenment.
Ranking 1 to 3, with 1 being the best and 3 the bare minimum for publication:
3. Citations only
2. Citations + full disclosure of data.
1. Citations + full disclosure of data + replicated
And from the comments:
> From my experience in social science, including some experience in management studies specifically, researchers regularly believe things – and will even give policy advice based on those beliefs – that have not even been seriously tested, or have straight up been refuted.
Sometimes people use fewer than one non-replicable study: they invent studies and cite those! An example is the "Harvard Goal Study" that is often trotted out at self-review time at companies. The supposed study suggests that people who write down their goals are more likely to achieve them than people who do not. However, Harvard itself cannot find any such study:
https://en.wikipedia.org/wiki/Addiction_Rare_in_Patients_Tre...
Straight-up replications are rare, but if a finding is real, other PIs will partially replicate and build upon it, typically as a smaller step in a related study. (E.g., a new finding about memory comes out, my field is emotion, I might do a new study looking at how emotion and your memory finding interact.)
If the effect is replicable, it will end up used in other studies (subject to randomness and the file drawer effect, anyway). But if an effect is rarely mentioned in the literature afterwards...run far, FAR away, and don't base your research off it.
A good advisor will be able to warn you off lost causes like this.
https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/pdf/pmed.00...
Ioannidis' work during Covid raised him in my esteem. It's rare to see someone in academics who is willing to set their own reputation on fire in search of truth.
“Most Published Research Findings Are False” -> “Most Published COVID-19 Research Findings Are False” -> “Uh oh, I did a wrongthink, let’s backtrack a bit”.
Is that it?
If IFR is low then a lot of the assumptions that justified lockdowns are invalidated (the models and assumptions were wrong anyway for other reasons, but IFR is just another). So Ioannidis was a bit of a class traitor in that regard and got hammered a lot.
The claim he's a conspiracy theorist isn't supported, it's just the usual ad hominem nonsense (not that there's anything wrong with pointing out genuine conspiracies against the public! That's usually called journalism!). Wikipedia gives four citations for this claim and none of them show him proposing a conspiracy, just arguing that when used properly data showed COVID was less serious than others were claiming. One of the citations is actually of an article written by Ioannidis himself. So Wikipedia is corrupt as per usual. Grokipedia's article is significantly less biased and more accurate.
https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaw...
That said, I'd put both his serosurvey and the conduct he criticized in "Most Published Research Findings Are False" in a different category from the management science paper discussed here. Those seem mostly explainable by good-faith wishful thinking and motivated reasoning to me, while that paper seems hard to explain except as a knowing fraud.
There's the other angle of selective outrage. The case for lockdowns was being promoted based on, amongst other things, the idea that PCR tests have a false positive rate of exactly zero, always, under all conditions. This belief is nonsense although I've encountered wet lab researchers who believe it - apparently this is how they are trained. In one case I argued with the researcher for a bit and discovered he didn't know what Ct threshold COVID labs were using; after I told him he went white and admitted that it was far too high, and that he hadn't known they were doing that.
Gelman's demands for an apology seem very different in this light. Ioannidis et al not only took test FP rates into account in their calculations but directly measured them to cross-check the manufacturer's claims. Nearly every other COVID paper I read simply assumed FPs don't exist at all, or used bizarre circular reasoning like "we know this test has an FP rate of zero because it detects every case perfectly when we define a case as a positive test result". I wrote about it at the time because this problem was so prevalent:
https://medium.com/mike-hearn/pseudo-epidemics-part-ii-61cb0...
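The base-rate arithmetic at stake here is easy to make concrete. A small sketch with purely illustrative numbers (not the measured rates of any particular test):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(actually infected | positive test), straight from Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Illustrative: 0.1% true prevalence among those tested, a 95%-sensitive
# test with 99.5% specificity (i.e. a 0.5% false positive rate).
print(positive_predictive_value(0.001, 0.95, 0.995))  # ~0.16

# The circular version: assume specificity is exactly 1.0, so every
# positive is a "true case" by definition and PPV is 1.0 by construction.
print(positive_predictive_value(0.001, 0.95, 1.0))    # 1.0
```

With low enough prevalence, even a small nonzero FP rate means most positives are false, which is why assuming the rate is exactly zero matters so much.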
I think Gelman realized after the fact that he was being over the top in his assessment, because the article has since been amended with numerous "P.S." paragraphs which walk back some of his own rhetoric. He's not a bad writer, but in this case I think the overwhelming peer pressure inside academia to conform to the public health narratives got to even him. If the cost of pointing out problems in your field is that every paper you write must from then on be considered perfect by every possible critic, that's just another way to stop people flagging problems.
I don't think Gelman walked anything back in his P.S. paragraphs. The only part I see that could be mistaken for that is his statement that "'not statistically significant' is not the same thing as 'no effect'", but that's trivially obvious to anyone with training in statistics. I read that as a clarification for people without that background.
We'd already discussed PCR specificity ad nauseam, at
https://news.ycombinator.com/item?id=36714034
These test accuracies mattered a lot while trying to forecast the pandemic, but in retrospect one can simply look at the excess mortality, no tests required. So it's odd to still be arguing about that after all the overrun hospitals, morgues, etc.
This is a frustrating aspect of studies. You have to contact the authors for full datasets. I can see why it would not be possible to publish them in the past due to limited space in printed publications. In today's world though every paper should be required to have their full datasets published to a website for others to have access to in order to verify and replicate.
If this isn't bad people, then who can ever be called bad people? The word "bad" loses its meaning if you explain away every bad deed by such people as something else. Putting other people's lives at risk by deciding to drive when you are drunk sounds like very bad people to me.
> They’re living in a world in which doing the bad thing–covering up error, refusing to admit they don’t have the evidence to back up their conclusions–is easy, whereas doing the good thing is hard.
I don't understand this line of reasoning. So if people do bad things because they know they can get away with it, they aren't bad people? How does this make sense?
> As researchers they’ve been trained to never back down, to dodge all criticism.
Exactly the opposite is taught. These people are deciding not to back down and admit wrong doing out of their own accord. Not because of some "training".
> because they know they can get away with it
the point is that the paved paths lead to bad behavior
well designed systems make it easy to do good
> Exactly the opposite is taught.
"trained" doesn't mean "taught". most things are learned but not taught
“That’s a bad thing to do…”
Maybe should be: “That’s a stupid thing to do…”
Or: reckless, irresponsible, selfish, etc.
In other words, maybe it has nothing to do with morals and ethics. Bad is kind of a lame word with limited impact.
You guys are saying that drink driving does not make someone a bad person. Ok. Let's say I grant you that. Where do you draw the line for someone being a bad person?
I mean with this line of reasoning you can "explain way" every bad deed and then nobody is a bad person. So do you guys consider someone to be actually a bad person and what did they have to do to cross that line where you can't explain away their bad deed anymore and you really consider them to be bad?
Once something enters The Canon, it becomes “untouchable,” and no one wants to question it. Fairly classic human nature.
> "The most erroneous stories are those we think we know best -and therefore never scrutinize or question."
-Stephen Jay Gould
ResearchGate says 3936 citations. I'm not sure what they are counting, probably all the PDFs uploaded to ResearchGate.
I'm not sure how they count 6000 citations, but I guess they are counting everything, including quotes by the vice president. Probably 6001 after my comment.
Quoted in the article:
>> 1. Journals should disclose comments, complaints, corrections, and retraction requests. Universities should report research integrity complaints and outcomes.
All comments, complaints, corrections, and retraction requests? Unmoderated? Einstein articles will be full of comments explaining why he is wrong, from racists to people who can't spell Minkowski to save their lives. In /newest there is like one post per week from someone who discovered a new physics theory with the help of ChatGPT. Sometimes it's the same guy, sometimes it's a new one.
[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1964011
[2] https://www.researchgate.net/publication/279944386_The_Impac...
The number appears to be from Google Scholar, which currently reports 6269 citations for the paper
Judging from PubPeer, which allows people to post all of the above anonymously and with minimal moderation, this is not an issue in practice.
It has 0 comments, for an article that forgot "not" in "the result is *** statistical significative".
Made me think of the black plastic spoon error that was off by a factor of 10, where the author also said it didn't impact the main findings.
https://statmodeling.stat.columbia.edu/2024/12/13/how-a-simp...
Actually it’s not science at all.
But if you're going to quote the whole thing it seems easier to just say so rather than quoting it bit by bit interspersed with "King continues" and annotating each I with [King].
I often say that "hard sciences" have often progressed much more than social/human sciences.
[1] https://en.wikipedia.org/wiki/Replication_crisis#In_medicine
With the above, I think we've empirically proven that we can't trust mathematicians more than any other humans. We should still rigorously verify their work with diverse, logical, and empirical methods. Also, build from the ground up on solid ideas that are highly vetted. (Which linear algebra actually does.)
The other approach people are taking is foundational, machine-checked proof assistants. These use a vetted logic whose assistant produces a series of steps that can be checked by a tiny, highly-verified checker. They'll also often use a reliable formalism to check other formalisms. The people doing this have been making everything from proof checkers to compilers to assembly languages to code extraction in those tools, so they are highly trustworthy.
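For a flavor of what that looks like, here's a trivial machine-checked proof (Lean 4, purely as an illustration): the elaborator that helps construct the proof is large, but the proof term it emits is re-verified by Lean's small trusted kernel.

```lean
-- Commutativity of addition on the natural numbers, proved by appeal to
-- a library lemma. The kernel re-checks the resulting proof term, so
-- trusting the theorem reduces to trusting the tiny kernel (plus the spec).
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```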
But, we still need people to look at the specs of all that to see if there are spec errors. There's fewer people who can vet the specs than can check the original English and code combos. So, are they more trustworthy? (Who knows except when tested empirically on many programs or proofs, like CompCert was.)
And therein lies the uncomfortable truth: Collaborative opportunities take priority over veracity in publications every time.
These probably have a bigger chance of being published, as you are providing a "novel" result instead of fighting the get-along culture (which is, honestly, present in the workplace as well). But ultimately they are harder to do (research-wise! though not politically), because they possibly mean you have figured out an actual thing.
Not saying this is the "right" approach, but it might be a cheaper, more practical way to get a paper turned around.
Whether we can work this out in research in a proper way is linked to whether we can work it out everywhere else. How many times have you seen people pat each other on the back despite lousy performance and no results? It's just easier to switch positions in the private sector than in research, so you'll have more people there who aren't afraid to call out a bad job, and, well, there's this profit that needs to pay your salary too.
As I said, harder from a research perspective, but if you can show, for instance, that sustainable companies are less profitable with a better study, you have basically contradicted the original one.
Benefits we can get from collective works, including scientific endeavors, are indefinitely large, as in far more important than what can be held in the head of any individual.
Incentives are just irrelevant as far as global social good is concerned.
Institutions could do something, surely. Require one-in-n papers be a replication. Only give prizes to replicated studies. Award prize monies split between the first two or three independent groups demonstrating a result.
The 6k citations though ... I suspect most of those instances would just assert the result if a citation wasn't available.
If the flow of tax, student debt and philanthropic money were cut off, the journals would all be wiped out because there's no organic demand for what they're doing.
They are pushed to publish a lot, which means journals have to review a lot of stuff (and they cannot replicate findings on their own). Once a paper is published in a decent journal, other researchers may not "waste time" replicating all its findings, because they also want to publish a lot. The result is papers getting popular even though no one has actually bothered to replicate the results, especially if those papers are quoted by a lot of people and/or are written by otherwise reputable people or universities.
That's not right; retractions should only be for research misconduct cases. It is a problem with the article's recommendations too. Even if a correction is published that the results may not hold, the article should stay where it is.
But I agree with the point about replications, which are much needed. That was also the best part in the article, i.e. "stop citing single studies as definitive".
I read the paper as well. My background is mathematics and statistics and the data was quite frankly synthesised.
But the article is generally weird or even harmful too. Going to social media with these things and all; we have enough of that "pretty" stuff already.
However, there are two problems with it. Firstly, it's a step towards gamification, and having tried that model on reputation scoring at a fintech, it was a bit of a disaster. Secondly, very few studies are replicated in the first place unless there is a demand for linked research to replicate them before building on them.
There are also entire fields which are mostly populated by bullshit generators. And they actively avoid replication studies. Certain branches of psychology are rather interesting in that space.
Maybe, I cannot say, but what I can say is that CS is in the midst of a huge replication crisis because LLM research cannot be replicated by definition. So I'd perhaps tone down the claims about other fields.
Pushing for retraction just like that and going off to private sector is…idk it’s a decision.
She was just done with it then and a pharma company said "hey you fed up with this shit and like money?" and she was and does.
edit: as per the other comment, my background is mathematics and statistics after engineering. I went into software but still have connections back to academia which I left many years ago because it was a political mess more than anything. Oh and I also like money.
This one is pretty egregious.
I only needed the Spanish translation. Now I am proficient in spoken and written Spanish, and I can perfectly understand what is said, and yet I still ran the English through Google Translate and printed it out without really checking through it.
I got to the podium, and there was a line where I said "electricity is in the air" (a metaphor, obviously), and the Spanish translation said "electricidad no está en el aire" ("electricity is NOT in the air"). I was able to correct that on the fly, but I was pissed at Translate, and I badmouthed it for months. And sure, it was my fault for not proofing and vetting the entire output, but come on!
Living VPs:
Joe Biden — VP 2009–2017 (became President in 2021; after that he’s called a former VP and former president). Not likely the one referenced after 2017, because he became president in 2021, so later citations would likely call him a former president instead of a former VP.
Dan Quayle — VP 1989–1993, alive through 2026
Al Gore — VP 1993–2001, alive through 2026
Mike Pence — VP 2017–2021, alive through 2026
Kamala Harris — VP 2021–2025, alive through 2026
J.D. Vance — VP 2025–present (as of 2026)
Talked about it years ago https://news.ycombinator.com/item?id=26125867
Others said they’d never seen it. So maybe it’s rare. But no one will tell you even if they encounter it. Guaranteed career blackball.
When a junior researcher, e.g. a grad student, fails to replicate a study, they assume it's technique. If they can't get it after many tries, they just move on, and try some other research approach. If they claim it's because the original study is flawed, people will just assume they don't have the skills to replicate it.
One of the problems is that science doesn't have great collaborative infrastructure. The only way to learn that nobody can reproduce a finding is to go to conferences and have informal chats with people about the paper. Or maybe if you're lucky there's an email list for people in your field where they routinely troubleshoot each other's technique. But most of the time there's just not enough time to waste chasing these things down.
I can't speak to whether people get blackballed. There's a lot of strong personalities in science, but mostly people are direct and efficient. You can ask pretty pointed questions in a session and get pretty direct answers. But accusing someone of fraud is a serious accusation and you probably don't want to get a reputation for being an accuser, FWIW.
I've also seen the resistance that results from trying to investigate or even correct an issue in a key result of a paper. Even before it's published the barrier can be quite high (and I must admit that since it's not my primary focus and my name was not on it, I did not push as hard as I could have on it)
The replication crisis is largely particular to psychology, but I wonder about the scope of the don't rock the boat issue.
https://blog.plan99.net/replication-studies-cant-fix-science...
I think perhaps blackball is guaranteed. No one likes a snitch. “We’re all just here to do work and get paid. He’s just doing what they make us do”. Scientist is just job. Most people are just “I put thing in tube. Make money by telling government about tube thing. No need to be religious about Science”.
We need to throw all of this out by default. From public policy to courtrooms, we need to treat it like any other eyewitness claim. We shouldn't believe anything unless it has strong arguments or data backing it. For science, we need the scientific method applied with skeptical review and/or replication. Our tools, like statistical methods and programs, must be vetted.
Like with logic, we shouldn't allow them to go beyond what's proven in this way. So, only the vetted claims are allowed as building blocks (premises) in newly-vetted work. The premises must be used how they were used before. If not, they are re-checked for the new circumstances. Then, the conclusions are stated with their preconditions and limitations, to only be applied that way.
I imagine many non-scientists and taxpayers assumed what I described is how all these "scientific facts" and "consensus" claims were produced. The opposite was true in most cases. So, we need to not only redo it but apply the scientific method to the institutions themselves, assessing their reliability. If they don't become reliable, they lose their funding, and quickly.
(Note: There are groups in many fields doing real research and experimental science. We should highlight them as exemplars. Maybe let them take the lead in consulting for how to fix these problems.)
> We need to throw all of this out by default. From public policy to courtrooms, we need to treat it like any other eyewitness claim.
If you can't trust eyewitness claims, if you can't trust video or photographic or audio evidence, then how does one Find Truth? Nobody really seems to have a solid answer to this.
Next, we need to understand why that is, which sources should be trusted, and which can't be. Also, what methods to use in what contexts. We need to develop education for people about how humanity actually works. We can improve steadily over time.
On my end, I've been collecting resources that might be helpful. That includes Christ-centered theology with real-world application, philosophies of knowledge with guides on each one, differences between real vs organized science, biological impact on these, dealing with media bias (eg AllSides), worldview analyses, critical thinking (logic), statistical analyses (esp error spotting), writing correct code, and so on.
One day, I might try to put it together into a series that equips people to navigate all of this stuff. For right now, I'm using it as a refresher to improve my own abilities ahead of entering the Data Science field.
For example, look at how people interact with LLMs. Lots of superstition (take a deep breath) not much reading about the underlying architecture.
Citation studies are problematic, and their use can and should be criticized. But this here is just warm air built on a fundamental misunderstanding of how to measure and interpret citation data.
“Your email is too long.”
This whole thing is filled with “yeah, no s**” and lmao.
More seriously, pretty sure the whole ESG thing has been debunked already, and those who care to know the truth already know it.
A good rule of thumb is to be skeptical of results that make you feel good because they “prove” what you want them to.
All the talks they were invited to give, all the followers they had, all the courses they sold and impact factor they have built. They are not going to come forward and say "I misinterpreted the data and made far-reaching conclusions that are nonsense; sorry for misleading you and thousands of others".
The process protects them as well. Someone can publish another paper and make different conclusions. There is zero effort to get to the truth, to tell people what is and isn't current consensus and what is reasonable to believe. Even if it's clear to anyone who digs a bit deeper, it will not be communicated to the audience academia is supposed to serve. The consensus will just quietly shift while the heavily quoted paper is still there. The talks are still out there, the false information is still propagated, while the author enjoys all the benefits and suffers none of the negative consequences.
If it functions like that I don't think it's fair that tax payer funds it. It's there to serve the population not to exist in its own world and play its own politics and power games.
Today the elites rule the plebs by saying "Science says so, so you must do this".
Author doesn't seem to understand this: the purpose of research papers is to be gospel, something to be believed, not scrutinized.
There is a reason scriptures were kept away from the oppressed, or only made available to them in a heavily censored form (e.g. the Slaves Bible).
In the past, the elites said "don't read the religious texts, WE will tell you what's in them."
Catholic and Orthodox Christianity do not focus as much on the Bible as Protestant Christianity. They are based on the tradition, of which the Bible is only a part, while the Protestant Reformation elevated the Bible above the tradition. (By a tortured analogy, you could say that Catholicism and Orthodoxy are common law Christianity, while Protestantism is civil law Christianity.)
From a Catholic or Orthodox perspective, there is a living tradition from the days of Jesus and the Apostles to present day. Some parts of it were written down and became the New Testament, but the parts that were left out were equally important. You cannot therefore understand the Bible without understanding the tradition, because it's only a partial account.
For example there is a long history of studies of the relationship between working hours and productivity which is one of the few things that challenges the idea that longer hours means more output.
Even if you support sustainability, criticizing the paper will be treated as heresy by many.
Despite our idealistic vision of Science(tm), it is a human process done by humans with human motivations and human weaknesses.
From Galileo to today, we have repeatedly seen the enthusiastic willingness by majorities of scientists to crucify heretics (or sit by in silence) and to set aside scientific thinking and scientific process when it clashes against belief or orthodoxy or when it makes the difference whether you get tenure or publication.