Can you hook us up with some deep links?
If we consider that the real majors move about 400k-500k passengers/day, let's be really optimistic and say that each passenger checks their booking 6 times a day for the week before they fly. That's roughly 500k × 7 × 6 ≈ 21M requests/day, or around 250 requests/sec.
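The estimate above can be sketched out explicitly; all the inputs are the assumptions stated in the comment, not measured traffic:

```python
# Back-of-the-envelope traffic estimate using the assumed figures above.
passengers_per_day = 500_000   # upper end of the assumed 400k-500k range
checks_per_day = 6             # assumed booking checks per passenger per day
days_of_checking = 7           # assumed: each passenger checks for a week pre-flight

# At any moment, roughly 7 days' worth of upcoming passengers are actively checking.
active_checkers = passengers_per_day * days_of_checking
requests_per_day = active_checkers * checks_per_day
requests_per_sec = requests_per_day / 86_400  # seconds per day

print(f"{requests_per_sec:.0f} requests/sec")  # ≈ 243, i.e. "around 250"
```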
Anyone know about the consumer facing tech stacks at airlines these days? Seems unlikely that they'd have databases that would auto scale 400x...
I think more likely the API would be behind some kind of bot protection that would shut this down before any kind of technical rate limit is reached.
Sounds like no bug bounty?
It's great if OP is happy with the outcome, but it's so infuriating that companies are allowed to leak everyone's data with zero accountability and rely on the kindness of security researchers to do free work to notify them.
I wish there was a law that assigned a dollar value to different types of PII leaks and fined the organization that amount with some percentage going to the whistleblower. So a security researcher could approach a vendor and say, "Hi! I discovered vulnerabilities in your system that would result in a $500k fine for you. For $400k, I'll disclose it to you privately, or you can turn me down and I'll receive $250k from your fines."
There is. It is called GDPR.
Plenty of companies have been fined for leaks like this.
Some countries also have whistleblower bounties but, as you might expect, there are some perverse incentives there.
How does security research like this work out in practice, in the EU?
I read a lot of vulnerability writeups like this and don't recall seeing any where the author is European and gets a better outcome. Are security researchers actually compensated for this type of work in the EU?
How about fining individual developers with poor coding practices?
Also, it requires that your company's business model not be built on selling user data. That's why American companies find it hard to comply.
This is a matter for lawmakers and law enforcement. Campaign for it; nothing will change otherwise.
But that's the "industry standard" for checking a reservation online. Requiring airline login doesn't work because of tickets issued by travel agents or other airlines.
> This two-factor system is generally secure. The space of all 6-character alphanumeric confirmation codes combined with all possible last names is astronomically large, making it impossible to “guess” a valid pair.
Depending on the threat model, the attacker's goal might not be to guess a single specific pair but to access any valid pair (of a booking with a flight date in the future, or maybe even in the past). Suddenly you're looking at thousands of valid booking codes, and the attacker can try a couple dozen common last names against each. Brute-forcing valid pairs then becomes relatively easy.
The "issue" is that they're returning the entire PNR dataset to the front-end in the first place. He doesn't detail how they fixed it, but there's no reason in the world that this entire dataset should be dumped into Javascript. I got into pretty heated arguments with folks about this at Travelocity and this shit is exactly why I was so adamant.
The space of all possible PNR record locators is about 2 billion (36^6 ≈ 2.18 billion). I can imagine a really big airline moving that many passengers.
Yes, in other GDSes it can be enough to identify a full booking. That's why airlines prefer the ticket or coupon number, since the first three digits are the airline's ticket stock identifier, and then fare codes, etc.
Requiring the last name plus additional info does provide some security, since any PSS can query the airline for that combination first, before requiring more info to return a match.
Or are PNR locators recycled after a while?
(emphasis my own)
Sorry but I strongly disagree with this phrasing. This is a company "serving over 6 million customers since its 2021 launch" (from Google) that took four weeks to patch an embarrassing security flaw, after being handed all the details on a silver platter.
Imagine a food chain serving a million meals a year was revealed to be storing their food products in unsanitary conditions, and it took them a full month to correct this. That story would make national headlines, not to mention they could get promptly shut down by any competent health ministry.
I think this attitude mostly reveals how complacent we've become about these """incidents""": we just expect this to happen, everywhere and all the time, then we just shrug and say "they fixed it within a month, how responsible of them".
(unfortunately, I feel like AI was overused in authoring the writeup)
I'm not taking a position on whether the article is AI-assisted. I'm wondering if the ease of calling someone's work "AI slop" is a step along the slippery slope towards trivializing this sort of drive-by hostility, which can be toxic in a community.
There's a difference between leveraging AI to proofread or improve parts of one's writing and this. I feel AI was overused here; it gave the whole article that distinctive smell and significantly reduced its information density.
"The fallout"
This flaw was critical.
And other vibes. You know it when you see it, though it may be hard to define.
How do you know your perception is accurate? One of humanity's biggest weaknesses is trusting that kind of response.
Pattern recognition is an ability honed over many millions of years of evolution, best exemplified in the "human" species, by the way, so I basically disagree with your whole premise anyway.
Imagine that - doctors, who have seen everything, have years of study, treat all those people, still require objective evidence. Anyone in IT looks for objective evidence - timing, stepping through code, etc.
Confidence doesn't correlate well with accuracy; in fact, the more someone expresses your kind of confidence, the less I rely on them.
What if you wrongfully accuse someone? Does that matter? Are you responsible for the consequences of what you do?
Of course everyone is responsible for their accuracy and their errors; that doesn't mean it's impossible to infer things from observation, experience, and intuition. It's an evolved ability, though I agree some people are better at it than others, as with most things.
You're conflating a lot of things. Many prejudices are accurate and prudent; witchcraft is stupid, but so what? I'm not going to deny my perception of something that's correct just because some other idiot believes in magic; non sequitur.
?
"A stark reminder" is itself a stark reminder of the existence of AI slop. You see the phrase a lot in social-media comment spam.
Which really makes me wonder how we ended up training an AI…
(b.) they practically demonstrate the point: while, yes, AI uses em-dashes, the entire corpus of em-dashes is still largely human, too, so using that as a sole signal is going to have a pretty high false positive rate.