Sure, third-party services like the OP can provide bots that can scan for this sort of thing. But if you create an ecosystem in which PRs can be submitted by threat actors, part of your commitment to the community should be to provide visibility into attacks that cannot be seen by the naked eye, and to make that protection the norm rather than the exception.
[0] https://docs.github.com/en/get-started/learning-about-github...
It makes the product better
I know people love to talk money and costs and "value", but HN is a space for developers, not the business people. Our primary concern, as developers, is to make the product better. The business people need us to make the product better, keep the company growing, and beat out the competition. We need them to keep us from fixating on things that are useful but low priority, and to ensure we keep having money. The contention between us is good; it keeps balance. It even ensures things keep getting better when an effective monopoly forms, since they still need us, the developers, to make the company continue growing (look at the monopolies people aren't angry at and how they're different). And they need us more than we need them.

So I'd argue it's the responsibility of the developers hired by GitHub to create this feature, because it makes the product better. That's the thing you've been hired for: to make the product better. Your concern isn't the money; your concern is the product.
And when the incremental cost to build a feature is low in an age of agentic AI, there should be no barrier to a member of the technical staff (and hopefully they're not divided into devs/test/PM like in decades past) putting a prototype together for this.
But I also think we've had a culture shift that's hurting our field, where engineers argue about whether we should implement certain features based on their monetary value (which is all fictional anyway). But that's not our job. At best, it's the job of the engineering manager to convince the business people that a feature has not only utility value but monetary value too.
The article is about JavaScript, although it can apply to other programming languages as well. However, even in JavaScript, you can use \u escapes in place of the non-ASCII characters. (One of my ideas for a programming language designed to be better than C is that it forces visible ASCII (and a few control characters, with some restrictions on their use), unless you specify by a directive or switch that you want to allow non-ASCII bytes.)
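For example (a minimal sketch; the variable names are made up):

    // Embedding the non-ASCII character directly in the source:
    const cafe1 = "café";
    // The same string in pure-ASCII source, using a \u escape:
    const cafe2 = "caf\u00E9";
    console.log(cafe1 === cafe2); // true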
The mere fact that a software maintainer would merge code without knowing what it does says more about the terrible state of software than about this particular attack.
Yes, it's a red flag. Yes, there are legitimate uses. Yes, you should always interrogate evals more closely. All of these are true.
Using eval for JSON also leads to other security issues, like XSSI.
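XSSI aside, the underlying hazard is that eval executes input rather than parsing it; a minimal sketch (the input string is hypothetical):

    // eval will happily execute anything, code included:
    const input = 'console.log("I am code, not data")';
    eval(input); // runs the attacker's code

    // JSON.parse only accepts plain JSON data:
    try {
      JSON.parse(input);
    } catch (e) {
      console.log(e.name); // "SyntaxError"
    }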
I would say they are arguing in bad faith, so I wanted to enter a dialogue where they are either forced to agree or, more likely, not respond at all.
Sure, the payload is invisible (although, tbh, I'm surprised it is; PUA characters usually show up as boxes with hex codes for me), but the part where you put an "empty" string through eval isn't.
If you are not reviewing your code closely enough to notice something as nonsensical as eval() on an empty string, would you really notice the non-obfuscated payload either?
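For reference, the shape of the trick is roughly this (a hypothetical sketch using PUA characters, not the article's exact payload; as noted, real PUA characters may render as boxes in some editors):

    // Encode each payload byte as a Private Use Area character,
    // then decode and eval it at runtime.
    const hide = (code) =>
      [...code].map((c) => String.fromCodePoint(0xE000 + c.charCodeAt(0))).join("");
    const reveal = (s) =>
      [...s].map((c) => String.fromCharCode(c.codePointAt(0) - 0xE000)).join("");

    const hidden = hide('console.log("pwned")'); // looks empty (or like boxes) in many editors
    eval(reveal(hidden));                        // but executes the payload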
Innocuous PR (but do note the line about "pedronauck pushed a commit that referenced this pull request last week"): https://github.com/pedronauck/reworm/pull/28
Original commit: https://github.com/pedronauck/reworm/commit/df8c18
Amended commit: https://github.com/pedronauck/reworm/commit/d50cd8
Either way, pretty clear sign that the owner's creds (and possibly an entire machine) are compromised.
But really, it still has to be injected after the fact. Even the most superficial code review should catch it.
For data or code hiding, the Acme::Bleach Perl module is an old example, though by no means the oldest. This is largely irrelevant, though, given how little most people care to learn from history.
Invisible characters may also cause hard-to-debug issues, such as lpr(1) not working for a user who turned out to have a control character hiding in their .cshrc. Such things as hex viewers and OCD levels of attention to detail are suggested.
Things that vanish on a printout should not be in Unicode.
Remove them from Unicode.
Unicode needs tab, space, form feed, and carriage return.
Unicode needs U+200E LEFT-TO-RIGHT MARK and U+200F RIGHT-TO-LEFT MARK to switch between left-to-right and right-to-left languages.
Unicode needs U+115F HANGUL CHOSEONG FILLER and U+1160 HANGUL JUNGSEONG FILLER to typeset Korean.
Unicode needs U+200C ZERO WIDTH NON-JOINER to encode that two characters should not be connected by a ligature.
Unicode needs U+200B ZERO WIDTH SPACE to indicate a word break opportunity without actually inserting a visible space.
Unicode needs MONGOLIAN FREE VARIATION SELECTORs to encode the traditional Mongolian alphabet.
Those are legacied in with ASCII. And only space and newline are needed. Before I check code into git, I run a program that removes the tabs and carriage returns.
> Unicode needs U+200E LEFT-TO-RIGHT MARK and U+200F RIGHT-TO-LEFT MARK to switch between left-to-right and right-to-left languages.
!!tfel ot thgir ,am ,kooL
> Unicode needs U+115F HANGUL CHOSEONG FILLER and U+1160 HANGUL JUNGSEONG FILLER to typeset Korean.
I don't believe it.
> Unicode needs U+200C ZERO WIDTH NON-JOINER to encode that two characters should not be connected by a ligature.
Not needed.
> Unicode needs U+200B ZERO WIDTH SPACE to indicate a word break opportunity without actually inserting a visible space.
How on earth did people read printed matter without that?
> Unicode needs MONGOLIAN FREE VARIATION SELECTORs to encode the traditional Mongolian alphabet.
Somehow people didn't need invisible characters when printing books.
There are also languages that are written from top to bottom.
Unicode is not exclusively for coding; on the contrary, I'm pretty sure coding is only a small fraction of how Unicode is used.
> Somehow people didn't need invisible characters when printing books.
They didn't need computers either so "was seemingly not needed in the past" is not a good argument.
I should be able to use Ü as a cursed smiley in text, and many of the writing systems supported by Unicode allow even funnier things. That's a good thing.
On the other hand, if technical and display file names (for GUI users) were separate, my need for crazy characters in file names, code bases, and such would be very limited. Lower ASCII for the actual file names consumed by technical people is sufficient for me.
Rule of thumb: two Unicode sequences that look identical when printed should consist of the same code points.
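Today's Unicode already breaks that rule: a precomposed character and its combining-mark equivalent print identically but compare unequal. A quick JavaScript illustration:

    const a = "\u00E9";  // "é" as a single precomposed code point
    const b = "e\u0301"; // "e" followed by a combining acute accent
    console.log(a === b);                  // false, despite identical rendering
    console.log(a === b.normalize("NFC")); // true: NFC composes "e"+accent into "é"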
And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?
Yes. Unicode should not be about semantic meaning, it should be about the visual. Like text in a book.
> And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?
Yup. Consider a printed book. How can you tell if a letter is a Greek letter or a Latin letter?
Those Unicode homoglyphs are a solution looking for a problem.
Do you think 1, l and I should be encoded as the same character, or does this logic only extend to characters pesky foreigners use?
I can absolutely tell the Cyrillic к from the Latin k, and the Latin u from the Cyrillic и.
> should not be about semantic meaning,
It's always better to be able to preserve more information in a text, not less.
While we're at it, we could also unify I, | and l. It's too confusing sometimes.
Also, this attack doesn't seem to use invisible characters, just characters that don't have an assigned meaning.
Do you honestly think this is a workable solution?
Then, any appearance of unprintable characters should also be flagged. There are rather few legitimate uses of some zero-width characters, like ZWJ in emoji composition. Ideally, all such characters should be inserted as \uNNNN escape sequences, not as literal characters.
Simple lint rules would suffice for that, with zero AI involvement.
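Something like this minimal Node sketch would do (the character list is abridged, not authoritative):

    const fs = require("fs");
    const source = fs.readFileSync(process.argv[2], "utf8");
    // Abridged list: zero-width chars, bidi marks, BOM, and the BMP Private Use Area.
    const suspicious = /[\u200B-\u200F\u2060\uFEFF\uE000-\uF8FF]/;
    source.split("\n").forEach((line, i) => {
      if (suspicious.test(line)) {
        console.warn(`line ${i + 1}: invisible or private-use character`);
      }
    });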
Emojis are another abomination that should be removed from Unicode. If you want pictures, use a gif.
I have considered allowing a short list that does not include emojis, joining characters, and so on - basically just currency symbols, accent marks, and everything else you'd find in CP-1252 - but I never got around to it.
grep -P '[\x{200B}\x{200C}\x{200D}\x{FEFF}]' code.ts
See https://stackoverflow.com/q/78129129/223424

And please, everyone arguing that the code snippet should never have passed review: do you honestly believe this is the only kind of attack that can exploit invisible characters?
Is there ever a circumstance where the invisible characters are both legitimate and you as a software developer wouldn't want to see them in the source code?
I'm wondering about LLMs here: are people using them to make new kinds of malicious code, more sophisticated than before?
My clawbot & other AI agents already have this figured out.
/s