# def two_sum(arr: list[int], target: int) -> list[int]:
펀크 투섬(아래이: 목록[정수], 타개트: 정수) -> 목록[정수]:
    # n = len(arr)
    ㄴ = 길이(아래이)
    # start, end = 0, n - 1
    시작, 끝 = 0, ㄴ - 1
    # while start < end:
    동안 시작 < 끝:
        # if arr[start] + arr[end] == target: return [start, end]
        만약 아래이[시작] + 아래이[끝] == 타개트: 반환 [시작, 끝]
        # if arr[start] + arr[end] < target: start += 1
        만약 아래이[시작] + 아래이[끝] < 타개트: 시작 += 1
        # else: end -= 1
        아니면: 끝 -= 1
Code would be more compact, allowing things like more descriptive identifiers, e.g. AbstractVerifiedIdentityAccountFactory vs 실명인증계정생성, but we'd lose out on the nice upper/lowercase distinction. I hear that information processing speed is nearly the same across all languages regardless of density, though, so in terms of processing speed it may not make much difference.
It never really took off. I think because computers already require users to read and type Latin letters in lots of other situations, and it's not that hard to learn what a few keywords mean, so you might as well stick with the English keywords everyone else is using.
One distinction though: Han uses actual Korean words, not transliterations. 함수 means "function" in Korean, 만약 means "if" — they're real words Korean speakers already know.
Your example uses transliterations like 펀크 and 아래이, which would look odd to a Korean reader. That difference matters for readability.
This seems like a reasonably good security measure too
https://github.com/farant/rhubarb/blob/main/include/latina.h
edit: oh, maybe you can’t do full unicode. that’s too bad!
Error messages, the REPL, and LSP hover docs are all in Korean. You can't get that from `#define 만약 if`.
anecdotally it's also interesting to use with AI, because it's apparently "harder to be on autopilot" off the huge pre-existing corpus of code when you write in a different language. could activate different reasoning regions somehow.
(i just appreciate what can be trivially accomplished in c even if it's kind of janky after spending way too much time in the JS preprocessor mines...)
I actually tested this with GPT-4o's tokenizer, and the result was the opposite: Korean keywords average 2-3 tokens vs 1 for English. A Fibonacci program in Han takes 88 tokens vs 54 in Python.
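If you want to reproduce this kind of measurement, something like the following sketch with the tiktoken library should work (o200k_base is the encoding GPT-4o uses; 반환 here is just my stand-in for Han's "return" keyword, so check the actual keyword list):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("o200k_base")  # GPT-4o's encoding

    # English keywords vs. Korean counterparts (Korean spellings illustrative)
    for word in ["function", "return", "if", "함수", "반환", "만약"]:
        ids = enc.encode(word)
        print(f"{word!r}: {len(ids)} token(s) {ids}")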
The reason comes down to how LLM tokenizers work. They use BPE (Byte Pair Encoding), which starts with raw bytes and repeatedly merges the most frequent pairs into single tokens. Since training data is predominantly English, words like `function` and `return` appear billions of times and get merged into single tokens.
Korean text appears far less frequently, so the tokenizer doesn't learn to merge Hangul syllables — it falls back to splitting each character into 2-3 byte-level tokens instead.
It's a tokenizer training bias, not a property of Hangul itself. If a tokenizer were trained on a Korean-heavy corpus, `함수` could absolutely become a single token too.
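To make the mechanism concrete, here's a toy byte-level BPE trainer, a sketch of the idea rather than how production tokenizers are built (the corpus and tie-breaking are contrived). Trained on a tiny Korean-heavy corpus, it learns to merge the six UTF-8 bytes of 함수 into a single token, while English text it never saw falls back to raw bytes:

    from collections import Counter

    def merge(seq, pair, new_id):
        """Replace every occurrence of `pair` in `seq` with `new_id`."""
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(new_id)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        return out

    def train_bpe(corpus, num_merges):
        """Toy byte-level BPE: start from raw bytes, repeatedly merge the
        most frequent adjacent pair into a fresh token id."""
        seq = list(corpus)                     # ids 0-255 are the raw bytes
        merges = {}                            # (left, right) -> new token id
        for new_id in range(256, 256 + num_merges):
            pairs = Counter(zip(seq, seq[1:]))
            if not pairs:
                break
            best = max(pairs, key=pairs.get)   # ties: first pair seen wins
            merges[best] = new_id
            seq = merge(seq, best, new_id)
        return merges

    def encode(text, merges):
        """Apply the learned merges in training order (dict preserves it)."""
        seq = list(text.encode("utf-8"))
        for pair, new_id in merges.items():
            seq = merge(seq, pair, new_id)
        return seq

    # Tiny synthetic "Korean-heavy" corpus: 만약 is "if" per the thread,
    # 함수 is "function".
    corpus = ("함수 만약 " * 200).encode("utf-8")
    merges = train_bpe(corpus, num_merges=30)

    print(encode("함수", merges))      # -> one learned token id, e.g. [260]
    print(encode("function", merges))  # -> 8 raw bytes: no merges cover ASCII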
So no efficiency benefit today. But it was a fun exploration, and Korean speakers can read the code like natural language. It could also be a fun way for people learning Korean to practice reading Hangul in a different context — every keyword is a real Korean word with meaning.
For Korean readers the character systems look quite different, but I can see how it's hard to parse visually without familiarity.
As you said, syntax highlighting helps a lot — there's a colored screenshot at the top of the README showing how it looks in practice.
I had a similar idea: train an LLM on Serbian and even create a new encoding, https://github.com/topce/YUTF-8, inspired by YUSCII. I didn't have the time and money ;-) Great that you succeeded. The idea is that a model trained on Serbian text encoded in YUTF-8 (not UTF-8) would need fewer tokens for Serbian prompts than for English ones, since Serbian Cyrillic characters are 1 byte in YUTF-8 instead of 2 in UTF-8. Serbian is also phonetic (we never ask how a word is spelled) and has both Latin and Cyrillic alphabets.
Even without retraining BPE from scratch, starting with YUTF-8 and measuring how existing tokenizers handle it would already be a worthwhile experiment.
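A quick way to get baseline numbers, as a sketch: "Здраво свете" is just a sample phrase, and the character count stands in for any one-byte-per-character encoding, since YUTF-8's exact layout isn't specified here:

    import tiktoken

    text = "Здраво свете"  # "Hello, world" in Serbian Cyrillic
    enc = tiktoken.get_encoding("o200k_base")

    print(len(text))                  # 12 chars -> 12 bytes in a single-byte encoding
    print(len(text.encode("utf-8")))  # 23 bytes: each Cyrillic letter is 2 bytes in UTF-8
    print(len(enc.encode(text)))      # how many tokens today's BPE spends on the same text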
Hope you find the time to build it, good luck!
The main benefit of Korean actually comes from the fact that the language fits neatly onto a keyboard's standard alphabet keys, laid out in a way that lets you type ridiculously fast. The consonant letters all sit in the left half of the keyboard and the vowels in the right half. That makes it extremely easy to train muscle memory, because you're mostly alternating keystrokes between your left and right hands.
Anecdotally, I feel like each half of my brain has to coordinate more when I'm typing in English than in Korean: my right brain only needs to remember the consonant positions for my left hand, and my left brain only needs to remember the vowel positions.
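Here's a rough sketch of that alternation using the standard Unicode decomposition of precomposed Hangul syllables. It treats every consonant as a left-hand key and every vowel as a right-hand key, ignoring Shift-ed double jamo and the couple of keys (like ㅠ on B) that don't fit the split cleanly:

    # Hangul compatibility jamo, in the order Unicode uses for composition
    LEADS  = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"               # consonants: left hand
    VOWELS = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"          # vowels: right hand
    TAILS  = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

    def hand_sequence(word: str) -> list[tuple[str, str]]:
        """Decompose each precomposed syllable (U+AC00..U+D7A3) into jamo
        and tag each with the hand that types it under Dubeolsik."""
        keys = []
        for ch in word:
            code = ord(ch) - 0xAC00
            lead, rest = divmod(code, 21 * 28)
            vowel, tail = divmod(rest, 28)
            keys.append((LEADS[lead], "L"))      # initial consonant
            keys.append((VOWELS[vowel], "R"))    # vowel
            if tail:
                keys.append((TAILS[tail], "L"))  # optional final consonant
        return keys

    print(hand_sequence("함수"))
    # [('ㅎ', 'L'), ('ㅏ', 'R'), ('ㅁ', 'L'), ('ㅅ', 'L'), ('ㅜ', 'R')]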
What you said is mostly right, and I didn't know that about typing in Korean, the left-hand/right-hand split. Btw, the consonants are called jaeum and the vowels moeum.
In my experience, your description is accurate.