* call - to send and await a reply
* cast - to send and not await a reply
(though new data is created as a result of running the function, technically this is guaranteed to not affect the inputs due to the function having to be pure)
(perhaps this is excessively pedantic)
If it's the same function running on different data, then you are applying the function to the data. If it's the same data running in a different function, then you are applying the data to the function.
Invoking X sounds deliciously alchymistic, by the way.
> I'm just wired to see puns where they weren't necessarily intended.
I guess it keeps you grounded. Shocking how that works.

So at least in Finnish the word "call" is considered to mean what it means in a context like "a mother called her children back inside from the yard" instead of "call" as in "Joe made a call to his friend" or "what do you call this color?".
Just felt like sharing.
It's also separate from the verb for making a phone call, which would be "anrufen".
> Structure and Interpretation of Computer Programs (SICP) is a computer science textbook by Massachusetts Institute of Technology professors Harold Abelson and Gerald Jay Sussman with Julie Sussman. It is known as the "Wizard Book" in hacker culture.[1] It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation.
https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...
Another one:
> "Be, and it is" (Arabic: كُن فَيَكُونُ; kun fa-yakūn) is a Quranic phrase referring to the creation by God′s command.[1][2] In Arabic, the phrase consists of two words; the first word is kun for the imperative verb "be" and is spelled with the letters kāf and nūn. The second word fa-yakun means "it is [done]".[3]
> (image of verse 2:117) https://commons.wikimedia.org/wiki/File:002117_Al-Baqrah_Urd...
> The phrase at the end of the verse 2:117, Kun fa-yakūn, has its reference in the Quran cited as a symbol or sign of God's supreme creative power. There are eight references to the phrase in the Quran:[1]
https://en.wikipedia.org/wiki/Be,_and_it_is
I wonder if “Be” would be imperative or functional. Is “Be” another name for `Unit()`? Or, would it be more Lisp-like `(be unit)`?
For example, I often bring up images of card catalogs when explaining database indexing. As soon as people see the index card, and then see that there is a wooden case for looking up by Author, a separate case for looking up by Dewey Decimal, etc., the light goes on.
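If it helps, here's a minimal sketch of that analogy in Haskell (all the names and data are made up for illustration): the "shelf" holds the actual records, and each catalogue is a separate index mapping a different key to a shelf position rather than duplicating the record.

    import qualified Data.Map.Strict as Map

    type ShelfPos = Int

    -- The shelf: the actual records, keyed by physical position.
    shelf :: Map.Map ShelfPos String
    shelf = Map.fromList [(1, "SICP"), (2, "The C Programming Language")]

    -- Two separate "card catalogues", each keyed differently but both pointing
    -- at shelf positions instead of holding copies of the books.
    byAuthor, byCallNumber :: Map.Map String ShelfPos
    byAuthor     = Map.fromList [("Abelson", 1), ("Kernighan", 2)]
    byCallNumber = Map.fromList [("005.1 ABE", 1), ("005.133 KER", 2)]

    -- A lookup by author goes through the index first, then to the shelf.
    lookupByAuthor :: String -> Maybe String
    lookupByAuthor author = Map.lookup author byAuthor >>= \pos -> Map.lookup pos shelf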
However, the metaphor isn’t that educationally helpful anymore. On more than one occasion I found myself explaining how card catalogues or even (book) dictionaries work, only to be met with the reply: “oh, so they’re basically analogue hashmaps”.
But nope, it's because a punch card was 80 characters wide. And the first punch cards were basically just index cards. Another hat tip to the librarians.
I guess this is the computing equivalent of a car being the width of two horses' asses...
Both band directors showed up at one of my classes the first day of school, dragged me to an empty room, and browbeat me into returning to band. It was the right choice for my social life, but I did hear great things about that class.
Context and preconceptions are everything!
I just explain that hard disks are a continuous list of 1s and 0s, and then ask what we need to do if people want to find anything. People are able to infer the idea of needing some sort of structure.
Page 31 has:
> … if, as a result of some error on the part of the programmer, the order Z F does not get overwritten, the machine will stop at once. This could happen if the subroutine were not called in correctly.
> It will be noted that a closed subroutine can be called in from any part of the program, without restriction. In particular, one subroutine can call in another subroutine.
See also the program on page 33.
The Internet Archive has the 1957 edition of the book, so I wasn’t sure if this wording had changed since the 1951 edition. I couldn’t find a paper about EDSAC from 1950ish that’s easily available to read, but [here’s a presentation with many pictures of artefacts from EDSAC’s early years](https://chiphack.org/talks/edsac-part-2.pdf). It has a couple of pages from the 1950 “report on the preparation of programmes for the EDSAC and the use of the library of subroutines” which shows a subroutine listing with a comment saying “call in auxiliary sub-routine”.
I agree, it looks like this 1951 source is using "call in" to mean "invoke" — the actual transfer of control — as opposed to "load" or "link in." Which means this 1951 source agrees with Sarbacher (1959), and is causing me right now to second-guess my interpretation of the MANIAC II (1956) and Fortran II (1958) sources — could it be that they were also using "call in" to mean "invoke" rather than "indicate a dependency on"? Did I exoticize the past too much, by assuming the preposition in "call in" must be doing some work, meaning-wise?
Incidentally, I find strange misuses of "call" ("calling a command", "calling a button") to be among the more grating habits of ESL CS students.
(In the way you'd call upon a skill, not in the way you'd call upon a neighbor.)
My favourite (least favourite?) is using “call” with “return”. On more than one occasion I’ve heard:
“When we call the return keyword, the function ends.”
I don't know enough Smalltalk to be sure, but I seem to remember it has a similar everything-is-an-object approach, and I wouldn't be surprised if they'd coerced control flow into this framework somehow.
Also Forth comes to mind, but that would probably be a stretch.
It does. It's been discussed on HN before, even: https://news.ycombinator.com/item?id=13857174
λexpr1.λexpr2.λc.((c expr1) expr2)
In Tcl, `if` is called a "command".
The semantics of `if` require at least `if(cond, clause)`, though more generally `if(cond, clause, else-clause)`.
e.g. in C:
https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3220.pdf
(6.8.5.1) selection-statement:
if ( expression ) secondary-block
if ( expression ) secondary-block else secondary-block
in C++: https://eel.is/c++draft/gram.stmt
selection-statement:
if constexpr[opt] ( init-statement[opt] condition ) statement
if constexpr[opt] ( init-statement[opt] condition ) statement else statement
if ![opt] consteval compound-statement
if ![opt] consteval compound-statement else statement
where condition:
expression
attribute-specifier-seq[opt] decl-specifier-seq declarator brace-or-equal-initializer
structured-binding-declaration initializer
More examples: https://docs.python.org/3/reference/grammar.html
https://doc.rust-lang.org/reference/expressions/if-expr.html...
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
expression != argument
So in the abstract, if is a ternary function. I think the original comment was reflecting on how "if (true) ... " looks like a function call of one argument but that's obviously wrong.
"If" can be implemented as a function in Haskell, but it's not a function. You can't pass it as a higher-order function and it uses the "then" and "else" keywords, too. But you could implement it as a function if you wanted:
if' :: Bool -> a -> a -> a
if' True x _ = x
if' False _ y = y
Then instead of writing something like this:

    max x y = if x > y then x else y

You'd write this:

    max x y = if' (x > y) x y
But the "then" and "else" remove the need for parentheses around the expressions.But I assume the comment you were replying to was not referring to the conditional syntax from C-like languages, instead referring to a concept of an if "function", like the `ifelse` function in Julia [1] or the `if` form in Lisps (which shares the syntax of a function/macro call but is actually a special form) [2], neither of which would make sense as one argument function.
[1] https://docs.julialang.org/en/v1/base/base/#Base.ifelse
[2] https://www.gnu.org/software/emacs/manual/html_node/elisp/Co...
(if true '())
Or, if you wanted to capture the exact same semantics (rather than returning a null value to the continuation of the if):

    (if true (values))
Now it's obvious that if takes two (or three) arguments :)

In an imperative programming language with eager evaluation, i.e. where arguments are evaluated before applying the function, implementing `if` as a function will evaluate both the "then" and "else" alternatives, which will have undesirable behavior if the alternatives can have side effects.
In a pure but still eager functional language this can work better, if it's not possible for the alternatives to have side effects. But it's still inefficient, because you're evaluating expressions whose result will be discarded, which is just wasted computation.
In a lazy functional language, you can have a viable `if` function, because it will only evaluate the argument that's needed. But even in the lazy functional language Haskell, `if` is implemented as built-in syntax, for usability reasons - if the compiler understands what `if` means as opposed to treating it as an ordinary function, it can optimize better, produce better messages, etc.
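To make that concrete, here is a minimal sketch (repeating the `if'` from upthread; `safeDiv` is just a made-up example): because arguments are only evaluated when needed, the untaken branch never runs.

    if' :: Bool -> a -> a -> a
    if' True  x _ = x
    if' False _ y = y

    -- Laziness means the untaken branch is never forced:
    safeDiv :: Int -> Int -> Int
    safeDiv n d = if' (d == 0) 0 (n `div` d)  -- no division by zero, even when d == 0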
In a language with the right kind of macros, you can define `if` as a macro. Typically in that case, its arguments might be wrapped in lambdas, by the macro, to allow them to be evaluated only as needed. But Scheme and Lisp, which have the right kind of macros, don't define `if` as a macro for similar reasons to Haskell.
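A sketch of that lambda-wrapping trick, written here in Haskell with explicit thunks just to show the shape such a macro would expand to in a strict language (the names are made up):

    -- Each branch is delayed behind a lambda; only the chosen one gets forced.
    ifThunk :: Bool -> (() -> a) -> (() -> a) -> a
    ifThunk True  t _ = t ()
    ifThunk False _ e = e ()

    -- A macro would rewrite `if c then t else e` into roughly:
    --   ifThunk c (\() -> t) (\() -> e)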
One language in which `if` is a function is the pure lambda calculus, but no-one writes real code in that.
The only "major" language I can think of in which `if` is actually a function (well, a couple of methods) is Smalltalk, and in that case it works because the arguments to it are code blocks, i.e. essentially lambdas.
tl;dr: `if` as a function isn't practical in most languages.
if' :: Bool -> a -> a -> a
if' True x _ = x
if' False _ y = y
The compiler could substitute this if it knew the first argument was a constant.
Maybe it was needed in early versions. Or maybe they just didn't know they wouldn't need it yet. The early versions of Haskell had pretty terrible I/O, too.
In GHC, `if` desugars to a case statement, and many optimizations flow from that. It's pretty central to the compiler's operation.
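A small illustration of that desugaring (the function here is just a made-up example); the Haskell Report defines `if c then t else e` as equivalent to a two-alternative case expression:

    describe :: Int -> String
    describe n = if n > 0 then "positive" else "non-positive"

    -- ...which the compiler treats as:
    describe' :: Int -> String
    describe' n = case n > 0 of
                    True  -> "positive"
                    False -> "non-positive"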
> Maybe it was needed in early versions. Or maybe they just didn't know they wouldn't need it yet.
Neither of these are true. My comment above was attempting to explain why `if` isn't implemented as a function. Haskell is a prime example of where it could have been done that way, the authors are fully aware of that, but they didn't because the arguments against doing it are strong. (Unless you're implementing a scripting-language type system where you don't care about optimization.)
[1]: https://softwareengineering.stackexchange.com/questions/1957...
The post claims that this is done in such a basic way that if you have managed to rebind `ifThenElse`, your rebound function gets called. I didn't confirm this, but I believed it.
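If the post is describing GHC, that matches the RebindableSyntax extension, under which `if c then t else e` desugars to `ifThenElse c t e` using whatever `ifThenElse` is in scope. A minimal sketch, assuming that's the mechanism in question:

    {-# LANGUAGE RebindableSyntax #-}
    import Prelude

    -- Deliberately swap the branches so the rebinding is visible:
    ifThenElse :: Bool -> a -> a -> a
    ifThenElse True  _ e = e
    ifThenElse False t _ = t

    main :: IO ()
    main = putStrLn (if True then "then-branch" else "else-branch")
    -- prints "else-branch", because the if syntax now goes through our ifThenElse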
ifFalse: alternativeBlock
"Answer the value of alternativeBlock. Execution does not actually
reach here because the expression is compiled in-line."
^alternativeBlock value
But yeah, this is a pretty critical point for optimizations - any realistic language is likely to optimize this sooner or later.
return(value);
In many early computer programming documents the term "order" was used instead of "statement", where "order" was meant as a synonym for "command" and not as referring to the ordering of a sequence.
I believe that the term "statement" has been imposed by the IBM publications about FORTRAN, starting in 1956.
Before the first public documents about IBM FORTRAN, the first internal document about FORTRAN, from 1954, had used the terms "formula" for anything that later would be called "executable statement", i.e. for many things that would not have been called formulas either before or after that, like IF-formulas, DO-formulas, GOTO-formulas and so on, and the document had used "sentence" for what later would be called "non-executable statements" (i.e. definitions or declarations).
Before FORTRAN (1951 to 1953), for his high-level programming language Heinz Rutishauser had used the term "Befehl", which means "command". (For what we name today "program", he had used the term "Rechenplan", which means "computation plan".)
"Plan" is really a much better term than "program". My next compiler will be called "plantran" for "plan translator". "pt" for short.
An interpreter and editor in 1K sounds very challenging! Do you mean 1K of Squeak bytecode? Including vectors of literals and selectors?
The 1KB target was for Baby 8 (the associated processor) binary code. That processor had a few quirks (like only indirect addressing outside the "zero page") and the language design partly reflected that. Seeing other people fit Lisp into less than 512 bytes made me think I had sacrificed too much functionality for size.
The incomplete Squeak code was just a quick test to see if the language made sense at all before wasting time doing the assembly version.
The sieve example uses a single 8Kword array https://github.com/jeceljr/baby8/blob/main/examples/bla/siev...
Best I can tell, all usable definitions surrounding "Command" seem to suggest an associated action, which isn't true of all statements in imperative programming.
The defining characteristic of a programming "statement" is that it can perform some action (even if not all of them do), whereas statements in the usual everyday sense are inert. So it's not a good term.
Given a declaration "statement" such as:
int x;
What is the expected action? Memory allocation... I guess? Does any compiler implementation actually do that? I believe, in the age of type systems being all the rage, one would simply see that as a type constraint, not as a command to act upon. Expressing an idea (the values used henceforth should be integers) seems more applicable, no?
> int x;
> What is the expected action?
In a language that requires you to declare variables before you use them, it clearly does something - you couldn't do "x = 5;" before, and now you can. If you're trying to understand the program operationally (and if you're not then why are you thinking about statements?) it means something like "create a variable called x", even if the implementation of that is a no-op at the machine code level.
> I believe, in the age of type systems being all the rage, one would simply see that as a type constraint, not as a command to act upon.
But in that case by definition one would not be seeing it as a statement! Honestly to me it doesn't really matter whether "int x;" is an expression, a statement, or some mysterious third thing (in the same way that e.g. forward declaring a function isn't a statement or an expression). When we're talking about the distinction between statements and expressions we're talking primarily about statements like "x = x + 1;", which really can't be understood as a statement in the everyday English sense.
> Memory allocation... I guess? Does any compiler implementation actually do that?
Toy/research or embedded compilers do, yes.
Including Ken Thompson's very own[1]. Fair enough that in the olden days there was a clearer distinction made, but even the man who (co-)created C changed his mind on that later because that is how the term has evolved. Now, in normal conversation, if you call `int x;` a statement, nobody is going to struggle to understand what you mean. Are you still living in the 1970s? If so, I admittedly missed that. My assumption was that this discussion is taking place in 2025 with the words of its time.
The distinction hasn't gotten any less clear, but it was already easy to find people in the 01970s who didn't know the difference. You're one of today's lucky ten thousand!
I do frequently encounter people who are failing to solve their problems because they don't know things that were already known in the 01970s, so I think it's worthwhile to study 01970s informatics. Mostly, though, I think it's worth studying the zeitgeist of the 01970s in informatics: some people were making rapid progress on solving fundamental problems, partly because they didn't have certain self-defeating attitudes that are popular in informatics today, such as not caring whether what they say is true or false.
Well, of course not. Hnlang is clearly not the same language as Go. The different name tips you off to that. However, the same intent can be expressed as a Go statement:
var x int
For what it is worth, the silly pedantry you offer is as funny as you had hoped it would be, so kudos. But it is telling that you had to reach for being funny to mask your cluelessness. Better than being one of those idiots that doubles down on their idiocy once they realize that they have no idea what is going on, I suppose!

From my own experience, native speakers (who are beginners at programming) also do this. They also describe all kinds of things as "commands" that aren't.
A fairly recent example, "salty". It's short, and kinda feels like it describes what it means (salty -> tears -> upset).
It sounds like "call" is similar. It's short, so easy to say for an often used technical term, and there are a couple of ways it can "feel right": calling up, calling in, summoning, invoking (as a magic spell). People hear it, it fits, and the term spreads. I doubt there were be many competing terms, because terms like "jump" would have been in use to refer to existing concepts. Also keep in mind that telephones were hot, magical technology that would have become widespread around this same time period. The idea of being able to call up someone would be at the forefront of people's brains, so contemporary programmers would likely have easily formed a mental connection/analogy between calling people and calling subroutines.
(Which maybe illustrates that a metaphor can succeed even when everyone doesn't agree about just what it's referring to, as you're suggesting "call" may have done.)
And I agree that it has nothing to do with tears. The actual etymology stems from sailors: https://www.planoly.com/glossary/salty
Wiktionary defines bitter as "cynical and resentful", which doesn't quite capture the "more longer-lasting, somewhat less emotional condition" part of it.
Oddly, I never thought of the term library as originating from a physical labelled and organized shelf of tapes, until now.
It just means to start doing something, no great mystery.
In Mauchly's "Preparation of Problems for EDVAC-Type Machines", quoted in part in the blog post, he writes:
> The total number of operations for which instructions must be provided will usually be exceedingly large, so that the instruction sequence would be far in excess of the internal memory capacity. However, such an instruction sequence is never a random sequence, and can usually be synthesized from subsequences which frequently recur.
> By providing the necessary subsequences, which may be utilized as often as desired, together with a master sequence directing the use of these subsequences, compact and easily set up instructions for very complex problems can be achieved.
The verbs he uses here for subroutine calls are "utilize" and "direct". Later in the paper he uses the term "subroutine" rather than "subsequence", and does say "called for" but not in reference to the subroutine invocation operation in the machine:
> For these, magnetic tapes containing the series of orders required for the operation can be prepared once and be made available for use when called for in a particular problem. In order that such subroutines, as they can well be called, be truly general, the machine must be endowed with the ability to modify instructions, such as placing specific quantities into general subroutines. Thus is created a new set of operations which might be said to form a calculus of instructions.
Of course nowadays we do not pass arguments to subroutines by modifying their code, but index registers had not yet been invented, so every memory address referenced had to be contained in the instructions that referenced it. (This was considered one of the great benefits of keeping the program in the data memory!)
A little lower down he says "initiate subroutines" and "transferring control to a subroutine", and talks about linking in subroutines from a "library", as quoted in the post.
He never calls subroutines "functions"; I'm not sure where that usage comes from, but certainly by BASIC and LISP there were "functions" that were at least implemented by subroutines. He does talk about mathematical functions being computed by subroutines, including things like matrix multiplication:
> If the subroutine is merely to calculate a function for a single argument, (...)
"instantiating" would be a poor choice of word because in some languages you instantiate a function object when execution passes the point in the code where the function is declared, which is a separate step from calling it later. Examples are lambda expressions in C++ and all functions in Python. In those cases some state is captured at the tone of instantiation time.
I think the early BASICs used the subroutine nomenclature for GOSUB, where there was no parameter passing or anything, just a jump that automatically remembered the place to return to.
Functions in BASIC, as I remember it, were something quite different. I think they were merely named abbreviations for arithmetic expressions, and simple one-line arithmetic expressions only. They were more similar to very primitive and heavily restricted macros than to subroutines or functions.
C -------- START OF FUNCTION -------
INTEGER FUNCTION INCREMENT(I)
INCREMENT=I+1
RETURN
END
C -------- END OF FUNCTION -------
    ARGF(X, Y, Z) = (D/E)*Z + X**F + Y/G

Naturally this is syntactically identical to an array element assignment, which is one of the many things that made compiling FORTRAN so much fun.

In BASIC, this would be

    DEF FNF(X, Y, Z) = (D/E)*Z + X**F + Y/G

and the function name had to start with FN.

In the flang-new compiler, which builds a parse tree for the whole source file before processing any declarations, it was necessary to parse such things as statement functions initially so that further specification statements could follow them. Later, if it turns out that the function name is an array or a regular function returning a pointer, the parse tree gets patched up in place and the statement becomes the first executable statement.
I always thought that the functions did not need a call keyword, as they normally would return a value, so that functions would appear in an assignment. So one just uses the function.
What needed a CALL was a subroutine, which effectively was a named address/label.
Indeed it would be just as possible to GOTO the address/label and then GOTO back. The CALL keyword made the whole transaction more comprehensible.
So in a sense it was similar to calling up someplace using the address number. Often times this would change some shared state so that the caller would then proceed after the call. Think of it as if a 'boss' first calls Sam to calculate the figures, then calls Bill to nicely print the TPS report.
Eventually everything became a function and subroutines were associated with spaghetti...
Now, why is it that it's called a routine (aka program) and a subroutine?
Well, apparently [0], in a 1947 document "Planning and Coding Problems for an Electronic Computing Instrument, Part 1" by H. Goldstine and J. von Neumann it is stated:
"We call the coded sequence of a problem a routine"
[0]: https://retrocomputing.stackexchange.com/q/20335

In modern terminology, we call procedures/functions/subroutines and pass arguments/parameters, so "pass by (value|name|reference)" is clearer than "call by (value|name|reference)". But the old terms "call by value" et al. have survived in some contexts, though the idea of "calling" an argument or parameter has not.
As for "raise", maybe exceptions should've been called objections.
But surely those days are in the past. We should call them objections, because they're always used for errors and never for control flow. [1] After all, that would be an unconditional jump, and we've already settled that those are harmful.
[1] https://docs.python.org/3/library/exceptions.html#StopIterat...
You raise flags or issues, which are good descriptions of an exception.
10 FOR N = 1 TO 10
20 PRINT " ";
30 NEXT N
The C language would first be released in 1972; it had the three-part `for` with assignment, condition, and increment parts.
for p := x step d until n do
So we renamed our git branches from master to main...because of colonialism.
So what's the correct non-colonial word? ask, request, plea?
Some people here seem to like the word "summon". tsk, tsk, tsk
I doubt that "the librarian's 'call for' meaning was indeed the one originally intended" (not that you say it!).
Or maybe there is no difference between what you mean by "call for" and what the first quote meant by "call in". The subroutine is called / retrieved - and control is transferred to that subroutine. (I wouldn't say that when doctors are called to see a patient, for example, they are called in the librarian's meaning.)
HOTSPUR: Why, so can I, or so can any man; But will they come when you do call for them?
-- Henry the Fourth, Part 1
The “call number” in that story comes after the “call”. Not the other way around.
> Dennis Ritchie encouraged modularity by telling all and sundry that function calls were really, really cheap in C. Everybody started writing small functions and modularizing. Years later we found out that function calls were still expensive on the PDP-11, and VAX code was often spending 50% of its time in the CALLS instruction. Dennis had lied to us! But it was too late; we were all hooked...
https://www.catb.org/~esr/writings/taoup/html/modularitychap...
In the Algol 58 report https://www.softwarepreservation.org/projects/ALGOL/report/A... we have "procedures" and "functions" as types of subroutines. About invoking procedures, it says:
> 9. Procedure statements
> A procedure statement serves to initiate (call for) the execution of a procedure, which is a closed and self-contained process with a fixed ordered set of input and output parameters, permanently defined by a procedure declaration. (cf. procedure declaration.)
Note that this does go to extra pains to include the term "call for", but does not use the phraseology "call a procedure". Rather, it calls for not the procedure itself, but the execution of the procedure.
However, it also uses the term "procedure call" to describe either the initiation or the execution of the procedure:
> The procedure declaration defining the called procedure contains, in its heading, a string of symbols identical in form to the procedure statement, and the formal parameters occupying input and output parameter positions there give complete information concerning the admissibility of parameters used in any procedure call, (...)
Algol 58 has a different structure for defining functions rather than procedures, but those too are invoked by a "function call"—but not by "calling the function".
I'm not sure when the first assembly language with a "call" instruction appeared, but it might even be earlier than 01958. The Burroughs 5000 seems like it would be a promising thing to look at. But certainly many assembly languages from the time didn't; even MIX used a STJ instruction to set the return address in the return instruction in the called subroutine and then just jumped to its entry point, and the PDP-10 used PUSHJ IIRC. The 360 used BALR, branch and link register, much like RISC-V's JALR today.