ashton314 13 days ago

I've got Harper's Practical Foundations for Programming Languages and it's a great book—he writes clearly and succinctly.

Knowing the lambda calculus helped me one time when I was working as a software engineer: I had just added functions to a little DSL interpreter that was going to make it easy to customize the behavior of our product for different customers. (It wasn't ever going to be used by customers directly; it was primarily a tool for the internal team.) It was at this point that I realized we needed some kind of execution time-out: since we could encode functions, we could write down e.g. the Y combinator or the omega combinator and get non-terminating programs in this DSL.
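
To make that concrete, here's a minimal sketch of the problem (a toy closure-based evaluator with a step budget, not our actual DSL): once you can write omega, evaluation without a fuel limit simply never returns.

    data Term = Var String | Lam String Term | App Term Term

    data Value = Closure String Term [(String, Value)]

    -- Evaluate with a step budget ("fuel"); Nothing means the budget ran out.
    eval :: Int -> [(String, Value)] -> Term -> Maybe (Int, Value)
    eval 0 _ _ = Nothing
    eval fuel env t = case t of
      Var x   -> (,) fuel <$> lookup x env
      Lam x b -> Just (fuel, Closure x b env)
      App f a -> do
        (fuel1, Closure x b cenv) <- eval (fuel - 1) env f
        (fuel2, va)               <- eval fuel1 env a
        eval fuel2 ((x, va) : cenv) b

    -- omega = (\x -> x x) (\x -> x x); eval 1000 [] omega evaluates to Nothing,
    -- whereas without the fuel parameter this would loop forever.
    omega :: Term
    omega = App w w  where w = Lam "x" (App (Var "x") (Var "x"))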

Now I work as a programming languages researcher, so the lambda calculus has direct application to my day job.

Those curious might be interested in ISWIM, [1,2] which is an extension of the lambda calculus with an arbitrary set of operators. Like the lambda calculus, this is an abstract language. However, you can add numbers and other operators, and ISWIM-like languages are often used to illustrate new ideas in programming languages.

Syntax is incidental. Boil the syntax away and reduce languages to their distilled semantics to expose the essential differences—this is the kind of thing the lambda calculus makes easy.

I highly recommend reading Landin's The Next 700 Programming Languages [2] as it is a great, short, clear read.

[1]: https://en.wikipedia.org/wiki/ISWIM

[2]: https://www.cs.cmu.edu/~crary/819-f09/Landin66.pdf

  • trueismywork 13 days ago

    Can you read Harpers book without knowing lambda calculus?

    • rkrzr 13 days ago

      You can learn the lambda calculus in a few hours.

      If you read just the first page of the linked paper and work through a few examples, you will likely already know enough about it to read the book. It's really just like equational reasoning in mathematics.
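
      For example, here is roughly the kind of equational reasoning involved, written with Haskell lambdas instead of raw lambda notation (a sketch, nothing more):

          -- "Apply f twice", reduced step by step:
          --   (\f -> \x -> f (f x)) (\y -> y + 1) 0
          --     = (\x -> (\y -> y + 1) ((\y -> y + 1) x)) 0   -- substitute f
          --     = (\y -> y + 1) ((\y -> y + 1) 0)             -- substitute x
          --     = (\y -> y + 1) 1
          --     = 2
          twice :: (a -> a) -> a -> a
          twice = \f -> \x -> f (f x)
          -- twice (\y -> y + 1) 0 == 2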

renonce 13 days ago

One invention that drew me to this topic was Binary Lambda Calculus, invented by John Tromp (“Tromp” as in Tromp-Taylor Rules). It’s just a direct binary encoding of a lambda calculus term, but it gives you a concrete and concise representation, which is useful for evaluating the complexity of an expression. I encountered it when viewing https://codegolf.stackexchange.com/questions/6430/shortest-t... and found that this was the language that could represent a very large number in the fewest bits. I later learned that Graham’s number can be encoded in 120 bits (maybe 3~4 fewer), much more concisely than the equivalent mathematical notation, and I’ve since been drawn into the field of googology. It was fascinating.
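
For reference, the encoding itself is tiny, at least as I understand it: abstraction is 00, application is 01, and de Bruijn index i (1 = nearest binder) is i ones followed by a zero. A quick sketch:

    -- A sketch of the BLC bit encoding for de Bruijn terms (my reading of it,
    -- not an official implementation).
    data Term = Var Int | Lam Term | App Term Term

    encode :: Term -> String
    encode (Var i)   = replicate i '1' ++ "0"  -- index i: i ones, then a zero
    encode (Lam b)   = "00" ++ encode b
    encode (App f a) = "01" ++ encode f ++ encode a

    -- encode (Lam (Var 1)) == "0010": the identity function in 4 bits.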

  • tromp 13 days ago

    That Stack Exchange thread shows you can exceed Graham's number with the 49-bit lambda term (λ 1 1) (λ 1 (1 (λ λ 1 2 (λ λ 2 (2 1))))), or graphically

        ┬─┬ ┬─┬──────────
        └─┤ │ │ ──┬──────
          │ │ │ ┬─┼──────
          │ │ │ └─┤ ┬─┬──
          │ │ │   │ ┼─┼─┬
          │ │ │   │ │ ├─┘
          │ │ │   │ ├─┘
          │ │ │   ├─┘
          │ │ ├───┘
          │ ├─┘      
          └─┘        
    
    Related: https://oeis.org/A333479
    • renonce 13 days ago

      Yeah but it’s not representing a known number - I would rather say it is a proof that the busy beaver function for BLC at 49 bits is higher than Graham’s number. The fact that it represents known numbers concisely is more interesting to me.

an-allen 13 days ago

When I took Fundamentals of Programming Languages 20 years ago, I nearly failed the class. Lambda calculus was simply too esoteric for me to appreciate, much less understand intuitively.

Fast forward 20 years, and I see the fundamentals of Alonzo Church’s system in every computation problem I encounter. It’s one of those concepts that age like wine. The only other concepts I put on the same level are Shannon’s “information entropy” and maybe Wolfram’s Ruliad.

  • _delirium 13 days ago

    It's kind of interesting that the history is in the order it is. I could completely imagine it being reversed: first, 1000 programming languages are invented, then later, in an attempt to put order to this madness, and understand whether some of them are in a fundamental sense equivalent to others or not, you invent minimalist languages like Turing machines or the lambda calculus, and start developing a theory of reductions.

    Kind of odd that Turing machines and the lambda calculus predate almost all the others! I mean there are good reasons for it, especially if you try to put yourself in a 1930s mathematics mindset (which is why it actually happened that way), but it is, I'd submit, a bit surprising to learn from a 2020s perspective if you didn't already know it.

dexwiz 13 days ago

There has to be some programmer rite of passage involving learning lambda calculus. I went through this a few years ago when something similar was posted. I learned the notation, marveled at its dual simplicity and completeness, and did a few exercises. But I came out the other side none the wiser. I think I was looking for some great epiphany on the nature of computation. Alas, it eluded me in the end. It was fun but ultimately useless for me.

  • tombert 13 days ago

    I love theory, and I also really like lambda calculus, but I feel like this sentiment applies to most theory, particularly stuff after undergrad.

    I spent a not-insignificant amount of time learning how to do proofs with Isabelle. I learned a lot about inductive proofs, set theory, meta-logic, and challenged myself to prove a lot of the stuff I had previously taken for granted (e.g. proving that different sorts refine each other). I enjoyed it, and similarly was convinced that this was going to be some life-changing thing that changes my career trajectory and...

    Nothing changed. No one in charge of companies gives a shit about theory. They all claim that they love theory, they claim that they are very research focused, they claim that they value all the time you spent learning this stuff, but in reality they really just want you to change the color of buttons, or change the format of dates, or add a field to a JSON. It sometimes feels like no software engineer but me actually wants to learn any math; most will refuse to touch anything even resembling it.

    And I'm not picking on Isabelle here; I've had similar results trying to pitch TLA+ and Coq and Agda for some of the more error-prone parts of the codebase, with different sales-pitches, and without fail the managers will always say that they "will look into it", and promptly do absolutely nothing. The first two times a manager said that, I believed them, but after that I realized that they're just trying to shut me up and tell me "no" politely.

    It was enough to depress me, and it still kind of does.

    I still think learning stuff for fun is worth it, but I'd be lying if I told you I knew why.

    • eddd-ddde 13 days ago

      I did a lot of math before I began learning programming. This was my experience:

      Recursion? Oh you mean proof by induction?

      Cryptography? Oh you mean number theory?

      Neural networks? Oh you mean calculus?

      Almost everything I learn while programming can be associated with some theory I learnt before in math.

      • xanderlewis 12 days ago

        By the way: logicians sometimes pedantically draw a distinction between ‘recursion’ and ‘induction’: induction is for proving things; recursion is for defining things.

        And it’s not just terminological: the difference becomes very important when you realise one needs to prove that recursion is possible (that is, function definitions by recursion on a well-ordered set are well-defined) by induction. So one actually comes logically before the other.

        I guess to truly determine whether they’re distinct concepts you’d have to exhibit some model of set theory that supports recursion but not induction, or something. But that’s beyond me.
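
        A concrete (if toy) illustration of the split, in Haskell terms: the function below is defined by recursion on the list structure, while a fact about it is proved by induction on that same structure.

            len :: [a] -> Int
            len []       = 0
            len (_ : xs) = 1 + len xs

            -- Claim: len (xs ++ ys) == len xs + len ys.  Proof by induction on xs:
            --   Base:  len ([] ++ ys) = len ys = 0 + len ys = len [] + len ys
            --   Step:  len ((x:xs) ++ ys) = 1 + len (xs ++ ys)
            --                             = 1 + (len xs + len ys)   -- induction hypothesis
            --                             = len (x:xs) + len ys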

      • tombert 13 days ago

        That's kind of what annoys me though; it's almost like the word "math" is toxic to people. When I try explaining the correspondence between something they're doing in code and the mathematics behind it, and suggest that learning a bit of that math might be useful, people kind of just roll their eyes, acting like I asked them to go get three PhDs, or like it's just one of those weird "Tombert things".

        I don't know anything about neural networks yet (though I really need to get on that), but I have noticed the other two examples you mentioned as well; recursion is more or less applied inductive proofs, and a lot of crypto boils down to number theory.

        My dream is to some day convince a manager to give me budget to spend a few weeks designing a new system and proving correctness with TLA+. I'm not saying it's terribly likely, but a man can dream.

        • keybored 13 days ago

          An unwelcome comparison: For certain people you can’t just teach Buddhist meditation using Buddhist terminology and context because that’s “woo”. However if you dress it up in vaguely “neuroscience” terms then it’s fine. And preferably you have some scientific studies at hand to prove that meditation has an effect. (Although no one demands biomechanic studies from their intro tennis teacher.)

          Compare with all the moaning and complaining that comes from some people when you talk about functional programming and how it might relate to computer science.[1] Then you might have a better chance inventing ad hoc words for basics like “map” (preferably very prosey) and claiming that Martin Fowler invented it.

          [1] This goes triply for anything having to do with proofs, at least proof software associated with FP.

          • richk449 13 days ago

            Interesting point. One difference:

            It is easy to tell if a tennis instructor is giving “good” instruction, where good means instructions that help you be a better tennis player. You just look at their students. If you want to know if they give good fundamentals for long term growth, you look at their much older students.

            How do you know if a meditation teacher is giving good instructions? Seems much harder to me.

            It is interesting that many of the traditional religions have hard-to-quantify benefits. Buddhism offers personal, inward help, and the Abrahamic religions often benefit communities.

      • DrFalkyn 12 days ago

        Neural networks seem more linear algebra than calculus

        • eddd-ddde 10 days ago

          Yeah it's also lots of linear algebra. Which also reminds me, graphics programming? Linear algebra everywhere.

      • yobbo 13 days ago

        Those things are not equivalent.

        But schools typically teach these things as a "theory part" and a "practice/application part" (for example by syncing/sequencing courses where possible), which helps with both relevance and "getting it".

    • jvanderbot 13 days ago

      I use the deep theory experience a ton when designing software or systems, but I hardly ever do an actual proof. Being able to reason about a problem and intuit the extent of what's technologically possible to solve is insanely useful. But in most places that I work everyone around me can do that too, so we're constantly battling program managers and C-suite guys/gals who just don't see why everything is so slow, difficult, un-scalable, unpredictable, etc. They want a stupid thing that works half the time, and they'll just turn the camera off the other half of the time.

      • catgary 13 days ago

        Yeah, I’m currently an ML scientist but I regularly run into shit where I have no clue how you’d solve the problem without having a compilers/functional programming background.

    • hnoyrnndd 13 days ago

      A friend of mine worked at Rockwell Collins and wrote proofs, and theorem provers, all day.

      If you work on webshit like me you won’t get to use these skills much.

      (On the other hand, recursing over the structure of JSON-like data often feels like 80% of the job, so I think the skills come into play at least a tiny bit).

      • tombert 13 days ago

        I work on web-adjacent-shit, mostly data processing stuff in Kafka that eventually ends up on the web.

        I don't get to recurse over JSON. Occasionally I get to design a system from scratch, and that's more fun, but it's usually not more complicated than "draw boxes that point to other boxes and/or cylinders on screen". Sometimes I draw a picture of a cloud.

        I like my job just fine, it's a decent job, and I like my managers and coworkers as well, but it's just disappointing that enthusiasm for math and theory is what got me to this stage of my career, yet I never really got to use it, and I don't see that ever really changing for me.

        Maybe I will write a paper at some point at least.

        • andoando 13 days ago

          Yeah, I am at the same place. Considering going for a PhD because at this point I can't muster any kind of interest in corporate tech.

          • tombert 13 days ago

            From experience, that has its own share of frustrations as well. You get to work in theory and that’s fun, but if you decide to do a PhD while also working a desk job, I recommend you do not tell your employers.

            I made the mistake of telling an employer, and whenever I made any mistake in my work, no matter how small, the employer would immediately say that the PhD work was distracting me and that my focus wasn't on BigCo.

            My PhD is on an indefinite hiatus right now, because I am questioning its utility.

            • andoando 13 days ago

              That sucks. I can't imagine trying to work a job and do a PhD at the same time. That would take all the joy out of it for me.

              I can definitely see a lot of utility depending on what you do research in. I bet a lot of ML doctorates are making big bucks, but that's relatively niche.

              • tombert 13 days ago

                The PhD I was doing was in formal methods, specifically in regards to timing semantics. I think it’s super neat, but sadly I am in a minority.

                ML is cool but I was never able to get super into it. I use ChatGPT but I never had a ton of desire to get into the guts of it. I was always more interested in discrete math and formal logic.

                • andoando 12 days ago

                  That sounds interesting, can you tell me a bit more? What does formal methods in regards to timing semantics mean?

                  I'm in the same boat. I like abstract logic/math, and ML never caught much of my interest.

                  I want to start applying, but I'm not even sure how to find departments that would fit what I'm looking for. Maybe I need to take this more seriously and start reading papers and self-studying.

    • andoando 13 days ago

      But now you can write your own JSON parser

  • DonaldPShimoda 13 days ago

    The "utility" of the lambda calculus, in my opinion, comes down to Church's original choice of terminology. He did not call the things denoted by lambda "functions"; rather, they were abstractions. What makes the lambda calculus so cool is that it lets us see that abstraction is the root of all computation, at least through one perspective (as opposed to the Turing point of view). Learning the lambda calculus directly will allow you to appreciate functions ("abstractions") and use them more effectively; you can reason about their computation and composition.

    • andoando 13 days ago

      I don't quite think of functions and abstractions as the same. Abstractions seem to have more to do with a self-referential property. Though I suppose that can be seen as the identity function f -> f.

      IIRC Church was heavily influenced by Turing and vice versa.

      • dexwiz 13 days ago

        Turning was Church’s student.

        • tromp 13 days ago

          It's amazing how often Turing gets misspelled as Turning. Is that because of overzealous spell checkers?

          • dexwiz 13 days ago

            Probably, I posted that from my phone.

    • 082349872349872 13 days ago

      Might the terminology have been due to early awareness of the cardinality problem associated with the abstractions (D->D) being a subset of the values (D)?

  • mjh2539 13 days ago

    I used it in a graduate course on formal semantics (https://www.wiley.com/en-us/Semantics+in+Generative+Grammar-...). After I graduated I wanted to learn how to program, so I googled "lambda calculus programming language" and found Haskell. That was ten years ago. Though I rarely use Haskell anymore, the lambda calculus still holds a special place.

  • NotCamelCase 13 days ago

    Funny that you bring this up -- I feel exactly the same way, and I even had more math background than the average SW person.

    Being on the engineering side of things, without much knowledge of or focus on programming language theory and the mathematical nature of computation, one feels as if all the niceties we have in popular PLs are laws of nature that we just had to discover, whereas in reality they too had to be devised, tried, and improved first. But then you study something like this, expecting to finally be able to connect the dots, and yet it's still not immediately obvious how we arrived here!

  • agumonkey 13 days ago

    The Y combinator (npi) was also a very mind-opening exercise. Even recreating pairs was quite fun.
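
    For anyone who hasn't tried it, the pairs part is roughly this (a sketch in Haskell; the Y combinator itself needs a newtype to get past the type checker, so it's left out here):

        -- A pair is just a function waiting for a selector.
        pair :: a -> b -> ((a -> b -> c) -> c)
        pair x y = \select -> select x y

        first :: ((a -> b -> a) -> a) -> a
        first p = p (\x _ -> x)

        second :: ((a -> b -> b) -> b) -> b
        second p = p (\_ y -> y)

        -- first (pair 1 2) == 1, second (pair 1 2) == 2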