ilovecaching 6 years ago

God, Lisp is beautiful. You look at some of the older languages of yore and think, my god, how did people survive? Then you see a Lisp program in all its homoiconic glory, and it looks just as good in 2018 as it did in the 1960s.

A true testament to the expressiveness and generality of expression based programming and the lambda calculus.

  • gnulinux 6 years ago

    This 1960 program is better looking than pretty much any 2018 program I've seen. There are definitely some exceptions, but it's hard to get Lisp wrong, whereas it's all too easy to get most 2018 languages wrong.

    • jorams 6 years ago

      > it's hard to get lisp wrong

      That's not actually true, Lisp offers endless possibilities to work yourself into a hole. The average 1960s program was just written by someone a bit more knowledgeable and caring, who took a lot more time to get things right, than the vast majority of modern programming work.

      • sevensor 6 years ago

        And the average 1960s program was written on a blackboard first. Computer time was expensive! I think being forced to confront the cost of the program ahead of time sharpened programmers' wits.

        • pfdietz 6 years ago

          My first Lisp program was entered via punch cards. This was the early 1970s.

znpy 6 years ago

I know I am going to write an unpopular opinion and get downvoted for it, but here it goes...

I am still not convinced by the general "we don't need to upgrade the Common Lisp specification" attitude.

It surely is remarkable as a specification, encompassing so many aspects of the language, yet it leaves so many other aspects to the particular implementation. Multi-threading, for example, is supported by most major implementations, yet their syntax (and often their semantics) differ greatly from each other. The same goes for cooperative multitasking (coroutines etc.). Someone else could come up with other examples.

In my opinion a new committee should be formed, to standardize the syntax and semantics of these so common functionalities.

In this sense, the various C++ language working groups are doing a remarkable job. I wish something similar would happen in the Common Lisp world.

  • eschaton 6 years ago

    My main beef with treating the Common Lisp spec as a sacred text is simple:

    1. There were things at the time that were not quite standardizable that are now. Examples: Some degree of concurrency, system definition, the CLOS meta-object protocol, networking.

    2. There are things that were created or became important in the meantime that should be standardizable now: Unicode and localization, concurrency building blocks like threads and locks and queues and atomic operations, cryptography, compression, higher level networking (URLs, HTTP, etc.).

    3. There are just bugs and holes and incompleteness in the standard due mostly to compromise from the then current vendors. For example, not everything fits the sequence protocols, the language isn’t CLOS all the way down, streams, etc.

    All this argues that there’s room for continued progress. The only good argument I’ve heard for why it hasn’t been pursued is “Who’d fund it?”

    • mark_l_watson 6 years ago

      I don’t particularly mind that Clozure CL, SBCL, Franz, Armed Bear CL, LispWorks, etc. all have different solutions for some things not in the spec.

      When I retire from full-time work next year, I plan to use just one language, probably either Common Lisp or Racket, for research and recreational programming. The fact that the CL standard is crufty doesn't really put me off. Racket does handle many of the issues you mention nicely, but there is something nice about using a programming language that I have been using for 36 years.

      That said, if DARPA and/or a large consortium of corporations decided to drop $$$ to re-standardize and improve the language then that would be very cool. But, I don’t need that to happen.

    • blue1 6 years ago

      I think it's evident now that a purely community-based effort to advance CL cannot realistically happen; even the CDR[1] is unmaintained, and over the last few years many knowledgeable people have progressively left the community, CL, or both.

      I am wondering how the "other" languages obtain funding for such things. For example, Julia raised $4M one year ago. Or should we acknowledge that no one cares about Lisp anymore? :(

      [1] https://common-lisp.net/project/cdr/

      • vindarel 6 years ago

        Just to counterbalance and give some hope: projects like CLASP (CL on LLVM) got university funding to develop it (and, along the way, the ecosystem: an LSP library,…).

        https://github.com/clasp-developers/clasp

        CL is still adopted by universities (for bioinformatics, it seems) and by innovative firms (Emotiq (blockchain), Rigetti quantum computing,…), and is still used by big ones (https://github.com/azzamsa/awesome-lisp-companies), so why not see funding in the future…

      • sdegutis 6 years ago

        What language have they mostly been moving to?

    • dwc 6 years ago

      > the language isn’t CLOS all the way down

      I'm curious what you mean by this, why it's needed or would be a good thing, etc. As a multi-paradigm language, I'm not seeing why CL should have a particular paradigm "all the way down".

      • mikelevins 6 years ago

        It's good because it offers the opportunity to simplify and rationalize the type system and associated protocols without losing features.

        In the early 1990s I worked on an experimental Newton OS written in Dylan. At that time, Dylan was still called "Ralph," and it was basically an implementation of Scheme in which all datatypes were CLOS classes. It was "CLOS all the way down."

        Ralph offered substantially the same affordances and conveniences as Common Lisp, but with a simpler and more coherent set of APIs. Ralph was easier to learn and remember, and easier to extend.

        To illustrate why, consider finite maps. The idea of a finite map is a convenient abstraction with a well-defined API. Common Lisp offers a couple of convenient ways to represent finite maps, and it's easy to build new representations of them, but there's no generic API for finite maps. Instead, each representation has its own distinct API that has nothing particularly to do with anything else.

        By contrast, Ralph had a single well-defined API that worked with any representation of finite maps, whether built-in or user defined.

        The upshot is a library of datatypes that is just as rich as Common Lisp's, but with a simpler and more coherent set of APIs, and an easy standard way to extend them with user-defined types that also support the same APIs.
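        A minimal sketch of the contrast in standard Common Lisp (the generic LOOKUP below is hypothetical, in the Ralph/Dylan spirit, not Ralph's actual API):

```lisp
;; Two standard representations of a finite map, each with its own API:
(defvar *table* (make-hash-table :test #'equal))
(setf (gethash "a" *table*) 1)
(gethash "a" *table*)                        ; hash-table lookup
(cdr (assoc "a" '(("a" . 1)) :test #'equal)) ; alist lookup

;; A single generic API covering both representations (hypothetical name):
(defgeneric lookup (map key)
  (:documentation "Look up KEY in any finite-map representation."))

(defmethod lookup ((map hash-table) key)
  (gethash key map))

(defmethod lookup ((map list) key)  ; treat lists as alists
  (cdr (assoc key map :test #'equal)))
```

A user-defined map type would join the protocol by adding its own LOOKUP method, which is the extensibility being described.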

        There are signs in the Common Lisp standard that people were already thinking in that direction when the standard was defined. See the sequence functions, for example. Ralph, designed by Common Lisp and Smalltalk implementors, carried that thinking to its logical conclusion, and the result was something like a tidier Common Lisp.

        Twenty-eight years later, Ralph is still my favorite of all the languages I've used for serious work. Its development system, Leibniz, remains my favorite development system. My favorite current tools are Common Lisp systems, but that's because I can't have Ralph and Leibniz anymore.

        • mindcrime 6 years ago

          > My favorite current tools are Common Lisp systems, but that's because I can't have Ralph and Leibniz anymore.

          You said below that you don't find modern day Dylan to be as valuable. I don't know much about Dylan, either the pre-1992 version or the newer version(s), but I'm curious if you would elaborate on why the older Dylan was so much superior to modern Dylan in your view?

          • mikelevins 6 years ago

            Because modern Dylan is not an old-fashioned Lisp.

            I prefer working the old-fashioned Lisp way. I start my Lisp and tell it, an expression at a time, how to be the app I want. Modern Dylan doesn't work like that. It's much more a batch-compiled affair, where you write a lot of definitions and compile them all at once to yield an artifact.

            Modern Dylan does not have a Lisp-style repl that you can use to gradually build up your app interactively, teaching the runtime new tricks, and incrementally querying it to examine what you've built--as I did when working on the Dylan Newton.

            For a while, Bruce Mitchener and I discussed what it would take to restore that kind of support to OpenDylan, but in the end I concluded it was an impractical amount of work.

            • mindcrime 6 years ago

              Gotcha. I've never really used OpenDylan, but I've had it on my "list of things to learn one day" for a while, so just curious about your take on that. I didn't realize that old Dylan had a lisp style REPL and that new Dylan doesn't.

        • Immortalin 6 years ago

          What's your opinion on Julia given its Dylan heritage?

          • mikelevins 6 years ago

            Julia is almost good enough for me to use, but not quite. I'd prefer an s-expression syntax, but that's not a deal killer for me. I like some other languages that don't have s-expression syntaxes, though I miss the easy text manipulation that s-expressions support.

            I would want a convenient way to deliver a self-contained executable. If there's a simple way to do that with Julia, I don't know about it. I look for it periodically, but haven't found it. If it's there and I've simply overlooked it, then I might actually start using it regularly.

            I have a few other nits, but they're just nits. On the whole, I think Julia's pretty nice.

            • mikelevins 6 years ago

              One thing I forgot to mention is that I don't know whether Julia handles dynamic redefinitions gracefully.

              What I mean is, for example, if I evaluate a new definition of an existing class, what happens to all the existing instances of that class? In Common Lisp, the old instances are now instances of the new class, and there is a runtime facility, defined in the language standard, for updating existing instances to ensure that they conform to the new definition.

              If a language lacks facilities like that, then it's hard to work the way I prefer to work.

              I guess I sort of expect that Julia will not have graceful support for redefinitions, because, generally speaking, the only people who even think of that feature are people who are intimately familiar with old-fashioned Lisp and Smalltalk systems, and they're sort of thin on the ground.

              But maybe I'll be pleasantly surprised.

              • elcritch 6 years ago

                Julia doesn’t have classes. It relies on multi-methods primarily. Which for scientific computing is a much better fit, IMHO. That being said it’s possible to redefine pretty much any operator, including built in ones at the repl.

                • mikelevins 6 years ago

                  Your reply is a bit confusing, because multimethods are not an alternative to classes. Common Lisp and Dylan, for instance, offer both multimethods and classes.

                  Regardless, Julia does offer user-defined composite types. Can I redefine a composite type without halting the program in which it's being used? If so, what becomes of existing instances of the type?

                  If the answer to the first question is "yes," and if the answer to the second one is "the language runtime arranges for the existing instances to be updated to be instances of the redefined type," then Julia offers the kind of support for redefinition that I am accustomed to in Common Lisp. If not, then it doesn't.

                  EDIT: I dug around and answered my own question: Julia doesn't support redefining structs in the repl.

                  There's a project in progress (Tim Holy's Revise.jl) to add support for redefining functions in a session, and that project contains some discussion of how they might approach redefining structs.

                  Of course, the existence of the project and those comments implies that Julia does not currently support such redefinitions, and that answers my questions.

                  I did notice from the comments on some issues that those folks are aware that supporting redefinition of structs in the repl implies that existing instances may become orphans when their types are redefined, and there's some discussion of what to do about it. Common Lisp's solution--updating the existing instances to conform to the new definition--does not seem to have occurred to anyone.

                  That's not a big surprise. Why would such a feature occur to you unless you were consciously designing a system for building programs by modifying them as they run? Of course, that's exactly what old-fashioned Lisp and Smalltalk systems are designed for, but most people don't get much exposure to that style of programming.

                  I always end up missing those features when I don't have them, though, which is one reason I always end up going back to Common Lisp.

                  • elcritch 6 years ago

                    True, classes and multimethods aren’t exactly interchangeable. It’s just that I haven’t found classes useful (at least as done in Java/Python/C++ objects) as compared to the combination of multimethods and type specialization. At least in the context of scientific computing.

                    Does CL use virtual tables to implement CLOS? Always been curious about that. It seems CL must keep the state associated with redefined objects. How do you handle new fields and filling in values with CLOS?

                    It does appear you can’t redefine structs in the repl. Forgot about that point, though as you point out there doesn’t appear to be anything fundamental to prevent that from being changed in the future. I haven’t used Julia day-to-day much for a while, but hopefully the newer generation tools will add in the “old” features from CL and similar.

                    Have you ever tried CLASP?

                    • mikelevins 6 years ago

                      CLOS classes are logically equivalent to structs. In fact, in some implementations, they are exactly the same. They are therefore useful in exactly the same ways and the same circumstances that structs are useful.

                      Maybe what you don't find useful is inheritance. I can see that. I'm not heavily invested in inheritance myself, though it can be useful in cases where you want a bunch of structs with some shared structure, or in cases where you want multimethods that share some behavior across a bunch of related types.

                      The terminology "virtual table" is commonly used with C++ and other languages that associate methods with classes. Each class in such languages has a hidden member that contains a pointer to a virtual method table used for method dispatch.

                      In CLOS, methods are associated with generic functions, not with classes, and are dispatched on any number of arguments. The standard specifies how generic functions and methods behave, but does not specify how they are to be represented, so the representation is an implementation-specific detail.

                      A naive toy representation might be a table associated with each generic function that maps sequences of types to methods. When the function is applied, Lisp computes the values and types of the arguments and finds the appropriate method for those types. I'm sure you can imagine the sorts of optimizations implementations apply to speed things up, including compiling monomorphic generic functions to simple function calls.

                      This is a bit of an oversimplification, because CLOS also provides a bunch of ways to control and customize how dispatch works--CLOS is less an object system than it is a system for building object systems.

                      When you redefine a class, CLOS automatically calls MAKE-INSTANCES-OBSOLETE, which arranges for all existing instances to be marked obsolete (it's up to the implementation to determine exactly what that means). When control touches an obsolete instance, the runtime calls UPDATE-INSTANCE-FOR-REDEFINED-CLASS with the instance, a list of added slots, a list of discarded slots, and a property list mapping the names of discarded slots to the values they had when they were discarded. If you've specialized UPDATE-INSTANCE-FOR-REDEFINED-CLASS for the case in question, the instance is reinitialized according to your specialized method, and things proceed as if it had the new type definition when it was instantiated.

                      If you haven't specialized UPDATE-INSTANCE-FOR-REDEFINED-CLASS then you'll end up in a breakloop. A breakloop is a repl session with read and write access to the call stack and the variable environment. The assumption is that you'll inspect the stack and environment, decide what UPDATE-INSTANCE-FOR-REDEFINED-CLASS needs to do, write that code, then invoke a restart that causes the halted function to resume execution as if your new definition had existed when it was originally called.

                      Again, the language is designed with the assumption that writing a program by modifying it while it runs is standard operating procedure. That being the case, the obvious thing to do when there isn't a relevant definition for UPDATE-INSTANCE-FOR-REDEFINED-CLASS is to offer you the chance to create one, and resume execution from there once you've created it.
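                      A toy sketch of that machinery in standard CL (not the Ralph code; the POINT class is illustrative):

```lisp
(defclass point ()
  ((x :initarg :x :accessor x)
   (y :initarg :y :accessor y)))

(defvar *p* (make-instance 'point :x 1 :y 2))

;; Redefining the class makes existing instances obsolete
;; (MAKE-INSTANCES-OBSOLETE is called for us).
(defclass point ()
  ((x :initarg :x :accessor x)
   (y :initarg :y :accessor y)
   (z :initarg :z :accessor z)))

;; Teach the runtime how to bring obsolete instances up to date.
(defmethod update-instance-for-redefined-class :after
    ((instance point) added-slots discarded-slots plist &key)
  (declare (ignore discarded-slots plist))
  (when (member 'z added-slots)
    (setf (slot-value instance 'z) 0)))

;; The next time control touches *P*, it is updated in place:
;; (slot-value *p* 'z) => 0
```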

                      I've examined CLASP a bunch of times. I keep meaning to mess with it, but I haven't yet.

      • throwaway487548 6 years ago

        Uniformity, which is a really good thing. Sure, you could say (class-of 3) or (class-of nil) or (class-of '(1 2 3)), but technically these values are not objects, unlike in, say, Scala, which is a real-world example of how good it is to have a uniform language (everything is an expression, every value is an instance of a class, and therefore everything is uniformly higher-order and uniformly typed, unlike Java with its distinction of so-called primitive types, etc.).

        • dwc 6 years ago

          Uniformity through imposing one paradigm on everything isn't attractive at all to me, especially for a paradigm I have no interest in using and avoid when I practically can.

        • junke 6 years ago

          Every value in CL is an instance of a class. Some of those classes are built-in classes, which are restricted for performance reasons. You do not inherit from Int in Scala either, since it is marked as "final", as far as I know.

          • throwaway487548 6 years ago

            In Scala an integer has methods, like everything else, unlike in Java and C++, and this is the point and the big deal.

                3 + 2 is actually 3.+(2) which is the right thing.
            • kbp 6 years ago

              You can specialise generic functions on built-in classes in standard CL. Lisp methods are specialisations of generic functions; they don't belong to a class in the way methods do in eg C++. The issue you're talking about is that not all functions are generic functions in Common Lisp, and you can't specialise ordinary functions.

              There's nothing stopping you from doing

                  (defmethod add ((x number) (y number)) (+ x y))
                  (defmethod add ((x string) (y string)) (concatenate 'string x y))
              
              or whatever (multiple dispatch, too), and you could even call it + instead of ADD if you wanted (but not COMMON-LISP:+, so other code would continue to work; your packages could import your + instead of the standard one).
            • junke 6 years ago

              "+" is a function, what makes it "right" to be a method?

              • pfdietz 6 years ago

                You mean, what makes it right to be a generic function that has methods?

                First, + does dynamic dispatch based on the types of its arguments. It does different things when adding fixnums, vs. integers, vs. rationals, and so on, as well as a default method that signals a type error (in safe code). So it has methods, even if they aren't necessarily implemented as standard methods (but they could be).

                Secondly, a user might want to make + work on other, user-defined classes (for example, if the user wanted it to work on a class representing quaternions). To make that work, the user would have to be able to add methods for those classes. One can imagine many CL builtins being implemented as generic functions to which users could add methods. This would be consistent with the standard.
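                One way to get that effect today is to shadow CL:+ in a package of your own (a sketch; the QUATERNION class is pared down to two slots for brevity):

```lisp
(defpackage :generic-arith
  (:use :cl)
  (:shadow #:+))
(in-package :generic-arith)

(defgeneric + (x y)
  (:documentation "User-extensible two-argument addition."))

;; Fall back to the standard + for numbers.
(defmethod + ((x number) (y number))
  (cl:+ x y))

;; Extend it to a user-defined class.
(defclass quaternion ()
  ((r :initarg :r :reader r)
   (i :initarg :i :reader i)))

(defmethod + ((x quaternion) (y quaternion))
  (make-instance 'quaternion
                 :r (cl:+ (r x) (r y))
                 :i (cl:+ (i x) (i y))))
```

Code in other packages keeps COMMON-LISP:+ untouched; only packages that import this + see the generic version.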

              • jhbadger 6 years ago

                a function is just a method that returns a value. Why make a special case for it? Of course you can go the other direction and allow functions to return nothing (or a representation of nothing like nil). That's fine too.

        • e12e 6 years ago

          But would such a common lisp be better than something like Dylan?

          • mikelevins 6 years ago

            It would not be better than circa-1992 Dylan. It would be better than present-day Dylan, though.

            My opinion only, of course.

  • phoe-krk 6 years ago

    > It surely is remarkable as a specification, encompassing so many aspects of the language, yet it leaves so many other aspects to the particular implementation. Multi-threading for example is supported by most major implementations yet their syntax (and often, semantics) are very different from each other. Same goes for cooperative multitasking (coroutines etc). Someone else could come up with other examples.

    One of the arguments towards the "we don't need to upgrade the Common Lisp specification" attitude is that if we have multiple different implementations, each with its own interface to implementation-defined functionality, then we can successfully unify that interface as the language's users, without changing the underlying language.

    Portability libraries that offer uniform syntax for features that implementations define differently are the de facto standards in the Lisp community. To name some examples, networking (USOCKET), multithreading (BORDEAUX-THREADS), accessing the metaobject protocol (CLOSER-MOP), GC and weak data structures (TRIVIAL-GARBAGE), accessing the filesystem and operating system functionalities (UIOP). There are many others.
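    For instance, with BORDEAUX-THREADS the same thread code runs unchanged on SBCL, CCL, LispWorks and the rest (assuming Quicklisp is available to load it):

```lisp
(ql:quickload :bordeaux-threads)

;; One portable spelling for what each implementation names differently.
(defvar *lock* (bt:make-lock "counter-lock"))
(defvar *count* 0)

(defun increment-in-thread ()
  (bt:make-thread
   (lambda ()
     (bt:with-lock-held (*lock*)
       (incf *count*)))
   :name "incrementer"))

(bt:join-thread (increment-in-thread))
;; *count* has been incremented by the worker thread.
```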

    --------------------

    At the same time, I agree that the specification needs an upgrade. As soon as I find some time, I'll try to think of means of organizing such a committee and, if required, securing the funding for it.

    • blue1 6 years ago

      > As soon as I find some time, I'll try to think of means of organizing such a committee and, if required, securing the funding for it.

      That would be great!

  • e40 6 years ago

    The original specification was extremely expensive to create, from the POV of a Lisp vendor. Today, there are few vendors left.

    In the early 00's, a bunch of users got together to create the next version of the specification, and only Franz and Lispworks were there (because they were the only ones left standing). It was a complete shitshow. The users were very enthusiastic about piling work on the two remaining vendors, and the vendors walked away from the table. I was there as a vendor, and it was frustrating.

    At the time, the users wanted to standardize multiprocessing APIs, yet no vendor had an SMP version of their API! For the decade after this aborted attempt, at least one vendor had to make lots of changes to their API. It would have been horrible to standardize the APIs of the time. None of the users wanted to hear this.

    This leads me to two main points:

    * Users are not the right crowd to standardize a language, because they do not have to implement the things they dream up. This is an important point.

    * Standardizing a multiple-source language when most of the sources are OSS, and the non-OSS sources are vendors that do not derive much revenue from the language, is doomed to fail. It's economics.

    The best thing for Common Lisp: users make and support packages that work across many CLs, and users get the package via quicklisp.

    • pfdietz 6 years ago

      > The best thing for Common Lisp: users make and support packages that work across many CLs, and users get the package via quicklisp.

      This is why I'm hot for implementations to support package local nicknames. This extension will largely solve a problem that QL has: package name collisions.
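      A sketch of how the extension looks where it's supported (SBCL, CCL, ABCL, ECL and others accept a :LOCAL-NICKNAMES clause in DEFPACKAGE):

```lisp
(defpackage :my-app
  (:use :cl)
  ;; "RE" means CL-PPCRE only inside MY-APP; another system is free to
  ;; nickname a different library "RE" without a global collision.
  (:local-nicknames (:re :cl-ppcre)))

(in-package :my-app)

;; (re:scan "a+" "caat") ; CL-PPCRE, without the long global prefix
```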

  • kazinator 6 years ago

    From a just purely internal POV, ANSI CL has documented unresolved issues that could be nailed down.

    Example:

    When a dynamic control transfer takes place to a targeted exit point, when are the intermediate exit points (the ones scoped within that one) torn down? The spec says they could be torn down gradually as the control transfer unwinds, or they could be torn down before the control transfer initiates.

    The difference is visible to the program: in an unwind-protect cleanup form, the intermediate exit points are still visible in the former implementation, but no longer visible in the latter implementation.

    I'm generally in favor of everything being as late as possible; the points should stay visible while their scope has not been traversed yet. That way the intercepting handler has the maximum choice: though it intercepted a distant control transfer, less distant control transfers are still available to it.

    In any case, 24 years is enough time to pick one behavior or the other and codify it. That's the proper job of a standard.
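    A few lines make the difference visible; whether the RETURN-FROM INNER below is still legal during the cleanup is exactly the point left open:

```lisp
(block outer
  (block inner
    (unwind-protect
        (return-from outer :outer)  ; transfer targeting OUTER begins
      ;; While unwinding toward OUTER, is the INNER exit point still live?
      ;; Gradual teardown: this RETURN-FROM is valid, supersedes the
      ;; transfer to OUTER, and the whole form yields :INNER.
      ;; Teardown before transfer: INNER is already gone, and this
      ;; RETURN-FROM has undefined consequences.
      (return-from inner :inner))))
```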

  • vnorilo 6 years ago

    This. I was involved in a painful port of a large MCL codebase to LispWorks, and many of these platform and implementation specific behaviors are bound to pop up in any practical application. Threading, networking and FFI were the fundamental problems.

    • varjag 6 years ago

      They've all had portability layers for a long while now.

  • pfdietz 6 years ago

    Well, some upgrades to the CL spec could be nice, but I'm not sure how essential they are. Package local nicknames would be nice, and perhaps some tighter specification of undefined behaviors.

    Because of how CL works, libraries are closer to language extensions than they would be in other languages. That may reduce the need for actual language changes. And libraries can be "standardized" by just maintaining a list of de facto standard ones. We're already seeing that.

    Changes for the sake of tidiness seem like a waste of time. I'd rather see the effort devoted to enriching the CL library ecosystem, adding more things to quicklisp, curating what is there, and keeping CL implementations up to date with changes in hardware.

  • blue1 6 years ago

    a large part of the CL community treats the spec a bit like a sacred text, in the sense that it is both very good and impossible to change. The most important reason, apparently, is that the resources to update it no longer exist. Maybe the struggling commercial vendors also don't want to standardize their offerings and become too similar to the open-source implementations?

    • junke 6 years ago

      > a large part of the CL community treats the spec a bit like a sacred text

      That's not my experience. I'd say that people know what the CL spec currently is, love its stability, and might fear what it could become after an "upgrade".

      Anybody can write something like CL21, for example, with one's own style and notations. That's why it is frowned upon to do so and to claim that "this" library is how "the" new Lisp should be. Part of growing in Lisp (and programming) is to learn how to work with a language, not against it, and avoid fixing what is not broken.

      However, I trust Common Lisp compiler developers and vendors to know how to make backward-compatible, useful changes: current common lisp environments have way more facilities and building blocks than the bare minimum required. They also know the pain points that currently exist in the standard in a way that goes beyond aesthetic things.

      • blue1 6 years ago

        It seems to me that this is a case of making a virtue of necessity: since the language does not evolve, many have learned to love the fact that it does not. To me it looks a lot like a form of Stockholm Syndrome.

        I find your remark that "anybody can write something like CL21" interesting. (Though not all problems can be solved with a library; let's stick to this side of it.) In most languages you can't do this kind of customization. In Lisp you can, but so can anyone else, and the result is that after 25 years there is still no standard way to write a hash literal, for example, or to concatenate strings in a non-verbose way. It is difficult to claim that these are good things.
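        To make the complaint concrete, here are the portable spellings: correct, but verbose, and any shorthand remains project-local:

```lisp
;; String concatenation: the standard way.
(concatenate 'string "Hello, " "world")  ; => "Hello, world"

;; Hash-table construction: no literal syntax, so it takes several
;; forms (or a reader macro of your own choosing).
(let ((h (make-hash-table :test #'equal)))
  (setf (gethash "one" h) 1
        (gethash "two" h) 2)
  h)
```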

        • Jach 6 years ago

          It's a curious situation for sure. QBasic shares the property that you can run decades old code unmodified, but we don't celebrate that language. I think a certain amount of resistance to updating it must also come from looking at other languages that have no spec whatsoever (not even an old one) and are doing or have done just fine. Then you look at the various problems that would be addressed by an ANSI update, and see that so many of them are resolved with libraries that work across implementations, some bundled OOTB with implementations, and easy to get via quicklisp if not. And as you bring up, though there's not a standard for hash literals, which aesthetically sucks, you still have a gorillion choices you can import including writing your own in a few lines to fit your aesthetics if you can't stand the alists. Lisp is powerful enough to support such flexible choices without a headache.

          A serious effort would probably benefit from talking to the C++ people. Since my knowledge of C++ is basically stuck at C++98 + a few Boost utils, my outside perspective is that C++0x and eventually C++11-C++17 were just standardizing what were already halfway de-facto standards in Boost (halfway because you could get a lot of questionable looks for including Boost from some parts of the community, and the limitations of C++ meant some things didn't look or debug as nice as if there were compiler support). Still, with the standard the syntax has grown quite a bit and some of it can be more pleasing than the Boost macros. Did it break anyone? Probably, but at least compilers let them compile with old standards. It'd be a good conversation with C++ committee people especially to see what it took to finally update such an old spec. A CL update would surely need a similar feature of running under the old spec, since e.g. if you're adding literal hash table support you're almost surely going to break someone who was using that syntax choice for something else.

        • phoe-krk 6 years ago

          > or to concatenate strings in a non-verbose way

          UIOP:STRCAT. UIOP is present on every contemporary Lisp image as a part of ASDF.

          • blue1 6 years ago

            I know it is easy. In this particular example, it is also trivial to write it from scratch.

            But, if UIOP:STRCAT is such a "standardized" solution as you claim, then I believe it should become part of a (hypothetical updated) Standard, not a second-class citizen behind a member of the ancient aristocracy like Lord CONCATENATE.

            What I mean is that I think there should be some kind of mechanism so that standardized solutions emerge (from widely used libraries for example) and become part of the Standard. Like it happily happens in many other domains.

            • junke 6 years ago

              In Go, in Javascript, in Python, in C++, you have the luxury of having thousands of people who just implement things for you while you sleep. See the list of "gold members" for the Standard C++ Foundation: https://isocpp.org/about. See the list for Python: https://www.python.org/psf/sponsorship/sponsors/

              There are people whose job is to work on that.

              When you write "it should become part of a standard", you use the passive voice: who is going to do it?

              In CL, the effort is focused on things that matter to each individual, or each company that uses it, and sometimes a (de-facto) standard library comes to life.

            • phoe-krk 6 years ago

              I never said that the updated standard should not happen - quite the contrary. I'm only saying that a string concatenation function is already available on every modern Lisp image that has ASDF loaded, and is therefore usable from user code.

    • bitwize 6 years ago

      Indeed. Common Lisp should be more like C++, where the right way to write Hello World changes every few years. If your language isn't moving fast and breaking things, it means it's abandoned and probably shouldn't be used for real work.

      • blue1 6 years ago

        I know you're being sarcastic, but even in the ANSI CL spec the word "deprecated" appears in some places. It's just that since nothing followed it, the deprecation had no practical effect: the fixedness of the CL spec is just an accidental result of history.

        As an aging developer I also detest environments that break things every other day, but I have realized that, on the other hand, systems that never move do tend to wither, usually in comfortable niches, while the flow of things moves elsewhere.

    • znpy 6 years ago

      > a large part of the CL community treats the spec a bit like a sacred text

      Amen!

    • throwaway487548 6 years ago

      > treats the spec a bit like a sacred text,

      This is not necessarily bad, especially in the context of education. It has been known for ages that a bit of mystery and sacredness makes rather boring moral education much more emotionally charged.

      There are no active Common Lisp based courses, as far as I know. The last one was that of the Allegro team, but it was rather an attempt at promotion and was done without "religious vigor", unlike the famous CS61A by prof. Brian Harvey, which could be viewed as almost sectarian, especially the very first version from 2008. A similar course could easily be made based on Common Lisp, which includes more advanced concepts in CLOS.

      At least some zeal is good for education. CS61A and the original SICP course by the Wizards are good and still relevant examples.

  • znpy 6 years ago

    On a side note, I see that this post is actually generating some mixed response: I can see its vote count going up and down since I posted it.

    It has cumulatively gathered more upvotes+downvotes than the total of its comments (and comments to comments, etc.).

    • olavk 6 years ago

      I think the "I'm going to get downvoted for this..."-baiting leads to a lot of reflexive upvotes and downvotes regardless of the actual content of the comment.

  • cestith 6 years ago

    I agree that perhaps a newer CL standard is called for. On the other hand, there's more to the Lisp family than Common Lisp. The Scheme folks have updated their standard a few times.

    Maybe it would be nice to see threading, Unicode, coroutines, and some other things put into a standard library with as little change to the core language as possible.

  • flavio81 6 years ago

    >In this sense, the various C++ language working groups are doing a remarkable job

    Really? Ask a large number of C++ users...

    >I am still not convinced by the general "we don't need to upgrade the Common Lisp specification" attitude.

    I am, fully. The language can be extended by using the language.

  • mdhughes 6 years ago

    Scheme moves forward because it's not a rotting corpse as LISP is pictured in that cartoon, it's a still-living language. Scheme's not admired for its purity and historic role, but because it's useful and fast. The R6RS/R7RS arguments are annoying (R6RS is right) but show that someone cares.

    • blue1 6 years ago

      I don't think Common Lisp can be called "pure": it's a language full of compromises (not in the bad sense of the word). Scheme was considered the pure sibling in the lisp family, at least initially.

      • znpy 6 years ago

        Please let's not get into the Scheme vs Common Lisp flame, it's pointless.

        The discussion should be about how to keep the language alive and on par with the currently used technologies.

        • blue1 6 years ago

          It was not my intention, it just sounded strange. I am not even sure whether being "pure" is a good thing or not :)

      • mclehman 6 years ago

        I think they meant even though Scheme is pure, that's not the reason people admire it. Rather, its speed and usefulness engender that admiration.

        • mdhughes 6 years ago

          Right, Scheme is used and those uses are reflected in the spec(s). The minimalism of Scheme is what makes it easier to optimize, but that's not why it's alive.

          If LISP users want their variant to stay alive as well, they need to follow that model of actually using it and documenting how it's used.

          Vengeful LISP zombies gonna downvote, of course, because they're bad people.

          • Jach 6 years ago

            Why the hostility? Since we're thinking at the level of cartoons, here's a fun one: http://kvardek-du.kerno.org/2010/01/how-common-lisp-programm... Note the third row's distinct family orientation...

            • mdhughes 6 years ago

              I'm not hostile, riffing on a cartoon is what humans call "humor".

              While I have no problem with "Scheme is Spock", I'd put Captain Pike in his beeping wheelchair in the CLISP slot, and Clojure is clearly Kirk: All but married to the Enterprise.

              But whooo, there's some tactless stuff in those PHP and Forth slots. And Javascript's a Scotty-type lang now, it's for getting engineering done.

    • dleslie 6 years ago

      Scheme needs a standard ffi so badly.

      And R7RS is better, now.

lerax 6 years ago

Nice transcript. I would love it if people did this more often.

cblum 6 years ago

That moment when you happen to know the person linked, but in a totally unrelated context (our kids are friends).

Now I've gotta find some time to watch or read this :)

moron4hire 6 years ago

I've never done work with CL, but I've done quite a bit of Racket. What does CL offer that Racket doesn't? I've found Racket to be fairly complete, and the areas where it's not are usually quite simple to fill in (thus seem pretty clearly intentionally left open-ended).

  • mikelevins 6 years ago

    A language designed for modifying programs while they run.

    Common Lisp and Smalltalk are languages designed to support modifying programs while they run. It's the standard way to develop a program in old-fashioned Lisp and Smalltalk systems: you start up your language runtime and then teach it, incrementally and interactively, to be the application you want.

    Interactive repls are pretty common nowadays, but a repl is not the same as being designed to support programming-as-teaching.

    Consider the Common Lisp generic function UPDATE-INSTANCE-FOR-REDEFINED-CLASS. It's in the language standard. It provides an extensible mechanism that the runtime uses to update existing instances of a class when the class is redefined.

    Why would you ever want such a function at all, much less want it to be part of the language standard?

    Because the authors of the Common Lisp language standard were experienced Lisp hackers. They took for granted that the way you write a program is you start your Lisp and then interactively teach it how to be the application.

    You don't want to have to stop the program and rebuild it just because you redefined a class. That's silly. The runtime should know how to update any existing instances to reflect the new definition, because telling the Lisp new definitions as it runs is the standard way that you normally work.

    Old-fashioned Lisp and Smalltalk systems are full of features that make this assumption--that you build an application by interactively teaching a running program how to be the application. As far as I know, they're the only development systems designed with that assumption (although, if Racket has such features and I overlooked them, I'd be glad to know it!).

    If you're not familiar with that style of programming (and I think most programmers are not), or if you don't prefer it, then something like UPDATE-INSTANCE-FOR-REDEFINED-CLASS makes no sense. On the other hand, if you prefer that way of working, as I do, then the absence of features like UPDATE-INSTANCE-FOR-REDEFINED-CLASS is as uncomfortable as a missing tooth.
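    A minimal sketch of what that looks like in practice (the class, slot names, and the distance computation here are invented for illustration):

    ```lisp
    ;; Initial definition, with an instance already live in the image.
    (defclass point ()
      ((x :initarg :x :accessor x)
       (y :initarg :y :accessor y)))

    (defparameter *p* (make-instance 'point :x 3 :y 4))

    ;; Redefine the class with an extra slot. Existing instances are
    ;; updated lazily; this method fills in the new slot when they are.
    (defclass point ()
      ((x :initarg :x :accessor x)
       (y :initarg :y :accessor y)
       (r :accessor r)))

    (defmethod update-instance-for-redefined-class :after
        ((instance point) added-slots discarded-slots property-list &key)
      (declare (ignore discarded-slots property-list))
      (when (member 'r added-slots)
        (setf (r instance) (sqrt (+ (expt (x instance) 2)
                                    (expt (y instance) 2))))))

    ;; (r *p*) => 5.0 -- the pre-existing instance acquired the new
    ;; slot without the program ever stopping.
    ```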

  • peatmoss 6 years ago

    As a Racket dabbler, I’d say there are a couple things:

    - performant implementations (SBCL in particular is a standout). I mean, Racket’s pretty good, and will typically blow the doors off Python, but SBCL is in a different league.

    - multiple implementations conforming more or less to a standard. This can also be a distraction, but is on balance a nice thing to have, particularly if you wanted, for example, an implementation that ran on the JVM (ABCL).

    - different set of libraries. Racket’s library situation is pretty good, but sometimes CL will have something that Racket doesn’t (and sometimes the other way around)

hguhghuff 6 years ago

One of the things that really put me off Lua was lack of a strong central control and organization for the language, resulting in fragmentation.

How does lisp stand in this regard?

  • jorams 6 years ago

    It's interesting that you say that about Lua, because it actually does have a strong central control and organization for the language itself. The "problem" is that it doesn't go further than the language, large parts of the community disagree with their choices, and they tend to not care about backwards compatibility too much.

    Lisp is very different in that area. I think that is because the language standard is large and doesn't change. As long as people want to conform to that standard, splitting up too much doesn't really make sense.

  • blue1 6 years ago

    Depends on what you mean by "Lisp". In the wider sense of the lisp family, there are obviously many differences (Clojure, Scheme, Common Lisp, etc.). If instead you mean Lisp == Common Lisp, the ANSI spec - which is a really well-made standard - is common to all implementations and basically works; for what came after that (threading, unicode, networking, etc.) the commercial implementations have their own extensions, while the open source ones rely on libraries, some of which are considered best practice.

winter_blue 6 years ago

After 20 years of programming[1], I’ve gotten to a point in my life where an advanced type system (with strong static types, no nulls — optionals instead, type inference, type refinement, etc) is absolutely mandatory for any language I’ll pick (when I have the freedom to choose). I’ve heard that there are libraries that tack types onto LISP — but how good are they?

LISP code I’ve seen anywhere has always been without types (ie dynamically typed). No LISP dialect that I’m aware of has prioritized or talked about or emphasized type checking. Is this something that’s even big on the radar of the LISP implementors and the LISP community at large?

I’ve been burned too many times working on large (100k+ LOC) dynamically typed codebases that I really never want to work with such a codebase again.

A sincere question: Besides its homoiconic syntax and probably the best implementation of macros — what does LISP offer over a modern programming language with a sophisticated type system – even, for instance, a language as old as OCaml?

Peter Norvig has a web page called “Is Lisp Still Unique? Or at Least Different?”, last updated back in 2002, where he talks about the unique features LISP offers, and how other languages are catching up: http://norvig.com/Lisp-retro.html. Modern languages of today offer the features that made LISP unique and superior in its heyday, and languages with powerful type systems offer a huge advantage that LISP has never had.

[1] I started programming in 1998, when I was 8 years old. I’ve developed software professionally for a far shorter period.

  • phoe-krk 6 years ago

    SBCL has a very sane type inference engine that it uses to detect type mismatches and for optimization. This, along with type and ftype declarations, is enough to write statically typed code in Lisp with compile-time warnings.
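    A small sketch (assuming SBCL; the function is made up and the exact warning text varies by version):

    ```lisp
    ;; Declare the function's signature up front...
    (declaim (ftype (function (fixnum fixnum) fixnum) add2))
    (defun add2 (a b)
      (+ a b))

    ;; ...and SBCL warns at compile time when a caller violates it:
    (defun oops ()
      (add2 "one" 2)) ; caught WARNING: "one" conflicts with FIXNUM
    ```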

    • YouAreGreat 6 years ago

      > enough to write statically typed code in Lisp

      Certainly not in the way "statically typed" is usually understood.

      Containers (outside of specialized arrays) aren't parameterized, so every time you extract a value from a list or map, the "static type" is the top type. SBCL remains essentially unityped with islands of type propagation for better performance of (mostly) numeric code.

      • phoe-krk 6 years ago

        Lists aren't typed in standard Common Lisp. For these cases, using THE around the access site is required to declare the type for inference.
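        For instance (a hypothetical function, just to show the shape):

        ```lisp
        ;; (first list) is only known to be of type T, so THE narrows
        ;; it at the access site and lets inference proceed from there:
        (defun head-plus-one (list)
          (1+ (the fixnum (first list))))
        ```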

  • beders 6 years ago

    Types are for the compiler. Not humans. We just happen to use it as a way to remind us what data looks like. It's not the only way to figure out what happens in a function.

    I've gone the exact opposite route now: From static types to dynamic types to interactive coding using specs.

    Specs are much more expressive than types but - and that is the main difference - it's a la carte and opt-in.

    If you don't opt-in, you'll spend more time at the REPL figuring out what a certain piece of code does.

  • 0x445442 6 years ago

    I'm with you. I went through the whole dynamic and weak typing love fest but my thinking has evolved. I much prefer static and strongly typed languages now because I don't view lines of code or the number of key strokes as the limiting factor in the development process. The bottle neck is always reasoning about code (new or existing) and debugging.

  • mark_l_watson 6 years ago

    Serious suggestion: Haskell is a Lisp, so use it. (I use Common Lisp, Scheme, and Haskell; I consider Haskell Lisp-like enough, and it has the same REPL-based, bottom-up coding workflow.) I really prefer Lisp to Haskell, but Haskell is also awesome. If you want a strongly typed Lisp, look no further.

  • VogonWorkEthic 6 years ago

    I've heard racket is the way to go with types in lisp.

    https://racket-lang.org/

    Just passing along what I heard though, I personally don't use types and think things like clojure spec do a much better job fitting the bill.

    • peatmoss 6 years ago

      I think some are bristling that Racket is the way to go, but Typed Racket is certainly a good way to go. https://docs.racket-lang.org/ts-guide/

      You can start without types and later convert parts of your code to typed without too much headache.

      But, as the GP has mentioned, SBCL allows you to add type annotations which is similar in spirit to the idea of progressively adding types.

      Even as a Racket fan (and Racket’s performance is pretty good), I covet the blazing performance of SBCL, so the calculus is not cut and dry.

      • sevensor 6 years ago

        Interesting stuff is happening performance-wise. I have my eye on both Racket-on-Chez and pycket.

        • peatmoss 6 years ago

          Racket-on-Chez may eventually result in a better performing Racket, but SBCL is still going to be a hard target to match.

          That said, Racket on Chez means more of Racket is getting written in Racket. In turn, I suppose Racket-on-SBCL is not outside the realm of possibility. In general, it would be cool if Racket ran in more places.

    • arminiusreturns 6 years ago

      Or maybe guile?

      • peatmoss 6 years ago

        Some exciting stuff happening in Guile land these days. They’re getting a new JIT which sounds like it makes Guile’s performance closer to Racket’s.

        Also, I’ve been pleasantly surprised by some of the libraries in Guile. For example I was looking for a curses library the other day, and Guile’s was the clear standout in the Scheme-o-sphere. I didn’t follow through on my project, but Guile’s implementation looked pretty thorough and well-documented.

        Oh, and GNU project stuff does have a nice consistency to it. I mean, all the docs are predictably there and follow the same typographical conventions. That’s a nice feature IMO. Racket does something similar, but Racket’s also not the size of the entire GNU ecosystem, which gives Guile some network effects in terms of that consistency.

  • cicero 6 years ago

    I've been thinking along the same lines. I currently use Python for relatively small projects, but even then, I wish I had type declarations when I come back to code I haven't looked at in a while.

    What is your favorite language with an advanced type system, Haskell, OCaml, or something else? I've dabbled in both, but I want to dive deep into one. I like the look of Haskell more, but I'm a little afraid of it.

  • flavio81 6 years ago

    >After 20 years of programming[1]

    29 years here. I started in 1988.

    >I’ve gotten to a point in my life where an advanced type system... is absolutely mandatory for any language I’ll pick

    I used static type systems for most of my programming life until I used JavaScript, Python, and then Common Lisp. I don't consider them mandatory.

    On the other hand, I do consider strong typing mandatory. That was my problem with Ruby and JS, not to mention PHP.

    >without types (ie dynamically typed)

    "dynamically typed" doesn't mean "without types"

    >No LISP dialect that I’m aware of has prioritized or talked about or emphasized type checking

    Well, check again, then -- Common Lisp is strongly typed and has a lot of type checks. They just happen at runtime.
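    For example (CHECK-TYPE is standard; the function itself is made up):

    ```lisp
    (defun half (n)
      ;; Signals a correctable TYPE-ERROR at runtime if N is not an
      ;; integer -- no silent coercion, unlike weakly typed languages.
      (check-type n integer)
      (/ n 2))

    ;; (half 10)   => 5
    ;; (half "10") => TYPE-ERROR, with a STORE-VALUE restart offered
    ```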

    >I’ve been burned too many times working on large (100k+ LOC) dynamically typed codebases that I really never want to work with such a codebase again.

    Which language?

    >what does LISP offer over a modern programming language with a sophisticated type system

    you'll have to research more

    >Modern languages of today offer the features that made LISP unique and superior in its heyday

    None offers metaprogramming as easy and as powerful as Lisp's ("easy" being the key point).

    None, except Dylan or some Scheme extensions, offer something like CLOS

    None, except Smalltalk and its derivatives (Pharo), offer a powerful interactive image-based environment where everything is dynamic and can be redefined at runtime, AND the running image can be saved to disk and restored later.

    Very few will have an exception handling and recovering mechanism as complete and reliability-assuring as Common Lisp's "signals, conditions and restarts" system.

    Few offer a full numeric tower out of the box, including IEEE floats, bignums, fixnums, complex numbers, and rationals.

    Very few dynamic programming languages run as fast as Common Lisp. Few of them compile to native code without losing any feature or functionality or forcing you to use a subset of the language.

    And none of all, except Common Lisp, offer all of the above at the same time.
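    To make the conditions-and-restarts point concrete, here is a sketch (the function names are invented): low-level code offers named recovery strategies, and higher-level code picks one without the stack unwinding first.

    ```lisp
    (defun parse-entry (line)
      ;; The low level offers a recovery strategy...
      (restart-case (parse-integer line)
        (use-value (value)
          :report "Supply a value to use instead."
          value)))

    (defun parse-all (lines)
      ;; ...and the high level decides what to do when a bad entry
      ;; is hit, while the stack below is still intact.
      (handler-bind ((parse-error
                       (lambda (c)
                         (declare (ignore c))
                         (invoke-restart 'use-value 0))))
        (mapcar #'parse-entry lines)))

    ;; (parse-all '("1" "oops" "3")) => (1 0 3)
    ```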

    >and languages with powerful type systems offer a huge advantage that LISP never has had.

    You're forgetting the advantages that Lisp has over them.

    But I don't want to convince you -- I have huge respect for OCaml, SML and Haskell (by the way, did you know the first ML implementation was basically an extension to Lisp?). They simply have a very different philosophy for development, one focused on doing everything at compile time, and on giving you just a few tools to deal with ill behavior at runtime.

    • kbp 6 years ago

      > I do consider strong typing mandatory. That was my problem with Ruby

      What isn't strongly typed about Ruby?

      • Confusion 6 years ago

        I was going to ask the same. According to the regular definition of 'strong typing', Ruby is strongly typed. You cannot take a bunch of bytes in memory and pretend they are a different type. If something is an Integer, you can only treat it as an Integer. Sure, it may sometimes seem that way (duck typing, automatic type conversion), but it never is that way (no casting or type coercion)

  • Jach 6 years ago

    As mentioned, types are in the standard, and certain implementations provide compile time warnings of type and other issues...[0]

    The biggest non-feature advantage (the condition system is a pretty big feature though missing from everything else) is the mindset of writing programs, which other dynamic languages typically don't get right. The idea of "growing" a program, and responding to change, is part of that mindset and once it's internalized a lot of the language decisions make a lot more sense. e.g. "trace", "break", "compile" are all part of the standard, all available at runtime, whereas in other languages you tend to have a separation of the runtime from any "IDE feature" like debugging break points, tracing function calls, and compiling (not "eval"ing) new code. Other languages + IDEs are getting there but there's so much more to go.

    In Java land, with the support of things like JRebel, we can make pretty significant modifications to a running program like add/change/remove methods to a class. But what about any existing objects I may have instantiated? Those are left around, referring to the old code. In Lisp, redefining classes works out of the box (no need to wait decades for JRebel to show up), and "update-instance-for-redefined-class" is a standard generic function, which you can write a version of to handle conversion of existing objects. No mysterious old versions of objects running around, you didn't have to bring down the program to restart it, nice. It's a very dynamic and inspectable system, going beyond just the types of things. That to me is still Lisp's advantage as other languages + language tools slowly add more and more of its features that are there already in Lisp independent of extra tools (and Lisp has extra tools too for more things).

    I can't resist commenting a bit more on the type situation directly though... The creator of Clojure talks about "situated programs" in https://www.youtube.com/watch?v=2V1FtfBDsLU (he also has 20+ years of professional programming -- I on the other hand am only arguably on year 10 of "paid" exp (I wouldn't call all my gigs professional and not all were full time) but I tend to ignore how much experience someone has when people with equal or greater experience have such divergent views) and along the way I think he makes good points about how type systems get in the way of designing these big long-running systems that need to change as the years go by. Certainly I see this problem in the gigantic Java code base I deal with every day. Tight coupling is the nemesis of modularity, which you need for big systems, I tend to view static types as premature tight coupling. They're not without their benefits (typo protection less so for me but I love the easy speed wins, which you get in Common Lisp too if you bother to declare types; a form of QE via formal type proofs can be useful too), but they're also not without their downsides, and for many types of programs large or small the downsides can dominate.

    [0] Examples: https://news.ycombinator.com/item?id=12222404 (told about state never being reached? awesome!) https://news.ycombinator.com/item?id=13389287 (notes about why it couldn't use a fast operator, so you need to fix your types) https://news.ycombinator.com/item?id=14780381 (other nice things like unused var, wrong number of arguments, typoed name...)

    • jungler 6 years ago

      There's a page on C2 Wiki that I'm reminded of: "alternating hard and soft layers" as a pattern. And I think this pattern tends to develop throughout large, long-lived systems. Shakeups like transitioning from native apps to the browser, or the replacement of the sysv init scripts with the much larger systemd, make some things "harder" and others "softer".

      So you want some dynamism throughout so that a layer change is possible, but you also want some static elements so that they do the existing work reliably. It's a cyclical process, and often terminates abruptly when some other piece of software comes up and replaces the existing one.

      I am finding, after spending some years locked into a static-minded viewpoint, a renewed appreciation for having dynamic types available. In the design phase, it assists in creating useful feedback about a system. Coming at it from a point of experience helps in avoiding abuse of reflection, metaprogramming etc. and keeping the basic structure and algorithm design simple and gradually discovering which techniques will move the system forward. What a static approach tends to do is "lock in" everything, like when a child grows up and their skeleton becomes rigid.

      If your desire is code with eternal youth, then it would make sense to avoid locking things in! But that might mean denying it some of the wisdom of old age.

    • flavio81 6 years ago

      >is the mindset of writing programs, which other dynamic languages typically don't get right. The idea of "growing" a program, and responding to change, is part of that mindset and once it's internalized a lot of the language decisions make a lot more sense

      Excellent post.

jacquesm 6 years ago

If the author is reading this: typo in the first sentence, the year reads '2108' right now.

  • lkuty 6 years ago

    It's due to the fact that Lisp is ahead of its time :-)

  • xrme 6 years ago

    Oops. Fixed.