cbrozefsky 6 years ago

I've done two successful startups with lisp variants in my career, and continue to do clojure development at Cisco. The advantages of using lisps are real, if you stay focused on the problem domain and stay close to the customer -- you have great agility and flexibility. Woe to those who are tool jockeys and get too infatuated with their tools and code base, because there is a lot of rope to hang yourself and your teammates with.

The first was a Common Lisp web app for resource management, which was recently sold. Not a giant payout, but over its entire life, starting in 2000, it paid people's mortgages and put kids through school.

The second was a Clojure-based malware analysis engine, which we sold to Cisco about 3.5 years ago. That was a more conventional startup exit. We have scaled that into a global product. Additionally, we are developing even more Clojure products within Cisco, and hiring...

I know from hiring Clojure devs for the last 6 years that there are a lot of companies, big and small, using it.

  • malloryerik 6 years ago

    "Woe to those who are tool jockeys and get too infatuated with their tools and code base, because there is a lot of rope to hang yourself and your teammates with."

    Could you expand a bit on this? I'm not entirely sure what you mean, but sense some good advice.

    • kornish 6 years ago

      Not the grandparent poster, but stepping in.

      Lisp, due to its easy metaprogramming, gives abstraction enthusiasts rather more rope to hang themselves than languages which make abstraction inconvenient (e.g. Go, which has a simple type system, or Java, which is very verbose). So, if one isn't careful, it's possible to write a very clean, very nice Lisp DSL which is totally incomprehensible to anybody but the author, and thus hell to maintain over time.

      Relevant quote, from https://www.joelonsoftware.com/2000/07/22/microsoft-goes-bon...

      > When great thinkers think about problems, they start to see patterns. They look at the problem of people sending each other word-processor files, and then they look at the problem of people sending each other spreadsheets, and they realize that there's a general pattern: sending files. That's one level of abstraction already. Then they go up one more level: people send files, but web browsers also "send" requests for web pages. Those are both sending operations, so our clever thinker invents a new, higher, broader abstraction called messaging, but now it's getting really vague and nobody really knows what they're talking about any more.

      > And if you go too far up, abstraction-wise, you run out of oxygen. Sometimes smart thinkers just don't know when to stop, and they create these absurd, all-encompassing, high-level pictures of the universe that are all good and fine, but don't actually mean anything at all.

      More here: http://wiki.c2.com/?TooMuchAbstraction

      • m4x 6 years ago

        > So, if one isn't careful, it's possible to write a very clean, very nice Lisp DSL which is totally incomprehensible to anybody but the author, and thus hell to maintain over time.

        You can't really call it a "very nice" DSL if it is incomprehensible and unmaintainable :)

        What you have described is one issue, but it really doesn't have a lot to do with lisp DSLs. You can make that mistake in any language.

        A problem that is more specific to lisps is that since it really is possible to write beautiful programs, there is a temptation to spend a lot of time designing DSLs or otherwise refactoring your program to make it very clear, concise and maintainable - but without actually making it more useful. You may spend hours (or weeks) improving your code without actually improving functionality at all.

        This is less of a temptation in Java, for example, because it's such a rubbish language to program in in the first place that there's very little temptation to spend time refining your code. You just accept that your program is never going to feel like a work of art no matter what you do, and get on with actually producing a useful solution.

      • flavio81 6 years ago

        >So, if one isn't careful, it's possible to write a very clean, very nice Lisp DSL which is totally incomprehensible to anybody but the author, and thus hell to maintain over time.

        This is a myth. DSLs in Lisp are almost always done in Lisp syntax, no surprises. The idea of having a higher abstraction is to make the problem domain map more clearly to the code.

        >totally incomprehensible to anybody but the author

        Now, what people who like to repeat this myth forget to say is that when using an OOP language, say, Java, and using another person's code, one also needs to learn the classes that this person has created, the methods and how they interact. And this can be "totally incomprehensible to anybody but the author".

        >Relevant quote

        No, not relevant. Joel Spolsky is talking about device abstractions.

        • dustingetz 6 years ago

          Professional Clojure programmer here: not a myth. DSLs often abstract in the wrong direction and by departing from a language of values and function composition (eg by introducing macros) a bad dsl in fact becomes a barrier to good abstraction. Consider any DSL for html generation that predates React.js; react isn’t even a DSL at all, react is values and functions, so there’s all this good innovation happening around React that was a dead end on 100% of pre-React DSLs.

          • flavio81 6 years ago

            >a bad dsl in fact becomes a barrier to good abstraction. Consider any DSL for html generation that predates React.js; react isn’t even a DSL at all, react is values and functions

            React isn't a DSL; React includes a DSL for HTML generation.

            And by the way, we have had such a thing in Lisp for as long as HTML has existed. Tons of libraries allow you to inline HTML into your code.

          • flavio81 6 years ago

            >and by departing from a language of values and function composition (eg by introducing macros)

            You are assuming that a DSL is built using macros, which is incorrect; in Lisp you can easily create your own DSL using plain functions as well.
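
            For example, an HTML-generating "DSL" can be nothing but plain functions that return strings. A minimal sketch (tag, p and ul are illustrative names, not from any particular library):

              (defun tag (name &rest children)
                ;; Wrap CHILDREN in <name>...</name>, concatenating them in order.
                (format nil "<~a>~{~a~}</~a>" name children name))

              (defun p (&rest children) (apply #'tag "p" children))

              (defun ul (&rest items)
                (apply #'tag "ul" (mapcar (lambda (item) (tag "li" item)) items)))

              ;; (p "Hello, " (ul "one" "two"))
              ;; => "<p>Hello, <ul><li>one</li><li>two</li></ul></p>"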

            • kornish 6 years ago

              Sounds like maybe we're talking past each other. Would you mind clarifying how you're defining a DSL?

        • kornish 6 years ago

          I'm not fully sure what you're disagreeing with. Of course it's possible to write an overly abstracted DSL in Lisp, as it also is in Java or Go.

          > The idea of having a higher abstraction is to make the problem domain map more clearly to the code.

          Certainly. And, as per the Spolsky quote, too much abstraction can actually miss the expressivity sweet spot and start to obscure the problem domain.

          > No, not relevant.

          It is if you, ah, abstract away the specific context of his story.

        • kamaal 6 years ago

          >>And this can be "totally incomprehensible to anybody but the author".

          It always is.

          Code isn't magic: whatever logic you would write in one language, you have to write in another, for any application.

          The only exception is if your language provides abstractions you can build on. For example, in Perl you can do a lot of file operations and regex work without libraries. In Java you might need a library, sometimes a combination of libraries, whose documentation one might have to read.

          The same goes for Lisp. The only difference is that every once in a while one might discover a way to express large patterns of code in simple ways using macros.
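
          For instance, a repetitive pattern can be folded into one small macro. A hedged sketch (with-timing and parse-file are made-up names, not from any real library):

            (defmacro with-timing (label &body body)
              ;; Run BODY, return its values, and report how long it took under LABEL.
              (let ((start (gensym "START")))
                `(let ((,start (get-internal-real-time)))
                   (multiple-value-prog1 (progn ,@body)
                     (format t "~&~a took ~,3f s~%" ,label
                             (/ (- (get-internal-real-time) ,start)
                                (float internal-time-units-per-second)))))))

            ;; (with-timing "parse" (parse-file "ledger.csv"))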

      • kazinator 6 years ago

        I made a nice Lisp with a comprehensive macro system.

        I also use it to handle my contracting finances. Let's grep that software for "macro":

          $ grep 'macro' *.tl
          money.tl:(defmacro define-relational (name binary)
          money.tl:(defmacro define-arith-predicate (name usr-package-fun)
          money.tl:(defmacro define-unary (name usr-package-fun)
          time.tl:(defmacro def-date-var (name fallback-val . date-range-val-triplets)
          time.tl:         (defsymacro ,name (range-lookup *date* ,fb ,rvt))))
        
        Five hits; that's it! Three macros in money; they are all local shorthands just to eliminate some repetitive code. One macro in time for defining a date-specific variable, and that macro introduces a global symbol macro when invoked.

        The bulk of the system is OOP code: accounts, transactions, expenses, invoices, deltas, ... things with boring stuff like slots and methods. (Of course, I wrote that object system with a fair bit of contribution from macros; but that's in the language, not in the project).

        The DSL for recording transactions consists of plain old functions with transparent contents: just instantiate certain objects, do certain calculations and insert into the ledger.

        The date-specific variable thing is cool; it lets us define a variable, such as an income tax deduction rate, which has different values based on what date it is. Not the current system date, but the context date for the creation of a transaction: the date to which it is being accrued. A macro helps define that with a clear syntax, and the symbol macro makes it look like an ordinary variable.
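
        In Common Lisp terms, a rough, hypothetical analogue of that def-date-var idea (not the original TXR code) would be a symbol macro hiding a date-indexed lookup:

          (defvar *date* nil "The accrual date in effect for the current transaction.")

          (defun range-lookup (date fallback triplets)
            ;; Return the value of the (start end value) triplet containing DATE, else FALLBACK.
            (loop for (start end value) in triplets
                  when (and date (<= start date end)) return value
                  finally (return fallback)))

          (defmacro def-date-var (name fallback &rest triplets)
            `(define-symbol-macro ,name (range-lookup *date* ,fallback ',triplets)))

          ;; (def-date-var deduction-rate 15/100 (20180101 20181231 16/100))
          ;; deduction-rate now reads like an ordinary variable whose value follows *date*.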

      • jlg23 6 years ago

        To be fair, metaprogramming is just one additional rope. I've seen enough OOP over-abstraction in commercial CL code to waste a few lifetimes with maintenance. Worse, code written by people who understand AMOP and made full use of it[1].

        I think the advice given in this thread's root is good: Stay close to the problem domain. Really close. If you wonder whether you have gone too far, you probably have.

        [1] You of course can gather HTTP-headers via :list method combination. Seriously, you can. And then comes reality in the form of some closed source middleware that expects headers in a specific order and it even refuses to discuss elegance or adhering to protocol specs with you.

  • i_feel_great 6 years ago

    Out of sheer curiosity, is Chez Scheme still used at Cisco?

  • darkhorn 6 years ago

    You cannot use data as a program in Clojure, can you? I mean is data also a program in Clojure?

    • dragandj 6 years ago

      You can, and yes, it is.

reikonomusha 6 years ago

Rigetti Quantum Computing is a relatively recent YC startup that happens to use Common Lisp for building out their quantum programming language, compiler, and simulator stack. For those tasks, it has been an excellent choice, especially given the hugely exploratory and rapidly evolving space.

Rigetti hosted one of the Lisp meetups to talk about their use of Lisp and it was recorded in a talk "Lisp at the Frontier of Computation" [0].

[0] https://www.youtube.com/watch?v=f9vRcSAneiw

(Disclaimer: I'm the speaker in the video.)

  • jgalt212 6 years ago

    Whether you are pro-Lisp, anti-Lisp or agnostic, I think most HNers will find watching this talk time well spent.

    I have no relationship with Lisp, YC, or Rigetti.

dimatura 6 years ago

I absolutely love lisp. It's crazy how it's one of the oldest languages around, yet it still has more features than many newer mainstream languages.

At the same time, one of the reasons I don't like it too much for daily work is how it encourages cleverness. This sounds silly, I know. But sometimes when there are opportunities for cleverness -- such as with C++ (meta) template programming -- I become too tempted to make my own code more clever instead of, you know, making it do whatever it's supposed to do. That makes me appreciate the relative dumbness of, say, Go and Python (not that you can't be clever with Python, but it's not encouraged).

  • bonesss 6 years ago

    > ... one of the reasons I don't like [LISP] too much for daily work, is how it encourages cleverness

    I fully agree: nothing is as dumb as being too smart.

    From a linguistic PoV LISP stole my heart as a lad and never really let go... I think it's much more consistent and reasoned than most programming languages, and it's wildly powerful. I can only imagine what 5+ years hacking a domain on a proper LISP machine would bring...

    That said, I can't bring myself to use it in Enterprise projects because of the 'LISP curse'. While LISP got computing right, I think the type systems of the ML languages got industry right. They strike a nice balance between linguistic power, wild cleverness, and a brutal mandatory type system that means I rarely have to understand cleverness. As long as a function signature says it turns a 'Foo' into a 'Bar', I've got what I need.

  • zabuni 6 years ago

    My biggest issue as well. People seem to be drawn to creating a DSL for their problem set, and then programming in that. I've thought of Go as the Anti Lisp, in that there will be NONE OF THOSE SHENANIGANS LIKE THAT HERE. I think that's why Go draws so much vehemence. It's the "No fun allowed" of computer languages.

    • kamaal 6 years ago

      >>People seem to be drawn to creating a DSL for their problem set, and then programming in that.

      You have to create a DSL for your problem, no matter what programming language you code in. It's just that most people never program in anything beyond their area of exposure, and the DSL they create just appears very normal to them. But not to others.

    • aerique 6 years ago

      I recently had to use Go for the first time at work, while generally Common Lisp is my go-to language.

      I kinda liked it for a systems language. It has conveniences, trickled down from higher-level languages[1], that are missing from C & C++ and whose absence makes them such a chore to use.

      [1] for example: garbage collection, and being able to print an object without having to import vast libraries

      • DonaldFisk 6 years ago

        These are things that Lisp programmers take for granted, but it's difficult to convince most C/C++ programmers of the advantages of garbage collection; if they wanted it, they wouldn't be using C/C++.

        • flavio81 6 years ago

          >advantages of garbage collection, as if they wanted it they wouldn't be using C/C++.

          There are garbage collection libs for C++ and they are used by C++ programmers whenever necessary.

    • bonesss 6 years ago

      > People seem to be drawn to creating a DSL for their problem set, and then programming in that.

      ML languages strike a pretty nice balance here, IMO. Your DSLs tend to be normal code with a funny operator or two, structured into happy little lists.

    • kbp 6 years ago

      I think your version of "fun" is my version of "proper abstraction and factoring".

  • reikonomusha 6 years ago

    This is maybe a problem for small open source software projects, but not at organizations where code is vetted with constant review.

    I claim a lot of problems like this are solved perfectly well by team-agreed policy. For the same reason you don’t see people writing C functions taking void* everywhere for every problem, you don’t see professional Lisp codebases chock-full of terrible, homegrown DSLs.

    You will see bad code in Lisp, C, Go, Java, whatever if it goes without review or peer agreement. With that said, since coding in Go is a bit more menial, there’s less policy to make around it.

    • dmytrish 6 years ago

      You can write good code in Lisp, but isn't it wiser to choose a language/technology that drains less organizational resources for herding cats?

      • reikonomusha 6 years ago

        Peer review is always going to use resources. Policing style doesn't take much effort, in my experience.

  • DonaldFisk 6 years ago

    I've seen excessively clever Java. Instead of code which did what was wanted using POJOs, it acted as an interpreter with most of the business logic in configuration files, which eventually numbered in the hundreds and had to be generated by shell scripts.

    I avoid cleverness, including when writing Lisp. I rarely use macros, for example. Even if no one else reads my code, I'll still have to understand how it works in future.

  • lispm 6 years ago

    One of the problems of the Lisp approach, with its extreme flexibility, is that it amplifies the programmer. One can write extremely dumb code in Lisp - in many dimensions. Well trained/educated programmers can use it to their advantage, but the opposite kind can easily create unmaintainable software.

    But in enterprise OO software (-> Java) you can get similar effects once there are enough layers of factories, special purpose DSLs, greenspunning, etc.

    • kamaal 6 years ago

      >>But in enterprise OO software (-> Java) you can get similar effects

      'can' is the wrong word there. You 'always' get an effect worse than in Lisp.

      It once took me hours of sifting through several hundred classes in a massive Java code base to understand that all the programmer wanted to do was post an XML document to an endpoint.

      This is more of a software engineering problem and not a Lisp problem.

  • willtim 6 years ago

    It's been called the "Lisp Curse", but I don't think it's unique to Lisp. Any highly expressive and productive language is vulnerable to such social problems.

nnq 6 years ago

Lisps are like tiling window managers... Some have a taste for them. Most, don't.

Unfortunately some of Lisp's cool features are hard to implement without its simple syntax. Maybe the lesson is that we should find a f way to have syntax-independent programming languages! Just as we expect any successful language to have at least a couple implementations before being taken seriously, maybe we should expect a language to have at least a couple of different alternative syntaxes and instant perfect translation between them (with comments and code style preserved), so I can use syntaxA and work on the same codebase with my colleague using syntaxB. Then we'll be sure any metaprogramming or code intelligence tools don't suck, because they'll be damn forced to not use the program text but the AST (which will have to be standardized) as they should instead...

Lisps would not even seem so cool anymore if we could get our s together and build languages with (a) standardized AST representations and (b) multiple syntaxes... we've like already imported most lisp features into modern languages anyway, and standardized ASTs would make macros both trivial and manageable...

  • Buttons840 6 years ago

    Imagine something like Go and APL compile down to the same AST, and imagine two programmers working on the same project with different syntaxes as you describe. The Go user one day finds a function called "life" that contains several hundred lines of code, which is clearly bad style and should be broken up, so he goes to talk to his APL-using coworker who wrote the function. His coworker seems confused. "It doesn't seem too long to me," he says as he shows the APL function on his screen:

      life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵}
    
    I think what you proposed is a good idea, but it's just an idea. We'd need some new innovation to make it work, I think.

    • dang 6 years ago

      Your thought experiment shows how hard it would be. If we could do that, we could translate machine code into readable high-level programs.

      To put it the other way around, it reveals something about how choice of programming language affects not only how we write a program, but what we write, and even how we think about the problem.

      • shaunparker 6 years ago

        Rich Hickey had a great comment a long time ago on the Software Engineering Radio podcast [0] that I think speaks to this. He was asked why he didn't just make the ideas in Clojure a library for Java.

        Interviewer: "Wouldn't it have been possible or simpler to add this using some kind of library or framework to an existing language? In other words, why does this kind of stuff need language support as opposed to just a library?"

        Rich: "In fact it is a library. It's a library written in Java that you can use from Java. The whole thing about a language is, a language is about what does it make idiomatic and easy. So for instance, you can use precisely the same reference types, and the STM, and the data structures of Clojure, all from Java... The lack of idioms and language support means using exactly the same constructs, the same underlying code, from Java, is extremely painful compared to Clojure where it is the natural idiom."

        I've always loved that definition of a language.

        [0]: http://www.se-radio.net/2010/03/episode-158-rich-hickey-on-c...

      • Buttons840 6 years ago

        I thought about it some more and I think it would go the other way too. APL has a lot of very terse array operations, but in Go you might find a mix of loops and if-statements. Translating those arbitrary combinations of loops and if-statements into APL might be very very ugly, or even impossible.

        Consider assembly or basic, which use a lot of goto-statements. It can get very ugly trying to fit some arbitrary goto's into the more structured loops and if-statements we're accustomed to.

        • dang 6 years ago

          Exactly. There's a kind of entropic arrow from higher-level to lower. Moreover the different high-level languages occupy different places in that same space. It isn't just that you can't easily translate between them. It's that they lead to different classes of system being created in the first place.

          • fiddlerwoaroof 6 years ago

            What should be possible, though, would be to distinguish a surface syntax from the underlying code: e.g. a language that can transparently convert to/from C-style curly-brace blocks, Pascal-style begin/end and s-expressions. Possibly implemented as a bunch of git hooks that transform to a normalized representation (as, e.g., prettier does for JS) on push and transform to the programmer's preferred representation on pull.

            • dang 6 years ago

              I suspect you'd find that what look like surface differences actually go deeper into the meanings of programs, like a plant whose roots go deeper than one would expect.

              • le-mark 6 years ago

                There's a special level of Hell that involves streaming XML processing and COBOL. Since COBOL lacks dynamic memory allocation (speaking of COBOL 85 here) and you don't generally build things like linked lists or trees, there's a limited number of things you can conveniently do with streaming XML data. Many COBOL programmers are left with a feeling of despair, I wager.

                The Sapir-Whorf hypothesis is certainly real in programming languages[1].

                [1] http://wiki.c2.com/?BlubParadox

        • le-mark 6 years ago

          > Consider assembly or basic, which use a lot of goto-statements. It can get very ugly trying to fit some arbitrary goto's into the more structured loops and if-statements we're accustomed to.

          There was a lot of work in the 70's around equivalence of control flow between goto/conditional branch/unconditional branch instructions and "while programs", which showed there's actually a limited variety of control flow patterns, and that while programs can be made equivalent. I think the result of this is the presence nowadays of break/continue loop constructs in most languages. Maybe this is a folk theorem, I don't recall really.

        • throwaway7645 6 years ago

          I think with a sufficiently advanced compiler pretty much any page of Go code would compile to a few lines of APL, albeit in a different style. I have no source for this view, though; I can't imagine something in Go being more terse.

    • nnq 6 years ago

      That sounds like an imaginary non-problem. In such an extreme case, the APL developer would just have to put up with that line broken into 10 lines maybe. Just configure his editor accordingly (maybe to move through text like there are no newlines except under some condition or whatever - if you use an "exceptional" syntax waaay different from all the others, it's your problem to hack your Emacs/Atom/whatever to handle it for your need!). And btw, you could also have an autoformatter like gofmt that will break down that line of APL by a condition like "should not exceed maximum line length in any syntax", whether you like it or not, which might even avoid having the conversation altogether.

      Even if one of the syntaxes is not text based, maybe even a "syntax that involves arranging blocks in 3D space with a VR headset" or whatever, people could still agree on sensible defaults.

      The only issue I'd see might be with open source projects, where you could add a guideline like "please reformat your code so that it looks good in syntax X according to styleguide X-42 or your pull request will not be accepted". Yeah, the conversations might become less democratic, with some "style dictator" needing to push "the right rules" right down every contributor's throat to prevent endless bikeshedding, but it could work.

      If we'd give up some of this crap "style democracy" and have a few more authoritarian rules, we could instead enjoy multiple syntaxes easily. The ol' "more freedom under tyranny" paradox that people always seem to (like to) forget actually works, and it could work well for code too.

  • wolfgke 6 years ago

    > Maybe the lesson is that we should find a f way to have syntax-independent programming languages! Just as we expect any successful language to have at least a couple implementations before being taken seriously, maybe we should expect a language to have at least a couple different alternative syntaxes and instant perfect translation between them (with comments and code style preserved), so I can use syntaxA and work on same codebase with my colleague using syntaxB.

    Microsoft tried something very similar with C# vs VB.net (in the early days of .net there were even more experiments like J#). There exist tools to convert quite seamlessly between C# and VB.net code.

    The result: except for some geeks, people have decided that they prefer the C# syntax, and it seems to me that Microsoft is thus slowly trying to fade out VB.net.

    So it rather seems to me that people don't like it when there are multiple coexisting syntaxes, because even if there are good converters available, the mere existence of multiple syntaxes is still an inconvenience that wastes at least some productivity.

    • fiddlerwoaroof 6 years ago

      Well, the “right” way to do it would be to make the textual representation of the syntax a view-layer concern and not the format in which the code is serialized. For example, you could store everything as s-expressions or some binary representation of an AST and then have a lens that pretty-prints the code in the programmer's preferred syntax and then reverses the programmer's edits back into the underlying serialization format.

      I suspect this would work best for an image-based smalltalk-like system where the “source of truth” would be the version of the code persisted in the image and then the textual representation would be generated from that on demand.

      • fiddlerwoaroof 6 years ago

        A really interesting extension of this is to use lisp-style symbols instead of strings to represent variables, which would make it possible to localize the functions and keywords of your language so that a developer could collaborate in his/her preferred language.

    • pjmlp 6 years ago

      > The result: except for some geeks people have decided that they prefer the C# syntax and it seems to me that Microsoft thus slowly tries to fade out VB.net.

      Not really, VB.NET enjoys much more MS love than F#.

      Just check the tooling and documentation at MSDN, or the set of languages supported for UWP development.

      • wolfgke 6 years ago

        > > The result: except for some geeks people have decided that they prefer the C# syntax and it seems to me that Microsoft thus slowly tries to fade out VB.net.

        > Not really, VB.NET enjoys much more MS love than F#.

        This is surely true, but in my opinion F# is more different from C#/VB.net than "just being for the most part a different code representation". So I would consider F# an independent programming language - which indeed seems to have fallen out of favor with MS.

    • daanlo 6 years ago

      My understanding is that Facebook also did that with HipHop for PHP (PHP -> C++) and React Native (JS -> C/Java).

  • guicho271828 6 years ago

    Common Lisp has readtables, so technically Lisp has already been syntax-independent for a long time. You can switch between ()s and []s, for example.
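
    A minimal sketch of that, using the standard reader-macro API (nothing library-specific):

      ;; Make [ read a ]-delimited list, and make ] behave like ).
      (set-macro-character #\[ (lambda (stream char)
                                 (declare (ignore char))
                                 (read-delimited-list #\] stream t)))
      (set-macro-character #\] (get-macro-character #\)))

      ;; Now [+ 1 2] reads exactly as (+ 1 2).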

    • agumonkey 6 years ago

      Some people had fun with these, having XML, JSON and the like as a local DSL. One even made a "lib" of it... which, years later, few people know about.

  • lispm 6 years ago

    > we've like already imported most lisp features into modern languages anyway

    It's one thing to implement all kinds of features and another thing to integrate them well. You can bolt wings onto an existing ship, but it won't fly well.

  • rekado 6 years ago

    > we've like already imported most lisp features into modern languages anyway

    This makes it sound as if there are no modern lisps.

    What's the big deal of using a lispy language? Do other languages really need to desperately try to reimplement features that come more naturally to lispy languages? Are parentheses (only the round ones) really that much of a deterrent to motivate excessive work like that?

    I'm very happy with Guile and Racket.

    • twblalock 6 years ago

      Do you use them professionally?

      It seems to me that very few companies use Lisp-like languages in production. Even Haskell seems more common.

      The big deal with using Lispy languages is that companies don't want to use them -- it's a lot easier to hire developers who know other languages. So, I'm stuck with Java whether I like it or not, and I appreciate any functional programming features that can be packed in to new Java releases, even if they end up being a bit clunky.

      • phyrex 6 years ago

        Quite a lot of companies use Clojure professionally though. And you’re in luck, it plays great with Java!

      • Barrin92 6 years ago

        This doesn't really seem to be a unique feature of 'lispy' languages, because the overwhelming majority of languages do not have wide adoption in the market.

        The reason why adoption concentrates on a few languages is mostly network effects rather than objective features of the language itself.

        Maybe it's time for us to think about how to increase the adoption of languages rather than trying to bolt every feature onto Java just because there is a lot of Java code and devs. Otherwise we're going to be stuck in a never-ending mess of complication and legacy.

        • erikj 6 years ago

          Yeah, Lisp actually used to benefit from network effects between the 60s and the 80s, before the AI winter struck and Lisp was sacrificed as a scapegoat by the industry (MCC and NASA JPL are probably the most well-publicized cases).

      • rekado 6 years ago

        Yes, I use Guile for work. (But I work in a not-for-profit environment.)

  • thedufer 6 years ago

    I think OCaml fits your description. ReasonML is an alternative syntax, and it is in fact implemented as simply a parser that generates the same AST as the OCaml parser, plugged into the rest of the OCaml compiler. This is particularly easy in OCaml because syntax extensions are implemented as functions of type AST -> AST, so that type is well documented and fairly stable.

    I don't know of any way to translate between the two seamlessly; the translation itself would be pretty easy but preserving whitespace would be tricky.

    • nnq 6 years ago

      Yeah, I've always wanted to find the time to give OCaml another try after reading about ReasonML...

      I'm kind of put off after having tried Haskell and now seeing OCaml as an "inferior Haskell", and after looking at F# and seeing it as a "better-integrated OCaml". Also, folks in ML and data science, the area I care more about now, seem to like F# better; there's not much mention of OCaml there.

      • chillee 6 years ago

        I'd strongly object to calling OCaml an "inferior Haskell". OCaml has a much faster compiler, a great type system (sure, missing some of Haskell's more advanced type features, but adding modules/polymorphic variants), and it's strict.

        • nnq 6 years ago

          Not my words, just the "general feel" that some people are projecting.

          My experience was just that Haskell felt so elegant and conceptually simple... up until a point (like when you start needing to use lenses or think about monad combinators). Even the syntax was so nice and readable (love the special monad syntax, the `when` statement etc.).

          OCaml's designers seemed to not be very fond of the concepts of simplicity, simplification, reduction, elegance, improving by removing stuff etc. - only of correctness/soundness. Sounds very... French :) Not a bad thing (I live in France now and like it). I just don't like it in code. I'm more of a "let's simplify shit more and more until maybe the problem goes away, completely, by itself, instead of actually starting to implement a solution from the get-go" person.

          Oh, and yeah... I don't think I'd ever write a production system in a lazy language; I agree strictness is a feature. (Maybe I'm biased, but I just like to be able to reason exactly about when some code runs, whether it has already run by the time a breakpoint is reached, and all that old school stuff.)

          • chillee 6 years ago

            I see. I think I have a visceral reaction whenever people express sentiments like that, partially because it's Haskell marketing/branding that has worked too well. IMO, Haskell's insistence on purity (which has been an excellent choice advertising wise) has harmed other functional languages. "If you're gonna learn a FP language, why bother with a half-baked impure language?" is a sentiment I've seen thrown around multiple times.

            This is somebody else's point I read, but there's also a cognitive dissonance between 1. Haskell is pure, so it's very easy to reason about anything and 2. Imperative programming is just as easy in Haskell as any other language if you use the do notation.

            Could you give an example of OCaml's designers removing too much stuff? Are you talking about the lackluster stdlib here haha?

            As for laziness, I agree. I like lazy semantics (I'd argue they might almost be strictly superior to strict semantics), but there are too many things they sacrifice. Being able to reason about resource/time composability, having stack traces, having a debugger, etc. is amazing.

            • nnq 6 years ago

              To be honest, what gave me a "bitter taste" after looking at OCaml was not the lack of purity or "which stdlib" issue, but the lack of any kind of usable polymorphism.

              I mean, having to write + or +. or +/ instead of having something like a Typeclass for nice polymorphic operators? Feels like an annoying straightjacket and syntactic noise. And I don't understand the tradeoff: since I see types a lot like "compiler checked documentation that can also work as basic tests", I see no value in type inference for functions - I'd rather have only local type inference for variables, specify manually all the types of all the functions (this helps you think better anyway) and instead be free to binge on polymorphic operators and functions/methods.

              Haskell almost seemed to deliver that with its typeclasses. But then its laziness and monads and weird handling of records make it too weird for practical use for me.

              Is there an "OCaml with typeclasses or other form of polymorphism"? :)

              • chillee 6 years ago

                I guess the closest thing to what you want in Ocaml is modular implicits. http://tycon.github.io/modular-implicits.html

                As far as the tradeoff, my view on types is a bit different than yours, I think. First, a quick note:

                > specify manually all the types of all the functions

                This is usually encouraged in OCaml, in the form of a .mli file.

                I think types can be a lot more than "compiler checked documentation that can also work as basic tests". They allow you to encode invariants that allow the compiler to check more than basic stuff. I thought this video had a good example. https://www.youtube.com/watch?v=-J8YyfrSwTk&t=20m10s

                • nnq 6 years ago

                  wow, that video rocks! thx!

              • thedufer 6 years ago

                As Haskell shows, typeclasses are not ruled out by type inference. There is a reason they can't be added to OCaml, though, and it is functors. OCaml's functors are fundamentally incompatible with typeclasses.

                I think one of the problems with getting new people into OCaml is that you feel the pain of not having typeclasses long before you understand the power of functors.

  • barrkel 6 years ago

    The real differences between programming languages start cropping up in type systems, order of initialization in object hierarchies, whether operator overloading is permitted or not, whether operator overloading happens by global operator resolution, instance methods, per-type methods, or some mix of them all; you get the idea.

    All the mechanics of how values are created, interact, and are torn down are described by types, whether the types are dynamic or static. It's the type systems that are the limit to language interoperability, not syntax.

    I could go further: it is object systems that are the biggest problem. Program with inert data structures and pure functions of input to output, and interoperability is much more easily achieved. But if you expect to create an object whose behaviour is defined according to language A, and poke it into a function that interacts with it according to the idioms of language B, effectively smuggling A's semantics in via the polymorphic type, somebody is going to be surprised.

    And semantics are what people care about, too. Syntax does have its bikeshedders, but semantics are what make people move or stay.

    • nnq 6 years ago

      > Syntax does have its bikeshedders, but semantics are what make people move or stay.

      There are at least two big exceptions to this rule:

      - 1. Lisps - some people (some of them really smart, btw) just can't stand lispy syntax (I'd say it's most likely because they have the math/physics notations so deeply ingrained in their minds that they can't tolerate breaking apart from "thinking in them")

      - 2. operator-overloading-heavy syntaxes - when writing any kind of sciency code, you'll always have a camp of people that will want to use 50 2-letter operators instead of functions everywhere, and a camp of people that will just want "zero tolerance for operator overloading". They'd both have their reasons, and there will be no way to "make peace" -- in a multi-syntax setting you'd just have one syntax that will show the `plus` function/method used and another one that would render it as the `+` operator.

  • skybrian 6 years ago

    I'm working (slowly) on a fairly flexible standard syntax and syntax tree. It's not going to be as simple as S-expressions or JSON, though. I need five kinds of lists to get a reasonable mainstream-ish syntax, and this seems a bit unwieldy if you're just using it for data.

    I'm not sure what a standardized AST would really be. A standardized concrete syntax tree is doable, but each language is going to have its own statements and expressions (equivalent to special forms in Lisp). This is similar to how we can use JSON objects to represent all sorts of different types as key-value sequences with different sets of keys.

    For an evolving language, the AST needs to change from one version to the next. When you add a new kind of statement or expression, the tree-walkers in the tools usually need to adapt. The tools aren't stable unless the language is stable.

    • useranme 6 years ago

      What are the five kinds of lists you need?

      • skybrian 6 years ago

        - Phrases (space separated lists): a b (c d) e

        - Comma-separated lists: a, b, (c, d), e

        - Square-bracket lists: [a, b, c]

        - Blocks (curly brackets and separated by newlines or semicolons): { a b; c d }

        - Dotted lists, where sometimes the dots can be omitted: foo.bar.baz

        In combination, you can write something like:

          for x in [1, 2, 3] {
            foo.bar(x + 1, x * 2)
          }
        
        Which can be interpreted as 7 lists. The top-level list has five items, the last two of which are lists. The last list is a block containing one item, which is a three-item dotted list. The argument list after "bar" is a two-item comma list containing two phrases.

        I've implemented this, but I'm not satisfied with it; the corner cases are tricky to understand.

  • jimbokun 6 years ago

    My gut instinct is that translating between any two programs in different languages that compute exactly the same function will run into the Halting Problem somewhere. I feel like this is a place where some actual Computer Science analysis of the problem you are describing, before diving head first into coding a solution, could really pay off.

    • nnq 6 years ago

      That's why you'd need to have a common formally defined semantics underneath that all syntaxes would be forced to comply with. That solves the halting problem (unless somebody invents a truly weird syntax with meta-meta-templates and context-dependent grammar or whatever - like the "look, I can use C++ templates to implement a compile-time Lisp interpreter" horror), but, yeah, inventing a practical way to enforce that formally defined semantics is a hard problem in itself, and waaaay above my level of compsci knowledge :)

    • syrrim 6 years ago

      By definition, any Turing-complete language can implement a universal Turing machine. More, Turing equivalence means that any program in a Turing-complete language can be translated into any other. The most naive implementation of the GP's suggestion would involve creating an IR, then for every language a compiler to this IR, and a reverse compiler to get back source from this IR. With this, you could trivially jump between languages in a mere two steps.

  • bcherny 6 years ago

    I've been thinking about this problem a lot lately: does there exist an isomorphism between high level languages, such that you can map their ASTs and type systems back and forth?

    • setr 6 years ago

      I imagine type systems will fail: Haskell Int -> C int -> Haskell Maybe Int.

      Haskell's type system is more expressive, so information is lost in the translation to C (that the Haskell Int is non-null), thus it becomes Maybe Int when you translate back to Haskell.

      • tome 6 years ago

        A C int can't be null. Perhaps you're thinking of an int *.

  • runevault 6 years ago

    Huh, this is an interesting idea. Especially if you're willing to only do saving through the compiler, because then you could just save the AST, and when you load it your IDE feeds it to the compiler along with the output language and you get it back in whatever you're trying to compile to. Harder to make this editor-agnostic, but the idea is fascinating.

  • hodl 6 years ago

    Does the JVM qualify? Different languages (Java, Scala, etc.) compile to the same IL. Run analysis tools on that.

    • nnq 6 years ago

      NO. Far from it.

      I can't work on a codebase with a colleague, me writing/reading the code in Scala, he/she writing/reading the same code in Java or Kotlin. I can't code a project in Clojure, then hand it over to a team of programmers that see it and work on it as Java code.

      JVM languages are too different from one another (and the "common language" underneath, whatever it is called, is waay too low level to qualify; practically no one writes/reads it). Different syntaxes would need to share a common semantics underneath to allow seamless translation. (And yeah, apart from some academic experiments, I think we're far from abstract syntax-independent semantics in any production programming language.)

      To actually be able to treat the multiple JVM languages as syntaxes of the same language with a "seamless" experience you'd need code analysis tools of almost-superhuman intelligence. By that point we'd be out of job anyway ;)

    • empath75 6 years ago

      If the jvm does, then assembly does.

      I think the main reason that this wouldn’t work is that compilation isn’t perfectly reversible. Information is lost.

      I think you will always have this problem with translation, and it’s analogous to the idea of two people working on a novel, one in French, the other in English, with neither knowing the other language, expecting the word processor to come up with some lower representation that it can use to translate flawlessly back and forth.

oblio 6 years ago

I’d take advantage of this opportunity to start a discussion about a somewhat related article, the famed Blub article:

http://www.paulgraham.com/avg.html

> The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.

17 years later and I still don’t see all those Lisp startups :)

I’d argue that 8) will never happen because the gap between human parsing and computer parsing is too big. Except for a minority of human compilers that enjoy Lisp, most people don’t, so this notation will always be a niche for mainstream programming.

  • coldtea 6 years ago

    >17 years later and I still don’t see all those Lisp startups :)

    Just survivorship bias on PG's side (lots of Lisp-based companies just went nowhere) and gross extrapolation of the sole example of his one Lisp company into some universal truth (and even for his company, it was more the idea, timing, skills, execution and luck than Lisp, of course).

    • z0ltan 6 years ago

      Yup, this is about it really.

  • montrose 6 years ago

    Click on this link and search for Clojure: https://news.ycombinator.com/item?id=16052538

    • oblio 6 years ago

      I should expand then: 17 years later, where are the successful Lisp (ok, and Clojure) startups fighting the Microsofts (C++, .Net), Facebooks (PHP!!), Apples (Objective-C, Swift), Amazons (Java, C++, Perl), Googles (C++, Java, Python, Go), Twitters (Ruby, Java, Scala), Dropbox (Python, C++), etc.?

      • montrose 6 years ago

        Clojure is a Lisp. One of the distinctive things about Lisp is that it has dialects. You wrote your comment using software written in another of them.

        The youngest of the companies you list was founded in 2007. The first stable Clojure release was in 2009.

        • kbp 6 years ago

          > Clojure is a Lisp. One of the distinctive things about Lisp is that it has dialects.

          When one of those dialects is completely source-incompatible with every other dialect (the rest of them share non-trivial programs freely), and doesn't share the same core features as every other dialect (like list structure), then why is it still a dialect and not a different language? Calling all the other Lisps that share core features and run the same code different versions of one language makes sense to me; I don't see the point in applying it to any language that borrowed a couple ideas.

          If I have a large codebase written in (any) Lisp, will porting it to Clojure be significantly easier than porting it to Javascript or Ruby or any other modernish language? Not really; it's a complete rewrite either way. That sounds to me like it might just be a different programming language.

          • tazjin 6 years ago

            > the rest of them share non-trivial programs freely

            Heh. Most Scheme dialects don't even have source compatibility despite implementing the same standards.

            Common Lisp lets you write portable (between different CL implementations) applications, but it's easy to end up doing something that is implementation-dependent if you're not careful.

            • ScottBurson 6 years ago

              > it's easy to end up doing something that is implementation-dependent if you're not careful.

              I wouldn't say that. I've done a fair amount of porting code between CL implementations, and it's usually pretty straightforward. Implementation dependencies tend to be found only in code doing OS-y things like threads, external processes, network sockets, etc. This code is usually not difficult to find, and I would think that the authors would have expected it to be implementation-dependent. Computational code usually ports with zero effort, even when the machine word size changes.

              • DonHopkins 6 years ago

                Says the guy who wrote an interactive C compiler for Lisp machines! ;)

                "C combines the power and performance of assembly language with the flexibility and ease-of-use of assembly language."

                https://github.com/navoj/clisp-c

          • kamaal 6 years ago

            >>Not really; it's a complete rewrite either way.

            Most languages are an evolutionary dead end. They won't even get this far.

        • oblio 6 years ago

          > Clojure is a Lisp. One of the distinctive things about Lisp is that it has dialects. You wrote your comment using software written in another of them.

          I also wrote other comments on other sites using software written in PHP; what does that have to do with anything mentioned here? I guess it proves that you can use Lisp to write forum software. Still, not really related to my main question.

          Slightly off-topic, I should mention that I've read most of pg's articles several times over the years. I know about Arc, Viaweb, etc. I'm asking about this topic after having read all those articles.

          > The youngest of the companies you list was founded in 2007. The first stable Clojure release was in 2009.

          Ok, then some other big company founded since 2009. It's already been about 9 years; there should be some around. I can wait for examples. Those companies can use any dialect of Lisp invented since the 50's :)

          • paroneayea 6 years ago

            I know quite a few people employed working on Clojure. It's true that none of them are spectacularly large corporations, but there are quite a few companies out there using it day to day.

        • ivanpierre 6 years ago

          Ok, Clojure is a lisp-1 like Scheme, CL is a lisp-2.

          Clojure is not based on CONSes but you have CAR and CDR they are first and rest. Processors are no more based on accumulator and decrement registers... :D

          Cons exists but it works only on lists, as CONSes don't exist in Clojure, only lists.

          Empty list != nil.

          Clojure is based on immutability like Scheme; CL is often used in a mutable way (see the shameful f-set).

          {}, [], #{} are syntactic sugar and have already been implemented in CL.

          Clojure is a hosted language, so Clojure, ClojureScript, ClojureCLR can be somewhat different, but the core language is the same (same base Clojure library).

          • kbp 6 years ago

            > Clojure is not based on CONSes but you have CAR and CDR they are first and rest.

            CAR and CDR are operations on conses, so you can't have them without conses; that's like having sqrt without numbers. Lisp has FIRST and REST for operating on lists, and so does Clojure. Lisp also has CAR and CDR for operating on conses, but as Clojure doesn't have conses, it doesn't have CAR and CDR, either.

            > Processors are no more based on accumulator and decrement registers

            CAR and CDR were named for 'contents of address part of register' and 'contents of decrement part of register', not anything about accumulator or decrement registers. I'm not sure where you heard that processors don't have accumulators anymore, but that's definitely not true. The reason Clojure doesn't have CAR and CDR doesn't have anything to do with obsolete processors, though.

            > Clojure is based on immutability like Scheme

            Scheme is not based on immutability.

            > CL is often used in a mutable way (see the shameful f-set).

            Do you mean SETF? FSet[0] is an immutable data structure library for Common Lisp. A lot of Schemes have SETF, too, by the way, there's even an SRFI[1]. But SETF is just so that you can have one generic assignment operator; standard Scheme has plenty of commonly-used, destructive assignment operators like set-car!, set-cdr!, vector-set!, etc.
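
            A quick sketch of that last point - SETF as the one generic assignment operator:

              (defvar *h* (make-hash-table))
              (defvar *v* (vector 1 2 3))
              (defvar *l* (list 1 2 3))

              (setf (gethash :k *h*) 42  ; hash-table entry
                    (aref *v* 0) 10      ; vector element
                    (car *l*) 0)         ; cons cell, the counterpart of Scheme's set-car!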

            0: https://common-lisp.net/project/fset/Site/FSet-Tutorial.html

            1: https://srfi.schemers.org/srfi-17/srfi-17.html

          • lispm 6 years ago

            Lisp also has first and rest.

            Scheme isn't based on immutability.

            R7RS:

            > Literal constants, the strings returned by symbol->string, and possibly the environment returned by scheme-report-environment are immutable objects. All objects created by the other procedures listed in this report are mutable.

            > {}, [], ...

            These characters are reserved for the user in CL and have been used for a bunch of different things.

        • PyComfy 6 years ago

          Is it really? If we go back to the roots, Lisp is made of what John McCarthy calls the Primary S-Functions, which are atom, cons, car, cdr, and eq (a.k.a. =). Of these five, clojure only has two (atom and eq).

          • kbp 6 years ago

            > Of these five, clojure only has two (atom and eq)

            You can't have atom without conses (well, it would just be (lambda (x) t)); which Clojure function are you thinking of?

          • lispm 6 years ago

            Clojure has an ATOM function, but it does something completely different from Lisp's ATOM function.

        • flavio81 6 years ago

          >Clojure is a Lisp

          Almost. It deviates a lot from all the other Lisps, in particular the two main branches: Common Lisp and Scheme.

          However, it is still a Lisp-family language, and thus comes with its advantages.

        • lispm 6 years ago

          Clojure is sufficiently different from typical Lisp dialects and their implementations, that it forms its own language family.

      • jimbokun 6 years ago

        Ruby and Python, at least at the time of Twitter and Dropbox foundings, fell more on the side of the hackerish languages.

        Objective-C is only a mainstream language now because Apple is the most successful company on Earth. No one besides Apple was seriously using it at the time, and they used it to develop the most popular product in the history of computing. And of course, they then bet their development future on an entirely new language, Swift, which of course no one else was using, because Apple invented it.

        True, it's crazy Facebook had so much success with PHP. But they are also one of the biggest corporate users of Haskell, I think.

        True, on the surface Amazon and Google are on the stodgy side with their "supported" languages. But I believe Google internally is rife with custom, proprietary DSLs to drive a lot of their infrastructure? And they invented Go. Which is in many ways the opposite of Lisp in terms of design. But the larger point PG was making was that the dangerous technology companies evaluate technology decisions on the real potential productivity gains, and not on the number of job postings for that technology.

        EDIT: Saw Erlang mentioned elsewhere in this thread, which reminded me of WhatsApp, which led to 50 engineers supporting 900 million users and a $19 billion payday from Facebook! That's the perfect example of the kind of company PG would have been worried about.

      • guicho271828 6 years ago

        A few come to mind: iRobot[1], Grammarly, D-Wave. I feel there is something they have in common, but it is hard to describe.

        [1] That is, if you call iRobot a venture company; it is actually 30 years old.

      • fsloth 6 years ago

        Some people say Python is a good enough Lisp.

        • flavio81 6 years ago

          >Some people say Python is a good enough Lisp.

          After a full year writing Python professionally, and then Common Lisp, I can confidently say this is not true.

          • montrose 6 years ago

            What do you find is missing?

            • flavio81 6 years ago

              I do like Python, a lot, but you asked for it... the list is long!

              - true, effortless metaprogramming

              - speed close to C by use of type declarations and fixed arrays; this means up to 100x faster than CPython. No, PyPy, Jython and Cython don't get quite there.

              - no Global Interpreter Lock!

              - the condition/restart error-handling system, which gives you deluxe error handling and recovery.

              - a very flexible type system, plus very strong typing.

              - an extremely powerful object-oriented system (CLOS) -- light years ahead of most OOP systems, including Python's. This means multimethods/multiple dispatch, method combinations, around/after/before methods, multiple inheritance, and a MOP (a rough sketch follows at the end of this list).

              - true lambdas, not Python's one-line joke lambdas.

              - true interactive development, able to compile functions on the fly and thus change them while the code is running. Able to redefine classes while the code is running.

              - able to call Java libs and C libs at the same time, with ABCL

              - will run on many platforms and in the JVM without modifications to the code

              - separate namespaces for functions, vars, keywords and everything

              - tail call optimizations

              - something like Quicklisp (pip isn't as good)

              - built-in rational numbers, complex numbers, using the same operators for regular numbers

              - the LOOP macro, which I find better than Python's list comprehensions.
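
              On the CLOS point, here is a minimal sketch of multiple dispatch plus an :around method (the class and method names are made up purely for illustration):

                  ;; dispatch happens on the classes of *both* arguments
                  (defgeneric collide (a b))
                  (defclass ship () ())
                  (defclass asteroid () ())
                  (defmethod collide ((a ship) (b asteroid)) "ship hits asteroid")
                  (defmethod collide ((a asteroid) (b ship)) "asteroid hits ship")
                  ;; an :around method wraps whichever primary method gets chosen
                  (defmethod collide :around (a b)
                    (format nil "BOOM: ~a" (call-next-method)))
                  (collide (make-instance 'ship) (make-instance 'asteroid))
                  ;; => "BOOM: ship hits asteroid"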

              • montrose 6 years ago

                Thank you, this is very interesting.

                In what way are Python's lambdas restricted? Do they literally have to fit on one line?

                Do you have to use a separate version of + to add complex numbers in Python?

                Do you prefer loop because it's more concise or because you can say things you can't say in Python? Can you give me a couple examples of things that work better with loop?

                (As you can tell from my questions, I don't know much about Python, but as a Lisp hacker I'm curious about other Lisp hackers' view of it.)

                • cmcaine 6 years ago

                  Python lambdas cannot contain statements, only expressions. Typical python contains a lot of statements, so that's a bit annoying.

                  You can define inner functions and so on, and most of the times that I want to use lambdas it's just because I've been stuck in a language without *-comprehensions for too long.

                  + works fine on complex numbers.

                  I'm not super familiar with the loop macro, but it lets you do things similar to both a Python comprehension and a Python for loop. I don't think it does anything you can't do in Python.
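
                  For a taste of LOOP (a minimal sketch; it does far more than this):

                      (loop for x in '(1 2 3 4 5 6)
                            when (evenp x)
                              collect (* x x))        ; => (4 16 36)
                      (loop for ch across "hello"
                            for i from 0
                            do (format t "~d: ~c~%" i ch))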

        • jimhefferon 6 years ago

          But macros?

          • thanatropism 6 years ago

            There are a few cool macros in Python, e.g. decorators. It's just that you can't define your own without making it past Guido.

            (To be clear, individual decorators aren't macros; it's the decorator syntax itself that is the macro-like part.)

            • xapata 6 years ago

              You kinda can, through the file encoding specifier. It'll let you write reader macros that programmatically modify the AST, essentially.

      • darkhorn 6 years ago

        Erlang is not a Lisp, but it is very similar. Erlang is widely used in the telecommunications industry.

  • marcosdumay 6 years ago

    > 17 years later and I still don’t see all those Lisp startups :)

    I don't see where in the article he is claiming that you would see many Lisp startups in the future.

    • oblio 6 years ago

      Not everything has to be spelled out. He wrote an article 17 years ago declaring Lisp a secret weapon. If it were, wouldn’t you expect the passage of time to prove his point? Or it would mean that everybody else is kind of dumb for not noticing this secret weapon in almost 2 decades :)

      • marcosdumay 6 years ago

        Just trying not to ignore what is spelled out:

        >But I don't expect to convince anyone (over 25) to go out and learn Lisp. The purpose of this article is not to change anyone's mind, but to reassure people already interested in using Lisp-- people who know that Lisp is a powerful language, but worry because it isn't widely used. In a competitive situation, that's an advantage. Lisp's power is multiplied by the fact that your competitors don't get it.

        • oblio 6 years ago

          Heh, I forgot that part.

          I'd argue that today, libraries and heavily used code (more bugs already found and fixed, more answers on Google) > a powerful language.

          • le-mark 6 years ago

            I think the main thing that's happened is that many languages and frameworks coalesced to fill the void where PG and Morris used Lisp to such great advantage, i.e. building featureful, performant web apps. In the mid-90s there was Perl, I guess. Nowadays there's Python, PHP, Ruby/Rails, etc., with all their frameworks and environments.

          • marcosdumay 6 years ago

            > more bugs already found and fixed

            I'd argue that libraries that used to have bugs found and fixed tend to have many more of them to catch you by surprise later. They also tend to have less stable interfaces.

            > more answers on Google

            Also, the libraries complex enough to need many questions on Stack Overflow are normally the same as the ones above.

lisper 6 years ago

If you want a good (IMHO) example of how Lisp can be made to "meet the problem" so to speak, take a look at this:

http://www.flownet.com/ron/lisp/djbec.lisp

and take a look at the functions xpt-add and xpt-double. The details don't really matter. What matters is that what is going on here is a whole bunch of modular arithmetic. The details are described here:

http://www.hyperelliptic.org/EFD/g1p/auto-twisted-extended-1...

Notice how the Lisp code is virtually cut-and-pasted from that page, despite the fact that it's not Lispy syntax, and the mathematical operators are NOT traditional addition, multiplication, etc. These are operations on modular integers and elliptic curve points. And yet the code looks just like the source material from which it was derived.

To make all this magic happen required less than 500 LOC.
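
The rough idea, as a hypothetical sketch (this is not the actual djbec.lisp code, which is worth reading on its own): shadow the standard operators inside a package and rebind them to modular arithmetic, and the formulas transcribe almost verbatim:

    ;; hypothetical sketch: shadow +, -, * in a package and rebind them mod *p*
    (defpackage :mod-arith
      (:use :cl)
      (:shadow #:+ #:- #:*))
    (in-package :mod-arith)
    (defparameter *p* 101)                      ; toy modulus, not a real curve prime
    (defun + (&rest xs) (mod (apply #'cl:+ xs) *p*))
    (defun * (&rest xs) (mod (apply #'cl:* xs) *p*))
    (defun - (x &rest xs) (mod (apply #'cl:- x xs) *p*))
    ;; a formula like "E = B - A" can now be transcribed as written:
    (let* ((a 7) (b 95) (e (- b a))) e)         ; => 88

However djbec.lisp actually does it, the point of the sketch is only that Lisp lets you rebind the arithmetic vocabulary, so the transcription can stay literal.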

protomyth 6 years ago

As much as I like Lisp or Smalltalk, I cannot help but wonder if they are really great in the large. Smalltalk has a pattern called Double Dispatch that makes sense but seems to tightly couple the classes. I keep looking at, of all things, the VBX market in the 90's and wonder why we don't have those types of components with current languages. I think something based on agents that create loosely coupled systems, where it is easier to expand and replace parts, would be a better place to look for a future programming language.

  • shalabhc 6 years ago

    > I think something based on agents that create loosely coupled systems, where it is easier to expand and replace parts, would be a better place to look for a future programming language

    +1

    A different question would be - what would a Smalltalk like system look like 'in the large'? (There was the Croquet project that was a live distributed environment so there has been some work done here already.) It seems the hallmark of Smalltalk was late binding so perhaps the core model can be extrapolated to a loosely coupled distributed system?

    • protomyth 6 years ago

      I've seen one large Smalltalk project, and it seems like it evolved into a tightly coupled group of classes. It was really hard to reason about any changes since the dependencies were pretty high. I think if you went with clusters of VMs talking to each other you could probably do a pretty good job.

      I do wonder whether, if the work at Yale on Linda[1] & Tuple Spaces[2][3] had been more popular when it was introduced in Java[4], people would have reasoned a bit differently about big systems. Going the CORBA[5] route always seemed to generate really problematic systems.

      Then again, I really thought it would be cool to have people program in Occam[6] just to force some thinking about the organization of programs.

      1) https://en.wikipedia.org/wiki/Linda_(coordination_language)

      2) https://en.wikipedia.org/wiki/Tuple_space

      3) https://books.google.com/books/about/Mirror_Worlds.html?id=j...

      4) https://www.javaworld.com/article/2076849/core-java/jini-tec...

      5) http://www.corba.org/

      6) https://www.eg.bucknell.edu/~cs366/occam.pdf [PDF]

      • shalabhc 6 years ago

        Right - I think the good ideas in Smalltalk are messaging (~actors) and the liveness. But the usual ST model is too 'flat' - all objects and classes live in the same space. This could be extended such that each object is a space in itself, with its own inner objects - this model can then recursively scale up or down to multiple levels and also work for large systems.

        > work at Yale on Linda[1] & Tuple Spaces

        Ah, I see what you mean by loosely coupled. Yes, Linda is neat and if we think about objects/messaging with nested spaces, then each space can have its own form of messaging (could use tuple spaces, for instance).

        Also, I really like the cluster of VMs idea because you can then implement various kinds of cluster-wide virtualizations on it (Croquet did this with Squeak VMs). I do think that cluster-wide virtualization is the future of large programmable systems. We've built up some abstractions that work 'in the small' - we don't have to write assembly or do register allocation, but most 'languages' still limit you to thinking about what goes on within one Unix process, a very small part of the system. Whole-system programming abstractions aren't really there yet.

        • protomyth 6 years ago

          > But the usual ST model is too 'flat' - all objects and classes live in the same space.

          Yeah, but I think it's more a problem with the original everything-is-an-object concept. I've thought about it a lot, and keep coming back to Animal Farm[1], paraphrased as "All objects are equal but some objects are more equal than others." We create objects that really do things and value objects, but it takes time to examine a program and figure out the structure of the objects. Telling the doers apart (often named Agent or Manager) is a pain. It's not explicit in the programming language.

          > Yes, Linda is neat and if we think about objects/messaging with nested spaces, then each space can have its own form of messaging (could use tuple spaces, for instance).

          My thoughts on it are that you could really use the tuple spaces as the VM boundary (communication between VMs is the tuple space). Plus, the tuple space provides an interesting way to decouple logic and actually replace agents in a running program: freeze the queries to a tuple space, replace the agent waiting on the tuple space, then unfreeze the tuple space. I think building a language and patterns to not only allow maintenance but also facilitate program upgrades would be very helpful. Not to mention testing of agents outside the whole of the program. A test bench where you can load an agent, connect it to a tuple space, and then run a ton of data through it to perform unit tests would be very interesting.

          1) https://en.wikipedia.org/wiki/Animal_Farm
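
          To make the tuple-space vocabulary above a bit more concrete, here is a toy, purely hypothetical sketch (single image, no blocking, no cross-process matching; real Linda systems add all of that):

              ;; OUT publishes a tuple, RD reads a match, IN reads and removes it
              (defvar *space* '())
              (defun match-field-p (pat field)
                (or (eq pat '?) (equal pat field)))          ; ? acts as a wildcard
              (defun out (tuple) (push tuple *space*))
              (defun rd (pattern)
                (find-if (lambda (tuple)
                           (and (= (length pattern) (length tuple))
                                (every #'match-field-p pattern tuple)))
                         *space*))
              (defun in (pattern)
                (let ((tuple (rd pattern)))
                  (when tuple
                    (setf *space* (remove tuple *space* :count 1 :test #'eq)))
                  tuple))
              ;; (out '(temperature "room-1" 21))
              ;; (rd  '(temperature "room-1" ?))  ; => (TEMPERATURE "room-1" 21)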

          • shalabhc 6 years ago

            > We create objects that really do things and value objects, but it takes time to examine a program and figure out the structure of the objects.

            I agree with this too. I do think there is another layer of organization needed, which can be inspected to see the high level structure of the system - which objects form which part of the interconnected graph and which objects are passed around as messages, etc. I don't think everything-is-an-object needs to be changed though. An old attempt at layering another organization on top of objects is in the PIE papers: http://esug.org/data/HistoricalDocuments/PIE/PIE%20four%20re...

            > My thoughts on it are that you could really use the tuple spaces as the VM boundary (communications between VMs is the tuple space).

            Oh I see, interesting. The 'freeze queries, update, resume' idea sounds very useful, as similar patterns are often re-implemented many times over by various distributed databases. Might as well make it a standard feature of the system. One question is why only apply this to VM boundaries? Could this be applied at a smaller scale (e.g. between smaller object clusters within one VM)? Applying it with finer granularity might let us update a single object or method only.

            The 'freeze' idea seems to fall into the managed time concept. Some interesting work in this area is Virtual Time by David Jefferson - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.134... - and also NAMOS by David Reed.

  • mark_l_watson 6 years ago

    I wonder if there is a future for Pharo Smalltalk as a user interface and coding environment that uses backend services written in other languages. Last week I saw Python matplotlib bindings for Pharo and as I played with this, I thought of interfaces to TensorFlow, ML services at AWS, GCP, and Azure, etc. This idea probably does not make sense, but the idea hit me.

    EDIT: like a replacement for Jupyter, but more capable.

    • scroot 6 years ago

      Amber [1] is a Smalltalk that you use to develop in a Web Browser, but which will compile for you to JS on the back end.

      [1] http://www.amber-lang.net/

    • protomyth 6 years ago

      Technically, it should be possible to build a Pharo / Squeak environment in a dynamic language like Python or Ruby. I generally see it as more a cultural issue. Pharo (and Smalltalks in general) are very productive environments, but they live on the concept of live objects (Smalltalk in the form of an image-based development environment). Given the love of text editors and Unix-style file development, I doubt there would be a move in that direction.

  • mercer 6 years ago

    Are you secretly promoting Elixir/Erlang, or am I misreading what you mean with 'agents' in this case?

    • protomyth 6 years ago

      Down that path, but I'm just not sure Erlang is enough. I might be a bit infected by the promise of Telescript[1] or a paper[2]. RPC over HTTP (in one or more of its forms) won over mobile code, but I think the concept of software agents actually has a bigger win in the building of complex systems that can be reasoned about in the small as well as the large. I also think that the ability to replace software agents in a large system would be easier than refactoring / replacing objects.

      1) https://en.wikipedia.org/wiki/Telescript_(programming_langua...

      2) http://robotics.stanford.edu/~shoham/www%20papers/AgentOrien... from http://robotics.stanford.edu/~shoham/

      • mercer 6 years ago

        I'd love to hear your thoughts on what you'd like to see beyond Erlang. I'm new to Elixir/Erlang, so very curious to see it in a broader perspective.

        • protomyth 6 years ago

          I love Erlang and their attitude about keeping systems running. Plus, the syntax gets a bad rap when it's just different. Sadly, the process and state models are different from what really interests me in agents.

          • mercer 6 years ago

            Which part interests you about agents?

            • protomyth 6 years ago

              I'm having a hard time writing up a complete answer, and will probably write an actual article someday, but I think the key for me is the loose coupling of functionality, plus mobile code as a very interesting way to scale up a server farm. The whole thought of combining agents and tuple spaces into a system seems a lot closer to true reuse in software than objects have been. I think an agent programming language, a good speech-act-based communication language instead of the typical API, decent data types, and tuple spaces would make for a heck of a nice programming environment. I've had a hobby project for a long time to bring those elements together but I am still in the learning phase.

Keyframe 6 years ago

There are two language families I fell in love with that I can't use: Lisp and its variants (Scheme), and Ada. A joy to work with, a joy to write in, yet... whenever I try to write something more elaborate, I hunt the web for libraries, or the toolchain is lacking this and that. Meh. I still haven't given up, but in the meantime I have come to terms with the fact that I am a C programmer and always will be. I enjoy C as well (yeah). The next step would probably be to write core 'modules' in C and glue them together (think one level above) in Lisp or Ada. There's also this thing I never quite understood, but which intrigues me: some people apparently write Lisp which emits C, or even assembly.

pasabagi 6 years ago

Reading the article, I'm a little bit curious - I've heard a lot that more productive languages become less productive over a certain team size, since productivity typically comes from a lesser degree of strictness. Does lisp have similar problems?

  • loup-vaillant 6 years ago

    > productivity typically comes from a lesser degree of strictness

    Does it? Maybe it's the way I think, but I have observed that I could move faster when the compiler is strict, because I don't have to think about whole classes of errors—they're automagically checked.

    I like my compiler to be disciplined, so I don't have to.

    • pasabagi 6 years ago

      Me too, to be honest.

      Still, for example - I can see that dynamic types reduce a certain quantity of boilerplate. If you just don't make the kind of mistakes that make static checking necessary, then you could be more productive that way. I think a lot of boilerplate stuff is a bit like that.

      I wouldn't be - but still.

      • loup-vaillant 6 years ago

        Explicit typing adds boilerplate. Static typing, not so much. Type inference is bloody effective at removing boilerplate. To the point where it can take you quite close to the best of both worlds: static checks, without boilerplate.

        Boilerplate isn't the reason why static typing might be worse than dynamic typing. Undecidability is. Inference algorithms tend to reject some correct programs. The real question is then, are we interested in those programs? So far, my experience has been "not really". I know of a few cumbersome exceptions, but overall, Hindley-Milner type inference basically lets me write the programs I want.

        It's not like Java is a worthy ambassador for static typing.

        • ScottBurson 6 years ago

          The killer app for dynamic typing is when you want to develop the program concurrently with executing it. I don't know that it's technically impossible for a statically-typed language to support that usage mode, but I've never seen it done.

          • yyhhsj0521 6 years ago

            I prototype and test with ghci a lot while I'm writing Haskell. Not really impossible.

            • kqr 6 years ago

              Does GHCi no longer flush the entire application state when you reload a module?

              I know it's not as much of a big deal in Haskell, where mutable state is limited, but I still remember it being an annoyance to have to manually write my data out to disk and then reload it again.

        • millstone 6 years ago

          Boilerplate can happen at a higher level than just writing down type names.

          A good example is "variadic" functions. Consider computing the max of three numbers. Clojure's `max` takes any number of arguments, while Haskell has `max` that takes two arguments, or `maximum` that takes a list of arguments; both require some boilerplate to apply if you have exactly three.

          It's impossible to write down a Haskell function equivalent to Clojure's `apply`. You have to work around its absence, e.g. with awkward folds.

          Or for a more immediate example, compare Clojure's flexible `map` to Haskell's big 'zip' family: zip, zip3, zipWith, zipWith3, and so on up to zipWith7.

          • loup-vaillant 6 years ago

            The zip family has more names than just `map`, but is that boilerplate? You have to pick the right name for your code, but that doesn't add any code at the call site. And on the implementation side, it's not like map's flexibility came from nowhere. I wouldn't be surprised to find out that Clojure's `map` is just as verbose as Haskell's entire 'zip' family.

            The whole concept of an `apply` function is a bit alien to me. I generally just call the damn function. Also, Haskell has an `apply` function. It's the `$` operator, defined thus (the first line is optional):

                ($) :: (a -> b) -> a -> b
                f $ x = f x
            
            It helps that Haskell functions all have one argument (what with being curried by default). If you want several arguments, just apply them one by one: f x y is (f x) y. Note that f $ x $ y is not the same thing, because ($) is right-associative and parses as f (x y). Where ($) does help is in dropping parentheses:
            
                f $ g x        -- same as f (g x)
            
            (I personally tend to just use the parentheses.)
            • kbp 6 years ago

              > The zip family has more names than just `map`, but boilerplate? You have to select the right name for your code but you don't have to select more names.

              Well, you have to select one of the zip functions in addition to map. Those are more names.

              > Also, Haskell has an `apply` function. It's the `$` operator

              That isn't what apply does. Apply takes a function and applies it to a list of arguments, (f x y) is equivalent to (apply f (list x y)).
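
              In Common Lisp terms, a quick sketch just to pin down the vocabulary:

                  (funcall #'max 1 2 3)       ; call with spread arguments => 3
                  (apply #'max '(1 2 3))      ; call with an argument list => 3
                  (apply #'max 1 '(2 3))      ; APPLY also allows leading spread args => 3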

              • loup-vaillant 6 years ago

                > Apply takes a function and applies it to a list of arguments,

                And the way to translate that in Haskell is to apply a single argument.

                Haskell functions are not Lisp functions. Lisp functions take a list of arguments, so `apply` should apply to a list of arguments. Haskell functions have one argument, so `apply` should apply to a single argument. That argument could be a tuple, by the way.

                • kbp 6 years ago

                  > Haskell functions are not Lisp functions. Lisp functions take a list of arguments, so `apply` should apply to a list of argument.

                  I understand how currying works, thanks, but that doesn't change the fact that $ is not equivalent to apply. A version of apply that dealt with curried functions would just have to fold over a list, and I don't think it could be properly generalised in Haskell's type system (but I'm not an expert at Haskell).

                  For your definition of apply, Lisp-1's don't have it, because as you said, you'd just call the damn function. Lisp-2's sort of have it, but it's called funcall. Regardless, your definition of apply is not what apply is in Lisp.

                  • loup-vaillant 6 years ago

                    > I understand how currying works, thanks, but that doesn't change the fact that $ is not equivalent to apply.

                    You are seeing that a square hole isn't fitting a round peg, and accusing the hole of being too sharp. Or at least millstone did. Complaining about the absence of a Clojure like `apply` function in Haskell doesn't make any sense.

                    It would be like complaining about having to work around the lack of infix notation in Forth. Whoever does that needs to let go of their old thinking habits, and rewire their brain for the new language. Not everybody can do that. Fewer still want to.

                    Now if someone says, "this real-world problem is easier to solve in Clojure, in Haskell you have to jump through hoops", that would be different. For the present case, one would have to show how the absence of a proper Clojure-like `apply` function could force anyone to jump through any hoops to solve a concrete problem.

                    > Regardless, your definition of apply is not what apply is in Lisp.

                    You've either said too much, or not enough. What "apply" does mean in Lisp, then?

          • kqr 6 years ago

            >It's impossible to write down a Haskell function equivalent to Clojure's `apply`.

                import Control.Monad (foldM)
                import Data.Dynamic (Dynamic, dynApply)
                
                apply :: Dynamic -> [Dynamic] -> Maybe Dynamic
                apply fn args =
                    foldM dynApply fn args
            
            "Impossible" may be too strong a word.
          • tome 6 years ago

            Meh. I hardly consider

                maximum [a, b, c]
            
            to be boilerplate.
          • Y_Y 6 years ago

            You can certainly write variadic functions in Haskell. The type looks funky and nobody will like you for it, though.

  • marcosdumay 6 years ago

    Avoiding compiler verifications is one way to get some extra productivity on a class of problems. As you said, it does not come without downsides.

    But there are many language level tools that will help with your productivity. Some are all upside things, others come with different trade-offs. The article isn't even focused on compile-time verifications.

  • rerx 6 years ago

    Paul Graham's Viaweb was rewritten in C++ and Perl after the acquisition by Yahoo. I can imagine that this was easier to deal with for a large team than the original implementation in Lisp.

    • lispm 6 years ago

      That's not unusual, historically some software had prototypes in Lisp. Some managed to get into production. Some even proved very hard to replace.

      But that was at a time when more people learned Lisp, there was less choice in tools, and the ecosystems were smaller. In the 70s/80s one could buy ten years into the future with the right hardware/software, and government/military money was doing the financing.

      Take for example the Connection Machine CM-1, an early massively parallel computer with 2^16 processors. It was initially developed largely for and with Lisp. You could program it in *Lisp from a Lisp Machine - one of the most expensive co-processors. Fortran and C were added later for certain commercial users. Very expensive stuff, and at least ten years ahead.

      Today the landscape looks different.

      • copper_think 6 years ago

        The garbage collector for Microsoft's Common Language Runtime (CLR -- the VM for .NET) was written in Lisp originally and then transpiled into C.

        • erikj 6 years ago

          Also, Apple's Interface Builder began its life as a Lisp program: http://vimeo.com/62618532

          • lispm 6 years ago

            Postgres started as a Lisp program, and turned out to be too difficult to develop in a mix of C and Lisp.

            The Objectstore database was developed by former Lispers, who wrote an earlier object-oriented database in Lisp.

          • mark_l_watson 6 years ago

            +1 for mentioning my friend Denny. I wrote an application that ExperTelligence sold for me - lots of fun. ExperTelligence had a high talent density.

    • flavio81 6 years ago

      >I can imagine that this was easier to deal with for a large team than the original implementation in Lisp

      You can bet this was simply due to a lack of Lisp developers at Yahoo. And I can bet that the Lisp code was easier to read.

      I once delivered a sophisticated pricing modeler to a financial institution, in Python, done mostly in functional style. The code was well documented and commented, and easy to understand. And Python is one of the easiest languages to learn.

      The customer's IT department insisted on a complete rewrite in Java, because that's what their developers knew.

    • deepaksurti 6 years ago

      >> C++...this was easier to deal with for a large team

      This was safer for pointy haired bosses :)

didibus 6 years ago

As much as I like Lisp, I don't think your choice of language will make or break your startup.

  • hnzix 6 years ago

    I worked at a failed SaaS startup where the pointy-haired boss insisted we use Java because there were lots of grads who knew it. At some point it took forever to wade through the boilerplate (and questionable grad code) to add a feature. This kills the startup.

    I think this says less about the language, and more about the kind of talent you will attract based on your language choice.

  • coldtea 6 years ago

    If anything, a study of actual successful startups will show that it's almost irrelevant, as their stacks are all over the map.

  • cobbzilla 6 years ago

    Sure, within reason. I wouldn't try to build a product in a language where the available labor pool is tiny given the problem domain. For example, don't build an enterprise SaaS platform in pure x86 assembly; don't write device drivers in Delphi.

    • FeepingCreature 6 years ago

      Eeeh. Having a small but not tiny labor pool can be beneficial in that the programmers you get are probably better than average, since average people don't tend to learn off-path languages.

      • cobbzilla 6 years ago

        Rare skills tend towards the expensive, either in money or time-to-recruit, or both. It can sometimes be worth it for specialist tasks, but I'd be cautious.

  • agumonkey 6 years ago

    A few times I've read that Lisp is on the edge: if your project doesn't need wild experiments you won't benefit, but when you do (a bit like another language, Haskell) you'll see exponential returns.

agumonkey 6 years ago

Lisp is everywhere these days. Look at destructuring, higher order functions .. this is just the tip of the iceberg. These genes stood the test of time and spread all over.
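
For instance, destructuring in the Lisp sense (a small Common Lisp example; many other languages have since grown close equivalents):

    (destructuring-bind (a (b c) &rest more) '(1 (2 3) 4 5)
      (list a b c more))                      ; => (1 2 3 (4 5))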

  • erikj 6 years ago

    I'd rather use the whole thing than scattered bits here and there.

    • agumonkey 6 years ago

      I think it's a "natural" thing. In any given context, only individual ideas can be adopted efficiently, not the whole, so islands (languages, companies) grab what they can in gradual chunks.

  • flavio81 6 years ago

    >this is just the tip of the iceberg.

    However the full iceberg is still only found on the Lisp languages.

    Some features are found in other languages, but the combination of metaprogramming with homoiconicity is one of the key elements, and that is arguably only found in Lisp.

gyrgtyn 6 years ago

do any of the lisps have a package system like npm with tiny, nano libraries?

  • shmolyneaux 6 years ago

    For package management Common Lisp has Quicklisp, and Clojure has Clojars. I can't really comment on the "nano" libraries part though.

  • eadmund 6 years ago

    Quicklisp is excellent for finding & installing Lisp libraries. I don't know that nano libraries are a good thing — the world probably needs a decent strings package more than it needs left-pad, right-pad, count-chars &c.

  • flavio81 6 years ago

    >do any of the lisps have a package system like npm with tiny, nano libraries?

    In the Common Lisp world there is Quicklisp which works perfectly, however "nano libraries" a la NPM are often shunned in that community (as well as in many others).

    Often a Common Lisp library will only require 2 or 3 dependencies and each one might require 0, 1, or 2 dependencies.

    Part of this is due to the comprehensive "standard library" built into the language specification itself, part is also due to maturity -- some third party libs have become de-facto standards.
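
    A rough sketch of the workflow, assuming an image with Quicklisp already installed (the library names are just well-known examples):

        (ql:system-apropos "json")        ; search the current distribution for systems
        (ql:quickload "cl-ppcre")         ; download if needed, then compile and load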

  • Keyframe 6 years ago

    Roswell for lisps themselves and quicklisp for 'libraries', and then there's ASDF as well.

    • Annatar 6 years ago

      ASDF appears to have been deprecated in favor of Quicklisp, or rather, Quicklisp appears to be the next generation of ASDF, since I saw asdf functions inside of Quicklisp.

      • kbp 6 years ago

        > ASDF appears to have been deprecated in favor of Quicklisp

        No, ASDF and Quicklisp do different things. ASDF handles compiling and loading systems, whereas Quicklisp handles downloading systems and their dependencies. ASDF is kind of like make, and Quicklisp is kind of like a package manager.
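
        Roughly (a sketch, using a well-known library name as a stand-in):

            (asdf:load-system "alexandria")   ; compile and load a system already on disk
            (ql:quickload "alexandria")       ; fetch it and its dependencies first, then hand off to ASDF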

      • Keyframe 6 years ago

        Maybe, I haven't touched it in a while. Quicklisp and Roswell are where it's at.

  • Ros2 6 years ago

    Not sure why no one has mentioned ClojureScript to you; it has first-class support and is maintained first-party by the Clojure team.

quadcore 6 years ago

I wonder if ITA Software still uses Lisp.

z0ltan 6 years ago

Another one bites the dust. Heh.

alsadi 6 years ago

So if we went back to 1977 we should have picked Lisp instead of Fortran, because they invented the if statement first.

Since we are in 2018, I'll go with Python.

  • Annatar 6 years ago

    You still want 1977 tech, because those guys all had one or more PhDs in the subject matter required to build not just the software, but the hardware to run it as well. As a result, the software they designed is still very high tech, if unorthodox, even by today's standards. Modern Lisp is like a faster-than-light spaceship with a gravitational distorter (it moves by falling into the projected distortion), compared to getting from point A to point B in a gasoline automatic car that is leaking oil (Python). If you run into some '70s or '60s tech, take the time to master it so that you can build antigravitational devices all day long (XKCD's Python "import antigravity" joke notwithstanding). Lisp is still cutting-edge technology, and it's from the '60s and the '80s, by the way.

olskool 6 years ago

LISP - Lots of Irritating Silly Parentheses

  • Annatar 6 years ago

    Only if f(x) makes no sense to you in mathematics.