Twisol 6 years ago

This basic idea has deeply infected how I approach software architecture and design. When someone asks me why I wrote something a particular way, it's usually because I'm trying to keep the imperative/environmental concerns separate from the functional/algorithmic concerns.

I end up recommending this video at least once every year.

(EDIT: Another one from Gary that pairs well is "Boundaries": https://www.destroyallsoftware.com/talks/boundaries . The core idea is that values -- data, not code -- should form the boundaries between distinct systems.)

  • monocularvision 6 years ago

    The entire architecture of our iOS app is built around this principle. We have tried to give it a name but keep coming back to “Functional Core, Imperative Shell”. You can find some of the ideas that were implemented here: https://m.youtube.com/watch?v=7AGQ9dhWCX0

    • yoava 6 years ago

      What you seek is actually something a bit different.

      It is the observation that all software is built of 3 layers:

      * Inbound IO: what you call the imperative shell
      * Business logic core: what you call the functional core
      * Outbound IO: again, the imperative shell

      The problem with the terminology in functional programming is that when people say "side effects", in most cases they mean IO.

      The core does not have to be functional for you to get the benefits: easy testing, easy to reason about, easy to develop. In fact, in some cases functional programming is the wrong tool, while having a business logic core that is separated from IO is still a very valid architecture.

      • nicoburns 6 years ago

        Generally speaking, I find that it is making the "business core" stateless that has the benefits.

      • tasuki 6 years ago

        In which cases is functional programming the wrong tool for the business logic core?

    • tylerc230 6 years ago

      Yeah, Swift lends itself to this style well: value types, functional aspects, etc.

  • _0w8t 6 years ago

    The actor model is praised in the talk. Yet the actor model is not functional. The problem is the message queue of each actor: it is imperative storage, and one can get all the nasty problems of imperative code from apparently functional code patterns.

    • ramchip 6 years ago

      You can build every actor like a miniature program, with its own imperative shell and functional core. In Erlang it's common to have various message handlers that are very imperative, but defer all complicated logic to pure functions that take part of the actor state and return a transformed state.
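
      In Haskell rather than Erlang, a minimal sketch of that shape (the message type and the `step` logic are invented for illustration):

        import Control.Concurrent (forkIO)
        import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

        -- functional core: a pure state transition, trivial to unit test
        step :: Int -> String -> Int
        step n "incr" = n + 1
        step n _      = n

        -- imperative shell: the actor's receive loop defers to the core
        actor :: Chan String -> Int -> IO ()
        actor inbox n = do
          msg <- readChan inbox
          actor inbox (step n msg)

        main :: IO ()
        main = do
          inbox <- newChan
          _ <- forkIO (actor inbox 0)
          writeChan inbox "incr"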

      • mercer 6 years ago

        I've been doing more and more Elixir work, and one of the most important lessons I keep learning is to keep my 'actors' as simple as possible and put the complicated stuff in my 'bags of functions'. I generally try to treat my code as two completely separate layers: the code that does stuff, and the code that orchestrates/isolates all of it.

        What I find fascinating is that, at least for me, using GenServers/Agents/Tasks has made this distinction much more obvious than it has been in other languages.

        I've built horrifying systems that muddled things together, only to realize quite far in how much of a mess I made, but with my Elixir projects I usually realize what I've done pretty quickly, and I end up course-correcting earlier on.

    • yoava 6 years ago

      The actor model is even worse: you have no guarantee of a response or error status.

    • shrimpx 6 years ago

      There aren't that many ways you can mess up a message queue.

    • msangi 6 years ago

      What would a functional approach be?

      • _0w8t 6 years ago

        It depends on what the goal is. To take advantage of multiple cores for computation, one can use explicitly parallel functional algorithms. To improve response under load, one can do load balancing within the application and schedule several independent functional pipelines at the very top (the shell level).

      • yoava 6 years ago

        Function composition

bcbrown 6 years ago

One of the things I really like about this approach is how well it lends itself to separating what needs unit testing from what is unsuitable for unit testing and should instead be validated by higher-level end-to-end/functional testing.

I think it's also related to the concept of building a DSL in which to implement business requirements. Once you have the right 'primitives' you can then combine them in useful ways that are easy to verify (by reading the code) that the implementation matches the requirements.

  • daenz 6 years ago

    Bingo. My philosophy has started to become "If it can be functional without sub-optimal overhead or confusing state magic, it probably should be." Delaying that "DSL" layer as long as possible keeps your system super straightforward and easily testable.

  • pwm 6 years ago

    Exactly. Also, once all you have in the core is types and functions, things like mocking become obsolete. You can just instantiate those types for real.

  • btschaegg 6 years ago

    Strongly agree. Even more, it's also a great heuristic to use if you want to refactor existing code into something more testable.

    At times, the simple process of figuring out what the really necessary points of mutation/IO are and how to "fence them off" is all I need to simplify big chunks of an existing "ball of mud".

  • MichaelMoser123 6 years ago

    How do you deal with performance problems? I mean, to add an element to a list you need to create a copy of the list and then add the new element, all in order to remain functional. Isn't that very expensive?

    • gary_bernhardt 6 years ago

      You only have to copy if your array is implemented as a simple linear sequence of memory addresses. More advanced implementations don't have to copy everything. E.g., Clojure's vectors (its array equivalent) are effectively O(1) for the common array operations that are O(1) on naive arrays, like indexing and insertion. But Clojure vectors are still purely functional. (The actual time for some of those ops is O(log32(n)), but log32(1,000,000) = 4, so it's effectively O(1).)

      The term for this is "persistent data structures", usually implemented via trees, where replacing an object in a vector is implemented by building a new tree, reusing all of the old nodes except the ones that appear in the path from the root to the replaced node. That's why Clojure's Vector is log32; it's a tree with 32-way branching. (I'm writing this from memory and have little Clojure experience, but I'm pretty sure I have it right.)
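
      For a concrete taste of the structural sharing, here's a minimal sketch using Haskell's persistent Data.Map (a balanced tree from the containers package) rather than Clojure:

        import qualified Data.Map as M

        m1, m2 :: M.Map Int String
        m1 = M.fromList [(1, "a"), (2, "b"), (3, "c")]
        m2 = M.insert 4 "d" m1  -- O(log n): rebuilds only the root-to-leaf path

        -- m1 is untouched: M.lookup 4 m1 == Nothing, but M.lookup 4 m2 == Just "d"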

      Many languages have implementations now, but most aren't as fast as Clojure's. E.g., there's immutable.js: http://facebook.github.io/immutable-js/

      • twtw 6 years ago

        For people who are interested in these things, I'd highly recommend Chris Okasaki's thesis/book on the topic.

        I was not familiar at all with this stuff when I read through it the first time, so it was a tad mind-bending and I probably understood ~10% of it, but it was certainly educational.

        http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf

    • adrianN 6 years ago

      Purely functional data structures are more efficient than the naive "just copy everything" approach. You can do anything that's possible in a stateful imperative setting with at most a logarithmic slowdown.

      • MichaelMoser123 6 years ago

        Let's say the entries of the original list and the new one point to the same objects; still, for a list of a thousand entries, you need to copy all the link entries to add one on top of it. What am I missing?

        • etatoby 6 years ago

          If you are working in an immutable context, meaning that you are building immutable data structures to contain immutable values, there are many useful structures that you can use.

          For instance, the "singly linked list", or simply "list", has constant-time push and pop operations. Here is such a list of 3 elements (NIL is a special value that says "no more list"):

          L = (v1, (v2, (v3, NIL)))

          To add v0 in front, you just create a new "cell" that reuses the old list (which remains valid by itself) as the "tail" of the new list:

          L2 = (v0, L)

          L2 = (v0, (v1, (v2, (v3, NIL))))
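
          The same thing in runnable Haskell, whose built-in list is exactly this structure:

            xs :: [Int]
            xs = [1, 2, 3]   -- i.e. (1, (2, (3, NIL)))

            ys :: [Int]
            ys = 0 : xs      -- O(1): one new cell; all of xs is shared, not copied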

          Depending on your requirements, there are more complicated and smarter data structures that can make the operations you care about either constant, or at most logarithmic.

          I recommend this free book to learn more and get better at Computer Science in general: https://mitpress.mit.edu/sites/default/files/sicp/full-text/...

          • MichaelMoser123 6 years ago

            I did Scheme assignments as part of a programming languages course back in the early nineties, but I was left wondering why the runtime environment had so many GC pauses (it was some A* search assignment, if I remember correctly). It also wasn't fast by any standards.

            • ramchip 6 years ago

              There are lots of factors:

              * GC: it probably had a very simple algorithm, rather than a fancy generational GC

              * Data structures: it wouldn't support modern functional structures like HAMT

              * Compilation: you probably used an interpreter, rather than compiling the code to bytecode or native code

              ...and of course, the algorithm itself. It may not have been designed for functional data structures, or you may not even have implemented it in a functional style in the first place (Scheme supports mutation).

            • Volt 6 years ago

              It's still subject to the implementation. Chez Scheme is one of the best for performance versus, say, MIT Scheme.

              • etatoby 6 years ago

                I'm partial to Chicken Scheme:

                https://www.call-cc.org/

                It's a Scheme-to-C compiler (plus interpreter / repl / script runner.) The Scheme code is transformed into continuation passing style (CPS) and then translated into C function calls that never return, but call other functions instead, forever. Therefore the C stack can only grow, never shrink, and is used as a natural "nursery" or first generation of allocation. When it eventually overflows, the garbage collector is invoked, which scans the stack for "live" values and moves them into the permanent generation (on the heap) and resets the stack; after which, execution resumes.

                It's the most ingenious way I've ever seen to turn not just Scheme, but any language with automatic allocation into fully standard C code. I think there's only one non-portable function written in assembly, the garbage collector that runs when the C stack overflows. That's a small price to pay to have a compiler that can piggyback on any existing C compiler, for virtually any platform. (It's not even entirely in assembly. IIRC, it uses some kind of setjmp / longjmp sorcery.)

                Moreover, the generated C code is fully tail-recursive and call/cc comes for free, so you can use first-class continuations in complex ways, without any performance penalty. It has hygienic macros and all the advanced stuff you expect from a modern Scheme. And of course you can link to any C library, use standard C APIs from Scheme and have your code compiled to optimized machine code.

                If only Scheme were not a dynamically typed language... but that's a rant for a different time.

        • kryptiskt 6 years ago

          If you just want to insert an element at the beginning of the list you can use the original list as the tail of the new list. It's immutable, so it's not going to change under you.

          If you want to insert stuff randomly, you wouldn't use lists to get the best results in a functional setting. You might use something like finger trees instead.

        • vnorilo 6 years ago

          You would want a persistent data structure, such as a persistent HAMT (as in Clojure, Scala, and others) for an associative map, or a tree of subvectors for an appendable ordered list. These allow most of the structure to be shared between versions, and give you value semantics as a bonus.

    • bebop 6 years ago

      You can also use what is called a transient in Clojure: you get a mutable copy, do what you need to do, then turn it back into an immutable value.

      The biggest benefit seems to be when adding elements within a loop. The example on this page illustrates how you could use this: https://clojure.org/reference/transients#_example

T-R 6 years ago

This concept is a big part of what the "big deal" around monads is - using monads to model effectful code conveniently puts the information of "this should be shell code" into the type, in a way that ensures that code that calls it also gets annotated as "shell" code. Monads are of course also a much more broadly applicable abstraction, but their application to effectful code, enforcing this design, is usually the first and most apparent place people run into them in the ML family of languages.

  • dwohnitmok 6 years ago

    I disagree, although it's possible I only disagree with how you've phrased it.

    Monadic interfaces in the context of non-deterministic effects are a consolation prize. They represent a way to combine effectful code, but ideally your code would have almost no effects at all.

    As far as I can tell, the idealized version of this talk is a batch interface: one effect to grab all the data you need, transform the data, and then one effect to "flush" the transformed data (where flush could mean to persist the data in a database, send it out as commands to control a robot, etc).
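
    A minimal Haskell sketch of that idealized batch shape (the transformation is just a stand-in):

      -- functional core: pure and trivially testable
      transform :: String -> String
      transform = unlines . map reverse . lines

      -- imperative shell: one effect to gather, one effect to flush
      main :: IO ()
      main = do
        input <- getContents
        putStr (transform input)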

    Tracking side effects in your types (maybe what you were going for?) is helpful for measuring to what degree your code fails to adhere to this idealized model. If most of your code has an effect type, that's probably a sign to refactor. It also keeps you honest as to the infectious nature of effectful code by propagating the type as necessary.

    • T-R 6 years ago

      I don't think we disagree in spirit - I didn't mean to imply that it prevents you from, e.g., writing all of your code in the IO monad, just the points you made in your last paragraph. So, more that they're a useful tool to help you realize these goals, not something that gets you there on its own. It does let you broaden/specify your definition of "effectful" a bit - modelling event streams with monads gives you FRP (as in your robot example), and I vaguely remember reading in some paper somewhere the suggestion of using monads to separate out unconstrained recursion/non-totality/co-data from total code.

  • _0w8t 6 years ago

    The big problem with monads is that they are still imperative computations, even if the individual effects are nicely typed. If functional code uses them, it effectively becomes imperative. To keep the benefits of functional style, one wants to mostly avoid monads. The whole idea of the article is to use imperative patterns only at the very top, to glue things together.

    • neukoelln 6 years ago

      What's imperative about monads? Or rather, what is not functional? Why should they be avoided?

      • _0w8t 6 years ago

        Look at any do block in Haskell, PureScript, Idris, etc. It is imperative code. The individual effects are typed and separated, but it is still code that depends on implicit state, with all its drawbacks.

        Then look at Elm code. Elm does not have any imperative hatches. The monad that runs everything is at the very top level (the "shell", as the article calls it) and hidden.

        As such, Elm code is forced to use functional decomposition, resulting in designs that are very easy to follow, refactor, and maintain.

        • spion 6 years ago

          It's still quite different from classic imperative code.

          If you're working with a free monad, or if you don't commit to IO (just generic IO-like typeclasses, say MonadError), you can still choose your own interpreter for the monad and "program the semicolon". Which means you get back all the benefits of testability etc.

          To get a similar effect in an imperative language, you would use e.g. coroutines and `yield` every side effect to the execution engine. The engine will take the action "specs" (probably a data structure describing the action to perform, e.g. set some value in memory) and decide what to do with them, and you can swap the real engine with a test/mock engine in your tests.

          • _0w8t 6 years ago

            Programming the semicolon is not different from mocking interfaces in imperative code. One still has to write it, and the tests still do not exercise the real interfaces. Surely the situation is improved compared with imperative code, but it is not as good as with monad-free code.

            It is a pity that modern conveniences like polymorphic record types with nice syntax for record updates were not invented earlier. With those, even in complex code, monads can be used only at the top level, where the sugar of do blocks is not even necessary.

        • neukoelln 6 years ago

          Do-notation in Haskell is purely syntactic sugar over function calls. You can remove do-notation from Haskell and still write the exact same programs (monads and all). Also, monads are not about state any more than classes in Java are about toasters.
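
          For example, a minimal sketch:

            -- with do-notation
            echo :: IO ()
            echo = do
              line <- getLine
              putStrLn line

            -- the same program desugared into plain function calls
            echo' :: IO ()
            echo' = getLine >>= \line -> putStrLn line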

          • _0w8t 6 years ago

            Surely a do-block is just sugar for functional code. But that code can be used to model all imperative effects. As such, the code inevitably models all the troubles that imperative code can cause.

            If one looks at the desugared version, one can see where the trouble comes from. Functional code in monadic style depends on the state of the monad interpreter, which can be arbitrarily complex and spread over many closures with many interdependencies. It can be rather hard to uncover what exactly is going on, in precisely the same way as with the imperative code it models.

  • pankajdoharey 6 years ago

    Monads are taking it too far. Mutation is a reality; the correct approach is disciplined mutation. Shoving mutation into convenient boxes and convincing yourself to never look inside them does not mean mutation does not exist. The best approach is taken by Scheme, and more specifically Clojure: have a disciplined and practical approach. Mathematical purity of programs is a myth propagated by type theorists; don't buy into it.

    • gwn7 6 years ago

      > Monads are taking it too far. Mutation is a reality; the correct approach is disciplined mutation. Shoving mutation into convenient boxes and convincing yourself to never look inside them does not mean mutation does not exist.

      Monads exist exactly because mutation is a reality. Monads do not defy the "mutation reality", nor try to encourage programmers to never look inside them. They are a means of dealing with the "mutation reality" by encouraging you to separate pure and impure parts properly while still making functional composition possible. The image you create for monads is a straw man. Monads ARE a kind of "disciplined mutation", as you put it.

      You don't have to like them nor prefer them. But they are clearly a great and established abstraction loved and used by many. You may prefer Clojure, I get it, but I see no reason to talk shit about monads in this way. Have you ever used monads and similar abstractions extensively?

      > Monads are taking it too far.

      > Mathematical purity of programs is a myth propagated by type theorists; don't buy into it.

      Those are big words. Are you some kind of authority? You could have at least prepended "I think" to those phrases.

      • pankajdoharey 6 years ago

        You are repeating what I wrote; by writing this large comment you haven't increased anyone's knowledge, neither mine nor yours. Monads exist because Haskell people want to pretend that there is this ideal mathematical world where things don't change; some go as far as saying strong types remove the need for writing tests.

        > Those are big words. Are you some kind of authority?

        Years of writing programs have taught me that programming functions are not equivalent to mathematical functions. No such equivalence exists; stop pretending that it does.

        • neukoelln 6 years ago

          > Monads exist because Haskell people want to pretend that there is this ideal mathematical world where things don't change

          Monads exist independently from Haskell and are not about "things that change".

        • always_good 6 years ago

          Once again, monads are a form of "disciplined mutation." They didn't repeat what you wrote; they contradicted your entire premise.

          You didn't respond to anything they said, and you doubled down with your nonsense about "Monads exist because Haskell people want to pretend that there is this ideal mathematical world where things don't change."

          That monads ignore the "mutation reality" isn't a very strong point when monads are a concession to the "mutation reality." Unless you want to repeat yourself a third time, the ball is in your court to bring concrete supporting arguments, since you're making the extreme and somewhat self-aggrandizing claim that these other people don't really see the mutation reality of the world like you do, and thus they are using inferior tools.

          I'd say that anyone specifically trying to corral/isolate their I/O code (monads or not) is so "enlightened" about the mutation reality of the world that they use specific abstractions to address it.

          If you want to see code that tries to paper over I/O, look at a program where you can't even tell when and where the I/O is performed because it just looks like any other function call. Active Record in Ruby on Rails might be a good candidate in its effort to make DB access opaque to the programmer.

          • pankajdoharey 6 years ago

            OK, this has become a big feud; apologies for choosing the wrong words and being rude. Let me put it another way: by "disciplined mutation" all I meant was localised mutation. A lexically scoped local variable is enough to contain the spread of mutation; I don't see the need for a specific datatype to handle mutations exclusively. Monads make mutations explicit and global. I hope that makes sense.

    • adimitrov 6 years ago

      I think you misunderstood Monads, or the role type theory has to play in modern programming.

      Large projects inevitably benefit from static guarantees enforced automatically by your environment. That can be a 3rd-party static code analysis tool or the compiler. Even just a linter will improve code quality and thus developer happiness and productivity.[*] Having your compiler enforce the functional core/imperative shell, and exposing your business logic only through functional components, is what makes a strongly typed language of the ML family stand out over, say, Clojure.

      Mutating state is no problem in a strongly typed functional language. In Haskell, just put your computation in an ST Monad. You can even still expose a functional signature that doesn't leak the ST monad if your algorithm is faster with mutation.

      [*] Overall. Some people will probably be unhappier, because they have to follow "arbitrary" rules now, but those would usually have been the worst offenders.

      • Chris_Newton 6 years ago

        > Mutating state is no problem in a strongly typed functional language. In Haskell, just put your computation in an ST Monad. You can even still expose a functional signature that doesn't leak the ST monad if your algorithm is faster with mutation.

        That works reasonably well in some situations, but not all.

        We often work with local, temporary state, meaning something mutable that is only referenced within one function and only needs to be maintained through a single execution/evaluation of that function. (Naturally this extends to any children of that function, if the parent passes the state down.)

        If that function happens to be at a high level in our design, this can feel like global state, but fundamentally it’s still local and temporary. I/O with external resources like database connections and files typically works the same way.

        We can also have this with functions at a lower level in the design. An example would be using some local mutable storage for efficiency within a particular algorithm.

        However, not all useful state is local and temporary in this sense. We can also have state that is only needed locally in some low-level function but must persist across calls to that function. A common example is caching the results of relatively expensive computations on custom data types that recur throughout a program. A related scenario is logging or other instrumentation, where the state may be shared by several functions but still only needed at low levels in our design.

        Now we have a contradiction, because the persistence implies a longer lifetime for that state, which in turn naturally raises questions of initialisation and clean-up. We can always deal with this by elevating the state to some common ancestor function at a higher level, but now we have to pass the state down, which means it infects not just the ancestor but every intermediate function as well. While theoretically sound in a purely functional world, in practice this is a very ugly solution that undermines modularity and composability, increases connectedness and reduces cohesion. And weren’t those exactly the kinds of benefits we hoped to achieve from a functional style of programming?

        If anyone would like to read more about this, we had an interesting discussion about these issues and how people are working around them in practice over on /r/haskell a couple of years ago:

        https://www.reddit.com/r/haskell/comments/4srjcc/architectur...

        Spoiler: We didn’t find any easy answers, and everyone is compromising somewhere.

      • pankajdoharey 6 years ago

        There have been many articles on this topic. There isn't any evidence to suggest that static guarantees make your code better. Of course, what does make your code better is immutability. But complete immutability isn't practical, and even Haskell people understand that, yet they continue to pretend that programs are about mathematical purity. If that isn't enough, the claim that static typing removes the need for testing is complete bunk.

        • pwm 6 years ago

          Have you written programs in Haskell/F#/OCaml? Static guarantees, especially from expressive type systems, absolutely make your code better, and their benefits compound as your system gets bigger. The type checker acts as a guardian of the soundness of your whole domain. And yes, expressive static typing removes the need for a whole class of tests, namely the ones you'd have to write in other languages if you are disciplined enough to care about the soundness of your domain model. I personally loathe writing those kinds of tests, but I do when I can't use the power of, say, Haskell, because I care.

          Immutability also makes your code better, but it's an orthogonal concern, and utilising both is a smart move.

          • pankajdoharey 6 years ago

            If that were true, we would all be writing in C++ and there would never be a stack overflow or null pointer exception. But that isn't the case.

            • pwm 6 years ago

              Any language that has null, including C++, does not fall into the category of languages with expressive type systems. As soon as you have proper sum types, the null issue goes away, and the whole big world of working with ADTs opens up.

    • TheCoelacanth 6 years ago

      What makes you sure that mutation is a reality? You can just as accurately model reality using an immutable value with time being an additional dimension as using a mutable value without time being taken into account. Both are just models of reality, not reality.

abenedic 6 years ago

So, I am a person who wishes there was a proper article I could read for this. I don't know how much this matters to people, but I can understand written English far, far better than I can understand spoken English. I know auto-translate works for some; I am trying with the applet and get nothing. I like the core idea. I often feel people outside the English-speaking world get a second-class experience.

  • pfranz 6 years ago

    I've followed Gary's stuff for a bunch of years now. His preferred medium seems to be screencasts and the occasional conference talk. His blog dating back to 2011 has 10 posts (he does seem to tweet often, though). His screencasts are very thoughtfully done and dense. So I imagine you'd have to find that article from someone else.

    As a native English speaker, I generally have the same preference you do. Articles I can skim, they're easier to refer back to, and I can search.

    I hope you can take solace that I will sometimes search for an error message and the only result is a forum posting in a foreign language that Google Translate barfs on.

  • TeMPOraL 6 years ago

    People inside English-land get second class experience too. Audio and video are inferior tools for this kind of content, in the same way a linked list is inferior to a vector if you need random access.

    In this particular case, I'm annoyed too. I've learned (what I think is exactly) this concept from other sources, and I've recently been linked to this video a couple of times. What I would love to do is quickly diff my existing knowledge against the contents of the talk, but I can't do that because it's in a video format. I've been putting off watching it for a couple of weeks now.

    • mercer 6 years ago

      I'd say this one (as well as other classics like 'Simple Made Easy') is worth it.

      That said, I'd also prefer an article that tells me the same thing.

  • godot 6 years ago

    You're not alone. I immigrated to the US from Asia in the 90s. For many years I didn't understand spoken English fully. Arguably I still miss some words now and then, even nowadays, after over 20 years living in an English environment. I had great grades throughout high school and college here mostly thanks to having excellent reading and writing skills in English, since I came from a place in Asia that emphasized learning English since childhood. Listening to spoken English is not easy for non native speakers.

    • abenedic 6 years ago

      > Listening to spoken English is not easy for non native speakers

      Thank you for saying this. You clearly have a good understanding of English (far better than mine, at least). I feel like this is an area that gets glossed over. For every article written in English there are at least 10+ non-native speakers struggling to understand the work who could extract something useful from it or help explicate it.

      • theoh 6 years ago

        In the case of this video, you aren't missing much. It's deliberately 'dense', according to the author, but the concept is simple.

        Almost any app will involve some imperative code, but using pure functions within that, where possible, makes it easier to reason about what is going on. That's all there is to it. https://en.wikipedia.org/wiki/Referential_transparency

lackbeard 6 years ago

Does anyone know of an example of a non-trivial codebase written in this style?

How do you do this cleanly when, e.g., you need to make a network call and then, based on the result, either do something with a local database, make a different network call, or return a result to your user? Also, error handling...

It seems to me like monads must be the logical conclusion to this style of programming, or else you wind up with a mess (or just abandoning this technique.)

  • justinpombrio 6 years ago

    Literally anything written in Haskell. When you write stateful code in Haskell (such as code that reads from a database, makes a network call, or does IO, to use your examples), that code will be wrapped in a Monad. Thus the type signature of the code shows that it's stateful code, and Haskell's type checker will ensure that any code that calls it also has a stateful type. For example, if a function takes in an `Int` and returns an `Int`, and along the way may directly or indirectly perform IO, that function will have type `Int -> IO Int`.
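
    A minimal sketch (the function names are invented for illustration):

      -- pure core: the type guarantees there are no effects
      double :: Int -> Int
      double x = 2 * x

      -- stateful shell: IO shows up in the type and spreads to all callers
      readAndDouble :: IO Int
      readAndDouble = do
        n <- readLn
        pure (double n)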

    Now of course, you don't have to cleanly separate a functional core from a stateful shell. But if you don't, all of your code is going to end up wrapped in nested Monads declaring all of the ways that it's inadvertently stateful, and that's a very painful way to program. So Haskell pushes you strongly towards having a functional core and stateful shell.

    For projects written in Haskell, the wiki has a long list: https://wiki.haskell.org/Haskell_in_industry

    • smadge 6 years ago

      I agree, but I think it still takes some coding self-discipline to write code with a functional core and stateful shell in Haskell. There’s little stopping you from having every return type wrapped in the IO monad. It’s not any more unnatural to do that than it is to code in any imperative language.

      • justinpombrio 6 years ago

        > It’s not any more unnatural to do that than it is to code in any imperative language.

        Yeah, I guess the `do` notation makes it pretty painless. If you start mixing monads, though, things get hairy quickly.

        • waluigi 6 years ago

          Monad transformers aren't _that_ bad; the MTL style of doing things makes it all pretty painless.

          It also provides a huge opportunity for testing. At a very high level, you describe all of your effects as a series of embedded, composable DSLs that you define interpreters for. The awesome part is that you can switch out the interpreters at will, so you can, for example, replace something that handles network requests with something that returns dummy data almost effortlessly.
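
          A minimal sketch of that idea (the MonadHttp effect and all names are invented; a production interpreter would live in IO):

            import Data.Functor.Identity (Identity, runIdentity)

            -- one effect of the embedded DSL, described as a typeclass
            class Monad m => MonadHttp m where
              fetchUrl :: String -> m String

            -- business logic stays polymorphic in the interpreter
            bodyLength :: MonadHttp m => String -> m Int
            bodyLength url = do
              body <- fetchUrl url
              pure (length body)

            -- test interpreter: dummy data, no network needed
            instance MonadHttp Identity where
              fetchUrl _ = pure "dummy body"

            -- runIdentity (bodyLength "http://example.com") == 10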

    • vmchale 6 years ago

      Haskell actually has a reasonably functional way of approaching things like I/O (in part due to laziness), so that you can e.g. parallelize I/O actions by passing them to a `parallel` function.

  • kellysutton 6 years ago

    We use this pattern in our Ruby apps to safely move $1B+/month at Gusto.

    You don't need full-blown monads to do this; you just need to be cognizant of how you separate the what (functional core) from the how (imperative shell). I recommend giving it a try!

  • dnautics 6 years ago

    Basically anything written in erlang or elixir follows this paradigm.

    It's really extremely productive to code this way... I wrote a job scheduler from scratch in 3 months and never once had to write or use a mutex or semaphore. Immutability makes you very confident about your code.

    Similarly, my UI guy wrote a UI in basically functional React. It's amazing. With very little JS experience I made code patches that... just worked, because I was guaranteed that no function calls had mysterious side effects.

    • mmartinson 6 years ago

      Agree that Elixir and Erlang lend themselves well to this style, but I strongly disagree that everything written in them does, or that the languages help enforce this pattern in any way.

      It's quite the opposite, really, from what I've seen. Elixir places absolutely no constraints on when and how IO happens, and provides extremely useful primitives for shuttling state between (VM) processes in otherwise stateless code. A library function that looks totally pure could, for example, boot an entirely different subsystem that fired a missile into the sun before providing a return value, and you'd never know it if you didn't read the docs or use one of the many pieces of fantastic BEAM tooling to inspect the runtime state of the system.

      This is part of what makes these languages pragmatic to work in. There are foot guns everywhere, but the VM ensures you sign into the foot gun registry whenever you use them.

  • _greim_ 6 years ago

    The Elm architecture seems to use pretty much this exact pattern. The imperative shell parts are all buried within the Elm runtime, and you contribute the functional bits. Redux also to a lesser extent.

    • arwhatever 6 years ago

      Have been using Elm architecture for a while, and also have pushed my code toward an immutable core/mutable shell architecture since watching Vladimir Khorikov's video on the subject at Pluralsight, but had never considered how the two are interrelated. Thank you for the big light bulb lighting up in my head just now.

  • pwm 6 years ago

    The goal is that everything in the core is pure data types and functions. Now, depending on your problem domain, your core can be small or huge compared to the shell. Your question was all about IO/effects. They are impure and belong to the shell. If what you work on does mostly IO, then you will have a lot of impure code lying around by definition. Still, the idea is that you should keep the amount of time spent in impure land minimal, i.e. drop into pure structures/computation as soon as possible during program execution, and only dip out of it on the occasions when you have to do IO.

    Here is a good talk about these ideas: https://www.youtube.com/watch?v=US8QG9I1XW0

    • lackbeard 6 years ago

      I guess what I'm getting at is that in any non-trivial program with many external dependencies and cross-cutting concerns, almost all of your code is the "shell", so either I'm missing something, or this is stating the obvious (write pure functions where you can) in a very roundabout way. I've never seen a program written explicitly in this style that wasn't a trivial example.

      • ramchip 6 years ago

        The functional core can return a symbolic description of actions to take, which the shell executes, a bit like a simple interpreter.

        This is a good example:

        https://www.theerlangelist.com/article/spawn_or_not

        (Note the context is Elixir, so it’s talking about lightweight processes, not OS processes. It explains how to keep that stateful / effectful code very simple, and have all the real logic be pure functional code.)
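
        A minimal Haskell sketch of the same idea (the action type and names are invented for illustration):

          -- functional core: pure decisions, returned as plain data
          data Action = SendEmail String | SaveRecord Int

          decide :: Int -> [Action]
          decide n
            | n > 10    = [SendEmail "admin@example.com", SaveRecord n]
            | otherwise = [SaveRecord n]

          -- imperative shell: a simple interpreter for those descriptions
          run :: Action -> IO ()
          run (SendEmail to) = putStrLn ("sending mail to " ++ to)
          run (SaveRecord n) = putStrLn ("saving " ++ show n)

          main :: IO ()
          main = mapM_ run (decide 42)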

      • pwm 6 years ago

        The system I've been working on in the past 6 months is non-trivial and written in this style. Roughly 2/3 of the code is in core and 1/3 is in shell.

      • yen223 6 years ago

        I think it's more that a lot of programs aren't actually doing anything terribly complex, logic-wise.

        A lot of apps out there do nothing more than fetching data from one service, and dumping it out to another.

  • charlieflowers 6 years ago

    I am a fan, but you're right, it can get awkward.

    The good part is the set of pure functions that take input and compute something from it.

    But the awkward part (assuming you're coding in a typical mainstream imperative language) is the "transaction script" that gets an input, passes it to the pure functions, and then takes the result and writes it out. If there's no event loop and you're not using promises or something like that, it forces an awkward boundary right down the middle of your code that just feels unnatural.

    Still, I think it's worth it for many problems. The vast majority of your code is pure functions -- simple, understandable, testable. The price you pay is this unnatural seam at each point of IO.

  • vmchale 6 years ago

    > It seems to me like monads must be the logical conclusion to this style of programming, or else you wind up with a mess (or just abandoning this technique.)

    Monads are nice, but there's a lot of work on algebraic effects/effects in general which may pan out into something useful (and more general).

dwohnitmok 6 years ago

Although it means something different and is not the actual semantic opposite, I've found the syntactically opposite phrase "functional shell, imperative core" also useful, which the speaker alludes to when he talks about local variable rebinding that is invisible to the outside.

That is, locally scoped, mutation-heavy, imperative code is fine as long as you can wrap it in a deterministic, immutable interface for general use. This is the premise behind things like Haskell's ST type. More generally, it's the usual way FP languages try to recover competitive performance.
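
A minimal sketch of that premise in Haskell:

  import Control.Monad.ST (runST)
  import Data.STRef (newSTRef, modifySTRef', readSTRef)

  -- mutation-heavy on the inside, but the signature is pure:
  -- runST guarantees the local mutable state cannot leak out
  sumTo :: Int -> Int
  sumTo n = runST $ do
    acc <- newSTRef 0
    mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
    readSTRef acc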

  • JBiserkov 6 years ago

    >locally scoped, mutation-heavy, imperative code is fine as long as you can wrap it in a deterministic, immutable interface for general use.

    Yes! Clojure supports this via transients. What's even better is that you can write the loop using immutable data and make sure it works. Then just:

    1. add a call to transient in the initialization,

    2. add a call to persistent! at the end, and

    3. add a ! to each function that modifies the transient, e.g. conj becomes conj!

    The benefit of doing it this way is that the language will catch and flag as errors any "mixed" usage: calling a pure function with a mutable argument, or vice versa.

    https://clojure.org/reference/transients

    http://www.hypirion.com/musings/understanding-clojure-transi...

  • taeric 6 years ago

    It is also how your computer does basic addition. It isn't like there is such a thing as immutable numbers when it gets to the silicon. Look at how a barrel shifter is implemented someday, or a comparator.

    • branislav 6 years ago

      It's also how something like React.js works: wrapping stateful DOM manipulation in a functional shell, made performant thanks to the virtual DOM diffing algorithm.

ryanmarsh 6 years ago

Great to see this. I ended up writing this way after watching Brian Will's Why Object Oriented Programming is Bad [0]. Near the end he states we should be writing "procedural code" but that we should favor "pure functions". What I took away from this was that at a high level I should be able to reason about a problem procedurally but the solution should be composed of mostly pure functions.

0: https://www.youtube.com/watch?v=QM1iUe6IofM

squirrelicus 6 years ago

Often people will ask how to follow these principles in the world of I/O. The answer is incredibly simple: with values.

Return a DTO that encapsulates all the business information about the result state of the I/O call. Do not throw exceptions. Log them if you like, but you must return all the data necessary for the business logic to react to failure conditions, encoded in your own use case specific structure.

If you don't care about whether you failed due to timeouts or refused connections or query parse failures or what have you, don't return that data; just return a hasError kind of property on the return structure (bonus points if your language supports discriminated unions, but this is not necessary). If the parent logic needs to react differently to timeouts and failures to connect, then catch those exceptions or states separately and return separate didTimeout or cantConnect flags.

Values values values

  • the_gipsy 6 years ago

    I would prefer a ‘Result<ValueType, ErrorType>’ over stuffing weird ‘hasError’, ‘didTimeout’, etc flags everywhere.

    It’s much safer to type-check at compile time that you have discriminated between result and error, than to hope that you are ‘if’-checking some flags at the right time at runtime.
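
    In Haskell terms it might look like this (a sketch; the error cases are invented):

      data FetchError = Timeout | Refused | BadQuery

      fetchUser :: Int -> IO (Either FetchError String)
      fetchUser uid = pure (Right ("user " ++ show uid))  -- real IO omitted

      -- the caller must pattern-match before it can touch the value
      report :: Either FetchError String -> String
      report (Left Timeout) = "timed out; worth retrying"
      report (Left _)       = "failed"
      report (Right user)   = "got " ++ user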

    • squirrelicus 6 years ago

      That is another way to accomplish the same goal. This trade-off is more aesthetic than architectural and is just a matter of language support.

danidiaz 6 years ago

In the Idris language, one best practice is to relegate to the outer shell not only IO effects, but also "partiality"—in the sense of computations that might enter an unproductive infinite loop.

(In Haskell, one way of getting into unproductive infinite loops is by mistakenly asking for "parse elements until the first failure" from a parser combinator that can always succeed without consuming input.)

The ideal is that the core of the application should be both pure and total, purity and totality being tracked by the type system. In fact, one can often relegate partiality to a single function of the outer shell.

squirrelicus 6 years ago

It's difficult to overstate the importance of the principles taught in this talk. Almost as difficult as it is to describe why they are so important. The fundamentals of quality software are found here, not in some Martin Fowler dissertation on DDD.

Edit: correction from commenter, thanks!

  • stevebmark 6 years ago

    I think the general concept of using immutability, queues, and isolated side effects is lost and muddied by the use of Ruby, and it's a misuse of the term "functional."

savethefuture 6 years ago

This is one of my favorite videos. I highly recommend DAS videos, definitely worth the subscription.

  • mercer 6 years ago

    Seconded! This one and a few others gave me 'aha moments' that I don't get much without hunting for the right videos (recommendations much appreciated! In the vein of Simple Made Easy).

    • squirrelicus 6 years ago

      Simple Made Easy should be mandatory watching for your developer's license.

      Seriously though, this talk, Boundaries, and Simple Made Easy are the trifecta that forms the foundation of the software I design.

Flenser 6 years ago

After watching this and his Boundaries talk, and then reading the Out of the Tar Pit paper, I started watching Pete Hunt's first talks on why they built React, and had an epiphany about why React was going to end up being the most popular front-end framework.

jypepin 6 years ago

And I'd strongly recommend paying for DAS and looking at every.single.video in there. Gary is brilliant and each video explains stuff in a very simple, enjoyable way.

Reminds me a bit of Ryan Bates's Ruby/Rails videos.

jcyw 6 years ago

The idea of a functional core is cool. I find domain-driven design a better guide for implementation. In a nutshell, there are three types of objects: value, entity, and service. Both value and entity objects are functional and stateless. A service object maps between stateless objects and may cross the domain boundary (network, IO, etc.). When I do this in Java, a service object has interfaces returning CompletableFuture of value or entity objects.

InfoQ has a very nice summary of DDD book: https://www.infoq.com/minibooks/download/domain-driven-desig...

_0w8t 6 years ago

One does not need a purely functional core to get the benefits of such a design. The key idea is that the core code should not change, or even read, global state. It should only be allowed to access or change what is explicitly passed to it.

For example, to avoid excessive memory consumption when summing matrices, one wants to code it like A += B, not like C = A + B. Yet one still gets all the testing, design, etc. benefits of pure functional code. In the end one can always recover a functional version just by turning A += B into C = A; C += B.

codetrotter 6 years ago

While the purpose of this screencast is to explain a principle, not to focus on the particular piece of software that was implemented, I am left wondering about a couple of things:

The timeline, as he explains it, is updated by creating a new timeline instance consisting of the previous timeline + new tweets. It seems to me, then, that his client does not remove from the timeline tweets that were deleted by their author after having been downloaded.

To some it might be a feature to capture as many tweets as possible, but at the same time, if your view of the timeline includes deleted tweets then you might find yourself trying to reply to a deleted tweet, and then you would get an error response from Twitter when you try to post your reply. (Though I don’t know if his client also does posting, or if it’s a read-only client.)

Furthermore, what about tweets that are edited by their author after having been downloaded? It seems you would not see those edits.

  • gary_bernhardt 6 years ago

    Tweets aren't editable so no need to handle that. Deleted tweets can be honored by updating the timeline to be `old_timeline - deleted_tweets + new_tweets`.

ridiculous_fish 6 years ago

How do you implement progress reporting and cancellation in this model?

  • mvc 6 years ago

    Make it an explicit part of the data model.

    • ridiculous_fish 6 years ago

      A quicksort that supported progress reporting and cancellation might have a function parameter that it calls periodically to report progress, with the function returning a bool indicating cancellation. But this is using code, not data.

      How might this be implemented within the data model?

      • jhomedall 6 years ago

        You would implement the sort so that it either:

        1: Returns the sorted list (once finished)

        2: Returns an intermediate state containing the progress of the sort in addition to the current state of the sort.

        You would repeatedly pass the intermediate state to the sorting function until finished. You can use the progress component of the intermediate state to track progress.

        Something like the following (I don't actually write Haskell, so this could probably be represented better):

          data SomeIntermediateState a = ...  -- e.g. the pending partitions
          type Progress = Double

          data SortState a
            = Complete [a]
            | InProgress (SomeIntermediateState a) Progress

          sortInit :: [a] -> SortState a
          sort :: Ord a => SortState a -> SortState a
      • yen223 6 years ago

        How do you measure progress in a quicksort?

        • ridiculous_fish 6 years ago

          Glad you asked. The progress should be a bound on the worst case time.

          If you have N elements, then the initial worst-case time is N^2. Say after the first partition you are left with pieces of size N/3 and 2N/3; the worst-case time is now (N/3)^2 + (2N/3)^2 = (5/9)N^2. Your progress after the first partition is the difference between the original worst case and the new worst case.

          This can make for uneven progress advancement but it’s monotonic: the progress bar will never go backwards.
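
          As a rough sketch of the bookkeeping (the pieces are the sizes of the still-unsorted partitions):

            -- fraction of the original worst case (n^2) already ruled out
            progress :: Double -> [Double] -> Double
            progress n pieces = 1 - sum [p * p | p <- pieces] / (n * n)

            -- progress 9 [3, 6] == 1 - 45/81, i.e. about 0.44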

blt 6 years ago

I'm struggling to apply this model to code for machine learning experiments. The functional core takes an extremely long time to compute, e.g. "train the model". Thus, you want to save lots of intermediate values to disk so you can reload them over and over while you are experimenting with the downstream part of the program.

I end up with a lot of code writing values to the disk, which is currently mixed in with the computations. I'm wondering if there's some way to automate this "save intermediate values to the disk" so I can write the code in a more functional style without having to constantly go in and out of the imperative shell portion.

The process of ML experimentation is extremely painful compared to normal software development where nothing really takes that long to compute.

  • waluigi 6 years ago

    The way I would approach this is to write what essentially amounts to a declarative specification of what computation needs to be run, and then define an interpreter that handles the caching of intermediate values.

    The "Embedded DSL + Interpreter" pattern is incredibly powerful, and it's nice to see it catching on more.

Apaec 6 years ago

I learned this through Haskell, which enforces the "functional core, imperative shell" approach through the IO monad. Now I find myself repeating the same pattern in other languages (JS, Java).

ww520 6 years ago

It's similar to what I usually do: use a functional style in the inner, smaller, library-level code. At the global level, it's unavoidable to deal with state when interfacing with the outside. State changes just need to be well encapsulated and managed.

GolDDranks 6 years ago

This is the pattern I strive to code in, but when developing with a team, I keep struggling to convince other people (with OOP, mutability-happy mindsets) to do the same...

mlthoughts2018 6 years ago

I like to imagine that the huge planet-sized villain from the film The Fifth Element is the physical manifestation of giving oneself over fully to the programming style of indiscriminately mixing side-effectful and I/O operations into arbitrary code whenever it's locally convenient to do so.

draw_down 6 years ago

Love this one! I think Rich Hickey’s “The Value of Values” pairs nicely with it.

grzm 6 years ago

(2012)

stevebmark 6 years ago

"Functional core" littered with classes, attr_readers, data that's not data (Curosr.new) and += imperative loops. Yikes.

  • gary_bernhardt 6 years ago

    The classes serve to close some functions over variables for convenience; the attr_readers serve as destructuring functions for convenience; the += replaces a recursive function for convenience and familiarity. The function with the += doesn't mutate any value and it remains pure from all callers' perspectives.

    You're complaining about syntax, but this screencast is about semantics.

    • stevebmark 6 years ago

      No, something with "functional" in the title that doesn't use functions isn't related to syntax.