mtraven 5 years ago

I founded the mailing list the book was based on. These days I say, Unix went from being the worst operating system available, to being the best operating system available, without getting appreciably better. (which may not be entirely accurate, but don't flame me).

And I still miss my Lisp Machine. It's not that Unix is really that bad, it's that it has a certain model of computer use which has crowded out the more ambitious visions which were still alive in the 70s and 80s.

Much as the web (the Unix of hypertext) crowded out the more ambitious visions of what computational media for intellectual work could be (see the work of Doug Engelbart and Ted Nelson). That's a bigger tragedy IMO. Unix, eh, it's good enough, but the shittiness of the web makes humanity stupider than we should be, at a time when we can ill afford it.

  • ken 5 years ago

    I'm slightly too young to have lived through it myself, but from what my dad told me, pre-GNU Unix and post-GNU Unix are almost completely different in their user experience. Prior to GNU, Unix (the tools and the kernel) used to simply crash a lot, or drop data. It was not an especially good system in any regard.

    As the GNU coding standards say:

    > Avoid arbitrary limits on the length or number of any data structure, including file names, lines, files, and symbols, by allocating all data structures dynamically. In most Unix utilities, “long lines are silently truncated”. This is not acceptable in a GNU utility.

    This never really clicked with me because I didn't live through a time when my primary Unix utilities weren't robust. The closest I got was using old (1990's) HP/UX and Solaris and AIX systems where the sysadmins had installed the GNU tools already. All I knew is that I shouldn't use the system tools, because they were worse.

    Personally, I think what helped Unix succeed is that the implementations were bad but the architecture made it (mostly) possible to improve the implementations piecemeal. Accordingly, the parts that have had the most trouble improving (like X11, and C) are those where the design doesn't allow this.

    • pjmlp 5 years ago

      Yes, using standard POSIX tools wasn't the best experience of the world.

      What helped Unix succeed was that originally Bell Labs wasn't allowed to sell it, so they gave it to universities, alongside its source code, for a symbolic price in comparison to what a standard OS used to cost.

      So naturally many students from those universities went out and started businesses based on UNIX, like Sun for example.

      One of the reasons behind the BSD lawsuit was that AT&T got the license to charge for UNIX after Bell Labs was broken up into smaller units, and they were trying to kill off those free branches.

      People have a tendency to gravitate towards free stuff regardless of how bad the quality happens to be.

      • x122 5 years ago

        Eh, what? At the time the handbook was written (1994), the reasonably priced alternatives were MacOS and Windows, both of which froze all the time and had horrible programming environments.

        AIX, non-free, was notoriously horrible, too.

        And the Wirth systems that you promote here were actually free.

        So what exactly is the great stable commercial alternative in the 90s?

        • TheOtherHobbes 5 years ago

          Digital Research produced some nice usable systems in the early 80s. MP/M was a multitasking version of CP/M available by 1981.

          In the mainframe world, TOPS-10/20 and VMS were both exemplary. From the user POV, the former was one of the best command-line systems ever created, with none of the insane user-hostility built into UNIX command naming. Dave "VMS -> NT" Cutler absolutely hated Unix, and it shows.

          But we're really comparing cinder blocks and potatoes. UNIX was designed as a hobby/student hacker tool, not as a general user OS. Many of the design choices are bizarre and frankly stupid, as the book delights in pointing out. But hackers love UNIX because using it feels just like hacking code, and that's considered a good thing.

          It wouldn't be impossible to design an OS with powerful command line options but a sensible command naming system, much more intelligent and reliable security, a modern filesystem, and so on - and perhaps add some of the user configurability and extendability of the Lisp/Smalltalk/Hypercard(?) world.

          But UNIX is so embedded now it would be a purely academic exercise.

          • AnIdiotOnTheNet 5 years ago

            > But UNIX is so embedded now it would be a purely academic exercise

            I guess I'm less pessimistic, because I don't think that's true. I think there are quite a few people like me who are fed up with the shit that exists today and really want a good alternative. I think we'd be willing to make quite a few sacrifices for something with a whole lot of potential.

            What seems to be true is that there are very few people both capable of making that happen and bold enough to try to make it something people can actually use instead of just a toy academic project.

            • incog-neato 5 years ago

              Do you blog much? I've been lurking here a lot and have seen your posts. Would be interested in long-form essays or the like on what you would like to see.

              • AnIdiotOnTheNet 5 years ago

                I blog a little under my actual name, but I'd rather not tie that to this identity lest it make me less willing to express my stupidest and least popular opinions here, which I feel need to be exposed to criticism so I can better judge their merit. There's a reason my handle is what it is.

          • taffer 5 years ago

            OS/400 was also a very interesting and well designed system.

        • einr 5 years ago

          So what exactly is the great stable commercial alternative in the 90s?

          OS/2 was pretty solid...

          • adestefan 5 years ago

            The OS/2 part was, the DOS/Win compatibility layer not so much.

            Workplace Shell was so cool though.

            • einr 5 years ago

              Not sure what was bad about the DOS and Windows compatibility? OS/2 was frequently touted as "better DOS than DOS, better Windows than Windows" and it was generally true. You could multitask DOS apps (lots of people used this for running multi-line BBSes and so forth) and Win16 apps could crash without bringing down the whole system.

              Of course OS/2 was later severely hampered by its inability to run Win32 apps, but that's another story.

        • pjmlp 5 years ago

          Besides what others already replied, universities were "free"[1] to use AT&T UNIX V source code and BSD-derived code until AT&T was allowed to go commercial with UNIX, suing Berkeley in the process.

          In fact, GCC only got traction after Sun introduced the idea in the UNIX world of selling the developer tools instead of bundling them with the OS.

          Back in 1994 the Wirth systems were mostly only available at ETHZ.

          Had Linux not come onto the scene during the ongoing BSD lawsuit, the UNIX landscape would look much different nowadays.

          [1] - "free" here meaning several factors cheaper than buying a VMS or OS/360 timesharing system for the university campus.

        • TeMPOraL 5 years ago

          From reading it I recall it was less about contemporary alternatives (which all sucked), and more about how UNIX displaced better environments in the 70s/80s.

        • cafard 5 years ago

          There were a number of Multics descendants in the minicomputer world. Data General had AOS/VS, and Prime had PrimeOS. They were commercial and they were stable. However, they were tied to proprietary, expensive hardware.

      • dotancohen 5 years ago

        > People have a tendency to gravitate towards free stuff regarless how bad the quality happens to be.

        Where does the MS operating system fit in here? Arguably the worst general-computing OS, but by far the most popular. For purposes of discussion, do you consider it free due to being widely pirated?

        I'll say that the developer experience on Windows was excellent around the 1995-2005 timeframe. But the user experience was horrible, yet it remained an almost monopoly on the desktop.

        • mcguire 5 years ago

          It's "free" because the cost is built into the hardware: MS had licensing terms that made selling hardware without a license a poor choice. As a result, it was preinstalled and no one looked at any alternatives.

          • dotancohen 5 years ago

            In the context of the GP's post, that is free. The user did not have to outlay any more money to acquire it.

            • nickpsecurity 5 years ago

              It wasn't free: the user was forced to pay for it built-in to cost of the machine even if they intended to use a different OS. That's much worse than free.

        • nradov 5 years ago

          Worst compared to what? Microsoft gained its original OS dominance with MS-DOS when the primary competitor was CP/M.

          • chronogram 5 years ago

            And for the post-2005 timeframe, I don’t see any alternatives for general purpose desktop operating systems. Can’t install macOS on non-Apple hardware, and while I use Linux systems a lot, I don’t think anyone really develops it for desktop usage. I did hear you can have variable refresh rate monitors in the next Ubuntu release, but no idea about power management and hardware video decoding and all that...

            Obviously it’s great in a VM!

        • pjmlp 5 years ago

          It was the OS for PCs; only geeks and some IT departments ever bothered updating their OSes.

          Regular consumers just handle their computers like appliances, getting a new one with whatever OS it comes bundled with.

          The monopoly worked both ways: any OEM was free to ditch its Microsoft agreements; they just preferred to improve their profits by getting into bed with Microsoft.

  • linguae 5 years ago

    When I discovered Linux and FreeBSD as a 15 year old, I was absolutely amazed by what I saw, coming from the world of Windows with some elementary school memories of the classic Mac OS sprinkled in. My exploring these *nix variants and learning about their development and history led to my decision to major in computer science and pursue a career in systems software research. I still have a high level of respect for people like Ken Thompson, Dennis Ritchie, Bill Joy, and Marshall Kirk McKusick. I had a dream of working for Sun before the Oracle acquisition and working on projects like ZFS.

    But lately I’ve been studying systems that for whatever reason ended up losing out in the marketplace despite having really interesting design decisions and features that would be welcome today. Although I like the classic Mac OS, NeXT, and the modern macOS, I wish Steve Jobs had “copied” all of Smalltalk and then added Mac-like touches to it. I also wish that Genera were open-sourced instead of the current situation where it’s difficult to obtain legally and inexpensively. I have a dream of writing a modern OS inspired by Genera, Smalltalk, and Apple’s OpenDoc, but writing a new OS is a major undertaking.

    Maybe it’s my inner romantic speaking, or maybe I’m just drawn to beautiful things, but I’ve always been attracted to “what could have been” things. Hopefully one day we’ll have a “right thing” OS again, and maybe one day we’ll have an alternative to the Web that isn’t as much of a technology hodgepodge.

    • pjmlp 5 years ago

      I had a similar path: from an MS-DOS 5 / Amiga OS point of view, something like SGI seemed great, and I dove into Linux zealotry a couple of years later.

      But then a well-stocked library at the university campus opened my mind to other models of computing, and suddenly plain old UNIX wasn't that interesting any longer; only NeXT was, and it used UNIX compatibility more for winning over Sun's customers than anything else.

    • mpweiher 5 years ago

      Funny, for me NeXT pretty much was Smalltalk + Mac + Unix.

      And there were mechanisms for an OpenDoc-like system of embedded content (maybe more like OLE). While the rough idea of OpenDoc was and is appealing, I don't think that version of the idea is actually tenable.

      Alas, since it was killed, it sort of lives on and occupies that space as an ideal version of the rough idea, rather than as an artefact that can be criticised and improved upon.

  • pryce 5 years ago

    It's probably been a long while since you've gone back to it, but the "Worse is Better" chapter in this book seems to show rather amazing foresight; it basically predicts the situation you describe regarding Unix being "the best operating system available without getting appreciably better" and offers compelling reasons why that was likely.

    25 years ago. Who the hell does that?

    • mpweiher 5 years ago

      Richard Gabriel, that's who ;-)

      I heartily recommend his other writings. In fact, he considers "Worse is Better" some of the worst of his writings, and it is the most successful. ¯\_(ツ)_/¯

      Just one that I've gotten a lot of mileage out of is "Habitability and Piecemeal Growth" in Patterns of Software. Lots of other gems in there: Reuse vs. Compression, The Quality Without a Name, etc.

      EDIT: Forgot the link

      https://www.dreamsongs.com/Files/PatternsOfSoftware.pdf

  • avar 5 years ago

    > And still miss my Lisp Machine[...]

    This all happened before my time, but having read a lot about the Lisp Machine I'm as interested in the what-ifs of history if it had won out as the next guy.

    I wonder though how much of the legitimate sentiment in this book is simply raging against a machine that's successful and installed in production.

    Given widespread commercial use, a system where you could modify the running Lisp code of any program down to the kernel would have had its own nightmare stories of sysadmins monkeypatching things in production, and e.g. the perceived shittiness of NFS being replaced by some Lisp-native system where you sent serialized Lisp objects for your program over the wire, with corresponding upgrade hassles if you needed to upgrade a client/server pair.

    • tabtab 5 years ago

      I really like the concept of Lisp, but just plain find it too hard to read. Part of the problem is that small-scale groupings too closely resemble large-scale groupings. In most production languages, "big blocks" are visually different than "small blocks" (or groupings). For example, in C, parameters are grouped within parentheses. Large-scale groupings are done with curly braces. This provides visual cues about scope and intention without first reading each token.

      Plus it's too easy to "reinvent the language" in Lisp. In C-based languages, the block structures (if, while, try, etc.) are pretty much hard-wired into the language so they stay consistent. In Lisp, one can roll their own. It's great if you are the lone reader: you can customize it to fit your head. But, other readers may not agree with your head's ideal model, or learning it adds to the learning curve of a new shop.

      • lispm 5 years ago

        > This provides visual cues about scope and intention without first reading each token.

        There isn't even a token to read in C. You have to infer what it is by parsing the thing in your head. Well, our visual systems have no problems doing this.

        The C function definition doesn't have an operator which would help me to identify what it actually is.

          float square ( float x )
          {
            float p ;
            p = x * x ;
            return ( p ) ;
          }
        
        Where in Lisp we have this operator based prefix syntax:

          (defun square (x)
            (* x x))
        
        Oh, DEFUN, short for DEFine FUNction, it's a global definition and it defines a global function.

        Or with types/classes:

           (defmethod square ((x float))
             (the float (* x x)))
        
        Oh, DEFMETHOD, DEFine METHOD, so it's a global definition of a method (kind of a function).

        The names and lists may not tell you much, but to a Lisp programmer it signals the operation and the structure of the code, based on typical code patterns.

        Once we learned basic Lisp syntax this is the usual pattern for definitions:

          <definition operator> <name of the thing to be defined> <parameters>
            <body of definition>
        
        Most definitions in Lisp follow that pattern. Function definitions extend/refine this:

          <define function> <name of function> <parameter list>
            <declarations>
            <documentation>
            <body of definition>
        
        
        Code then has a layout which is always similar - since the layout is hardwired/supported in editors and Lisp itself (via the pretty printer, which prints code to the terminal according to layout rules).

        > block structures (if, while, try, etc.) are pretty much hard-wired into the language so they stay consistent

        These are also hardwired in something like Common Lisp. But the general language is differently designed. Common Lisp has relatively few built-in basic syntactic forms (around 30) and the other syntax is implemented as macros.

        > It's great if you are the lone reader: you can customize it to fit your head

        Over the decades of Lisp usage a bunch of conventions and some language support for common syntactic patterns have emerged. It is considered good style to follow those patterns.

        > But, other readers may not agree with your head's ideal model, or learning it adds to the learning curve of a new shop.

        That's the same problem everywhere: the new control structure implemented as a Lisp macro is the new control structure in any other language implemented by a bunch of different tools (macros, preprocessor macros, embedded languages, external languages, a web of classes/methods, a C++ template, ...).

        If you add a new abstraction, there is always a price to pay. In Lisp the new language abstraction often gets implemented as a new macro and may hide the implementation behind a more convenient form.

        • tabtab 5 years ago

          C itself is not the pinnacle of programming languages in my opinion. JavaScript does require a "function" keyword (although it should be shortened to "func" in my opinion).

          It still stands that "}" indicates larger-scale structures than ")" in C-like languages. Lisp has nothing equivalent, and I find a visual way to identify forests versus trees without first reading text is quite helpful. Your eyes may vary; everyone's head works differently.

          But as evidence that people agree with me: Lisp has had roughly 60 years to try to catch on as mainstream. It didn't. Nothing had more opportunities. If you don't win any beauty contests after 60 years, it's time to admit you are probably ugly to the average judge.

          • lispm 5 years ago

            > Lisp has nothing equivalent

            prog, let, defun all indicate block structures.

            > Your eyes may vary; everyone's head works differently

            No, it's just training. You are just unfamiliar with Lisp code. After a few days training, you should have little problem reading Lisp code. It's like bicycle riding: people can't believe that it is possible to balance it, but after a few lessons it's no problem.

            That's an old saying:

            'Anyone could learn Lisp in one day, except that if they already knew Fortran, it would take three days.' (Marvin Minsky)

            • tabtab 5 years ago

              I tried to get used to it, I couldn't.

  • chrismaeda 5 years ago

    MT-

    Here we are 25 years later and now UNIX is the "good" OS.

    The horror.

    -Chris

  • kickingvegas 5 years ago

    Mary Shaw (http://spoke.compose.cs.cmu.edu/shaweb/) had a backhanded reference to Unix/AT&T Bell Labs as the "New Jersey School of Computing." The misery of it is that it was good enough to get computing to where it is now. The downside is that it is also what is holding computing back.

  • marmshallow 5 years ago

    Do you have any suggested reading to learn more about the more ambitious visions from the 70s and 80s, and why they didn't pan out?

    • KineticLensman 5 years ago

      Well in some cases it was because conventional hardware caught up. When I arrived at my first job in 1998, we were using Symbolics Lisp machines. Two years later we were using TI MicroExplorer Lisp cards that were hosted in a Mac IIfx (something like [0]). When I left, two years after that, we were running Procyon Common Lisp on the IIfx itself, with no Lisp co-processor. The old Symbolics machine was booted up occasionally, and in fact had the best diagnostics for the climate conditions in the server room. If the air con failed, the Symbolics would send temperature reports to a remote console before gracefully shutting down, while the Sun workstations would just overheat and randomly fail.

      [0] https://imgur.com/gallery/Vw5agg5

      • lispm 5 years ago

        Lots of people used Macintosh Common Lisp on Macintosh IIfx machines and later. The 68030 in the IIfx also enabled a better garbage collector for MCL.

        • KineticLensman 5 years ago

          We used Procyon Common Lisp because it was very nicely integrated with the Macintosh - especially for graphics - and had a great CLOS implementation. I used it to reimplement a clunky VAX-based FORTRAN modelling environment that had been developed in-house into a smooth Macintosh app with a graphical node-graph editor. In all of the system development I've done, this had the biggest 'awesome gosh wow' reaction I've ever received from the users.

          • lispm 5 years ago

            I have used it, too - but not for long. Usually I thought MCL then was better - Apple also bought it and then released it via their developer channel.

            Unfortunately Procyon Common Lisp was taken off the market, when it was bought by Franz, Inc. They used the Windows version of Procyon CL as kind of a starting point for their Windows offering of Allegro CL, IIRC.

    • jayalpha 5 years ago

      Probably much like what happened with the WWW and its very complex competing visions: too complex, and taking too long to deliver. In the end, the winner takes it all.

  • hyperpallium 5 years ago

    the meek (=cowards=eunuchs=unix) shall inherit the earth

uniqueid 5 years ago

Dennis Ritchie's "anti-preface" to this handbook contains the hard-to-forget metaphor:

    your book is a pudding stuffed with apposite observations, 
    many well-conceived.  Like excrement, it contains enough 
    undigested nuggets of nutrition to sustain life for some.  
    But it is not a tasty pie
  • delish 5 years ago

    And:

    > Your sense of the possible is in no sense pure: sometimes you want the same thing you have, but wish you had done it yourselves; other times you want something different, but can't seem to get people to use it; sometimes one wonders why you just don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog.

    • eternalban 5 years ago

      "a future whose intellectual tone and interaction style is set by Sonic the Hedgehog"

      Ironically, Unix won, yet here we are...

      • 77pt77 5 years ago

        And sonic died...

  • aarch64 5 years ago

    One of these "apposite observations" is in the section on pipes (p. 199 of link).

    The authors use this convoluted bash script to show that pipes are best suited for simple hacks and might fail mysteriously under certain circumstances:

      egrep '^To:|^Cc:' /var/spool/mail/$USER | \
      cut -c5- | \
      awk '{ for (i = 1; i <= NF; i++) print $i }' | \
      sed 's/,//g' | grep -v $USER | sort | uniq
    
    Although this is hard to read, it can be simplified from 7 commands to 5:

      awk '/^To:|^Cc:/ { for (i = 2; i <= NF; i++) print $i }' /var/mail/$USER | \
      sed 's/,//g' | grep -v myemail@example.com | sort | uniq
    
    By folding the egrep pattern matching into awk as it is supposed to be used, and starting from the second field instead of removing the first four characters with cut, this script is both easier to read and shorter.

    By inverse grepping for myemail@example.com, there will not be mysterious behavior if $USER is in another person's email address.

jayalpha 5 years ago

Tue, 15 Mar 1994 00:38:07 EST Subject: anti-foreword

To the contributers to this book:

I have succumbed to the temptation you offered in your preface: I do write you off as envious malcontents and romantic keepers of memories. The systems you remember so fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out to pasture, they are fertilizing it from below. Your judgments are not keen, they are intoxicated by metaphor. In the Preface you suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag. In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and addled by puffiness of the genome.

Yet your prison without coherent design continues to imprison you. How can this be, if it has no strong places? The rational prisoner exploits the weak places, creates order from chaos: instead, collectives like the FSF vindicate their jailers by building cells almost compatible with the existing ones, albeit with more features. The journalist with three undergraduate degrees from MIT, the researcher at Microsoft, and the senior scientist at Apple might volunteer a few words about the regulations of the prisons to which they have been transferred. Your sense of the possible is in no sense pure: sometimes you want the same thing you have, but wish you had done it yourselves; other times you want something different, but can't seem to get people to use it; sometimes one wonders why you just don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog. You claim to seek progress, but you succeed mainly in whining. Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.

Bon appetit!

  • mcguire 5 years ago

    Dennis Ritchie, ladies and gentlemen.

  • Zelmor 5 years ago

    ^ underrated

darkpuma 5 years ago

Much of the book is very dated, complaining about things that were rectified many years ago.

However I think it still has a valuable lesson that many, particularly young CS students, would benefit from: Unix is not the perfect fundamental model for computing. C is not the gospel. Their prevalence today is as much a historic and economic accident as a rational consequence of their objective merits. Both are social artifacts, not manifestations of fundamental truths.

Worshipers of Bell Labs (such as the cat-v or suckless folks) don't get this, and I've seen them pull a lot of people down that rabbit hole with them, particularly young and inexperienced students.

  • linguae 5 years ago

    As a systems software researcher, I concur. I like Unix and I’m very grateful for its impact on computing, but Unix and its derivatives are not the final answer in systems software design. I feel that much of the software engineering and computer science world (even among OS researchers) is unfamiliar with non-Unix designs other than Windows. We can learn a lot from historical non-Unix operating systems such as the Genera OS for Symbolics LISP machines and the Oberon operating system.

    • milquetoastaf 5 years ago

      BeOS is the silent master we can all learn from

      • int_19h 5 years ago

        IIRC it still looked very Unix'ish once you dropped into the shell.

        • pjmlp 5 years ago

          As much as Windows NT with the POSIX personality.

        • electroly 5 years ago

          It ran bash, sure, but BeOS was not similar to Unix at all.

          • waddlesplash 5 years ago

            It had the coreutils and glibc; it used UNIX file modes and the fork() process model, "/dev", etc. So, how exactly was it not a UNIX?

            • electroly 5 years ago

              Ever try to port a POSIX utility to BeOS? I did. No mmap and no BSD sockets killed any chance of easily porting any network utility to BeOS. It had a directory called /dev, yes, but it was absolutely nothing like /dev on any Unix system; it's all home grown. No permissions; it's a single-user system. It stores file owners and modes, yes, but they don't actually mean anything; you don't need the +x bit to execute a file, you don't need the +x bit to list a directory, everything is owned by "baron", etc. It supported fork but it's incompatible with threads, and everything on BeOS is threaded. Some signals are implemented, but the C API is home grown instead of the standard POSIX calls. Etc. They built a Unix facade but the system is very un-Unix like.

              If it were really Unix, they probably would not have needed to write a whole book on porting Unix applications: https://www.amazon.com/BeOS-Applications-Martin-C-Brown/dp/1...

              • waddlesplash 5 years ago

                > No mmap

                mmap could be emulated very easily via create_area(), and I think most people did exactly that.

                > and no BSD sockets

                BONE (an official R5 patch) had BSD sockets. There were also libraries for this.

                > it was absolutely nothing like /dev on any Unix system

                What does this mean? Devices in "/dev" could be opened, stat'ed, "dd"'ed to, etc. It had a different naming convention, but so does macOS and the like.

                > No permissions; it's a single-user system. It stores file owners and modes, yes, but they don't actually mean anything; you don't need the +x bit to execute a file, you don't need the +x bit to list a directory, everything is owned by "baron", etc.

                "baron" was just UID 0. I haven't looked in a while, but I'm pretty sure you needed +x to actually execute files? At least we mandate this on Haiku and it hasn't broken BeOS compatibility at all, as far as I'm aware.

                > It supported fork but it's incompatible with threads, and everything on BeOS is threaded

                What does this mean? Its fork behaved the same way fork does on any other UNIX-like OS, i.e. only one thread carries over. The same is true in Linux...

                > Some signals are implemented, but the C API is home grown instead of the standard POSIX calls.

                Again, I don't know what you mean here. "glibc" was there, and so of course was malloc(), string.h, etc. etc.

                Further, "UNIXy" does not mean "POSIX compatible." IMO the fork-based process model and "/dev" are the large hallmarks of the "UNIX philosophy". POSIX was a larger set of API calls that plenty of 90s-era UNIXy OSes did not ever implement.

                • electroly 5 years ago

                  Re: sockets. We both know BONE was never officially released because Be went out of business before Dano came out, that it was written to solve exactly the problem I described, and that Haiku picked it up because the networking situation without BONE is dire. Come on. I don't get the feeling that you're arguing in good faith here. If you know about BONE, you know that I'm not whistling dixie about the lack of sockets on BeOS being a problem.

                  Re: /dev. If the only thing similar to Unix is that it contains file-like objects (i.e. "can be opened, stat'd, dd'd to"), and everything else is different, is that really similar?

                  Re: +x to execute. I'm pretty certain BeOS didn't obey the +x bit, because I remember being surprised by it later when I migrated to a Linux system, but I'm too lazy to fire up real BeOS and not Haiku to check. I only have Haiku set up. Let's assume you're right.

                  Re: baron. He wasn't "just" UID 0. He was the only user -- you couldn't create others. I tried to create a multiuser addon for BeOS and gave up; you can't do much more than swapping home directories. I feel like you're seeing a facade they built and assuming an entire multiuser infrastructure is present, when they're just supplying whatever hardcoded information they need in order to make their POSIX facade work.

                  Re: fork. BeOS is so deeply multithreaded their marketing called it "pervasive". Linux is not at all; fork is the standard model for multiprocessing and most processes can happily be forked without issue. Almost nothing on BeOS forks because almost everything needs to use a thread at some point. You know this too; I really think you're playing dumb here in order to make your point seem more convincing. We both know that BeOS does not primarily use forking as its process model, and that fork's use is very rare.

                  Re: signals. BeOS doesn't have the same internal signal infrastructure as a POSIX system. The "Porting UNIX Applications" book has a whole chapter on how signals in BeOS are different from POSIX systems. The chapter treats signals as sort of a Venn diagram between BeOS and POSIX, where there is some common functionality in the middle, and then some BeOS-only stuff (SIGKILLTHR), some POSIX-only stuff (lots of signals not implemented in BeOS). BeOS's glibc is forked and heavily modified.
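                  The Venn-diagram idea can be illustrated with a tiny sketch: each platform defines its own set of signal names, and comparing those sets across systems gives you the diagram. On a POSIX system the set includes the standard names, while a BeOS-only signal like SIGKILLTHR would be absent:

                  ```python
                  import signal

                  # Enumerate the signal names this platform actually defines.
                  here = {s.name for s in signal.Signals}
                  print(sorted(here))

                  assert "SIGTERM" in here          # in the shared middle of the diagram
                  assert "SIGKILLTHR" not in here   # BeOS-only; POSIX does not define it
                  ```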

                  Indeed, I am very aware that POSIX and Unix are different. BeOS aimed for some level of POSIX compatibility via a facade, and it's not really very similar to Unix systems.

                  • waddlesplash 5 years ago

                    > and that Haiku picked it up because the networking situation without BONE is dire

                    Haiku is vastly more POSIX compatible than BeOS ever was (we have mmap, pthreads, posix_spawn, madvise, etc. etc. etc.) It is indeed compatible with both BONE and Net-API BeOS applications, but properly speaking our network stack is neither; under the hood it looks nothing like the BeOS network stack did (only NIC drivers are source-compatible for the most part.)

                    > I don't get the feeling that you're arguing in good faith here. If you know about BONE, you know that I'm not whistling dixie about the lack of sockets on BeOS being a problem.

                    Nowhere did I say it was not a problem. What I did say was that a lack of them does not make BeOS "not a UNIX".

                    > and everything else is different, is that really similar?

                    What does this mean? Do you think that if there is no "/dev/sda", it's "not UNIX"? There was "/dev/zero", "/dev/null", "/dev/random", etc. What exactly are you asking for here?

                    > when they're just supplying whatever hardcoded information they need in order to make their POSIX facade work.

                    The point is that the facade is at least there; I think they were planning to add multiuser in a later release. (Haiku is still single-user on the facade, but under the hood you can useradd, chown, SSH in to other users, etc.)

                    > BeOS is so deeply multithreaded their marketing called it "pervasive". Linux is not at all; fork is the standard model for multiprocessing and most processes can happily be forked without issue.

                    Yes, obviously using fork() in an application that used the GUI would not make sense on BeOS. It doesn't make sense on Linux either; it's just that most "BeOS native" applications had GUIs. So what is the difference here?

                    > Almost nothing on BeOS forks because almost everything needs to use a thread at some point. You know this too; I really think you're playing dumb here in order to make your point seem more convincing.

                    My point is, again, not that "nothing used it", but rather that "this is how the process model works." I am not playing dumb here...

                    I didn't look at the details, but my understanding is that the Be-native API calls (e.g. BRoster::Launch) invoked fork/exec under the hood. So, again, simply "it's not that useful when you are using threads" is not the case here.

                    > The chapter treats signals as sort of a Venn diagram between BeOS and POSIX, where there is some common functionality in the middle

                    I haven't read the book; what's in the diagram? Is it just signal names e.g. SIGKILL, SIGTERM, etc.?

                    Only having some signals and not others does not mean it's "not a UNIX". If it talks like a duck, quacks like a duck, looks like a duck, but it's missing a leg and half its feathers ... it's still a duck.

                    > BeOS's glibc is forked and heavily modified.

                    Not really? They released the modified source due to GPL of course, and they mostly just removed stuff like mmap() or the like.

  • simias 5 years ago

    >particularly young CS students

    >C is not the gospel

    >Worshipers of Bell Labs

    I think you're one or two generations late, you'd have to be a hipster to be a CS student worshiping C and Bell Labs these days. I suspect most CS students these days start around web technologies using VS Code in stock Ubuntu, not hacking C programs using Emacs running in dwm on a heavily customized Slackware.

    I do agree that cat-v and the suckless folks should be taken with a massive grain of salt though.

    • sitkack 5 years ago

      I used to be a suckless person, not official, but I had branded them _my tribe_. I still appreciate them for their simplicity target, but the means and methods for me have changed.

      When I first came across the UHH, my systems exposure was AmigaDOS, DOS, Windows 95, Windows NT 4, Ultrix, FreeBSD and Linux. I had never used VMS in any meaningful way (David Cutler is a well-known Unix hater), and the idea of "Hating Unix" was off-putting because Unix seemed so useful; I internalized the pain it inflicted as my own failing for not properly understanding it. I had no idea how much of a patched-together mess it was. Hell, I think if we had continued using DOS it would have eventually gotten a scheduler, memory protection, pipes and signals, and then ended up in largely the same place.

      At the part of the stack that most programmers are operating now, the operating system doesn't matter much. We can take the pain points from UHH and apply them to our own lack of system design.

    • saagarjha 5 years ago

      > I suspect most CS students these days start around web technologies using VS Code in stock Ubuntu

      More likely, Visual Studio Code on macOS.

      • kkarakk 5 years ago

        haven't the google chromebook kids hit the college market yet? wonder if they'd go for macOS at all

        • saagarjha 5 years ago

          Anecdotally, very few people buy Chromebooks as their primary college computer. Most people, irrespective of their major, need to work with Microsoft Office, sometimes some custom publisher software, etc., and nobody seems to think that Chrome OS is up to the task. Plus, I feel like a "college computer" is a weighty purchase for many students, and buying a $200 Chromebook doesn't really seem right for this.

          For computer science, the only people who buy Chromebooks are the people who install Linux on them or have another machine to SSH into.

        • pjmlp 5 years ago

          Worldwide ChromeOS has even less share than desktop GNU/Linux.

          It is only a thing in the US school system.

          And now with latest changes at Google, eventually not even that.

  • jcranmer 5 years ago

    > However I think it still has a valuable lesson that many, particularly young CS students, would benefit from: Unix is not the perfect fundamental model for computing. C is not the gospel.

    The problem is, this book doesn't actually motivate that lesson. Instead, it spends a lot of its time sniping rather than arguing for why the entire philosophy is perhaps misguided or outright wrong. And sometimes, even the snipe targets are pretty idiotic: when complaining about C++ syntax, for example, the target isn't the incomprehensible nature of template rules [1], but that one C++ compiler didn't lex //* correctly.

    [1] I'm not sure anyone actually understands how name lookups work when templates are in play. Instead there's a lot of guesswork and don't-shadow-names-if-it-might-matter going on.

    • darkpuma 5 years ago

      > "The problem is, this book doesn't actually motivate that lesson. Instead, it spends a lot of its time sniping"

      Yes but that sniping does a good job of deconstructing the historical revisionism that Unix was some beautifully architected thing, rather than something that's become less shit over time.

    • int_19h 5 years ago

      > I'm not sure anyone actually understands how name lookups work when templates are in play

      All you really need to figure out for that is what a "dependent name" is. And that has a very straightforward definition.

    • pushpop 5 years ago

      I thought The UNIX Haters Handbook predated C++ templates?

yebyen 5 years ago

I've just skimmed the TOC and, having never seen this book before, I'm a little in love. Here's why

It's a cynical UNIX manual from the 1990's. Here:

> 14 NFS..............283

> Nightmare File System

> Not Fully Serviceable............284

> No File Security...........287

> Not File System Specific? (Not Quite)..........292

Here also:

> The Oxymoronic World of Unix Security .......243

> Holes in the Armor ........244

> The Worms Crawl In ..........257

I work in IT systems development in a University IT department. I want to read this take on UNIX from 1994, just to see how much better things haven't gotten.

OK, the state of the art has gotten better, but if I compare my work environment, which is byzantine in its complexity and sometimes full of bespoke garbage, to the hells apparently described herein, I bet I can find more similarities than differences.

And that will hopefully make me a more effective communicator about how to make things better with modern convenience technologies that we're not using enough. (Dare I say Kubernetes is the one big thing that is actually majorly different today, compared to UNIX in the 1990's.)

  • marcosdumay 5 years ago

    > I bet I can find more similarities than differences.

    Most of what's in the book has been fixed (not NFS, this one is forever), and we have an entirely new set of things to worry about.

    The book is funny, but every part of it feels old.

    • hyperion2010 5 years ago

      Rereading the section on NFS again, I suddenly realize that I was foolish to have skipped it as out of date. Since my last read I have encountered some incarnation (heh) of every single one of the problems listed in that chapter. I'm betting v4 isn't much better either. Is there a good network file system solution!?

      • shaklee3 5 years ago

        What's wrong with nfs v4? It was quite a bit faster than the alternatives last I checked.

        • marcosdumay 5 years ago

          As the book says:

          > There’s only one problem with a connectionless, stateless (file) system: it doesn’t work.

          > Example #1: NFS is stateless, but many programs designed for Unix systems require record locking

          > Example #3: If you delete a file in Unix that is still open, the file’s name is removed from its directory, but the disk blocks associated with the file are not deleted until the file is closed.

          (Example #2 is a non-issue)

          The security is still lacking, and the system still can't handle failures.

          Yet, yes, it's faster, easier to set-up, and more widely supported than the alternatives.
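          The delete-while-open semantics in "Example #3" above can be demonstrated on any local Unix filesystem. This is a hedged sketch: unlinking an open file removes its name, but the data stays reachable through the open descriptor until the last close. A stateless server has no notion of "still open", which is why NFS clients historically faked this with ".nfsXXXX" rename tricks.

          ```python
          import os
          import tempfile

          fd, path = tempfile.mkstemp()
          os.write(fd, b"still here")
          os.unlink(path)                      # the name is gone...
          print(os.path.exists(path))          # False
          os.lseek(fd, 0, os.SEEK_SET)
          print(os.read(fd, 100))              # b'still here' -- the blocks survive
          os.close(fd)                         # now the space is actually reclaimed
          ```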

          • JdeBP 5 years ago

            One of the saddest to see edits to Wikipedia is this one to its article on NFS.

            * https://en.wikipedia.org/w/index.php?diff=385927363&oldid=38...

            There has been a lot of literature published on the problems with NFS, and the mismatch with POSIX file semantics, since 1988. Some of it was even in the original paper from Sun, in fact. To this day, Wikipedia has nothing on the subject. The words "stateful" and "stateless" appear once each in the entire article, with no indication of their pivotal importance, made much of in the actual literature on the subject.

            * https://news.ycombinator.com/item?id=17950343

Smithalicious 5 years ago

I always have a very conflicted feeling about the Unix-Haters Handbook. It's clearly intended to make legitimate criticisms, but then there is a kind of tongue-in-cheek exaggerated part to it, but then behind that there's clearly a lot of very real bitterness and barely controlled anger.

I think it's a good book and worth reading still, if not especially, in the modern world where to many people Unix and its family are the hot "alternative" that is often favourably compared against Windows and its ilk.

But I also think that Ritchie's "anti-foreword" was already quite on-the-nose when the book came out and has only become more true over time.

alwillis 5 years ago

I loved this book back in the day—it even came with its own barf bag.

But how things change. This was before macOS became BSD-based and before I really got into web development because today, I spend so much time at a shell prompt inside of the Terminal app. I certainly appreciate the elegance of Unix a lot more today.

vermilingua 5 years ago

"C++ is to C as Lung Cancer is to Lung" has to be the best chapter title I've ever read.

mcguire 5 years ago

P. 124:

"We have tried to avoid paragraph-length footnotes in this book, but X has defeated us by switching the meaning of client and server. In all other client/server relationships, the server is the remote machine that runs the application (i.e., the server provides services, such a database service or computation service). For some perverse reason that’s better left to the imagination, X insists on calling the program running on the remote machine “the client.” This program displays its windows on the “window server.” We’re going to follow X terminology when discussing graphical client/servers. So when you see “client” think “the remote machine where the application is running,” and when you see “server” think “the local machine that dis- plays output and accepts user input.

Sigh.

etaioinshrdlu 5 years ago

I think the success of Unix is much better explained by "Worse is Better" than the modularity/composability of little utilities called the Unix philosophy.

  • mixmastamyk 5 years ago

    And DOS was a lot worse than Unix.

    • etaioinshrdlu 5 years ago

      I love DOS and how utterly terrible it is :)

GnarfGnarf 5 years ago

This is a hilarious book, even if you're a *nix supporter.

Highly recommended.

  • dataflow 5 years ago

    There's a relevant joke about Linux: it's very user friendly... it's just very picky about who its friends are.

    • DonHopkins 5 years ago

      Linux is only free if your time is worthless. ;)

      • sls 5 years ago

        The flip side is that Windows 10 Pro is only $200.00 if your time is worthless.

        • TeMPOraL 5 years ago

          If your time is worthless, Windows can - and always could - be free. But if your time is worth something, it's better to shell out that $$$, and also be legally and morally on the safe side.

          • mruts 5 years ago

            Pretty sure installing Kmspico is faster than buying a serial key.

            • 0w4u2a 5 years ago

              Or changing the KMS server to a "customised" one that accepts all activation attempts. Much more secure than giving root to kmspico.

              Find a proper key for your edition here: https://docs.microsoft.com/en-us/windows-server/get-started/...

              slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

              slmgr /skms kms.digiboy.ir # just an example of many KMS servers that exist

              slmgr /ato

              • gnu8 5 years ago

                What can the operator of a KMS server do to my computer though?

                • 0w4u2a 5 years ago

                  1. If you use a custom KMS server like that, I don't know what they can do to you. Maybe a KMS server can send arbitrary commands to a client by design? I don't know. Even if it can't, there could be a bug like a buffer overflow in the KMS client that might allow the server to execute code on your client machine. Since you are only supposed to connect your client to Microsoft's KMS server (I don't believe other alternative and supported implementations of the server exist), maybe the client is not as battle-tested and hardened as it should be.

                  2. Installing kmspico requires admin access to your machine. What kmspico does is install a local KMS server that works the same way as the remote KMS server I suggested: it activates everything you throw at it. But as I said, it needs admin access to your machine, and it's up to you whether you trust kmspico or not.

        • DrScump 5 years ago

          And bandwidth...

        • quickthrower2 5 years ago

          ... and you didn’t get an OEM licence.

      • sandov 5 years ago

        Using computers is free if your time and patience are worthless.

      • w8rbt 5 years ago

        It's like a free puppy.

      • mehrdadn 5 years ago

        Truer words have never been spoken. ;)

    • anoncake 5 years ago

      At least it's not user-hostile, e.g. it doesn't spy on its users.

      • vermilingua 5 years ago

        It's a joke comment, on a joke thread, about a joke book. I wholeheartedly agree, but lighten up.

    • sureaboutthis 5 years ago

      That's a line about Unix. I believe before Linux ever came along.

    • AnIdiotOnTheNet 5 years ago

      Yes, it's a great illustration of why people don't like Linux users: they're full of themselves and believe they're better than everyone else.

Naac 5 years ago

This is one of my favorite books, and I own the hard-copy. As I was reading it, I found that a lot of the problems mentioned seem to be affecting Windows and macOS, while Linux based distros have them resolved.

I found the commentary and history both hilarious and informative. The anti-foreword by Dennis Ritchie is also very funny.

If anyone has found similar CS style humor books, please let me know!

  • delish 5 years ago

    An excerpt from a job posting by Hal Abelson:

    > Applicants must also have extensive knowledge of Unix, although they should have sufficiently good programming taste to not consider this an achievement.

  • depressed 5 years ago

    I enjoyed "How not to program in C++". It's a collection of find-the-bug puzzles, and has hilarious war stories at the bottom of every other page.

  • itronitron 5 years ago

    "Mr. Bunny's Big Cup o' Java" offers a humorous take on Java, but it's not mind-blowing in the way that unix-haters handbook is.

  • marcosdumay 5 years ago

    Somehow I often get myself thinking "oh, it's good that systemd boots this quickly".

  • mistrial9 5 years ago

    a ridiculous and sometimes hilarious poke at all things computers is "The Binary Bible of Saint Silicon" .. from long ago, when computers were fewer.. it makes fun of Catholicism and similar too, so be warned..

    • DrScump 5 years ago

      My late sister actually worked with Saint $ilicon (Jeffrey Armstrong) at Seagate in the 1980s. I have an autographed copy of the book.

      I got my second ever hard disk drive through her employee-purchase discount. Eighty megabytes!

gerbilly 5 years ago

I bought this book in print 20+ years ago.

It came with a vomit bag.

  • sbisson 5 years ago

    If you still have the barf bag, it adds value!

  • 77pt77 5 years ago

    You can buy this in print?

    Like, an actual book! Not just printing it yourself?

    • jasongill 5 years ago

      Yes, there are people selling copies on Amazon

rcarmo 5 years ago

I read this when it came out, and it's only improved over the years. At the time we had just graduated and it felt otherworldly and futuristic, especially considering we had been taught to use VAXen and used 680x0 Macs...

iheartpotatoes 5 years ago

I expected this to be tongue-in-cheek. Hoo boy was I wrong.

phil9987 5 years ago

This book is hilarious, love how the frustration is underlined with email conversations and just the right amount of sarcasm.

vagab0nd 5 years ago

Great book.

And what's funny is, some of the things it criticizes are features I'd not heard of and actually learned from the book itself. E.g. bare double dash.
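For anyone who also hasn't met it: "--" is the POSIX convention that tells an option parser everything after it is an operand, even if it looks like a flag; the same trick as `rm -- -rf` in a shell. A small sketch with Python's argparse (chosen for illustration; any getopt-style parser behaves the same way):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-v", action="store_true")
parser.add_argument("files", nargs="*")

# Without "--", "-v" would be parsed as the flag; after "--", it's a filename.
args = parser.parse_args(["--", "-v", "foo"])
print(args.v)        # False
print(args.files)    # ['-v', 'foo']
```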

moomin 5 years ago

I still own my copy, and highly recommend reading it in conjunction with a more conventional text on operating systems design. It’s quite illuminating.

pts_ 5 years ago

I learned a lot about unix like systems and loved them even more after reading this, ironically. Or maybe because it's sarcastic.

dboreham 5 years ago

Still have my copy..

fromthestart 5 years ago

Amusing read. Most of this material is before my time.

But in geoscience grad school just five or so years ago, a number of older but technically minded professors were still using Sun machines, and most of the manuals for geoscientific software tend to provide supplementary information for such now-exotic systems/OSes. So I wonder: has Windows replaced Unix as the technically inferior/evolutionarily superior champion?

systemBuilder 5 years ago

This is a tongue in cheek but also piece of shit MIT propaganda piece mostly written by the clowns who created the abortions that were Multics and LISPMs. Like as if every software engineer should start their next project by warming up their soldering irons to add obscure new features to their CPU's!

I know because I attended that school and worked with the fathers of Multics. The sad fact is that MIT produces extreme bloatware that nobody understands nor needs (gnu emacs cough cough). MIT has almost ruined unix with bloatware like 'configure' and gcc 'extensionettes'. The repeated rant about memory mapped files (a Multics bedrock feature) has been refuted as showboating hundreds of times by OS designers like my manager at Xerox OSD, the designer of Pilot, and an ex-MIT professor who never drank that Kool aid!

What's happening in OS's right now is that European bloatware is strangling Linux ... The reason Unix got people so thrilled is that it could 'terminate and stay resident' in one single human mind!

  • musicale 5 years ago

    Nice rant, systemBuilder. ;-)

    Say what you will about Multics, but it had essentially "no buffer overflows" [1], simply because it was written in a more memory-safe language (PL/I). (The stack also grew upward rather than downward iirc.)

    It also had several nice features that I wish UNIX/Linux hadn't forgotten, even simple ones such as long names for commands (e.g. list-segments in addition to the 'ls' abbreviation.)

    "Thirty Years Later: Lessons from the Multics Security Evaluation (2002)", https://news.ycombinator.com/item?id=16956386

  • delish 5 years ago

    > This is a tongue in cheek but also piece of shit MIT propaganda piece mostly written by the clowns who created the abortions that were Multics and LISPMs.

    Sure--the people who wrote systems with opposing philosophies to the "unix philosophy" are going to be the ones who write this book. And that they failed-in-the-market ("abortions") is already evident by the fact that HN is probably majority unix (mac) or unix-like (linux).

    I confess I'm one of those people who has never used a lisp machine, and admires them greatly. I'd love to hear specific reasons I shouldn't admire them.

    • systemBuilder 5 years ago

      LISPMs, Multics, and Symbolics machines all ran software that depended on custom instructions, tailored precisely for the languages they executed, and so could not be ported to any other machine.

      • vermilingua 5 years ago

        Given that you're so opposed to custom instructions, I'm sure you also abstain from using x86, ARM, etc. which all now include custom in silico instructions for decoding, crypto, etc.

        Keep fighting the good fight, comrade.

      • mruts 5 years ago

        OpenGenera was ported to the 64-bit DEC Alpha. Also, a VM exists for Linux/OSX. It works pretty okay.

        https://github.com/ynniv/opengenera

        you first have to get the opengenera sources, though. But they aren't hard to find.

  • pjmlp 5 years ago

    That abortion called Multics was assessed by DoD as not having any of the memory corruption security exploits that plague UNIX to this day.

  • cyberpunk 5 years ago

    I'll assume you're talking about systemd here? How is linux being strangled? How is it Europe's fault? As far as I can see linux is booming and almost ubiquitous as far as backend OS's go...

    • systemBuilder 5 years ago

      Operating systems reflect the cultures that created them. UNIX was a reaction to the enormous overcomplexity of Multics and other operating systems (OS/360) of its day. One file type, one record type ("THE BYTE"),one device driver model, NFA research (~1 pattern language vs zero in other operating systems) and a way to plug together pipeline programs on the command line not seen before.

      UNIX succeeded precisely because it was powerful and yet so simple a person could read any part of it and change it.

      However, the kernel of UNIX is now beyond ken for most developers precisely because the developers pursued complexity and marginal features for marginal benefits, something very common in European governments and a part of that culture.

      The shell commands went from maybe 50kloc to millions of lines precisely because the Linux developers were desperate for contributions and let MIT bloatware people control that part of the source code, strangling Linux with MIT culture.

      • cyberpunk 5 years ago

        Kernels do a lot now that they didn't use to. I'm not sure what marginal features you're talking about, can you share some with us? The ABI may look the same as it did in the 1990's, but things under the hood have gotten a lot smarter, and those smarts require more code.

        As for userland.. Even OpenBSD, my personal preference of operating system and one which is known for its 'lean'ness of base userland has 10x this kloc in /bin/ and /usr/bin:

          # find bin usr.bin | egrep '\.c$|\.h$' | xargs wc -l | tail -1
            689562 total
        
        Coreutils, by comparison:

           coreutils$ find . | egrep '\.c$|\.h$' | xargs wc -l | tail -1
             91991 total
        
        (The difference is because coreutils doesn't contain everything obsd's /bin and /usr/bin does I suppose, openssh, tmux etc)

        Either way, that's not a mad amount of code imo... Which parts of this are 'MIT bloatware' ?

    • darkpuma 5 years ago

      > "How is [systemd] Europe's fault?"

      The only way that makes sense is if they're referring to Lennart being German, but that's kind of a bullshit argument because systemd was not some novel idea; launchd was around first and came from the American west coast.

      • f2f 5 years ago

        counterpoint: launchd works.

        • dbdjfjrjvebd 5 years ago

          What doesn't work about systemd?

          • 0w4u2a 5 years ago

            I liked the idea of systemd because I love launchd and I always thought it was the best approach.

            After years of dealing with systemd, I've finally given up on it, and I'm back on openrc.

            (Yes, I know I'm not answering your question, but I'm not up to the task... ;P)

    • systemBuilder 5 years ago

      No this is not a rant about systemd. The idea behind UNIX was a small set of lego building blocks that could be plugged together to construct something way larger than the parts themselves. The idea in Linux is that once you become an open source contributor more software is always better and more features is always better and hey why doesn't everyone build a monolith with 150 subcommands, which does not interoperate with any other software, and this describes most of the software built for Linux in the past 10 years.

      • systemBuilder 5 years ago

        I once worked on a project at Qualcomm and wondered if the head had any idea what he was doing. I went to him and asked what is the complexity of a change you are willing to put up with for a 1% increase in cellular sector throughput. He had no idea. I should have quit the next day. The project was a commercial failure (802.20). Too big and late to market. Same problem that killed CISC machines. Most Linux developers never think in these terms and hence, long term, Linux is dead.

towaway1138 5 years ago

I bought this when it came out, but thought it was horrible. The only good bit was the preface by one of the Unix authors (Ritchie?). The rest just seemed like whining by people that didn't really get the Unix philosophy.

  • mruts 5 years ago

    I mean, you could say that everyone hates Nazis because they don't really get Nazi philosophy.

    Both your statement and my statement don't really mean anything.