pjmlp 10 days ago

$100 (base OS + bare-bones devtools) + $90 (for the comfort of macros) + $75 (debugging beyond text output) + $75 (for a proper code formatter) = $340 in 1978.

That's around $1,628.73 in today's money, plus hardware costs, so no wonder that many of us would just stick with the base install and make do with what was available.
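As a quick sanity check of the arithmetic above (a sketch only; the inflation multiplier is just what the quoted dollar figures imply, not an official CPI number):

```python
# The four 1978 list prices quoted above.
items = [100, 90, 75, 75]  # base OS/devtools, macros, debugger, formatter
total = sum(items)
print(total)  # 340

# Inflation multiplier implied by the quoted $1,628.73 figure
# (illustrative only, not an official CPI value).
multiplier = 1628.73 / total
print(round(multiplier, 2))  # 4.79
```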

For example, I never had an assembler for the Timex 2068; instead I painfully translated opcodes into DATA statements by hand, or used a hexdump editor that appeared in some computer magazine as one of those type-in listings.
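For readers who never did this by hand, the workflow described above can be sketched roughly like so: hand-assemble the Z80 opcodes, then emit them as BASIC DATA lines for a small READ/POKE loader to stuff into memory. All names and the sample opcode bytes here are illustrative, not taken from any real listing:

```python
def to_data_lines(opcodes, start_line=9000, per_line=8):
    """Group raw opcode bytes into numbered BASIC DATA statements."""
    lines = []
    for i in range(0, len(opcodes), per_line):
        chunk = opcodes[i:i + per_line]
        lines.append(f"{start_line + 10 * (i // per_line)} DATA "
                     + ", ".join(str(b) for b in chunk))
    return lines

# A tiny hand-assembled Z80 fragment (illustrative):
#   LD A,2 / OUT (0FEh),A / RET
code = [0x3E, 0x02, 0xD3, 0xFE, 0xC9]
for line in to_data_lines(code):
    print(line)  # 9000 DATA 62, 2, 211, 254, 201
```

A matching BASIC loader would then just READ each value and POKE it into RAM at the load address.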

  • zwieback 10 days ago

    Yes, stuff was ridiculously expensive back then. I was lucky that my dad bought an Apple ][ for home use but none of my friends could afford anything like that. All our software was pirated or we wrote it ourselves. I have fond memories of my youth in the 8 bit era but this is the golden age of home computing, right now, and it keeps getting better.

    • musicale 9 hours ago

      > my dad bought an Apple ][ for home use but none of my friends could afford anything like that

      Apple's pricing hasn't changed that much. ;-)

      > Introductory price: US$1,298 (equivalent to $6,530 in 2023)

      source: https://en.wikipedia.org/wiki/Apple_II

  • ghaff 10 days ago

    Basically, dabbling in PCs used to be really expensive relative to today. The details matter, of course, but a PC and software in the early eighties cost many thousands of dollars in today's money.

    • bombcar 17 hours ago

      It was expensive if you were at the forefront; but things were changing so fast that if you were willing to be a few years behind, or use an "unfavored" style, you could get in quite cheaply.

      That really accelerated around '95 or so with Windows 95 - anything that couldn't run it became almost free overnight.

    • justin66 10 days ago

      An awful lot of other expenses were a lot cheaper, so computer stuff wasn't quite so extravagant for a working person to splurge on as it might seem. The defining characteristic of early users was curiosity, not wealth.

      • ghaff 10 days ago

        Depends where you lived. The "other expenses" were mostly housing in expensive areas of the country.

        • justin66 10 days ago

          If areas with more expensive housing were less likely to have a PC, that’s not a trend that I ever noticed.

    • nsguy 10 days ago

      Sinclair ZX-80 was $625 in today's prices when it was released in 1980. Not too crazy.

      EDIT: You could get an assembler/disassembler for the ZX-81 for $20 (1982). Also, at the time a lot of software was "shared" between enthusiasts.

      • ghaff 10 days ago

        There were some very clearly hobbyist computers, like the Sinclairs and the Commodore 64, that were fairly cheap in the early 80s.

        But get into anything like a PC clone--much less anything with a hard drive--and the prices went up fast.

        • nsguy 10 days ago

          Sure. Until the flood of Taiwanese clones brought the price down, the cost of a "proper" IBM PC was high. I thought we were talking "personal computers" more broadly here, though. The IBM PC was released mid-1981...

          • ghaff 10 days ago

            There were basically some toys that let you do a bit of BASIC or whatever hooked up to your 13” TV. But S-100 bus, Apple, or PC clones weren’t cheap.

            • nsguy 8 days ago

              They were general purpose computers. You could do BASIC or machine language and run other software on them. Lots of software developers grew up on those.

              • musicale 9 hours ago

                Definitely true, especially considering the original Apple II, whose base model came with 4K of RAM and didn't include a floating point BASIC.

                Of course the real Apple II competitor from Sinclair was the ZX Spectrum.

                https://en.wikipedia.org/wiki/ZX_Spectrum

peter_d_sherman 11 days ago

>"IBM wished to purchase CP/M outright, whereas DRI sought a per-copy royalty payment in order to protect its existing base of business. The meeting ended in an impasse over financial terms, but Gary believed they had essentially agreed to do business.

Kildall tried to renew the negotiations a couple of weeks later. IBM did not respond because, in the meantime, Bill Gates purchased an OS from Seattle Computer Products that was written to emulate the look and feel of CP/M. He then sold a one-time, non-exclusive license to IBM, which used the designation PC DOS. With great foresight, he retained the right to license the product to others as MS-DOS."

[...]

>"IBM announced the PC on August 12, 1981, but with the PC-DOS list price set at $40 versus $240 for CP/M, most customers simply chose the former as the lower-cost option."

And the rest... is computer history...

(https://en.wikipedia.org/wiki/History_of_Microsoft)

  • layer8 10 days ago

    > that was written to emulate the look and feel of CP/M.

    And that’s how DOS and then Windows ended up with “/” as the command-line option indicator and therefore “\” as the directory separator.

    • ok123456 10 days ago

      CP/M didn't have directories until 3.0, which was released in 1983. DOS didn't have them until 2.0.

      The backslash was chosen because the forward slash was used for arguments. This convention was taken from other timesharing operating systems of the day.

      • lproven 10 days ago

        This is not true.

        [1] CP/M 3 (normally called "CP/M Plus") does not have or support subdirectories, only user areas, just the same as earlier versions (such as B4:MYFILE.TXT or A0:WORDSTAR.COM). Source: me -- I own two CP/M 3 computers. Feel free to check the manual:

        http://www.bitsavers.org/pdf/digitalResearch/cpm_plus/CPM-Pl...

        [2] CP/M does not have standard command line switches or arguments, so there is no standard delimiter.

  • TacticalCoder 10 days ago

    > IBM did not respond because, in the meantime, Bill Gates purchased an OS from Seattle Computer Products that was written to emulate the look and feel of CP/M. He then sold a one-time, non-exclusive license to IBM...

    Yeah. It's incredible how IBM handed the golden keys to the castle on a plate cut in a solid 100000000 carat diamond to Microsoft.

    This has to be the absolute worst business deal ever made in history: 3 trillion market cap for Microsoft. And IBM created that.

    • achairapart 10 days ago

      The story was not so linear, though. These new 16-bit machines (8088 & 8086) were useless without an OS. They needed it badly and fast.

      86-DOS/PC-DOS/MS-DOS was born as a kind of throw-away OS to fill this hole temporarily. Microsoft was betting hard at the time on another OS called Xenix, which was Unix-based!

      Then, somehow the industry decided that DOS was good enough, it lasted ~15+ years and this made Microsoft a fortune.

      In the end, Bill Gates saw a business opportunity and acted quickly and ruthlessly. Gary Kildall, on the other hand, delayed the initial CP/M-86 development for too long.

      By an irony of fate, Gary Kildall spent the rest of his career anticipating and innovating software concepts (multi-user and multitasking systems, the Graphics Environment Manager, even the first encyclopedia on CD-ROM, ten years before MS Encarta), but somehow the timing was always wrong, and none of it ever marked a return to the first CP/M glory days.

      Gary Kildall was the greatest PC software pioneer, and as the old saying goes: “Pioneers take the arrows, settlers take the land”.

      • floren 10 days ago

        > They needed it badly and fast.

        Lucky for them, Microsoft was happy to provide an OS badly and fast!

        • exe34 10 days ago

          They got so good at it, they've been doing it ever since!

      • pjmlp 10 days ago

        Being old enough to have used Xenix as my UNIX introduction, in hindsight there is a certain irony that Microsoft ended up getting rid of it, followed by not being so serious about the UNIX subsystem on Windows NT.

        • dblohm7 2 days ago

          AIUI the UNIX subsystem was there entirely for the purpose of getting Windows NT certified at C2-level security.

    • smallstepforman 10 days ago

      That is debatable. The PC clone business made the IBM PC the industry “standard”, since it opened up the supplier list to many companies. IBM alone as a sole manufacturer (e.g. the PS/2 and OS/2) did not manage to conquer the world. And MS had the insight to offer DOS to everyone. MS really won because they were cheaper than the alternatives (e.g. Berkeley Softworks' GEOS, CP/M, Unix) until critical mass was established. Had FreeBSD been available in the mid 80s, it would have become the “standard” today. Cheap and good enough always wins the day; proprietary and expensive makes you 2nd best.

    • wglb 10 days ago

      > Yeah. It's incredible how IBM handed the golden keys to the castle on a plate cut in a solid 100000000 carat diamond to Microsoft.

      In all fairness, very few recognized at the time what that would lead to.

      IBM had a deep, entrenched culture and serious dominance over the whole computer business. There was hardly an article during that time that didn't characterize the industry as "Snow White and the Seven Dwarfs", where IBM was of course Snow White; everyone else was essentially looking for crumbs. A similar example is illustrated in David Halberstam's The Reckoning, where General Motors was so dominant and culturally resistant to any new ideas from the outside that visitors from Japan were allowed to meet with the executives, but no discussions of the auto industry were allowed.

      So in the eyes of IBM leadership, this was a tiny, goofy idea that would not amount to much. One leader at IBM scoffed at PC DOS, calling it a toy.

      Well, it was a toy that essentially ate their world.

    • WillAdams 10 days ago

      Even without that deal, Bill Gates would have created something --- Microsoft did a _lot_ of custom development, e.g. the software for the Tandy/Radio Shack TRS-80 Model 100 (the last bit of production coding which B.G. did), and developed and licensed a lot of other products.

      He also made his share of deals which took a lot of value away from other companies on his own:

      https://www.folklore.org/MacBasic.html

      (buying MacBasic from Apple for $1 so as to kill it and continue the MS BASIC license for the Apple II)

      An interesting insight into this timeframe is Jerry Kaplan's _StartUp: A Silicon Valley Adventure_

      https://www.goodreads.com/book/show/1171250.Startup

      the whole "embrace, extend, extinguish" thing shows up as documented by folks feeling its effects.

    • timbit42 10 days ago

      It's interesting that Gary was working for Intel when he created CP/M and Intel wasn't interested in it. Then IBM chose the Intel 8088 for their PC. Intel could have offered CP/M to IBM if they'd kept it.

    • flohofwoe 9 days ago

      I don't think anybody at IBM (or anywhere else) had an idea how successful the PC would eventually become, it needed many happy accidents for that to happen (and Microsoft was probably more responsible for that success than IBM, the other factor being cheap clones).

      The original IBM PCs were thoroughly unimpressive, even when compared to much cheaper 8-bit home computers.

      It was much more likely that the PC would have gone the way of the Dodo like most other computer architectures of that time.

  • JohnClark1337 11 days ago

    I've heard three different versions of this story now:

    1. Gary was out joyriding in his plane, and when the reps from IBM showed up, his wife turned them down because she didn't feel she had the authority to continue negotiating without him.

    2. The IBM reps showed up, but it was Gary's wife's birthday, so he told them he would need to reschedule.

    3. This version.

    I wonder if we will ever really know what took place that day.

richard_shelton 10 days ago

I hope someone will write an article about Gary Kildall's achievements in the theory of compilation. For us compiler writers, he is a legendary figure.

  • wglb 10 days ago

    I have a copy of his paper at U of W titled "A Code Synthesis filter for basic block optimization" 72-3-01. The code behind the paper was written in XPL.

    • wglb 10 days ago

      Further, his work in that area is referenced in other compiler-related books, such as Flow Analysis of Computer Programs by Matthew S. Hecht.

nsxwolf 10 days ago

The article mentions Gary died in an accident; I recall reading elsewhere that he fell and hit his head in a brawl at a biker bar.

  • jetrink 10 days ago

    > Seattle Times: Computer Pioneer's Death Probed -- Kildall Called Possible Victim Of Homicide

    > Kildall was injured at the Franklin Street Bar & Grill on July 8 but refused medical treatment. He was taken to a hospital the following day and released. But he was readmitted Sunday and died the following day.

    > Kildall could have died either from a blow to the head or from hitting his head as he fell, said Sollecito.

    > The first description of the incident at the bar, he said, was that Kildall had been in the bar between two to 20 minutes. Then witnesses "turned around, and he was on the floor," the sergeant said.

    > But Sollecito said stories are changing. "That version isn't going to hold up," he said.

    https://archive.seattletimes.com/archive/?date=19940716&slug...

    The source of that story seems to be the Monterey Police Department, but nothing came of the investigation.

guyzero 10 days ago

Headquartered in Pacific Grove? Wild. The alternate history of Silicon Valley being the Salinas Valley instead of the Santa Clara Valley.

  • krallja 10 days ago

    Silicon Valley is so named because of Shockley Semiconductor Laboratory, Fairchild Semiconductor, and Intel.

    Shockley moved from NJ to Palo Alto in 1956 to care for his ailing mother.

uticus 10 days ago

> "Together with his invention of the BIOS (Basic Input Output System), Kildall's operating system allowed a microprocessor-based computer to communicate with a disk drive storage unit and provided an important foundation for the personal computer revolution." (IEEE milestone plaque cited in article)

The whole idea of an OS as an important thing is debatable. For example:

> The operating system is another concept that is curious. Operating systems are dauntingly complex and totally unnecessary. It’s a brilliant thing that Bill Gates has done in selling the world on the notion of operating systems. It’s probably the greatest con game the world has ever seen.

> An operating system does absolutely nothing for you. As long as you had something—a subroutine called disk driver, a subroutine called some kind of communication support, in the modern world, it doesn’t do anything else.

(Chuck Moore, interview in "Masterminds of Programming", 2009)

  • musicale 9 hours ago

    I seem to recall someone, perhaps Alan Kay (or another person familiar with Smalltalk, Lisp, etc.), saying something like "operating systems are only necessary when your programming language/environment is deficient; ideally you shouldn't have one."

    I'm unsurprised that Moore, who invented his own complete-machine programming environment, would concur.

    • xkriva11 4 hours ago

      "An operating system is a collection of things that don't fit into a language. There shouldn't be one." -- Dan Ingalls

  • djur 10 days ago

    Chuck Moore's alternative to the modern OS is colorForth. The man is brilliant, and he is also a crank in the way brilliant people can sometimes be. When he says "you" he's really referring to programmers (and he's still wrong about that). I don't think the category of "non-programmers who use computers" is a significant part of his analysis.

  • hmry 10 days ago

    This works great as long as you have exactly one program running on your computer and fully trust its developers to not mess up your hardware or lose your data.

    • flohofwoe 9 days ago

      This sort of protection is not a requirement for an operating system, though. For instance, AmigaOS had full pre-emptive multitasking, device drivers, a windowing UI, a filesystem, and all sorts of user-facing features expected of a modern operating system.

      But it had no memory protection, no "kernel mode" or filesystem access rights, the hardware was fully documented and accessible to all code running on the machine.

      Each program could easily take over the entire machine and do literally anything (including destroying all your data).

      Multi-user capability and the idea that a user only has limited access to the computer was really a mainframe thing that leaked into personal computing during the 90s.

    • uticus 10 days ago

      I think this is the biggest argument for an OS. The OS is a virtual nanny, a representative of yourself. It does what you would do if you could control the machine: provide security for data, keep code from jumping fences, and ensure multiple simultaneous routines don't block each other.

      • sophacles 10 days ago

        If you're that concerned about it, go grab a raspberry pi and get cracking on your own compute environment. You don't actually need all the OS stuff as you keep arguing, so go make it so.

        Of course you're ignoring the fact that linux is open source, so you can go make the OS do what you want it to - that's one path you could take.

        Or you could just write code for the rpi and build up all that stuff yourself, no problem.

        The only person stopping you from doing this is yourself.... go do it and show us the "no-os" way.

        • uticus 10 days ago

          I think you read my previous comment as an argument against what I was replying to. I was actually agreeing with them and acknowledging they have a great point.

          • sophacles 10 days ago

            I did misread, thanks for the clarification.

  • bregma 10 days ago

    Interesting. Remember when, in order to run a DOS application, you had to edit a config file to tell it which graphics card, sound card, serial port, and printer you had? Also, tweak the jumper on the hardware to select which IRQ they used to avoid conflicts?

    That was because your application booted the OS out and ran on naked hardware. It was as simple as IRQ 1-2-3, plus Hercules, SoundBlaster, ExpressCard and extended or expanded RAM. And maybe typing ATDT to get the modem's attention.

    • uticus 10 days ago

      I think you're confusing what an OS provides with the concept of configuration maintenance, especially "plug and play". Or to make my point more clearly: all the configuration you listed still must be done for an OS by something outside the OS, i.e. the bootloader (GRUB, etc.), the device manager, or the like. An OS doesn't magically make configuration for different hardware unnecessary; indeed, it doesn't even handle such configuration itself.

      > ... and ran on naked hardware.

      Not quite, drivers are still involved. An OS may load drivers, expose a facade for drivers, or even abstract whole layers of driver subroutines. But it isn't like it's a choice of either OS or bitbang the hardware.

    • nsguy 10 days ago

      DOS had nothing to do with graphics and sound; all that was at the application level. DOS provided the command line (your shell), the filesystem, and a few basic abstractions. Generally speaking, applications didn't boot the OS out, since they still used it to access files etc.

  • xnorswap 10 days ago

    That sounds like nonsense. The idea that a stable API provides no value is contrary to all the evidence.

    Look at graphics APIs like DirectX, and at what happened when jQuery was released, for more evidence that a stable API covering disparate collections of hardware (or browser) APIs provides a vast amount of value.

    The idea that an operating system providing a stable API over different computers is something that doesn't provide value is hogwash.

    • AnimalMuppet 10 days ago

      It's more than just a stable API across different computers, though that's a huge deal. It's also code that I don't have to write. I don't have to write the disk drivers. I don't have to write the keyboard driver. I don't have to write the mouse driver. I don't have to write the screen driver. I can build what I'm trying to build on top of that, without worrying about all of that detail.

      • uticus 10 days ago

        Not disagreeing (completely) with you, but worth pointing out the very next question in the interview I quoted above looks like this:

        > What about device support?

        > Chuck: You have a subroutine for each device. That’s a library, not an operating system. Call the ones you need or load the ones you need.

        ...for sure it's a departure from the common way of doing things, at least. And it's almost impossible to imagine now; today's generation would have to look at retro-computing to see something akin to the idea [0] [1].

        [0] https://en.wikipedia.org/wiki/Commodore_64#BASIC

        [1] https://dfarq.homeip.net/commodore-64-operating-system/
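        Moore's "library, not an operating system" idea can be sketched in a few lines: each device is an ordinary subroutine the application calls directly, with no kernel in between. Everything here (the names, the fake devices) is a hypothetical illustration, not any real API:

```python
# Pretend "drivers": plain subroutines, linked/imported directly by
# the application -- no system calls, no scheduler, no file system.
FAKE_DISK = {0: b"hello"}   # stand-in for real disk hardware
SERIAL_LOG = bytearray()    # stand-in for a serial port

def disk_read(sector, buf):
    """Fill buf from the in-memory 'disk'."""
    buf[:] = FAKE_DISK[sector]

def serial_write(data):
    """'Transmit' bytes by recording them."""
    SERIAL_LOG.extend(data)

# The application calls only the subroutines it needs.
buf = bytearray(5)
disk_read(0, buf)
serial_write(buf)
print(bytes(SERIAL_LOG))  # b'hello'
```

        The trade-off, as the replies note, is that every application now carries its own driver and hardware-detection code.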

        • AnimalMuppet 10 days ago

          No, I can imagine it. I worked on an embedded device that I think worked this way (vxWorks), though I didn't get into the device driver side of things very much.

          But it's still a bad approach for general computers, for two reasons.

          1. With the current setup, we need an OS that detects the hardware and loads the right code. With Chuck's proposal, that responsibility - and the code to do it - moves to every application. Since there are far more applications than OSes, this is highly inefficient.

          2. Let's say I buy an application, and then some time later I buy a new computer. If the new computer has a compatible OS, I can just run the existing program on the new computer. But under Chuck's approach, I need a new version of the program as well - one that knows how to detect the new hardware, and what drivers to load in response.

          So (virtually) nobody is going to do it that way. It might have made sense when you were fighting for every byte of memory, but that isn't the world we live in any more, and it hasn't been for decades.

        • skydhash 10 days ago

          I think the Linux kernel is close to that. And if you go with a minimal distro like Alpine, then you're not that far away from what is described. Compare it to macOS, where just launching a program is a whole shenanigan.

          I have a small digital audio player running Linux, and the interface is a single program. I like OSes for their flexibility, but when I look at my computing needs (as a user), I'd be fine with just a small list of software. Anything else is nice, but not "essential".

          I think that's why the iPad has been successful: it's the graphical DOS. You launch an app and, for all purposes, it may be the only one running on the hardware.

    • toast0 10 days ago

      An operating system provides those things, but it's not the only way they could be provided.

      Disk drivers could be provided by BIOS/UEFI (and they are) or through some library, or through standard hardware interfaces (AHCI and similar). Same deal for graphics.

      Does that need to be bundled with a kernel and a bunch of other stuff? Not strictly. All that other stuff is valuable too, most of the time. On the other hand, if you're using the whole computer for one thing, the OS can get in the way, and often isn't providing much you couldn't get as libraries.

      • babypuncher 10 days ago

        > Disk drivers could be provided by BIOS/UEFI

        They can, but I don't want direct access to a disk driver. I don't want to care about the hardware, or even what file system is in use. I want to write a log file to this folder on this disk. Once the BIOS/UEFI starts providing that level of abstraction, how is it different than an operating system?

        Having an operating system means there is a lot less code I have to write, because all the nitty gritty details of networking, talking to hardware, and all that other stuff is already taken care of for me. Someone else already invented the wheel so I don't have to.

    • uticus 10 days ago

      > The idea that a stable API provides no value is contrary to all the evidence.

      ...agreed

      > Look at graphics APIs like DirectX and look at what happened when jQuery was released for more evidence that having a stable API...provides vast amount of value.

      I wonder if it's more correct to say that the concept of a virtual machine provides a vast amount of value - like Java's "write once, run anywhere".

      But your examples (DirectX, jQuery) could also prove a different point: high levels of abstraction layers lead to maintenance and efficiency nightmares. For example, jQuery sits on JavaScript, sits on browser's engine, sits on OS, sits on microcode. A jQuery-focused dev would still spend a significant amount of time setting up, debugging, accommodating, and troubleshooting things on all the other layers compared to just getting things done in their (somewhat) stable jQuery ABI. The layers are helpful sometimes, and hurtful sometimes, and the net gain is not always as great as expected.