bruce511 12 days ago

For a long time, when training programmers, I'd explain that bugs come about when we write code. Code left alone "does not rust".

In the last decade I've swung the other way. While the code does indeed "not rust", it does decay.

Primarily this is because software now is more integrated. Integration means communication, and there are changes happening all the time. SSLv3 and TLSv1 went away. XML became JSON. And don't get me started on SaaS APIs that have a "new version" every couple of years. And yes, the elephant in the room: security.

Unfortunately, lots of developers in my age group believe in "if it's not broken, don't fix it". They are the hardest cohort to convince that "you should update things even when it's not urgent. Updating when it's urgent is no fun at all."

There's no habit as hard to break as a best practice that's turned bad.

  • necheffa 11 days ago

    > Code left alone "does not rust".

    I think I am uniquely qualified to speak authoritatively on this subject as I regularly work in code bases dating back to the mid 1950s (nearly 70 years old if you are counting).

    Code left alone absolutely does rust. Everything around the code changes. You can wax philosophical about how "technically the code didn't rust, the world did", but at the end of the day that old code aged into a position where it no longer functions optimally or even correctly.

    Just to tease with an example: the original machine some of this code was written for only had a few kilobytes of memory, half of which was taken up by the OS and compiler. In order to fit the problem you are working on into memory, they effectively wrote a custom swap allocator that looked and felt like you were using in-memory arrays but were actually using tape storage. Fast forward to today, where we have a single compute node with 128 physical CPU cores and a terabyte of memory: the code still diligently uses disk storage to minimize RAM consumption, but runs like dog shit on a single thread anyway. Not to mention all the integers used to store pointers have had to be widened over the years as you went from 48-bit words to 32-bit words to 64-bit words.

    • conover 10 days ago

      "The world moved on", as it's said in the Dark Tower series.

  • userbinator 12 days ago

    > Integration means communication, and there are changes happening all the time. SSLv3 and TLSv1 went away.

    The problem is a lack of modularity and standardised interfaces where it matters (and modularity where it doesn't, causing additional useless complexity), and incentives that reward planned/forced obsolescence.

    Also related: TLS 1.3 on Windows 3.11: https://news.ycombinator.com/item?id=36486512

    • vegetablepotpie 11 days ago

      I’d say interfaces are part of the problem. People are happy to change interfaces when it makes the implementation of a component easier. I’ve seen interfaces change without notice and this imposes maintenance overhead.

      I’d say that complexity is the other part of the problem. As developers, we like to organize functionality into packages and version them based on the type of changes we make. We like layers of abstraction to contain classes of problems so we can work on them independently. But all software is a list of instructions, and all the ways those instructions can interact cannot be captured in a semantic version number or a unit test. Subtle changes can break functionality in surprising ways.

      • oooyay 11 days ago

        One of my favorite forms of testing is offline E2E testing. Python and Go are very adept at this kind of testing, which is nice. The idea is that you use the exposed APIs (read: anything meant for consumption; it could be REST, but it could also be important packages) just as a user would use them. The nature of this testing is that it captures the discrete contracts you're mentioning by exercising the API itself and enforcing them.
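
        As a rough sketch of what that can look like in Go (the package, handler, and endpoint names below are hypothetical, not from any real project), the whole service is stood up in-process and the test only ever talks to its public HTTP API:

          // Offline E2E sketch: exercise the public API exactly as a client
          // would, but run the server in-process so no network or live
          // dependencies are needed.
          package shorten_test

          import (
              "encoding/json"
              "net/http"
              "net/http/httptest"
              "strings"
              "testing"
          )

          // NewHandler stands in for the package's exported constructor.
          func NewHandler() http.Handler {
              mux := http.NewServeMux()
              mux.HandleFunc("/shorten", func(w http.ResponseWriter, r *http.Request) {
                  var req struct {
                      URL string `json:"url"`
                  }
                  if err := json.NewDecoder(r.Body).Decode(&req); err != nil || req.URL == "" {
                      http.Error(w, "bad request", http.StatusBadRequest)
                      return
                  }
                  json.NewEncoder(w).Encode(map[string]string{"short": "/s/abc123"})
              })
              return mux
          }

          func TestShortenEndToEnd(t *testing.T) {
              srv := httptest.NewServer(NewHandler())
              defer srv.Close()

              // Talk to the service only through its public contract.
              resp, err := http.Post(srv.URL+"/shorten", "application/json",
                  strings.NewReader(`{"url":"https://example.com"}`))
              if err != nil {
                  t.Fatal(err)
              }
              defer resp.Body.Close()

              if resp.StatusCode != http.StatusOK {
                  t.Fatalf("got status %d, want 200", resp.StatusCode)
              }
              var out map[string]string
              if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
                  t.Fatal(err)
              }
              if out["short"] == "" {
                  t.Fatal("expected a shortened URL in the response")
              }
          }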

        There's a discussion to be had about the right balance of E2E vs unit tests, but that isn't this discussion.

      • eddd-ddde 11 days ago

        An interface that changes because of the implementation is not really an interface.

        I can change from ext2 to btrfs and my program won't die because open and read are a good interface.
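
        To make that concrete, here is a minimal Go sketch: the function below depends only on the open/read contract (via os.Open and Read), so it behaves the same whether the path lives on ext2, btrfs, NFS, or tmpfs. The path used in main is just an arbitrary example.

          package main

          import (
              "fmt"
              "io"
              "log"
              "os"
          )

          // countBytes depends only on the open/read interface, not on any
          // particular filesystem implementation underneath it.
          func countBytes(path string) (int64, error) {
              f, err := os.Open(path) // same call on ext2, btrfs, NFS, tmpfs...
              if err != nil {
                  return 0, err
              }
              defer f.Close()

              var total int64
              buf := make([]byte, 64*1024)
              for {
                  n, err := f.Read(buf) // read() semantics are the contract
                  total += int64(n)
                  if err == io.EOF {
                      return total, nil
                  }
                  if err != nil {
                      return total, err
                  }
              }
          }

          func main() {
              n, err := countBytes("/etc/hostname") // arbitrary example path
              if err != nil {
                  log.Fatal(err)
              }
              fmt.Println(n, "bytes")
          }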

        • f1shy 11 days ago

          If 10% of people doing SW understood this, the world would be much better.

          • cpeterso 11 days ago

            I fear good ol’ fashioned information hiding is being forgotten in modern software design. Systems are increasingly complicated, and developers are too busy just trying to get all their code to fit together to worry about the difference between an interface and encapsulation.

            https://en.m.wikipedia.org/wiki/Information_hiding
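
            For what it's worth, a tiny Go sketch of the distinction (the Store/memStore names are made up): the interface is the contract callers depend on, while information hiding keeps the representation private so it can change without breaking them.

              package main

              import "fmt"

              // Store is the interface: the only thing callers may depend on.
              type Store interface {
                  Put(key, value string)
                  Get(key string) (string, bool)
              }

              // memStore hides its representation behind unexported fields;
              // swapping the map for an on-disk index later breaks no caller.
              type memStore struct {
                  data map[string]string
              }

              func NewStore() Store {
                  return &memStore{data: make(map[string]string)}
              }

              func (s *memStore) Put(key, value string) { s.data[key] = value }

              func (s *memStore) Get(key string) (string, bool) {
                  v, ok := s.data[key]
                  return v, ok
              }

              func main() {
                  var s Store = NewStore()
                  s.Put("greeting", "hello")
                  fmt.Println(s.Get("greeting"))
              }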

    • isityouyesitsme 11 days ago

      Agreed. Your parenthetical comment explains the tension.

      The only solution is to have a crystal ball.

  • necovek 12 days ago

    I don't think it's as simple as that.

    Some of the modern ecosystems have gone entirely bonkers (think nodejs/npm): hundreds and thousands of dependencies for the simplest of things, basically an unmanageable "supply chain".

    Sure, we can talk about what a good approach to updates and dependency hygiene looks like, how packages should "freeze" their dependencies, and how breaking changes should (or shouldn't) be communicated through version numbers, but we've seen the rise of a non-NIH attitude that can't be bothered to implement 5 lines of code.

    If you commit to update whenever there's a newer version of any dep you are using, you commit yourself to a world of pain and fighting even more bugs.

    I actually believe LTS mirrors of public repos (PyPI, npm, ...) could be a decent way to earn some money in today's dependency-heavy world.

    • IshKebab 11 days ago

      That's really a separate issue. Even if all of your code is first party and you've been crazy enough to write your own TLS library, XML parser, etc. all the things he said still apply because most code lives in an ecosystem of other systems.

      • necovek 11 days ago

        He was advocating for continually updating whenever the environment changes. Dependencies are a natural part of that environment, and I am highlighting how even doing just that is troublesome. With any mildly complex project, you would simply be spending all your time doing dependency updates.

        I think we need to be looking at a better balance of backwards compatibility in the tools we use (both external systems and external libraries), understand the cost for importing almost-trivial dependencies, and I believe there might be even an opportunity for someone to start a business there ("I'll backport security fixes for your 10 dependencies for you").

    • PeterisP 11 days ago

      On the other side of the coin, if you freeze your dependencies and commit to not update whenever there's a newer version of any dep you are using, you commit yourself to having to continuously (and rapidly!) evaluate all the security advisories of all those dependencies to see if there are any bugs you have to mitigate before your system gets exploited.

      You can't simply choose to never update any dependencies - the only question is how you decide when and which updates will get made, or delegate that decision to others.

      • necovek 11 days ago

        Yeah, I don't think that's an answer either, which is why I talked about LTS-approach (Long Term Support) for your core dependencies.

        Eg. all packages in Ubuntu LTS "main" archive, or RedHat releases, get supported for 5-10 years with security fixes, while maintaining backwards compatibility.

        However, even Canonical has realized people will accept breakage, so "main" has been dwindling over time to reduce their costs. That also applies to snaps and flatpaks — no guarantees about them at all.

    • taneq 12 days ago

      This overacceptance of external dependencies and compatibility-breaking API changes all started with the shift from software-as-product to software-as-a-service. Honestly it feels like the resulting churn and busywork is on some level a deliberate ploy to create more job security for software devs.

      • necovek 11 days ago

        I don't think it's a deliberate ploy: it's a micro-optimization that's localised to the team/company building the API/product, and it improves efficiency at that level.

        You get to build and maintain only a single version of your product and forget about maintaining old versions. This means pushing backwards-compatibility challenges onto your customers (e.g. other developers using an API) — the fact that customers have accepted that tells us the optimization is working, though I am very much not a fan.

  • petabyt 12 days ago

    The code running on the ECU on my 34 year old truck hasn't rusted ;)

    • filmor 12 days ago

      Because it doesn't talk to other, potentially changing systems. As soon as an application is networked, it requires maintenance.

    • necheffa 11 days ago

      Someone already mentioned that your ECU doesn't really need to talk to other systems (besides perhaps via the OBD-II port, using a well-established, industry-standard protocol backed by regulation).

      And I'll also add that at no point are you ever going to need to flash the ECU with updated firmware.

      So while a cute example, it isn't really a practical example. Typically when people complain about "code rusting", it is in the context of a codebase that needs to adapt to the world around it; whether that is communicating with other systems or even just accepting new features within its own little fief.

    • quadhome 11 days ago

      Would that ECU pass modern emissions?

      • datavirtue 11 days ago

        If you had an appreciation for emissions technology and policy, you would be able to see how it has swung past the point of its intended purpose. People are dismantling new engines and finding all kinds of issues that will severely limit the useful life of the vehicle and/or cost the consumer a ridiculous amount of money in repair or replacement costs.

        The lengths to which manufacturers have gone to "hit the numbers" have resulted in these major deficiencies. This has all played out in the last decade. Prior to that, a lot of the emissions standards had a positive effect (in most cases).

        The automotive manufacturers and the regulators are colluding without enough transparency and it has corrupted the process.

      • pif 11 days ago

        Most likely not, and that's a good thing, because nobody has yet found a way to limit emissions without seriously impacting performance (especially responsiveness at low revs). And damn whoever invented drive-by-wire!

  • amboo7 11 days ago

    Code rusts via the loss of information about it (memory, documentation...). If it cannot be maintained, it is a zombie.

GuestHNUser 12 days ago

> but critically, it needs to adapt to its users – to people!

In principle, I want this to be true. But in practice, I think products change because the teams that built a product need to justify their continued existence within a corporation. Slack's last few rounds of UI changes, for instance, have been a net negative for me, the user. Why can't I split several channels in the same window anymore? Why did they add a thick vertical toolbar that I don't use and can't remove? Not for my benefit, that's for sure.

P.S. Kelley, not Kelly, is the correct spelling of his name.

  • ozim 11 days ago

    Are you also sure that those changes were not beneficial for someone else, in a way where your single opinion is simply outvoted?

    • wavemode 11 days ago

      I'm sure it worked lovely for whatever biased focus group Slack Inc hired to review the changes. But where I work the redesign is pretty much universally hated.

      I would've thought a company centered around developer culture would understand the concept of not fixing what isn't broken. I guess pencil pushers have now fully taken them over.

  • cqqxo4zV46cp 12 days ago

    I think that it’s very important to remember that most software, by count, is not some deep-pocketed VC-funded / big tech company’s baby.

    If HN people stopped working for HN companies then they’d see that most of the behaviours they complain about, on HN, are not universal inevitabilities.

gherkinnn 11 days ago

> Code is malleable enough that we think of applying changes on top of existing code to get a new product, slightly different from the previous one. In this way, code is more similar to genetic material than other types of design.

This is an interesting perspective and will take me some more time to apply to my understanding (see what I did there?). A first thought is how this condemns designing and architecting for an unknown future.

  • PeterisP 11 days ago

    For code, the future is neither unknown nor certain - we often do have a valid, informed opinion about what kinds of changes are likely (or even inevitable on a schedule) and what kinds are possible but unlikely.

    For example, you may know for sure that some invariant is true in your market but not globally, so you know in advance that when (if!) your company expands, you'll need to change that thing in a specific way - but not in an arbitrary direction; that change is knowable even if you haven't yet spent the effort to pin down its details.

jongjong 12 days ago

I wasn't familiar with Andrew Kelley, but he is spot on. It is a corporate conspiracy where the problem is manufactured and then later solved in a way which introduces yet more problems. It took me almost a decade to figure this out with high confidence.

This article falls into a trap by stating that 'requirement changes' are to blame for constant software rewrites. Good software architecture doesn't require full rewrites precisely because it anticipates requirement changes. That's the entire point of software architecture. Unfortunately, almost nobody has experienced good software architecture. Even among the very few who will encounter it, most won't recognize it, because either they are not competent enough to appreciate it or they simply won't stay on the same project long enough to observe the benefits and flexibility of its architecture.

So yes, I think it definitely looks like a conspiracy. I've encountered many developers in my career who have had a tendency to over-engineer and create unnecessary complexity; they are often promoted within companies. Not only in agencies and consultancies (where the incentive to over-engineer is clear; more bugs = more repeat customers), but also in other large corporations and even non-profits.

There is a significant shortage of competence, both on the developer side, where people are unable to produce good architecture, and on the management side, where people fail to recognize good architecture and accurately reward talent.

Most software developers and managers will outright deny the existence of 'good architecture'. They are such a large majority that their voices completely obscure the voices of the (guessing) 1% who know better. The voice of mediocre developers, propped up by short-term financial incentives, is much louder than the voice of excellent developers who pursue longer-term goals.

Unfortunately, if you're a top 1% software developer, the industry feels like Idiocracy, except for the crucial fact that, in Idiocracy, the idiots are just smart enough to identify who the intelligent people are.

  • misja111 11 days ago

    There is a problem with the notion of 'good architecture': people tend to have strongly differing opinions on what a good architecture is.

    Take a basic web application as an example. In the last 20 years, we went from n-tier, to generated frontends using some template framework, to a REST server + JS frontend, and recently generated frontends have become fashionable again.

    Or for example enterprise architecture. We went from a bunch of monoliths with ad hoc interfaces, to service oriented architecture, to microservices, and now the trend goes to monoliths again.

    • jongjong 11 days ago

      I think the problem is that when people think about 'good architecture' they're thinking 'silver bullet architecture.' There is no such thing. Different projects will require a different architecture. One project's good architecture may not be suitable for a different project. The architecture has to be adapted to the current and future requirements of the problem domain of the project.

      There are some highly generic architectures, like front-end frameworks (e.g. VueJS, React, Angular), which are almost silver bullets, but it's still a mistake to think of them in that way and they tend to be overused. But the fact that front-end frameworks can handle an extremely broad set of front-end requirements is a great example of how good architecture can handle requirement changes. It's unusual for someone to have to fork or swap out React due to front-end requirement changes; it's usually an indication that React wasn't suitable for their use case to begin with. It would have to be a massive business pivot, and you'd have to dump the entire codebase anyway if the requirement changes were that significant.

      When discussing architecture in a broader sense, the front end is just one aspect. There have been many attempts to produce silver-bullet full-stack frameworks (e.g. Meteor, SailsJS, DeepStream), and they are/were useful for a certain subset of projects, a much smaller subset than what front-end frameworks have achieved.

      But there are many architectures that are heavily adapted to specific business domains and which are very far from silver bullets but which could be considered 'good architectures' nonetheless because they are good at handling domain-relevant requirement changes with minimal code changes necessary.

    • anyonecancode 11 days ago

      I don't know if "good" architecture can be defined, but "consistent" architecture can. It's the lack of the latter I often see, and that causes the most issues. Inconsistent architecture makes it very difficult to understand the overall system, which makes changes risky. There will always be mismatches between what our software does and what we want it to do -- either because we got it wrong the first time, or because conditions have changed. The most important attribute of software is its ability to change, and for that consistency is key, as it makes it possible to actually understand the software. If you have variable names that don't correspond to the data they store, a hodgepodge of different ways of making network calls, state being stored, mutated, and accessed willy-nilly... it's going to be a bad time. Have a vision of how it's all supposed to work, stick to it, and change deliberately as needed.

  • datavirtue 11 days ago

    I just clarify management's priorities before making decisions. It won't be framed as short- vs. long-term, but it's always clear. What are devs going to do, ignore the business realities and stomp off to create their masterpiece?

ChrisMarshallNY 11 days ago

In my case, maintaining software usually ends up reducing complexity. Many bugs are because I wrote an overcomplex kludge, which I only recognize upon reflection.

Also, it’s not just users and their environment that changes.

I have had software break when Apple comes out with an OS update because I had, for example, relied on a certain behavior that was not specifically dictated by the API.

photonthug 11 days ago

Interesting comments. I was expecting a roughly 50/50 split between folks who buy into the “intrinsically hard” story, where requirements change or highly integrated systems just tend towards breakdown, and the “enshittification” story, where software quality simply falls victim to planned obsolescence, greed, or pointless changes caused by fake jobs and the typical make-work.

The third explanation is practitioner incompetence and indifference. Regardless of whether we are talking about apps or APIs, it’s never been easier to keep old versions around and skip the bitrot, but most people just don’t know how to do that. This doesn’t mean it’s hard, just that they’d rather screw over some subset of users than learn how to do proper versioning and releasing.

I phrase it like this because I think it’s a choice people make, and thinking about things like backwards compatibility and dependencies is seen as unsexy and just a thankless chore. If someone wants to dodge it, then eventually someone else will get assigned to clean up the mess. The next person will do just enough to escape the situation without really fixing it, and so on.

The attitude is basically “It’s not like we’re the ones who have to use this crap! Ship it”, and it’s present everywhere from corporate to open source, projects big and small.

Part of it is a lack of pride/craftsmanship, but even senior folks are going to yolo their next release sometimes unless there’s some process in place that prevents it.

fl0ki 11 days ago

I agree with all of this, but even this is overlooking a simpler point: the software probably wasn't perfect in the first place. Even a few-line algorithm, isolated from any APIs or IO, can have subtle bugs. Remember the headlines when people noticed that most textbook implementations of binary search had an integer overflow bug?

If you take any old C or C++ code, for example, it's extremely likely that it has at least one construct with Undefined Behavior. Many instances of that binary search issue had outright UB because of signed integer overflow, and most code has much more subtle UB than that.
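
As a minimal sketch of that binary-search defect (shown here in Go with 32-bit indices for illustration): in C or C++ the signed overflow in the naive midpoint is undefined behavior, while in Go or Java it wraps to a negative index, but either way the search breaks on sufficiently large arrays.

  package main

  import "fmt"

  // buggyMid is the classic textbook midpoint: lo+hi overflows int32 once the
  // sum exceeds 2^31-1 (UB in C/C++, wraps to a negative value in Go/Java).
  func buggyMid(lo, hi int32) int32 {
      return (lo + hi) / 2
  }

  // safeMid is the standard fix: hi-lo cannot overflow when 0 <= lo <= hi.
  func safeMid(lo, hi int32) int32 {
      return lo + (hi-lo)/2
  }

  func main() {
      lo, hi := int32(1_500_000_000), int32(2_000_000_000)
      fmt.Println("buggy:", buggyMid(lo, hi)) // negative, out-of-range index
      fmt.Println("safe: ", safeMid(lo, hi))  // 1750000000, as expected
  }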

The more time passes the more likely that UB manifests with a newer compiler, a new OS version (especially if it's Library UB rather than Language UB), a new CPU family, etc.

This is nobody's fault as such; UB wasn't well understood until compilers became aggressive enough to actually exploit the letter of the standard rather than the common intuitions. Even so, good luck finding old code that would pass modern scrutiny.

In the unlikely case that a program had absolutely no defects of its own at the time it was written, it could still use a deprecated API, make a stale assumption that increasingly limits its interoperability and versatility, fail to utilize modern resources like many-core CPUs and RAM larger than entire enterprise disk arrays used to be, etc. Any number of things could merit improvement without a single bug or change to requirements as such.

esperent 12 days ago

> The software I encounter most is software that interacts with a changing world. It’s software that needs to adapt to other software, but critically, it needs to adapt to its users – to people! People change their attitudes and processes in response to the environment, and the software needs to adapt around that

Over the last few months I've been evaluating several POS solutions. They come in two varieties: fairly modern SaaS types, and super old-school software (it feels like it's from the nineties, though the two we evaluated both seem to have been created in the early 2000s) with a local database that is synced online every so often.

I can tell you that while the old stuff works, it doesn't feel comfortable to use in a modern environment. One example took around 30 minutes and many steps to install on my modern workstation laptop (and the guy demoing commented in surprise at how fast it was!). And that's what software used to be, twenty years ago. Enterprise software, I mean. Everything was clunky and needed a guy who understood it, or at least required following long, complex documents and spending several hours working through the install.

Meanwhile, the SaaS varieties have zero install time because I can use the web interface instantly, or install a client app in around 3 minutes. It is much smoother, with a far lower barrier to entry. I've only dealt with the install, but the UI is equally different and more intuitive on the modern apps, often with guided tutorials and setup wizards. Although, to be fair, some of the modern apps are not great either. But I'm thinking about the best modern app vs the best old-school app while I write this.

Both of these styles would work as a POS system. But the old style feels very clunky and uncomfortable to use.

I still do deal with lots of clunky installs, databases, long technical documents. But that's because I'm a developer. Non-technical users are far less accepting of these things than they were twenty years ago and the software has to be upgraded to deal with that.

We chose one of the smooth and well documented SAAS options. I've been looking around at what other local businesses use and anecdotally it seems like 90% of them also did and only very few use the older style software - and maybe those have been using it for several decades.

  • jasongi 12 days ago

    Yet, this is also the tragedy of modern software. While a fancy SaaS POS system will be fast and easy to install, the legacy local database version is going to keep working throughout an internet blackout (with cash), a power outage (via backup power) or an outage of the remote server.

    I doubt anybody is losing customers over a 1s delay in the till opening or a POS server syncing the day’s transaction after close. But having worked in retail - the one time you get a call from head office is when there’s “loss of trading” - it’s a bigger issue than theft.

    I remember there being an entire tourist town that was suffering economically because during peak season, the mobile phone tower was saturated and merchants could not process card payments. You can’t even use click-clack machines anymore with modern credit cards.

    Now… working offline is entirely doable in a modern tech stack too - but I somehow doubt most modern POS products support it well.

    • esperent 12 days ago

      The modern SaaS versions all work without internet using the client app. As we are in Vietnam, I don't think any business would ever choose a POS solution that didn't work without internet or couldn't run on battery power during an outage. So there's no loss of functionality there. There are power outages around once a month, scheduled or unscheduled, not to mention storm season, which has more frequent outages, sometimes for several days. So this functionality gets well tested.

      All banking solutions - as well as the POS system - can work on mobile data and that usually is fine during an outage. The only time mobile data failed in recent years was after a 3 day power outage following a typhoon, when I guess their batteries failed. By that time our business was pretty much shut down due to supply issues anyway.

      So basically, as long as the battery or generator lasts, all of these POS solutions will perform equivalently.

      Edit: and to further clarify, I don't think there are any features that the old-school apps have that the new ones don't. Unless you consider a local database or not using a web browser as features (which is valid, but not my view). Meanwhile, the newer ones tend to have a much stronger focus on accessibility (probably because they are basically web apps) and translation.

  • brabel 12 days ago

    Why were they "installing" stuff on your laptop instead of, for example, using Docker, or even better, Nix, which are tools that solve this entirely?

    • ozim 11 days ago

      Are they?

      I still have to install Docker, and I have to configure networking to access the thing that is in the container, because the defaults only work for a demo; in a production environment you still have to have a bunch of stuff in front of Docker.

      Then there are defaults for the system in the database that I need to change so that anything running in Docker actually works.

      With Docker/Kubernetes I have one more thing to worry about, and still lots of config that doesn’t go away “because Docker”.

    • esperent 11 days ago

      Why would I know the answer to that?

      • brabel 11 days ago

        Because you're complaining like there's no solution. Well, there are many solutions.