This is how software used to be before the internet.
You'd write (or buy) software for a purpose, and once it was debugged and installed, it was done. You just ran it after that. It wasn't exposed to external attackers, and it never needed to be updated unless new features were needed. In some cases (e.g. games on ROM cartridges) it couldn't be updated at all.
This is part of why Y2K was such an issue. So much old software was never intended to be updated. Preservation of original sources and build tools wasn't well managed. In many cases, software that had been in use for years or decades just had to be completely replaced because there was no practical way to update it.
Lately I've been thinking about which tech stack would let a project keep running for 10-20 years with the least amount of maintenance.
Right now it's this:
- HTML/CSS/vanilla JS for the UI. If it renders in a browser now, I expect it to render almost the same in 20 years.
- SQLite: It's a library that will surely still be alive, maintained, and API-compatible well into the future.
- Go: The Go 1 compatibility promise applies here. I'm also trying to reduce external dependencies as much as possible (the SQLite library should sit behind the standard database/sql API); a rough sketch of the whole stack is below.
Sure, you could use C or Java, but Go strikes the right balance for me (also a personal preference for its philosophy and ecosystem).
It's a nice thought experiment in a time when you can leave a Next.js project for a year and it ages like milk.
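For concreteness, here's a rough Go sketch of what that stack could look like: everything goes through the standard library (net/http for the UI, database/sql for storage), and the SQLite driver is the only external dependency. The modernc.org/sqlite driver, file names, and routes are just example choices, not part of the original comment.

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "net/http"

        _ "modernc.org/sqlite" // example driver choice; registers itself as "sqlite"
    )

    func main() {
        // One file on disk, accessed through the standard database/sql API.
        db, err := sql.Open("sqlite", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)`); err != nil {
            log.Fatal(err)
        }

        // Plain HTML/CSS/vanilla JS served as static files; no build step.
        http.Handle("/", http.FileServer(http.Dir("./static")))

        // One tiny handler to show the HTTP and DB pieces wired together.
        http.HandleFunc("/notes/count", func(w http.ResponseWriter, r *http.Request) {
            var n int
            if err := db.QueryRow(`SELECT COUNT(*) FROM notes`).Scan(&n); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            fmt.Fprintf(w, "%d\n", n)
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }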
The DBMS can be any of the major SQL databases, and Node.js will have a pretty small driver library for it.
Exactly my thoughts about Next.js. Haha, so sad...
The problem with Go is that it's single-source. Single-source used to be death; you couldn't get contracts if you were the only one providing a technology. C is multiple-source: even if you limit yourself to modern OSS compilers there's GCC and Clang, each from an independent group.
The trend towards unstandardized languages that only exist as a single blessed implementation, as opposed to languages defined by an official standards document with multiple implementations that are all on the same footing, is definitely an artifact of the Internet era: You don't "need" a standard if everyone can get an implementation from the same development team, for some definition of "need" I suppose.
If your horizon is only 20 years, Go is likely reasonable. Google will probably still exist and not be an Oracle subsidiary or anything similarly nasty in that period. OTOH, you might have said the same thing about staid, stable old AT&T in 1981...
Google could still exist and yet have added Go to killedbygoogle.com.
Really nicely written and quite thought-provoking. It makes me think about whether, when I die, anyone will be able to use or maintain any of the software I've written. Updates and patches are so entwined with software that I doubt much of my code would be worth using if it suddenly froze.
It puts a beautiful spotlight on OSS communities and what they do to keep software alive through refactoring, iteration, and patching. Also on well-written documentation; perhaps that's even more important than the code for long-term maintenance and value. A good thesis encourages someone to write it again, and better?
The problem of what happens when the author is unable to keep working on the source code has come up a LOT in the .NET space. One author (of both books and OSS) has even written up [0] the proactive steps he's taken for when "The Embuggerance" (as Pratchett called it) takes his abilities.
[0] https://www.thereformedprogrammer.net/how-to-update-a-nuget-...
If you are worried about software being usable long after you’ve died, you should be releasing compiled binaries.
That's true. Even then, though, you're dealing with backwards-compatibility support as the system updates. A compiled binary might run well for the systems it was compiled for, but what about longer timelines (a decade)? Will the newest system be able to easily run that compiled binary? Not always... and there's always the possibility it might include vulnerabilities that weren't discovered until later.
I was reading about terminal text editors (em, en, vi, vim, neovim, etc.), and it's interesting how some of the software that "lasts" is more like Theseus' Ship. All the original components replaced over time, but the core concepts last.
> I was reading about terminal text editors (em, en, vi, vim, neovim, etc.), and it's interesting how some of the software that "lasts" is more like Theseus' Ship. All the original components replaced over time, but the core concepts last.
There's probably a lesson about interfaces here. The thing itself is able to last and adapt if you're able to replace the components, and the components can be replaced if the interfaces between them are stable (or at least knowable, so you can change them and know what you're changing). A couple of the examples I can think of that try to do this are Linux and Clojure. Both have improved and added a ton over the years, but they've always focused on maintaining stable interfaces.
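To put that idea in code terms, here's a toy Go sketch (all names invented for illustration): the interface is the part you keep stable, and the implementations behind it are the parts you can replace over time.

    package store

    import "fmt"

    // NoteStore is the stable interface; callers depend only on this.
    type NoteStore interface {
        Add(body string) (int64, error)
        Get(id int64) (string, error)
    }

    // memoryStore is one replaceable component behind the interface.
    // It could later be swapped for a SQLite- or file-backed version
    // without touching any caller.
    type memoryStore struct {
        next  int64
        notes map[int64]string
    }

    func NewMemoryStore() NoteStore {
        return &memoryStore{notes: make(map[int64]string)}
    }

    func (m *memoryStore) Add(body string) (int64, error) {
        m.next++
        m.notes[m.next] = body
        return m.next, nil
    }

    func (m *memoryStore) Get(id int64) (string, error) {
        body, ok := m.notes[id]
        if !ok {
            return "", fmt.Errorf("note %d not found", id)
        }
        return body, nil
    }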
“Cold-blooded software” concisely expresses something I’ve thought about for years. Modern software based on HTML/CSS and frameworks works great so long as maintenance is ongoing. But for software I write for myself, I much prefer cold-blooded software. I want to write it today and have it still work in five years, even if I haven’t done any maintenance. Professionally I work in an area with similar needs. Gonna be adding this term to my vocabulary.
Tool vendor choice is one of the most important factors in whether things will work next year or next decade. Using vendors who take stewardship over their ecosystem is at the heart of it all. The solutions to project rot are actually quite obvious if we allow them to be. Being required to vendor in the vast majority of your dependencies is the biggest hallmark of a neglected ecosystem.
Previously (2023; 222 comments): https://news.ycombinator.com/item?id=38793206
Should be (2023).
Fixed, thank you!