I have mixed feelings about this. On the one hand, I agree: text is infinitely versatile, indexable, durable, etc. But, after discovering Bret Victor's work[1], and thinking about how I learned piano, I've also started to see a lot of the limitations of text. When I learned piano, I always had a live feedback loop: play a note, and hear how it sounds, and every week I had a teacher coach me. This is a completely different way to learn a skill, and something that doesn't work well with text.
Bret Victor's point is: why isn't this also the approach we use for other topics, like engineering? Many people do not have a strong symbolic intuition, so being able to tap into their (and our) other intuitions is a very powerful way to increase the efficiency of communication. More and more, I have found myself drawn to this alternate philosophy of education and knowledge transmission. There are certainly limits, and text isn't going anywhere, but I think there's still a lot more to discover and try.
[1] https://dynamicland.org/2014/The_Humane_Representation_of_Th...
I've also become something of a text maximalist. It is the natural meeting point in human-machine communication. The optimal balance of efficiency, flexibility and transparency.
You can store everything as a string; base64 for binary, JSON for data, HTML for layout, CSS for styling, SQL for queries... Nothing gets closer to the mythical silver-bullet that developers have been chasing since the birth of the industry.
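As a minimal sketch of the "everything as a string" idea, using only the Python standard library (the field names and payload here are made up for illustration):

```python
# Binary data wrapped in a plain-text envelope: base64 inside JSON.
import base64
import json

payload = bytes(range(16))  # arbitrary binary data

# Binary -> text via base64, then into a JSON document (text all the way down).
doc = json.dumps({
    "name": "example",
    "blob": base64.b64encode(payload).decode("ascii"),
})

# Round-trip: parse the JSON, decode the base64, recover the original bytes.
recovered = base64.b64decode(json.loads(doc)["blob"])
assert recovered == payload
```

The whole pipeline stays greppable, diffable, and inspectable with nothing but a text editor.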
The holy grail of programming has been staring us in the face for decades, and yet we still keep inventing new data structures and complex tools to transfer data... All to save maybe 30% bandwidth; an advantage which is almost fully cancelled out once you GZIP the base64 string, which most HTTP servers do automatically anyway.
Same story with ProtoBuf. All this complexity is added to make everything binary. To what end? Did anyone ever ask that question? To save 20% bandwidth, which, again, is an advantage lost after GZIP... To avoid the negligible added CPU cost of text deserialization, you completely lose human readability.
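A quick stdlib sketch of the GZIP argument (the payload here is made-up random bytes; exact ratios will vary with the data, so treat the numbers as illustrative, not definitive):

```python
# base64 inflates binary by ~33%, but compressing the base64 text
# claws much of that back, since base64 only uses 64 symbols.
import base64
import gzip
import random

random.seed(0)
raw = bytes(random.randrange(256) for _ in range(10_000))  # incompressible-ish

b64 = base64.b64encode(raw)
gz = gzip.compress(b64)
print(f"raw: {len(raw)}  base64: {len(b64)}  gzipped base64: {len(gz)}")
```

On already-compressible data (JSON, logs, text) the gap narrows even further, since GZIP eats the redundancy in both representations.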
In this industry, there are tools and abstractions which are not given the respect they deserve and the humble string is definitely one of them.
Shipping base64 in JSON instead of a multipart POST is very bad for stream processing. In theory one could stream-process the JSON and the base64... but only the JSON keys that appear before the payload would be available at the point where you need to decide what to do with the data.
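For what it's worth, the base64 half can be stream-decoded with a small amount of buffering, since base64 decodes in 4-character groups. This is just a sketch assuming well-formed input, and it doesn't solve the real problem raised here: keys that come after the blob are still invisible until the whole blob has streamed past.

```python
# Incremental base64 decoding of text arriving in arbitrary-sized chunks.
import base64

def decode_stream(chunks):
    """Incrementally decode well-formed base64 arriving in arbitrary chunks."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        usable = len(buf) - len(buf) % 4   # base64 decodes in 4-char groups
        if usable:
            yield base64.b64decode(buf[:usable])
            buf = buf[usable:]

data = b"hello, streaming world"
encoded = base64.b64encode(data).decode("ascii")
# Feed the encoded text in awkward 5-character chunks:
pieces = [encoded[i:i + 5] for i in range(0, len(encoded), 5)]
assert b"".join(decode_stream(pieces)) == data
```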
I agree 99%.
The 1% where something else is better?
YouTube videos that show you how to access hidden fasteners on things you want to take apart.
Not that I can't get absolutely anything open, but sometimes it's nice to be able to do so with minimal damage.
I wonder if some day there will be a video codec that is essentially a standard distribution of a very precise and extremely fast text-to-video model (like SmartTurboDiffusion-2027 or something). There are surely limits to text, but even the example you gave does not seem to me to be beyond the reach of a text description, given a certain level of precision and capability in the model. And we now have faster-than-realtime text-to-video.
(2014) Popular in:
2021 (570 points, 339 comments) https://news.ycombinator.com/item?id=26164001
2015 (156 points, 69 comments) https://news.ycombinator.com/item?id=10284202
2014 (355 points, 196 comments) https://news.ycombinator.com/item?id=8451271
With LLMs, the text format should be more popular than ever, yet we still see people pushing binary protocols like ProtoBuf for a measly 20% bandwidth advantage which is lost after GZIPing the equivalent JSON... Or a 30% CPU advantage on serialization which shrinks to something like 1% once you consider everything else the system is doing, which uses far more CPU.
It's almost like some people think human-readability, transparency and maintainability are negatives!
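If anyone wants to check the CPU side for their own workload, a stdlib micro-benchmark is only a few lines (the payload here is made up, and absolute numbers are entirely machine- and payload-dependent; the point is just that it's easy to measure before reaching for a binary protocol):

```python
# Time a JSON round-trip with the standard library.
import json
import timeit

payload = {"id": 123, "tags": ["a", "b", "c"], "values": list(range(100))}

encode_s = timeit.timeit(lambda: json.dumps(payload), number=10_000)
both_s = timeit.timeit(lambda: json.loads(json.dumps(payload)), number=10_000)
print(f"encode: {encode_s * 100:.1f} us/op, encode+decode: {both_s * 100:.1f} us/op")
```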
This is one of the core reasons I've been focused on building small tools for myself using Emacs and the shell (currently ksh on OpenBSD). HTML and the Web are good, but only in their basic form. A lot of sites fancy themselves applications and magazines, and they are very much unusable.
Related: https://sive.rs/plaintext
The older I get, the more I appreciate text of any kind.
Videos, podcasts... I have them transcribed, because even though I like listening to music, podcasts are best consumed in writing for speed of comprehension... (at least for me; I don't know about others).
Audio is horrible (for me) for information transfer - reading (90% of the time) is where it's at
Not sure why that is either - I see people extolling the virtues of podcasts, saying they can multitask (e.g. driving, walking, eating dinner) and still absorb the message - which leaves me aghast.
Post from the creator of Rust, 11 years ago. Highly relevant to today.
The last 2 paragraphs were quite poetic.
PS: 2014