For the EV route, buy a truck or SUV with vehicle-to-home support, and a house battery that can charge off the vehicle. The truck has 2-4x the capacity of a house battery in it. This plan assumes there’s a fast charger in town (ideally near the grocery store), and it’s under ~ 100 miles round trip.
Edit: I’ll add local pricing: a used >= 100 kWh EV is about $30K. The generator + permits + tank is > $20K.
With the EV route, you also get a nice car.
I didn’t price out a generator + 2,000 gallons of propane storage. I’d guess it’s about $30K.
Maybe you don't need such a huge generator. Run your fridge and a small heater, and only heat a small section of your house. We're talking about emergencies here, not keeping your hot tub running. I can heat a bedroom easily with a 1500 watt space heater. And my fridge pulls an average of about 150 watts. Quite honestly, a whole house generator is for convenience when a tree falls on a power line. It sounds like a real waste for an actual emergency.
> - AT&T was completely down for us but Verizon and its MVNOs were up
It really depended on where you were. In my area everything was down. Literally and figuratively. The only utility that worked was gas.
T-Mobile was the first to come back up but it took weeks. I could occasionally get one bar of LTE if I climbed to the top of the hill but even then I could only send or receive about 1 SMS every few minutes.
Once I was able to get out of the neighborhood I could drive 5 miles away and get cell service and spotty data on Verizon.
NPR's updates were our most reliable way to get information on what was happening.
> - Fill up your vehicle’s fuel or battery before any big storm, we spent a lot of time siphoning and otherwise consolidating fuel to get ourselves and neighbors out of town, particularly because we didn’t know how far we’d have to go to find a gas station with electricity
Having supplies on hand and being patient worked out for us. We waited until 40 was clear and were able to head to the Triad for supplies and gas. In the meantime, the neighborhood got together, cut up downed trees, and filled in the missing road so it was easier to get in and out of the neighborhood.
> we didn’t know how far we’d have to go to find a gas station with electricity
Corollary: carry cash, so you can buy things without depending on Point Of Sale systems being on and able to talk to the payment card networks. My favorite nearby ATM dispenses $100 bills, so I can have several hundred tucked in my wallet without taking a lot of space.
I recommend having a mix of bills; the other person may not have change.
Clarification: Sure, keeping a $100 bill tucked away in the wallet for emergencies is a great idea (and I do that too), but wherever you keep emergency supplies, I suggest having a mix of smaller bills.
Absolutely. The $100's are for getting a hotel room or something. The $20's are likely the largest other merchants will break.
My emergency cash stash has a couple $100's and a ton of $5's because I'm not super limited on space, and when everyone else has $20's and needs them broken, a bunch of $5's makes everything easy, and beats carrying a whole bucket of $1's. Learned that at flea-markets.
Not Helene for me, but another ISP frustration anecdote from living in a forested area (line damage from tree limbs is common enough) with imperfect cellular data connection:
- the Comcast Xfinity app is extremely bloated and runs into error after error on a poor connection, yet the only time I ever use it is when I have connection problems. Most other apps I use run smoother under similar circumstances. It boggles the mind why one of the US's largest ISPs wouldn't make its primary customer support portal lightweight and reliable on spotty cellular data.
> Fill up your vehicle’s fuel or battery before any big storm
This is such a big one for any event anytime. Better yet, never go below a half-tank on your vehicle. You’ll almost always have enough range to get out of dodge and also have a mobile cooling / heating / charging station if you’re stuck in place. I grew up on an island and what I thought was universal storm advice was clearly not.
During Helene I had to drive 80+ miles from the Clemson, SC area through to Asheville to bail out my sister-in-law and her husband and their two month old stranded in Asheville. They had only two gallons of diesel in their F-250. The drive up I-26 looked like some kind of zombie flick with a line of 50-100 cars on every interstate off ramp leading up to defunct gas stations with crowds of people just meandering about.
If you’re a mild prepper type, GMRS radios (or a jailbroken Baofeng…) are a great tool. I had no cellular service for the majority of my drive. I was able to stay in comms with my “convoy” the whole way. Perhaps as importantly, a spare, unused jerry can is incredibly valuable. In my case I have gasoline cans but not diesel, so I had to pay a greedy boomer 3-4x market rate to buy one of his four 5 gallon cans in the Lowe’s checkout line to get a clean fuel canister.
I think many of us in the web space that are old enough were shaped by the "mass outage" that happened on 9/11, when pretty much all news websites went down while we were looking for information. Slashdot was fighting valiantly to stay up and was one of the few sites where one could find information on the events (if you were in a spot without access to cable TV, you were very much in the dark). The web and the infrastructure are substantially different than they were 25 years ago, but I still get a bit of "what if" in the back of my head (not that I work on anything of that level of significance).
When the Yahoo news website was not responding, my reflex was to do a traceroute. The last hop was not resolving, and it was arriving somewhere in NY. Guess what: they were hosted in the tower. It took a while to get redirected to the West coast.
For Hurricane Helene specifically, my team at Newspack actually worked with Blue Ridge Public Radio and a number of other news organizations in the affected area to set up text versions of their websites for low bandwidth readers[1] and get info to 10s of thousands of people[2].
In fact, it was so successful (maybe not at reaching you specifically though), that we got a grant to roll out a general purpose plain text web solution for breaking news situations to news organizations across the country![3] So I think there may have been a mismatch in that you didn't know about all of the plain text versions of news sites available in your area during the disaster -- that's something we'll have to keep in mind.
A related article I read yesterday lamented that 1GB of RAM isn't enough to run a graphical browser anymore (1). Sure JavaScript runs fast now, but at the cost of the code size of the average website being unnecessarily large. This is because speedy js and speedy network connectivity allows for more code, more network requests. Another example of Wirth's law.
This was the case when I got a Raspberry Pi 4 with 1GB of RAM in about late 2019. You could run one tab of Chrome, but any more and it would be killed.
- I got caught in the mountains for a few days due to landslides in Nepal. The only available information was relayed by phone between locals. People had no idea what was going on, and everyone's vacation ended on the day the road reopened, which caused a pile-up of cars where the road had slid away a few days prior. In some parts, rocks still fell from the cliffs above. We flagged a passing car and asked them to keep us updated on WhatsApp instead. We could all have stayed put if we'd had that information earlier.
- During covid I maintained a page with simplified local restrictions and a changelog of new restrictions. The alternative was to follow press conferences and re-read the entire regulation the next day, or keep checking the newspapers. Mine was just a bullet list at a permanent location.
- During the invasion of Ukraine, refugees have set up the most impressive ad hoc information network I have ever seen. It was operational in 24 hours and kept improving for weeks. People sorted out routes, transport, border issues, accommodation, translation and supplies over Telegram, Notion and Google Docs.
Information propagation is critical during emergencies, and people are really bad at it. Setting up a simple website and two-way communication channels makes a huge difference.
My Ukraine-based colleague had to ping friends with real-time questions like "is bridge X still unbombed/passable? Do you know someone who lives nearby and can check?" while he was fleeing from the invasion. Fortunately, enough of the questions came back with correct answers and he managed to get to a (relatively) safe location.
1. Habit. We're used to using Telegram for everything: news sources, social network, messaging, memes. Telegram has more capabilities than other apps. You can't realistically move your family and friends to another messaging app because of all of that and the network effect.
2. General attitude of all people towards privacy and sharing data: they don't really care. It's "Who would even care about my data?" and "I've got nothing to hide" all the way.
I doubt most people have ever thought about the topic of trusting a messaging app. It's not the framework they operate in.
There are two global psyops done by I don't know who:
1. That Bitcoin is safe and anonymous.
2. That Telegram is super safe for whatever shady or private stuff you want to do.
One of the best marketing campaigns ever.
I would also want to use this chance to alert everyone who uses Telegram that:
1. It's not e2e by default.
2. They use a proprietary encryption protocol. You don't have access to the code.
3. I don't think Telegram is profitable, I don't know how it can be with that scale. Which makes you wonder.
4. If you open in-message links, you have a chance of losing your account to hackers thanks to the old vulnerability that hasn't been fixed for years. You literally have to check the list of devices connected to your account every 12 hours if you want to be safe.
If you decide to add an off-the-shelf wad of CSS, like Pico.css, consider hosting it alongside your HTML (rather than turning it into another third-party dependency and CDN cross-site surveillance tracker). Minified, and single-request.
I run a website that's primarily text-based. When I change the base template, I still check that it works without CSS. This just means semantic HTML.
That being said, CSS is rarely that large. Even after a few years of relative indulgence, the gzipped CSS for the whole website is still something like 20kb.
you don't even need `<!doctype html>`. I'm sure it's easy to look up when that was added/recommended, but I've never used it when I do a '94-style HTML page/site like this: `<html><head><title></title></head><body></body></html>`.
'sit
As a mild lifelong disaster "junkie" (grew up in remote areas, dealt with cyclones, floods, droughts, fires, monsoons, coups d'état, unannounced atomic tests, etc. during decades of global field work) this description:
As a web developer, I am thinking again about my experience with the mobile web on the day after the storm, and the following week.
I remember trying in vain to find out info about the storm damage and road closures—watching loaders spin and spin on blank pages until they timed out trying to load.
reminds me why we (locally) still rely on AM radio day in, day out and will continue to do so for the foreseeable future.
So during Hurricane Laura the NWS transmitters ceased functioning. The southern one I can pick up was uncrewed and stopped working during the storm, and the northern one, I don't remember exactly, but it went down about 2 hours later, well before the storm was upon them.
Up to that same storm I was a gung-ho, let's-go ham enthusiast. Up to. I lived in a more direct path relative to where most of the repeater users were, and they were complaining about how hard it would be to find ice the next morning rather than relaying potentially life-altering information about storm tracks or whatever.
I explained to everyone as sternly as I could that this was literally an emergency, which was the primary designation of the tallest repeater in the county, and if they wanted to chit chat with Chet they should move to one of the 3 other analog or 2 other digital repeaters in the same area.
Nothing doing. I was the ARRL tech specialist for my state, too. I completely pulled out of the hobby. I might dabble in the future with low power or beacons or whatever, but for VHF/UHF I'm done with local usage.
I know you specifically said AM; however, I didn't have an AM radio "handy" during the storm and power outage, etc. 9 times out of 10 the NOAA/NWS weather radio service suffices.
That's the deal with Ham radio; amateurs, enthusiasts, former SE asian jungle operators, occasional NASA relay operators, etc.
Good fun - not much chop for keeping the general public informed via cheap transistor radios (although, who has those anymore?).
Our state emergency services broadcast locally (and at strength from outside affected areas) when updates are required - they have dedicated bands and they routinely interject on the major broadcast radio networks - where fires are, when and where cyclones are expected to cross the coast, etc.
Our very local area volunteer fire units use the equivalent of ham and CB bands with reduced licences - they broadcast lightning ground strikes (at this time of year) and fire / tender / tanker updates as the season progresses (which is right now, harvest time, a lot of equipment out in tinder dry conditions subject to highly active evening lightning storms).
The iPhone / WiFi stuff is great .. but it hasn't yet passed into "considered reliable" in local culture - the networks have crashed under stress and nobody wants response to grind to a halt if a tower goes down, etc.
It's stuff like this that makes me want to scurry right back into the SRE/sysadmin space. Currently back in webdev after a few years out. I just feel like I'm being a pain in the arse to comment "could just write the HTML...?".
This very article loaded 2.49MB (uncompressed) over 30 requests. It's served using Next.js (it does actually load fine with JS disabled).
Ironically this is a great opportunity for the author to have made a stronger point. It could have gone beyond the abstract desire of "going back to basics" to perhaps demo reworking itself to be served using plain HTML and CSS without any big server-side frameworks.
I live just south of Asheville in NC, and we were completely isolated after the storm for a few days. The only reason we were able to get out after a few days was that a fire truck had been abandoned at the bottom of our driveway as trees fell around it, so they came back as soon as they could to get that resource back. People just the other side of those trees were unable to get out for about a week.
Our best source of information, even after we started to get a bit of service, was an inReach. I messaged a friend far from the region and asked them really specific questions like, "Tell me when I can get from our house to I-26 and then south to South Carolina."
Yes, absolutely, emergency information sites should be as light as possible. And satellite devices are incredibly useful when everything local goes down.
One thing that I would also suggest folks who are resonating with this piece consider...
Local copies of important information on your mobile device. Generally your laptops are not going to see much use. Mobile apps tend to fake local data and store lots of things in the cloud. We tend to ignore things like backups and local copies nowadays. Most of the time we can get away without any worry here, but consider keeping a copy of things like medications and their non-commercial names for situations like this as well.
It's hard to understand the privilege bubble you're in unless you actively try to live like your users. My read of the current trend [1] is that building for your marginal users isn't prioritized culturally in orgs or within engineering. Unseating those ways of working in my experience has been immensely challenging, even when everyone can agree on methodologies to put users first in theory [2].
This should be required reading for anyone building government or emergency sites.
During any crisis, people have spotty connections, dying phones, and zero patience for loading spinners. A plain HTML page with bullet points would save lives over a fancy React app that needs 5MB to render.
The irony is we have better tools than ever to build fast sites, yet the average webpage keeps getting heavier. Somewhere we forgot that the web worked fine before JavaScript frameworks.
That bulleted newsletter list being the most useful thing says everything.
... and is built with Next.js including no less than 12 enormous x-font-woff2 chunks of data at the top of the source code and another big __NEXT_DATA__ JSON chunk at the bottom. Hardly lean, vanilla HTML and CSS.
The author should look into WinLink. There are WinLink Wednesdays, where WinLink is practiced. A lot of reports come out of WinLink and it’s all text. Of course, you need to be an Amateur Radio Operator.
Many years ago, Google had a service I would use in pre-smartphone days to search when I was away from the PC and needed info like a restaurant’s number. You would text 46645 and it would send you search results. It was useful during hurricanes.
Reading this on airline wifi right now makes me realise just how unusable some stuff becomes with choppy internet. E.g. I can’t change settings on the LinkedIn app because the request to load the settings page fails :/.
Some things you don't know people need until you're directly affected. For me, it was an injury-related light sensitivity that made me realise dark mode isn't just a frivolous addition for looks.
Nice to see other fellow Western NC folks commenting here; I'm in Asheville. I did not know about all of these text-only versions of major news sites. I'm going to bookmark them.
What saved us from a news deficit after Helene was that we had 2 portable AM/FM radios. Both of the radios took batteries and one of them you could even charge via a hand crank. I highly recommend having a portable AM/FM radio of some kind. Blue Ridge Public Radio (our local NPR) was amazing during this time. Their offices are located right in downtown, which never lost power, so they were able to keep operating immediately after the storm.
I also feel this pain of bloated sites taking forever to load when I'm traveling. I'm on an old T-Mobile plan that I've had since around 2001 that comes with free international roaming in 215+ countries. The only problem is that it's a bit throttled. I know that I could just buy a prepaid SIM, or now I can use an eSIM vendor like Saily, but I'm too cheap and/or the service is just good enough that I'm willing to wait the few extra seconds. Using Firefox for Android with uBlock Origin helps some, but not enough (also I just switched to iPhone last month). I've definitely been on websites that take forever to load because there's just so much in the initial payload, sometimes painfully slow. I don't think enough developers are testing their sites using the throttling options in the dev tools.
What you really want is a (mostly) JavaScript-free website. Run NoScript and cut out all the data broker bloat, allowing just a limited number of critical scripts to run. Adding LocalCDN will further reduce wasted transfers of code that you do allow. Then you can decide if you want to show images. The web will be much faster on a fast or slow link.
Yeah, you don't get one. The purpose of modern web development is to make sure the developer is intellectually satisfied in over-engineering something that should be relatively simple, justifying their pet projects. If any useful work gets done, that's a side effect, and most of the time an accident. (Just try to use State Farm's website, as a great example.)
I have a tab in Sublime Text that has been open since the pandemic, and I think it's safe to share my idea since I'm not gonna do it.
*4KB webpage files*
So a website where each page does not exceed 4KB. This includes whatever styling and navigation is needed. Surprisingly, you can share a lot of information even within such constraints. Bare-bones HTML is surprisingly compact, and the browser already does a whole lot of the heavy lifting.
Why 4KB? Because that's the default memory page size on x86 hardware, so you can grab the whole thing in one chunk.
I'm not a web developer but a product manager, and recently I created my own website using Astro. I do support the point that it's better to have a relatively simple static website - it's really fast and lightweight! An average WordPress website weighs 1 MB+ in the best case, and 4 MB+ in the worst. I don't know how people think it's a good idea for a compressed webpage to be 4 MB!! Let's KISS.
> As a web developer, I am thinking again about my experience with the mobile web on the day after the storm
In some villages, where plenty of stone is available, people used it for everything - roof slabs, pillars, walls, flooring, water storage bowls etc. Also, villages which had plenty of wood around, they used it for everything.
As techies, we say there is an app for everything, or there is a web-technology for everything. When you have a hammer in hand, everything looks like a nail.
I remember complaining about this around five years ago [^1], and it looks like not much has changed since, save for the number of people complaining about websites being full of garbage that serves no purpose, or about static resources being bigger than they should be.
I can relate to this post. During the fires in Southern California last year it was confusing and frightening to know that you're surrounded by fire but can't get any news or information due to degraded cell networks. We had no power and were just trying to load pages to figure out what's going on and whether we should evacuate. There were either no emergency alerts, or emergency alerts for irrelevant things.
Another endlessly frustrating aspect is unfortunately Facebook. For better or worse, it's become a hub of emergency information via local Facebook groups. In an emergency you want a feed of chronological information, and Facebook insists on engaging you by showing 'relevant' posts out of order.
For https://cartes.app, I'm planning to code an extremely light version of the map tiles. Based on initial network speed, decide to load this style first, which just lets the user see the streets and the important places. Then load the rest.
This initial map style would be the equivalent of a "text-only" website.
Blocked by Vercel's turbopack bundle analyzer's bugs though, because before optimizing the tiles, I need to optimize the JS that loads the tiles.
I haven't figured out a way to load a MapLibre map server side, so the JS must be ready before the map starts to get loaded.
Oh for the time when chairs were still for sitting and PDFs were still for printing.
Since restaurant websites were mentioned - the majority of restaurant websites I've encountered were much more annoying and difficult to read than a PDF, even on a small phone screen. Or should I say, especially on a small phone screen. Some would make a 32 inch monitor feel cramped.
Nice work. Dealt with the same issue during Helene. It would be interesting to do things like converting to Morse code or to a low-baud-rate modem-over-walkie-talkie.
The web could in theory support text-first content, but it won't. The Gemini protocol, though not perfect, was built to avoid extensibility that inevitably leads us away from text. I long for the day more bloggers make their content available on Gemspace, the way we see RSS as an option on some blogs.
The web will continue to stray from text-first content because it is too easy to add things that are not text.
I built a connection to a web-powered LLM over SMS/iMessage for literally this purpose. While traveling I’d have really bad or sparse service but still needed to find my way around.
This used to actually work, at least on some sites. The text would load first, then it would reformat as the fonts and CSS assets were loaded. Ugly and frustrating, which is probably why you now don't get content until the eye candy is all ready.
But the progressive, text first loading, would be readable from the get go, even if further downloads stalled.
Actually, there are readily available tools that can do this. Many websites have implemented accessibility features for the blind, allowing users to read all text information on the screen, as well as alt text for accompanying images. This feature might be hidden, and many people are unaware of it.
Perhaps this is something FEMA could try to encourage and lightly enforce. No new technologies needed, just a mandatory "old web" option for key .gov, state and news sites.
I'm on 600mbps fiber with low latency. Sometimes I can't be arsed to load the websites linked on HN and simply head straight into the comments. For example, when it's a link to Twitter, I get an endless onslaught of in-site popups. Cookie banners, "sign in with google", "sign in to X", "X is better on the app" and so on and so forth. Meh. I'll sometimes just stick to HN, especially when I'm on my phone on the sofa or something.
Give me a minimal / plain text website every day, it's not just the link speed.
Interesting, and arguably useful, but I really have to push back on the whole notion of site-specific readers.
I have a text-only CLI utility to read BBC news: the w3m terminal-mode Web browser. (Substitute Lynx, links, elinks2, or any other TMWB of your preference.)
The experience is, unfortunately, kind of shit, because BBC (as with far too many other websites) fails to display well in a terminal-mode browser, but it at least works. And I can use the same browser on any other website.
The problem with site-specific tools, which includes of course mobile apps, is that now instead of invoking a general-purpose reader on any site, you have to choose both the site and the reader, and are dependent on that reader application / utility being continuously updated as the site itself changes.
(And, yes, I've written my own site-specific renderers, including one which produces a "newspaper"-formatted page based on the CNN "lite" website, which I've discussed on HN and elsewhere:
This is a really cool tool! And, not to diminish the work put into this tool...but at a glance at the code, it appears to be a specific RSS reader for the BBC (see: https://github.com/hako/bbcli/blob/709c6417c8dc4ffd4f7d5f5b4...)...so if I'm correct, why not simply use a CLI-based RSS reader, so you can review other sources beyond the BBC? Again, not trying to diminish the goodness here...but a more general tool might be just as good, and a bit more flexible, eh? :-)
"The last couple times I used my dental insurance website, it was completely not mobile responsive, requiring the old-school pinch zoom to even get to anything on the page."
And yet you probably didn't change your dental insurance based on this, because most likely there are more important issues that matter (e.g. you are given a single option by your employer). This is exactly the reason why that company is not likely to fix it, because the lousy website's effect on their bottom line is nearly zero. (Note that I don't like this either)
On the contrary, it helps underscore a storm's significance and allow for memetic spreading of information around the storm. You obviously didn't grow up in a hurricane-prone area.
Likewise, we've always liked to "name things". My personal desktop is named "Foundation" (first built PC) and my car is named "Big Boi" (first adult purchase). Generic names are fine for operational equipment, but no one wants to refer to natural disasters as "HURR-2026-EC02". That's why COVID is "COVID" instead of "severe acute respiratory syndrome coronavirus 2".
Do you model website behaviour on a slow internet connection, or just hope it will never happen?
The mobile internet technically worked during a big storm some time back, but barely. Half-loaded pages. Images that never finished loading. JS that took too long. Most websites were only usable in theory.
The sites that held up best shared some patterns; they're not random:
- Simplified design.
- Text first.
- No complicated client-side logic.
- Quick to render even on a poor connection.
It has prompted me to think about a straightforward framework: order the circumstances users find themselves in by how often they occur - a good connection, a degraded one, a barely-working one. Most of us design products for the first two. It is the third one where things break down.
Some practical things helped me in those moments:
- A server-rendered page must still read well when JavaScript is disabled.
- The content must load before any decorative element.
- A clear hierarchy, even without styling.
- No important information hidden behind interactions.
Interested in the thoughts of others.
Do you deliberately design for suboptimal conditions?
Do you have a definition of a “minimum usable version”?
I'm not sure I understand. Are you implying we should not design our technology around serious edge cases that humans encounter in life? Why wouldn't we target people in crisis when we design crisis management information sites?
It's not really written there, but how about a loading experience that gives you the important information, and then loads the bells and whistles as the JavaScript gets loaded and run? First make sure the plain text information gets loaded, with maybe a simple JPEG when something graphical like a map is needed, and then load the megabytes of React or Angular to make it all pretty and the map interactive...
Just as server side rendering was reinvented from first principles by the current generation, now they have rediscovered progressive enhancement! There might be hope for us yet!
"Universal design" or "design for accessibility" will give you lots of examples of constraints that are not "commonly" needed ending up having much wider application and benefiting many other people.
Some oft-cited examples are curb cuts (the sloped ramps cut into curbs for sidewalk access) and closed-captioning (useful in noisy bars or at home with a sleeping baby).
There are many examples from the web where designing with constraints can lead to broadly more usable sites- from faster loading times (mobile or otherwise) to semantic markup for readers, etc.
- How close is the default state to the constraint
Kerb cuts help everyone. Kids, the elderly, disabled people, and anyone distracted by their phone are all less likely to fall on their face and lose a tooth.
Web accessibility helps websites go from unusable for disabled people, to usable.
On the other hand, when a dev puts a website on a diet it might make it load in 50ms instead of 200ms for 99.9% of users, and load in 2 seconds instead of 2 minutes for 0.1%.
So it doesn’t impact anyone meaningfully for the site to be heavy. And for that edge case 0.1%, they’ll either leave, or stick around waiting and stab that reload button for as long as it takes to get the info they need.
As shameful as it is, web perf work has almost zero payoff except at the limit. Anyone sensible therefore has far more to gain by investing in more content or more functionality.
Google has done Google-scale traffic analysis and determined that even a 100ms delay has noticeable impacts on user retention. If a website takes more than 3 seconds to load, over 50% of visitors will bail. To say that there is no payoff for optimization is categorically incorrect.
The incentives are there. Web developers are just, on average, extremely bad at their jobs. The field has been made significantly more accessible than it was in decades past, but the problem with accessibility is that it enables people who have no fundamental understanding of programming to kitbash libraries together like legos and successfully publish websites. They can't optimize even if they tried, and the real problem for the rest of us is they can't secure user data even if they try.
This test was a while ago - it’d be interesting to see if it’s still the case and if the results reproduce. But still let’s consider that Google is Google and most websites are just happy to have some traffic.
People go to Google expecting it to quickly get them info. On other sites the info is worth waiting an extra second for.
At Google scale, a drop in traffic results in a massive corresponding drop in revenue. But most websites don’t even monetize.
They’re both websites but that’s all they have in common.
If you are a hobbyist hosting your own website for fun, sure, whatever. Do what floats your boat, you're under no obligation for your website to meet any kind of standard.
The vast majority of web traffic is directed towards websites that are commercial in nature[1], though. Any drop in traffic is a drop in revenue. If you are paid tens or hundreds of thousands of dollars a year to provide a portal wherein people visit your employer's website and give them money (or indirectly give them money via advertisement impressions), and shrug your shoulders at the idea of 50% of visitors bouncing, you are not good at your job. But hey, at least you'd be in good company, because most web developers are like that, which is why the web is as awful to use as it is.
[1]The only website in the top 10 most visited that is not openly commercial is Wikipedia, but it still aggressively monetizes by shaking down its visitors for donations and earns around $200 million a year in revenue. They would certainly notice if 50% or even 10% of their visitors were bouncing too.
This is the same attitude that results in modern developers ignoring low end consumer hardware, locking out a customer base because they aren't rich enough.
Get some perspective. Some of us have to live on 500kbit/s. The modern web is hell, and because it doesn't impact anybody with money, nobody gives a shit.
Several news sites offer text only versions.
https://lite.cnn.com/
https://text.npr.org/
https://wttr.in/
More listed at https://greycoder.com/a-list-of-text-only-new-sites
It’d be great if there was some standard that allowed these to be easily found, and supported on the local news sites.
That CNN website is great, except it still has a huge cookie banner. Looking at the cookies of the site, I think the only cookie it sets is one recording that I clicked on the banner. Most of the size of the page also seems to be related to the banner.
You can’t put a price on some round-rim glasses wearing EU bureaucrat named Klaus-Dietrich von Regulieren sleeping soundly because of that banner.
If I’ve understood the grandparent post correctly, they don’t need the banner. They wouldn’t need it if the only cookie they set were a functional 1st-party cookie, and since that sole cookie is just to track cookie banner status, they especially don’t need it.
But taking the time to investigate that, get it approved by legal, etc. all takes longer than just slapping a cookie banner component on it.
This is why people complain about the unclear and bureaucratic nature of these laws: it leads to an overcomplicated investigation, and compliance isn't always simple - meaning the safest option is to comply at the highest level and degrade the user experience.
But it is not. The text of that legislation is very clear.
Yes but the only thing better than being compliant is being compliant twice over so there's absolutely no debate about compliance.
I'd much rather companies be scared into complying and have some spare banners than companies having grey-area free-for-alls with my data.
Eh, it took me all of 2 days to strip all the unnecessary cookies out of our product, and convince management to leave out the giant unnecessary cookie banner.
The sites plastering those everywhere are engaging in malicious compliance, pure and simple.
I looked at a CNN "lite" article, and it includes 560KB of stuff (lots and lots of CSS declarations) in addition to the actual 11KB of article content.
While still wasteful, CSS is one of those things you can do astronomically wrong before it starts being noticeable. Case in point here: inlining 560 kb of CSS with the page and just sending it with the entire HTML file each time is only ~61 kb of actual network transfer to load the article (due to brotli encoding).
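If you want to sanity-check that kind of ratio yourself, here is a rough sketch using the third-party `brotli` package (`pip install brotli`); `article.html` is a placeholder name for a locally saved copy of the page, and compressing it locally only approximates what a brotli-enabled server would actually send:

```python
# Compare on-disk size vs. an approximate brotli-compressed transfer size.
# Assumes `pip install brotli`; "article.html" is a hypothetical saved page.
import brotli

with open("article.html", "rb") as f:
    raw = f.read()

# Repetitive inlined CSS is exactly the kind of content brotli shrinks well.
compressed = brotli.compress(raw, quality=11)  # 11 = max quality, common for static assets
print(f"uncompressed: {len(raw) / 1024:.0f} KiB")
print(f"brotli:       {len(compressed) / 1024:.0f} KiB")
```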
For me in Firefox it only loads the article's HTML (50-70kb) and the favicon (7kb).
Are you sure it isn't some addon you have?
The article's HTML is ~60 kb compressed and ~10x that uncompressed, which accounts for both apparent sizes.
I bet you're using uBlock Origin
It could be the cookie banner appearing.
It is, the content loads first, then the js for the cookie banner, then the favicon. If the js fails to load (I blocked the request as a test) the page loads just fine, it isn't blocked by that.
The point is that it would still work if you block JS, CSS and graphics.
Check it out in lynx for example
Ever tried a TUI browser like lynx?
In the Netherlands the public broadcaster still publishes news through Teletekst:
https://tweakers.net/reviews/11700/hoe-werkt-het-vernieuwde-...
Which is cool but it's not web, and few people have working TV reception that supports it at the moment. The web version of Teletekst (https://nos.nl/teletekst) is over 3 MB in size.
Are there no APIs with smaller and faster responses? The Swedish equivalent has some - even a fairly respectable terminal client.
In terms of a standard, it would be nice if "reader mode" were standardized to request a text-only minimal formatting version of the site.
Oooh... can you imagine if servers actually took the hint and sent only text if the client provided Accept: text/markdown, text/plain headers?
> Accept: text/markdown
Funnily enough, the rise in agentic coding has actually made this more common.
Using the lite subdomain is a great way to read all the subscriber articles as well. Was reminded of the lite site during some annoyingly aggressive A/B testing CNN was doing a few months back.
I miss RSS.
I still use it. RSS is dead, long live RSS.
And Javascript free web...
IMO, we need an RSS-optimized browser that would also block JavaScript before user interaction (or even more).
How would "RSS optimized" work in the context of a browser?
$ ssh teletekst.nl
Maybe there could be a service which translates any website into a trimmed-down text-only version.
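A rough sketch of what such a service could look like, using only Python's standard library; this naive version just strips tags, ignores encodings, relative links, and article-extraction heuristics, and the target URL is only an example taken from upthread:

```python
# Naive "text-only proxy": fetch a page, drop scripts/styles/markup,
# and keep just the visible text. Stdlib only; real pages need more care.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextOnly(HTMLParser):
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.skipping = 0   # depth inside tags we want to drop entirely
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skipping += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skipping:
            self.skipping -= 1

    def handle_data(self, data):
        if not self.skipping and data.strip():
            self.chunks.append(data.strip())

def text_only(url: str) -> str:
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = TextOnly()
    parser.feed(html)
    return "\n".join(parser.chunks)

if __name__ == "__main__":
    print(text_only("https://lite.cnn.com/"))  # text-only site mentioned upthread
```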
Firefox's Reader mode is pretty great. Doesn't reduce network traffic but makes almost any page more readable.
Google also used to have a "Go" app, which they later deprecated. Now that I think about it, what is the use of having a "Go" app if the websites shown in search results are not optimised for slower networks?
DuckDuckGo has a js-free version of their website at https://html.duckduckgo.com/
https://lite.duckduckgo.com/lite
> https://wttr.in/
didnt load for me
Did load for me.
Closing.
Me neither.
I view every website as text-only
I can reformat the text any way I like
Well, there is RSS.
Thanks for sharing; I almost wasn't sure if the last part was sarcasm. HTML itself was the standard, then when it got bloated we got RSS. This seems like it's not a problem of a lack of standards. It's the company choosing not to promote it.
It's more to do with the fact that the vast majority of people aren't going to bother subscribing to an RSS feed. I am on a freelancer Slack group that's supposed to have an RSS feed for jobs. The feed is often broken for weeks because most people don't use it.
Even when it isn't broken, the display output is broken in Thunderbird, because the dev isn't going to bother checking Thunderbird - many people don't use email clients like that anymore and instead use webmail.
I've never used RSS that much, as normally if I want to check for new things on a site, I will just go to the site and look myself.
I suppose I meant more of a best practice - if every news site could be found at the subdomain of lite.XYZ.com, or perhaps some way for the browser to request specifically no images or styles, it’d be easier for the end user to find.
RSS is a good point that I didn’t consider. Although it tends to be a summary and hyperlink to the main site.
Ideally, IMO, this would be Accept headers. You're asking for the same semantic content but in a different format. I'm not sure if there's a nice way of specifying HTML in a minimal sense (doing without inline images, perhaps just linking them), but these could mostly be text/plain or text/markdown (and it'd be nice if that were then formatted properly by the browser).
This often makes a really nice API if you can do other formats too - the main page of CNN could respond to RSS Accept headers and give me a feed, for example.
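A minimal sketch of that kind of content negotiation, using only Python's standard library; the article text, port, and handler name are made up for illustration, and real negotiation would also need to honour q-values in the Accept header:

```python
# Toy content negotiation: same resource, different representation depending
# on the client's Accept header. Stdlib only; the payloads are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

ARTICLE_MD = "# Storm update\n\nRoute 9 is closed north of town.\n"
ARTICLE_HTML = ("<!doctype html><title>Storm update</title>"
                "<h1>Storm update</h1><p>Route 9 is closed north of town.</p>")

class NegotiatingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        accept = self.headers.get("Accept", "")
        if "text/markdown" in accept or "text/plain" in accept:
            body, ctype = ARTICLE_MD, "text/markdown; charset=utf-8"
        else:
            body, ctype = ARTICLE_HTML, "text/html; charset=utf-8"
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(data)))
        self.send_header("Vary", "Accept")  # caches must key on the Accept header
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), NegotiatingHandler).serve_forever()
```

`curl -H 'Accept: text/markdown' http://127.0.0.1:8000/` would then get the markdown version, while a browser (which sends `Accept: text/html,...`) gets the HTML.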
It’s too bad that WML from WAP is not used anymore.
https://en.wikipedia.org/wiki/Wireless_Markup_Language
https://en.wikipedia.org/wiki/Wireless_Application_Protocol
WML pages had mostly text and hyperlinks from what I remember, and even though it supported images too, I think most such basic pages would be readable even if you turned image loading off.
I spent so much time tuning the WAP site for the forum I worked for back in 2008.
I had some sort of Nokia running on whatever 2kbps networking was going then, and would shave absolutely anything I could to make the forums load slightly faster.
We could use markdown.
It's a crying shame that a browser can't fetch a plain vanilla goodstuff.md file and display it natively.
RSS is just a list of links to webpages, maybe with summaries. The readers generally fetch those webpages and filter out the text, but every browser has equivalent functionality now. You can do it with literally any HTML page, though some websites try to fight it (since depending on the reader, it neuters ads).
RSS feeds used to contain the full article. That changed when everyone wanted to monetize their blogs.
RSS still contains full articles on a lot of personal sites. As you said, it’s about monetisation and control and when you’re writing with no plan to monetize there’s no point in not serving full content.
Yeah, and that’s my point. The problem is not technological, you can make a super readable HTML site by just putting text in <p> tags, and RSS readers for blogs that didn’t rug-pull their content still work fine. People lost interest in giving out something for nothing, so now the web is an ad-infested mess.
If someone makes a new tech that makes that impossible, 10 principled FSF-enjoyers will write content for it and nobody else. Web standard bloat is bad, but it didn’t cause this problem, and you can’t fix it by creating a new spec.
> great if there was some standard that allowed these to be easily found
Too bad Google sunset Chrome Flywheel (likely after AMP?): https://research.google/pubs/flywheel-googles-data-compressi...
Opera Mini Turbo was equally popular during the 2G/EDGE era.
The header image in the article is a 2400x1600 PNG that is 500KB large, apparently due to subtle dithering making it hard to compress. Converting it to a visually-identical .avif (quality 90, 12-bit color depth) takes it all the way down to 15KB.
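For reference, that kind of re-encode is easy to script; this sketch assumes a Pillow build with AVIF support (Pillow 11+ with libavif, or the pillow-avif-plugin on older versions), uses placeholder file names, and doesn't reproduce the 12-bit depth detail mentioned above:

```python
# Re-encode an oversized hero PNG as AVIF and compare file sizes.
# Needs Pillow with AVIF support; file names are placeholders.
import os
from PIL import Image

src, dst = "hero.png", "hero.avif"
Image.open(src).save(dst, format="AVIF", quality=90)

for path in (src, dst):
    print(f"{path}: {os.path.getsize(path) / 1024:.0f} KiB")
```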
For an image which isn’t even interesting or illustrative of the issue. Hero images on blog posts have run their course, I can’t remember the last time I felt anything positive by encountering one. They make the page load longer, force us to scroll to start reading, and are forgotten as soon as they go out of sight.
Indeed, the website transfers 1.18mb (compressed) to deliver 6.7kb of text. Kind of ironic.
I'm guessing it could be even smaller if it was designed as an SVG file. Although the glow effect with the fading colors would probably need to be simplified.
Plain HTML and forms for interactivity can be very effective.
In fact, for a long time web forums were largely entirely usable without JS.
See the degradation of GitHub for a great example. You used to be able to interact with most of it without any JS at all; browsing the code repositories, reading and replying to issues, etc. Now it barely shows anything without JS. Of course, I suspect in that case it's deliberate so they can trick you into running a few more tracking and analytics scripts...
Other Helene stuff I took note of:
- AT&T was completely down for us but Verizon and its MVNOs were up
- I had a Verizon MVNO secondary e-sim that came free with a home internet plan, unused until the hurricane hit
- It worked pretty well!
- The day the Verizon disaster internet trucks showed up at the police station in our town my Verizon MVNO internet went down
Non-internet learnings:
- Fill up your vehicle’s fuel or battery before any big storm, we spent a lot of time siphoning and otherwise consolidating fuel to get ourselves and neighbors out of town, particularly because we didn’t know how far we’d have to go to find a gas station with electricity
From a Pineapple Express a few years back (80+ mph gusts and lots of landslides):
- When putting in rural/exurb solar, make sure you have a secondary charge source for your house batteries. This can be a car or a propane generator, but check compatibility before buying anything. Solar won’t cut it (storms are cloudy), and propane won’t cut it (no roads, and also, there’s probably a shortage of supply and trucks).
- Whatever cell networks people fall back on will effectively be down (as you saw with verizon)
- All emergency services websites should fall back to web 1.0 forms and static images if they take more than 5-10 sec to load. Loading a pile of JS and CSS to load a fake modal that obscures the content after 5 min of loading at 2G speeds doesn’t count (looking at you PG&E)
> and propane won’t cut it
Depends entirely on tank size really.
Standard (in Australia) is 2x45kg household cylinders (chest high to an adult) for household cooking.
(Finish one, switch to the other while waiting for swap).
It's not hard to have eight or more cylinders on standby and to keep them topped up for when needed.
For rural / quasi industrial, furnaces, generators, etc it's not uncommon to have fixed installation 210kg LPG bulk cylinders filled by supply truck .. and larger.
When disaster strikes a bulk tank lasts a long time if the primary drains on it (eg: a tile or glass furnace) are wound back or turned off.
Eg: https://www.supagas.com.au/for-home/lpg-gas-bottles/tanks
> For rural / quasi industrial, furnaces, generators, etc it's not uncommon to have fixed installation 210kg LPG bulk cylinders filled by supply truck .. and larger.
Seems kind of small if you're rural/have regular delivery limitations? I've got a 500 gallon propane tank for domestic use (stove, waterheater, fireplace) and another 500 gallon tank for my generator. The internet says a 500 gallon tank at 80% full (max safe fill) is about 750 kg of propane. We've had a few two day outages, but no three day outages since we moved here, but neighbors report some outages in the 7-10 day timeframe. 500 gallon tanks seem pretty popular around these parts, commercial/government goes bigger, small properties go smaller; plenty of neighbors have no generator and may not have propane either; government runs warming centers if you can get there.
> Seems kind of small
Nearly five standard 45kg household tanks is the smallest capacity fixed installation bulk tank supplied by one local gas company. The option to rent larger tanks on a long term contract exists.
> if you're rural/have regular delivery limitations?
Many rural locations here have regular deliveries .. the milk gets picked up every day for example (multiple double tanker trucks worth from, say, the Cowaramup* district alone).
There's no need for a larger tank simply because you're rural unless you explicitly want constant gas at a particular delivery rate sufficient to last out a supply issue of {X} {time units}.
(For example if you run a continuously fired glass furnace + annealers, have a ceramics business as a side hustle, specifically have emergency services generators for blackouts, etc.)
> government runs warming centers if you can get there.
Your local government I assume - this isn't something ours has ever considered TBH.
* https://en.wikipedia.org/wiki/Cowaramup,_Western_Australia#A...
750kg (500 gal) is the smallest you can get around here.
A house backup generator uses something like 3kg of propane per hour idle, so that tank will keep your fridge on for ~ 10 days. Our area (outskirts of Silicon Valley) saw 20-30 day power outages with essentially no sun that year. The weather is rapidly worsening due to climate change, but they are also hardening the infrastructure.
Now, with a battery + backup generator + wood stove, you only need to run the generator to charge the house batteries. Assume a duty cycle of a few hours of optimal efficiency generator per day, wood heating, and you can easily exceed the 30 day target. At that point the sun should be out, at least here in California, or at least the roads will be open enough to let the propane supply chain adapt to the demand.
For the EV route, buy a truck or SUV with vehicle to home support, and a house battery that can charge off the vehicle. The truck has 2-4x a house battery in it. This plan assumes there’s a fast charger in town (ideally near the grocery store), and it’s under ~ 100 miles round trip.
Edit: I’ll add local pricing: A used >=100 kWh EV is about $30K. The generator + permits + tank is > $20k.
With the ev route, you also get a nice car.
I didn’t price out generator + 2000 gallons of propane storage. I’d guess it’s about $30k.
Maybe you don't need such a huge generator. Run your fridge and a small heater, and only heat a small section of your house. We're talking about emergencies here, not keeping your hot tub running. I can heat a bedroom easily with a 1500 watt space heater. And my fridge pulls an average of about 150 watts. Quite honestly, a whole house generator is for convenience when there's a tree that falls on a power line. It sounds like a real waste for an actual emergency.
> - AT&T was completely down for us but Verizon and its MVNOs were up
It really depended on where you were. In my area everything was down. Literally and figuratively. The only utility that worked was gas.
T-Mobile was the first to come back up but it took weeks. I could occasionally get one bar of LTE if I climbed to the top of the hill but even then I could only send or receive about 1 SMS every few minutes.
Once I was able to get out of the neighborhood I could drive 5 miles away and get cell service and spotty data on Verizon.
NPR's updates were our most reliable way to get information on what was happening.
> - Fill up your vehicle’s fuel or battery before any big storm, we spent a lot of time siphoning and otherwise consolidating fuel to get ourselves and neighbors out of town, particularly because we didn’t know how far we’d have to go to find a gas station with electricity
Having supplies on hand and being patient worked out for us. We waited until 40 was clear and were able to head to the Triad for supplies and gas. In the meantime the neighborhood got together and cut up downed trees and filled in the missing road so it was easier to get in and out of the neighborhood.
> we didn’t know how far we’d have to go to find a gas station with electricity
Corollary: carry cash, so you can buy things without depending on Point Of Sale systems being on and able to talk to the payment card networks. My favorite nearby ATM dispenses $100 bills, so I can have several hundred tucked in my wallet without taking a lot of space.
I recommend having a mix of bills; the other person may not have change.
Clarification: Sure, keeping a $100 bill tucked away in the wallet for emergencies is a great idea (and I do that too), but wherever you keep emergency supplies, I suggest having a mix of smaller bills.
Absolutely. The $100's are for getting a hotel room or something. The $20's are likely the largest other merchants will break.
My emergency cash stash has a couple $100's and a ton of $5's because I'm not super limited on space, and when everyone else has $20's and needs them broken, a bunch of $5's makes everything easy, and beats carrying a whole bucket of $1's. Learned that at flea-markets.
Agree.
Last time I visited the USA, many shops didn't have change for $100 bills. I found $20 bills the easiest to use.
(I am not an American, was on a work trip)
Not Helene for me, but another ISP frustration anecdote from living in a forested area (line damage from tree limbs is common enough) with imperfect cellular data connection:
- the Comcast Xfinity app is extremely bloated and runs into error after error on a poor connection, yet the only time I ever use it is when I have connection problems. Most other apps I use run smoother under similar circumstances. Boggles the mind why one of the US's largest ISPs wouldn't make their primary customer support portal be lightweight and reliable on spotty cellular data.
I have a dual-SIM phone with AT&T and T-Mobile lines (a Google Pixel). I wish they had a triple-SIM phone, then I could add Verizon.
Now that I think about it, I think you can have multiple eSIMs, but only one can be active at a time.
iPhones support 8 (or more): https://support.apple.com/en-us/118227#:~:text=You%20can%20h...
Unsure why they say 'or more' and what makes that happen.
2 can be active at the same time.
I can't find how many a Pixel can store at the same time though.
eSIMs are variable sized.
like the Sony™ PlayStation© save memory cards that held 8MB and games like Hexen took all the "slots" available?
FWIW, in my Galaxy S23U I can put 2 physical SIM and 10 eSIM.
But only two lines are active at a time.
The same is the case with Pixels, newer ones at least.
> Fill up your vehicle’s fuel or battery before any big storm
This is such a big one for any event anytime. Better yet, never go below a half-tank on your vehicle. You’ll almost always have enough range to get out of dodge and also have a mobile cooling / heating / charging station if you’re stuck in place. I grew up on an island and what I thought was universal storm advice was clearly not.
During Helene I had to drive 80+ miles from the Clemson, SC area through to Asheville to bail out my sister-in-law and her husband and their two month old stranded in Asheville. They had only two gallons of diesel in their F-250. The drive up I-26 looked like some kind of zombie flick with a line of 50-100 cars on every interstate off ramp leading up to defunct gas stations with crowds of people just meandering about.
If you’re a mild prepper type, GMRS radios (or a jailbroken Baofeng…) are a great tool. I had no cellular service for the majority of my drive. I was able to stay in comms with my “convoy” the whole way. Perhaps as importantly, a spare, unused Jerry can is incredibly valuable. In my case I have gasoline cans but not diesel and so I had to pay a greedy boomer 3-4x market rate to buy one of his four 5 gallon cans in the Lowe’s checkout line to get a clean fuel canister.
I think many of us in the web space that are old enough were shaped by the "mass outage" that happened on 9/11 when pretty much all news websites went down while we were looking for information. Slashdot was fighting valiantly to stay up and was one of the few sites one could find information on the events (if you were in a spot without access to a cable TV, you were very much in the dark). The web and the infrastructure are substantially different than they were 25 years ago, but I still get a bit of "what if" in the back of my head (not that I work on anything of that level of significance).
When the Yahoo news website was not responding, my reflex was to do a traceroute. The last hop was not resolving, and it was arriving somewhere in NY. Guess what, they were hosted in the tower. It took a while to get redirected to the West coast.
Interesting.
For Hurricane Helene specifically, my team at Newspack actually worked with Blue Ridge Public Radio and a number of other news organizations in the affected area to set up text versions of their websites for low bandwidth readers[1] and get info to 10s of thousands of people[2].
In fact, it was so successful (maybe not at reaching you specifically though), that we got a grant to roll out a general purpose plain text web solution for breaking news situations to news organizations across the country![3] So I think there may have been a mismatch in that you didn't know about all of the plain text versions of news sites available in your area during the disaster -- that's something we'll have to keep in mind.
[1] https://text.bpr.org/
[2] https://awards.journalists.org/entries/hell-or-high-water-bp...
[3] https://opennews.org/blog/press-forward-release/
I mean I regularly read the news and I didn’t know CNN has a lite version
A related article I read yesterday lamented that 1GB of RAM isn't enough to run a graphical browser anymore (1). Sure JavaScript runs fast now, but at the cost of the code size of the average website being unnecessarily large. This is because speedy JS and speedy network connectivity allow for more code, more network requests. Another example of Wirth's law.
This was the case when I got a Raspberry Pi 4 with 1GB of RAM about late 2019. You could run one tab of Chrome, but any more and it would be killed.
(1) https://log.schemescape.com/posts/hardware/farewell-to-a-net...
Vaguely related anecdotes:
- I got caught in the mountains for a few days due to landslides in Nepal. The only available information was relayed by phone between locals. People had no idea what was going on and everyone's vacation ended on the day the road reopened, which caused a pile-up of cars where the road had slid off a few days prior. In some parts, rocks still fell from the cliffs above. We flagged a passing car and asked them to keep us updated on WhatsApp instead. We could have all stayed put if we had that information before.
- During covid I maintained a page with simplified local restrictions and a changelog of new restrictions. The alternative was to follow press conferences and re-read the entire regulation the next day, or keep checking the newspapers. Mine was just a bullet list at a permanent location.
- During the invasion of Ukraine, refugees set up the most impressive ad hoc information network I have ever seen. It was operational in 24 hours and kept improving for weeks. People sorted out routes, transport, border issues, accommodation, translation and supplies over Telegram, Notion and Google Docs.
Information propagation is critical during emergencies, and people are really bad at it. Setting up a simple website and two-way communication channels makes a huge difference.
My Ukraine-based colleague had to ping friends with real-time questions like "is bridge X still unbombed/passable? Do you know someone who lives nearby and can check?" while he was fleeing from the invasion. Fortunately, enough of the questions came back with correct answers and he managed to get to the (relatively) safe location.
Why does Ukraine trust Telegram so much? Seems they share intelligence-worth stuff there.
Few reasons:
1. Habit. We're used to using Telegram for everything: news sources, social network, messaging, memes. Telegram has more capabilities than other apps. You can't realistically move your family and friends to another messaging app because of all of that and the network effect.
2. General attitude of all people towards privacy and sharing data: they don't really care. It's "Who would even care about my data?" and "I've got nothing to hide" all the way.
I doubt most people ever thought about the topic of trusting a messaging app. It's not the framework they operate in.
There are two global psyops done by I don't know who:
1. That Bitcoin is safe and anonymous.
2. That Telegram is super safe for whatever shady or private stuff you want to do.
One of the best marketing campaigns ever.
I would also want to use this chance to alert everyone who uses Telegram that:
1. It's not e2e by default.
2. They use proprietary encryption protocol. You don't have access to code.
3. I don't think Telegram is profitable, I don't know how it can be with that scale. Which makes you wonder.
4. If you open in-message links, you have a chance of losing your account to hackers thanks to the old vulnerability that hasn't been fixed for years. You literally have to check the list of devices connected to your account every 12 hours if you want to be safe.
Meh, Durov does not appear to be under Putin's control. The real question is, why does Russia trust Telegram so much?
One way to get to this is to start with almost-'94 HTML:
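Roughly this kind of shape, say (an illustrative sketch, not anyone's actual markup, with a modern doctype slapped on top of otherwise early-90s elements):

    <!DOCTYPE html>
    <html>
    <head><title>Road and shelter updates</title></head>
    <body>
    <h1>Road and shelter updates</h1>
    <p>Plain paragraphs, headings, lists and links: nothing a 1994 browser
       would choke on.</p>
    <ul>
      <li><a href="roads.html">Road closures</a></li>
      <li><a href="shelters.html">Shelters and supply points</a></li>
    </ul>
    </body>
    </html>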
Then add a little non-'94 CSS styling. If you decide to add an off-the-shelf wad of CSS, like Pico.css, consider hosting it alongside your HTML (rather than turning it into another third-party dependency and CDN cross-site surveillance tracker). Minified, and single-request.
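Concretely, that can be as little as one line in the head, with the file copied next to your pages (the paths here are hypothetical):

    <!-- pico.min.css sits on the same host as the page: one extra request, no third party -->
    <link rel="stylesheet" href="/pico.min.css">
    <!-- or a few lines of your own, inline, for zero extra requests -->
    <style>
      body { max-width: 42em; margin: 0 auto; padding: 1em; font-family: sans-serif; }
    </style>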
This should be every web developer’s first webpage. No npx create-react-app ... or pip install django or any other layers in between.
HTML5 boilerplate: https://github.com/h5bp/html5-boilerplate/blob/main/src/inde...
Now that's a template I haven't seen in a long time! Thanks for the fun trip down memory lane that "started it all".
It's really that simple.
I run a website that's primarily text-based. When I change the base template, I still check that it works without CSS. This just means semantic HTML.
That being said, CSS is rarely that large. Even after a few years of relative indulgence, the gzipped CSS for the whole website is still something like 20kb.
I always include `<meta charset="utf-8">`. Is that still necessary?
you don't even need `<!doctype html>`. I'm sure it's easy to look up when that was added/recommended, but i've never used it when i do a 94 html page/site like this: <html> <head> <title> </title> </head> <body> </body> </html>, that's it.
Minimal valid HTML5:
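Something along these lines, for instance (HTML5 lets you omit the html/head/body tags, so this is close to as small as a conforming page gets):

    <!DOCTYPE html>
    <html lang="en">
    <meta charset="utf-8">
    <title>Hello</title>
    <p>Hello, world.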
That particular doctype is HTML5. I was making a too-subtle joke about slapping it on '94 HTML.
I also do that and a couple other things. I used mostly '94 HTML for the comment, to try to make a point.
if the server supplies this as a header, it's not necessary.
As a mild lifelong disaster "junky" (grew up in remote areas, dealt with cyclones, floods, droughts, fires, monsoons, coup d'etats, unannounced atomic tests etc during decades of global field work) this description:
reminds me why we (locally) still rely on AM radio day in day out and will continue to do so for the foreseeable future.
So during hurricane Laura the NWS transmitters ceased functioning. The southern one i can pick up was uncrewed and stopped working during the storm, and the northern one, i don't remember, but it went down about 2 hours later, well before the storm was upon them.
up to that same storm i was a gung-ho let's go ham enthusiast. up to. I lived in a more direct path relative to where most of the repeater users were, so they were complaining about how hard it would be to find ice the next morning rather than relaying potentially life-altering information about storm tracks or whatever.
I explained to everyone as sternly as i could that this was literally an emergency, which was the primary designation of the tallest repeater in the county, and if they wanted to chit chat with chet they should move to one of the 3 other analog or 2 other digital repeaters in the same area.
nothing doing. I was the ARRL tech specialist for my state, too. I completely pulled out of the hobby. I might dabble in the future with low power or beacons or whatever, but VHF/UHF i'm done with local usage.
i know you specifically said AM; however i didn't have an AM radio "handy" during the storm and power outage, etc. 9/10 the NOAA/NWS weather radio service suffices.
That's the deal with Ham radio; amateurs, enthusiasts, former SE asian jungle operators, occasional NASA relay operators, etc.
Good fun - not much chop for keeping the general public informed via cheap transistor radios (although, who has those anymore?).
Our state emergency services broadcast locally (and at strength from outside affected areas) when updates are required - they have dedicated bands and they routinely interject on the major broadcast radio networks - where fires are, when and where cyclones are expected to cross the coast, etc.
Our very local area volunteer fire units use the equivalent of ham and CB bands with reduced licences - they broadcast lightning ground strikes (at this time of year) and fire / tender / tanker updates as the season progresses (which is right now, harvest time, a lot of equipment out in tinder dry conditions subject to highly active evening lightning storms).
The iPhone / WiFi stuff is great .. but it hasn't yet passed into "considered reliable" in local culture - the networks have crashed under stress and nobody wants response to grind to a halt if a tower goes down, etc.
It's stuff like this that makes me want to scurry right back into the SRE/sysadmin space. Currently back in webdev after a few years out. I just feel like I'm being a pain in the arse to comment "could just write the HTML...?".
This very article loaded 2.49MB (uncompressed) over 30 requests. It's served using Next.js (it does actually load fine with JS disabled).
Ironically this is a great opportunity for the author to have made a stronger point. It could have gone beyond the abstract desire of "going back to basics" to perhaps demo reworking itself to be served using plain HTML and CSS without any big server-side frameworks.
There is a movement supporting small websites. The links below may inspire those interested in text or small websites.
https://no-html.club/
https://no-js.club/
https://nocss.club/
https://1kb.club/
https://web.archive.org/web/20231208000921/https://10kbclub....
https://250kb.club/
https://512kb.club/
https://1mb.club/
I find that the sites designed around being small are usually nice to read since the effort is put in the content not the layout.
Additionally a lot of great sites can be found through something like https://wiby.me/ or different protocols like gopher or gemini.
You forgot to mention https://smolweb.org/
It's been shared here many times, but Terence Eden has a great anecdote about how the UK's GDS standards - lightweight, simple html - meant the site was usable even on a crappy PSP https://shkspr.mobi/blog/2021/01/the-unreasonable-effectiven...
I live just south of Asheville in NC, and we were completely isolated after the storm for a few days. The only reason we were able to get out after a few days was that a fire truck had been abandoned at the bottom of our driveway as trees fell around it, so they came back as soon as they could to get that resource back. People just the other side of those trees were unable to get out for about a week.
Our best source of information, even after we started to get a bit of service, was an inReach. I messaged a friend far from the region and asked them really specific questions like, "Tell me when I can get from our house to I-26 and then south to South Carolina."
Yes, absolutely, emergency information sites should be as light as possible. And satellite devices are incredibly useful when everything local goes down.
One thing that I would also suggest folks who are resonating with this piece consider...
Local copies of important information on your mobile device. Generally your laptops are not going to see much use. Mobile apps tend to only fake having data locally, storing lots of things in the cloud. We tend to ignore things like backups and local copies nowadays. Most of the time we can get away without any worry here, but consider keeping a copy of things like medications and their non-commercial names for situations like this as well.
Syncthing is the way! Along with the Material Files app from F-droid, I finally get to have a file based workflow on Android.
You'll probably want to add localsend to the mix.
Syncthing is also great
It's hard to understand the privilege bubble you're in unless you actively try to live like your users. My read of the current trend [1] is that building for your marginal users isn't prioritized culturally in orgs or within engineering. Unseating those ways of working in my experience has been immensely challenging, even when everyone can agree on methodologies to put users first in theory [2].
[1] https://infrequently.org/2025/11/performance-inequality-gap-... [2] https://crukorg.github.io/engineering-guidebook/docs/fronten...
This should be required reading for anyone building government or emergency sites. During any crisis, people have spotty connections, dying phones, and zero patience for loading spinners. A plain HTML page with bullet points would save lives over a fancy React app that needs 5MB to render. The irony is we have better tools than ever to build fast sites, yet the average webpage keeps getting heavier. Somewhere we forgot that the web worked fine before JavaScript frameworks.
That bulleted newsletter list being the most useful thing says everything.
Rich, coming from a site that loads 4 trackers
... and is built with Next.js including no less than 12 enormous x-font-woff2 chunks of data at the top of the source code and another big __NEXT_DATA__ JSON chunk at the bottom. Hardly lean, vanilla HTML and CSS.
The author should look into WinLink. There are WinLink Wednesdays where WinLink is practiced. A lot of reports come out of WinLink and it’s all text. Of course, you need to be an Amateur Radio Operator
Many years ago, Google had a service I would use pre-Smartphone days to search when I was away from the PC and needed info for like a restaurant’s number. You would text 46645 and it would send you search results. It was useful during hurricanes.
Reading this on airline wifi right now makes me realise just how unusable some stuff becomes with choppy internet. E.g. I can’t change settings on the LinkedIn app because the request to load the settings page fails :/.
Some things you don't know people need until you're directly affected. For me, it was an injury-related light sensitivity that made me realise dark mode isn't just a frivolous addition for looks
May I interest you in my doctorsensei.com/how-to-get-off-the-internet.html page?
tl;dr Use dark mode and set f.lux (or the equivalent) to Cave Painting. Helped me out a lot
Nice to see other fellow Western NC folks commenting here, I'm in Asheville. I did not know about all of these text-only versions of major news sites. I'm going to bookmark them.
What saved us from a news deficit after Helene was that we had 2 portable AM/FM radios. Both of the radios took batteries and one of them you could even charge via a hand crank. I highly recommend having a portable AM/FM radio of some kind. Blue Ridge Public Radio (our local NPR) was amazing during this time. Their offices are located right in downtown, which never lost power, so they were able to keep operating immediately after the storm.
I also feel this pain of bloated sites taking forever to load when I'm traveling. I'm on an old T-Mobile plan that I've had since around 2001 that comes with free international roaming in 215+ countries. The only problem is that it's a bit throttled. I know that I could just buy a prepaid SIM, or now I can use an eSIM vendor like Saily, but I'm too cheap and/or the service is just good enough that I'm willing to wait the few extra seconds. Using Firefox for Android with uBlock Origin helps some, but not enough (also I just switched to iPhone last month). I've definitely been on websites that take forever to load because there's just so much in the initial payload, sometimes painfully slow. I don't think enough developers are testing their sites using the throttling options in the dev tools.
https://plaintextsports.com
What you really want is a (mostly) JavaScript-free website. Run NoScript and cut out all the data broker bloat, allowing just a limited number of critical scripts to run. Adding LocalCDN will further reduce wasted transfers of code that you do allow. Then you can decide if you want to show images. The web will be much faster on a fast or slow link.
Yeah, you don't get one. The purpose of modern web development is to make sure the developer is intellectually satisfied in over-engineering something that should be relatively simple, justifying their pet projects. If any useful work gets done, that's a side-effect and most of the time an accident. (Just try to use StateFarm's website as a great example).
I have a tab in Sublime Text that has been open since the pandemic and I think it's safe to share my idea since I'm not gonna do it.
*4KB webpage files*
So a website where each page does not exceed 4KB. This includes whatever styling and navigation is needed. Surprisingly you can share a lot of information even with such constraints. Bare-bones HTML is surprisingly compact and the browser already does a whole lot of the heavy lifting.
Why 4KB? Because that used to be the default page size in x86 hardware. So you can grab the whole thing in one chunk.
This whole comment is not 1KB.
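As a sketch of how far the built-in behaviour stretches, a page like the following (collapsible sections, navigation, styling) fits comfortably inside a 4KB budget; every name and path in it is illustrative:

    <!DOCTYPE html>
    <meta charset="utf-8">
    <title>Status</title>
    <style>body{max-width:40em;margin:auto;font-family:sans-serif}</style>
    <h1>Status</h1>
    <nav><a href="/">Home</a> | <a href="/updates">Updates</a></nav>
    <details open>
      <summary>Latest update</summary>
      <p>Plain paragraphs of text. The browser provides the expand/collapse,
         the layout and the link handling for free: no script needed.</p>
    </details>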
Not all the way down to 4KB, but https://512kb.club/ matches this vibe
You should aim for the size of a single network packet instead
I'm not a web developer, but a product manager, and recently I've created my own website using Astro. I do support the point that it's better to have a relatively simple static website - it's really fast and lightweight! An average WordPress website would weigh 1 MB+ in the best case, 4 MB+ in the worst. I don't know how people think it's a good idea for a compressed webpage to be 4 MB!! Let's KISS
> As a web developer, I am thinking again about my experience with the mobile web on the day after the storm
In some villages, where plenty of stone is available, people used it for everything - roof slabs, pillars, walls, flooring, water storage bowls etc. Also, villages which had plenty of wood around, they used it for everything.
As techies, we say there is an app for everything, or there is a web-technology for everything. When you have a hammer in hand, everything looks like a nail.
We also generally have large screens, faster processors, more memory, faster internet, etc.
I want a plain text website every day, period. I'd even like to have a text site summarizing video from youtube.
I remember complaining about this around five years ago [^1], and it looks like not much has changed since, save for the amount of people complaining about websites being full of garbage that serves no purpose or static resources being bigger than they should be.
[^1]: https://0xff.nu/speed-pt3/
I can relate to this post. During the fires in Southern California last year it was confusing and frightening to know that you're surrounded by fire but you can't get any news or information due to degraded cell networks. We had no power and were just trying to load pages to figure out what's going on and if we should evacuate. There were either no emergency alerts, or emergency alerts for irrelevant things.
Another endlessly frustrating aspect is unfortunately Facebook. For better or worse, it's become a hub of emergency information using local facebook groups. In an emergency you want a feed of chronological information and facebook insists on engaging you by showing 'relevant' posts out of order.
There's a good talk from Jeremy Keith about building resilient websites:
1. Identify core functionality. 2. Make that functionality available using the simplest technology. 3. Enhance!
https://youtu.be/T55Z3VlG43g?si=bJnsv2smKChO9y6q&t=2101
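A minimal sketch of that three-step shape (the URL and ids here are invented): the link works as an ordinary page load, and a small script upgrades it in place only if it actually arrives and runs.

    <!-- steps 1 and 2: core functionality, simplest technology -->
    <p id="forecast">
      <a href="/forecast.html">View the latest forecast</a>
    </p>
    <!-- step 3: enhance -->
    <script>
      document.querySelector('#forecast a').addEventListener('click', async (e) => {
        e.preventDefault();
        try {
          const res = await fetch('/forecast.html');
          if (!res.ok) throw new Error(res.status);
          document.getElementById('forecast').innerHTML = await res.text();
        } catch {
          location.href = '/forecast.html';  // enhancement failed: fall back to the plain link
        }
      });
    </script>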
For https://cartes.app, I'm planning to code an extremely light version of the map tiles. Based on initial network speed, it would decide to load this style first, which just lets the user see the streets and important places. Then load the rest.
This initial map style would be the equivalent of a "text-only" website.
Blocked by Vercel's turbopack bundle analyzer's bugs though, because before optimizing the tiles, I need to optimize the JS that loads the tiles.
I haven't figured a way to load a Maplibre map server side, so the JS must be ready before the map starts to get loaded.
https://github.com/vercel/next.js/discussions/86731
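A rough sketch of that idea on the client side with MapLibre (the style URLs are placeholders, the CDN paths are the usual unpkg ones, and navigator.connection only exists in Chromium-based browsers, so everyone else just gets the full style):

    <div id="map" style="height:400px"></div>
    <link rel="stylesheet" href="https://unpkg.com/maplibre-gl/dist/maplibre-gl.css">
    <script src="https://unpkg.com/maplibre-gl/dist/maplibre-gl.js"></script>
    <script>
      // Network Information API: undefined outside Chromium, which falls through to the full style
      const slow = ['slow-2g', '2g', '3g'].includes(navigator.connection?.effectiveType);
      const map = new maplibregl.Map({
        container: 'map',
        style: slow ? '/styles/streets-only.json'  // tiny style: streets and key places only
                    : '/styles/full.json',
      });
      // optionally upgrade once the light map has settled:
      // if (slow) map.once('idle', () => map.setStyle('/styles/full.json'));
    </script>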
Oh for the time when chairs were still for sitting and PDFs were still for printing.
Restaurant websites mentioned — the majority of restaurant web sites I’ve encountered were much more annoying and difficult to read than a PDF, even on a small phone screen. Or should I say, especially on a small phone screen. Some would make a 32 inch monitor feel cramped.
Talking about text-first sites: https://wordgag.com brings me a lot of joy everyday. They also update their funny quotes collection regularly.
That site prompted me to watch an ad in order to access it; I don't think it's a great example of what we're talking about
nice work. dealt with same issue during helene. would be interesting to do things like convert to morse code or convert to modem-over-walkie low baud rate.
I built this repo as a Helene response repo, trying to use an llm to help get resources over text message. https://github.com/realityinspector/supply_drop_ai
wonder if you could get to news over sms, use an llm to compress to minimum viable text?
Check out Newswaffle on Gemini:// protocol.
The web could in theory support text-first content, but it won't. The Gemini protocol, though not perfect, was built to avoid extensibility that inevitably leads us away from text. I long for the day more bloggers make their content available on Gemspace, the way we see RSS as an option on some blogs.
The web will continue to stray from text-first content because it is too easy to add things that are not text.
Prepend 'pure.md/' in front of any url.
I'm sure there are more proxies around.
That's brilliant. For example: https://pure.md/sparkbox.com/foundry/helene_and_mobile_web_p...
Now the favicon (5.92KB) is larger than the article (5.16KB). Much better than the original (4.11MB)
I built a connection to a web-powered LLM over SMS/iMessage for literally this purpose. While traveling I’d have really bad or sparse service but still needed to find my way around.
I use WhatsApp's built-in LLM to read news when I'm on long flights that only give messenger access. It's great
That’s neat - have you considered publishing it?
I did, it's available at olly.bot :)
Doesn’t ChatGPT have WhatsApp access?
Not anymore. Meta didn't want competitors.
This used to actually work, at least on some sites. The text would load first, then it would reformat as the fonts and CSS assets loaded. Ugly and frustrating, which is probably why you now don't get content until the eye candy is all ready.
But the progressive, text-first loading would be readable from the get-go, even if further downloads stalled.
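You can still get that behaviour on purpose rather than by accident. A sketch (the file names are arbitrary): content first in the markup, the stylesheet loaded without blocking, and fonts told to swap in late instead of hiding the text.

    <!-- content comes first and renders immediately with the browser's default styles -->
    <h1>Evacuation routes</h1>
    <p>All the actual information, readable before a single asset arrives.</p>

    <!-- the common media="print" trick: the stylesheet downloads without blocking rendering,
         then switches itself on; the noscript fallback keeps it working with JS disabled -->
    <link rel="stylesheet" href="/site.css" media="print" onload="this.media='all'">
    <noscript><link rel="stylesheet" href="/site.css"></noscript>

    <style>
      @font-face {
        font-family: "Nice Font";
        src: url("/nice.woff2") format("woff2");
        font-display: swap;  /* show fallback text immediately, swap when the font lands */
      }
    </style>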
CMSs should normalize having a text only version of your website.
I think with a little effort they could make it pretty frictionless for their users, who in turn would be happy to provide it.
Actually, there are readily available tools that can do this. Many websites have implemented accessibility features for the blind, allowing users to read all text information on the screen, as well as alt text for accompanying images. This feature might be hidden, and many people are unaware of it.
Perhaps this is something FEMA could try to encourage and lightly enforce. No new technologies needed, just a mandatory "old web" option for key .gov, state and news sites.
I'm on 600mbps fiber with low latency. Sometimes, I can't be arsed to load the websites linked on HN and simply head straight into the comments. For example, when it's a link to Twitter, I get an endless onslaught of in-site popups. Cookie banners, "sign in with google", "sign in to X", "X is better on the app" and so on and so forth. Meh. I'll sometimes just stick to HN, especially when I'm on my phone on the sofa or something.
Give me a minimal / plain text website every day, it's not just the link speed.
W3C did this
https://www.w3.org/Mobile/Specifications
> I was struck by how something as simple as text content could have such a big impact.
Truly a sign of our times
I now use a text only CLI utility to read the BBC news. It is (for me) a greatly improved experience.[1]
1. https://github.com/hako/bbcli
Interesting, and arguably useful, but I really have to push back on the whole notion of site-specific readers.
I have a text-only CLI utility to read BBC news: the w3m terminal-mode Web browser. (Substitute Lynx, links, elinks2, or any other TMWB of your preference.)
The experience is, unfortunately, kind of shit, because BBC (as with far too many other websites) fails to display well in a terminal-mode browser, but it at least works. And I can use the same browser on any other website.
The problem with site-specific tools, which includes of course mobile apps, is that now instead of invoking a general-purpose reader on any site, you have to choose both the site and the reader, and are dependent on that reader application / utility being continuously updated as the site itself changes.
(And, yes, I've written my own site-specific renderers, including one which produces a "newspaper"-formatted page based on the CNN "lite" website, which I've discussed on HN and elsewhere:
<https://news.ycombinator.com/item?id=43723661>
Longer description and screenshots:
<https://toot.cat/@dredmorbius/114356066459105122>
The page takes about 10 minutes to generate (a bunch of serial article requests), but makes for good offline reading.)
This is a really cool tool! And, not to diminish the work put into this tool...but at a glance at the code, it appears to be a specific RSS reader for the BBC (see: https://github.com/hako/bbcli/blob/709c6417c8dc4ffd4f7d5f5b4...)...so if i'm correct, why not simply use a cli-based RSS reader, so you can review other sources beyond BBC? Again, not trying to diminish the goodness here...But a more general tool might be just as good, and a bit more flexible, eh? :-)
check out reticulum and nomadnet, they meet these needs perfectly!
Thanks !
"The last couple times I used my dental insurance website, it was completely not mobile responsive, requiring the old-school pinch zoom to even get to anything on the page." And yet you probably didn't change your dental insurance based on this, because most likely there are more important issues that matter (e.g. you are given a single option by your employer). This is exactly the reason why that company is not likely to fix it, because the lousy website's effect on their bottom line is nearly zero. (Note that I don't like this either)
> I was struck by how something as simple as text content could have such a big impact.
The fact that he was struck by such an evident truth means that he is (hopefully: was) part of the problem.
Giving weather phenomena a human name is the silliest thing humanity has invented.
On the contrary, it helps underscore a storm's significance and allow for memetic spreading of information around the storm. You obviously didn't grow up in a hurricane-prone area.
Likewise, we've always liked to "name things". My personal desktop is named "Foundation" (first built PC) and my car is named "Big Boi" (first adult purchase). Generic names are fine for operational equipment, but no one wants to refer to natural disasters as "HURR-2026-EC02". That's why COVID is "COVID" instead of "severe acute respiratory syndrome coronavirus 2".
Can't they just reuse Ubuntu version names?
> 66 requests
> 5.1 MB transferred
Ironic.
Check this out youngsters:
<html> <pre> TEXT </pre> </html>
Do you model website behaviour on a slow internet connection, or just hope it will never happen?
The mobile internet technically worked during a big storm some time back, but barely. Half-loaded pages. Images that never finished. JS took too long. Most websites were only usable in theory.
The best ones shared some patterns. They're not random:
- Simplified design
- Text first
- No complicated client-side logic
- Quick to render even on a poor connection
It has prompted me to think about a straightforward framework based on how often different circumstances occur. Most of us design products for the first two; it is the third one where things break down.
There were some practical things that helped me in those moments:
- A server-rendered page must still read well when JavaScript is disabled.
- The content must load before any decorative element.
- A clear hierarchy, even without styling.
- No important information hidden beneath the interactions.
Interested in the thoughts of others.
Do you deliberately design for suboptimal conditions?
Do you have a definition of a "minimum usable version"?
The nature of most businesses is that they don't care about this.
https://motherfuckingwebsite.com/
Beautiful.
[flagged]
I'm not sure I understand. Are you implying we should not design our technology around serious edge cases that humans encounter in life? Why wouldn't we target people in crisis when we design crisis management information sites?
I’m saying no one will unless incentivised to.
Oh yes absolutely.
It's not really written there, but how about a loading experience that gives you the important information, and then loads the bells and whistles as the JavaScript gets loaded and run? First make sure the plain text information gets loaded, maybe a simple JPEG when something graphical like a map is needed, and then load the megabytes of React or Angular to make it all pretty and the map interactive...
Just as server side rendering was reinvented from first principles by the current generation, now they have rediscovered progressive enhancement! There might be hope for us yet!
"Universal design" or "design for accessibility" will give you lots of examples of constraints that are not "commonly" needed ending up having much wider application and benefiting many other people.
Some oft-cited examples are curb cuts (the sloped ramps cut into curbs for sidewalk access) and closed-captioning (useful in noisy bars or at home with a sleeping baby).
There are many examples from the web where designing with constraints can lead to broadly more usable sites- from faster loading times (mobile or otherwise) to semantic markup for readers, etc.
Ah, this raises 2 important nuances:
- How severe is the impact, and
- How close is the default state to the constraint
Kerb cuts help everyone. Kids, the elderly, disabled people, and anyone distracted by their phone are all less likely to fall on their face and lose a tooth.
Web accessibility helps websites go from unusable for disabled people, to usable.
On the other hand, when a dev puts a website on a diet it might make it load in 50ms instead of 200ms for 99.9% of users, and load in 2 seconds instead of 2 minutes for 0.1%.
So it doesn’t impact anyone meaningfully for the site to be heavy. And for that edge case 0.1%, they’ll either leave, or stick around waiting and stab that reload button for as long as it takes to get the info they need.
As shameful as it is, web perf work has almost zero payoff except at the limit. Anyone sensible therefore has far more to gain by investing in more content or more functionality.
Google has done Google-scale traffic analysis and determined that even a 100ms delay has noticeable impacts on user retention. If a website takes more than 3 seconds to load, over 50% of visitors will bail. To say that there is no payoff for optimization is categorically incorrect.
The incentives are there. Web developers are just, on average, extremely bad at their jobs. The field has been made significantly more accessible than it was in decades past, but the problem with accessibility is that it enables people who have no fundamental understanding of programming to kitbash libraries together like legos and successfully publish websites. They can't optimize even if they tried, and the real problem for the rest of us is they can't secure user data even if they try.
This test was a while ago - it’d be interesting to see if it’s still the case and if the results reproduce. But still let’s consider that Google is Google and most websites are just happy to have some traffic.
People go to Google expecting it to quickly get them info. On other sites the info is worth waiting an extra second for.
At Google scale, a drop in traffic results in a massive corresponding drop in revenue. But most websites don’t even monetize.
They’re both websites but that’s all they have in common.
If you are a hobbyist hosting your own website for fun, sure, whatever. Do what floats your boat, you're under no obligation for your website to meet any kind of standard.
The vast majority of web traffic is directed towards websites that are commercial in nature[1], though. Any drop in traffic is a drop in revenue. If you are paid tens or hundreds of thousands of dollars a year to provide a portal wherein people visit your employer's website and give them money (or indirectly give them money via advertisement impressions), and shrug your shoulders at the idea of 50% of visitors bouncing, you are not good at your job. But hey, at least you'd be in good company, because most web developers are like that, which is why the web is as awful to use as it is.
[1]The only website in the top 10 most visited that is not openly commercial is Wikipedia, but it still aggressively monetizes by shaking down its visitors for donations and earns around $200 million a year in revenue. They would certainly notice if 50% or even 10% of their visitors were bouncing too.
Please don't be curmudgeonly about others' curmudgeonliness. We're rather hoping for anti-curmudgeonliness on HN.
https://news.ycombinator.com/newsguidelines.html
Every single time Github goes down there's no shortage of gnashing of teeth on HN about how we should all host our own repos and CI servers.
Then people go outside and play.
Then Github comes back and sins are forgotten.
This is the same attitude that results in modern developers ignoring low end consumer hardware, locking out a customer base because they aren't rich enough.
Get some perspective. Some of us have to live on 500kbit/s. The modern web is hell, and because it doesn't impact anybody with money, nobody gives a shit.