> When the network is bad, you get... fewer JPEGs. That’s it. The ones that arrive are perfect.
This would make sense... if they were using UDP, but they are using TCP. All the JPEGs they send will get there eventually (unless the connection drops). JPEG does not fix your buffering and congestion control problems. What presumably happened here is that the way they implemented their JPEG screenshots has some mechanism that minimizes the number of frames in flight. This is not some inherent property of JPEG though.
> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB. We’re sending LESS data per frame AND getting better reliability.
h.264 has better coding efficiency than JPEG. For a given target size, you should be able to get better quality from an h.264 IDR frame than a JPEG. There is no fixed size to an IDR frame.
Ultimately, the problem here is a lack of bandwidth estimation (apart from the sort of binary "good network"/"cafe mode" thing they ultimately implemented). To be fair, this is difficult to do and being stuck with TCP makes it a bit more difficult. Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.
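Concretely, that control loop can be as small as an AIMD-style adjuster keyed off acked-frame latency. A minimal sketch (hypothetical class, not the article's implementation; the initial probe and the encoder hookup are assumed):

```python
class BitrateController:
    """Latency-driven rate control over TCP: back off when transmission
    latency trends up, creep back up when it recovers."""

    def __init__(self, initial_kbps, min_kbps=500, max_kbps=8000):
        self.kbps = initial_kbps
        self.min_kbps = min_kbps
        self.max_kbps = max_kbps
        self.baseline_ms = None  # seeded by the initial bandwidth probe

    def on_frame_acked(self, send_ts, ack_ts):
        latency_ms = (ack_ts - send_ts) * 1000.0
        if self.baseline_ms is None:
            self.baseline_ms = latency_ms
            return self.kbps
        # Smooth the baseline so it tracks slow drift, not single spikes.
        self.baseline_ms = 0.95 * self.baseline_ms + 0.05 * latency_ms
        if latency_ms > 1.5 * self.baseline_ms:
            # Queues are building: cut bitrate hard (multiplicative decrease).
            self.kbps = max(self.min_kbps, int(self.kbps * 0.7))
        elif latency_ms < 1.1 * self.baseline_ms:
            # Latency is flat: probe upward gently (additive increase).
            self.kbps = min(self.max_kbps, self.kbps + 100)
        return self.kbps
```

Whenever `kbps` changes, the encoder (and, if needed, the frame rate) would be reconfigured.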
WebRTC will do this for you if you can use it, which actually suggests a different solution to this problem: use WebSockets for clients stuck behind dumb corporate firewall rules and just use WebRTC for everything else.
> The constraint that ruined everything: It has to work on enterprise networks.
> You know what enterprise networks love? HTTP. HTTPS. Port 443. That’s it. That’s the list.
That's not enough.
Corporate networks also love to MITM their own workstations and reinterpret HTTP traffic. So, no WebSockets and no Server-Sent Events either, because their corporate firewall is a piece of software no one in the world wants and everyone in the world hates, including its own developers. Thus it only supports a subset of HTTP/1.1, and sometimes it likes to change the content while keeping Content-Length intact.
And you have to work around that, because IT dept of the corporation will never lift restrictions.
I wish I was kidding.
Back when I had a job at a big old corporation, a significant part of my value to the company was that I knew how to bypass their shitty MITM thing that broke tons of stuff, including our own software that we wrote. So I could solve a lot of problems people had that otherwise seemed intractable because IT was not allowed to disable it, and they didn't even understand the myriad ways it was breaking things.
> So, no WebSockets
The corporate firewall debate came up when we considered websockets at a previous company. Everyone has parroted the same information for so long that it was just assumed that websockets and corporate firewalls were going to cause us huge problems.
We went with websockets anyway and it was fine. Almost no traffic to the no-websockets fallback path, and the traffic that did arrive appeared to be from users with intermittent internet connections (cellular providers, foreign countries with poor internet).
I'm 100% sure there are still corporate firewalls out there blocking or breaking websocket connections, but it's not nearly the same problem in 2025 as it was in 2015.
> And you have to work around that, because IT dept of the corporation will never lift restrictions.
Unless the corporation is 100% in-office, I’d wager they do in fact make exceptions - otherwise they wouldn’t have a working videoconferencing system.
The challenge is getting corporate insiders to like your product enough to get it through the exception process (a total hassle) when the firewall’s restrictions mean you can’t deliver a decent demo.
Corporate IT needs to die.
It's not corporate IT's fault, it's usually corporate leadership's fault, who often cosplay leading technology without understanding it.
Wherever tech is a first-class citizen with a seat at the corporate table, it can be different.
Believe me, the average Fortune 500 CEO does not know or care what “SSL MITM” is, or whether passwords should contain symbols and be changed monthly, or what the difference is between ‘VPN’ and ‘Zero Trust’.
They delegate that stuff. To the corporate IT department.
Sometimes they have checkboxes to tick in some compliance document and they must run the software that lets them tick those checkboxes, no exceptions, because those compliance regimes allow the company to be on the market. Regulatory capture, etc.
where else are you going to find customers that are so sticky it will take years for them to select another solution regardless of how crappy you are. that will staff teams to work around your failures. who, when faced with obvious evidence of the dysfunction of your product, will roundly blame themselves for not holding it properly. gaslight their own users. pay obscene amounts for support when all you provide is a voice mailbox that never gets emptied. will happily accept your estimate about the number of seats they need. when holding a retro about your failure will happily proclaim that there wasn't anything _they_ could have done, so case closed.
I think the general idea/flow of things is "numbers go up, until $bubble explodes, and we built up smaller things from the ground up, making numbers go up, bloating go up, until $bubble explodes..." and then repeat that forever. Seems to be the end result of capitalism.
If you wanna kill corporate IT, you have to kill capitalism first.
I’d say there’s nothing inherently capitalist about large and stupid bureaucracies (but I repeat myself) spending money in stupid ways. Military bureaucracies in capitalist countries do it. Military bureaucracies in socialist countries did it. Everything else in end-stage socialist countries did it too. I’m sorry, it’s not the capitalism—things’d be much easier if it were.
I don't believe that. I don't necessarily love capitalism (though I can't say I see very many realistic better alternatives either), but if HN is full of people who could do corporate IT better (read: sanely), then the conclusion is just that corporate IT is run by morons. Maybe that's because the corporate owners like morons, but nothing about capitalism inherently makes it so.
> corporate IT is run by morons
playing devil's advocate for a second, but corpIT is also working with morons as employees. most draconian rules used by corpIT have a basis in at least one real world example. whether that example was caused directly by one of the morons they manage or was passed along from corpIT lore, people have done some dumb-ass things on corp networks.
Yes, and the problem in that picture is the belief (whichever level of the management hierarchy it comes from) that you can introduce technical impediments against every instance of stupidity one by one until morons are no longer able to stupid. Morons will always find a way to stupid, and most organizations push the impediments well past the point of diminishing returns.
Apparently capitalism doesn’t pay enough for corporate IT admin jobs.
They even break server-sent events (which is still my default for most interactive apps)
There are other ways to make server-sent events work.
I try to remember many environments once likely supported Flash.
>And you have to work around that, because IT dept of the corporation will never lift restrictions.
Because otherwise people do dumb stuff like pasting proprietary designs or PII into deepseek
Oh, they'll do that anyway, once they find the workaround (Oh... you can paste a credit card if you put periods instead of dashes! Oh... I have to save the file and do it from my phone! Oh... I'll upload it as a .txt file and change the extension on the server!)
It's purely illusory security, that doesn't protect anything but does levy a constant performance tax on nearly every task.
>Oh, they'll do that anyway, once they find the workaround ...
This is assuming the DLP service blocks the request, rather than doing something like logging it and reporting it to your manager and/or CIO.
>It's purely illusory security, that doesn't protect anything but does levy a constant performance tax on nearly every task.
Because you can't ask deepseek to extract some unstructured data for you? I'm not sure what the alternative is, just let everyone paste info into deepseek? If you found out that your data got leaked because some employee pasted some data into some random third party service, and that the company didn't have any policies/technological measures against it, would your response still be "yeah it's fine, it's purely illusory security"?
What's the term for the ideology that "laws are silly because people sometimes break them"?
It's called black and white thinking
At the same time, enterprise is where the revenue is.
> When the network is bad, you get... fewer JPEGs. That’s it. The ones that arrive are perfect.
You can still have weird broken stallouts though.
I dunno, this article has some good problem solving but the biggest and mostly untouched issue is that they set the minimum h.264 bandwidth too high. H.264 can do a lot better than JPEG with a lot less bandwidth. But if you lock it at 40Mbps of course it's flaky. Try 1Mbps and iterate from there.
And going keyframe-only is the opposite of how you optimize video bandwidth.
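For a sense of what "try 1Mbps" looks like in practice, here is an illustrative low-latency encode invoked from Python. The flags are standard ffmpeg/x264 options, but the input/output names are placeholders, not the article's actual pipeline:

```python
import subprocess

# Illustrative: encode a 1080p source as low-latency H.264 at roughly 1 Mbps.
# "input.ts" stands in for whatever the real capture source is.
subprocess.run([
    "ffmpeg", "-re", "-i", "input.ts",
    "-c:v", "libx264",
    "-preset", "veryfast",               # cheap encode, fine for screen content
    "-tune", "zerolatency",              # no lookahead / B-frame buffering
    "-b:v", "1M",                        # target bitrate
    "-maxrate", "1M", "-bufsize", "2M",  # cap short-term spikes
    "-g", "120",                         # keyframe every 2s at 60fps (-g 1 would force all-intra)
    "-f", "mpegts", "output.ts",
], check=True)
```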
It might be possible to buffer and queue jpegs for playback as well to help with weird broken stall outs.
Video players used to call it buffering, and resolving it was called buffering issues.
Players today can keep an eye on network quality while playing too, which is neat.
> Try 1Mbps and iterate from there.
From the article:
“Just lower the bitrate,” you say. Great idea. Now it’s 10Mbps of blocky garbage that’s still 30 seconds behind.
Rejecting it out of hand isn't actually trying it.
10Mbps is still way too high of a minimum. It's more than YouTube uses for full motion 4k.
And it would not be blocky garbage, it would still look a lot better than JPEG.
1Mbps for video is the rule of thumb I use. Of course that will depend on customer expectations. 500K can work, but it won’t be pretty.
For normal video I think that's a good rule of thumb.
For mostly-static content at 4fps you can cut a bunch more bitrate corners before it looks bad. (And 2-3 JPEGs per second won't even look good at 1Mbps.)
Still images will use much more BW for the same perceived quality in my experience.
Proper rate control for such realtime streaming would also lower framerate and/or resolution to maintain the best quality and latency they can over dynamic network conditions and however little bandwidth they have. The fundamental issue is that they don't have this control loop at all, and are badly simulating it by polling JPEGs.
They might want to check out what VNC has been doing since 1998: keep the client-pull model, break the framebuffer up into tiles and, when the client requests an update, perform a diff against the last frame sent, then composite the updated tiles client-side. (This is what VNC falls back to when it doesn’t have damage-tracking from the OS compositor.)
This would really cut down on the bandwidth of static coding terminals where 90% of screen is just cursor flashing or small bits of text moving.
If they really wanted to be ambitious they could also detect scrolling and do an optimization client-side where it translates some of the existing areas (look up CopyRect command in VNC).
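A rough sketch of the tile-diff idea, assuming frames arrive as numpy arrays (this is the concept, not VNC's actual wire protocol):

```python
import numpy as np

TILE = 64  # tile edge in pixels

def changed_tiles(prev, curr):
    """Yield (x, y, tile_pixels) for every TILE x TILE block that differs
    between the previous frame sent and the current frame."""
    h, w = curr.shape[:2]
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            a = prev[y:y + TILE, x:x + TILE]
            b = curr[y:y + TILE, x:x + TILE]
            if not np.array_equal(a, b):
                yield x, y, b

# Server side: on each client pull, send only the changed tiles (e.g. as small
# JPEGs) plus their coordinates; the client composites them into its local copy.
```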
If you are ok with a second or so of latency then MPEG-DASH (an adaptive-streaming standard along the lines of HTTP Live Streaming) is likely the best bet. You simply serve the video chunks over HTTP, so it should be just as compatible as the JPEG solution used here but provide 60fps video rather than crappy JPEGs.
The standard supports adaptive bit rate playback so you can provide both low quality and high quality videos and players can switch depending on bandwidth available.
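The player-side switching is simple in principle: measure how fast the last segment downloaded and pick the biggest rendition that fits. A toy version (the rendition list and URLs are hypothetical; real DASH players smooth this far more):

```python
import time
import urllib.request

# Hypothetical renditions: (name, bitrate in bits per second, URL template)
RENDITIONS = [
    ("low", 500_000, "https://example.com/low/seg-{n}.m4s"),
    ("high", 3_000_000, "https://example.com/high/seg-{n}.m4s"),
]

def pick_rendition(measured_bps):
    """Choose the highest rendition whose bitrate fits in ~80% of measured throughput."""
    best = RENDITIONS[0]
    for r in RENDITIONS:
        if r[1] <= 0.8 * measured_bps:
            best = r
    return best

def fetch_segment(url):
    """Download one segment and return (bytes, measured throughput in bps)."""
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    elapsed = max(time.monotonic() - start, 1e-6)
    return data, len(data) * 8 / elapsed
```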
> The fix was embarrassingly simple: once you fall back to screenshots, stay there until the user explicitly clicks to retry.
There is another recovery option (a rough sketch follows the list):
- increase the JPEG framerate every couple seconds until the bandwidth consumption approaches the H264 stream bandwidth estimate
- keep track of latency changes. If the client reports a stable latency range, and it is acceptable (<1s latency, <200ms variance?) and bandwidth use has reached 95% of the H264 estimate, re-activate the stream
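Sketched out, with the client-reporting and stream-switching helpers left as assumed hooks:

```python
import time

def try_upgrade(client, h264_kbps_estimate):
    """Hypothetical upgrade loop: ramp the JPEG framerate toward the H264
    bandwidth estimate and switch back only after latency stays stable."""
    fps, stable_for = 2, 0
    while True:
        stats = client.report_latency()   # assumed: {"latency_ms", "jitter_ms", "kbps_used"}
        if stats["latency_ms"] < 1000 and stats["jitter_ms"] < 200:
            stable_for += 1
            fps = min(fps + 1, 30)        # keep probing upward every couple seconds
        else:
            stable_for = 0
            fps = max(fps - 1, 2)         # back off the probe
        if stable_for >= 5 and stats["kbps_used"] >= 0.95 * h264_kbps_estimate:
            client.switch_to_h264()       # assumed helper: re-activate the stream
            return
        client.set_jpeg_fps(fps)          # assumed helper
        time.sleep(2)
```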
Given that text/code is what is being viewed, lower res and adaptive streaming (HLS) are not really viable solutions since they become unreadable at lower res.
If remote screen sharing is a core feature of the service, I think this is a reasonable next step for the product.
That said, IMO at a higher level, if you know what you're streaming is human-readable text, it's better to send the application data itself rather than encoding screen-space video. That does however require building bespoke decoders and client viewers if real-time collaboration clients don't already exist for the tools (but SSH and RTC code editors exist).
Having pair programmed over some truly awful and locked down connections before, dropped frames are infinitely better than blurred frames which make text unreadable whenever the mouse is moved. But 40Mbps seems like an awful lot for 1080p 60fps.
Temporal SVC (reduce framerate if bandwidth constrained) is pretty widely supported by now, right? Though maybe not for H.264, so it probably would have scaled nicely but only on Webrtc?
I made this because I got tired of screensharing issues in corporate environments: https://bluescreen.live (code via github).
Screenshot once per second. Works everywhere.
I’m still waiting for mobile screenshare api support, so I could quickly use it to show stuff from my phone to other phones with the QR link.
There are so many things that I would have done differently.
> We added a keyframes_only flag. We modified the video decoder to check FrameType::Idr. We set GOP to 60 (one keyframe per second at 60fps). We tested.
Why muck around with P-frames and keyframes? Just make your video 1fps.
> Now it’s 10Mbps of blocky garbage that’s still 30 seconds behind.
10 Mbps is way too much. I occasionally watch YouTube videos where someone writes code. I set my quality to 1080p to be comparable with the article and YouTube serves me the video at way less than 1Mbps. I did some quick napkin math for a random coding video and it came out to 0.6Mbps. It’s not blocky garbage at all.
Setting it to 1fps might not be enough. The GOP or P-frame settings need to be adjusted to make every frame a keyframe.
"Think “screen share, but the thing being shared is a robot writing code.”"
Thinks: why not send text instead of graphics, then? I'm sure it's more complicated than that...
Thinks: this video[1] is the processed feed from the Huygens space probe landing on Saturn's moon Titan circa 2005. Relayed through the Cassini probe orbiting Saturn, 880 million miles from the Sun. At a total mission cost of 3.25 billion dollars. This is the sensor data, altitude, speed, spin, ultra violet, and hundreds of photos. (Read the description for what the audio is encoding, it's neat!)
Look at the end of the video, the photometry data count stops at "7996 kbytes received"(!)
> "Turns out, 40Mbps video streams don’t appreciate 200ms+ network latency. Who knew. “Just lower the bitrate,” you say. Great idea. Now it’s 10Mbps of blocky garbage"
Who could do anything useful with 10Mbps. :/
[1] https://en.wikipedia.org/wiki/File:Huygens_descent.ogv
Yeah, I'm thinking the same thing. Capture the text somehow and send that, and reconstruct it on the other end; and the best part is you only need to send each new character, not the whole screen, so it should be very small and lightning fast?
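A toy version of that idea, assuming the agent's output can be read as a growing text buffer (both hooks here are hypothetical, not a real API):

```python
import asyncio

async def stream_new_text(read_output, send):
    """Poll a growing output buffer and ship only newly appended characters.
    `read_output()` and `send()` are assumed hooks supplied by the caller."""
    sent = 0
    while True:
        text = read_output()          # full terminal contents so far
        if len(text) > sent:
            await send(text[sent:])   # only the delta crosses the wire
            sent = len(text)
        await asyncio.sleep(0.05)
```

This handles append-only output; real terminals also redraw and move the cursor, which asciinema-style recorders handle by shipping the raw escape-code stream instead.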
Sounds kind of like https://asciinema.org/ (which I've never used, but it seems cool).
This was the most entertaining thing I read all day. Kudos.
I've had similar experiences in the past when trying to do remote desktop streaming for digital signage (which is not particularly demanding in bandwidth terms). Multicast streaming video was the most efficient, but annoying to decode when you dropped data. I now wonder how far I could have gone with JPEGs...
If playing with Chromecast-type devices, multicast or streaming one frame at a time manually worked pretty well.
This reminds me of the time we built a big angular3 codebase for a content platform. When we had to launch, the search engines expected content to be part of the page HTML while we were calling APIs to fetch the content (angular3 didn’t have server-side rendering at that point).
So the only plausible thing to do was pre-build HTML pages for the content pages and let angular’s JS take its time to load (for UX functionality). The page flickered when the JS loaded for the first time, but we solved the search engine problem.
They're just streaming a video feed of an LLM running in a terminal? Why not stream the actual text? Or fetch it piecemeal over AJAX requests? They complain that corporate networks support only HTTPS and nothing else? Do they not understand what the first T stands for?
Suppose an LLM opens a browser, or opens a corporate .exe and GUI and starts typing in there and clicking buttons.
So it’s video of an AI typing text?
Why not just send text? Why do you need video at all?
Why send anything at all if the AI isn't even good enough to solve their own problems?
(Although the fact they decided to use Moonlight in an enterprise product makes me wonder if their product actually was vibe coded)
> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB.
I believe the latter can be adjusted in codec settings.
Of course. But a same-quality H264 keyframe will not be much smaller than a JPEG.
We did something similar in one of the places I've worked at. We sent xy coordinates and pointer events from our frontend app to our backend/3d renderer and received JPEG frames back. All of that wrapped in protobuf messages and sent via a WS connection. Surprisingly, it kinda worked, not "60fps worked" though obviously.
WebSockets over TCP is probably always going to cause problems for streaming media.
WebRTC over UDP is one choice for lossy situations. Media over Quic might be another (is the future here?), and it might be more enterprise firewall friendly since HTTP3 is over Quic.
Yes, this is unfortunately still the way and was very common back when iOS Safari did not allow embedded video.
For a fast start of the video, reverse the implementation: instead of downgrading from WebSockets to polling when the connection fails, you should upgrade from polling to WebSockets when the network allows.
Socket.io was one of the first libraries that did that switching, and it had it wrong at first, too. They learned the enterprise network behaviour and switched the implementation.
No mention of PNGs? I don’t usually go to jpegs first for screenshots of text. Did png have worse compression? Burn more cpu? I’m sure there are good reasons, but it seems like they’ve glossed over the obvious choice here.
PNG is VERY slow compared to other formats. Not suitable for this sort of thing.
PNGs are lossless so you can’t really dial up the compression. You can save space by reducing to 8-bit color (or grayscale!) but it’s basically the equivalent of raw pixels plus zlib.
PNGs likely perform great; existing enterprise network filters, browser controls, etc., might not, even with how old PNGs are now.
So they replaced a stream with no application-level congestion control with a synchronous poll of an endpoint which is inherently congestion controlled.
I wonder if they just tried restarting the stream at a lower bitrate once it got too delayed.
The talk about how the images look more crisp at a lower FPS is just tuning that I guess they didn't bother with.
webp is smaller than jpeg
https://developers.google.com/speed/webp/docs/webp_study
ALSO - the blog author could simplify - you don't need any code at all in the web browser.
A plain <img> tag pointed at a motion-JPEG (multipart/x-mixed-replace) endpoint streams automatically.
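Server-side, that's just a multipart/x-mixed-replace response that the browser keeps painting into the same <img>. A bare-bones sketch, with the screenshot source stubbed out as a file read:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def next_jpeg() -> bytes:
    # Stand-in for the real capture path: read the latest screenshot from disk.
    with open("latest.jpg", "rb") as f:
        return f.read()

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        while True:
            jpeg = next_jpeg()
            self.wfile.write(b"--frame\r\n")
            self.wfile.write(b"Content-Type: image/jpeg\r\n")
            self.wfile.write(b"Content-Length: %d\r\n\r\n" % len(jpeg))
            self.wfile.write(jpeg + b"\r\n")

# Run with: HTTPServer(("", 8080), MJPEGHandler).serve_forever()
# Browser side: <img src="http://host:8080/stream"> and nothing else.
```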
… and JPEG XL is smaller than WebP.
JPEG XL looks to have pretty poor support.
https://caniuse.com/jpegxl
A long time ago I was trying to get video multiplexing to work over mobile over 3G. We struggled with H264, which had broad enough hardware support but almost no tooling and software support on the phones we were targeting. Even with engineers from the phone manufacturer as liaison, we struggled to get access to any kind of SDK etc. We ended up doing JPEG streaming instead, much like the article said. And it worked great, but we discovered we were getting a fraction of the framerate reported in Flash players - the call to refresh the screen was async, and the act of receiving and decoding the next frame starved the redraw, so the phone spent more time receiving lots of frames than showing them. Super annoying, and I don’t think the project survived long enough for us to find a fix.
It’s always TCP_NODELAY seems relevant here: https://news.ycombinator.com/item?id=40310896
so did they reinvent mjpeg
I was blown away when I realized I could stream mjpeg from a raspberry pi camera with lower latency and less ceremony than everything I tried with webrtc and similar approaches.
An MPEG-1-based screen sharing experiment appeared here 10 years ago:
- https://news.ycombinator.com/item?id=9954870
- https://phoboslab.org/log/2015/07/play-gta-v-in-your-browser...
from first principles.
Doesn’t matter now, but what led you to TURN?
You can run all WebRTC traffic over a single port. It’s a shame you spent so much time/were frustrated by ICE errors
That’s great you got something better and with less complexity! I do think people push ‘you need UDP and BWE’ a little too zealously. If you have a homogeneous set of clients stuff like RTMP/Websockets seems to serve people well
> Why JPEGs Actually Slap
JPEG is extremely efficient to [de/en]code on modern CPUs. You can get close to 1080p60 per core if you use a library that leverages SIMD.
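For illustration, per-frame JPEG encoding is only a couple of lines (Pillow here for simplicity; hitting 1080p60 per core needs a SIMD-accelerated library such as libjpeg-turbo rather than stock Pillow):

```python
import io
import numpy as np
from PIL import Image

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for a captured 1080p frame

buf = io.BytesIO()
Image.fromarray(frame).save(buf, format="JPEG", quality=70)
jpeg_bytes = buf.getvalue()  # this is what goes over the wire per screenshot
```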
I sometimes struggle with the pursuit of perfect codec efficiency when our networks have become this fast. You can employ half-assed compression and still not max out a 1gbps pipe. From Netflix & Google's perspective it totally makes sense, but unless you are building a streaming video platform with billions of customers I don't see the point.
Would HLS be an option? I publish my home security cameras via WebRTC, but I keep HLS as an escape hatch for hotel/cafe WiFi situations (MediaMTX makes it easy to offer both).
Thought of the same. I have not set it up outside of hobby projects, but it should work over HTTP as it says on the box, even inside a strict network?
Yes, it is strictly HTTP, not even persistent connections required.
> I mashed F5 like a degenerate.
I love the style of this blog-post, you can really tell that Luke has been deep down in the rabbit hole, encountered the Balrog and lived to tell the tale.
I like it too, even though it has that distinctive odor of being totally written by chatgpt (a bit distracting tbh).
I guess this is great as long as you don't worry about audio sync?
at least the ai agents aren't talking back to us
This is similar to what BrowserBox does for the same reasons outlined. Glad to see the control afforded by "ye olde ways" is recognized and widely appreciated.
A very stupid hack that can work to "fix" this could be to buffer the h264 stream at the data center using a proxy before sending it to the real client, etc.
One of the big issues was latency.
I’m surprised that H264 I-frame-only compresses worse than JPG.
Maybe because the basic frequency transform is 4x4 vs 8x8 for JPG?
Their h264 iframes were bigger than the jpegs because they told the h264 encoder to produce bigger images. If they had set it to produce images the same size as the jpegs it most likely would have resulted in higher quality.
Awesome!
Good engineering: when you're not too proud to do the obvious, but sort of cheesy-sounding solution.
One thing this article points to indirectly is that sometimes simple scales and complex fails.
“We didn’t have the expertise to build the thing we were building, got in way over our heads, and built a basic POC using legacy technology, which is fine.”
> I mashed F5 like a degenerate
Bargaining.
This is a beautiful cope. Every time technology rolls out something that works great 90% of the time for 90% of the people, those 10%s pile up big time in support and lost productivity. You need functional systems that fall back gracefully to 1994 if necessary.
I started the first ISP in my area. We had two T1s to Miami. When HD audio and the rudiments of video started to increase in popularity, I'd always tell our modem customers, "A few minutes of video is a lifetime of email. Remember how exciting email was?"
> looks at TCP congestion control literature
> closes tab
Eh, there are a few easy things one can try. Make sure to use a non-ancient kernel on the sender side (to get the necessary features), then enable BBR and NOTSENT_LOWAT (https://blog.cloudflare.com/http-2-prioritization-with-nginx...) to avoid buffering more than what's in-flight and then start dropping websocket frames when the socket says it's full.
Also, with tighter integration with the h264 encoder loop one could tell it which frames weren't sent and account for that in pframe generation. But I guess that wasn't available with that stack.
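Roughly what those sender-side knobs look like (Linux-specific; the numeric fallbacks are the Linux constants, and the drop-instead-of-queue policy around them is my own sketch, not the parent's exact setup):

```python
import socket
import select

TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)        # Linux value
TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)  # Linux value

def tune_sender(sock: socket.socket) -> None:
    # BBR keeps the kernel from filling huge queues on slow/lossy paths.
    sock.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"bbr")
    # Keep at most ~16KB of not-yet-sent data buffered in the kernel, so
    # "is the socket writable?" actually reflects network backpressure.
    sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, 16 * 1024)

def send_or_drop(sock: socket.socket, frame: bytes) -> bool:
    # If the socket isn't writable, the pipe is full: drop this frame
    # instead of queueing it (and tell the encoder, if it supports that).
    writable = select.select([], [sock], [], 0)[1]
    if not writable:
        return False
    sock.sendall(frame)
    return True
```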
If you have latency detection already why not pause H.264 frames, then when ack comes just force a key frame and resume (perhaps with adjusted target bitrate)?
That would require that they understand the protocol stack they're using to send H.264 frames
Yeah, monitor the send queue length and reduce bit rate accordingly.
Another case of we’re going backwards. The boring stuff is what works every time…
So, they've invented MJPEG?
Or is it intra-only H.264?
I mean, none of this is especially new. It's an interesting trick though!
The LinkedIn slop tone, random bolding, miscopied Markdown tables makes me invoke: "please read the copy you worked on with AI"
smaller thing: many, many, moons ago, I did a lot of work with H.264. "A single H.264 keyframe is 200-500KB." is fantastical.
Can't prove it wrong because it will be correct given arbitrary dimensions and encoding settings, but, it's pretty hard to end up with.
Just pulled a couple 1080p's off YouTube, biggest I-frame is 150KB, median is 58KB (`ffprobe $FILE -show_frames -of compact -show_entries frame=pict_type,pkt_size | grep -i "|pict_type=I"`)
at least it had a minimum of "Clause. Clause. Punchline."