Running squid in a container is a bit tricky, since it is indeed an ancient piece of software, but I have managed to run it successfully before with a squid configuration like the one below, and with the deployment set so the container runs as UID 31 (the squid user inside the container).
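A minimal sketch of that setup (written from memory, so the port and acl range are illustrative; only the UID is specific to the image):

    # squid.conf: minimal forward proxy that behaves well in a container
    http_port 3128
    cache deny all        # no disk cache; keeps the container stateless
    pid_filename none     # no pid file needed under a container supervisor

    acl localnet src 10.0.0.0/8   # example range; use your cluster CIDR
    http_access allow localnet
    http_access deny all

and the deployment fragment:

    # run squid as its unprivileged in-container user
    securityContext:
      runAsUser: 31   # UID 31 is the squid user inside the container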
That's a more elegant approach. I usually just plow through obstacles, and the end result is not always ideal -- I like your approach better than the sidecar. I guess I was using sidecars for other things and it sort of influenced my approach.
I'll try your suggestions out and update the article. Thank you for your comment; it already made sharing this worth it.
Not just squid: more or less any HTTP proxy can be run in forward mode if you want.
Caddys "magic TLS" can be neat for this if you actually do want to dynamically intercept those https connections in an easy way. It's a use-case where Caddy really shines. You can go nuts trying to configure that cleanly in squid. The docs (perhaps intentionally) make you work for the hidden knowledge of these dark arts. You also get modernities like builtin http2, http3, etc.
Nobody else bothered by squid's very lengthy restart time, or have I just never configured it properly?
(Not to dunk on squid; it's otherwise mostly great, especially for its caching features.)
We use squid for egress control on Kubernetes and have also written a controller that runs in a sidecar container next to squid and watches for custom CRDs, such as whitelists.
The controller then updates squid.conf and reloads squid. This allows pods/namespaces to define their own whitelists.
The great thing about using squid and disabling DNS is that you can stop DNS and HTTP exfil but still allow certain websites to be accessible.
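Roughly, the controller renders something like this (the path and acl name are made up for illustration):

    # squid.conf fragment rendered from a namespace's whitelist CRD
    acl team_a_dst dstdomain "/etc/squid/allowlists/team-a.txt"
    http_access allow team_a_dst
    http_access deny all

    # after rewriting the file, the sidecar soft-reloads squid in place:
    squid -k reconfigure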
I am struggling to lock down a pod in my home cluster to allow local connections to its web UI but force all other connections through a VPN client. I'm going to investigate whether I could use squid for this.
My next approach is going to involve using a sidecar.
One heads-up to the author: the text-based charts didn't render well on FF mobile. Text is meant to reflow based on screen size, typeface, etc. I feel this is a great case for using a drawing/image instead.
Using an http proxy like squid (or apache/haproxy/caddy/envoy/trafficserver/freenginx) does sound like what you should do next.
If you need the pod to do outbound connections as well as receive incoming traffic, usually that would be two different proxies (forward and reverse, respectively). Unless you do some fancy p2p service mesh.
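For the forward half, many HTTP clients honor the conventional proxy environment variables (some only the lowercase forms), so the pod spec can stay simple; the service address here is illustrative:

    # pod spec fragment: route outbound HTTP(S) through the forward proxy
    env:
      - name: HTTP_PROXY
        value: http://squid.proxy-system.svc:3128
      - name: HTTPS_PROXY
        value: http://squid.proxy-system.svc:3128
      - name: NO_PROXY
        value: localhost,.svc,.cluster.local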
I had challenges with split-DNS in my homelab k3s cluster trying to do this. I ended up just putting the apps in docker-compose on a VM that has static routes for my local homelab networks. I looked at tailscale to solve this since it has a kubernetes operator, but tailscale doesn't fit my use cases or work well with all of my devices.
> I had challenges with split-DNS in my homelab k3s cluster trying to do this. I ended up just putting the apps in docker-compose on a VM that has static routes for my local homelab networks. I looked at tailscale to solve this since it has a kubernetes operator, but tailscale doesn't fit my use cases or work well with all of my devices.
I don't need tailscale for this; it seems like overkill.
I would like to better understand why my combination of marked packets and a SOCKS5 proxy is not fully working for certain UDP traffic. I also need to investigate whether disabling IPv6 will help.
Using a VM or docker compose when I have k3s feels like admitting defeat without understanding why.
> I would like to better understand why my combination of marked packets and a SOCKS5 proxy is not fully working for certain UDP traffic
I think UDP support for SOCKS5 proxies and clients is very spotty, especially beyond DNS. Probably some bugs out there. That might go for UDP in more or less esoteric container networking setups too...
If everything else fails, I've had the least hassle with socat, as well as just chucking workloads in a full VM (or, if staying in a container, using --network=host) and using ip routes and policies.
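That last approach looks roughly like this (mark value, table number, UID, and interface are all illustrative):

    # mark packets from the app's UID, then policy-route them via the VPN
    iptables -t mangle -A OUTPUT -m owner --uid-owner 1000 -j MARK --set-mark 0x1
    ip rule add fwmark 0x1 table 100
    ip route add default dev wg0 table 100   # wg0: example VPN interface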
You don't need a sidecar to stream squid's logs; that's an anti-pattern. Instead, just tell squid to write its logs to /dev/stdout, like this:
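    # squid.conf: the stdio log module writes straight to the container's stdout
    access_log stdio:/dev/stdout
    cache_log /dev/stderr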
This is great! The only downside is that the app needs to understand proxies.
I like this approach!
I've been working on running agents (Claude agent SDK) on k8s; this looks great for controlling their egress.
Pragmatic and practical. I learned something, thanks.