Be careful, Docker might be exposing ports to the world

Recently, I noticed logs for one of my web services had strange entries that looked like a bot trying to perform scripted attacks on an application endpoint. I was surprised, because all the endpoints that were exposed over the public Internet were protected by some form of authentication, or were locked down to specific IP addresses—or so I thought.

I had re-architected the service using Docker in the past year, and in the process of doing so, I changed the way the application ran—instead of having one server per process, I ran a group of processes on one server, and routed traffic to them using DNS names (one per process) and Nginx to proxy the traffic.

In this new setup, I built a custom firewall using iptables rules (since I had to account for a number of legacy services that I have yet to route through Docker; someday it will all be in Kubernetes), installed Docker, and set up a Docker Compose file (one per server) that ran all the processes in containers, using ports like 1234, 1235, etc.

The Docker Compose port declaration for each service looked like this:

version: '3.7'
# ...
    ports:
      - "127.0.0.1:1234:1234"

Nginx was then proxying requests for each DNS name (on port 443) through to the corresponding process: one name to port 1234, another to port 1235, and so on.
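A minimal sketch of the Nginx side, assuming one server block per DNS name (the server name is a placeholder, and TLS certificate directives are omitted):

```nginx
server {
    listen 443 ssl;
    server_name app1.example.com;   # placeholder; the real DNS names aren't shown here

    location / {
        # Forward to the container port published on localhost
        proxy_pass http://127.0.0.1:1234;
        proxy_set_header Host $host;
    }
}
```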

I thought this port declaration would only expose the port on localhost, according to the Published ports documentation. And in normal circumstances, this does seem to work correctly (see a test script proving this here).
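Outside of Docker, binding to localhost behaves exactly as you'd expect; a minimal sketch (no Docker involved) of what such a test boils down to:

```python
import socket

# Binding a socket to 127.0.0.1 makes it reachable only via the loopback
# interface -- which is what the "127.0.0.1:1234:1234" publish syntax is
# supposed to guarantee for the container's port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
s.listen(1)
host, port = s.getsockname()
print(f"listening on {host}:{port} (loopback only)")
s.close()
```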

But in my case, likely due to some of the other custom iptables rules conflicting with Docker's rules, Docker's iptables modifications exposed the container ports (e.g. 1234) to the outside world, on the eth0 interface.

I was scratching my head as to how external requests to server-ip:1234 were getting through even though the port declaration bound the port to localhost, and I found that I'm not the only person who has bumped into this issue. The fix, in my case, was to add a rule to the DOCKER-USER chain:

iptables -I DOCKER-USER -i eth0 ! -s 127.0.0.0/8 -j DROP

This rule, which I found buried in some documentation about restricting connections to the Docker host, drops any traffic from a given interface that's not coming from localhost. This works in my case, because I'm not trying to expose any Docker containers to the world—if you wanted to have a mix of some containers open to the world, and others proxied, this wouldn't work for you.
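If you did want to allow some external access rather than blocking everything, that same documentation shows the pattern with a trusted source instead of localhost (the subnet below is the documentation's example, not mine):

```shell
# Allow only a trusted subnet to reach published container ports via eth0
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```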

After applying the rule, verify it's sticking by restarting your server and checking the DOCKER-USER chain:

$ sudo iptables -L DOCKER-USER
Chain DOCKER-USER (1 references)
target     prot opt source               destination        
DROP       all  -- !localhost            anywhere           
RETURN     all  --  anywhere             anywhere

To be doubly sure your firewall is intact, you can verify which ports are open (-p- tells nmap to scan all ports, 1-65535) using nmap:

sudo nmap -p- [server-ip-address]

This behavior is confusing and not well documented, even more so because many of these options behave subtly differently depending on whether you're using docker run, docker-compose, or docker stack. As another example of this maddening behavior, try figuring out how CPU and memory restrictions work with docker run vs. docker-compose v2 vs. docker-compose v3 vs. 'Swarm mode'!
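To illustrate just the resource-limit case (the values below are examples):

```
# docker run: flags on the command line
docker run --memory=512m --cpus=1.5 example/app

# docker-compose v2 file format: top-level keys on the service
services:
  app:
    mem_limit: 512m
    cpus: 1.5

# docker-compose v3 file format: limits move under deploy:, which
# plain docker-compose ignores unless you pass --compatibility,
# while 'docker stack deploy' (Swarm mode) honors it
services:
  app:
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.5'
```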

I think I might be going crazy with the realization that—at least in some cases—Kubernetes is simpler to use than Docker, owing to its more consistent networking model.


Thanks for this post, I have also realized this only very recently.

Thanks for this post!

I got caught by this bug too, after exposing a password-less Postgres instance to the internet and then finding someone had compromised it and was using it, presumably, to mine a cryptocurrency. Feel stupid for doing that and accept responsibility, but still can't help but feel Docker's default behaviour here ought to be far more verbose about the fact that it's exposing ports.
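For anyone else who got bitten the same way: as well as a host firewall rule, you can make the Compose mapping itself loopback-only (a sketch; the image tag and port are just typical Postgres defaults):

```yaml
services:
  db:
    image: postgres:13
    ports:
      - "127.0.0.1:5432:5432"   # not reachable from other machines
```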

Thank you for the article!
I ran into an unfortunate side effect of the iptables rule "iptables -I DOCKER-USER -i eth0 ! -s 127.0.0.0/8 -j DROP". While it protected my containers from outside access as intended, it somehow blocked all DNS queries my containerized services tried to make.

I found that the following suggested iptables rules still protect my containers, but don't interfere with the DNS communication. The rules are:
iptables -I DOCKER-USER -i eth0 -j DROP
iptables -I DOCKER-USER -m state --state RELATED,ESTABLISHED -j ACCEPT
(Since -I inserts at the top of the chain, running them in this order leaves the ACCEPT rule above the DROP rule, so replies to connections the containers initiate, like DNS queries, are let back in.)

It sounds like maybe the docker-compose "ports" directive is the same as the docker --publish directive? That is, docker-compose makes the ports public by default which is the opposite of docker. If you want internal ports only, you might have to use the "expose" directive in your docker-compose configuration.

From the docker-compose documentation:

Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
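In practice that means an internal-only service would look something like this (service name hypothetical):

```yaml
services:
  app:
    expose:
      - "1234"   # visible to other containers on the same network only;
                 # nothing is published on the host
```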