All the containers change IP addresses frequently. For Home Assistant, a static IP address for the proxy manager is mandatory so it can be reached reliably. For Jellyfin, it is useful to see which device is accessing it; if the IP keeps changing, that doesn’t work properly.
How do I fix a container IP with podman compose (or Docker)?
You declare the IP in your setup, in this case in the YAML file. An example docker-compose file is in the link below. You’ll also want to declare the network itself, in case you’re not familiar with that part.
https://gist.github.com/natcl/3d881d00a56c8a961e6dab8ba51a5a37
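Roughly, a minimal sketch looks like this (the network name, subnet, and image are placeholders, not taken from the gist):

```yaml
# Minimal sketch: pin a container's IP on a user-defined bridge network.
# Network name, subnet, and service name are illustrative placeholders.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    networks:
      proxy:
        ipv4_address: 172.20.0.10   # fixed address for the proxy manager

networks:
  proxy:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
```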
It worked, thank you
This was a quick answer and doesn’t cover the whole scope of networking in Docker. As mentioned elsewhere, there is a different networking philosophy for standalone containers versus overlay networking.
Thx. That helps a lot
This feels like an anti-pattern that should be avoided. Docker Compose allows individual services to be scaled to more than one instance. By hard-assigning an IP address to a service, how is that going to scale in the future?
I don’t know how to reconcile this directly for NPM, but the way to do this with Traefik is to use container labels (not hard-assigning an IP address) so that Traefik can discover the service and wire itself up automatically. I’d imagine there should be a similar way to perform service discovery in NPM?
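Something along these lines, using Traefik v2 label syntax (hostname, service name, and port are placeholders):

```yaml
# Sketch of label-based discovery with Traefik (v2 syntax). Traefik watches
# the Docker socket and builds routing from these labels, so no fixed
# container IP is needed.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)"
      - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"
```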
The bigger issue (IMO) is that you now have a hard requirement on the startup order of your services. If another container happens to get that IP assigned automatically before the service that requests it explicitly starts, you have a conflict that you have to resolve manually.
DNS is the only sane solution here.
DNS can also give you some headaches. If you also need to reach other things on the LAN by name, or want to have a mesh VPN going with Tailscale, now you have to juggle three DNS domains (container network, LAN, and the tailnet).
I meant DNS within your container network. Exposed stuff should be mapped to host ports.
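A rough sketch of what I mean, with placeholder names: on a shared user-defined network, containers resolve each other by service name through Docker’s embedded DNS, and only the proxy publishes host ports:

```yaml
# Sketch: containers on a user-defined network reach each other by service
# name via Docker's embedded DNS, so no static IPs are needed. Only the
# proxy publishes host ports; names are placeholders.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
    networks: [backend]

  jellyfin:
    image: jellyfin/jellyfin:latest
    networks: [backend]
    # inside NPM, the upstream is simply http://jellyfin:8096

networks:
  backend:
    driver: bridge
```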
Docker Compose isn’t really scalable. If you need automatic, high-availability load balancing, you should look into Kubernetes Ingress.
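For reference, a minimal Ingress sketch (assuming an ingress controller such as ingress-nginx is installed; host, service name, and port are placeholders):

```yaml
# Minimal Ingress sketch: routes a hostname to a backing Service, which the
# cluster load-balances across however many pods are running.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
spec:
  rules:
    - host: jellyfin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096
```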
It’s not a fully scalable solution, no. Without Swarm, last I checked, it can’t really run across multiple hosts. However, it can scale individual services within the same host if resources are available and the service benefits from it (see the sketch below). It’s not uncommon for something to require multiple worker instances, and hard-assigned IPs break that paradigm.
Service discovery will certainly play a much larger role in more heavily orchestrated systems, but that doesn’t mean it shouldn’t start here.
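For example (a sketch; the worker image is a placeholder), Compose can run several replicas of one service on a single host, and that only works if nothing per-instance is pinned:

```yaml
# Sketch: scale a service on one host with
#   docker compose up -d --scale worker=3
# A fixed ipv4_address (or a fixed host port) can only be held by one
# replica, so pinning either one breaks this kind of scaling.
services:
  worker:
    image: myorg/worker:latest   # placeholder image
    networks: [backend]

networks:
  backend:
    driver: bridge
```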
Docker keeps track of the addresses it has assigned. It will either allocate around them, or, if you have explicit overlaps, report a conflict and refuse to start the container (see the sketch below).
Also, for most self-hosters scaling is the least of their needs.
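A sketch of keeping the two apart, assuming your Compose version supports the `ip_range` ipam option from the current Compose specification (addresses are placeholders):

```yaml
# Sketch: avoid startup-order collisions by restricting the dynamic pool
# with ip_range, so static ipv4_address values never overlap with it.
networks:
  proxy:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
          ip_range: 172.20.0.128/25   # dynamic addresses come from here
          # static ipv4_address values can then be picked from 172.20.0.2-126
```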
Except it is explicitly being told to use a single IP address here. So the engine either has to ignore the explicit assignment or create a conflict within its own network, neither of which is the expected behavior.
Just because people are self-hosting doesn’t mean they should be doing things incorrectly.