One month of Podman
In my previous post, Migrating to Podman, you accompanied me on my migration from a docker-compose file to a more CLI-based approach with Podman. I am still working on my logging-vs-tracing showcase, but there were a few surprises along the way, so I wanted to write a follow-up instead of just modifying my original post.
I originally started with OpenTracing, but while I was still trying to figure everything out, the Quarkus project finally made the switch to OpenTelemetry, and I had to start over. I will explain how in my next post; for now, let us just accept the fact that OpenTelemetry needs another collector, which has to be added to my setup.
Adding another container is no problem, but when I tried to fire it up I found this in the logs:
Error: cannot setup pipelines: cannot start receivers: listen udp :6832: bind: address already in use
The apparent problem here is that both the jaeger-collector and the otel-collector listen on the same port. To my surprise, there is no network separation between the containers inside of a pod (by default), but after some headaches I realized this is to be expected: when you publish a port, it is published on the pod level, and for rootless containers this just works.
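To make the collision visible, here is a minimal sketch of such a setup; the pod name, port, and images are my own illustration, not necessarily the exact ones from my showcase:

```shell
# Ports are published on the pod, not on the individual containers.
podman pod create --name tracing -p 6832:6832/udp

# Every container joining the pod shares the same network namespace ...
podman run -d --pod tracing --name jaeger docker.io/jaegertracing/all-in-one

# ... so the second collector fails to bind the already-taken port 6832.
podman run -d --pod tracing --name otel docker.io/otel/opentelemetry-collector
```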
Networking between pods
I spent some hours trying to figure out whether I could just disable this port for either the jaeger-collector or the otel-collector, but to no avail. Desperate as I was, I just moved both collectors to a different pod and considered it done.
That was when I discovered how my networking really works: although I can access the published ports from my host machine, an application inside of a container in another pod cannot.
Possible solutions that I came up with:
- Switch the pod network from bridge to slirp4netns mode and find solutions for all of the newly introduced problems.
- Move from jaeger-all-in-one to the standalone versions of each component. (There was also an opentelemetry-all-in-one version, but according to this post it has been discontinued.)
- Start Jaeger first and just hope it doesn’t complain.
Yes, I went the obvious way and just started Jaeger first; it worked like a charm.
No support for UDP yet
LogManager error of type GENERIC_FAILURE: Port localhost:12201 not reachable
With it, I could verify two things:
- I couldn’t reach Fluentd from my host machine.
- Inside of the container, everything worked fine.
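My checks looked roughly like this; the container name is hypothetical, and I am assuming netcat is available on both the host and inside the container:

```shell
# From the host: try to push a minimal GELF message over UDP to Fluentd.
echo '{"version":"1.1","host":"probe","short_message":"ping"}' | nc -u -w 1 localhost 12201

# From inside a container in the pod, the same probe reaches Fluentd just fine.
podman exec my-quarkus-app sh -c \
  'echo "{\"version\":\"1.1\",\"host\":\"probe\",\"short_message\":\"ping\"}" | nc -u -w 1 localhost 12201'
```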
Some hours later I stumbled upon this:
This pretty much explained all my problems: gvproxy currently has no support for UDP and just ignores the udp flag altogether. My instances were expecting UDP, but all they got was TCP, and that ultimately failed.
The easiest solution here was to configure GELF to run over TCP, which both the Fluentd input plugin and the GELF handler of Quarkus support. I don't want to bore you with the trouble I had finding the correct config options, so I will just point you to the source code:
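On the Quarkus side, the switch to TCP boils down to something like the following application.properties snippet (a sketch based on my setup; the tcp: host prefix comes from the underlying logstash-gelf library):

```properties
quarkus.log.handler.gelf.enabled=true
# The "tcp:" prefix tells the underlying logstash-gelf library to use TCP instead of UDP.
quarkus.log.handler.gelf.host=tcp:localhost
quarkus.log.handler.gelf.port=12201
```

On the Fluentd side, the gelf input plugin accepts a corresponding protocol setting (protocol_type tcp in the plugin version I used), so both ends speak TCP.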
Access to local filesystems
On macOS and Windows, Podman runs its containers inside a virtual machine. This virtualization causes problems when the container shim expects paths inside of the guest machine to be the same as on the underlying host machine.
To bypass this, Docker automatically mounts the user's home directory into the guest machine, so all access works as if the container were running on the host itself.
Podman doesn’t mount your home directory automatically, but it can be forced to when you init your machine like this:
podman machine init -v $HOME:$HOME
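Afterwards, a quick check along these lines should confirm that the mount works; the image and the paths are just examples:

```shell
# After the machine is recreated, host paths below $HOME also exist inside the guest ...
podman machine ssh ls $HOME

# ... so bind mounts from the home directory work like they do with Docker.
podman run --rm -v $HOME/projects:/work docker.io/library/alpine ls /work
```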
I know I don’t have to remind you, but my logging-vs-tracing showcase can still be found here: