
Ignore noisy logs with fluentd in EKS or other Kubernetes clusters

Recently, I decided to use the fluentd-kubernetes-daemonset project to easily ship all logs from an Amazon EKS Kubernetes cluster to an Elasticsearch cluster running elsewhere.

The initial configuration worked great out of the box: just fill in details like FLUENT_ELASTICSEARCH_HOST and any authentication info, deploy the RBAC rules and DaemonSet into your cluster, and you're off to the races (assuming your Elasticsearch instance is configured to allow access from the cluster!).
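For reference, that configuration mostly boils down to a handful of environment variables on the fluentd container in the DaemonSet. Here's a minimal sketch; FLUENT_ELASTICSEARCH_HOST is the setting mentioned above, while the image tag, the other variable names, and all the values are placeholders based on the fluentd-kubernetes-daemonset docs, so check them against the image you actually deploy:

```yaml
# Excerpt from the fluentd DaemonSet spec (container env only).
# Hostname, port, and credentials below are placeholder values.
containers:
  - name: fluentd
    image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch7
    env:
      - name: FLUENT_ELASTICSEARCH_HOST
        value: "elasticsearch.example.com"
      - name: FLUENT_ELASTICSEARCH_PORT
        value: "9200"
      - name: FLUENT_ELASTICSEARCH_SCHEME
        value: "https"
      - name: FLUENT_ELASTICSEARCH_USER
        value: "fluentd"
      - name: FLUENT_ELASTICSEARCH_PASSWORD
        value: "change-me"
```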

But once I did that, I noticed the brand new EKS cluster was sending over 16,000 log messages per second to Elasticsearch. Doing a tiny bit of analysis (not much was required, honestly), I found that over 98% of the logs were coming from two noisy, EKS-specific containers: efs-csi-node and ebs-snapshot-controller.
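Since those logs weren't useful to me, one way to quiet things down is to stop tailing those containers' log files entirely. As a rough sketch (assuming a recent fluentd-kubernetes-daemonset image that reads the FLUENT_CONTAINER_TAIL_EXCLUDE_PATH variable), you could add one more env var to the same DaemonSet:

```yaml
# Hypothetical addition to the env block shown earlier; the exact glob
# patterns depend on the container log file names on your nodes.
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
  value: '["/var/log/containers/efs-csi-node*", "/var/log/containers/ebs-snapshot-controller*"]'
```

If the image supports it, that variable is passed to the tail input's exclude_path option, so matching files are never read in the first place, which is cheaper than parsing the records and filtering them out afterwards.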