Tutorial: Customizing Your CoreDNS Configuration

Beyond the default CoreDNS configuration, you can customize the DNS behavior in your cluster by creating a coredns-custom ConfigMap in the kube-system namespace.

This ConfigMap can contain multiple configuration files:

  • Files with the .include extension are merged into the default .:53 server block.

  • Files with the .server extension create additional DNS server blocks.

Each file name in the ConfigMap must be unique.
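To illustrate how these files are picked up, here is a sketch of a typical default Corefile (the exact default varies between Kubernetes offerings; this layout is an assumption, not your cluster's literal configuration). The custom files are pulled in with CoreDNS's import plugin:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    # Contents of every *.include file are merged into this server block
    import /custom/*.include
    forward . /etc/resolv.conf
    cache 30
}
# Every *.server file is appended here as its own server block
import /custom/*.server
```

This is why the CoreDNS logs shown later contain warnings such as "No files matching import glob pattern: /custom/*.server" whenever no file of the corresponding type exists.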

The following examples demonstrate how to use these configuration files to extend the CoreDNS behavior in your cluster.

Activating Logging on the CoreDNS Server

Before you begin:

  • Make sure your cluster:

    • Is configured to allow administrative access from your IP or CIDR. For more information, see Updating a Cluster.

    • Has at least one running node. For more information, see Creating a Node Pool.

You can enable DNS query logging on the default CoreDNS server by writing an .include configuration file.

  1. Create a coredns-custom ConfigMap containing an .include configuration:

    Request sample: CUSTOM_DNS_FILE.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns-custom
      namespace: kube-system
    data:
      log.include: |
        log

    You can check the changes that will be applied to your cluster by running the following command:

    Request sample
    $ kubectl diff -f ~/path/to/CUSTOM_DNS_FILE.yaml
  2. Apply the configuration to your cluster using the following command:

    Request sample
    $ kubectl apply -f ~/path/to/CUSTOM_DNS_FILE.yaml
    Result sample
    configmap/coredns-custom created

    Enabling logging may impact performance on production clusters if a large number of DNS requests are handled by CoreDNS.
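    On busy clusters you can reduce this overhead: the CoreDNS log plugin accepts a class filter, so a variant of the .include file (a sketch; class names as defined by the upstream log plugin) could log only failed queries instead of all of them:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  log.include: |
    # Log only denials (NXDOMAIN/NODATA) and errors (e.g. SERVFAIL),
    # instead of every query
    log . {
      class denial error
    }
```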

  3. Restart the CoreDNS deployment to apply the new configuration:

    Request sample
    $ kubectl rollout restart deployment -n kube-system coredns
    Result sample
    deployment.apps/coredns restarted
  4. You can verify that logging is working by starting a debug pod:

    Request sample
    $ kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
    Result sample
    If you don't see a command prompt, try pressing enter.
    dnstools# host kubernetes
    kubernetes.default.svc.cluster.local has address 192.0.2.10
    dnstools#
    pod "dnstools" deleted

    The kubectl run command creates a temporary pod named dnstools using the infoblox/dnstools image and attaches an interactive terminal to it (-it). Once the pod starts, you are connected to the shell inside the container: this is indicated by the dnstools# prompt. From this shell, you can run DNS tools such as host to test DNS resolution within the cluster. When you exit the shell, the pod is automatically deleted because the --rm option was used.

  5. From inside the pod, run a DNS query to generate a log entry:

    Request sample
    dnstools# host kubernetes
    Result sample
    kubernetes.default.svc.cluster.local has address 192.0.2.10

    To exit the container, either press Ctrl+D or type and enter exit.

  6. You can check the logs of the CoreDNS pods in another terminal to confirm that the DNS queries are being logged properly:

    Request sample
    $ kubectl logs -f -n kube-system -l k8s-app=coredns
    Result sample
    maxprocs: Updating GOMAXPROCS=1: using minimum allowed GOMAXPROCS
    [WARNING] No files matching import glob pattern: /custom/*.server
    .:53
    [WARNING] No files matching import glob pattern: /custom/*.server
    [INFO] plugin/reload: Running configuration SHA512 = cbad7685960d3c9a274156888705ab6a3ecb9b6c1ef0a81930448b0259d443d061c3bcbf8c72e95937899ff6346e44900ca98b9a82f7540ed8f1c482d74a695a
    CoreDNS-1.14.1
    linux/amd64, go1.25.6, 80527fd
    [INFO] 127.0.0.1:33245 - 40321 "HINFO IN 1556348392293465600.6124538704085000125. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.002928639s
    [WARNING] No files matching import glob pattern: /custom/*.server
    [INFO] 10.91.2.213:57441 - 57970 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139929s
    [INFO] 10.91.2.213:48137 - 49709 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080541s
    [WARNING] No files matching import glob pattern: /custom/*.server
    [WARNING] No files matching import glob pattern: /custom/*.server
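For reference, each query line follows the log plugin's default Common Log Format; it roughly breaks down as follows (the field names here are descriptive, not official):

```
[INFO] {client ip}:{port} - {query id} "{type} {class} {name} {proto} {request size} {DO bit} {EDNS0 buffer size}" {rcode} {response flags} {response size} {duration}
```

For example, in the last line above, a query of type AAAA for kubernetes.default.svc.cluster.local over UDP was answered with NOERROR in about 80 microseconds.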

Adding a Custom Domain with a Static IP

Before you begin:

  • Make sure your cluster:

    • Is configured to allow administrative access from your IP or CIDR. For more information, see Updating a Cluster.

    • Has at least one running node. For more information, see Creating a Node Pool.

You can configure CoreDNS to resolve a custom domain to a static IP for all workloads in your cluster. In this example, we will create an example.local DNS zone by adding a .server configuration file named example-domain.server.

  1. Create a coredns-custom ConfigMap containing a .server configuration:

    Request sample: CUSTOM_DNS_FILE.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns-custom
      namespace: kube-system
    data:
      log.include: |
        log
      example-domain.server: |-
        # Define a custom DNS zone
        example.local. {
          errors
          forward . /etc/resolv.conf
          hosts {
            192.0.2.10 app.example.local.
            192.0.2.11 db.example.local.
            192.0.2.12 chat.example.local.
            fallthrough
          }
        }
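    The hosts plugin supports a few additional options; a hypothetical variant of the same block (option names as documented by the upstream hosts plugin) adds a custom TTL and suppresses auto-generated reverse records:

```
example.local. {
  errors
  forward . /etc/resolv.conf
  hosts {
    192.0.2.10 app.example.local.
    ttl 60          # serve these answers with a 60-second TTL
    no_reverse      # do not synthesize PTR records for these entries
    fallthrough     # unmatched example.local names fall through to forward
  }
}
```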

    You can check the changes that will be applied to your cluster by running the following command:

    Request sample
    $ kubectl diff -f ~/path/to/CUSTOM_DNS_FILE.yaml
  2. Apply the configuration to your cluster:

    Request sample
    $ kubectl apply -f ~/path/to/CUSTOM_DNS_FILE.yaml
    Result sample
    configmap/coredns-custom configured
  3. Restart the CoreDNS deployment to apply the new configuration:

    Request sample
    $ kubectl rollout restart deployment -n kube-system coredns
    Result sample
    deployment.apps/coredns restarted
  4. You can verify that the new DNS zone was loaded by checking the CoreDNS logs:

    Request sample
    $ kubectl logs -n kube-system -l k8s-app=coredns

    You should see the new zone listed in the startup logs:

    Partial result sample
    .:53
    example.local.:53
  5. You can test the new domain resolution by starting a debug pod:

    Request sample
    $ kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
    Result sample
    If you don't see a command prompt, try pressing enter.
    dnstools# host kubernetes
    kubernetes.default.svc.cluster.local has address 192.0.2.10
    dnstools# host db.example.local
    db.example.local has address 192.0.2.11
    dnstools# host app.example.local
    app.example.local has address 192.0.2.10
    dnstools# host chat.example.local
    chat.example.local has address 192.0.2.12
    dnstools#

    As in the previous example, this creates a temporary dnstools pod with an interactive shell attached; the pod is deleted automatically when you exit.

  6. From inside the pod, run DNS queries for the custom domain:

    Request sample
    dnstools# host db.example.local
    Request sample
    dnstools# host app.example.local
    Request sample
    dnstools# host chat.example.local
    Result samples
    db.example.local has address 192.0.2.11
    app.example.local has address 192.0.2.10
    chat.example.local has address 192.0.2.12
    To exit the container, either press Ctrl+D or type and enter exit.

    You can also observe these DNS queries in the CoreDNS logs:

    Request sample
    $ kubectl logs -n kube-system -l k8s-app=coredns

Removing Your Customized CoreDNS Configuration

You can remove your customized CoreDNS configuration by deleting the coredns-custom ConfigMap and restarting CoreDNS:

  1. Delete the coredns-custom ConfigMap from the kube-system namespace:

    Request sample
    $ kubectl delete configmap -n kube-system coredns-custom
    Result sample
    configmap "coredns-custom" deleted
  2. Restart the CoreDNS deployment to apply the change:

    Request sample
    $ kubectl rollout restart deployment -n kube-system coredns
    Result sample
    deployment.apps/coredns restarted
  3. Verify that the custom configuration has been removed by checking the CoreDNS logs:

    Request sample
    $ kubectl logs -n kube-system -l k8s-app=coredns
    Partial result sample
    [WARNING] No files matching import glob pattern: /custom/*.include
    [WARNING] No files matching import glob pattern: /custom/*.server
    .:53

    These warnings indicate that no custom configuration files are loaded, confirming that the customization has been removed.
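    As an extra check, you can confirm that the ConfigMap itself is gone. Assuming your kubectl context still points at the cluster, the lookup should fail with a NotFound error:

```
$ kubectl get configmap -n kube-system coredns-custom
Error from server (NotFound): configmaps "coredns-custom" not found
```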

Related Pages