Federation v1 is strongly discouraged.
Federation v1 never achieved GA status and is no longer under active development. This documentation is for historical purposes only.
For more information, see the intended replacement, Kubernetes Federation v2.
This page shows how to configure and deploy CoreDNS to be used as the DNS provider for Cluster Federation.
Using LoadBalancer services in the member clusters of the federation is mandatory to enable CoreDNS for service discovery across federated clusters.
CoreDNS can be deployed in various configurations. Explained below is a reference configuration, which can be tweaked to suit the needs of the platform and of the cluster federation.
To deploy CoreDNS, we make use of Helm charts. CoreDNS is deployed with etcd as its backend, so etcd must be installed beforehand; etcd can also be deployed using a Helm chart. Shown below are the instructions to deploy etcd.
```shell
helm install --namespace my-namespace --name etcd-operator stable/etcd-operator
helm upgrade --namespace my-namespace --set cluster.enabled=true etcd-operator stable/etcd-operator
```
Note: The default etcd deployment configuration can be overridden to suit the host cluster.
After deployment succeeds, etcd can be accessed with the http://etcd-cluster.my-namespace:2379 endpoint within the host cluster.
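As a quick sanity check (a sketch; run it from any pod in the host cluster, and note that the endpoint name assumes the defaults used above), the etcd HTTP /health endpoint can be queried:

```shell
# Expect a response such as {"health":"true"} if the etcd cluster is up.
curl http://etcd-cluster.my-namespace:2379/health
```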
The CoreDNS default configuration should be customized to suit the federation. Shown below is the Values.yaml file, which overrides the default configuration parameters of the CoreDNS chart.
```yaml
isClusterService: false
serviceType: "LoadBalancer"
plugins:
  kubernetes:
    enabled: false
  etcd:
    enabled: true
    zones:
    - "example.com."
    endpoint: "http://etcd-cluster.my-namespace:2379"
```
The above configuration file needs some explanation:
isClusterService specifies whether CoreDNS should be deployed as a cluster-service, which is the default. You need to set it to false, so that CoreDNS is deployed as a Kubernetes application service.
serviceType specifies the type of Kubernetes service to be created for CoreDNS. You need to choose either “LoadBalancer” or “NodePort” to make the CoreDNS service accessible outside the Kubernetes cluster.
Disable plugins.kubernetes, which is enabled by default, by setting plugins.kubernetes.enabled to false.
Enable plugins.etcd by setting plugins.etcd.enabled to true, configure the DNS zone (the federation domain) for which CoreDNS is authoritative by setting plugins.etcd.zones as shown above, and point plugins.etcd.endpoint at the etcd cluster deployed earlier.
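With these overrides, the chart renders a Corefile roughly like the following (an illustrative sketch only; the exact Corefile produced depends on the chart version):

```
example.com.:53 {
    errors
    log
    etcd example.com. {
        path /skydns
        endpoint http://etcd-cluster.my-namespace:2379
    }
    cache 30
}
```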
Now deploy CoreDNS by running:
```shell
helm install --namespace my-namespace --name coredns -f Values.yaml stable/coredns
```
Verify that both etcd and CoreDNS pods are running as expected.
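For example, assuming the namespace used above, a sketch of the check:

```shell
# Both the etcd members and the CoreDNS pod(s) should be in the Running state.
kubectl get pods --namespace my-namespace
```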
The Federation control plane can be deployed using kubefed init. CoreDNS can be chosen as the DNS provider by specifying two additional parameters: --dns-provider and --dns-provider-config.
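The full command might look like the following sketch (the federation name and host cluster context are placeholders):

```shell
kubefed init federation \
    --host-cluster-context=<host-cluster-context> \
    --dns-provider="coredns" \
    --dns-zone-name="example.com." \
    --dns-provider-config="coredns-provider.conf"
```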
The coredns-provider.conf file has the following format:
```
[Global]
etcd-endpoints = http://etcd-cluster.my-namespace:2379
zones = example.com.
coredns-endpoints = <coredns-server-ip>:<port>
```
etcd-endpoints is the endpoint to access etcd.
zones is the federation domain for which CoreDNS is authoritative, and is the same as the --dns-zone-name flag of kubefed init.
coredns-endpoints is the endpoint to access the CoreDNS server. This is an optional parameter introduced from v1.7 onwards.
Note: plugins.etcd.zones in the CoreDNS configuration and the --dns-zone-name flag of kubefed init should match.
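For illustration, the etcd plugin stores DNS records under SkyDNS-style keys, formed by reversing the DNS labels under the /skydns prefix. A quick sketch of the mapping (the service name is hypothetical):

```shell
name="nginx.mynamespace.federation.svc.example.com"
# Reverse the DNS labels and join them with '/' to form the SkyDNS-style etcd key.
key="/skydns/$(echo "$name" | tr '.' '\n' | tac | paste -sd/ -)"
echo "$key"   # /skydns/com/example/svc/federation/mynamespace/nginx
```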
Note: The following section applies only to versions prior to v1.7, and is automatically taken care of if the coredns-endpoints parameter is configured in coredns-provider.conf as described in the section above.
Once the federation control plane is deployed and the federated clusters are joined to the federation, you need to add the CoreDNS server to the pod's nameserver resolv.conf chain in all the federated clusters, as this self-hosted CoreDNS server is not discoverable publicly. This can be achieved by adding the line --server=/example.com./<CoreDNS endpoint> to the dnsmasq container's args in the kube-dns deployment, replacing example.com above with the federation domain.
Now the federated cluster is ready for cross-cluster service discovery!
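As a final sanity check (a sketch; the service name, federation name, and zone are illustrative), you can query the CoreDNS server directly for a federated service record:

```shell
# <coredns-server-ip> is the external IP of the CoreDNS LoadBalancer service.
dig @<coredns-server-ip> nginx.mynamespace.federation.svc.example.com +short
```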