
Top Kubernetes Interview Questions and Answers

Last updated on Feb 18 2022
Mukul Bose

What is K8s?

K8s is simply shorthand for Kubernetes: the "8" stands for the eight letters between the "K" and the "s".

How are Kubernetes and Docker related?

Docker is an open-source platform for building, shipping, and running applications. Its main benefit is that it packages an application together with the settings and dependencies it needs to run into a container, which provides portability and several other advantages. Kubernetes is then used to link and orchestrate many of these Docker-built containers running across multiple hosts.

What are the main differences between the Docker Swarm and Kubernetes?

Docker Swarm is Docker’s native, open-source container orchestration platform that is used to cluster and schedule Docker containers. Swarm differs from Kubernetes in the following ways:

  • Docker Swarm is easier to set up but does not provide as robust a cluster, while Kubernetes is more complicated to set up but offers the assurance of a robust cluster
  • Docker Swarm cannot do auto-scaling, whereas Kubernetes can; however, Swarm scaling is often cited as being around five times faster than Kubernetes scaling
  • Docker Swarm doesn’t have a GUI; Kubernetes has a GUI in the form of a dashboard
  • Docker Swarm does automatic load balancing of traffic between containers in a cluster, while Kubernetes requires manual intervention to load balance such traffic
  • Docker Swarm requires third-party tools such as the ELK stack for logging and monitoring, while Kubernetes has integrated tools for the same
  • Docker Swarm can share storage volumes with any container easily, while Kubernetes can only share storage volumes with containers in the same pod
  • Docker Swarm can deploy rolling updates but can’t deploy automatic rollbacks; Kubernetes can deploy rolling updates as well as automatic rollbacks

What are the main components of Kubernetes architecture?

There are two primary components: the master node and the worker node. Each of these, in turn, is made up of several individual components.

What is orchestration when it comes to software and DevOps?

Orchestration refers to the integration of multiple services so that they can automate processes or synchronize information in a timely fashion. Say, for example, you have six or seven microservices that an application needs in order to run. If you place them in separate containers, this would inevitably create obstacles for communication. Orchestration helps in such a situation by enabling all the services in their individual containers to work together to fulfill the needs of a single application.

What is a node in Kubernetes?

A node is the smallest fundamental unit of computing hardware. It represents a single machine in a cluster, which could be a physical machine in a data center or a virtual machine from a cloud provider. Any machine in a Kubernetes cluster can substitute for any other machine. The Kubernetes master controls the nodes on which the containers run.

What does the node status contain?

The main components of a node status are Address, Condition, Capacity, and Info.

What process runs on Kubernetes Master Node?

The kube-apiserver process runs on the master node. It is designed to scale horizontally, i.e., it scales by deploying more instances.

What is a pod in Kubernetes?

Pods are high-level structures that wrap one or more containers. This is because containers are not run directly in Kubernetes. Containers in the same pod share a local network and the same resources, allowing them to easily communicate with other containers in the same pod as if they were on the same machine while at the same time maintaining a degree of isolation.
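
As an illustration, here is a minimal sketch of a two-container pod manifest; the names, labels, and image tags are hypothetical placeholders, not taken from this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod              # hypothetical name
  labels:
    app: web
spec:
  containers:
    - name: web              # main application container
      image: nginx:1.21      # example image
      ports:
        - containerPort: 80
    - name: log-sidecar      # sidecar sharing the pod's network and volumes
      image: busybox:1.35    # example image
      command: ["sh", "-c", "tail -f /dev/null"]
```

Such a manifest could be applied with kubectl apply -f pod.yaml; both containers can then reach each other over localhost.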

What is the job of the kube-scheduler?

The kube-scheduler assigns nodes to newly created pods.

What is a cluster of containers in Kubernetes?

A cluster of containers is a set of machine elements, called nodes. Clusters set up specific routes so that the containers running on the nodes can communicate with each other. In Kubernetes, the container engine (rather than the API server itself) provides hosting for the API server: the control-plane components typically run as containers on the cluster's nodes.

What is etcd?

Kubernetes uses etcd as a distributed key-value store for all of its data, including metadata and configuration data; it allows nodes in Kubernetes clusters to read and write data. Although etcd was purposely built for CoreOS, it also works on a variety of operating systems (e.g., Linux, BSD, and OS X) because it is open source. Etcd represents the state of a cluster at a specific moment in time and is the canonical hub for state management and cluster coordination of a Kubernetes cluster.

What are the different services within Kubernetes?

Different types of Kubernetes services include:

  • ClusterIP service
  • NodePort service
  • ExternalName service
  • LoadBalancer service

What is ClusterIP?

The ClusterIP is the default Kubernetes service that provides a service inside a cluster (with no external access) that other apps inside your cluster can access.

What is NodePort?

The NodePort service is the most fundamental way to get external traffic directly to your service. It opens a specific port on all Nodes and forwards any traffic sent to this port to the service.
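
A sketch of a NodePort Service, assuming a set of pods labelled app: web; the names and port numbers are illustrative, and nodePort must fall within the cluster's configured range (30000–32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport         # hypothetical name
spec:
  type: NodePort             # omit "type" (ClusterIP) for an internal-only service
  selector:
    app: web                 # traffic is forwarded to pods carrying this label
  ports:
    - port: 80               # service port inside the cluster
      targetPort: 80         # container port on the pods
      nodePort: 30080        # port opened on every node
```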

What is the LoadBalancer in Kubernetes?

The LoadBalancer service is used to expose services to the internet. A Network load balancer, for example, creates a single IP address that forwards all traffic to your service.

What is a headless service?

A headless service is used to interface with service discovery mechanisms without being tied to a ClusterIP, therefore allowing you to directly reach pods without having to access them through a proxy. It is useful when neither load balancing nor a single Service IP is required.
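
A headless Service is declared by setting clusterIP to None; a minimal sketch with assumed names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless          # hypothetical name
spec:
  clusterIP: None            # no virtual IP; DNS returns the pod IPs directly
  selector:
    app: db                  # assumed pod label
  ports:
    - port: 5432             # example port
```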

What is Kubelet?

The kubelet is a service agent that controls and maintains a set of pods by watching for pod specs through the Kubernetes API server. It preserves the pod lifecycle by ensuring that a given set of containers are all running as they should. The kubelet runs on each node and enables the communication between the master and slave nodes.

What is Kubectl?

Kubectl is a CLI (command-line interface) used to run commands against Kubernetes clusters. As such, it controls the Kubernetes cluster manager through the various commands that create and manage Kubernetes components.

Give examples of recommended security measures for Kubernetes.

Examples of standard Kubernetes security measures include defining resource quotas, support for auditing, restriction of etcd access, regular security updates to the environment, network segmentation, definition of strict resource policies, continuous scanning for security vulnerabilities, and using images from authorized repositories.
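
For instance, one of the measures above, defining resource quotas, is applied per namespace. Below is a minimal sketch of a ResourceQuota object; the namespace name and the limits are arbitrary examples:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota           # hypothetical name
  namespace: team-a          # hypothetical namespace
spec:
  hard:
    pods: "10"               # at most 10 pods in the namespace
    requests.cpu: "4"        # total CPU that all pods may request
    requests.memory: 8Gi     # total memory that all pods may request
```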

What is Kube-proxy?

Kube-proxy is an implementation of a load balancer and network proxy used to support service abstraction with other networking operations. Kube-proxy is responsible for directing traffic to the right container based on IP and the port number of incoming requests.

How can you get a static IP for a Kubernetes load balancer?

Because the Kubernetes master may assign a new IP address whenever the load balancer is recreated, a static IP for the Kubernetes load balancer is usually achieved either by pointing a DNS record at the load balancer or by reserving a static IP with the cloud provider and assigning it to the Service.

Define node in Kubernetes

A node is the smallest unit of hardware. It represents a single machine in a cluster, which can be a virtual machine from a cloud provider or a physical machine in a data center. Any machine in the Kubernetes cluster can substitute for another machine.

What is the work of a kube-scheduler?

Kube-scheduler is the default scheduler for Kubernetes. It assigns nodes to newly created pods.

Define daemon sets

A daemon set is a set of pods that runs exactly once on every host. Daemon sets are used for host-level attributes, such as network or monitoring agents, that need to run on each node.
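
A DaemonSet sketch that runs one logging-agent pod per node; the names and the image are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent            # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent       # must match the selector above
    spec:
      containers:
        - name: agent
          image: fluentd:v1.16   # example log-collection agent image
```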

Define Heapster in Kubernetes

Heapster is a metrics-collection and performance-monitoring system for data collected by the kubelet.

What tasks are performed by Kubernetes?

Kubernetes acts like an operating-system kernel for distributed systems. It abstracts away the underlying hardware of the nodes (servers) and offers a consistent interface for applications that consume the shared pool of resources.

Define Kubernetes controller manager

The controller manager is a daemon used for garbage collection, core control loops, and namespace creation. It enables the running of more than one process on the master node.

Why use namespace in Kubernetes?

Namespaces in Kubernetes are used for dividing cluster resources between users. They are intended for environments where many users are spread across multiple teams or projects, and they provide a scope for resource names.
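
A namespace is itself a very small object; a sketch with an assumed name, after which workloads can be placed in it via their metadata.namespace field or kubectl's --namespace flag:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a               # hypothetical team namespace
```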

Why use Kubernetes?

Kubernetes is used because:

  • Kubernetes can run on-premises on bare metal or OpenStack, as well as on public clouds such as Google Cloud, Azure, and AWS.
  • It helps you avoid vendor lock-in, because it does not rely on vendor-specific APIs or services except where Kubernetes provides its own abstraction, e.g., load balancers and storage.
  • It enables applications to be released and updated without any downtime.
  • Kubernetes lets you ensure that containerized apps run where and when you want, and helps them find the resources and tools they need to work.

What are the features of Kubernetes?

The features of Kubernetes are:

  • Automated Scheduling
  • Self-Healing Capabilities
  • Automated rollouts & rollback
  • Horizontal Scaling & Load Balancing
  • Offers environment consistency for development, testing, and production
  • Loosely coupled infrastructure, in which each component can act as a separate unit
  • Provides a higher density of resource utilization
  • Offers enterprise-ready features
  • Application-centric management
  • Auto-scalable infrastructure
  • You can create predictable infrastructure

Mention the types of controller managers

The types of controller managers are: the endpoints controller, the service accounts controller, the node controller, the namespace controller, the replication controller, and the token controller.

Explain Kubernetes Architecture

Fig: Kubernetes Architecture Diagram

  • Master Node: The master node is the first and most vital component, responsible for the management of the Kubernetes cluster. It is the entry point for all kinds of administrative tasks. There may be more than one master node in the cluster for fault tolerance.
  • API Server: The API server acts as the entry point for all the REST commands used to control the cluster.
  • Scheduler: The scheduler assigns tasks to the worker nodes. It stores resource usage information for every worker node and is responsible for distributing the workload.
  • Etcd: etcd stores the cluster's configuration details and state values. The other control-plane components read from and write to it in order to coordinate their work.
  • Worker/Slave nodes: Worker nodes contain all the required services to manage the networking between the containers, communicate with the master node, and assign resources to the scheduled containers.
  • Kubelet: It gets the configuration of a Pod from the API server and ensures that the described containers are up and running.
  • Docker Container: Docker container runs on each of the worker nodes, which runs the configured pods.
  • Pods: A pod is a combination of single or multiple containers that logically run together on nodes.

List various services available in Kubernetes

The various services available in Kubernetes are the ClusterIP service, the LoadBalancer service, the NodePort service, and the ExternalName service.

Define Cluster IP

The ClusterIP is a Kubernetes service that offers a service inside the cluster, which other apps inside the cluster can access.

Explain node port

The node port service is a fundamental way to get external traffic to your service. It opens a particular port on all nodes and forwards network traffic sent to this port.

Define kubelet

The kubelet is a service agent that controls and maintains a group of pods by checking pod specifications through the Kubernetes API server. The kubelet runs on each node and enables communication between the master node and the slave nodes.

What are the disadvantages of Kubernetes?

  • Kubernetes dashboard is not as helpful as it should be
  • Security is not very effective.
  • It is very complex and can reduce productivity
  • Kubernetes is more costly than its alternatives.

What is Kube-proxy?

Kube-proxy is an implementation of both a network proxy and a load balancer. It is used to support service abstraction together with other networking operations. It is responsible for directing traffic to the right container based on the IP address and port number of the incoming request.

What is the difference between Kubernetes and Docker Swarm?

The differences between Kubernetes and Docker Swarm are:

  • Auto-scaling: Kubernetes provides an auto-scaling feature; Docker Swarm does not.
  • Load balancing: Kubernetes requires you to configure load balancing manually; Docker Swarm does automatic load balancing.
  • Installation: Kubernetes installation is complicated and time-consuming; Docker Swarm installation is easy and fast.
  • GUI: Kubernetes has a GUI (the dashboard); Docker Swarm has no GUI.
  • Updates: Kubernetes provides a built-in load-balancing technique; Docker Swarm relies on process scheduling to maintain services while updating.

Define Ingress Network

An ingress network is defined as a collection of rules that permit inbound connections into the Kubernetes cluster.

What is Kubectl used for?

Kubectl is the software used for controlling Kubernetes clusters. The "ctl" stands for "control": it is a command-line interface that passes commands to the cluster and manages the Kubernetes components.

What is GKE?

GKE (Google Container Engine, now called Google Kubernetes Engine) is a management platform that supports clusters of Docker containers running within Google's public cloud services.

Why is a load balancer needed?

A load balancer is needed because it provides a standard way to distribute network traffic among the different services running in the backend.

How to run Kubernetes locally?

Kubernetes can be run locally using the Minikube tool. It runs a single-node cluster in a VM (virtual machine) on the computer. Therefore, it offers the ideal way for users who have just started learning Kubernetes.

What are the tools that are used for container monitoring?

Tools that are used for container monitoring are:

  • Heapster
  • cAdvisor
  • Prometheus
  • InfluxDB
  • Grafana

List components of Kubernetes

There are three main groups of Kubernetes components:

  • Addons
  • Node components
  • Master Components

Define headless service

A headless service is a service without a cluster IP; instead of load balancing, it returns the IP addresses of the associated pods directly.

What are the important components of node status?

The important components of node status are:

  • Condition
  • Capacity
  • Info
  • Address

What is minikube?

Minikube is software that helps users run Kubernetes locally. It runs a single-node cluster inside a VM on your computer. The tool is also used by programmers who are developing applications with Kubernetes.

Mention the uses of GKE

The uses of the GKE (Google Kubernetes Engine) are:

  • It can be used to create clusters of Docker containers
  • It can resize application controllers
  • It can update and upgrade container clusters
  • It can debug container clusters
  • GKE can be used to create replication controllers, jobs, services, container pods, and load balancers

Define orchestration in Kubernetes

Orchestration in Kubernetes is the automatic scheduling and management of the work of the individual containers. It is used for applications that are based on microservices running within clusters.

Explain Prometheus in Kubernetes

Prometheus is an application used for monitoring and alerting. It calls out to your systems, grabs real-time metrics, compresses them, and stores them properly in a time-series database.

List tools for container orchestration

The tools for container orchestration are Docker Swarm, Apache Mesos, and Kubernetes.

Mention the list of objects of Kubernetes?

Objects that are used in Kubernetes include: pods, replica sets and replication controllers, jobs and cron jobs, daemon sets, distinctive identities, deployments, and stateful sets.

Define Stateful sets in Kubernetes

A StatefulSet is a workload API object used to manage stateful applications. It can also be used to manage the deployment and scaling of a set of pods. The state information and other data of stateful pods are stored on disk storage that is attached to the StatefulSet's pods.
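
A StatefulSet sketch with per-pod persistent storage; the names, image, and storage size are placeholders, and the governing headless Service is assumed to exist:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                   # hypothetical name
spec:
  serviceName: db-headless   # assumed headless service governing pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:15           # example image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PersistentVolumeClaim is created per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Each replica gets a stable identity (db-0, db-1, db-2) and its own claim, which is what distinguishes a StatefulSet from a Deployment or ReplicaSet.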

Why use Daemon sets?

Daemon sets are used because:

  • They make it possible to run storage platforms such as Ceph and GlusterFS on each node.
  • They run log-collection agents such as Filebeat or Fluentd on every node.
  • They perform node monitoring on each and every node.

Explain Replica set

A ReplicaSet is used to keep a stable set of replica pods running. It lets us specify the desired number of identical pods. It can be considered a replacement for the replication controller.
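
A ReplicaSet sketch that keeps three identical pods running; the names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs               # hypothetical name
spec:
  replicas: 3                # desired number of identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web             # must satisfy the selector
    spec:
      containers:
        - name: web
          image: nginx:1.21  # example image
```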

List out some important Kubectl commands:

The important Kubectl commands are:

  • kubectl annotate
  • kubectl cluster-info
  • kubectl attach
  • kubectl apply
  • kubectl config
  • kubectl autoscale
  • kubectl config current-context
  • kubectl config set.

Why use kube-apiserver?

Kube-apiserver is the API server of Kubernetes. It is used to configure and validate API objects, which include services, controllers, and so on. It provides the frontend to the cluster's shared state, through which all the other components interact.

Explain the types of Kubernetes pods

There are two types of pods in Kubernetes:

  • Single Container Pod: It can be created with the run command.
  • Multicontainer pods:  It can be created using the “create” command in Kubernetes.

What are the labels in Kubernetes?

Labels are collections of key/value pairs. They are attached to pods, replication controllers, and associated services. Generally, labels are added to an object at creation time, but they can be modified by users at run time.
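
Labels sit under an object's metadata; the sketch below uses hypothetical label keys and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-api         # hypothetical name
  labels:
    app: payments            # identifies the application
    tier: backend            # identifies the layer it belongs to
    env: prod                # identifies the environment
spec:
  containers:
    - name: api
      image: payments-api:1.0   # example image
```

A label could later be changed at run time with, for example, kubectl label pod payments-api env=staging --overwrite.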

What are the objectives of the replication controller?

The objectives of the replication controller are:

  • It is responsible for controlling and administering the pod lifecycle.
  • It monitors and verifies whether the allowed number of replicas are running or not.
  • The replication controller helps the user to check the pod status.
  • It enables the user to alter a pod and relocate it as desired.

What do you mean by persistent volume?

A persistent volume is a storage unit provisioned and controlled by an administrator (or dynamically through a storage class). Its lifecycle is independent of any individual pod in the cluster that uses it.
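
A minimal PersistentVolume sketch backed by a hostPath (suitable only for single-node test clusters); the name, size, and path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo              # hypothetical name
spec:
  capacity:
    storage: 1Gi             # example size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/pv-demo      # example directory on the node
```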

What are Secrets in Kubernetes?

Secrets hold sensitive information such as user login credentials. They are Kubernetes objects that store sensitive data like usernames and passwords; the values are base64-encoded and can additionally be encrypted at rest.
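
A Secret sketch using the stringData convenience field (values are written in plain text here and stored base64-encoded); the name and credentials are placeholders only:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical name
type: Opaque
stringData:                  # convenience field; stored as base64-encoded "data"
  username: admin            # example value only
  password: change-me        # example value only
```

Pods can then consume the secret as environment variables or as files mounted from a volume.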

What is Sematext Docker Agent?

Sematext Docker agent is a log collection agent with events and metrics. It runs as a small container in each Docker host. These agents gather metrics, events, and logs for all cluster nodes and containers.

Define OpenShift

OpenShift is a public cloud application development and hosting platform developed by Red Hat. It offers automation for management so that developers can focus on writing the code.

Define K8s

K8s (the "8" standing for the eight characters between the "K" and the "s") is a shorthand term for Kubernetes. It is an open-source orchestration framework for containerized applications.

What are federated clusters?

Federated clusters are multiple clusters that are managed as a single cluster.

Mention the difference between Docker volumes and Kubernetes Volumes

  • A Kubernetes volume is scoped to a pod rather than to a single container, and it can be shared by all the containers deployed in that pod.
  • A Docker volume is scoped to a single container, and it does not automatically support sharing across all containers.

What are the ways to provide API-Security on Kubernetes?

The ways to provide API-Security on Kubernetes are:

  • Use the correct authorization mode on the API server (e.g., --authorization-mode=Node,RBAC).
  • Make the kubelet protect its API via --authorization-mode=Webhook.
  • Ensure the kube-dashboard uses a restrictive RBAC (Role-Based Access Control) policy.

What is ContainerCreating pod?

A pod in the ContainerCreating state has been scheduled on a node but cannot start up properly yet.

What are the types of Kubernetes Volume?

The types of Kubernetes Volume are:

  • emptyDir
  • gcePersistentDisk (GCE persistent disk)
  • flocker
  • hostPath
  • nfs
  • iscsi
  • rbd
  • persistentVolumeClaim
  • downwardAPI

Explain PVC

PVC stands for PersistentVolumeClaim. It is storage requested for a pod from Kubernetes; the user does not need to know the underlying provisioning. The claim must be created in the same namespace as the pod that uses it.
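
A PersistentVolumeClaim sketch; the name and requested size are placeholders, and the claim must live in the same namespace as the pod that mounts it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # amount of storage requested
```

A pod references the claim under spec.volumes with persistentVolumeClaim.claimName: data-claim.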

What is the Kubernetes Network Policy?

A Network Policy defines how groups of pods in the same namespace are allowed to communicate with each other and with other network endpoints.
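
A NetworkPolicy sketch that allows only pods labelled app: frontend to reach pods labelled app: backend on one port; all names, labels, and the port are assumptions for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend       # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend           # the policy applies to these pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080         # example port
```

Note that a network plugin that enforces NetworkPolicy (such as Calico or Cilium) must be installed for the policy to take effect.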

What is Kubernetes proxy service?

The Kubernetes proxy service (kube-proxy) is a service that runs on each node and helps make services available to external hosts.

How is Kubernetes different from Docker Swarm?

  • Installation & cluster configuration – Kubernetes: setup is very complicated, but once installed the cluster is robust; Docker Swarm: installation is very simple, but the cluster is not as robust.
  • GUI – Kubernetes: the Kubernetes Dashboard; Docker Swarm: no GUI.
  • Scalability – Kubernetes: highly scalable and scales fast; Docker Swarm: highly scalable and scales faster than Kubernetes (often cited as around five times faster).
  • Auto-scaling – Kubernetes: can do auto-scaling; Docker Swarm: cannot do auto-scaling.
  • Load balancing – Kubernetes: manual intervention is needed to balance traffic between different containers and pods; Docker Swarm: does automatic load balancing of traffic between containers in the cluster.
  • Rolling updates & rollbacks – Kubernetes: can deploy rolling updates and does automatic rollbacks; Docker Swarm: can deploy rolling updates, but not automatic rollbacks.
  • Data volumes – Kubernetes: can share storage volumes only with other containers in the same pod; Docker Swarm: can share storage volumes with any other container.
  • Logging & monitoring – Kubernetes: has built-in tools for logging and monitoring; Docker Swarm: third-party tools such as the ELK stack are needed.

What is Kubernetes?

Kubernetes is an open-source container management tool that takes on the responsibilities of container deployment, scaling and descaling of containers, and load balancing. As Google's brainchild, it has an excellent community and works brilliantly with all the major cloud providers. So we can say that Kubernetes is not a containerization platform; it is a multi-container management solution.

Kubernetes is a container management system developed on the Google platform. The purpose of Kubernetes is to manage containerized applications across various types of physical, virtual, and cloud environments. Kubernetes is a highly flexible container tool that can deliver even complex applications consistently, with applications running on clusters of hundreds to thousands of individual servers.

How is Kubernetes related to Docker?

It’s a known fact that Docker provides the lifecycle management of containers and a Docker image builds the runtime containers. But, since these individual containers have to communicate, Kubernetes is used.  So, Docker builds the containers and these containers communicate with each other via Kubernetes. So, containers running on multiple hosts can be manually linked and orchestrated using Kubernetes.

What is the difference between deploying applications on hosts and containers?

Fig : Deploying Applications On Host vs Containers – Kubernetes Interview Questions

Refer to the diagram above. The architecture on the left represents deploying applications on hosts: there is an operating system with a kernel, and the various libraries needed by the applications are installed on that operating system. In this kind of framework you can run any number of applications, but all of the applications share the libraries present in that operating system. Deploying applications in containers works a little differently.

In the containerized architecture, the kernel is the only thing shared between all the applications. If a particular application needs Java, then only that application gets access to Java; if another application needs Python, then only that application has access to Python.

The individual blocks that you can see on the right side of the diagram are basically containerized and these are isolated from other applications. So, the applications have the necessary libraries and binaries isolated from the rest of the system, and cannot be encroached by any other application.

What is the Google Container Engine?

Google Container Engine (now Google Kubernetes Engine) is a management platform tailor-made for Docker containers and clusters, providing support for clusters that run within Google's public cloud services.

What are Daemon sets?

A Daemon set is a set of pods that runs exactly once on each host. Daemon sets are used for host-layer attributes, such as network or monitoring agents, that you do not need to run on a host more than once.

What are minions in Kubernetes cluster?

a. They are components of the master node.

b. They are the work-horse / worker node of the cluster.[Ans]

c. They are a monitoring engine used widely in Kubernetes.

d. They are a Docker container service.

Kubernetes cluster data is stored in which of the following?

a. Kube-apiserver

b. Kubelet

c. Etcd[Ans]

d. None of the above

Which of them is a Kubernetes Controller?

a. ReplicaSet

b. Deployment

c. Rolling Updates

d. Both ReplicaSet and Deployment[Ans]

Which of the following are core Kubernetes objects?

a. Pods

b. Services

c. Volumes

d. All of the above[Ans]

The Kubernetes Network proxy runs on which node?

a. Master Node

b. Worker Node

c. All the nodes [Ans]

d. None of the above

What are the responsibilities of Replication Controller?

a. Update or delete multiple pods with a single command

b. Helps to achieve the desired state

c. Creates a new pod, if the existing pod crashes

d. All of the above [Ans]

How to define a service without a selector?

a. Specify the external name[Ans]

b. Specify an endpoint with IP Address and port

c. Just by specifying the IP address

d. Specifying the label and api-version

What did the . version of Kubernetes introduce?

a. Taints and Tolerations[Ans]

b. Cluster level Logging

c. Secrets

d. Federated Clusters

Which handler does the kubelet invoke to check whether a specific port on a container's IP address is open?

a. HTTPGetAction

b. ExecAction

c. TCPSocketAction[Ans]

d. None of the above

What is ‘Heapster’ in Kubernetes?

Heapster is a performance-monitoring and metrics-collection system for data collected by the kubelet. This aggregator is natively supported and runs like any other pod within a Kubernetes cluster, which allows it to discover and query usage data from all nodes within the cluster.

What is a Namespace in Kubernetes?

Namespaces are used for dividing cluster resources between multiple users. They are meant for environments where there are many users spread across projects or teams and provide a scope of resources.

Name the initial namespaces from which Kubernetes starts?

  • default
  • kube-system
  • kube-public

What is the Kubernetes controller manager?

The controller manager is a daemon that is used for embedding core control loops, garbage collection, and Namespace creation. It enables the running of multiple processes on the master node even though they are compiled to run as a single process.

What are the types of controller managers?

The primary controller managers that can run on the master node are the endpoints controller, service accounts controller, namespace controller, node controller, token controller, and replication controller.

What is Container Orchestration?

Consider a scenario where you have several microservices for an application. These microservices are put in individual containers, but they won't be able to communicate without container orchestration. So, just as orchestration in music means all the instruments playing together in harmony, container orchestration means all the services in individual containers working together to fulfill the needs of a single server.

What is the need for Container Orchestration?

Consider that you have several microservices for a single application performing various tasks, and all these microservices are put inside containers. Now, to make sure that these containers communicate with each other, we need container orchestration.

Fig : Challenges Without Container Orchestration – Kubernetes Interview Questions

As indicated in the figure referenced above, many challenges arise without the use of container orchestration; container orchestration came into the picture to overcome these challenges.

What are the features of Kubernetes?

The features of Kubernetes are those listed earlier in this article: automated scheduling, self-healing capabilities, automated rollouts and rollbacks, horizontal scaling and load balancing, environment consistency across development, testing, and production, loosely coupled infrastructure, higher density of resource utilization, and enterprise-ready, application-centric management.

How does Kubernetes simplify containerized Deployment?

A typical application has a cluster of containers running across multiple hosts, and all these containers need to talk to each other. To do this, you need something that can load balance, scale, and monitor the containers. Since Kubernetes is cloud-agnostic and can run on any public or private provider, it is a natural choice for simplifying containerized deployment.

What do you know about clusters in Kubernetes?

The fundamental idea behind Kubernetes is desired-state management: we feed the cluster services a specific configuration, and it is up to the cluster services to go out and run that configuration in the infrastructure.

The deployment file has all the configurations required to be fed into the cluster services. The deployment file is fed to the API, and then it is up to the cluster services to figure out how to schedule the pods in the environment and make sure that the right number of pods are running.

So, the API which sits in front of services, the worker nodes & the Kubelet process that the nodes run, all together make up the Kubernetes Cluster.

What is Google Container Engine?

Google Container Engine (GKE, now Google Kubernetes Engine) is a management platform for Docker containers and clusters. This Kubernetes-based engine supports only those clusters that run within Google's public cloud services.

What is Heapster?

Heapster is a cluster-wide aggregator of data provided by the kubelet running on each node. This container management tool is supported natively on a Kubernetes cluster and runs as a pod, just like any other pod in the cluster. It discovers all the nodes in the cluster and queries usage information from the Kubernetes nodes via the on-machine Kubernetes agent.

What is Minikube?

Minikube is a tool that makes it easy to run Kubernetes locally. This runs a single-node Kubernetes cluster inside a virtual machine.

What is Kubectl?

Kubectl is the tool through which you pass commands to the cluster. It provides the CLI to run commands against the Kubernetes cluster, with various ways to create and manage the Kubernetes components.

What is Kubelet?

This is an agent service which runs on each node and enables the slave to communicate with the master. So, Kubelet works on the description of containers provided to it in the PodSpec and makes sure that the containers described in the PodSpec are healthy and running.

What do you understand by a node in Kubernetes?

A node is the smallest fundamental unit of computing hardware in Kubernetes: a single machine in the cluster, which may be a physical machine in a data center or a virtual machine from a cloud provider (see the earlier answer on nodes).

What are the different components of Kubernetes Architecture?

The Kubernetes architecture has two main components – the master node and the worker node. Both the master and the worker nodes have many inbuilt components within them. The master node has the kube-controller-manager, kube-apiserver, kube-scheduler, and etcd, whereas each worker node has kubelet and kube-proxy running on it.

What do you understand by Kube-proxy?

Kube-proxy runs on each and every node and does simple TCP/UDP packet forwarding across the backend network services. It is essentially a network proxy that reflects the services configured in the Kubernetes API on each node. The Docker-link-compatible environment variables provide the cluster IPs and ports that are opened by the proxy.

Can you brief on the working of the master node in Kubernetes?

The Kubernetes master controls the nodes, and the containers are present inside the nodes. These individual containers are contained inside pods, and inside each pod you can have a varying number of containers, based on the configuration and requirements. When pods have to be deployed, they can be deployed using either the user interface or the command-line interface. The pods are then scheduled on the nodes, and, based on the resource requirements, the pods are allocated to these nodes. The kube-apiserver makes sure that communication is established between the Kubernetes nodes and the master components.

What is the role of kube-apiserver and kube-scheduler?

The kube-apiserver follows a scale-out architecture and is the front end of the master node's control plane. It exposes all the APIs of the Kubernetes master node components and is responsible for establishing communication between the Kubernetes nodes and the Kubernetes master components.

The kube-scheduler is responsible for the distribution and management of workloads on the worker nodes. It selects the most suitable node to run an unscheduled pod, based on resource requirements, and keeps track of resource utilization. It makes sure that the workload is not scheduled on nodes that are already full.

Can you brief about the Kubernetes controller manager?

Multiple controller processes run on the master node but are compiled together to run as a single process which is the Kubernetes Controller Manager. So, Controller Manager is a daemon that embeds controllers and does namespace creation and garbage collection. It owns the responsibility and communicates with the API server to manage the end-points.

The different types of controller managers running on the master node are the endpoints controller, the service accounts controller, the namespace controller, the node controller, the token controller, and the replication controller.

What is ETCD?

Etcd is written in Go programming language and is a distributed key-value store used for coordinating between distributed work. So, Etcd stores the configuration data of the Kubernetes cluster, representing the state of the cluster at any given point in time.

What are the different types of services in Kubernetes?

The different types of services used are the ClusterIP service, the NodePort service, the LoadBalancer service, and the ExternalName service.

What do you understand by load balancer in Kubernetes?

A load balancer is one of the most common and standard ways of exposing service. There are two types of load balancer used based on the working environment i.e. either the Internal Load Balancer or the External Load Balancer. The Internal Load Balancer automatically balances load and allocates the pods with the required configuration whereas the External Load Balancer directs the traffic from the external load to the backend pods.

What is Ingress network, and how does it work?

An Ingress network is a collection of rules that act as an entry point to the Kubernetes cluster. It allows inbound connections, which can be configured to expose services externally through reachable URLs, to load balance traffic, or to offer name-based virtual hosting. So, Ingress is an API object that manages external access to the services in a cluster, usually over HTTP, and is the most powerful way of exposing services.

Now, let me explain to you the working of Ingress network with an example.

Suppose there are two nodes, each having pod and root network namespaces joined by a Linux bridge. In addition, a new virtual ethernet device belonging to flannel (a network plugin) is added to the root network.

Now, suppose we want a packet to flow from a pod on the first node to a pod on the second node. Refer to the diagram below.

Fig : Working Of Ingress Network – Kubernetes Interview Questions

  • The packet leaves the source pod’s network at its eth0 interface and enters the root network at veth0.
  • Then it is passed on to the Linux bridge (cbr0), which makes an ARP request to find the destination; it turns out that nobody on this node has the destination IP address.
  • So the bridge sends the packet to flannel0, because the node’s route table is configured with flannel0.
  • Now the flannel daemon talks to the Kubernetes API server to learn all the pod IPs and their respective nodes, and creates mappings from pod IPs to node IPs.
  • The network plugin wraps this packet in a UDP packet with extra headers, changing the source and destination IPs to those of the respective nodes, and sends the packet out via eth0.
  • Since the route table already knows how to route traffic between nodes, the packet is sent to the destination node.
  • The packet arrives at eth0 of the destination node and goes to flannel0, which de-encapsulates it and emits it back into the root network namespace.
  • Again, the packet is forwarded to the Linux bridge, which makes an ARP request to find the IP that belongs to veth0.
  • The packet finally crosses the root network and reaches the destination pod.

What do you understand by Cloud controller manager?

The Cloud Controller Manager is responsible for persistent storage, network routing, abstracting the cloud-specific code from the core Kubernetes specific code, and managing the communication with the underlying cloud services. It might be split out into several different containers depending on which cloud platform you are running on and then it enables the cloud vendors and Kubernetes code to be developed without any inter-dependency. So, the cloud vendor develops their code and connects with the Kubernetes cloud-controller-manager while running the Kubernetes.

The various types of cloud controller managers are the node controller, the route controller, and the service controller.

What is Container resource monitoring?

For users, it is really important to understand application performance and resource utilization at all the different abstraction layers. Kubernetes factors the management of the cluster by creating abstractions at different levels, such as the container, the pod, the service, and the whole cluster. Each of these levels can be monitored, and this is what container resource monitoring means.

The main container resource monitoring tools are Heapster, cAdvisor, Prometheus, InfluxDB, and Grafana.

What is the difference between a replica set and replication controller?

Replica Sets and Replication Controllers do almost the same thing: both ensure that a specified number of pod replicas are running at any given time. The difference lies in the selectors used to match pods. Replica Sets use set-based selectors, while replication controllers use equality-based selectors (see the sketch after this list).

  • Equality-Based Selectors: This type of selector filters by exact label key and value. In layman’s terms, an equality-based selector only matches pods that carry exactly the specified label. Example: suppose your selector says app=nginx; then only pods whose app label equals nginx are selected.
  • Set-Based Selectors: This type of selector filters a key against a set of values, so it matches pods whose label value is mentioned in the set. Example: say your selector says app in (nginx, nps, apache); then any pod whose app label equals nginx, nps, or apache is taken as a match.
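
A sketch contrasting the two selector styles in a ReplicaSet; the labels and values mirror the example above and are purely illustrative (a replication controller, by contrast, accepts only the plain equality-based map form):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-set-based        # hypothetical name
spec:
  replicas: 2
  selector:
    matchExpressions:        # set-based selector
      - key: app
        operator: In
        values: ["nginx", "nps", "apache"]
    # an equality-based alternative would be:
    # matchLabels:
    #   app: nginx
  template:
    metadata:
      labels:
        app: nginx           # satisfies the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.21  # example image
```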

What is a Headless Service?

A headless Service is similar to a ‘normal’ Service but does not have a cluster IP. It enables you to reach the pods directly, without accessing them through a proxy.

What are the best security measures that you can take while using Kubernetes?

The following are the best security measures that you can follow while using Kubernetes:

  • Define resource quotas
  • Enable auditing support
  • Restrict access to etcd
  • Apply regular security updates to the environment
  • Use network segmentation
  • Define strict resource policies
  • Continuously scan for security vulnerabilities
  • Use container images only from authorized repositories

What are federated clusters?

Multiple Kubernetes clusters can be managed as a single cluster with the help of federated clusters. So, you can create multiple Kubernetes clusters within a data center/cloud and use federation to control/manage them all at one place.

Federated clusters achieve this by doing two things: keeping resources synchronized across the clusters, and providing cross-cluster discovery so that services can be reached from any of the federated clusters.

Scenario: Suppose a company built on monolithic architecture handles numerous products. Now, as the company expands in today’s scaling industry, their monolithic architecture started causing problems.

How do you think the company shifted from monolithic to microservices and deploy their services containers?

As the company’s goal is to shift from their monolithic application to microservices, they can build the pieces one by one, in parallel, and just switch configurations in the background. They can then put each of these newly built microservices on the Kubernetes platform. They can start by migrating one or two services at a time and monitor them to make sure everything runs stably. Once they feel everything is going well, they can migrate the rest of the application into their Kubernetes cluster.

Scenario: Consider a multinational company with a highly distributed system, with a large number of data centers and virtual machines, and many employees working on various tasks.

How do you think can such a company manage all the tasks in a consistent way with Kubernetes?

As we all know, I.T. departments launch thousands of containers, with tasks running across numerous nodes around the world in a distributed system.

In such a situation the company can use something that offers them agility, scale-out capability, and DevOps practice to the cloud-based applications.

The company can therefore use Kubernetes to customize their scheduling architecture and support multiple container formats. This makes affinity between container tasks possible, which gives greater efficiency, along with extensive support for various container networking and storage solutions.

Scenario: Consider a situation where a company wants to increase its efficiency and the speed of its technical operations while maintaining minimal costs.

How do you think the company will try to achieve this?

The company can implement the DevOps methodology by building a CI/CD pipeline, but one problem that may occur here is that the configurations may take time to get up and running. So, after implementing the CI/CD pipeline, the company's next step should be to work in the cloud environment. Once they start working in the cloud environment, they can schedule containers on a cluster and orchestrate them with the help of Kubernetes. This kind of approach will help the company reduce their deployment time and also move faster across the various environments.

Scenario: Suppose a company wants to revise its deployment methods and build a platform which is much more scalable and responsive.

How do you think this company can achieve this to satisfy their customers?

Solution:

In order to give millions of clients the digital experience they would expect, the company needs a platform that is scalable, and responsive, so that they could quickly get data to the client website. Now, to do this the company should move from their private data centers (if they are using any) to any cloud environment such as AWS. Not only this, but they should also implement the microservice architecture so that they can start using Docker containers. Once they have the base framework ready, then they can start using the best orchestration platform available i.e. Kubernetes. This would enable the teams to be autonomous in building applications and delivering them very quickly.

Scenario: Consider a multinational company with a highly distributed system, looking to solve its monolithic code base problem.

How do you think the company can solve their problem?

Solution

Well, to solve the problem, they can shift their monolithic code base to a microservice design, and then each microservice can be treated as a container. All these containers can then be deployed and orchestrated with the help of Kubernetes.

Scenario : All of us know that the shift from monolithic to microservices solves the problem from the development side, but increases the problem at the deployment side.

How can the company solve the problem on the deployment side?

Solution

The team can experiment with container orchestration platforms such as Kubernetes and run them in their data centers. With this, the company can generate a templated application, deploy it within five minutes, and have actual instances containerized in the staging environment. This kind of Kubernetes project will have dozens of microservices running in parallel to improve the production rate; even if a node goes down, the workloads can be rescheduled immediately without performance impact.

Scenario :  Suppose a company wants to optimize the distribution of its workloads, by adopting new technologies.

How can the company achieve this distribution of resources efficiently?

Solution

The solution to this problem is none other than Kubernetes. Kubernetes makes sure that the resources are optimized efficiently, and only those resources are used which are needed by that particular application. So, with the usage of the best container orchestration tool, the company can achieve the distribution of resources efficiently.

Scenario: Consider a carpooling company that wants to increase its number of servers while simultaneously scaling its platform.

How do you think will the company deal with the servers and their installation?

Solution

The company can adopt the concept of containerization. Once they deploy all their applications into containers, they can use Kubernetes for orchestration and container monitoring tools like Prometheus to monitor what happens in the containers. Such use of containers gives them better capacity planning in the data center, because they now have fewer constraints thanks to the abstraction between the services and the hardware they run on.

Scenario : Consider a scenario where a company wants to provide all the required hand-outs to its customers having various environments.

How do you think they can achieve this critical target in a dynamic manner?

Solution

The company can use Docker environments and put together a cross-functional team to build a web application using Kubernetes. This kind of framework will help the company achieve the goal of getting the required things into production within the shortest time frame. With such a setup running, the company can give the hand-outs to all the customers across their various environments.

Scenario : Suppose a company wants to run various workloads on different cloud infrastructure from bare metal to a public cloud.

How will the company achieve this in the presence of different interfaces?

Solution

The company can decompose its infrastructure into microservices and then adopt Kubernetes. This will let the company run various workloads on different cloud infrastructures.

So, this brings us to the end of the Kubernetes Interview Questions blog.
This Tecklearn ‘Top Kubernetes Interview Questions and Answers’ article helps you with commonly asked questions if you are looking for a job in the Kubernetes or DevOps domain. If you wish to learn Kubernetes and build a career in the DevOps domain, then check out our interactive Continuous Orchestration using Kubernetes Training, which comes with 24*7 support to guide you throughout your learning period.

https://www.tecklearn.com/course/continuous-orchestration-using-kubernetes/

Continuous Orchestration using Kubernetes Training

About the Course

Tecklearn has specially designed this Continuous Orchestration using Kubernetes Training Course to advance your skills for a successful career in this domain. Kubernetes training helps you master the container orchestration tool. As part of the training you will learn Kubernetes in detail: the architecture of Kubernetes, what Kubernetes pods and nodes are, how to deploy Kubernetes, how to create a Kubernetes cluster, the various services available, and how Kubernetes makes container orchestration simple. You will gain in-depth knowledge of these concepts and will be able to work on related demos. Upon completion of this online training, you will have a solid understanding of and hands-on experience with Kubernetes.

Why Should you take Continuous Orchestration using Kubernetes Training?

  • The average salary for people who possess Kubernetes as a skill is $117,000. – PayScale.com
  • Apple, Capital One, AT&T, Oracle, Raytheon & many other MNC’s worldwide use Kubernetes across industries.
  • The Kubernetes orchestration engine powers some of the biggest and most complex deployments in the world.

What you will Learn in this Course?

Introduction to DevOps

  • What is Software Development
  • Software Development Life Cycle
  • Why DevOps?
  • What is DevOps?
  • DevOps Lifecycle
  • DevOps Tools
  • Benefits of DevOps
  • How DevOps is related to Agile Delivery
  • DevOps Implementation

Continuous Orchestration using Kubernetes

  • Containers and Container Orchestration
  • Introduction to Kubernetes
  • Docker Swarm vs Kubernetes
  • Kubernetes Architecture
  • Deploying Kubernetes using Kubeadms
  • Alternate ways of deploying Kubernetes
  • Understanding YAML
  • Creating a Deployment in Kubernetes using YAML
  • Creating a Service in Kubernetes
  • Installing Kubernetes Dashboard
  • Deploying an App using Dashboard
  • Using Rolling Updates in Kubernetes

 

Got a question for us? Please mention it in the comments section and we will get back to you.

 

 
