A set of machines, called nodes, that run containerized applications managed by Kubernetes.
A cluster has at least one master (control plane) node and at least one worker node.
The master node(s) manage the worker nodes and the pods in the cluster.
Multiple master nodes are used to provide a cluster with failover and high availability.
The worker node(s) host the pods that are the components of the application.
A node is a machine in Kubernetes.
A node may be a VM or physical machine, depending on the cluster.
It has local daemons or services necessary to run Pods and is managed by the control plane.
The daemons on a node include the kubelet, kube-proxy, and a container runtime (an implementation of the Container Runtime Interface, such as Docker or containerd).
The smallest and simplest Kubernetes object.
A Pod represents a set of running containers on your cluster.
A Pod is typically set up to run a single primary container.
It can also run optional sidecar containers that add supplementary features like logging.
Pods are commonly managed by a Deployment.
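A minimal Pod manifest can make this concrete; the names and images below (an nginx primary container plus a busybox sidecar) are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app            # the single primary container
    image: nginx:1.25
  - name: log-shipper    # optional sidecar, e.g. for log collection
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```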
A lightweight and portable executable image that contains software and all of its dependencies.
Containers decouple applications from underlying host infrastructure to make deployment easier in different cloud or OS environments, and for easier scaling.
One or more initialization containers that must run to completion before any application containers run.
Initialization (init) containers are like regular application containers, with one difference:
initialization containers must run to completion before any application containers can start.
Initialization containers run in series:
each initialization container must run to completion before the next initialization container begins.
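A sketch of a Pod that uses an init container, assuming a Service named `db` exists (the names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:        # run in series, each to completion
  - name: wait-for-db
    image: busybox:1.36
    command: ["sh", "-c", "until nslookup db; do sleep 2; done"]
  containers:            # start only after every init container succeeds
  - name: app
    image: nginx:1.25
```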
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes.
These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system's packet filtering layer if one is available.
Otherwise, kube-proxy forwards the traffic itself.
An agent that runs on each node in the cluster.
It makes sure that containers are running in a pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
The kubelet doesn’t manage containers that were not created by Kubernetes.
Component on the master (control plane) that runs controllers.
Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
In Kubernetes, controllers are control loops that watch the state of your cluster, then make or request changes where needed.
Each controller tries to move the current cluster state closer to the desired state.
Controllers watch the shared state of your cluster through the apiserver (part of the Control Plane).
Some controllers also run inside the control plane, providing control loops that are core to Kubernetes’ operations.
For example: the deployment controller, the daemonset controller, the namespace controller, and the persistent volume controller (and others) all run within the kube-controller-manager.
A command line tool for communicating with a Kubernetes API server.
You can use kubectl to create, inspect, update, and delete Kubernetes objects.
The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.
The API server is a component of the Kubernetes control plane that exposes the Kubernetes API.
The API server is the front end for the Kubernetes control plane.
The main implementation of a Kubernetes API server is kube-apiserver.
kube-apiserver is designed to scale horizontally; that is, it scales by deploying more instances.
You can run several instances of kube-apiserver and balance traffic between those instances.
The application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster.
Kubernetes resources and “records of intent” are all stored as API objects, and modified via RESTful calls to the API.
The API allows configuration to be managed in a declarative way.
Users can interact with the Kubernetes API directly, or via tools like kubectl.
The core Kubernetes API is flexible and can also be extended to support custom resources.
An API object that manages a replicated application.
Each replica is represented by a Pod, and the Pods are distributed among the nodes of a cluster.
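A minimal Deployment manifest, with illustrative names and image; note that the selector must match the Pod template's labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired number of Pod replicas
  selector:
    matchLabels:
      app: web           # must match the template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.25
```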
Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec.
Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods.
These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
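A sketch of a StatefulSet, with illustrative names; the `serviceName` field refers to a (hypothetical) headless Service that provides the Pods' stable network identities:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db        # headless Service governing network identity
  replicas: 2            # Pods get sticky names: db-0, db-1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16
```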
A ReplicaSet aims to maintain a stable set of replica Pods running at any given time.
Workload objects such as Deployment make use of ReplicaSets to ensure that the configured number of Pods are running in your cluster, based on the spec of that ReplicaSet.
Ensures a copy of a Pod is running across a set of nodes in a cluster.
Used to deploy system daemons such as log collectors and monitoring agents that typically must run on every Node.
A finite or batch task that runs to completion.
Creates one or more Pod objects and ensures that a specified number of them successfully terminate.
As Pods successfully complete, the Job tracks the successful completions.
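A minimal Job manifest (the name and command are illustrative); Job Pods must use a `restartPolicy` of `Never` or `OnFailure`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 1             # successful terminations required
  template:
    spec:
      restartPolicy: Never   # Job Pods may not restart Always
      containers:
      - name: pi
        image: perl:5.36
        command: ["perl", "-e", "print 3.14"]
```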
An abstract way to expose an application running on a set of Pods as a network service.
The set of Pods targeted by a Service is (usually) determined by a selector.
If more Pods are added or removed, the set of Pods matching the selector will change.
The Service makes sure that network traffic can be directed to the current set of Pods for the workload.
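A sketch of a Service whose selector targets Pods labeled `app: web` (label and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web             # any Pod with this label becomes a backend
  ports:
  - port: 80             # port the Service exposes
    targetPort: 8080     # port the backing Pods listen on
```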
A workload is an application running on Kubernetes.
Various core objects that represent different types or parts of a workload include the DaemonSet, Deployment, Job, ReplicaSet, and StatefulSet objects.
For example, a workload that has a web server and a database might run the database in one StatefulSet and the web server in a Deployment.
The layer that provides capacity such as CPU, memory, network, and storage so that the containers can run and connect to a network.
The layer where various containerized applications run.
Specification of a Kubernetes API object in JSON or YAML format.
A manifest specifies the desired state of an object that Kubernetes will maintain when you apply the manifest.
Each configuration file can contain multiple manifests.
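One way to sketch this: a single YAML file holding two manifests separated by `---` (the resource names here are made up):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: demo
data:
  LOG_LEVEL: debug
```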
Container environment variables are name=value pairs that provide useful information to containers running in a Pod.
They supply information required by the running containerized application, along with details about important resources:
for example, file system details, information about the container itself, and other cluster resources such as Service endpoints.
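A sketch of both kinds of environment variable in a Pod spec: a plain name=value pair (the `DB_HOST` value is hypothetical) and a downward-API `fieldRef` that exposes information about the Pod itself:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    - name: DB_HOST            # plain name=value pair
      value: "db.default.svc"
    - name: POD_NAME           # downward API: the Pod's own name
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
```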
A group of Linux processes with optional resource isolation, accounting, and limits.
cgroup is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network) for a collection of processes.
The container runtime is the software that is responsible for running containers.
Kubernetes supports several container runtimes: Docker, containerd, CRI-O, rktlet and any implementation of the Container Runtime Interface (CRI).
Stored instance of a container that holds a set of software needed to run an application.
A way of packaging software that allows it to be stored in a container registry, pulled to a local system, and run as an application.
Metadata is included in the image that can indicate what executable to run, who built it, and other information.
Docker (specifically, Docker Engine) is a software technology providing operating-system-level virtualization also known as containers.
Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces,
and a union-capable file system such as OverlayFS and others to allow independent containers to run within a single Linux instance,
avoiding the overhead of starting and maintaining virtual machines (VMs).
Provides constraints that limit aggregate resource consumption per Namespace.
Limits the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.
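A sketch of a ResourceQuota (namespace and limit values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: demo
spec:
  hard:
    pods: "10"             # at most 10 Pods in the namespace
    requests.cpu: "4"      # total CPU requested across all Pods
    requests.memory: 8Gi   # total memory requested
    limits.memory: 16Gi    # total memory limit
```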
A directory containing data, accessible to the containers in a pod.
A Kubernetes volume lives as long as the pod that encloses it.
Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts.
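A sketch of two containers in one Pod sharing an `emptyDir` volume (names and commands are illustrative); the volume persists for the Pod's lifetime even if either container restarts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-data
spec:
  volumes:
  - name: cache            # lives as long as the Pod
    emptyDir: {}
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /data     # both containers see the same directory
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /data
```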