<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Jumpstart K8s]]></title><description><![CDATA[Build your own app, and Master Kubernetes along the way!]]></description><link>http://jumpstartk8s.com/</link><image><url>http://jumpstartk8s.com/favicon.png</url><title>Jumpstart K8s</title><link>http://jumpstartk8s.com/</link></image><generator>Ghost 5.35</generator><lastBuildDate>Sat, 11 Apr 2026 02:42:51 GMT</lastBuildDate><atom:link href="http://jumpstartk8s.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[K8s Summaries - Pods]]></title><description><![CDATA[<p><strong>Workloads in Kubernetes</strong></p><p>A workload is an application running on Kubernetes inside a set of pods. A Pod represents a set of running containers on your cluster.</p><ul><li>Pods have a defined lifecycle. If a pod fails, you would need to create a new Pod to recover.</li><li>To manage pods, you</li></ul>]]></description><link>http://jumpstartk8s.com/k8s-summaries-pods/</link><guid isPermaLink="false">645ff75d5280d5050279b9dd</guid><category><![CDATA[Summary]]></category><dc:creator><![CDATA[Jude Naveen Raj Ilango]]></dc:creator><pubDate>Mon, 15 May 2023 00:51:40 GMT</pubDate><content:encoded><![CDATA[<p><strong>Workloads in Kubernetes</strong></p><p>A workload is an application running on Kubernetes inside a set of pods. A Pod represents a set of running containers on your cluster.</p><ul><li>Pods have a defined lifecycle. 
If a pod fails, you would need to create a new Pod to recover.</li><li>To manage pods, you can use workload resources that manage a set of pods on your behalf.</li></ul><p><strong>Workload Resources</strong></p><p>Kubernetes provides several built-in workload resources:</p><ol><li><em>Deployment and ReplicaSet:</em> Good for managing a stateless application workload, where any Pod is interchangeable and can be replaced if needed.</li><li><em>StatefulSet:</em> Lets you run related Pods that track state. For example, if your workload records data persistently, you can run a StatefulSet that matches each Pod with a PersistentVolume.</li><li><em>DaemonSet:</em> Defines Pods that provide node-local facilities. These might be fundamental to the operation of your cluster, such as a networking helper tool, or be part of an add-on. Every time you add a node to your cluster that matches the specification in a DaemonSet, the control plane schedules a Pod for that DaemonSet onto the new node.</li><li><em>Job and CronJob:</em> Define tasks that run to completion and then stop. Jobs represent one-off tasks, whereas CronJobs recur according to a schedule.</li></ol><p>In the wider Kubernetes ecosystem, you can find third-party workload resources that provide additional behaviors. 
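</p><p>To make that concrete, a third-party resource is registered with the API server through a CustomResourceDefinition. The sketch below is illustrative only: the <code>example.com</code> group and <code>CronTab</code> kind are hypothetical names, not a real project.</p><pre><code class="language-yaml">apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # CRD names must be &lt;plural&gt;.&lt;group&gt;
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
</code></pre><p>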
Using a custom resource definition, you can add a third-party workload resource if you want a specific behavior that&apos;s not part of Kubernetes&apos; core.</p><p><strong>Supporting Concepts</strong></p><ul><li><em>Garbage collection:</em> Tidies up objects from your cluster after their owning resource has been removed.</li><li><em>The time-to-live after finished controller:</em> Removes Jobs once a defined time has passed since they completed.</li></ul><p><strong>Further Resources</strong></p><p>Once your application is running, you might want to make it available on the internet as a Service or, for web applications only, using an Ingress.</p><p><strong>Pods: Summary</strong></p><ul><li>A Pod is the smallest deployable unit in Kubernetes. It can contain one or more containers sharing storage and network resources.</li><li>Pods can also contain init containers that run during Pod startup and ephemeral containers for debugging, if supported by the cluster.</li><li>Pods have shared Linux namespaces, cgroups, and potentially other facets of isolation, similar to a set of containers.</li><li>Pods can be created directly but are usually created using workload resources such as Deployment or Job.</li></ul><p><strong>Types of Pods</strong></p><ol><li><strong>Single Container Pods:</strong> The most common use case, where a Pod wraps a single container.</li><li><strong>Multi-container Pods:</strong> These Pods encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. 
They form a single cohesive unit of service.</li></ol><p><strong>Pod Management</strong></p><ul><li>Each Pod runs a single instance of a given application, with replication handled by multiple Pods.</li><li>Pods support multiple cooperating processes (as containers) that form a cohesive unit of service.</li><li>Workload resources can be used to manage multiple Pods.</li><li>Controllers for workload resources create Pods from a pod template and manage those Pods.</li></ul><p><strong>Pod Lifecycle</strong></p><ul><li>Pods are considered relatively ephemeral, disposable entities. They remain on their node until they finish execution, the Pod object is deleted, the Pod is evicted for lack of resources, or the node fails.</li><li>Restarting a container in a Pod is different from restarting a Pod. A Pod persists until it is deleted.</li></ul><p><strong>Pod Networking and Storage</strong></p><ul><li>Each Pod is assigned a unique IP address. All containers in a Pod share the network namespace, including the IP address and network ports.</li><li>A Pod can specify a set of shared storage volumes. All containers in the Pod can access the shared volumes, allowing those containers to share data.</li></ul><p><strong>Privileged Mode for Containers</strong></p><ul><li>Containers in a Pod can run in privileged mode to use operating system administrative capabilities.</li></ul><p><strong>Static Pods</strong></p><ul><li>Managed directly by the kubelet daemon on a specific node, without the API server observing them. Useful for running a self-hosted control plane.</li></ul><p><strong>Container Probes</strong></p><ul><li>Probes are diagnostics performed periodically by the kubelet on a container, used to check the health of the container.</li></ul><p><strong>Operating System Support</strong></p><ul><li>As of Kubernetes v1.25, only Linux and Windows are supported. 
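</li></ul><p>For example, a Pod can pin its target operating system with the <code>spec.os.name</code> field. A hedged sketch (the Pod and image names are illustrative):</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: linux-only-pod
spec:
  os:
    name: linux   # or: windows
  containers:
    - name: app
      image: nginx:1.25
</code></pre><ul><li>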
The .spec.os.name field indicates the OS on which you want the Pod to run.</li></ul><h3 id="pod-templates">Pod Templates</h3><ul><li>Controllers for workload resources (like Deployments, Jobs, and DaemonSets) create Pods using a Pod template.</li><li>Pod templates are specifications for creating Pods and are part of the desired state of the workload resource.</li><li>Modifying the Pod template or switching to a new Pod template does not directly affect existing Pods. Instead, the workload resource controller creates new Pods based on the updated template.</li><li>Each workload resource has its own rules for handling changes to the Pod template.</li></ul><h3 id="pod-update-and-replacement">Pod Update and Replacement</h3><ul><li>If the Pod template changes, the controller creates new Pods based on the updated template.</li><li>Kubernetes allows you to update some fields of a running Pod, in place. However, limitations exist:</li><li>Most metadata about a Pod is immutable.</li><li>If the <code>metadata.deletionTimestamp</code> is set, no new entry can be added to the <code>metadata.finalizers</code> list.</li><li>Pod updates may not change fields other than <code>spec.containers[*].image</code>, <code>spec.initContainers[*].image</code>, <code>spec.activeDeadlineSeconds</code>, or <code>spec.tolerations</code>.</li></ul><h3 id="resource-sharing-and-communication">Resource Sharing and Communication</h3><ul><li>Pods can specify a set of shared storage volumes, accessible by all containers in the Pod.</li><li>Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports.</li><li>Containers within the Pod can communicate with each other using localhost. 
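</li></ul><p>A minimal sketch of this sharing (names and images are illustrative): two containers in one Pod mount the same <code>emptyDir</code> volume, and because they share the network namespace, they could also reach each other over <code>localhost</code>:</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello &gt; /data/msg; sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
</code></pre><ul><li>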
They must coordinate their use of shared network resources when communicating with entities outside the Pod.</li><li>Containers in a Pod can also communicate with each other using standard inter-process communication mechanisms such as System V semaphores or POSIX shared memory.</li><li>Containers in different Pods have distinct IP addresses and cannot communicate by OS-level IPC without special configuration.</li><li>Containers that want to interact with a container running in a different Pod can use IP networking to communicate.</li></ul><h3 id="privileged-mode-for-containers">Privileged Mode for Containers</h3><ul><li>Any container in a Pod can run in privileged mode to access operating system administrative capabilities.</li><li>This is available for both Windows and Linux.</li></ul><h4 id="linux-privileged-containers">Linux Privileged Containers</h4><ul><li>On Linux, containers can enable privileged mode using the <code>privileged</code> flag on the security context of the container spec.</li></ul><h4 id="windows-privileged-containers">Windows Privileged Containers</h4><ul><li>As of Kubernetes v1.26, you can create a Windows HostProcess pod by setting the <code>windowsOptions.hostProcess</code> flag on the security context of the pod spec.</li></ul><h3 id="static-pods">Static Pods</h3><ul><li>Static Pods are managed directly by the kubelet daemon on a specific node, not observed by the API server.</li><li>Static Pods are always bound to one kubelet on a specific node.</li><li>The kubelet automatically tries to create a mirror Pod on the Kubernetes API server for each static Pod.</li><li>Static Pods running on a node are thus visible on the API server, but cannot be controlled from there.</li></ul><h3 id="container-probes">Container Probes</h3><ul><li>A probe is a diagnostic performed periodically by the kubelet on a container.</li><li>The kubelet can perform diagnostics through different actions:</li><li>ExecAction (performed with the help of the container 
runtime)</li><li>TCPSocketAction (checked directly by the kubelet)</li><li>HTTPGetAction (checked directly by the kubelet)</li></ul><p></p><p></p><h1 id="kubernetes-pod-lifecycle">Kubernetes Pod Lifecycle</h1><h2 id="pod-lifetime">Pod Lifetime</h2><ul><li>Pods are ephemeral entities, created once, assigned a unique ID (UID), and scheduled to a node until terminated or deleted.</li><li>If a Node fails, Pods on the node are scheduled for deletion after a timeout period.</li><li>A Pod isn&apos;t rescheduled to a different node if it fails. Instead, a new Pod can replace it with a different UID.</li><li>Objects tied to a Pod&apos;s lifetime, like a volume, are also destroyed when the Pod is deleted.</li></ul><h2 id="pod-phases">Pod Phases</h2><ul><li>A Pod moves through several phases in its lifecycle:</li><li><strong>Pending</strong>: Accepted by the Kubernetes cluster, but some containers aren&apos;t ready yet.</li><li><strong>Running</strong>: Bound to a node, all containers have been created, and at least one container is still running.</li><li><strong>Succeeded</strong>: All containers in the Pod terminated successfully, will not be restarted.</li><li><strong>Failed</strong>: All containers in the Pod terminated, with at least one failure (either non-zero exit status or system termination).</li><li><strong>Unknown</strong>: Due to an error, the state of the Pod couldn&apos;t be obtained.</li></ul><h2 id="pod-and-container-states">Pod and Container States</h2><ul><li>Each container in a Pod has three possible states: Waiting, Running, and Terminated.</li><li>A container&apos;s lifecycle can be influenced by hooks that trigger events at certain points.</li></ul><h3 id="waiting">Waiting</h3><ul><li>If a container is not Running or Terminated, it&apos;s in the Waiting state. Reasons for Waiting might include pulling the container image or applying Secret data.</li></ul><h3 id="running">Running</h3><ul><li>A Running container is executing without issues. 
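</li></ul><p>A container spec fragment showing both hook points (a hedged sketch; the commands are illustrative): <code>postStart</code> runs right after the container starts, and <code>preStop</code> runs before it is terminated:</p><pre><code class="language-yaml">containers:
  - name: app
    image: nginx:1.25
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started &gt;&gt; /tmp/lifecycle.log"]
      preStop:
        exec:
          command: ["nginx", "-s", "quit"]
</code></pre><ul><li>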
If a postStart hook was configured, it&apos;s already executed and finished.</li></ul><h3 id="terminated">Terminated</h3><ul><li>A Terminated container either ran to completion or failed. If a preStop hook was configured, it runs before the container enters the Terminated state.</li></ul><h2 id="container-restart-policy">Container Restart Policy</h2><ul><li>A Pod&apos;s spec contains a <code>restartPolicy</code> field. Values include &quot;Always&quot; (default), &quot;OnFailure&quot;, and &quot;Never&quot;.</li><li><code>restartPolicy</code> only refers to restarts of the containers by the kubelet on the same node.</li><li>After containers exit, the kubelet restarts them with an exponential back-off delay, capped at five minutes.</li></ul><h2 id="pod-conditions">Pod Conditions</h2><ul><li>A Pod has a PodStatus, which has an array of PodConditions.</li><li>A Pod&apos;s conditions include: PodScheduled, ContainersReady, Initialized, Ready, and PodHasNetwork (alpha feature).</li></ul><h2 id="pod-readiness">Pod Readiness</h2><ul><li>Pod readiness is a feature that allows your application to provide additional signals to the PodStatus.</li><li>It uses <code>readinessGates</code> in the Pod&apos;s spec to list extra conditions that the kubelet checks for Pod readiness.</li><li>The status of these conditions is extracted from the <code>status.conditions</code> field of the Pod.</li><li>If the condition isn&apos;t found, it defaults to &quot;False&quot;.</li><li>The condition names should conform to the Kubernetes label key format.</li><li>Custom conditions and readiness of all containers are required for a Pod to be considered ready.</li><li>If all containers are ready, but at least one custom condition is missing or False, the kubelet sets the Pod&apos;s condition to ContainersReady.</li></ul><h2 id="pod-network-readiness">Pod Network Readiness</h2><ul><li>After being scheduled on a node and admitted by the kubelet, a Pod needs to have volumes mounted and network configuration set 
up.</li><li>The <code>PodHasNetworkCondition</code> feature gate allows the kubelet to report the Pod&apos;s network initialization status.</li><li>The <code>PodHasNetwork</code> condition can be set to False in early or later stages of the Pod&apos;s lifecycle under certain circumstances.</li><li>This condition is set to True after successful sandbox creation and network configuration.</li></ul><h2 id="pod-scheduling-readiness">Pod Scheduling Readiness</h2><ul><li>This feature is currently in alpha and additional information can be found in the Pod Scheduling Readiness documentation.</li></ul><h2 id="container-probes-1">Container Probes</h2><ul><li>Probes are diagnostics performed periodically by the kubelet on a container.</li><li>There are four types of probes: <code>exec</code>, <code>grpc</code>, <code>httpGet</code>, and <code>tcpSocket</code>.</li><li>Each probe has three possible outcomes: Success, Failure, Unknown.</li><li>The kubelet can perform and react to three kinds of probes on running containers: <code>livenessProbe</code>, <code>readinessProbe</code>, and <code>startupProbe</code>.</li></ul><h3 id="liveness-probe">Liveness Probe</h3><ul><li>It indicates whether the container is running.</li><li>If it fails, the kubelet kills the container and the container follows its restart policy.</li><li>Default state is Success if a liveness probe is not provided.</li></ul><h3 id="readiness-probe">Readiness Probe</h3><ul><li>It indicates whether the container is ready to respond to requests.</li><li>If it fails, the endpoints controller removes the Pod&apos;s IP address from all matching Services&apos; endpoints.</li><li>Before the initial delay, the default readiness state is Failure. If a readiness probe is not provided, the default state is Success.</li></ul><h3 id="startup-probe">Startup Probe</h3><ul><li>It indicates whether the application within the container has started.</li><li>All other probes are disabled if a startup probe is provided, until it succeeds.</li><li>If it fails, the kubelet kills the 
container and the container follows its restart policy.</li><li>Default state is Success if a startup probe is not provided.</li></ul><h3 id="when-to-use-probes">When to Use Probes</h3><ul><li>Liveness probe: Useful if you want your container to be killed and restarted if a probe fails.</li><li>Readiness probe: Useful to start sending traffic to a Pod only when a probe succeeds, or for taking the container down for maintenance.</li><li>Startup probe: Useful for Pods that have containers that take a long time to start up.</li></ul><h2 id="note">Note</h2><ul><li>On deletion, a Pod puts itself into an unready state regardless of whether a readiness probe exists.</li><li>It stays in this state while waiting for the containers in the Pod to stop.</li></ul><h1 id="kubernetes-pod-termination-process">Kubernetes Pod Termination Process</h1><p>Pods in Kubernetes represent processes running on nodes. It&apos;s crucial to allow these processes to terminate gracefully instead of abruptly stopping them with a KILL signal. The key steps involved in this process are:</p><ol><li><strong>Requesting Deletion</strong>: When you request a Pod&apos;s deletion, the cluster records and starts tracking the grace period before the Pod can be forcefully killed.</li><li><strong>Graceful Shutdown</strong>: The container runtime typically sends a TERM signal to each container&apos;s main process. 
If the grace period expires and processes are still running, a KILL signal is sent, and the Pod is deleted from the API Server.</li><li><strong>Process Interruption Handling</strong>: If the kubelet or container runtime&apos;s management service restarts during process termination, the cluster retries from the start with the full original grace period.</li></ol><h2 id="deletion-flow-example">Deletion Flow Example:</h2><ol><li><strong>Request Deletion</strong>: Delete a specific Pod using the <code>kubectl</code> tool with a default grace period of 30 seconds.</li><li><strong>Pod Status Update</strong>: The API server updates the Pod with a &quot;dead&quot; status and grace period. If checked with <code>kubectl describe</code>, the Pod shows up as &quot;Terminating&quot;.</li><li><strong>Local Shutdown Process</strong>: The kubelet on the node starts the local Pod shutdown process once it detects the Pod as terminating.</li><li><strong>PreStop Hook</strong>: If a preStop hook is defined in any of the Pod&apos;s containers, the kubelet runs it. If it&apos;s still running after the grace period, the kubelet requests a grace period extension of 2 seconds.</li><li><strong>TERM Signal</strong>: The kubelet triggers the container runtime to send a TERM signal to process 1 in each container.</li><li><strong>Service Interruption</strong>: The control plane evaluates whether to remove the shutting-down Pod from EndpointSlice (and Endpoints) objects. ReplicaSets and other resources no longer treat the shutting-down Pod as a valid, in-service replica.</li><li><strong>Forcible Shutdown</strong>: When the grace period expires, the kubelet triggers forcible shutdown. Any remaining processes in any container in the Pod receive SIGKILL. 
The kubelet also cleans up a hidden pause container if the container runtime uses one.</li><li><strong>Pod Transition</strong>: The kubelet transitions the Pod into a terminal phase (Failed or Succeeded) depending on the end state of its containers.</li><li><strong>Forcible Removal</strong>: The kubelet triggers forcible removal of the Pod object from the API server by setting the grace period to 0 for immediate deletion.</li><li><strong>Deletion</strong>: The API server deletes the Pod&apos;s API object, rendering it invisible to any client.</li></ol><h1 id="forced-pod-termination">Forced Pod Termination</h1><p>Forced deletions can be potentially disruptive. By default, all deletions are graceful within 30 seconds. However, you can override this default with the <code>--grace-period=&lt;seconds&gt;</code> option in the <code>kubectl delete</code> command.</p><p>Setting the grace period to 0 forcibly and immediately deletes the Pod from the API server and triggers immediate cleanup by the kubelet on the node.</p><h1 id="garbage-collection-of-pods">Garbage Collection of Pods</h1><p>For failed Pods, their API objects remain in the cluster&apos;s API until explicitly removed. The Pod garbage collector (PodGC) in the control plane cleans up terminated Pods when the number of Pods exceeds the configured threshold. 
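</p><p>The threshold is a kube-controller-manager setting (default 12500 terminated Pods). As a sketch, in a kubeadm-style cluster where the controller manager runs as a static Pod, the flag could appear in its manifest (the value below is illustrative):</p><pre><code class="language-yaml"># fragment of the kube-controller-manager static Pod manifest
spec:
  containers:
    - name: kube-controller-manager
      command:
        - kube-controller-manager
        - --terminated-pod-gc-threshold=500
</code></pre><p>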
PodGC also cleans up Pods that are orphan pods, unscheduled terminating pods, or terminating pods bound to a non-ready node tainted with <code>node.kubernetes.io/out-of-service</code>.</p>]]></content:encoded></item><item><title><![CDATA[K8s Summaries - Storage]]></title><description><![CDATA[<h3 id="volumes-in-kubernetes">Volumes in Kubernetes</h3><p>Kubernetes supports several types of volumes, including:</p><h4 id="deprecated-volumes">Deprecated Volumes</h4><ul><li><code>awsElasticBlockStore</code></li><li><code>azureDisk</code></li><li><code>azureFile</code></li><li><code>gcePersistentDisk</code></li></ul><h4 id="migrating-volumes">Migrating Volumes</h4><ul><li>AWS EBS, Azure Disk, and Azure File have respective CSI migration paths.</li></ul><h4 id="other-volume-types">Other Volume Types</h4><ul><li><code>cephfs</code></li><li><code>cinder</code></li><li>OpenStack has a CSI migration path.</li></ul><h5 id="configmap">ConfigMap</h5><ul><li>Injects configuration data into pods.</li><li>You must create</li></ul>]]></description><link>http://jumpstartk8s.com/k8s-summaries-storage/</link><guid isPermaLink="false">646181805280d5050279b9f4</guid><category><![CDATA[Summary]]></category><dc:creator><![CDATA[Jude Naveen Raj Ilango]]></dc:creator><pubDate>Mon, 15 May 2023 00:51:31 GMT</pubDate><content:encoded><![CDATA[<h3 id="volumes-in-kubernetes">Volumes in Kubernetes</h3><p>Kubernetes supports several types of volumes, including:</p><h4 id="deprecated-volumes">Deprecated Volumes</h4><ul><li><code>awsElasticBlockStore</code></li><li><code>azureDisk</code></li><li><code>azureFile</code></li><li><code>gcePersistentDisk</code></li></ul><h4 id="migrating-volumes">Migrating Volumes</h4><ul><li>AWS EBS, Azure Disk, and Azure File have respective CSI migration paths.</li></ul><h4 id="other-volume-types">Other Volume Types</h4><ul><li><code>cephfs</code></li><li><code>cinder</code></li><li>OpenStack has a CSI migration path.</li></ul><h5 
id="configmap">ConfigMap</h5><ul><li>Injects configuration data into pods.</li><li>You must create a ConfigMap before you can use it.</li><li>A container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.</li><li>Text data is exposed as files using the UTF-8 character encoding.</li></ul><h5 id="downwardapi">DownwardAPI</h5><ul><li>Makes downward API data available to applications.</li><li>Exposed data is in read-only files in plain text format.</li><li>A container using the downward API as a subPath volume mount does not receive updates when field values change.</li></ul><h5 id="emptydir">EmptyDir</h5><ul><li>Created when a Pod is assigned to a node and exists as long as that Pod is running on that node.</li><li>All containers in the Pod can read and write the same files in the emptyDir volume.</li><li>The data in an emptyDir volume is safe across container crashes.</li><li>A size limit can be specified for the default medium, which limits the capacity of the emptyDir volume.</li></ul><h5 id="fibre-channel-fc">Fibre Channel (fc)</h5><ul><li>Allows an existing fibre channel block storage volume to mount in a Pod.</li><li>You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.</li></ul><h5 id="gce-persistent-disk">GCE Persistent Disk</h5><ul><li>The contents of a PD are preserved and the volume is merely unmounted when a pod is removed.</li><li>A gcePersistentDisk volume permits multiple consumers to simultaneously mount a persistent disk as read-only.</li><li>Using a GCE persistent disk with a Pod controlled by a ReplicaSet will fail unless the PD is read-only or the replica count is 0 or 1.</li></ul><h5 id="regional-persistent-disks">Regional Persistent Disks</h5><ul><li>Available in two zones within the same region.</li><li>Must be provisioned as a PersistentVolume; referencing the volume directly from a pod is not supported.</li></ul><h5 
id="gce-csi-migration">GCE CSI Migration</h5><ul><li>Redirects all plugin operations from the existing in-tree plugin to the pd.csi.storage.gke.io Container Storage Interface (CSI) Driver.</li><li>The GCE PD CSI Driver must be installed on the cluster.</li></ul><p>Please refer to the detailed documentation for more specific configuration examples and further usage details.</p><p></p><h2 id="gitrepo-deprecated">gitRepo (deprecated)</h2><ul><li>The <code>gitRepo</code> volume type is deprecated.</li><li>To provision a container with a git repo, use an EmptyDir with an InitContainer to clone the repo.</li><li>A gitRepo volume is a type of volume plugin that mounts an empty directory and clones a git repository into this directory for the Pod to use.</li></ul><h2 id="glusterfs-removed">glusterfs (removed)</h2><ul><li>As of Kubernetes 1.27, the glusterfs volume type is no longer included.</li><li>The GlusterFS in-tree storage driver was deprecated in Kubernetes v1.25 and removed entirely in v1.26.</li></ul><h2 id="hostpath">hostPath</h2><ul><li><code>HostPath</code> volumes mount a file or directory from the host node&apos;s filesystem into the Pod.</li><li>They pose security risks and should be used sparingly and with specific access controls.</li><li>They can be used for containers needing access to Docker internals, running cAdvisor, or allowing a Pod to specify whether a given hostPath should exist prior to the Pod running.</li><li>Can optionally specify a type for a hostPath volume. 
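</li></ul><p>A Pod spec fragment using a typed <code>hostPath</code> mount (a hedged sketch; the volume name and path are illustrative):</p><pre><code class="language-yaml">volumes:
  - name: host-logs
    hostPath:
      path: /var/log
      # the Pod fails to start unless /var/log already exists as a directory
      type: Directory
</code></pre><ul><li>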
Types include DirectoryOrCreate, Directory, FileOrCreate, File, Socket, CharDevice, and BlockDevice.</li></ul><h2 id="iscsi">iscsi</h2><ul><li><code>iSCSI</code> volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.</li><li>Its contents are preserved and the volume is merely unmounted when a Pod is removed.</li><li>Can be mounted as read-only by multiple consumers simultaneously.</li></ul><h2 id="local">local</h2><ul><li>A <code>local</code> volume represents a mounted local storage device such as a disk, partition or directory.</li><li>They can only be used as a statically created PersistentVolume and dynamic provisioning is not supported.</li><li>Subject to the availability of the underlying node and not suitable for all applications.</li><li>Recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer when using local volumes.</li></ul><h2 id="nfs">nfs</h2><ul><li>An <code>NFS</code> volume allows an existing NFS (Network File System) share to be mounted into a Pod.</li><li>Its contents are preserved and the volume is merely unmounted when a Pod is removed.</li><li>NFS can be mounted by multiple writers simultaneously.</li></ul><h2 id="persistentvolumeclaim">persistentVolumeClaim</h2><ul><li>A <code>persistentVolumeClaim</code> volume is used to mount a PersistentVolume into a Pod.</li><li>Allows users to &quot;claim&quot; durable storage without knowing the details of the particular cloud environment.</li></ul><h2 id="portworxvolume-deprecated">portworxVolume (deprecated)</h2><ul><li>A <code>portworxVolume</code> is an elastic block storage layer that runs hyperconverged with Kubernetes, but it&apos;s deprecated as of Kubernetes v1.25.</li><li>Can be dynamically created through Kubernetes or pre-provisioned and referenced inside a Pod.</li><li>Portworx CSI migration feature is in beta state as of Kubernetes v1.25, but turned off by default.</li><li>It redirects all plugin operations from the existing 
in-tree plugin to the <code>pxd.portworx.com</code> Container Storage Interface (CSI) Driver, which must be installed on the cluster.</li></ul><h2 id="projected-volume">Projected Volume</h2><ul><li>A projected volume maps multiple existing volume sources into the same directory.</li></ul><h2 id="rados-block-device-rbd-volume">Rados Block Device (RBD) Volume</h2><ul><li>An RBD volume allows a Rados Block Device volume to mount into your Pod.</li><li>Unlike <code>emptyDir</code>, an RBD volume&apos;s contents persist even when a pod is removed.</li><li>This allows pre-population of data that can be shared between pods.</li><li>You need a running Ceph installation to use RBD.</li><li>RBD volumes can be mounted as read-only by multiple consumers simultaneously, but can only be mounted by a single consumer in read-write mode.</li></ul><h3 id="rbd-csi-migration">RBD CSI Migration</h3><ul><li>As of Kubernetes v1.23 (alpha), all plugin operations for RBD can be redirected to the <code>rbd.csi.ceph.com</code> CSI driver when the <code>CSIMigrationRBD</code> feature gate is enabled.</li><li>To use this, you must:</li><li>Install the Ceph CSI driver (v3.5.0 or above) in your Kubernetes cluster.</li><li>Create a clusterID based on the monitors hash in the CSI config map.</li><li>If the adminId value in the StorageClass differs from <code>admin</code>, patch the adminSecretName with the base64 value of the adminId parameter value.</li></ul><h2 id="secret-volume">Secret Volume</h2><ul><li>Secret volumes are used to pass sensitive information to Pods.</li><li>Secrets can be stored in the Kubernetes API and mounted as files for use by pods.</li><li>They are backed by <code>tmpfs</code>, a RAM-backed filesystem, so they are never written to non-volatile storage.</li><li>You must create a Secret in the Kubernetes API before using it.</li><li>A container using a Secret as a subPath volume mount won&apos;t receive Secret updates.</li></ul><h2 
id="vspherevolume-deprecated">vSphereVolume (deprecated)</h2><ul><li>vSphereVolume is used to mount a vSphere VMDK volume into your Pod and supports both VMFS and VSAN datastores.</li><li>The contents of a volume persist when it is unmounted.</li><li>The use of vSphere CSI out-of-tree driver is recommended instead of vSphereVolume.</li></ul><h3 id="vsphere-csi-migration">vSphere CSI Migration</h3><ul><li>As of Kubernetes v1.26 (stable), all operations for the in-tree vsphereVolume type are redirected to the <code>csi.vsphere.vmware.com</code> CSI driver.</li><li>To migrate, you must:</li><li>Install the vSphere CSI driver on your cluster.</li><li>Run vSphere 7.0u2 or later.</li><li>Some StorageClass parameters from the built-in vsphereVolume plugin are not supported by the vSphere CSI driver.</li></ul><h3 id="vsphere-csi-migration-complete">vSphere CSI Migration Complete</h3><ul><li>As of Kubernetes v1.19 (beta), to disable the vsphereVolume plugin, you need to set <code>InTreePluginvSphereUnregister</code> feature flag to true.</li><li>The <code>csi.vsphere.vmware.com</code> CSI driver must be installed on all worker nodes.</li></ul><p></p><h2 id="using-subpath">Using subPath</h2><ul><li>The <code>volumeMounts.subPath</code> property is used to specify a sub-path inside a referenced volume instead of the root. This allows sharing a single volume for multiple uses in one pod.</li><li>Example: A LAMP (Linux Apache MySQL PHP) stack pod configures a shared volume, mapping the PHP application&apos;s code and assets to the volume&apos;s <code>html</code> folder and the MySQL database to the <code>mysql</code> folder.</li></ul><h2 id="using-subpath-with-expanded-environment-variables">Using subPath with Expanded Environment Variables</h2><ul><li><code>subPathExpr</code> is used to construct <code>subPath</code> directory names from downward API environment variables. 
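</li></ul><p>A hedged sketch of this pattern, matching the directory-per-Pod layout described in this section:</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: container1
      image: busybox:1.36
      command: ["sh", "-c", "while true; do echo hello &gt;&gt; /logs/out.log; sleep 1; done"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
      volumeMounts:
        - name: workdir1
          mountPath: /logs
          # expands to the Pod name, creating /var/log/pods/pod1 on the host
          subPathExpr: $(POD_NAME)
  volumes:
    - name: workdir1
      hostPath:
        path: /var/log/pods
</code></pre><ul><li>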
<code>subPath</code> and <code>subPathExpr</code> properties are mutually exclusive.</li><li>Example: A Pod uses <code>subPathExpr</code> to create a directory <code>pod1</code> within the <code>hostPath</code> volume <code>/var/log/pods</code>. The directory <code>/var/log/pods/pod1</code> is mounted at <code>/logs</code> in the container.</li></ul><h2 id="resources">Resources</h2><ul><li>The storage media of an <code>emptyDir</code> volume is determined by the filesystem holding the kubelet root dir (typically <code>/var/lib/kubelet</code>). There&apos;s no space limit or isolation for <code>emptyDir</code> or <code>hostPath</code> volumes.</li></ul><h2 id="out-of-tree-volume-plugins">Out-of-tree Volume Plugins</h2><ul><li>Out-of-tree volume plugins, like Container Storage Interface (CSI) and FlexVolume (deprecated), allow storage vendors to create custom storage plugins without adding their plugin source code to the Kubernetes repository.</li><li>These plugins can be developed independently of the Kubernetes code base and deployed on Kubernetes clusters as extensions.</li></ul><h2 id="csi">CSI</h2><ul><li>CSI defines a standard interface for container orchestration systems to expose arbitrary storage systems to their container workloads.</li><li>A CSI compatible volume driver, once deployed on a Kubernetes cluster, allows users to use the <code>csi</code> volume type to attach or mount the volumes exposed by the CSI driver.</li></ul><h2 id="csi-ephemeral-volumes">CSI Ephemeral Volumes</h2><ul><li>CSI volumes can be directly configured within the Pod specification. 
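</li></ul><p>A Pod spec fragment with an inline CSI volume (a hedged sketch; the driver name and attributes are hypothetical and depend on which CSI drivers are installed in the cluster):</p><pre><code class="language-yaml">volumes:
  - name: scratch
    csi:
      driver: inline.storage.example.com   # hypothetical CSI driver
      volumeAttributes:
        size: 1Gi
</code></pre><ul><li>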
These volumes are ephemeral and do not persist across pod restarts.</li></ul><h2 id="windows-csi-proxy">Windows CSI Proxy</h2><ul><li>For Windows worker nodes, privileged operations for containerized CSI node plugins are supported using <code>csi-proxy</code>, a community-managed, stand-alone binary that needs to be pre-installed on each Windows node.</li></ul><h2 id="migrating-to-csi-drivers-from-in-tree-plugins">Migrating to CSI Drivers from In-tree Plugins</h2><ul><li>The <code>CSIMigration</code> feature redirects operations against existing in-tree plugins to corresponding CSI plugins. This allows a smooth transition to a CSI driver that supersedes an in-tree plugin without any configuration changes.</li></ul><h2 id="flexvolume">FlexVolume</h2><ul><li>FlexVolume is a deprecated out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. It&apos;s recommended to use an out-of-tree CSI driver for integrating external storage with Kubernetes.</li></ul><h2 id="mount-propagation">Mount Propagation</h2><ul><li>Mount propagation allows sharing volumes mounted by a container with other containers in the same pod or even other pods on the same node.</li><li>Three propagation modes are available: <code>None</code> (default), <code>HostToContainer</code>, and <code>Bidirectional</code>.</li><li><code>Bidirectional</code> mount propagation is only allowed in privileged containers due to potential damage to the host operating system.</li></ul><h2 id="configuration">Configuration</h2><ul><li>For mount propagation to work properly on some deployments (CoreOS, RedHat/Centos, Ubuntu), Docker&apos;s <code>MountFlags</code> must be configured correctly. 
After the configuration, the Docker daemon needs to be restarted.</li></ul><h1 id="kubernetes-persistent-volumes-and-persistent-volume-claims">Kubernetes Persistent Volumes and Persistent Volume Claims</h1><h2 id="introduction">Introduction</h2><ul><li><strong>Persistent Volume (PV)</strong>: A piece of storage in the cluster that is provisioned by an administrator or dynamically provisioned using Storage Classes. It has a lifecycle independent of any individual Pod that uses the PV. It captures the details of the implementation of the storage system.</li><li><strong>Persistent Volume Claim (PVC)</strong>: A user&apos;s request for storage. It is similar to a Pod - as Pods consume node resources, PVCs consume PV resources. PVCs can request specific size and access modes.</li><li><strong>StorageClass resource</strong>: Allows cluster administrators to offer a variety of PersistentVolumes with different properties, without exposing users to the implementation details.</li></ul><h2 id="lifecycle-of-a-volume-and-claim">Lifecycle of a Volume and Claim</h2><p><strong>Provisioning</strong>: PVs can be provisioned statically or dynamically.</p><ul><li>Static: A cluster administrator creates a number of PVs which exist in the Kubernetes API for consumption.</li><li>Dynamic: When no static PVs match a user&apos;s PVC, the cluster may dynamically provision a volume for the PVC based on StorageClasses.</li><li><strong>Binding</strong>: A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. Once bound, the PVC-to-PV binding is exclusive and one-to-one.</li><li><strong>Using</strong>: Pods use claims as volumes. 
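</li></ul><p>For example, a Pod references a claim by name in its volumes section (names illustrative):</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      # the claim must exist in the same namespace as the Pod
      persistentVolumeClaim:
        claimName: myclaim</code></pre><ul><li>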
The cluster inspects the claim to find the bound volume and mounts that volume for a Pod.</li><li><strong>Storage Object in Use Protection</strong>: Ensures that PVCs in active use by a Pod and PVs that are bound to PVCs are not removed from the system to prevent data loss.</li></ul><p><strong>Reclaiming</strong>: When a user is done with their volume, they can delete the PVC objects from the API for resource reclamation. The reclaim policy for a PV tells the cluster what to do with the volume after it has been released of its claim. Policies include:</p><ul><li>Retain: Allows for manual reclamation of the resource.</li><li>Delete: Removes both the PV object from Kubernetes and the associated storage asset in the external infrastructure.</li><li>Recycle (deprecated): Performs a basic scrub on the volume and makes it available again for a new claim.</li></ul><h2 id="note">Note</h2><p>A PVC is in active use by a Pod when a Pod object exists that is using the PVC. PVC or PV removal is postponed until the PVC is no longer actively used by any Pods, or the PV is no longer bound to a PVC.</p><p></p><h2 id="persistentvolume-deletion-protection-finalizer">PersistentVolume Deletion Protection Finalizer</h2><ul><li>PersistentVolumes with a Delete reclaim policy are only deleted after the backing storage is deleted.</li><li>Two finalizers are introduced, <code>kubernetes.io/pv-controller</code> for in-tree plugin volumes, and <code>external-provisioner.volume.kubernetes.io/finalizer</code> for CSI volumes.</li><li>When the CSIMigration{provider} feature flag is enabled for an in-tree volume plugin, <code>kubernetes.io/pv-controller</code> is replaced by <code>external-provisioner.volume.kubernetes.io/finalizer</code>.</li></ul><h2 id="reserving-a-persistentvolume">Reserving a PersistentVolume</h2><ul><li>You can pre-bind a PersistentVolumeClaim (PVC) to a specific PersistentVolume (PV).</li><li>You specify a PV in a PVC to declare a binding.</li><li>The binding happens 
regardless of some volume matching criteria, but the control plane still checks that storage class, access modes, and requested storage size are valid.</li><li>To reserve a specific storage volume, specify the relevant PVC in the claimRef field of the PV so that other PVCs can&apos;t bind to it.</li></ul><h2 id="expanding-persistent-volumes-claims">Expanding Persistent Volume Claims</h2><ul><li>PVCs can be expanded if their storage class&apos;s allowVolumeExpansion field is set to true.</li><li>A larger volume for a PVC can be requested by editing the PVC object and specifying a larger size.</li><li>Directly editing the size of a PersistentVolume can prevent an automatic resize of that volume.</li></ul><h2 id="csi-volume-expansion">CSI Volume Expansion</h2><ul><li>CSI volume expansion requires the underlying CSI driver to support volume expansion.</li><li>Resizing a volume containing a file system can only be done if the file system is XFS, Ext3, or Ext4.</li><li>File system expansion is done either when a Pod is starting up or while a Pod is running, provided the underlying file system supports online expansion.</li></ul><h2 id="resizing-an-in-use-persistentvolumeclaim">Resizing an in-use PersistentVolumeClaim</h2><ul><li>An expanded in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded.</li><li>This has no effect on PVCs that are not in use by a Pod or deployment.</li></ul><h2 id="recovering-from-failure-when-expanding-volumes">Recovering from Failure when Expanding Volumes</h2><ul><li>If the requested size is too large for the underlying storage system to satisfy, expansion of the PVC is retried continuously until action is taken.</li><li>A cluster administrator can recover manually by cancelling the resize requests.</li></ul><h2 id="types-of-persistent-volumes">Types of Persistent Volumes</h2><ul><li>PersistentVolume types are implemented as plugins.</li><li>Current supported types: cephfs, csi, fc, hostPath, iscsi, local, nfs, 
rbd.</li><li>Deprecated types: awsElasticBlockStore, azureDisk, azureFile, cinder, flexVolume, gcePersistentDisk, portworxVolume, vsphereVolume. These are still supported but will be removed in a future Kubernetes release.</li><li>No longer supported types: photonPersistentDisk</li></ul><p></p><h3 id="persistentvolume-pv-specifications">PersistentVolume (PV) Specifications</h3><ul><li>Spec and Status: Specification and status of the volume.</li><li>Name: Must be a valid DNS subdomain name.</li><li>Capacity: Storage capacity (e.g., 5Gi).</li><li>VolumeMode: Filesystem (default) or Block.</li><li>AccessModes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany, ReadWriteOncePod (beta).</li><li>StorageClassName: Specifies the class of the PV.</li><li>PersistentVolumeReclaimPolicy: Retain, Recycle, or Delete.</li><li>MountOptions: Additional options for mounting PV on a node (not supported by all PV types).</li></ul><h3 id="capacity">Capacity</h3><ul><li>Storage size is the only resource that can be set or requested (for now).</li></ul><h3 id="volume-mode">Volume Mode</h3><ul><li>Filesystem: Mounted into Pods into a directory.</li><li>Block: Presented as a raw block device without any filesystem on it (useful for the fastest possible access).</li></ul><h3 id="access-modes">Access Modes</h3><ul><li>ReadWriteOnce (RWO): Volume can be mounted as read-write by a single node.</li><li>ReadOnlyMany (ROX): Volume can be mounted as read-only by many nodes.</li><li>ReadWriteMany (RWX): Volume can be mounted as read-write by many nodes.</li><li>ReadWriteOncePod (RWOP, beta): Volume can be mounted as read-write by a single Pod.</li></ul><h3 id="storage-class">Storage Class</h3><ul><li>StorageClassName attribute specifies the class of a PV.</li><li>PVs with no storageClassName have no class.</li></ul><h3 id="reclaim-policy">Reclaim Policy</h3><ul><li>Retain: Manual reclamation.</li><li>Recycle: Basic scrub (rm -rf /thevolume/*).</li><li>Delete: Deletes associated storage asset.</li></ul><h3 
id="mount-options">Mount Options</h3><ul><li>Specify additional options for mounting a PV on a node.</li><li>Not all PV types support mount options.</li></ul><h3 id="node-affinity">Node Affinity</h3><ul><li>Define constraints to limit what nodes a volume can be accessed from.</li><li>Automatically populated for AWS EBS, GCE PD, and Azure Disk volume block types.</li></ul><h3 id="volume-phases">Volume Phases</h3><ul><li>Available: Free resource not yet bound to a claim.</li><li>Bound: Volume is bound to a claim.</li><li>Released: Claim has been deleted, resource not yet reclaimed by the cluster.</li><li>Failed: Volume has failed its automatic reclamation.</li></ul><p></p><h1 id="persistentvolumeclaims-pvc">PersistentVolumeClaims (PVC)</h1><h2 id="overview">Overview</h2><p>A PersistentVolumeClaim (PVC) specifies how much storage a user needs and how it should be accessed. It&apos;s a request for storage by a user.</p><h2 id="key-components">Key Components</h2><ul><li><strong>Spec and Status</strong>: Each PVC contains a spec and status, which outlines the specifications and status of the claim.</li><li><strong>Access Modes</strong>: Claims follow the same conventions as volumes when requesting storage.</li><li><strong>Volume Modes</strong>: Claims use the same conventions as volumes for consumption either as a filesystem or block device.</li><li><strong>Resources</strong>: Like Pods, claims can request specific quantities of a resource, in this case, storage.</li><li><strong>Selector</strong>: Claims can specify a label selector to filter the set of volumes. 
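</li></ul><p>Putting these components together, a claim might look like this (values illustrative):</p><pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  # only volumes carrying this label can be bound to the claim
  selector:
    matchLabels:
      release: "stable"</code></pre><ul><li>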
Only volumes whose labels match the selector can be bound to the claim.</li><li><strong>Class</strong>: A claim can request a particular class by specifying the name of a StorageClass using the attribute <code>storageClassName</code>.</li></ul><h2 id="selectors">Selectors</h2><p>Two types of fields in selectors:</p><ol><li><strong>matchLabels</strong>: The volume must have a label with this value.</li><li><strong>matchExpressions</strong>: A list of requirements made by specifying key, list of values, and operator that relates the key and values.</li></ol><h2 id="class">Class</h2><p>A claim can request a particular class by specifying the name of a StorageClass using the attribute <code>storageClassName</code>. PVCs don&apos;t necessarily have to request a class. The handling of <code>storageClassName</code> depends on whether the DefaultStorageClass admission plugin is turned on or off.</p><h2 id="retroactive-default-storageclass-assignment">Retroactive Default StorageClass Assignment</h2><p>If PVCs are created without a <code>storageClassName</code> while no default StorageClass exists, the control plane retroactively updates those PVCs to the new default StorageClass once one becomes available.</p><h2 id="claims-as-volumes">Claims as Volumes</h2><p>Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the Pod using the claim.</p><h2 id="namespaces">Namespaces</h2><p>Since PersistentVolumeClaims are namespaced objects, mounting claims with &quot;Many&quot; modes (ROX, RWX) is only possible within one namespace.</p><h2 id="raw-block-volume-support">Raw Block Volume Support</h2><p>Certain volume plugins support raw block volumes, including dynamic provisioning where applicable.</p><h2 id="binding-block-volumes">Binding Block Volumes</h2><p>When a user requests a raw block volume by using the <code>volumeMode</code> field in the PVC spec, the binding rules change. 
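</p><p>A sketch of a raw block request and a Pod consuming it (names and device path illustrative):</p><pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  # Block: presented as a raw device, no filesystem
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-block-volume
spec:
  containers:
    - name: app
      image: busybox:1.28
      command: ["sleep", "1000000"]
      # volumeDevices (not volumeMounts) exposes the raw device
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-pvc</code></pre><p>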
There&apos;s a matrix for possible combinations of requesting a raw block device.</p><h2 id="volume-snapshot-and-restore-volume-from-snapshot">Volume Snapshot and Restore Volume from Snapshot</h2><ul><li>Supported only by out-of-tree CSI volume plugins (in-tree volume plugins are deprecated).</li><li>To create a PersistentVolumeClaim (PVC) from a Volume Snapshot:</li></ul><pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-test
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi</code></pre><h2 id="volume-cloning">Volume Cloning</h2><ul><li>Only available for CSI volume plugins.</li><li>To create a PVC from an existing PVC:</li></ul><pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: my-csi-plugin
  dataSource:
    name: existing-src-pvc-name
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi</code></pre><h2 id="volume-populators-and-data-sources">Volume Populators and Data Sources</h2><ul><li>Kubernetes supports custom volume populators; the AnyVolumeDataSource feature gate must be enabled.</li><li>The dataSourceRef field can contain a reference to any object in the same namespace, except for core objects other than PVCs.</li><li>Use of dataSourceRef is preferred over dataSource for clusters with the feature gate enabled.</li></ul><h2 id="cross-namespace-data-sources">Cross Namespace Data Sources</h2><ul><li>Kubernetes supports cross-namespace volume data sources; the AnyVolumeDataSource and CrossNamespaceVolumeDataSource feature gates must be enabled.</li><li>Allows you to specify a namespace in the dataSourceRef field.</li><li>Requires a ReferenceGrant from the Gateway API to use this mechanism.</li></ul><h2 id="data-source-references">Data Source References</h2><ul><li>dataSourceRef and dataSource 
fields are almost identical and cannot be changed after creation.</li><li>The dataSource field ignores invalid values while the dataSourceRef field never does.</li><li>The dataSource field only allows PVCs and VolumeSnapshots, while the dataSourceRef field may contain different types of objects.</li><li>When the CrossNamespaceVolumeDataSource feature is enabled, the dataSourceRef field allows objects in any namespaces and does not sync with dataSource when a namespace is specified.</li></ul><h2 id="using-volume-populators">Using Volume Populators</h2><ul><li>Volume populators are controllers that create non-empty volumes determined by a Custom Resource.</li><li>A populated volume is created by referring to a Custom Resource using the dataSourceRef field.</li></ul><h2 id="using-cross-namespace-volume-data-sources">Using Cross-Namespace Volume Data Sources</h2><ul><li>Create a ReferenceGrant to allow the namespace owner to accept the reference.</li><li>Define a populated volume by specifying a cross-namespace volume data source using the dataSourceRef field.</li><li>Requires a valid ReferenceGrant in the source namespace.</li></ul><h2 id="writing-portable-configuration">Writing Portable Configuration</h2><ul><li>Include PVC objects in your configuration bundle.</li><li>Do not include PersistentVolume (PV) objects in the config.</li><li>Allow the user to provide a storage class name when instantiating the template.</li><li>If no storage class name is provided, leave the persistentVolumeClaim.storageClassName field as nil to automatically provision a PV with the default StorageClass.</li><li>Monitor PVCs not getting bound after some time to identify potential issues with dynamic storage support or the lack of a storage system.</li></ul>]]></content:encoded></item><item><title><![CDATA[K8s Summary - Containers]]></title><description><![CDATA[<p><strong>Containers</strong></p><ul><li>Containers are repeatable, which ensures the same behavior wherever they&apos;re run due 
to standardization and included dependencies.</li><li>Containers decouple applications from the underlying host infrastructure, making deployment easier across different cloud or OS environments.</li><li>Containers form the Pods assigned to a node in a Kubernetes cluster, and</li></ul>]]></description><link>http://jumpstartk8s.com/k8s-summary-container/</link><guid isPermaLink="false">645fed4f5280d5050279b9bb</guid><category><![CDATA[Summary]]></category><dc:creator><![CDATA[Jude Naveen Raj Ilango]]></dc:creator><pubDate>Sat, 13 May 2023 20:06:53 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1494961104209-3c223057bd26?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fENvbnRhaW5lcnxlbnwwfHx8fDE2ODQwMDgzOTh8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1494961104209-3c223057bd26?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fENvbnRhaW5lcnxlbnwwfHx8fDE2ODQwMDgzOTh8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="K8s Summary - Containers"><p><strong>Containers</strong></p><ul><li>Containers are repeatable, which ensures the same behavior wherever they&apos;re run due to standardization and included dependencies.</li><li>Containers decouple applications from the underlying host infrastructure, making deployment easier across different cloud or OS environments.</li><li>Containers form the Pods assigned to a node in a Kubernetes cluster, and they are co-located and co-scheduled to run on the same node.</li></ul><p><strong>Container Images</strong></p><ul><li>A container image is a ready-to-run software package. It includes everything needed to run an application: the code, any required runtime, application and system libraries, and default values for any essential settings.</li><li>Containers are intended to be stateless and immutable. 
If you need to make changes, the correct process is to build a new image that includes the change, then recreate the container to start from the updated image.</li></ul><p><strong>Container Runtimes</strong></p><ul><li>The container runtime is the software responsible for running containers.</li><li>Kubernetes supports several container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).</li><li>Usually, the default container runtime for a Pod can be chosen by the cluster. However, if there&apos;s a need to use more than one container runtime in a cluster, the RuntimeClass for a Pod can be specified.</li><li>RuntimeClass can also be used to run different Pods with the same container runtime but with different settings</li></ul><p></p><p><strong>Images</strong></p><ul><li>A container image encapsulates an application and all its software dependencies. It is an executable software bundle that can run standalone.</li><li>Container images are typically created and pushed to a registry before being referred to in a Pod.</li><li>Container images have names which can include a registry hostname and possibly a port number. If no registry hostname is specified, Kubernetes assumes the Docker public registry.</li><li>Image names can be followed by a tag to identify different versions of the same series of images. If no tag is specified, Kubernetes assumes the &quot;latest&quot; tag.</li></ul><p><strong>Updating Images</strong></p><ul><li>By default, when you create a Deployment, StatefulSet, Pod, or other object with a Pod template, the pull policy of all containers in that pod will be set to &quot;IfNotPresent&quot; if not specified. 
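</li></ul><p>The default can also be overridden by setting the policy explicitly per container (image name illustrative):</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: registry.example.com/app:v1.2.3
      # pull only when the image is not already present on the node
      imagePullPolicy: IfNotPresent</code></pre><ul><li>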
This policy avoids pulling an image if it already exists locally.</li></ul><p><strong>Image Pull Policy</strong></p><ul><li>The &quot;imagePullPolicy&quot; and the image tag affect when the kubelet attempts to pull the specified image.</li><li>&quot;IfNotPresent&quot;: the image is pulled only if it is not already present locally.</li><li>&quot;Always&quot;: the kubelet queries the container image registry to resolve the name to an image digest each time it launches a container. If the image is cached locally, it is used; otherwise, it is pulled.</li><li>&quot;Never&quot;: the kubelet does not try fetching the image. It attempts to start the container if the image is already present locally; otherwise, startup fails.</li></ul><p><strong>Notes</strong></p><ul><li>Avoid using the &quot;latest&quot; tag in production as it makes it harder to track versions and roll back.</li><li>To ensure the Pod always uses the same version of a container image, specify the image&apos;s digest.</li><li>If you specify an image by its digest, Kubernetes runs the same code every time it starts a container with that image name and digest, avoiding potential issues with registry changes.</li><li>There are third-party admission controllers that mutate Pods to ensure the workload runs based on an image digest rather than a tag, offering more control over the code that is run.</li></ul><p><strong>Default image pull policy:</strong></p><ul><li>If <code>imagePullPolicy</code> is not specified and the image tag is <code>:latest</code>, or there&apos;s no tag, <code>imagePullPolicy</code> is set to <code>Always</code>.</li><li>If <code>imagePullPolicy</code> is not specified and the image tag is not <code>:latest</code>, <code>imagePullPolicy</code> is set to <code>IfNotPresent</code>.</li><li>The <code>imagePullPolicy</code> is set when the object is first created and is not updated if the image&apos;s tag later changes.</li></ul><p><strong>Required image pull:</strong></p><ul><li>To always 
force a pull, set <code>imagePullPolicy</code> to <code>Always</code>, or omit it and use <code>:latest</code> as the image tag. You can also enable the <code>AlwaysPullImages</code> admission controller.</li></ul><p><strong>ImagePullBackOff:</strong></p><ul><li>If Kubernetes cannot pull a container image, the container might be in <code>ImagePullBackOff</code> state, meaning Kubernetes will retry pulling the image with an increasing delay, up to a maximum of 300 seconds.</li></ul><p><strong>Serial and parallel image pulls:</strong></p><ul><li>By default, kubelet pulls images serially. You can enable parallel image pulls by setting <code>serializeImagePulls</code> to <code>false</code> in kubelet configuration.</li><li>Kubelet never pulls multiple images in parallel for one Pod, but can pull images in parallel for different Pods.</li></ul><p><strong>Maximum parallel image pulls:</strong></p><ul><li>If <code>serializeImagePulls</code> is <code>false</code>, there is no limit on the number of images being pulled at the same time.</li><li>You can set <code>maxParallelImagePulls</code> in kubelet configuration to limit the number of parallel image pulls.</li></ul><p><strong>Multi-architecture images with image indexes:</strong></p><ul><li>An image index in a container registry can point to multiple image manifests for architecture-specific versions of a container.</li></ul><p><strong>Using a private registry:</strong></p><ul><li>Kubernetes supports specifying registry keys on a Pod via <code>imagePullSecrets</code>.</li><li>Private registries may require keys for reading images, which can be provided in several ways:</li><li>Configuring nodes to authenticate to a private registry.</li><li>Using Kubelet Credential Provider.</li><li>Pre-pulling images.</li><li>Specifying <code>imagePullSecrets</code> on a Pod.</li><li>Configuring nodes to authenticate to a private registry is dependent on the container runtime and registry.</li><li>The Kubelet Credential Provider 
dynamically fetches registry credentials for a container image.</li><li>Pre-pulled images can be used as an alternative to authenticating to a private registry.</li><li><code>imagePullSecrets</code> can be added to a Pod definition or to a ServiceAccount resource.</li></ul><p><strong>Use cases:</strong></p><ul><li>For clusters running only open-source images, no configuration is required.</li><li>For clusters running proprietary images visible to all users, use a private registry, possibly with an admission controller active.</li><li>For multi-tenant clusters where each tenant needs their own private registry, run a private registry with authorization, generate a registry credential for each tenant, and populate the secret to each tenant namespace.</li></ul><p><strong>Interpretation of config.json</strong></p><ul><li>Kubernetes and Docker interpret <code>config.json</code> differently. In Docker, the <code>auths</code> keys only specify root URLs, but Kubernetes allows glob URLs and prefix-matched paths.</li><li>The root URL is matched using specific syntax patterns.</li><li>Multiple entries in <code>config.json</code> are possible, and Kubernetes performs image pulls sequentially for every found credential.</li></ul><p><strong>Pre-pulled Images</strong></p><ul><li>Kubelet tries to pull each image from the specified registry by default. 
However, if <code>imagePullPolicy</code> is set to <code>IfNotPresent</code> or <code>Never</code>, a local image is used.</li><li>If you want to rely on pre-pulled images instead of registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.</li><li>This method requires control over node configuration and is not reliable if your cloud provider manages nodes.</li><li>All pods will have read access to any pre-pulled images.</li></ul><p><strong>Specifying imagePullSecrets on a Pod</strong></p><ul><li>Kubernetes supports specifying container image registry keys on a Pod using <code>imagePullSecrets</code>.</li><li>The referenced Secrets must be of type <code>kubernetes.io/dockercfg</code> or <code>kubernetes.io/dockerconfigjson</code>.</li><li>You can create a Secret with a Docker config using the command <code>kubectl create secret docker-registry &lt;name&gt; ...</code></li><li>Pods can only reference image pull secrets in their own namespace, so the process needs to be done once per namespace.</li><li>Setting of this field can be automated by setting the <code>imagePullSecrets</code> in a ServiceAccount resource.</li></ul><p><strong>Use Cases and Solutions</strong></p><ol><li>Cluster running only non-proprietary images: Use public images from a public registry. No configuration required.</li><li>Cluster running some proprietary images: Use a hosted private registry or run an internal private registry behind your firewall with open read access.</li><li>Cluster with proprietary images that require stricter access control: Ensure <code>AlwaysPullImages</code> admission controller is active. Move sensitive data into a &quot;Secret&quot; resource.</li><li>Multi-tenant cluster where each tenant needs own private registry: Run a private registry with authorization required. 
Generate registry credential for each tenant, put into secret, and populate secret to each tenant namespace.</li></ol><p><strong>Container Environment</strong></p><p>The container environment in Kubernetes provides several essential resources to containers:</p><ul><li>A filesystem, a combination of an image and one or more volumes.</li><li>Information about the container itself.</li><li>Information about other objects in the cluster.</li></ul><p><strong>Container Information</strong></p><ul><li>The container&apos;s hostname is the name of the Pod where the container is running. It can be accessed using the <code>hostname</code> command or the <code>gethostname</code> function in libc.</li><li>The Pod&apos;s name and namespace are accessible as environment variables through the downward API.</li><li>User-defined environment variables from the Pod definition are available to the container, along with any environment variables specified statically in the container image.</li></ul><p><strong>Cluster Information</strong></p><ul><li>A list of all services running when a container was created is available to the container as environment variables. This list only includes services in the same namespace as the container&apos;s Pod and Kubernetes control plane services.</li><li>For a service named <code>foo</code> mapping to a container named <code>bar</code>, the variables <code>FOO_SERVICE_HOST</code> (the host where the service runs) and <code>FOO_SERVICE_PORT</code> (the port where the service runs) are defined.</li><li>Services have dedicated IP addresses and can be accessed by the container via DNS, provided the DNS addon is enabled.</li></ul><p><strong>Runtime Class</strong></p><p>RuntimeClass is a Kubernetes feature that allows the selection of the container runtime configuration. It is used to run a Pod&apos;s containers.</p><p><strong>Motivation</strong></p><ul><li>RuntimeClass can be set differently between Pods to balance performance and security. 
For instance, Pods requiring high security may use a container runtime that uses hardware virtualization.</li><li>RuntimeClass can also be used to run different Pods with the same container runtime but with different settings.</li></ul><p><strong>Setup</strong></p><ol><li>Configure the Container Runtime Interface (CRI) implementation on nodes. Note that RuntimeClass assumes a homogeneous node configuration across the cluster by default.</li><li>Create the corresponding RuntimeClass resources. Each configuration set up in step 1 has an associated handler name, which identifies the configuration.</li></ol><p>A RuntimeClass resource consists of two significant fields:</p><ul><li>The RuntimeClass name (metadata.name)</li><li>The handler (handler)</li></ul><pre><code class="language-yaml">apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass 
handler: myconfiguration </code></pre><p><strong>Usage</strong></p><p>A runtimeClassName can be specified in the Pod spec to use it. If no runtimeClassName is specified, the default RuntimeHandler will be used.</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass</code></pre><p><strong>CRI Configuration</strong></p><p>Runtime handlers can be configured through the CRI implementation&apos;s configuration.</p><p><strong>Scheduling</strong></p><p>By specifying the scheduling field for a RuntimeClass, constraints can be set to ensure that Pods running with this RuntimeClass are scheduled to nodes that support it.</p><p>The supported nodes should have a common label selected by the runtimeclass.scheduling.nodeSelector field.</p><p>If the supported nodes are tainted to prevent other RuntimeClass pods from running on the node, tolerations can be added to the RuntimeClass.</p><p><strong>Pod Overhead</strong></p><p>Pod overhead is defined in RuntimeClass through the overhead field. Overhead resources associated with running a Pod can be declared, allowing the cluster to account for it when making decisions about Pods and resources.</p><p></p><p><strong>Container Lifecycle Hooks</strong></p><p>Lifecycle hooks allow Containers to be aware of events in their management lifecycle and run code when the corresponding lifecycle hook is executed.</p><p><strong>Types of Hooks</strong></p><ul><li><em>PostStart:</em> Executed immediately after a container is created. There&apos;s no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.</li><li><em>PreStop:</em> Called immediately before a container is terminated due to an API request or management event. The hook must complete before the TERM signal to stop the container can be sent.</li></ul><p><strong>Hook Handler Implementations</strong></p><p>Containers can implement and register a handler for a hook. 
There are two types of hook handlers:</p><ul><li><em>Exec:</em> Executes a specific command inside the cgroups and namespaces of the Container.</li><li><em>HTTP:</em> Executes an HTTP request against a specific endpoint on the Container.</li></ul><p><strong>Hook Handler Execution</strong></p><p>Hook handler calls are synchronous within the context of the Pod containing the Container.</p><ul><li>For a PostStart hook, the Container ENTRYPOINT and hook fire asynchronously.</li><li>PreStop hooks must complete their execution before the TERM signal can be sent.</li></ul><p>If a hook fails, it kills the Container. Therefore, hook handlers should be as lightweight as possible.</p><p><strong>Hook Delivery Guarantees</strong></p><p>Hook delivery is intended to be at least once, meaning a hook may be called multiple times for any given event. It&apos;s up to the hook implementation to handle this correctly.</p><p><strong>Debugging Hook Handlers</strong></p><p>The logs for a Hook handler are not exposed in Pod events. If a handler fails, it broadcasts an event. For PostStart, this is the FailedPostStartHook event, and for PreStop, this is the FailedPreStopHook event.</p>]]></content:encoded></item><item><title><![CDATA[K8s Summary - Nodes]]></title><description><![CDATA[<h2 id="communication-between-nodes-and-the-control-plane">Communication between Nodes and the Control Plane</h2><ul><li>Kubernetes follows a &quot;hub-and-spoke&quot; API pattern where all API usage from nodes (or the pods they run) terminates at the API server. 
The API server is designed to listen for remote connections on a secure HTTPS port (typically 443) with client</li></ul>]]></description><link>http://jumpstartk8s.com/kubernetes-docs-summary/</link><guid isPermaLink="false">645e51f25280d5050279b977</guid><category><![CDATA[Summary]]></category><dc:creator><![CDATA[Jude Naveen Raj Ilango]]></dc:creator><pubDate>Fri, 12 May 2023 15:09:42 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1484557052118-f32bd25b45b5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIxfHxzZXJ2ZXJ8ZW58MHx8fHwxNjg0MDA4NDM2fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="communication-between-nodes-and-the-control-plane">Communication between Nodes and the Control Plane</h2><ul><li>Kubernetes follows a &quot;hub-and-spoke&quot; API pattern where all API usage from nodes (or the pods they run) terminates at the API server. The API server is designed to listen for remote connections on a secure HTTPS port (typically 443) with client authentication and authorization enabled.</li><li>Nodes must have the cluster&apos;s public root certificate, along with valid client credentials, to connect securely to the API server.</li><li>Pods can connect securely to the API server using a service account, which allows Kubernetes to inject the public root certificate and a valid bearer token into the pod when it is instantiated.</li><li>Control plane components also communicate with the API server over the secure port, making the connections from the nodes and the pods running on them to the control plane secure by default.</li></ul><img src="https://images.unsplash.com/photo-1484557052118-f32bd25b45b5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIxfHxzZXJ2ZXJ8ZW58MHx8fHwxNjg0MDA4NDM2fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="K8s Summary - Nodes"><p><strong>Control Plane to Node Communication</strong></p><p>There 
are two primary communication paths from the control plane to the nodes:</p><ol><li>From the API server to the kubelet process, which runs on each node.</li><li>From the API server to any node, pod, or service through the API server&apos;s proxy functionality.</li></ol><ul><li>The connections from the API server to the kubelet are used for fetching logs for pods, attaching to running pods, and providing the kubelet&apos;s port-forwarding functionality.</li><li>These connections terminate at the kubelet&apos;s HTTPS endpoint, but the API server does not verify the kubelet&apos;s serving certificate by default, which could make the connection subject to man-in-the-middle attacks if run over untrusted/public networks.</li><li>To secure this, use the <code>--kubelet-certificate-authority</code> flag to provide the API server with a root certificate bundle to verify the kubelet&apos;s serving certificate. Alternatively, use SSH tunneling between the API server and kubelet to avoid connecting over an untrusted or public network.</li><li>The connections from the API server to a node, pod, or service default to plain HTTP and are neither authenticated nor encrypted. They can run over a secure HTTPS connection by prefixing <code>https:</code> to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials.</li></ul><p><strong>SSH Tunnels</strong></p><ul><li>Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. However, SSH tunnels are currently deprecated.</li></ul><p><strong>Konnectivity Service</strong></p><ul><li>Introduced in Kubernetes v1.18 (beta), the Konnectivity service provides a TCP-level proxy for control plane to cluster communication, replacing SSH tunnels. The service comprises the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. 
The Konnectivity agents initiate and maintain connections to the Konnectivity server, and all control plane to nodes traffic goes through these connections after enabling the Konnectivity service.</li></ul><p></p><p><strong>Controllers</strong></p><ul><li>Controllers in Kubernetes are control loops that observe the state of the cluster and make necessary changes to bring the current state closer to the desired state.</li></ul><p><strong>Controller Pattern</strong></p><ul><li>A controller tracks at least one Kubernetes resource type. The spec field in these objects represents the desired state.</li><li>Controllers make the current state come closer to the desired state. They can either perform the action themselves or send messages to the API server that create useful side effects.</li></ul><p><strong>Control via API Server</strong></p><ul><li>Built-in controllers, like the Job controller, manage state by interacting with the cluster API server.</li><li>Job is a resource that runs a Pod or several Pods to carry out a task and then stop.</li><li>When the Job controller sees a new task, it ensures that the right number of Pods are running to get the work done. It doesn&apos;t run any Pods or containers itself but tells the API server to create or remove Pods.</li><li>Once a Job is done, the Job controller updates that Job object to mark it Finished.</li></ul><p><strong>Direct Control</strong></p><ul><li>Some controllers make changes to things outside the cluster. For example, a controller that ensures there are enough Nodes in your cluster needs to set up new Nodes when needed.</li><li>Such controllers get their desired state from the API server and then communicate directly with an external system to align the current state.</li></ul><p><strong>Desired versus Current State</strong></p><ul><li>Kubernetes can handle constant change. 
It doesn&apos;t matter if the overall state is stable or not as long as the controllers for your cluster are running and able to make useful changes.</li></ul><p><strong>Design</strong></p><ul><li>Kubernetes uses many controllers, each managing a particular aspect of cluster state.</li><li>It&apos;s better to have simple controllers rather than a monolithic set of interlinked control loops. Controllers can fail, and Kubernetes is designed to handle that.</li><li>There can be several controllers that create or update the same kind of object. Kubernetes controllers make sure they only pay attention to the resources linked to their controlling resource.</li></ul><p><strong>Ways of Running Controllers</strong></p><ul><li>Kubernetes comes with a set of built-in controllers that run inside the kube-controller-manager.</li><li>You can also find controllers that run outside the control plane to extend Kubernetes.</li><li>You can run your own controller as a set of Pods, or externally to Kubernetes, depending on what that particular controller does.</li></ul><h2 id="leases">Leases</h2><ul><li>Kubernetes uses Lease objects in the coordination.k8s.io API Group to lock shared resources and coordinate activity in distributed systems.</li><li><strong>Node heartbeats</strong>: Each Node has a matching Lease object in the kube-node-lease namespace. The kubelet updates the spec.renewTime field in this Lease object to communicate its heartbeat to the Kubernetes API server.</li><li><strong>Leader election</strong>: Control plane components like kube-controller-manager and kube-scheduler use Leases to ensure that only one instance of a component is running at a time in HA configurations.</li><li><strong>API server identity</strong>: From Kubernetes v1.26, each kube-apiserver uses Leases to publish its identity. 
This allows clients to discover how many instances of kube-apiserver are operating the control plane.</li><li>API server identity leases can be inspected in the kube-system namespace using kubectl.</li><li>API server identity leases are named using a SHA256 hash based on the OS hostname as seen by that API server. Each kube-apiserver should be configured to use a unique hostname within the cluster.</li><li>API server identity leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.</li><li>You can disable API server identity leases by disabling the APIServerIdentity feature gate.</li><li>Custom workloads can define their own use of Leases, for example, to elect a leader in a custom controller. The Lease name should be obviously linked to the product or component.</li><li>Care should be taken to avoid name collisions for Leases when multiple instances of a component could be deployed.</li></ul><h2 id="cloud-controller-manager">Cloud Controller Manager</h2><ul><li>The cloud-controller-manager is a Kubernetes control plane component that adds cloud-specific control logic. It allows you to link your cluster into your cloud provider&apos;s API, and separates components that interact with the cloud platform from those that only interact with the cluster.</li><li>By decoupling the interoperability logic between Kubernetes and the underlying cloud infrastructure, CCM enables cloud providers to release features independently of the main Kubernetes project.</li><li>CCM is structured using a plugin mechanism that allows different cloud providers to integrate their platforms with Kubernetes.</li><li>The cloud controller manager runs as a replicated set of processes in the control plane (usually containers in Pods). 
It can also be run as a Kubernetes addon.</li></ul><p>CCM includes the following controllers:</p><ul><li><strong>Node Controller</strong>: Updates Node objects when new servers are created in your cloud infrastructure. It annotates and labels the Node object with cloud-specific information, obtains the node&apos;s hostname and network addresses, and verifies the node&apos;s health.</li><li><strong>Route Controller</strong>: Configures routes in the cloud to ensure that containers on different nodes in the Kubernetes cluster can communicate with each other. It might also allocate blocks of IP addresses for the Pod network.</li><li><strong>Service Controller</strong>: Interacts with your cloud provider&apos;s APIs to set up load balancers and other infrastructure components when you declare a Service resource that requires them.</li></ul><p>The access required by CCM on various API objects includes:</p><ul><li><strong>Node Controller</strong>: Full access to read and modify Node objects.</li><li><strong>Route Controller</strong>: Get access to Node objects.</li><li><strong>Service Controller</strong>: List and watch access to Services, and patch and update access to update Services. It requires create, list, get, watch, and update access to set up Endpoints resources for the Services.</li><li><strong>Others</strong>: CCM requires access to create Event objects and ServiceAccounts for secure operation.</li></ul><p></p><h2 id="cgroup-v2">cgroup v2</h2><p>Here&apos;s a summarized version of the information about cgroups v2 in Linux and its relation to Kubernetes:</p><ul><li><strong>What are cgroups?</strong> Control groups (cgroups) are a Linux feature that limits and allocates resources&#x2014; such as CPU time, system memory, network bandwidth, or combinations of these resources&#x2014;among user-defined groups of processes.</li></ul><p><strong>cgroup v2</strong>: cgroup v2 is the new generation of the cgroup API. 
It is a unified control system with enhanced resource management capabilities. cgroup v2 offers several improvements over cgroup v1:</p><ul><li>Unified hierarchy design in API</li><li>Safer sub-tree delegation to containers</li><li>New features like Pressure Stall Information</li><li>Enhanced resource allocation management and isolation</li><li>Unified accounting for different types of memory allocations</li><li>Accounting for non-immediate resource changes</li><li><strong>Kubernetes and cgroup v2</strong>: Kubernetes v1.25 and onwards use cgroup v2 for enhanced resource management and isolation. Some features, like MemoryQoS, rely exclusively on cgroup v2 primitives.</li></ul><p><strong>Using cgroup v2</strong>: It&apos;s recommended to use a Linux distribution that enables and uses cgroup v2 by default. To use cgroup v2, you need:</p><ul><li>An OS distribution that enables cgroup v2</li><li>Linux Kernel version 5.8 or later</li><li>A container runtime that supports cgroup v2 (e.g., containerd v1.4+ or cri-o v1.20+)</li><li>The kubelet and the container runtime configured to use the systemd cgroup driver</li></ul><p><strong>Linux Distribution cgroup v2 support</strong>: Here are some Linux distributions that support cgroup v2:</p><ul><li>Container Optimized OS (since M97)</li><li>Ubuntu (since 21.10, 22.04+ recommended)</li><li>Debian GNU/Linux (since Debian 11 bullseye)</li><li>Fedora (since 31)</li><li>Arch Linux (since April 2021)</li><li>RHEL and RHEL-like distributions (since 9)</li><li><strong>Migrating to cgroup v2</strong>: To migrate to cgroup v2, ensure that you meet the requirements, then upgrade to a kernel version that enables cgroup v2 by default. The kubelet will automatically detect cgroup v2 and no additional configuration is required.</li><li><strong>Identifying the cgroup version</strong>: To check which cgroup version your distribution uses, run the <code>stat -fc %T /sys/fs/cgroup/</code> command on the node. 
The output is <code>cgroup2fs</code> for cgroup v2, and <code>tmpfs</code> for cgroup v1.</li><li><strong>Compatibility</strong>: cgroup v2 uses a different API than cgroup v1, so applications that directly access the cgroup file system need to be updated to support cgroup v2. This includes third-party monitoring and security agents, standalone cAdvisor, and Java applications.</li></ul><h2 id="container-runtime-interface-cri">Container Runtime Interface (CRI)</h2><ul><li><strong>Container Runtime Interface (CRI)</strong>: CRI is a plugin interface that allows the kubelet to use a variety of container runtimes without needing to recompile the cluster components. Every Node in the cluster requires a working container runtime for the kubelet to launch Pods and their containers.</li><li><strong>Communication</strong>: CRI is the main protocol for communication between the kubelet and the Container Runtime. This communication is defined by a gRPC protocol.</li><li><strong>The API</strong>: As of Kubernetes v1.23 (stable), the kubelet acts as a client when connecting to the container runtime via gRPC. The runtime and image service endpoints must be available in the container runtime, and can be configured separately within the kubelet using the <code>--image-service-endpoint</code> and <code>--container-runtime-endpoint</code> command-line flags.</li><li><strong>CRI versions</strong>: For Kubernetes v1.27, the kubelet prefers to use CRI v1. If a container runtime does not support v1 of the CRI, the kubelet tries to negotiate any older supported version. CRI v1alpha2 is supported but considered deprecated. If the kubelet cannot negotiate a supported CRI version, it does not register as a node.</li><li><strong>Upgrading</strong>: When upgrading Kubernetes, the kubelet attempts to automatically select the latest CRI version upon restart. If this fails, fallback occurs as mentioned above. 
If a gRPC re-dial is required due to an upgrade of the container runtime, the runtime must also support the initially selected version or the re-dial is expected to fail, necessitating a kubelet restart.</li></ul><h2 id="k8s-garbage-collection">K8s Garbage Collection</h2><p><strong>Garbage Collection</strong>: Kubernetes uses various mechanisms to clean up cluster resources such as terminated pods, completed jobs, objects without owner references, unused containers and images, dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete, stale or expired CertificateSigningRequests (CSRs), deleted nodes, and Node Lease objects.</p><ul><li><strong>Owners and Dependents</strong>: Many Kubernetes objects are linked through owner references. Owner references tell the control plane which objects are dependent on others. Kubernetes uses owner references to ensure related resources are cleaned up before deleting an object. Cross-namespace owner references are disallowed by design.</li></ul><p><strong>Cascading Deletion</strong>: When an object is deleted, Kubernetes can automatically delete the object&apos;s dependents, known as cascading deletion. Two types of cascading deletion exist:</p><ul><li><strong>Foreground Cascading Deletion</strong>: The owner object first enters a deletion in progress state, then dependents are deleted, and finally, the owner object is deleted.</li><li><strong>Background Cascading Deletion</strong>: The owner object is deleted immediately, and the dependents are cleaned up in the background.</li><li><strong>Orphaned Dependents</strong>: When an owner object is deleted, the dependents left behind are called orphan objects. By default, Kubernetes deletes dependent objects.</li><li><strong>Garbage Collection of Unused Containers and Images</strong>: The kubelet performs garbage collection on unused images every five minutes and on unused containers every minute. 
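</li></ul><p>As a sketch, the kubelet&apos;s image garbage collection thresholds can be tuned in its configuration file; the values below are illustrative, not recommendations:</p><pre><code class="language-yaml">apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageMinimumGCAge: 2m           # minimum age before an unused image may be collected
imageGCHighThresholdPercent: 85 # disk usage that triggers image garbage collection
imageGCLowThresholdPercent: 80  # disk usage the collector tries to free down to</code></pre><ul><li>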
Configuration options for this garbage collection can be set using the KubeletConfiguration resource type.</li><li><strong>Container Image Lifecycle</strong>: Kubernetes manages the lifecycle of all images through its image manager, part of the kubelet, with the cooperation of cAdvisor. Disk usage limits guide garbage collection decisions.</li><li><strong>Container Garbage Collection</strong>: The kubelet garbage collects unused containers based on certain variables like MinAge, MaxPerPodContainer, and MaxContainers.</li><li><strong>Configuring Garbage Collection</strong>: Garbage collection of resources can be configured by tuning options specific to the controllers managing those resources.</li></ul><p>To delve deeper into garbage collection in Kubernetes, you can explore topics such as configuring cascading deletion of Kubernetes objects, configuring cleanup of finished Jobs, learning more about ownership of Kubernetes objects, Kubernetes finalizers, and the TTL controller that cleans up finished Jobs.</p>]]></content:encoded></item><item><title><![CDATA[Let's talk about Containers]]></title><description><![CDATA[<p>Welcome to the first lesson! Before we dive straight into Kubernetes, I would like to talk to you about Containers and why they have become central to our deployment paradigms in recent years. It is essential that we understand what containers are in order to see the true value of</p>]]></description><link>http://jumpstartk8s.com/lets-talk-about-containers/</link><guid isPermaLink="false">63f2953e07f5f917a537ba4c</guid><category><![CDATA[Photo Storage App]]></category><dc:creator><![CDATA[Jude Naveen Raj Ilango]]></dc:creator><pubDate>Wed, 22 Feb 2023 03:33:49 GMT</pubDate><content:encoded><![CDATA[<p>Welcome to the first lesson! Before we dive straight into Kubernetes, I would like to talk to you about Containers and why they have become central to our deployment paradigms in recent years. 
It is essential that we understand what containers are in order to see the true value of the Kubernetes ecosystem.</p><h2 id="what-are-containers">What are containers?</h2><blockquote><strong><u>A Container</u></strong> is <em>a unit of software that packages code and its dependencies, such that it can be executed reliably across different computing environments.</em></blockquote><p>In other words, a container is an executable package that includes everything an application needs to run, such as code, necessary frameworks, libraries, packages, and system tools. Essentially, containers help us create reproducible environments for an application. This is particularly useful for deploying applications in a distributed system with many nodes, where dependencies need to be consistent across different machines.</p><h3 id="why-use-containers">Why use containers?</h3><h5 id="scalability-and-parallelization">Scalability and Parallelization:</h5><p>If the code is written in an isolated/decoupled manner, containers can help us scale the application to manage heavy loads and perform operations in parallel across a large number of compute instances. This means we can more easily handle spikes in traffic without having to manually provision additional servers or resources.</p><h5 id="portability">Portability</h5><p>Containers follow standard formats for packaging an app and its dependencies, which makes it possible to move an application from one environment to another, such as from a developer&apos;s laptop to a staging environment to a production environment in a cloud provider. This enables faster development and deployment cycles, and predictable production deployment behavior.</p><h5 id="isolation-and-security">Isolation and Security:</h5><p>Containers help run applications in their own self-contained environments. 
This level of isolation allows multiple apps with their own dependencies to run on the same hardware without conflicts and limits the blast radius of security vulnerabilities. Beyond that, containers can be made to adhere to the principle of least privilege, subject to access control and network restrictions. This means that even if a container is compromised, it cannot affect the host system or other containers running on the same machine.</p><h5 id="efficiency-of-resources">Efficiency of Resources</h5><p>Containers usually define the amount of resources they require in order to run. This (combined with isolation guarantees) makes it simple to run containers of varying sizes and to optimize costs by running multiple containers on the same machine.</p><h3 id="how-do-containers-work">How do containers work?</h3><p>Containers run on a host OS and share the host OS&apos;s kernel. &quot;But you told us there was isolation in containers, and now you say they share the kernel with the host OS?!?!&quot; You&apos;re right, a container does share the kernel, but it uses a few kernel tricks to achieve the isolation.</p><p>First, it uses cgroups. Cgroups allow the system resources provisioned for a container to be allocated and limited. This includes resources such as CPU, memory, and disk I/O. Cgroups ensure that a container cannot use more resources than it is allowed and let resource allocation be managed at the container level.</p><p>Second, it uses namespaces. Namespaces give the container a separate view of the system&apos;s file system resources and network interfaces. Each container has its own file system, network interfaces, and hostname, which are separate from the host system. 
This ensures that different containers do not interfere with each other and operate as though they are on separate machines.</p><p>I&apos;m sure there are more tricks but knowing this should be enough for now.</p><h3 id="how-are-containers-run">How are containers run?</h3><p>Now for a container to run an application, it first needs to be packaged. For that, we introduce the concept of <em>container images</em>.</p><blockquote><em><strong>A Container Image</strong> is a single unit of packaged code, which includes both your applications and all of its dependencies, that can be stored, distributed and run via <strong>container runtimes.</strong></em></blockquote><p>What is a <strong>container runtime?</strong></p><p>Once you have created a container image, you need a container runtime to execute the container image.</p><blockquote><em><strong>A Container Runtime </strong>creates a container instance from a container image, and provides the necessary runtime assistance for the container, such as provisioning a read-write layer for the container to use during runtime.</em></blockquote><p>The two most popular container runtimes are <em>Docker</em> and <em>containerd</em>. </p><p><strong><em>Docker</em></strong> is a high-level container runtime that provides a simple and easy-to-use interface for building, managing, and running containers. </p><p><strong><em>Containerd</em></strong> is a low-level container runtime that provides a more modular and extensible architecture for managing containers.</p><p>Regardless of which runtime you choose, the runtime alone is not enough to manage a large number of containers. There needs to be a component that can help manage and coordinate these containers, so they can play well with each other. This is where Kubernetes comes in, and we&apos;ll take a look at that in the next chapter.</p><h2 id="conclusion">Conclusion</h2><p>Containers have become an essential component of modern-day software development. 
If you were previously intimidated by containers, I hope that this course helps you build an appreciation for their value. In the long term, containers can enhance your productivity and the quality of the software you build. </p><p>With their ability to provide an isolated and portable application environment, containers offer numerous benefits including efficient resource utilization, consistent deployment, and ease of management. By embracing containers, you can stay ahead of the curve and take advantage of the latest technologies to improve your development workflow.</p><p>Excited to be on this journey with you!</p><h2></h2>]]></content:encoded></item><item><title><![CDATA[K8s Services - Photo Upload Service]]></title><description><![CDATA[<p>Kubernetes is a powerful container orchestration tool that simplifies the deployment and management of containerized applications at scale. In a Kubernetes cluster, a service is an abstraction layer that exposes a set of pods to the network. In this post, we&apos;ll dive deeper into Kubernetes services and their</p>]]></description><link>http://jumpstartk8s.com/k8s-services-photo-upload-service/</link><guid isPermaLink="false">63f1826307f5f917a537b9ef</guid><category><![CDATA[Photo Storage App]]></category><category><![CDATA[Kubernetes Objects]]></category><dc:creator><![CDATA[Jude Naveen Raj Ilango]]></dc:creator><pubDate>Sun, 19 Feb 2023 01:59:48 GMT</pubDate><content:encoded><![CDATA[<p>Kubernetes is a powerful container orchestration tool that simplifies the deployment and management of containerized applications at scale. In a Kubernetes cluster, a service is an abstraction layer that exposes a set of pods to the network. In this post, we&apos;ll dive deeper into Kubernetes services and their various types, features, and use cases.</p><h2 id="what-is-a-kubernetes-service">What is a Kubernetes Service?</h2><p>A Kubernetes service is a logical abstraction layer that groups a set of pods and exposes them to the network. 
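</p><p>For example, a minimal ClusterIP Service that targets pods by label might look like this (the name, label, and ports are illustrative):</p><pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: photo-upload-svc   # illustrative name
spec:
  selector:
    app: photo-upload      # must match the labels on the target pods
  ports:
  - port: 80               # port exposed by the service
    targetPort: 8080       # port the containers listen on</code></pre><p>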
Services provide a stable IP address and DNS name for the pods they target, regardless of their location or health status. This enables other applications inside or outside the cluster to access the pods via the service without knowing their specific IP addresses.</p><p>Services are defined using Kubernetes manifest files in YAML or JSON format. Each service has a unique name, a set of labels that match the labels on the targeted pods, and a service type that determines how the service is exposed to the network. Services can also have optional annotations, which provide additional metadata for the service.</p><h2 id="types-of-kubernetes-services">Types of Kubernetes Services</h2><p>Kubernetes supports four types of services, each with different networking characteristics and use cases:</p><ol><li><strong>ClusterIP:</strong> The default type of service, which exposes the service on a cluster-internal IP address. This type is suitable for accessing the service within the cluster but not from outside the cluster.</li><li><strong>NodePort:</strong> Exposes the service on a static port on each worker node&apos;s IP address, which makes it accessible from outside the cluster. This type is useful for testing and development purposes but is not recommended for production environments due to security and scalability concerns.</li><li><strong>LoadBalancer:</strong> Creates an external load balancer that routes traffic to the service. This type is suitable for exposing the service to external clients and provides load balancing, high availability, and scalability.</li><li><strong>ExternalName: </strong>Maps the service to an external DNS name without creating a selector-based service. 
This type is useful for accessing external services from inside the cluster.</li></ol>]]></content:encoded></item><item><title><![CDATA[K8s Pods - Photo upload server]]></title><description><![CDATA[<p>Kubernetes (also known as k8s) is an open-source platform for managing containerized workloads and services. It provides a wide range of features that make it easy to deploy, scale, and manage applications in a cloud-native environment. One of the key components of Kubernetes is the Pod, which is the smallest</p>]]></description><link>http://jumpstartk8s.com/k8s-pods-photo-upload-micro-service/</link><guid isPermaLink="false">63f17e4407f5f917a537b9d1</guid><category><![CDATA[Photo Storage App]]></category><category><![CDATA[Kubernetes Objects]]></category><dc:creator><![CDATA[Jude Naveen Raj Ilango]]></dc:creator><pubDate>Sun, 19 Feb 2023 01:46:46 GMT</pubDate><content:encoded><![CDATA[<p>Kubernetes (also known as k8s) is an open-source platform for managing containerized workloads and services. It provides a wide range of features that make it easy to deploy, scale, and manage applications in a cloud-native environment. One of the key components of Kubernetes is the Pod, which is the smallest and simplest unit in the Kubernetes object model.</p><p>In this post, we will dive deep into the Kubernetes Pod specification, exploring what it is, how it works, and how to configure it.</p><h2 id="what-is-a-kubernetes-pod">What is a Kubernetes Pod?</h2><p>A Pod is the smallest deployable unit in Kubernetes. It represents a single instance of a running process in a cluster. A Pod can contain one or more containers, which share the same network namespace and can communicate with each other using localhost.</p><p>The main purpose of a Pod is to provide a way to run and manage a single container or a set of tightly coupled containers that share resources, such as CPU, memory, and storage. 
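</p><p>As a sketch, a Pod running two tightly coupled containers might look like this (names and images are illustrative):</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: photo-upload-server   # illustrative name
spec:
  containers:
  - name: web
    image: nginx              # illustrative image
    ports:
    - containerPort: 80
  - name: helper
    image: busybox            # illustrative image
    command: ["sh", "-c", "sleep 3600"]</code></pre><p>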
Pods are also designed to be ephemeral, meaning they can be created, destroyed, and replaced at any time without affecting the rest of the cluster.</p><h2 id="kubernetes-pod-specification">Kubernetes Pod Specification</h2><p>The Kubernetes Pod specification defines how a Pod should be configured and deployed in a cluster. It is defined using a YAML file or a JSON object, which includes various fields and parameters that control the Pod&apos;s behavior.</p>]]></content:encoded></item></channel></rss>