During my journey to learn Kubernetes, I came across lots of books, courses, and documents—each packed with useful information. Today, I want to talk about the books I’ve read. Among all of them, two really stood out and added a lot to my knowledge.
The first one is Kubernetes in Action by Marko Lukša. I highly recommend this book if you’re starting out with Kubernetes. The second one, which I read recently, is Core Kubernetes by Jay Vyas & Chris Love. This book is amazing—it gives you a really hands-on view of what’s happening under the hood in Kubernetes, along with best practices and deep knowledge that will help you debug your Kubernetes environment.
In this post, I’m going to review some of the coolest parts of this book across three core Kubernetes areas: Containers, Networking, and Storage. The book breaks down what’s happening under the hood so Kubernetes won’t feel like some sort of magic. Let’s get started!
Containers and Pods
For most people, containers feel like magic when they first encounter them: “Wow, I have another isolated Linux inside my Linux!” But here’s what’s really happening behind the scenes.
When we say “container,” we really mean a normal Linux process with a few kernel tricks applied to make it feel isolated; Kubernetes builds its entire deployment model on top of this primitive. To truly isolate a process, you need to handle three things:
- What the process can see
- How much resources it can use
- Where it can write
Linux comes with built-in features to handle each of these.
Let’s unpack it step-by-step.
From Process → Container
A container is a process started by a container runtime (like containerd, CRI-O, or Docker), with:
- Namespaces → isolate what the process can see.
- cgroups → limit how much it can use.
- Filesystem overlays → control where it writes.
Peek into namespaces
Namespaces are managed by the Linux kernel. You can see them with lsns:
# On a Linux host or inside a container
lsns
Typical container namespaces include:
Namespace type | Purpose | Inspect |
---|---|---|
PID | Process tree isolation | ps aux |
NET | Network stack isolation | ip addr |
MNT | Filesystem isolation | mount |
UTS | Hostname/domain isolation | hostname |
IPC | Shared memory isolation | ipcs |
Example: Start a shell in its own PID namespace:
unshare --pid --fork --mount-proc bash
ps aux # Only shows processes inside this namespace
Adding Resource Limits with cgroups
cgroups (control groups) are how Kubernetes enforces CPU and memory limits.
Try it with Docker:
docker run --memory=50m --cpus=0.5 busybox sh -c 'while :; do :; done'
Check the memory limit from inside the container (this is the cgroup v1 path; on cgroup v2 hosts, read /sys/fs/cgroup/memory.max instead):
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
In Kubernetes, you set this in your Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'while true; do :; done']
    resources:
      limits:
        cpu: "0.5"
        memory: "50Mi"
Apply:
kubectl apply -f pod.yaml
What Makes a Pod Special?
Here’s the big thing from Core Kubernetes:
A Pod is NOT just a container. It’s a group of containers sharing the same Linux namespaces (for network, IPC, hostname, etc.) and the same storage volumes.
You can prove this easily.
Example: Two containers in one Pod sharing a network namespace
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-pod
spec:
  containers:
  - name: nginx
    image: nginx
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
Apply:
kubectl apply -f shared-net-pod.yaml
Exec into the busybox container and curl nginx over localhost:
kubectl exec -it shared-net-pod -c busybox -- wget -qO- http://127.0.0.1
You’ll get the Nginx welcome page, because both containers share the same eth0 interface.
Under the Hood Pod Creation Flow
When you run:
kubectl apply -f mypod.yaml
Here’s what happens:
- The API Server stores the Pod definition in etcd.
- The scheduler picks a node.
- The kubelet on that node:
  - calls the CRI to pull images,
  - creates the pause container, which holds the network namespace,
  - starts the other containers in the Pod, joining the pause container’s namespaces,
  - sets up cgroups and mounts volumes.
You can see the pause container:
crictl ps | grep pause
Inspect a Running Pod’s Namespaces
Find the Pod’s container ID:
crictl ps | grep <pod-name>
Get the PID of the main process:
crictl inspect <container-id> | grep pid
Check its namespaces:
sudo ls -l /proc/<PID>/ns
You’ll see symlinks for net, pid, mnt, etc. These are the kernel namespaces the Pod is using.
Takeaway
Kubernetes Pods are built from the same Linux primitives you can use manually — namespaces, cgroups, and mounts — but wrapped in automation.
The pause container trick + shared namespaces are what make multi-container Pods possible.
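You can reproduce the pause-container trick with plain Linux tools. The sketch below is my own illustration, not from the book: unshare starts a stand-in “pause” process that owns a fresh network namespace, and nsenter joins another command to it, just as the other containers in a Pod join the pause container’s namespaces. It assumes unprivileged user namespaces are enabled on your kernel.

```shell
# A stand-in "pause" process: it does nothing except keep a user+network
# namespace alive, the same job the pause container does for a Pod.
unshare --user --map-root-user --net sleep 60 &
PAUSE_PID=$!
sleep 1  # give unshare a moment to set up the namespaces

# An "app container" joins the pause process's namespaces instead of
# creating its own -- it sees that namespace's interfaces (just lo).
IFACES=$(nsenter --target "$PAUSE_PID" --user --preserve-credentials --net ip -brief link)
echo "$IFACES"

kill "$PAUSE_PID"
```

In a real Pod, kubelet does the same thing through the CRI: every container is started with references to the pause container’s namespaces rather than fresh ones.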
Kubernetes Networking
One of Kubernetes’ biggest promises is:
Every Pod gets its own IP, and every Pod can talk to every other Pod without NAT, across any node.
This sounds simple, but making it happen involves network namespaces, veth pairs, CNI plugins, and a whole lot of iptables or eBPF. Let’s break it down.
How Kubernetes Gives a Pod an IP
When a Pod starts, kubelet works with the Container Network Interface (CNI) plugin to set up networking:
- The container runtime (via CRI) creates the Pod’s network namespace.
- The CNI plugin:
  - creates a veth pair, two connected virtual Ethernet devices,
  - puts one end inside the Pod’s network namespace (as eth0),
  - puts the other end in the node’s root network namespace,
  - assigns the Pod an IP from the cluster network range,
  - connects the host-side veth to a Linux bridge or overlay network.
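The veth setup can be sketched with plain ip commands. This is my own illustration, not from the book: to avoid needing root, everything runs inside a scratch user+network namespace, so both veth ends stay in one namespace where a real CNI plugin would move the host end into the node’s root namespace and attach it to the bridge. It assumes unprivileged user namespaces are enabled.

```shell
# Simulate the CNI plugin's interface setup inside a scratch namespace.
POD_ADDR=$(unshare --user --map-root-user --net bash -eu <<'EOF'
ip link add veth-host type veth peer name eth0  # create the veth pair
ip addr add 10.244.0.12/24 dev eth0             # assign the "Pod" an IP
ip link set eth0 up
ip link set veth-host up
ip -brief addr show eth0   # eth0 now looks like a Pod interface
EOF
)
echo "$POD_ADDR"
```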
See It for Yourself
Deploy a Pod:
kubectl run net-test --image=busybox -- sleep 3600
Get its IP:
kubectl get pod net-test -o wide
Exec in and check network interfaces:
kubectl exec -it net-test -- ip addr
Output will look something like:
1: lo: ...
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
inet 10.244.0.12/24 brd 10.244.0.255 scope global eth0
eth0@if4 means the Pod’s eth0 is paired with the host interface whose index is 4.
Finding the veth Pair on the Node
Get the Pod’s container ID:
crictl ps | grep net-test
Find its PID:
crictl inspect <container-id> | grep pid
Map the veth:
sudo ip link | grep <if-number-from-above>
You’ll see the host end of the veth pair — that’s how traffic leaves the Pod.
Pod-to-Pod Communication
If two Pods are on the same node, traffic flows:
Pod A eth0 → veth pair → Linux bridge (cni0) → veth pair → Pod B eth0
Check the bridge:
sudo brctl show
(If brctl isn’t installed, ip link show type bridge works too.)
If they’re on different nodes, the CNI plugin handles routing:
- Overlay networks (Flannel, Weave) encapsulate packets.
- Routing-based CNIs (Calico, Cilium) program routes directly into the kernel.
See routes on the node:
ip route
Services and kube-proxy
Kubernetes Services (ClusterIP, NodePort, LoadBalancer) are implemented by kube-proxy:
- iptables mode: installs DNAT rules to send traffic from the Service IP to Pod IPs.
- IPVS mode: uses the Linux IP Virtual Server for faster load balancing.
- eBPF mode (with Cilium): programs the kernel directly for ultra-low latency.
Check iptables rules for a Service:
sudo iptables-save | grep KUBE-
Example:
-A KUBE-SERVICES -d 10.96.0.1/32 ... -j KUBE-SVC-XXXX
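For reference, a minimal Service that would produce a rule like the one above might look as follows (my own sketch; the name and selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: nginx       # forwards to Pods labeled app=nginx
  ports:
  - port: 80         # the Service's virtual port
    targetPort: 80   # the container port traffic is DNAT'ed to
```

kube-proxy watches Services and their Endpoints and keeps the corresponding KUBE-SVC-... chains in sync.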
Debugging Networking Inside Kubernetes
Ping between Pods in the same namespace:
kubectl exec -it net-test -- ping <other-pod-ip>
Check DNS resolution:
kubectl exec -it net-test -- nslookup kubernetes.default
Trace packets (note: the busybox image doesn’t ship tcpdump, so use an image that includes it):
kubectl exec -it net-test -- tcpdump -i eth0
Inspect network policies:
kubectl describe networkpolicy
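If a policy shows up, the CNI plugin may be filtering traffic. As a reference point, a policy that only admits ingress to backend Pods from frontend Pods looks roughly like this (my own sketch; the labels are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend        # the Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only these Pods may connect
```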
Takeaway
Kubernetes networking is built on Linux primitives you can inspect: namespaces, veth pairs, bridges, and routing. The CNI plugin is the glue that wires it all together, and kube-proxy (or eBPF) handles Service load balancing.
Kubernetes Storage — How Pods Get Their Data
As Core Kubernetes explains, containers are ephemeral — when a Pod dies, the data inside its container filesystem is lost.
Kubernetes solves this with Volumes (data mounted into a container’s filesystem) and the Container Storage Interface (CSI), which standardizes how storage systems integrate.
Volumes in Kubernetes
A Volume is just a directory available to containers in a Pod.
Types include:
- Ephemeral: e.g., emptyDir, configMap, secret; tied to the Pod’s lifetime.
- Persistent: backed by PersistentVolumes (PVs) that outlive the Pod.
Example: Ephemeral Volume
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'echo hello > /cache/msg && sleep 3600']
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}
Apply:
kubectl apply -f emptydir-example.yaml
Check:
kubectl exec -it emptydir-example -- cat /cache/msg
Persistent Volumes and PVCs
Persistent Volumes represent storage in the cluster, provided by a storage backend (local disk, NFS, cloud volume, etc.).
Pods use PersistentVolumeClaims (PVCs) to request them.
Example: PVC + Pod
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
---
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-example
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'echo hello-pv > /data/msg && sleep 3600']
    volumeMounts:
    - name: my-storage
      mountPath: /data
  volumes:
  - name: my-storage
    persistentVolumeClaim:
      claimName: my-pvc
Apply:
kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml
Verify mount:
kubectl exec -it pvc-example -- cat /data/msg
The CSI Model
From the book’s breakdown, CSI drivers have two parts:
- Controller Plugin → provisions and attaches volumes.
- Node Plugin → mounts volumes on the node and into Pods.
When a Pod with a PVC starts:
- kubelet calls the Node Plugin’s NodeStageVolume to prepare the volume.
- kubelet calls NodePublishVolume to bind-mount it into the Pod.
- The volume appears in /var/lib/kubelet/pods/<pod-uid>/volumes/ on the node.
Check mounts on the node:
mount | grep my-pvc
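At its core, NodePublishVolume is a bind mount. The idea can be reproduced with plain Linux tools (my own sketch, not from the book; it runs in a scratch user+mount namespace so no root is needed, assuming unprivileged user namespaces are enabled):

```shell
# kubelet-style volume publish: bind-mount a staged directory onto the
# path the "Pod" will see, then read through the new mount point.
RESULT=$(unshare --user --map-root-user --mount bash -eu <<'EOF'
mkdir -p /tmp/csi-staging /tmp/pod-volume
echo "hello-from-pv" > /tmp/csi-staging/msg
# This mirrors what kubelet does under
# /var/lib/kubelet/pods/<pod-uid>/volumes/ on a real node.
mount --bind /tmp/csi-staging /tmp/pod-volume
cat /tmp/pod-volume/msg
EOF
)
echo "$RESULT"
```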
Ephemeral vs Persistent Recap
- emptyDir → deleted when the Pod stops.
- PVC/PV → survives Pod restarts, until the PV is deleted.
Takeaway
Kubernetes doesn’t manage your storage content — it orchestrates access to it.
The kubelet + CSI handle all attachment, mounting, and unmounting, while the Pod just sees a regular directory.
Conclusion
In this blog post, I’ve reviewed and summarized some of the coolest parts of Core Kubernetes and mixed them with my own experience. The great thing about this book is that it’s easy to read, but to get the most out of it, you’ll need a solid understanding of Linux and networking.
I used to stop on every page that mentioned something unfamiliar to me, look it up, and learn it in order to fully understand the book. The author assumes that readers already know what Kubernetes does and are now ready to dive deeper.
This approach helped me learn a lot and has been incredibly useful during debugging. I definitely recommend reading it, but don’t just skim—take your time to understand it thoroughly, word by word.
Thanks for reading! I’d love to hear your thoughts on this book or any other Kubernetes books or resources you’ve explored. I really appreciate your time and insights!