Latest CKA Free Dumps - Linux Foundation Certified Kubernetes Administrator (CKA) Program
Score: 4%

Task
Create a persistent volume named app-data, with capacity 1Gi and access mode ReadOnlyMany. The type of volume is hostPath and its location is /srv/app-data.

Answer:
Solution:
# vi pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: /srv/app-data
#
kubectl create -f pv.yaml
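To confirm the PersistentVolume was created as requested, a quick check with standard kubectl (nothing assumed beyond the names used above):
kubectl get pv app-data
# STATUS should be Available, CAPACITY 1Gi, ACCESS MODES ROX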

Task
Create a new Ingress resource as follows:
. Name: echo
. Namespace: sound-repeater
. Exposing Service echoserver-service on http://example.org/echo using Service port 8080
The availability of Service echoserver-service can be checked using the following command, which should return 200:
[candidate@cka000024] $ curl -o /dev/null -s -w "%{http_code}\n" http://example.org/echo
Answer:
Task Summary
Create an Ingress named echo in the sound-repeater namespace that:
* Routes requests to /echo on host example.org
* Forwards traffic to service echoserver-service
* Uses service port 8080
* Verification should return HTTP 200 using curl
# Step-by-Step Answer
Step 1: SSH into the correct node
ssh cka000024
# Skipping this step will result in a zero score!
Step 2: Verify the namespace and service
Ensure the sound-repeater namespace and echoserver-service exist:
kubectl get svc -n sound-repeater
Look for:
echoserver-service ClusterIP ... 8080/TCP
Step 3: Create the Ingress manifest
Create a YAML file: echo-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: sound-repeater
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: example.org
    http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echoserver-service
            port:
              number: 8080
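Depending on how the cluster's ingress controller is set up, the Ingress may also need an ingressClassName. The task does not state this, so treat it as an assumption and verify first:
kubectl get ingressclass
# If a class (e.g. nginx) is listed and no default class is set, add it under spec:
spec:
  ingressClassName: nginx   # assumed class name; use the one reported by the command above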
Step 4: Apply the Ingress resource
kubectl apply -f echo-ingress.yaml
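To confirm the resource was accepted before testing, standard kubectl checks (nothing assumed beyond the names above):
kubectl get ingress -n sound-repeater
kubectl describe ingress echo -n sound-repeater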
Step 5: Test with curl as instructed
Use the exact verification command:
curl -o /dev/null -s -w "%{http_code}\n" http://example.org/echo
# You should see:
200
# Final Answer Summary
ssh cka000024
kubectl get svc -n sound-repeater
# Create the Ingress YAML
# 'EOF' is quoted so the shell does not expand $1 in the rewrite-target annotation
cat <<'EOF' > echo-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: sound-repeater
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: example.org
    http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echoserver-service
            port:
              number: 8080
EOF
kubectl apply -f echo-ingress.yaml
curl -o /dev/null -s -w "%{http_code}\n" http://example.org/echo
Quick Reference Documentation: ConfigMaps, Deployments, Namespace
You must connect to the correct host. Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000048b
Task
An NGINX Deployment named nginx-static is running in the nginx-static namespace. It is configured using a ConfigMap named nginx-config .
First, update the nginx-config ConfigMap to also allow TLSv1.2 connections.
You may re-create, restart, or scale resources as necessary.
You can use the following command to test the changes:
[candidate@cka000048b] $ curl --tls-max 1.2 https://web.k8s.local
Answer:
Task Summary
* SSH into cka000048b
* Update the nginx-config ConfigMap in the nginx-static namespace to allow TLSv1.2
* Ensure the nginx-static Deployment picks up the new config
* Verify the change using the provided curl command
Step-by-Step Instructions
Step 1: SSH into the correct host
ssh cka000048b
Step 2: Get the ConfigMap
kubectl get configmap nginx-config -n nginx-static -o yaml > nginx-config.yaml
Open the file for editing:
nano nginx-config.yaml
Look for the TLS configuration in the data field. You are likely to find something like:
ssl_protocols TLSv1.3;
Modify it to include TLSv1.2 as well:
ssl_protocols TLSv1.2 TLSv1.3;
Save and exit the file.
Now update the ConfigMap:
kubectl apply -f nginx-config.yaml
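Alternatively, the ConfigMap can be edited in place without exporting it to a file; a minimal sketch using standard kubectl:
kubectl edit configmap nginx-config -n nginx-static
# change "ssl_protocols TLSv1.3;" to "ssl_protocols TLSv1.2 TLSv1.3;", then save and exit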
Step 3: Restart the NGINX pods to pick up the new ConfigMap
Pods will not reload a ConfigMap automatically unless it's mounted in a way that supports dynamic reload and the app is watching for it (NGINX typically doesn't by default).
The safest way is to restart the pods:
Option 1: Roll the deployment
kubectl rollout restart deployment nginx-static -n nginx-static
Option 2: Delete pods to force recreation
kubectl delete pod -n nginx-static -l app=nginx-static
Step 4: Verify using curl
Use the provided curl command to confirm that TLS 1.2 is accepted:
curl --tls-max 1.2 https://web.k8s.local
A successful response means the TLS configuration is correct.
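If you want to double-check the negotiated protocol rather than just the HTTP response, openssl can confirm it (assuming openssl is available on the exam host):
openssl s_client -connect web.k8s.local:443 -tls1_2 </dev/null 2>/dev/null | grep Protocol
# Should report TLSv1.2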
Final Command Summary
ssh cka000048b
kubectl get configmap nginx-config -n nginx-static -o yaml > nginx-config.yaml
nano nginx-config.yaml   # Modify to include "ssl_protocols TLSv1.2 TLSv1.3;"
kubectl apply -f nginx-config.yaml
kubectl rollout restart deployment nginx-static -n nginx-static
# or
kubectl delete pod -n nginx-static -l app=nginx-static
curl --tls-max 1.2 https://web.k8s.local
Score: 7%

Context
An existing Pod needs to be integrated into the Kubernetes built-in logging architecture (e.g. kubectl logs).
Adding a streaming sidecar container is a good and common way to accomplish this requirement.
Task
Add a sidecar container named sidecar, using the busybox Image, to the existing Pod big-corp-app. The new sidecar container has to run the following command:
/bin/sh -c tail -n+1 -f /var/log/big-corp-app.log
Use a Volume, mounted at /var/log, to make the log file big-corp-app.log available to the sidecar container.



Answer:
Solution:
#
kubectl get pod big-corp-app -o yaml
#
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  - name: big-corp-app
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/big-corp-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
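Note that containers cannot be added to a running Pod in place, so in practice the Pod has to be recreated with the updated spec. A minimal sketch (assuming the edited manifest is saved as big-corp-app.yaml):
kubectl get pod big-corp-app -o yaml > big-corp-app.yaml
# edit big-corp-app.yaml to add the sidecar container and the logs volume as shown above
kubectl replace --force -f big-corp-app.yaml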
#
kubectl logs big-corp-app -c sidecar
You must connect to the correct host.
Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000047
Task
A MariaDB Deployment in the mariadb namespace has been deleted by mistake. Your task is to restore the Deployment ensuring data persistence. Follow these steps:
Create a PersistentVolumeClaim (PVC) named mariadb in the mariadb namespace with the following specifications:
Access mode ReadWriteOnce
Storage 250Mi
You must use the existing retained PersistentVolume (PV).
Failure to do so will result in a reduced score.
There is only one existing PersistentVolume.
Edit the MariaDB Deployment file located at ~/mariadb-deployment.yaml to use the PVC you created in the previous step.
Apply the updated Deployment file to the cluster.
Ensure the MariaDB Deployment is running and stable.
Answer:
Task Overview
You're restoring a MariaDB deployment in the mariadb namespace with persistent data.
# Tasks:
* SSH into cka000047
* Create a PVC named mariadb:
* Namespace: mariadb
* Access mode: ReadWriteOnce
* Storage: 250Mi
* Use the existing retained PV (there's only one)
* Edit ~/mariadb-deployment.yaml to use the PVC
* Apply the deployment
* Verify MariaDB is running and stable
Step-by-Step Solution
Step 1: SSH into the correct host
ssh cka000047
# Required: skipping this step results in a zero score
Step 2: Inspect the existing PersistentVolume
kubectl get pv
# Identify the only existing PV, e.g.:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS
mariadb-pv 250Mi RWO Retain Available <none> manual
Ensure the status is Available, and it is not already bound to a claim.
Step 3: Create the PVC to bind the retained PV
Create a file mariadb-pvc.yaml:
cat <<EOF > mariadb-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb
  namespace: mariadb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi
  volumeName: mariadb-pv   # Match the PV name found in Step 2 exactly
EOF
Apply the PVC:
kubectl apply -f mariadb-pvc.yaml
# This binds the PVC to the retained PV.
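A quick check that the binding actually happened (standard kubectl; the storageClassName remark is an assumption based on the example PV output above):
kubectl get pvc mariadb -n mariadb
# STATUS should show Bound. If it stays Pending, compare the PV's storageClassName
# (e.g. manual) and add a matching spec.storageClassName to the PVC.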
Step 4: Edit the MariaDB Deployment YAML
Open the file:
nano ~/mariadb-deployment.yaml
Look at the spec.template.spec.containers[].volumeMounts and spec.template.spec.volumes sections and update them as follows:
Add this under the container:
volumeMounts:
- name: mariadb-storage
  mountPath: /var/lib/mysql
And under the pod spec:
volumes:
- name: mariadb-storage
  persistentVolumeClaim:
    claimName: mariadb
# These lines mount the PVC at the MariaDB data directory.
Step 5: Apply the updated Deployment
kubectl apply -f ~/mariadb-deployment.yaml
Step 6: Verify the Deployment is running and stable
kubectl get pods -n mariadb
kubectl describe pod -n mariadb <mariadb-pod-name>
# Ensure the pod is in Running state and volume is mounted.
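As an extra stability check (assuming the Deployment defined in ~/mariadb-deployment.yaml is named mariadb):
kubectl rollout status deployment mariadb -n mariadb
kubectl get deployment -n mariadb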
# Final Command Summary
ssh cka000047
kubectl get pv # Find the retained PV
# Create PVC
cat <<EOF > mariadb-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb
  namespace: mariadb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi
  volumeName: mariadb-pv
EOF
kubectl apply -f mariadb-pvc.yaml
# Edit Deployment
nano ~/mariadb-deployment.yaml
# Add volumeMount and volume to use the PVC as described
kubectl apply -f ~/mariadb-deployment.yaml
kubectl get pods -n mariadb
Score: 4%

Task
Scale the deployment presentation to 6 pods.

Answer:
Solution:
kubectl get deployment
kubectl scale deployment.apps/presentation --replicas=6
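To confirm the change has taken effect (standard kubectl; nothing assumed beyond the deployment name given in the task):
kubectl get deployment presentation
# READY should eventually report 6/6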
You must connect to the correct host.
Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000058
Context
You manage a WordPress application. Some Pods are not starting because resource requests are too high.
Task
A WordPress application in the relative-fawn namespace consists of:
. A WordPress Deployment with 3 replicas.
Adjust all Pod resource requests as follows:
. Divide node resources evenly across all 3 Pods.
. Give each Pod a fair share of CPU and memory.
Answer:
Task Summary
You are managing a WordPress Deployment in namespace relative-fawn.
* Deployment has 3 replicas.
* Pods are not starting due to high resource requests.
* Your job: adjust CPU and memory requests so that all 3 Pods evenly split the node's capacity.
Step-by-Step Solution
Step 1: SSH into the correct host
ssh cka000058
# Skipping this will result in a zero score.
Step 2: Check node resource capacity
You need to know the node's CPU and memory resources.
kubectl describe node | grep -A5 "Capacity"
Example output:
Capacity:
  cpu:     3
  memory:  3Gi
Let's assume the node has:
* 3 CPUs
* 3Gi memory
So for 3 pods, divide evenly:
* CPU request per pod: 1
* Memory request per pod: 1Gi
Note: in the actual exam, check the real values and divide accordingly. If the node has 4 CPUs and 8Gi, you'd request roughly 1.33 CPUs and about 2.67Gi of memory per Pod (rounded reasonably).
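One caveat worth checking: the scheduler works against the node's Allocatable values (Capacity minus system reservations), so basing the split on Allocatable is slightly safer than using raw Capacity:
kubectl describe node | grep -A6 "Allocatable"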
Step 3: Edit the Deployment
Edit the WordPress deployment in the relative-fawn namespace:
kubectl edit deployment wordpress -n relative-fawn
Look for the resources section under spec.template.spec.containers and set it like this:
resources:
  requests:
    cpu: "1"
    memory: "1Gi"
If the section doesn't exist, add it manually.
Save and exit the editor (:wq if using vi).
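If you prefer a non-interactive change over kubectl edit, kubectl set resources applies the same requests in one command (the values assume the 3-CPU/3Gi example above; adjust them to the real node):
kubectl set resources deployment wordpress -n relative-fawn --requests=cpu=1,memory=1Gi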
Step 4: Confirm changes
Wait a few seconds, then check:
kubectl get pods -n relative-fawn
Ensure all 3 pods are in Running state.
You can also describe a pod to confirm resource requests are set:
kubectl describe pod <pod-name> -n relative-fawn | grep -A5 "Containers"
# Final Command Summary
ssh cka000058
kubectl describe node | grep -A5 "Capacity"
kubectl edit deployment wordpress -n relative-fawn
# Set CPU: 1, Memory: 1Gi (or according to node capacity)
kubectl get pods -n relative-fawn
Task
Create a busybox pod that runs the command "env" and save the output to a file named "envpod". See the solution below.
Answer:
kubectl run busybox --image=busybox --restart=Never --rm -it -- env > envpod
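An alternative that avoids the interactive flags (a sketch using only standard kubectl; the pod is left in place and may need deleting afterwards):
kubectl run busybox --image=busybox --restart=Never -- env
# wait a few seconds for the pod to complete, then capture its output
kubectl logs busybox > envpod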