Mirror of https://github.com/sailor-sh/CK-X.git (synced 2026-02-14 17:39:51 +00:00)

Compare commits (64 commits)
| SHA1 |
|---|
| fb2bc9a77f |
| b88dcd7f40 |
| 18e02d0d9d |
| eaa11c1ad0 |
| 038e9de8c2 |
| eb2eb1e047 |
| dd436c0d2c |
| d54d35f53e |
| 9ccb20e287 |
| 95c771ff9e |
| 7e1b32a3d9 |
| 502ee9f4de |
| c308a4d455 |
| 9c3777931a |
| 6f42a6991a |
| e72959ebee |
| 328fe6a1f4 |
| 256d782722 |
| 8c31da5e6e |
| df9e9d33c4 |
| d052945bfe |
| ecd285e683 |
| f0f81e7f20 |
| e4925f4775 |
| 601aab38bb |
| 184ba26a51 |
| cff3feb604 |
| b38525ea88 |
| 20ef4fc149 |
| 64e133d1ad |
| f050233572 |
| e81cd4098a |
| d1297f43fd |
| 1a053ca441 |
| fb358ad1a5 |
| 2c6cf9bc17 |
| abf7dab359 |
| d27887c0a0 |
| aff16d53a5 |
| 1bd3fe8370 |
| 837aabd505 |
| e85d81f47c |
| 03e91dfc72 |
| eb793885c4 |
| 27c174e2f5 |
| 23280b1c10 |
| d7b9ca8afb |
| ff2ea805c0 |
| 61b824e22b |
| 89808a64e3 |
| d65ea253a8 |
| 4c0980c23f |
| d092eaa9ed |
| f3624a7bb8 |
| 91b6263cf3 |
| c7bf53c630 |
| c9005dc150 |
| 29ac1b257b |
| a0b42b0472 |
| ff602f5fb6 |
| f4c5d7109e |
| 96a3726871 |
| 005e7826af |
| 58d44be656 |
.github/ISSUE_TEMPLATE/bug-report.md (vendored, new file, 107 lines)

@@ -0,0 +1,107 @@
---
name: "🐛 Bug Report"
about: Report a bug or issue you've found while using CK-X Simulator
title: "[Bug] <short description>"
labels: bug
assignees: ''

---

## Describe the bug

<!-- A clear and concise description of what the bug is.

Why does it bother you, and how could it be harmful for the user?
-->

## Environment (please complete the following):

<!--
- Architecture: [e.g. x86_64, ARM]
- Docker Desktop: [yes/no]
- For Windows: Is WSL2 enabled? [yes/no]
- System resources: [CPU model, RAM size, available storage]
- OS: [e.g. Ubuntu 22.04, Windows 11]
- Browser: [e.g. Chrome 112]
- CK-X Version: [e.g. v1.2.0 or commit SHA]
- Docker version: `docker version`
- How did you install CK-X? [installer script / manual / other]
-->

## Report in Depth

<!-- Please describe the bug clearly.
Try answering these questions:

- What exactly happens?
- Why does it bother you or impact the user experience?
- When did you first notice it?
- Does it happen consistently or only sometimes?
- Which part of the system is affected? (e.g., web UI, evaluator, virtual cluster)
- Does this bug prevent you from completing a lab or exercise?
-->

## Steps to reproduce

<!-- Is this bug reproducible?

If so, uncomment and fill out one of the following:

Steps to reproduce the behavior:

1. Go to `...`
2. Click on `...`
3. See error

Or use a code block, if you prefer:

```
...
```
-->

## Expected Behavior

<!-- What did you expect to happen? -->

## Actual Behavior

<!-- What actually happened instead? -->

## Screenshots / Logs

<!-- If applicable, add screenshots or relevant logs.

For logs, use a code block or (better) attach a log file.
-->

## Workaround

<!-- Have you found a workaround?

If so, please describe the steps you followed to temporarily solve the issue.

Use an ordered list, like so:

1. Go to `...`
2. Run: `...`
3. Paste this: `...`
...

Or use a code block if you prefer:

```
...
```
-->

## Additional context

<!-- Add any other context or details here. -->
.github/ISSUE_TEMPLATE/feature-request.md (vendored, new file, 55 lines)

@@ -0,0 +1,55 @@
---
name: "✨ Feature Request"
about: Suggest an improvement or new feature for CK-X Simulator
title: "[Feature] <short description>"
labels: enhancement
assignees: ''

---

## Describe the feature

<!-- A clear and concise description of the feature or improvement.

What would this feature do?

Why is it useful or necessary for the user?
-->

## Motivation

<!-- Explain *why* you're proposing this.

- What problem does it solve?
- What benefit would it bring to users or developers?
- Is this inspired by a real use case?
-->

## Proposed Solution

<!-- Describe how you imagine the feature should work.

You can include:

- A workflow
- Example UI mockups or commands
- Integration points
-->

## Alternatives considered

<!-- Have you considered any alternative approaches?

Why did you choose this one?
-->

## Additional context

<!-- Add any other context or supporting info (e.g. links, diagrams, references). -->

## Related Issues

<!-- Link to any related issues or pull requests, if applicable. -->
.github/ISSUE_TEMPLATE/lab-proposal.md (vendored, new file, 73 lines)

@@ -0,0 +1,73 @@
---
name: "🧪 Lab Request"
about: Suggest a new lab to be added to the CK-X Simulator
title: "[Lab] <short description of the lab>"
labels: "lab-request"
assignees: ''

---

## Lab Overview

<!--
Give a short summary of the lab idea.

What is the main goal of the lab?
What skills or concepts should the user learn or demonstrate?
-->

## Target Audience

<!--
Who is this lab designed for?

- CKA / CKS / Custom...?
- Beginner, Intermediate, Advanced?
- Any specific professional roles in mind (e.g. DevOps, SRE, Security Engineer)?
-->

## Topics & Scope

<!--
List the main topics covered by the lab.

You can include:

- Kubernetes area (e.g., RBAC, Networking, Cluster Maintenance, Security)
- Concepts or tools used (e.g., Helm, Falco, etcd, NetworkPolicy)
- Real-world scenarios or challenges (optional)
-->

## Lab Metadata

<!-- Fill in the suggested values below -->

- **Estimated Duration**: `<e.g., 45 minutes>`
- **Difficulty**: `<easy / medium / hard>`
- **Domain/Category**: `<e.g., Kubernetes / Security>`
- **Total Score**: `<e.g., 100>`
- **Passing Thresholds**:
  - Low Score: `<e.g., 50>`
  - Medium Score: `<e.g., 75>`
  - High Score: `<e.g., 90>`

## Example Exercises (optional)

<!--
Propose 1–3 example exercises that could be part of the lab.

You can list the title, a short description, and the estimated points.

1. **Create a NetworkPolicy** (20 points)
   - Restrict traffic to a specific pod based on namespace and labels.
2. **RBAC for Read-Only Access** (30 points)
   - Grant a user read-only access to all resources in a namespace.
-->

## Additional Notes

<!--
Any extra info, references, or notes to support the lab request.

This can include links to real CKA/CKS questions, documentation, or public repos.
-->
.github/PULL_REQUEST_TEMPLATE.md (vendored, new file, 100 lines)

@@ -0,0 +1,100 @@
<!-- Title of the PR:

Example: `fix: Update volume mount path in CKAD 001 Q6`
-->

---

#### 🧾 What this PR does

<!-- Briefly describe what this PR does.

Example: "This PR fixes the volume mount path in `sidecar-pod.yaml` to prevent conflicts with the nginx container's log directory."
-->

---

#### 🧩 Type of change

- [ ] Bug fix
- [ ] New feature
- [ ] Documentation update
- [ ] Refactor
- [ ] Other (please describe): ____________

---

#### 🧪 How to test it

<!-- Provide clear, step-by-step instructions for testing this change.

Example: "Run the updated lab CKAD 001, Question 6. The pod should start correctly and nginx logs should be accessible."
-->

---

#### ✅ Acceptance Criteria

<!-- Tick each box once the corresponding criterion is met.

Feel free to add or remove items depending on what your PR changes.
-->

- [ ] Pod starts without errors
- [ ] Docker Compose logs are available
- [ ] All relevant tests pass
- [ ] Documentation is updated (if needed - usually it is)

<!-- If you are adding labs, please also add the following -->

- [ ] New labs pass
- [ ] Answers work as expected

---

#### 📎 Related Issue(s)

<!-- Mention any related issues.

Example: "Closes #42" or "Refs #15"
-->

---

#### 💬 Notes for Reviewers

<!-- Add any extra context or notes for the reviewer.

Example: "I also tweaked the wording of the exercise to explicitly define container names."
-->

---

#### 🧠 Additional Context

<!--
Add any background or technical context that might help reviewers understand the motivation or constraints of the PR.
-->

---

#### 📄 Attachments

<!-- Add any relevant attachments such as:

- Screenshots of the labs list showing the newly added labs
- Screenshots of the labs running
- Screenshots of the labs passing
- E2E test results (screenshots or, preferably, logs/human-readable reports)
- Static code analysis reports
- Any other supporting material (e.g. logs, error traces, terminal output)
-->
README.md (16 changes)

@@ -1,5 +1,9 @@


# CK-X Simulator 🚀

A powerful Kubernetes certification practice environment that provides a realistic exam-like experience for Kubernetes exam preparation.

## Major Features

@@ -8,7 +12,7 @@ A powerful Kubernetes certification practice environment that provides a realist

- Comprehensive practice labs for **CKAD, CKA, CKS**, and other Kubernetes certifications
- **Smart evaluation system** with real-time solution verification
- **Docker-based deployment** for easy setup and consistent environment
- **Timed exam mode** with real exam-like conditions and countdown timer
- **Timed exam mode** with real exam-like conditions and countdown timer

##

@@ -21,10 +25,10 @@ Watch live demo video showcasing the CK-X Simulator in action:

#### Linux & macOS
```bash
bash <(curl -fsSL https://raw.githubusercontent.com/nishanb/ck-x/master/scripts/install.sh)
curl -fsSL https://raw.githubusercontent.com/nishanb/ck-x/master/scripts/install.sh | bash
```

#### Windows ( Windows installation is unstable and not supported yet, may break during setup )
#### Windows ( make sure WSL2 is enabled in the docker desktop )
```powershell
irm https://raw.githubusercontent.com/nishanb/ck-x/master/scripts/install.ps1 | iex
```

@@ -34,7 +38,7 @@ For detailed installation instructions, please refer to our [Deployment Guide](s

## Community & Support

- Join our [Telegram Community](https://t.me/ckxdev) for discussions and support
- Join our [Discord Community](https://discord.gg/6FPQMXNgG9) for discussions and support
- Feature requests and pull requests are welcome

## Adding New Labs

@@ -59,8 +63,8 @@ CK-X is an independent tool, not affiliated with CNCF, Linux Foundation, or PSI.

## Acknowledgments

- [DIND](https://github.com/earthly/dind)
- [KIND](https://github.com/kubernetes-sigs/kind)
- [DIND](https://www.docker.com/)
- [K3D](https://k3d.io/stable/)
- [Node](https://nodejs.org/en)
- [Nginx](https://nginx.org/)
- [ConSol-Vnc](https://github.com/ConSol/docker-headless-vnc-container/)
@@ -10,6 +10,7 @@ RUN npm install --production

# Copy the application files
COPY server.js ./
COPY services/ ./services/
COPY public/ ./public/

# Ensure the public directory exists
@@ -1,19 +0,0 @@
/**
 * Configuration module for the application
 * Centralizes all environment variables and configuration settings
 */

// Server configuration
const PORT = process.env.PORT || 3000;

// VNC service configuration
const VNC_SERVICE_HOST = process.env.VNC_SERVICE_HOST || 'remote-desktop-service';
const VNC_SERVICE_PORT = process.env.VNC_SERVICE_PORT || 6901;
const VNC_PASSWORD = process.env.VNC_PASSWORD || 'bakku-the-wizard'; // Default password

module.exports = {
  PORT,
  VNC_SERVICE_HOST,
  VNC_SERVICE_PORT,
  VNC_PASSWORD
};
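This module was removed because its values now live as constructor defaults in the new service classes further down (vnc-service.js and ssh-terminal.js). A sketch of how the old environment-variable behavior is preserved at the call site; the fallbacks in the comments are the defaults visible in vnc-service.js below:

```js
// Sketch: pass the environment values straight through, and the service
// constructor applies the same fallbacks the old config module had.
const VNCService = require('./services/vnc-service');

const vncService = new VNCService({
  host: process.env.VNC_SERVICE_HOST,   // falls back to 'remote-desktop-service'
  port: process.env.VNC_SERVICE_PORT,   // falls back to 6901
  password: process.env.VNC_PASSWORD    // falls back to 'bakku-the-wizard'
});
```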
@@ -1,22 +0,0 @@
/**
 * Error Handler Middleware
 * Centralized error handling for the application
 */

const path = require('path');

/**
 * Global error handler middleware
 * @param {Error} err - The error object
 * @param {Object} req - Express request object
 * @param {Object} res - Express response object
 * @param {Function} next - Express next function
 */
function errorHandler(err, req, res, next) {
  console.error('Server error:', err);

  // Send a user-friendly error page
  res.status(500).sendFile(path.join(__dirname, '..', 'public', '50x.html'));
}

module.exports = errorHandler;
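Worth noting: Express identifies error-handling middleware purely by its four-parameter arity, which is why the unused `next` has to stay in the signature, and why the handler must be registered after every route. A minimal wiring sketch (the require path is illustrative):

```js
const express = require('express');
const errorHandler = require('./middleware/error-handler'); // illustrative path

const app = express();

// ...routes are registered first...

// Registered last: Express only routes errors to middleware with
// the (err, req, res, next) four-argument signature.
app.use(errorHandler);

app.listen(3000);
```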
@@ -1,91 +0,0 @@
/**
 * Proxy middleware module
 * Sets up the proxies for VNC connections
 */

const { createProxyMiddleware } = require('http-proxy-middleware');
const config = require('../config/config');

/**
 * Creates the VNC proxy configuration object
 * @returns {Object} Proxy configuration
 */
function createVncProxyConfig() {
  return {
    target: `http://${config.VNC_SERVICE_HOST}:${config.VNC_SERVICE_PORT}`,
    changeOrigin: true,
    ws: true,
    secure: false,
    pathRewrite: {
      '^/vnc-proxy': ''
    },
    onProxyReq: (proxyReq, req, res) => {
      // Log HTTP requests being proxied
      console.log(`Proxying HTTP request to VNC server: ${req.url}`);
    },
    onProxyReqWs: (proxyReq, req, socket, options, head) => {
      // Log WebSocket connections
      console.log(`WebSocket connection established to VNC server: ${req.url}`);
    },
    onProxyRes: (proxyRes, req, res) => {
      // Log the responses from VNC server
      console.log(`Received response from VNC server for: ${req.url}`);
    },
    onError: (err, req, res) => {
      console.error(`Proxy error: ${err.message}`);
      if (res && res.writeHead) {
        res.writeHead(500, {
          'Content-Type': 'text/plain'
        });
        res.end(`Proxy error: ${err.message}`);
      }
    }
  };
}

/**
 * Sets up VNC proxy middleware on the Express app
 * @param {Object} app - Express application
 */
function setupProxies(app) {
  const vncProxyConfig = createVncProxyConfig();

  // Middleware to enhance VNC URLs with authentication if needed
  app.use('/vnc-proxy', (req, res, next) => {
    // Check if the URL already has a password parameter
    if (!req.query.password) {
      // If no password provided, add default password
      console.log('Adding default VNC password to request');
      const separator = req.url.includes('?') ? '&' : '?';
      req.url = `${req.url}${separator}password=${config.VNC_PASSWORD}`;
    }
    next();
  }, createProxyMiddleware(vncProxyConfig));

  // Direct WebSocket proxy to handle the websockify endpoint
  app.use('/websockify', createProxyMiddleware({
    ...vncProxyConfig,
    pathRewrite: {
      '^/websockify': '/websockify'
    },
    ws: true,
    onProxyReqWs: (proxyReq, req, socket, options, head) => {
      // Log WebSocket connections to websockify
      console.log(`WebSocket connection to websockify established: ${req.url}`);

      // Add additional headers if needed
      proxyReq.setHeader('Origin', `http://${config.VNC_SERVICE_HOST}:${config.VNC_SERVICE_PORT}`);
    },
    onError: (err, req, res) => {
      console.error(`Websockify proxy error: ${err.message}`);
      if (res && res.writeHead) {
        res.writeHead(500, {
          'Content-Type': 'text/plain'
        });
        res.end(`Websockify proxy error: ${err.message}`);
      }
    }
  }));
}

module.exports = setupProxies;
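A small Express detail this module relies on: `app.use()` accepts multiple handlers for a single mount point, which is how the password-injection middleware runs before `createProxyMiddleware` sees the request. In miniature (the `/demo` route and `token` parameter are invented for illustration):

```js
const express = require('express');
const app = express();

// Two handlers on one mount: the first rewrites req.url, the second receives
// the rewritten URL, exactly as the VNC proxy chain above does with `password`.
app.use('/demo',
  (req, res, next) => {
    if (!req.query.token) {
      const separator = req.url.includes('?') ? '&' : '?';
      req.url = `${req.url}${separator}token=default`;
    }
    next();
  },
  (req, res) => res.send(`URL after rewriting: ${req.url}`)
);

app.listen(3000);
```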
@@ -11,7 +11,7 @@
    "cors": "^2.8.5",
    "express": "^4.21.2",
    "http-proxy-middleware": "^2.0.7",
    "socket.io": "^4.7.2",
    "socket.io": "4.7.2",
    "ssh2": "^1.14.0",
    "xterm": "^5.3.0"
  },
@@ -98,4 +98,97 @@
  font-size: 18px;
  cursor: pointer;
  color: #666;
}

/* Feedback modal styles */
.feedback-form {
  padding: 10px 0;
}

.rating-container {
  margin-bottom: 20px;
  text-align: center;
}

.star-rating {
  display: inline-flex;
  flex-direction: row-reverse;
  justify-content: center;
}

.star-rating input {
  display: none;
}

.star-rating label {
  cursor: pointer;
  font-size: 30px;
  color: #ddd;
  margin: 0 5px;
  transition: color 0.2s ease;
}

.star-rating label:hover,
.star-rating label:hover ~ label,
.star-rating input:checked ~ label {
  color: #ffb33e;
}

.feedback-text {
  margin-bottom: 20px;
}

.feedback-text label {
  display: block;
  margin-bottom: 5px;
  font-weight: 500;
}

.feedback-text textarea {
  width: 100%;
  padding: 5px;
  border: 1px solid #ddd;
  border-radius: 4px;
  resize: vertical;
  font-family: inherit;
}

.testimonial-option {
  margin-bottom: 15px;
  display: flex;
  align-items: center;
}

.testimonial-option input[type="checkbox"] {
  margin-right: 10px;
}

.form-field {
  margin-bottom: 5px;
}

.form-field label {
  display: block;
  margin-bottom: 5px;
  font-weight: 500;
}

.form-field input {
  width: 100%;
  padding: 8px 10px;
  border: 1px solid #ddd;
  border-radius: 4px;
  font-family: inherit;
}

#testimonialFields {
  padding: 10px;
  background-color: #f9f9f9;
  border-radius: 4px;
  margin-top: 10px;
}

.form-field label .required {
  color: #e74c3c;
  margin-left: 3px;
}

@@ -493,7 +493,7 @@ body {
  padding: 2rem;
  border-radius: 8px;
  text-align: center;
  max-width: 400px;
  max-width: 500px;
  width: 90%;
}
@@ -110,7 +110,7 @@
<li><hr class="dropdown-divider"></li>
<li><a class="dropdown-item" href="#" id="toggleViewBtn">Switch to Terminal</a></li>
<li><hr class="dropdown-divider"></li>
<li><a class="dropdown-item" href="https://github.com/nishanb/CKAD-X">Help</a></li>
<li><a class="dropdown-item" href="https://github.com/nishanb/CK-X">Help</a></li>
</ul>
</div>
<div class="dropdown d-inline-block">

@@ -175,6 +175,10 @@
</div>
<div class="loading-message" id="loadingMessage">Initializing environment...</div>
<div class="exam-info" id="examInfo"></div>
<hr>
<div class="text-muted text-center w-100" style="font-size: 0.8rem;">
  Lab environment setup may take 3-5 minutes. Please wait patiently. Once preparation is complete, you'll be automatically redirected to the exam interface.
</div>
</div>
</div>

@@ -233,7 +237,7 @@

<!-- Footer -->
<footer class="text-center py-4 mt-4">
  <p>Made with <span style="color: #ff3366;">❤️</span> | <a href="https://ckx.nishann.com" target="_blank">ckx.nishann.com</a></p>
  <p>Made with <span style="color: #ff3366;">❤️</span> | <a href="https://play.sailor.sh" target="_blank">sailor.sh</a></p>
</footer>
</body>
</html>
@@ -131,6 +131,27 @@ function trackExamEvent(examId, events) {
  });
}

// Function to submit user feedback
function submitFeedback(examId, feedbackData) {
  return fetch(`/facilitator/api/v1/exams/metrics/${examId}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(feedbackData)
  })
  .then(response => {
    if (!response.ok) {
      throw new Error(`HTTP error! Status: ${response.status}`);
    }
    return response.json();
  })
  .catch(error => {
    console.error('Error submitting feedback:', error);
    throw error; // Re-throw to be handled by the calling function
  });
}

// Export the API functions
export {
  getExamId,
@@ -140,5 +161,6 @@ export {
  evaluateExam,
  terminateSession,
  getVncInfo,
  trackExamEvent
  trackExamEvent,
  submitFeedback
};
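A usage sketch for the new wrapper; the exam ID and payload values here are illustrative, but the payload shape matches what feedback.js (below) constructs:

```js
import { submitFeedback } from './components/exam-api.js';

submitFeedback('exam-123', {          // illustrative exam ID
  type: 'feedback',
  rating: 5,
  comment: 'Great lab!',
  isTestimonial: false,
  name: '',
  socialHandle: ''
})
  .then((data) => console.log('Feedback recorded:', data))
  .catch(() => { /* error already logged inside submitFeedback */ });
```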
@@ -152,8 +152,9 @@ function connectToSocketIO() {
  // Connect to Socket.io server
  socket = io('/ssh', {
    forceNew: true,
    reconnectionAttempts: 5,
    timeout: 10000
    reconnectionAttempts: 1000,
    timeout: 1000,
    transports: ['polling'] // force polling for now to avoid an invalid-frame error; TODO: fix this
  });
  console.log('Creating new socket connection to SSH server');

@@ -199,10 +200,18 @@ function connectToSocketIO() {
    if (terminal) {
      terminal.writeln(`\r\n\x1b[1;31m[ERROR]\x1b[0m ${err.message}\r\n`);
    }
    // try to reconnect
    setTimeout(() => {
      if (socket && !socket.connected) {
        socket.connect();
      }
    }, 2000);
  });

  // Handle SSH data with processing for ANSI codes
  socket.on('data', (data) => {
    console.log('Received data from SSH server:', data);
    // if data is a string, write it to the terminal
    if (terminal) {
      terminal.write(data);
    }
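The reconnect above retries on a fixed 2-second delay. A common refinement, offered here only as a sketch and not what the code currently does, is exponential backoff so an unreachable server is not polled at a constant rate:

```js
// Sketch: exponential backoff for the same `socket`, capped at 30 seconds.
let retryDelay = 1000;

function scheduleReconnect() {
  setTimeout(() => {
    if (socket && !socket.connected) {
      socket.connect();
      retryDelay = Math.min(retryDelay * 2, 30000); // back off on each failure
      scheduleReconnect();
    } else {
      retryDelay = 1000; // reset once connected
    }
  }, retryDelay);
}
```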
@@ -3,53 +3,207 @@
 * Handles displaying feedback prompts and notifications
 */

// Feedback state management
const feedbackState = {
  rating: null,
  comment: '',
  isTestimonial: false,
  name: '',
  socialHandle: '',
  submitted: false
};

// Wait for DOM to be loaded
document.addEventListener('DOMContentLoaded', function() {
  // Show feedback reminder after a delay
  setTimeout(function() {
    // Check if results have loaded
    const resultsContent = document.getElementById('resultsContent');
    if (resultsContent && resultsContent.style.display !== 'none') {
      showFeedbackReminder();
    } else {
      // If results haven't loaded yet, wait for them
      const observer = new MutationObserver(function(mutations) {
        mutations.forEach(function(mutation) {
          if (mutation.target.style.display !== 'none') {
            showFeedbackReminder();
            observer.disconnect();
          }
        });
      });

      if (resultsContent) {
        observer.observe(resultsContent, {
          attributes: true,
          attributeFilter: ['style']
        });
      }
  // DOM elements
  const feedbackModal = document.getElementById('feedbackModal');
  const submitFeedbackBtn = document.getElementById('submitFeedbackBtn');
  const testimonialConsent = document.getElementById('testimonialConsent');
  const testimonialFields = document.getElementById('testimonialFields');
  const feedbackComment = document.getElementById('feedbackComment');
  const starRatingInputs = document.querySelectorAll('input[name="rating"]');

  // Check if user has already submitted feedback
  const hasSubmittedFeedback = localStorage.getItem('ckx_feedback_submitted');

  // Check if we should show the reminder now
  if (!hasSubmittedFeedback) {
    // Check if we're skipping and when to ask again
    const skipTimestamp = localStorage.getItem('ckx_feedback_skip_until');
    const currentTime = new Date().getTime();

    if (!skipTimestamp || currentTime > parseInt(skipTimestamp)) {
      // Safe to show after a delay
      setTimeout(showFeedbackModal, 5000);
    }
  }, 10 * 1000); // Show after 10 seconds
  }

  // Toggle testimonial fields visibility based on checkbox
  testimonialConsent.addEventListener('change', function() {
    testimonialFields.style.display = this.checked ? 'block' : 'none';
    feedbackState.isTestimonial = this.checked;
  });

  // Handle rating selection
  starRatingInputs.forEach(input => {
    input.addEventListener('change', function() {
      feedbackState.rating = parseInt(this.value);
    });
  });

  // Handle comment input
  feedbackComment.addEventListener('input', function() {
    feedbackState.comment = this.value.trim();
  });

  // Handle name and social handle inputs
  document.getElementById('testimonialName').addEventListener('input', function() {
    feedbackState.name = this.value.trim();
  });

  document.getElementById('testimonialSocial').addEventListener('input', function() {
    feedbackState.socialHandle = this.value.trim();
  });

  // Submit feedback handler
  submitFeedbackBtn.addEventListener('click', function() {
    // Validate that we have at least a rating
    if (!feedbackState.rating) {
      alert('Please select a rating before submitting your feedback.');
      return;
    }

    // Validate name is provided if user opts for testimonial
    if (feedbackState.isTestimonial && !feedbackState.name) {
      alert('Please provide your name to be featured in testimonials.');
      return;
    }

    // Disable the button to prevent multiple submissions
    submitFeedbackBtn.disabled = true;
    submitFeedbackBtn.innerHTML = '<i class="fas fa-spinner fa-spin me-2"></i> Submitting...';

    // Send feedback data
    sendFeedbackData();
  });
});

/**
 * Display a toast notification prompting for feedback
 * Display the feedback modal
 */
function showFeedbackReminder() {
function showFeedbackModal() {
  // Only show if results have loaded
  const resultsContent = document.getElementById('resultsContent');
  if (resultsContent && resultsContent.style.display !== 'none') {
    document.getElementById('feedbackModal').style.display = 'flex';
  } else {
    // If results haven't loaded yet, wait for them
    const observer = new MutationObserver(function(mutations) {
      mutations.forEach(function(mutation) {
        if (mutation.target.style.display !== 'none') {
          document.getElementById('feedbackModal').style.display = 'flex';
          observer.disconnect();
        }
      });
    });

    if (resultsContent) {
      observer.observe(resultsContent, {
        attributes: true,
        attributeFilter: ['style']
      });
    }
  }
}

/**
 * Send feedback data to the server
 */
function sendFeedbackData() {
  // Import API functions if needed
  import('./components/exam-api.js').then(api => {
    // Get the exam ID from the URL or DOM
    const urlParams = new URLSearchParams(window.location.search);
    const examId = urlParams.get('id') || document.getElementById('examId')?.textContent.replace('Exam ID: ', '').trim();

    if (!examId) {
      console.error('No exam ID found for feedback submission');
      showFeedbackSubmissionResult(false);
      return;
    }

    // Construct feedback data
    const feedbackData = {
      type: 'feedback',
      rating: feedbackState.rating,
      comment: feedbackState.comment,
      isTestimonial: feedbackState.isTestimonial,
      name: feedbackState.name,
      socialHandle: feedbackState.socialHandle
    };

    // Use the API function to submit feedback
    api.submitFeedback(examId, feedbackData)
      .then(data => {
        console.log('Feedback submitted successfully:', data);
        // Set local storage to remember that feedback was submitted
        localStorage.setItem('ckx_feedback_submitted', 'true');
        // Hide the modal
        document.getElementById('feedbackModal').style.display = 'none';
        // Show success notification
        showFeedbackSubmissionResult(true);
      })
      .catch(error => {
        console.error('Error submitting feedback:', error);
        // Hide the modal despite the error
        document.getElementById('feedbackModal').style.display = 'none';
        showFeedbackSubmissionResult(false);
      });
  }).catch(error => {
    console.error('Error importing API module:', error);
    document.getElementById('feedbackModal').style.display = 'none';
    showFeedbackSubmissionResult(false);
  });
}

/**
 * Show toast notification for feedback submission result
 * @param {boolean} success - Whether the submission was successful
 */
function showFeedbackSubmissionResult(success) {
  const submitFeedbackBtn = document.getElementById('submitFeedbackBtn');

  // Reset button state
  submitFeedbackBtn.disabled = false;
  submitFeedbackBtn.innerHTML = '<i class="fas fa-paper-plane me-2"></i> Submit Feedback';

  // Create toast element
  const toast = document.createElement('div');
  toast.className = 'toast-notification';
  toast.innerHTML = `
    <div class="toast-content">
      <i class="fas fa-comment-dots toast-icon"></i>
      <div class="toast-message">
        <p><strong>Your opinion matters!</strong></p>
        <p>Please take a moment to share your feedback on CK-X</p>

  if (success) {
    toast.innerHTML = `
      <div class="toast-content">
        <i class="fas fa-check-circle toast-icon" style="color: #28a745;"></i>
        <div class="toast-message">
          <p><strong>Thank you!</strong></p>
          <p>Your feedback has been submitted successfully.</p>
        </div>
        <button class="toast-close">×</button>
      </div>
      <a href="https://forms.gle/Dac9ALQnQb2dH1mw8" target="_blank" class="toast-button">Give Feedback</a>
      <button class="toast-close">×</button>
    </div>
  `;
    `;
  } else {
    toast.innerHTML = `
      <div class="toast-content">
        <i class="fas fa-exclamation-circle toast-icon" style="color: #dc3545;"></i>
        <div class="toast-message">
          <p><strong>Something went wrong</strong></p>
          <p>We couldn't submit your feedback. Please try again.</p>
        </div>
        <button class="toast-close">×</button>
      </div>
    `;
  }

  document.body.appendChild(toast);

@@ -63,7 +217,7 @@ function showFeedbackReminder() {
    }, 500);
  });

  // Auto-close after 15 seconds
  // Auto-close after 5 seconds
  setTimeout(function() {
    if (document.body.contains(toast)) {
      toast.style.animation = 'slideOut 0.5s ease forwards';
@@ -73,5 +227,5 @@ function showFeedbackReminder() {
      }
    }, 500);
  }
  }, 15000);
  }, 5000);
}
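The wait-for-results logic in `showFeedbackModal` is a reusable pattern: watch an element's inline `style` attribute until it becomes visible, then act once. Extracted here as a self-contained sketch:

```js
// Generic form of the MutationObserver pattern used above.
function whenVisible(el, callback) {
  if (!el) return;
  if (el.style.display !== 'none') {
    callback();
    return;
  }
  const observer = new MutationObserver(() => {
    if (el.style.display !== 'none') {
      observer.disconnect();
      callback();
    }
  });
  observer.observe(el, { attributes: true, attributeFilter: ['style'] });
}

// Usage, mirroring the feedback flow:
whenVisible(document.getElementById('resultsContent'), () => {
  document.getElementById('feedbackModal').style.display = 'flex';
});
```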
@@ -49,6 +49,21 @@ document.addEventListener('DOMContentLoaded', function() {
      viewPastResultsBtn.closest('li').style.display = 'block';
    }
  }

  // If exam is in PREPARING state, show loading overlay and start polling
  if (data.status === 'PREPARING') {
    console.log('Exam is in PREPARING state, showing loading overlay');
    showLoadingOverlay();
    updateLoadingMessage('Preparing lab environment...');
    updateExamInfo(data.info?.name || 'Unknown Exam');
    // Start polling for status
    pollExamStatus(data.id).then(statusData => {
      if (statusData.status === 'READY') {
        // Redirect to exam page when ready
        window.location.href = `/exam.html?id=${data.id}`;
      }
    });
  }
}
})
.catch(error => {
@@ -366,6 +381,13 @@ document.addEventListener('DOMContentLoaded', function() {
  showLoadingOverlay(); // Show the loading overlay instead of pageLoader
  updateLoadingMessage('Starting lab environment...');
  updateExamInfo(`Lab: ${selectedLab.name} | Difficulty: ${selectedLab.difficulty || 'Medium'}`);
  let userAgent = '';
  try {
    userAgent = navigator.userAgent;
  } catch (error) {
    console.error('Error getting user agent:', error);
  }
  selectedLab.userAgent = userAgent;

  // Make a POST request to the facilitator API - using exams endpoint for POST
  fetch('/facilitator/api/v1/exams/', {
@@ -428,7 +450,7 @@ document.addEventListener('DOMContentLoaded', function() {

async function pollExamStatus(examId) {
  const startTime = Date.now();
  const pollInterval = 1000; // Poll every 5 seconds
  const pollInterval = 1000; // Poll every 1 second

  return new Promise((resolve, reject) => {
    const poll = async () => {
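The diff truncates the body of `poll`. A plausible completion under stated assumptions: it relies on a status-fetching helper (named `getExamStatus` here, which is an invented name) and a timeout bound that the real code may or may not have:

```js
async function pollExamStatus(examId) {
  const startTime = Date.now();
  const pollInterval = 1000; // Poll every 1 second
  const maxWait = 10 * 60 * 1000; // assumption: give up after 10 minutes

  return new Promise((resolve, reject) => {
    const poll = async () => {
      try {
        const statusData = await getExamStatus(examId); // hypothetical helper
        if (statusData.status === 'READY') {
          resolve(statusData);
        } else if (Date.now() - startTime > maxWait) {
          reject(new Error('Timed out waiting for exam to become READY'));
        } else {
          setTimeout(poll, pollInterval);
        }
      } catch (err) {
        reject(err);
      }
    };
    poll();
  });
}
```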
@@ -111,6 +111,60 @@
        </div>
      </div>
    </div>

    <!-- Feedback Modal -->
    <div id="feedbackModal" class="modal-overlay" style="display: none;">
      <div class="modal-content">
        <div class="modal-header">
          <h4>Your Feedback Matters</h4>
        </div>
        <div class="modal-body">
          <div class="feedback-form">
            <div class="rating-container">
              <p>How would you rate your overall experience with CK-X?</p>
              <div class="star-rating">
                <input type="radio" id="star5" name="rating" value="5" />
                <label for="star5" title="5 stars"><i class="fas fa-star"></i></label>
                <input type="radio" id="star4" name="rating" value="4" />
                <label for="star4" title="4 stars"><i class="fas fa-star"></i></label>
                <input type="radio" id="star3" name="rating" value="3" />
                <label for="star3" title="3 stars"><i class="fas fa-star"></i></label>
                <input type="radio" id="star2" name="rating" value="2" />
                <label for="star2" title="2 stars"><i class="fas fa-star"></i></label>
                <input type="radio" id="star1" name="rating" value="1" />
                <label for="star1" title="1 star"><i class="fas fa-star"></i></label>
              </div>
            </div>

            <div class="feedback-text">
              <label for="feedbackComment">How was your overall experience?</label>
              <textarea id="feedbackComment" rows="4" placeholder="Please share your thoughts..."></textarea>
            </div>

            <div class="testimonial-option">
              <input type="checkbox" id="testimonialConsent" />
              <label for="testimonialConsent">I'd like to be featured in testimonials</label>
            </div>

            <div id="testimonialFields" style="display: none;">
              <div class="form-field">
                <label for="testimonialName">Name <span class="required">*</span></label>
                <input type="text" id="testimonialName" placeholder="Your name" required />
              </div>
              <div class="form-field">
                <label for="testimonialSocial">Social Handle (optional)</label>
                <input type="text" id="testimonialSocial" placeholder="@yoursocialhandle" />
              </div>
            </div>
          </div>
        </div>
        <div class="modal-footer">
          <button id="submitFeedbackBtn" class="btn btn-primary">
            <i class="fas fa-paper-plane me-2"></i> Submit Feedback
          </button>
        </div>
      </div>
    </div>
    <script data-name="BMC-Widget" data-cfasync="false" src="https://cdnjs.buymeacoffee.com/1.0.0/widget.prod.min.js" data-id="nishan.b" data-description="Support me on Buy me a coffee!" data-message="CK-X helped you prep? A coffee helps it grow !!" data-color="#5F7FFF" data-position="Right" data-x_margin="18" data-y_margin="18"></script>
  </body>
</html>
@@ -1,24 +0,0 @@
/**
 * API Routes module
 * Defines all API endpoints for the application
 */

const express = require('express');
const router = express.Router();
const config = require('../config/config');

/**
 * GET /api/vnc-info
 * Returns information about the VNC server
 */
router.get('/vnc-info', (req, res) => {
  res.json({
    host: config.VNC_SERVICE_HOST,
    port: config.VNC_SERVICE_PORT,
    wsUrl: `/websockify`,
    defaultPassword: config.VNC_PASSWORD,
    status: 'connected'
  });
});

module.exports = router;
app/server.js (239 changes)

@@ -1,11 +1,12 @@
const express = require('express');
const cors = require('cors');
const path = require('path');
const fs = require('fs');
const { createProxyMiddleware } = require('http-proxy-middleware');
const http = require('http');
const socketio = require('socket.io');
const { Client } = require('ssh2');
const SSHTerminal = require('./services/ssh-terminal');
const PublicService = require('./services/public-service');
const RouteService = require('./services/route-service');
const VNCService = require('./services/vnc-service');

// Server configuration
const PORT = process.env.PORT || 3000;

@@ -25,211 +26,47 @@ const app = express();
const server = http.createServer(app);
const io = socketio(server);

// SSH terminal namespace
const sshIO = io.of('/ssh');

// Handle SSH connections
sshIO.on('connection', (socket) => {
  console.log('New SSH terminal connection established');

  let ssh = new Client();

  // Connect to the SSH server
  ssh.on('ready', () => {
    console.log('SSH connection established');

    // Create shell session
    ssh.shell((err, stream) => {
      if (err) {
        console.error('SSH shell error:', err);
        socket.emit('data', `Error: ${err.message}\r\n`);
        socket.disconnect();
        return;
      }

      // Handle incoming data from SSH server
      stream.on('data', (data) => {
        socket.emit('data', data.toString('utf-8'));
      });

      // Handle errors on stream
      stream.on('close', () => {
        console.log('SSH stream closed');
        ssh.end();
        socket.disconnect();
      });

      stream.on('error', (err) => {
        console.error('SSH stream error:', err);
        socket.emit('data', `Error: ${err.message}\r\n`);
      });

      // Handle incoming data from browser
      socket.on('data', (data) => {
        stream.write(data);
      });

      // Handle resize events
      socket.on('resize', (dimensions) => {
        if (dimensions && dimensions.cols && dimensions.rows) {
          stream.setWindow(dimensions.rows, dimensions.cols, 0, 0);
        }
      });

      // Handle socket disconnection
      socket.on('disconnect', () => {
        console.log('Client disconnected from SSH terminal');
        stream.close();
        ssh.end();
      });
    });
  });

  // Handle SSH connection errors
  ssh.on('error', (err) => {
    console.error('SSH connection error:', err);
    socket.emit('data', `SSH connection error: ${err.message}\r\n`);
    socket.disconnect();
  });

  // Connect to SSH server
  ssh.connect({
    host: SSH_HOST,
    port: SSH_PORT,
    username: SSH_USER,
    password: SSH_PASSWORD,
    readyTimeout: 30000,
    keepaliveInterval: 10000
  });
// Initialize SSH Terminal
const sshTerminal = new SSHTerminal({
  host: SSH_HOST,
  port: SSH_PORT,
  username: SSH_USER,
  password: SSH_PASSWORD
});

// Create the public directory if it doesn't exist
const publicDir = path.join(__dirname, 'public');
if (!fs.existsSync(publicDir)) {
  fs.mkdirSync(publicDir, { recursive: true });
  console.log('Created public directory');
}
// Initialize Public Service
const publicService = new PublicService(path.join(__dirname, 'public'));
publicService.initialize();

// Copy index.html to public directory if it doesn't exist
const indexHtmlSrc = path.join(__dirname, 'index.html');
const indexHtmlDest = path.join(publicDir, 'index.html');
if (fs.existsSync(indexHtmlSrc) && !fs.existsSync(indexHtmlDest)) {
  fs.copyFileSync(indexHtmlSrc, indexHtmlDest);
  console.log('Copied index.html to public directory');
}

// Initialize VNC Service
const vncService = new VNCService({
  host: VNC_SERVICE_HOST,
  port: VNC_SERVICE_PORT,
  password: VNC_PASSWORD
});

// SSH terminal namespace
const sshIO = io.of('/ssh');
sshIO.on('connection', (socket) => {
  sshTerminal.handleConnection(socket);
});

// Initialize Route Service
const routeService = new RouteService(publicService, vncService);

// Serve static files from the public directory
app.use(express.static(publicService.getPublicDir()));

// Setup VNC proxy
vncService.setupVNCProxy(app);

// Setup routes
routeService.setupRoutes(app);

// Enable CORS
app.use(cors());

// Serve static files from the public directory
app.use(express.static(path.join(__dirname, 'public')));

// Configure VNC proxy middleware
const vncProxyConfig = {
  target: `http://${VNC_SERVICE_HOST}:${VNC_SERVICE_PORT}`,
  changeOrigin: true,
  ws: true,
  secure: false,
  pathRewrite: {
    '^/vnc-proxy': ''
  },
  onProxyReq: (proxyReq, req, res) => {
    // Log HTTP requests being proxied
    console.log(`Proxying HTTP request to VNC server: ${req.url}`);
  },
  onProxyReqWs: (proxyReq, req, socket, options, head) => {
    // Log WebSocket connections
    console.log(`WebSocket connection established to VNC server: ${req.url}`);
  },
  onProxyRes: (proxyRes, req, res) => {
    // Log the responses from VNC server
    console.log(`Received response from VNC server for: ${req.url}`);
  },
  onError: (err, req, res) => {
    console.error(`Proxy error: ${err.message}`);
    if (res && res.writeHead) {
      res.writeHead(500, {
        'Content-Type': 'text/plain'
      });
      res.end(`Proxy error: ${err.message}`);
    }
  }
};

// Middleware to enhance VNC URLs with authentication if needed
app.use('/vnc-proxy', (req, res, next) => {
  // Check if the URL already has a password parameter
  if (!req.query.password) {
    // If no password provided, add default password
    console.log('Adding default VNC password to request');
    const separator = req.url.includes('?') ? '&' : '?';
    req.url = `${req.url}${separator}password=${VNC_PASSWORD}`;
  }
  next();
}, createProxyMiddleware(vncProxyConfig));

// Direct WebSocket proxy to handle the websockify endpoint
app.use('/websockify', createProxyMiddleware({
  ...vncProxyConfig,
  pathRewrite: {
    '^/websockify': '/websockify'
  },
  ws: true,
  onProxyReqWs: (proxyReq, req, socket, options, head) => {
    // Log WebSocket connections to websockify
    console.log(`WebSocket connection to websockify established: ${req.url}`);

    // Add additional headers if needed
    proxyReq.setHeader('Origin', `http://${VNC_SERVICE_HOST}:${VNC_SERVICE_PORT}`);
  },
  onError: (err, req, res) => {
    console.error(`Websockify proxy error: ${err.message}`);
    if (res && res.writeHead) {
      res.writeHead(500, {
        'Content-Type': 'text/plain'
      });
      res.end(`Websockify proxy error: ${err.message}`);
    }
  }
}));

// API endpoint to get VNC server info
app.get('/api/vnc-info', (req, res) => {
  res.json({
    host: VNC_SERVICE_HOST,
    port: VNC_SERVICE_PORT,
    wsUrl: `/websockify`,
    defaultPassword: VNC_PASSWORD,
    status: 'connected'
  });
});

// Health check endpoint
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'ok', message: 'Service is healthy' });
});

// Catch-all route to serve index.html for any other requests
app.get('*', (req, res) => {
  // Special handling for exam page
  if (req.path === '/exam') {
    res.sendFile(path.join(__dirname, 'public', 'exam.html'));
  }
  // Special handling for results page
  else if (req.path === '/results') {
    res.sendFile(path.join(__dirname, 'public', 'results.html'));
  }
  else {
    res.sendFile(path.join(__dirname, 'public', 'index.html'));
  }
});

// Handle errors
app.use((err, req, res, next) => {
  console.error('Server error:', err);
  res.status(500).sendFile(path.join(__dirname, 'public', '50x.html'));
});

// Start the server
server.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
app/services/public-service.js (new file, 35 lines)

@@ -0,0 +1,35 @@
const fs = require('fs');
const path = require('path');

class PublicService {
  constructor(publicDir) {
    this.publicDir = publicDir;
    this.indexHtmlSrc = path.join(__dirname, '..', 'index.html');
    this.indexHtmlDest = path.join(publicDir, 'index.html');
  }

  initialize() {
    this.createPublicDirectory();
    this.copyIndexHtml();
  }

  createPublicDirectory() {
    if (!fs.existsSync(this.publicDir)) {
      fs.mkdirSync(this.publicDir, { recursive: true });
      console.log('Created public directory');
    }
  }

  copyIndexHtml() {
    if (fs.existsSync(this.indexHtmlSrc) && !fs.existsSync(this.indexHtmlDest)) {
      fs.copyFileSync(this.indexHtmlSrc, this.indexHtmlDest);
      console.log('Copied index.html to public directory');
    }
  }

  getPublicDir() {
    return this.publicDir;
  }
}

module.exports = PublicService;
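A quick usage sketch of the service in isolation; the temp directory is illustrative (server.js uses `path.join(__dirname, 'public')`):

```js
const os = require('os');
const path = require('path');
const PublicService = require('./services/public-service');

const publicService = new PublicService(path.join(os.tmpdir(), 'ckx-public'));
publicService.initialize(); // creates the directory, copies index.html if present
console.log('Static root:', publicService.getPublicDir());
```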
app/services/route-service.js (new file, 43 lines)

@@ -0,0 +1,43 @@
const path = require('path');

class RouteService {
  constructor(publicService, vncService) {
    this.publicService = publicService;
    this.vncService = vncService;
  }

  setupRoutes(app) {
    // API endpoint to get VNC server info
    app.get('/api/vnc-info', (req, res) => {
      res.json(this.vncService.getVNCInfo());
    });

    // Health check endpoint
    app.get('/health', (req, res) => {
      res.status(200).json({ status: 'ok', message: 'Service is healthy' });
    });

    // Catch-all route to serve index.html for any other requests
    app.get('*', (req, res) => {
      // Special handling for exam page
      if (req.path === '/exam') {
        res.sendFile(path.join(this.publicService.getPublicDir(), 'exam.html'));
      }
      // Special handling for results page
      else if (req.path === '/results') {
        res.sendFile(path.join(this.publicService.getPublicDir(), 'results.html'));
      }
      else {
        res.sendFile(path.join(this.publicService.getPublicDir(), 'index.html'));
      }
    });

    // Handle errors
    app.use((err, req, res, next) => {
      console.error('Server error:', err);
      res.status(500).sendFile(path.join(this.publicService.getPublicDir(), '50x.html'));
    });
  }
}

module.exports = RouteService;
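Express dispatches middleware in registration order, so the catch-all `app.get('*')` and the error handler must come after the static and proxy middleware. That is exactly the order server.js uses; condensed here as a standalone sketch:

```js
const path = require('path');
const express = require('express');
const PublicService = require('./services/public-service');
const RouteService = require('./services/route-service');
const VNCService = require('./services/vnc-service');

const app = express();
const publicService = new PublicService(path.join(__dirname, 'public'));
publicService.initialize();
const vncService = new VNCService({}); // class defaults fill in host/port/password
const routeService = new RouteService(publicService, vncService);

// Order matters: static assets first, then the proxies,
// and the catch-all '*' route plus error handler last.
app.use(express.static(publicService.getPublicDir()));
vncService.setupVNCProxy(app);
routeService.setupRoutes(app);

app.listen(3000);
```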
app/services/ssh-terminal.js (new file, 81 lines)

@@ -0,0 +1,81 @@
const { Client } = require('ssh2');

class SSHTerminal {
  constructor(config) {
    this.config = {
      host: config.host || 'remote-terminal',
      port: config.port || 22,
      username: config.username || 'candidate',
      password: config.password || 'password',
      readyTimeout: 30000,
      keepaliveInterval: 10000
    };
  }

  handleConnection(socket) {
    console.log('New SSH terminal connection established');

    let ssh = new Client();

    ssh.on('ready', () => {
      console.log('SSH connection established');
      this.createShellSession(ssh, socket);
    });

    ssh.on('error', (err) => {
      console.error('SSH connection error:', err);
      socket.emit('data', `SSH connection error: ${err.message}\r\n`);
      socket.disconnect();
    });

    ssh.connect(this.config);
  }

  createShellSession(ssh, socket) {
    ssh.shell((err, stream) => {
      if (err) {
        console.error('SSH shell error:', err);
        socket.emit('data', `Error: ${err.message}\r\n`);
        socket.disconnect();
        return;
      }

      this.setupStreamHandlers(stream, socket, ssh);
    });
  }

  setupStreamHandlers(stream, socket, ssh) {
    stream.on('data', (data) => {
      socket.emit('data', data.toString('utf-8'));
    });

    stream.on('close', () => {
      console.log('SSH stream closed');
      ssh.end();
      socket.disconnect();
    });

    stream.on('error', (err) => {
      console.error('SSH stream error:', err);
      socket.emit('data', `Error: ${err.message}\r\n`);
    });

    socket.on('data', (data) => {
      stream.write(data);
    });

    socket.on('resize', (dimensions) => {
      if (dimensions && dimensions.cols && dimensions.rows) {
        stream.setWindow(dimensions.rows, dimensions.cols, 0, 0);
      }
    });

    socket.on('disconnect', () => {
      console.log('Client disconnected from SSH terminal');
      stream.close();
      ssh.end();
    });
  }
}

module.exports = SSHTerminal;
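Wiring the class into a Socket.io namespace follows the pattern server.js uses above; as a minimal standalone sketch (an empty config object falls back to the defaults baked into the constructor):

```js
const http = require('http');
const socketio = require('socket.io');
const SSHTerminal = require('./services/ssh-terminal');

const server = http.createServer();
const io = socketio(server);

// Defaults: remote-terminal:22, user 'candidate'.
const sshTerminal = new SSHTerminal({});

io.of('/ssh').on('connection', (socket) => {
  sshTerminal.handleConnection(socket);
});

server.listen(3000);
```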
app/services/vnc-service.js (new file, 93 lines)

@@ -0,0 +1,93 @@
const { createProxyMiddleware } = require('http-proxy-middleware');

class VNCService {
  constructor(config) {
    this.config = {
      host: config.host || 'remote-desktop-service',
      port: config.port || 6901,
      password: config.password || 'bakku-the-wizard'
    };

    this.vncProxyConfig = {
      target: `http://${this.config.host}:${this.config.port}`,
      changeOrigin: true,
      ws: true,
      secure: false,
      pathRewrite: {
        '^/vnc-proxy': ''
      },
      onProxyReq: (proxyReq, req, res) => {
        // Log HTTP requests being proxied
        console.log(`Proxying HTTP request to VNC server: ${req.url}`);
      },
      onProxyReqWs: (proxyReq, req, socket, options, head) => {
        // Log WebSocket connections
        console.log(`WebSocket connection established to VNC server: ${req.url}`);
      },
      onProxyRes: (proxyRes, req, res) => {
        // Log the responses from VNC server
        console.log(`Received response from VNC server for: ${req.url}`);
      },
      onError: (err, req, res) => {
        console.error(`Proxy error: ${err.message}`);
        if (res && res.writeHead) {
          res.writeHead(500, {
            'Content-Type': 'text/plain'
          });
          res.end(`Proxy error: ${err.message}`);
        }
      }
    };
  }

  setupVNCProxy(app) {
    // Middleware to enhance VNC URLs with authentication if needed
    app.use('/vnc-proxy', (req, res, next) => {
      // Check if the URL already has a password parameter
      if (!req.query.password) {
        // If no password provided, add default password
        console.log('Adding default VNC password to request');
        const separator = req.url.includes('?') ? '&' : '?';
        req.url = `${req.url}${separator}password=${this.config.password}`;
      }
      next();
    }, createProxyMiddleware(this.vncProxyConfig));

    // Direct WebSocket proxy to handle the websockify endpoint
    app.use('/websockify', createProxyMiddleware({
      ...this.vncProxyConfig,
      pathRewrite: {
        '^/websockify': '/websockify'
      },
      ws: true,
      onProxyReqWs: (proxyReq, req, socket, options, head) => {
        // Log WebSocket connections to websockify
        console.log(`WebSocket connection to websockify established: ${req.url}`);

        // Add additional headers if needed
        proxyReq.setHeader('Origin', `http://${this.config.host}:${this.config.port}`);
      },
      onError: (err, req, res) => {
        console.error(`Websockify proxy error: ${err.message}`);
        if (res && res.writeHead) {
          res.writeHead(500, {
            'Content-Type': 'text/plain'
          });
          res.end(`Websockify proxy error: ${err.message}`);
        }
      }
    }));
  }

  getVNCInfo() {
    return {
      host: this.config.host,
      port: this.config.port,
      wsUrl: `/websockify`,
      defaultPassword: this.config.password,
      status: 'connected'
    };
  }
}

module.exports = VNCService;
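On the client, the object `getVNCInfo()` returns is served by the `/api/vnc-info` route registered in route-service.js above; a browser-side sketch of consuming it:

```js
// Browser-side sketch; the response shape comes from getVNCInfo().
fetch('/api/vnc-info')
  .then((res) => res.json())
  .then(({ host, port, wsUrl, status }) => {
    console.log(`VNC backend ${host}:${port} (${status}), WebSocket at ${wsUrl}`);
  })
  .catch((err) => console.error('Could not load VNC info:', err));
```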
@@ -6,8 +6,8 @@ services:
      context: ./remote-desktop
    hostname: terminal
    expose:
-      - "5901" # VNC port (internal only)
-      - "6901" # Web VNC port (internal only)
+      - "5901" # VNC port (internal only)
+      - "6901" # Web VNC port (internal only)
    environment:
-      - VNC_PW=bakku-the-wizard
+      - VNC_PASSWORD=bakku-the-wizard
@@ -21,10 +21,10 @@ services:
    deploy:
      resources:
        limits:
-          cpus: '1'
+          cpus: "1"
          memory: 1G
        reservations:
-          cpus: '0.5'
+          cpus: "0.5"
          memory: 512M
    networks:
      - ckx-network
@@ -35,7 +35,7 @@ services:
    build:
      context: ./app
    expose:
-      - "3000" # Only exposed to internal network
+      - "3000" # Only exposed to internal network
    environment:
      - VNC_SERVICE_HOST=remote-desktop
      - VNC_SERVICE_PORT=6901
@@ -47,10 +47,10 @@ services:
    deploy:
      resources:
        limits:
-          cpus: '0.5'
+          cpus: "0.5"
          memory: 512M
        reservations:
-          cpus: '0.2'
+          cpus: "0.2"
          memory: 256M
    healthcheck:
      test: ["CMD", "wget", "-q", "-O", "-", "http://localhost:3000/"]
@@ -72,14 +72,14 @@ services:
      - facilitator
      - k8s-api-server
    ports:
-      - "30080:80" # Expose Nginx on port 30080
+      - "30080:80" # Expose Nginx on port 30080
    deploy:
      resources:
        limits:
-          cpus: '0.2'
+          cpus: "0.2"
          memory: 256M
        reservations:
-          cpus: '0.1'
+          cpus: "0.1"
          memory: 128M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
@@ -98,15 +98,15 @@ services:
    privileged: true
    # No external port mappings - only accessible internally
    expose:
-      - "22" # SSH port (internal only)
+      - "22" # SSH port (internal only)
    volumes:
-      - kube-config:/home/candidate/.kube # Shared volume for Kubernetes config
+      - kube-config:/home/candidate/.kube # Shared volume for Kubernetes config
    deploy:
      resources:
        limits:
-          cpus: '1'
+          cpus: "1"
        reservations:
-          cpus: '0.5'
+          cpus: "0.5"
          memory: 512M
    networks:
      - ckx-network
@@ -115,7 +115,7 @@ services:
      interval: 10s
      timeout: 5s
      retries: 3

  # Remote Terminal Service
  remote-terminal:
    image: nishanb/ck-x-simulator-remote-terminal:latest
@@ -123,14 +123,14 @@ services:
      context: ./remote-terminal
    hostname: remote-terminal
    expose:
-      - "22" # SSH port (internal only)
+      - "22" # SSH port (internal only)
    deploy:
      resources:
        limits:
-          cpus: '0.5'
+          cpus: "0.5"
          memory: 512M
        reservations:
-          cpus: '0.2'
+          cpus: "0.2"
          memory: 256M
    networks:
      - ckx-network
@@ -139,27 +139,27 @@ services:
      interval: 10s
      timeout: 5s
      retries: 3

  # KIND Kubernetes Cluster
-  k8s-api-server: # Service name that will be used for DNS resolution
+  k8s-api-server: # Service name that will be used for DNS resolution
    image: nishanb/ck-x-simulator-cluster:latest
    build:
      context: ./kind-cluster
    container_name: kind-cluster
    hostname: k8s-api-server
-    privileged: true # Required for running containers inside KIND
+    privileged: true # Required for running containers inside KIND
    expose:
-      - "6443:6443"
+      - "6443"
      - "22"
    volumes:
-      - kube-config:/home/candidate/.kube # Shared volume for Kubernetes config
+      - kube-config:/home/candidate/.kube # Shared volume for Kubernetes config
    deploy:
      resources:
        limits:
-          cpus: '2'
+          cpus: "2"
          memory: 4G
        reservations:
-          cpus: '1'
+          cpus: "1"
          memory: 2G
    networks:
      - ckx-network
@@ -177,18 +177,16 @@ services:
    command: ["redis-server", "--appendonly", "yes"]
    expose:
      - "6379"
    volumes:
      - redis-data:/data
    restart: unless-stopped
    networks:
      - ckx-network
    deploy:
      resources:
        limits:
-          cpus: '0.3'
+          cpus: "0.3"
          memory: 256M
        reservations:
-          cpus: '0.1'
+          cpus: "0.1"
          memory: 128M
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
@@ -224,10 +222,10 @@ services:
    deploy:
      resources:
        limits:
-          cpus: '0.5'
+          cpus: "0.5"
          memory: 512M
        reservations:
-          cpus: '0.2'
+          cpus: "0.2"
          memory: 256M
    healthcheck:
      test: ["CMD", "wget", "-q", "-O", "-", "http://localhost:3000"]
@@ -241,5 +239,4 @@ networks:
    driver: bridge

 volumes:
-  kube-config: # Shared volume for Kubernetes configuration
   redis-data: # Persistent volume for Redis data
+  kube-config: # Shared volume for Kubernetes configuration
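The quoting change that runs through this diff is a style normalization from single to double quotes; YAML treats both as strings, so the cpus limits should behave identically. The resolved file can be sanity-checked with (a sketch, run from the directory containing docker-compose.yml):

docker compose config   # renders the fully-resolved configuration and fails on syntax or type errors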
@@ -5,13 +5,13 @@ Thank you for your interest in contributing! Here's how you can help:
## Quick Start

1. Fork and clone the [repository](https://github.com/@nishanb/CK-X)
-2. Follow our [Development Setup Guide](docs/development-setup.md)
+2. Follow our [Development Setup Guide](development-setup.md)
3. Create a new branch for your changes
4. Submit a Pull Request

## Community

-- Join our [Telegram Community](https://t.me/ckxdev)
+- Join our [Discord Community](https://discord.gg/24HtTEjA)
- Star the repository if you find it helpful

## Important Rules
@@ -22,7 +22,7 @@ Thank you for your interest in contributing! Here's how you can help:
- Focus on teaching concepts

### Lab Guidelines
-- Follow [Lab Creation Guide](docs/how-to-add-new-labs.md)
+- Follow [Lab Creation Guide](how-to-add-new-labs.md)
- Include verification scripts
- Test thoroughly
- Provide clear instructions
@@ -35,7 +35,7 @@ Thank you for your interest in contributing! Here's how you can help:
## Questions?

-Check our [FAQ](docs/FAQ.md) or join our [Telegram Community](https://t.me/ckxdev).
+Check our [FAQ](docs/FAQ.md) or join our [Discord Community](https://discord.gg/24HtTEjA).

## License

@@ -60,7 +60,7 @@ The `index.html` file serves as the main landing page for the CK-X Simulator. He
            </a>
          </li>
          <li class="nav-item">
-            <a class="nav-link" href="https://github.com/nishanb/CKAD-X" target="_blank">
+            <a class="nav-link" href="https://github.com/nishanb/CK-X" target="_blank">
              <!-- GitHub Icon -->
              GitHub
            </a>
@@ -4,7 +4,7 @@
    "id": "1",
    "namespace": "app-team1",
    "machineHostname": "ckad9999",
-    "question": "Create a pod named `nginx-pod` using the `nginx:1.19` image.\n\nEnsure the pod is created in the `app-team1` namespace and has the label `run=nginx-pod`.",
+    "question": "Create a pod named `nginx-pod` using the `nginx:1.19` image.\n\nEnsure the pod is created in the `app-team1` namespace and has the label `run=nginx-pod`.\n\nVerify that the pod is in the Running state.",
    "concepts": ["pods", "labels", "namespaces"],
    "verification": [
      {
@@ -34,7 +34,7 @@
    "id": "2",
    "namespace": "default",
    "machineHostname": "ckad9999",
-    "question": "Create a static pod named `static-web` on `ckad9999` using the `nginx:1.19` image.\n\nPlace the static pod manifest file at `/etc/kubernetes/manifests/static-web.yaml`.",
+    "question": "Create a static pod named `static-web` on `ckad9999` using the `nginx:1.19` image.\nThe pod should expose port `80`.\n\nPlace the static pod manifest file at `/etc/kubernetes/manifests/static-web.yaml`.",
    "concepts": ["static pods", "node configuration"],
    "verification": [
      {
@@ -80,7 +80,7 @@
    "id": "4",
    "namespace": "monitoring",
    "machineHostname": "ckad9999",
-    "question": "Create a pod named `logger` with two containers:\n\n1. A `busybox` container that writes logs to `/var/log/app.log`\n2. A `fluentd` container that reads logs from the same location\n\nUse an `emptyDir` volume to share logs between containers.",
+    "question": "Create a pod named `logger` in the `monitoring` namespace with two containers:\n\n1. A `busybox` container that writes logs to `/var/log/app.log`\n2. A `fluentd` container that reads logs from the same location\n\nUse an `emptyDir` volume named `log-volume` to share logs between containers. Mount this volume at `/var/log` in both containers.\n\nEnsure both containers are running.",
    "concepts": ["multi-container pods", "volumes", "logging"],
    "verification": [
      {
@@ -103,7 +103,7 @@
    "id": "5",
    "namespace": "default",
    "machineHostname": "ckad9999",
-    "question": "Create a ServiceAccount named `app-sa`.\n\nCreate a Role named `pod-reader` that allows listing and getting pods.\n\nCreate a RoleBinding named `read-pods` that binds the `pod-reader` Role to the `app-sa` ServiceAccount.",
+    "question": "Create a ServiceAccount named `app-sa` in the `default` namespace.\n\nCreate a Role named `pod-reader` that allows listing and getting pods.\n\nCreate a RoleBinding named `read-pods` that binds the `pod-reader` Role to the `app-sa` ServiceAccount.",
    "concepts": ["RBAC", "service accounts", "roles"],
    "verification": [
      {
@@ -133,7 +133,7 @@
    "id": "6",
    "namespace": "networking",
    "machineHostname": "ckad9999",
-    "question": "Create a NetworkPolicy named `db-policy` in the `networking` namespace that:\n\n1. Allows pods with label `role=frontend` to connect to pods with label `role=db` on port `3306`\n2. Denies all other ingress traffic to pods with label `role=db`",
+    "question": "Create a NetworkPolicy named `db-policy` in the `networking` namespace that:\n\n1. Allows pods with label `role=frontend` to connect to pods with label `role=db` on port `3306`\n2. Denies all other ingress traffic to pods with label `role=db`\n\nEnsure the policy is correctly applied to pods with the matching labels.",
    "concepts": ["network policies", "pod networking"],
    "verification": [
      {
@@ -156,7 +156,7 @@
    "id": "7",
    "namespace": "default",
    "machineHostname": "ckad9999",
-    "question": "Create a deployment named `web-app` with `3` replicas using the `nginx:1.19` image.\n\nCreate a NodePort service named `web-service` that exposes the deployment on port `80`.\n\nEnsure the pods are distributed across multiple nodes.",
+    "question": "Create a deployment named `web-app` in the `default` namespace with `3` replicas using the `nginx:1.19` image.\n\nCreate a NodePort service named `web-service` that exposes the deployment on port `80`.\n\nEnsure the pods are distributed across multiple nodes using an appropriate pod anti-affinity rule.",
    "concepts": ["deployments", "services", "pod distribution"],
    "verification": [
      {
@@ -179,7 +179,7 @@
    "id": "8",
    "namespace": "monitoring",
    "machineHostname": "ckad9999",
-    "question": "Create a pod named `resource-pod` in the `monitoring` namespace with the following resource requirements:\n\n- CPU request: `100m`\n- CPU limit: `200m`\n- Memory request: `128Mi`\n- Memory limit: `256Mi`",
+    "question": "Create a pod named `resource-pod` in the `monitoring` namespace using the `nginx` image with the following resource requirements:\n\n- CPU request: `100m`\n- CPU limit: `200m`\n- Memory request: `128Mi`\n- Memory limit: `256Mi`\n\nEnsure the pod is in the Running state with all resource constraints applied correctly.",
    "concepts": ["resource management", "pod configuration"],
    "verification": [
      {
@@ -202,7 +202,7 @@
    "id": "9",
    "namespace": "default",
    "machineHostname": "ckad9999",
-    "question": "Create a ConfigMap named `app-config` with the key `APP_COLOR` and value `blue`.\n\nCreate a pod named `config-pod` that mounts this ConfigMap as a volume at `/etc/config`.",
+    "question": "Create a ConfigMap named `app-config` with the key `APP_COLOR` and value `blue`.\n\nCreate a pod named `config-pod` using the `nginx` image that mounts this ConfigMap as a volume named `config-volume` at `/etc/config`.\n\nVerify that the configuration is correctly accessible within the pod.",
    "concepts": ["configmaps", "volumes", "pod configuration"],
    "verification": [
      {
@@ -232,7 +232,7 @@
    "id": "10",
    "namespace": "default",
    "machineHostname": "ckad9999",
-    "question": "Create a pod named `health-check` with the following health check configuration:\n\n- Liveness probe: HTTP GET on `/` port `80` with initial delay of `5` seconds\n- Readiness probe: HTTP GET on `/` port `80` with initial delay of `5` seconds",
+    "question": "Create a pod named `health-check` in the `default` namespace using the `nginx` image with the following health check configuration:\n\n- Liveness probe: HTTP GET on path `/` port `80` with initial delay of `5` seconds\n- Readiness probe: HTTP GET on path `/` port `80` with initial delay of `5` seconds\n\nEnsure the pod is running with both probes functioning correctly.",
    "concepts": ["health checks", "pod lifecycle"],
    "verification": [
      {
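The revised questions above mostly pin down details the originals left implicit (target namespace, image, explicit verification steps). As an illustration, the reworded question 1 can be solved and self-checked imperatively (a sketch, not the lab's official answer key):

kubectl run nginx-pod --image=nginx:1.19 -n app-team1    # kubectl run labels the pod run=nginx-pod by default
kubectl get pod nginx-pod -n app-team1 --show-labels     # confirm Running status and the run=nginx-pod label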
@@ -1,6 +1,6 @@
{
  "lab": "cka-001",
-  "workerNodes": 2,
+  "workerNodes": 1,
  "answers": "assets/exams/cka/001/answers.md",
  "questions": "assessment.json",
  "totalMarks": 100,
facilitator/assets/exams/cka/002/answers.md (new file, 1088 lines; diff suppressed because it is too large)

facilitator/assets/exams/cka/002/answers.sh (new file, 999 lines)
@@ -0,0 +1,999 @@
#!/bin/bash

# answers.md converted to answers.sh for quick testing
echo "Implementing Question 1: Dynamic PVC and Pod"
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  namespace: storage-task
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
  namespace: storage-task
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
EOF
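
# Sketch (not part of the original answer key): quick checks that the PVC bound
# and the pod is running with the claim mounted.
kubectl -n storage-task get pvc data-pvc                                 # expect STATUS Bound
kubectl -n storage-task get pod data-pod -o jsonpath='{.status.phase}'   # expect Running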

echo "Implementing Question 2: Storage Class Configuration"
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
EOF

kubectl patch storageclass default-test -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
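
# Sketch: fast-local should now be the only class annotated "(default)".
kubectl get storageclass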

echo "Implementing Question 3: Manual Storage Configuration"
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k3d-cluster-agent-0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-pvc
  namespace: manual-storage
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: manual-pod
  namespace: manual-storage
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: manual-pvc
EOF

echo "Implementing Question 4: Deployment with HPA"
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scaling-app
  namespace: scaling
spec:
  replicas: 2
  selector:
    matchLabels:
      app: scaling-app
  template:
    metadata:
      labels:
        app: scaling-app
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: scaling-app
  namespace: scaling
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: scaling-app
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
EOF
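
# Sketch: confirm the HPA found its target and the deployment is at its minimum size.
kubectl -n scaling get hpa scaling-app
kubectl -n scaling get deployment scaling-app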

echo "Implementing Question 5: Node Affinity Configuration"
kubectl label node k3d-cluster-agent-1 disk=ssd

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-scheduling
  namespace: scheduling
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-scheduling
  template:
    metadata:
      labels:
        app: app-scheduling
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disk
                operator: In
                values:
                - ssd
      containers:
      - name: nginx
        image: nginx
EOF

echo "Implementing Question 6: Pod Security Policy"
kubectl label namespace security pod-security.kubernetes.io/enforce=restricted pod-security.kubernetes.io/enforce-version=latest

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  namespace: security
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: nginx
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop:
        - ALL
    volumeMounts:            # mount added so the 'html' volume is actually used, as the question requires
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    emptyDir: {}
EOF

echo "Implementing Question 7: Node Taints and Tolerations"
kubectl taint node k3d-cluster-agent-1 special-workload=true:NoSchedule

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toleration-deploy
  namespace: scheduling
spec:
  replicas: 2
  selector:
    matchLabels:
      app: toleration-deploy
  template:
    metadata:
      labels:
        app: toleration-deploy
    spec:
      tolerations:
      - key: "special-workload"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
      containers:
      - name: nginx
        image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: normal-deploy
  namespace: scheduling
spec:
  replicas: 2
  selector:
    matchLabels:
      app: normal-deploy
  template:
    metadata:
      labels:
        app: normal-deploy
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
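
# Sketch: toleration-deploy pods may land on the tainted node; normal-deploy pods must not.
kubectl -n scheduling get pods -o wide
kubectl describe node k3d-cluster-agent-1 | grep -A1 Taints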

echo "Implementing Question 8: StatefulSet and Headless Service"

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: stateful
spec:
  clusterIP: None
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: stateful
spec:
  serviceName: web-svc
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: cold
      resources:
        requests:
          storage: 1Gi
EOF
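
# Sketch: pods should come up in order as web-0, web-1, web-2, each bound to its
# own PVC generated from the volumeClaimTemplate.
kubectl -n stateful get pods -l app=web
kubectl -n stateful get pvc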

echo "Implementing Question 9: DNS Configuration and Debugging"
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: dns-debug
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: dns-debug
spec:
  selector:
    app: web-app
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
  namespace: dns-debug
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sh
    - -c
    - "wget -qO- http://web-svc && wget -qO- http://web-svc.dns-debug.svc.cluster.local && sleep 36000"
  dnsConfig:
    searches:
    - dns-debug.svc.cluster.local
    - svc.cluster.local
    - cluster.local
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-config
  namespace: dns-debug
data:
  search-domains: |
    search dns-debug.svc.cluster.local svc.cluster.local cluster.local
EOF

echo "Implementing Question 10: Set up basic DNS service discovery"
echo "Implementing solution for Question 10"
kubectl create namespace dns-config

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dns-app
  namespace: dns-config
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dns-app
  template:
    metadata:
      labels:
        app: dns-app
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
# Create the service
apiVersion: v1
kind: Service
metadata:
  name: dns-svc
  namespace: dns-config
spec:
  selector:
    app: dns-app
  ports:
  - port: 80
    targetPort: 80
---
# Create the DNS tester pod
apiVersion: v1
kind: Pod
metadata:
  name: dns-tester
  namespace: dns-config
spec:
  containers:
  - name: dns-tester
    image: infoblox/dnstools
    command:
    - sh
    - -c
    - |
      nslookup dns-svc > /tmp/dns-test.txt
      nslookup dns-svc.dns-config.svc.cluster.local >> /tmp/dns-test.txt
      sleep 3600
EOF

echo "Implementing Question 11: Helm Chart Deployment"
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install web-release bitnami/nginx \
  --namespace helm-test \
  --set service.type=NodePort \
  --set replicaCount=2
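
# Sketch: confirm the release installed with the overridden values.
helm status web-release -n helm-test
kubectl -n helm-test get svc,pods   # expect a NodePort service and 2 nginx pods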

echo "Implementing Question 12: Kustomize Configuration"
mkdir -p /tmp/exam/kustomize/base
mkdir -p /tmp/exam/kustomize/overlays/production

cat <<'EOF' > /tmp/exam/kustomize/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: nginx-index
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: nginx-index
        configMap:
          name: nginx-config
EOF

cat <<'EOF' > /tmp/exam/kustomize/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
EOF

cat <<'EOF' > /tmp/exam/kustomize/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kustomize
bases:
- ../../base
patches:
- patch: |
    - op: replace
      path: /spec/replicas
      value: 3
  target:
    kind: Deployment
    name: nginx
commonLabels:
  environment: production
configMapGenerator:
- name: nginx-config
  literals:
  - index.html=Welcome to Production
EOF

kubectl create namespace kustomize
kubectl apply -k /tmp/exam/kustomize/overlays/production/
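
# Sketch: render the overlay without applying it to inspect the generated
# ConfigMap name, the production label, and the patched replica count.
kubectl kustomize /tmp/exam/kustomize/overlays/production/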

echo "Implementing Question 13: Gateway API Configuration"
cat <<'EOF' | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: main-gateway
  namespace: gateway
spec:
  gatewayClassName: standard
  listeners:
  - name: http
    port: 80
    protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-routes
  namespace: gateway
spec:
  parentRefs:
  - name: main-gateway
  rules:
  - matches:
    - path:
        value: /app1
    backendRefs:
    - name: app1-svc
      port: 8080
  - matches:
    - path:
        value: /app2
    backendRefs:
    - name: app2-svc
      port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: app1-svc
  namespace: gateway
spec:
  selector:
    app: app1
  ports:
  - port: 8080
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: app2-svc
  namespace: gateway
spec:
  selector:
    app: app2
  ports:
  - port: 8080
    targetPort: 80
EOF

echo "Implementing Question 14: Resource Limits and Quotas"
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
  namespace: limits
spec:
  limits:
  - type: Container
    default:
      cpu: 200m
      memory: 256Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    max:
      cpu: 500m
      memory: 512Mi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: limits
spec:
  hard:
    cpu: "2"
    memory: 2Gi
    pods: "5"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-limits
  namespace: limits
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-limits
  template:
    metadata:
      labels:
        app: test-limits
    spec:
      containers:
      - name: nginx
        image: nginx
EOF

echo "Implementing Question 15: Horizontal Pod Autoscaling"
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource-consumer
  namespace: monitoring
spec:
  replicas: 3
  selector:
    matchLabels:
      app: resource-consumer
  template:
    metadata:
      labels:
        app: resource-consumer
    spec:
      containers:
      - name: resource-consumer
        image: gcr.io/kubernetes-e2e-test-images/resource-consumer:1.5
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: resource-consumer
  namespace: monitoring
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: resource-consumer
  minReplicas: 3
  maxReplicas: 6
  targetCPUUtilizationPercentage: 50
EOF

echo "Implementing Question 16: RBAC Configuration"
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-admin
  namespace: cluster-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-admin
  namespace: cluster-admin
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["list", "get", "watch", "update"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-admin
  namespace: cluster-admin
subjects:
- kind: ServiceAccount
  name: app-admin
  namespace: cluster-admin
roleRef:
  kind: Role
  name: app-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: admin-pod
  namespace: cluster-admin
spec:
  serviceAccountName: app-admin
  containers:
  - name: kubectl
    image: bitnami/kubectl:latest
    command: ["sleep", "3600"]
    volumeMounts:
    - name: token
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  volumes:
  - name: token
    projected:
      sources:
      - serviceAccountToken:
          expirationSeconds: 3600
          audience: kubernetes.default.svc
EOF

echo "Implementing Question 17: Network Policies"
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: network
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: network
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: network
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres
        env:
        - name: POSTGRES_HOST_AUTH_METHOD
          value: trust
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: network
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: api
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
  namespace: network
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
  namespace: network
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
EOF

echo "Implementing Question 18: Rolling Updates"
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-v1
  namespace: upgrade
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: app-v1
  template:
    metadata:
      labels:
        app: app-v1
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
EOF

kubectl set image deployment/app-v1 nginx=nginx:1.20 -n upgrade --record
kubectl rollout history deployment app-v1 -n upgrade > /tmp/exam/rollout-history.txt
kubectl rollout undo deployment/app-v1 -n upgrade
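
# Sketch: wait for the rollback to converge and confirm the image is back to nginx:1.19.
kubectl rollout status deployment/app-v1 -n upgrade
kubectl -n upgrade get deployment app-v1 -o jsonpath='{.spec.template.spec.containers[0].image}'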

echo "Implementing Question 19: Pod Priority and Anti-affinity"
cat <<'EOF' | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 100
---
apiVersion: v1
kind: Pod
metadata:
  name: high-priority
  namespace: scheduling
  labels:
    priority: high
spec:
  priorityClassName: high-priority
  containers:
  - name: nginx
    image: nginx
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: priority
            operator: In
            values:
            - high
            - low
        topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Pod
metadata:
  name: low-priority
  namespace: scheduling
  labels:
    priority: low
spec:
  priorityClassName: low-priority
  containers:
  - name: nginx
    image: nginx
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: priority
            operator: In
            values:
            - high
            - low
        topologyKey: kubernetes.io/hostname
EOF

echo "Implementing Question 20: Troubleshooting Application"
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: failing-app
  namespace: troubleshoot
spec:
  replicas: 3
  selector:
    matchLabels:
      app: failing-app
  template:
    metadata:
      labels:
        app: failing-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: 256Mi
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
EOF

echo "All solutions have been implemented successfully!"
facilitator/assets/exams/cka/002/assessment.json (new file, 597 lines)
@@ -0,0 +1,597 @@
|
||||
{
|
||||
"questions": [
|
||||
{
|
||||
"id": "1",
|
||||
"namespace": "storage-task",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Create a Dynamic PVC named `data-pvc` with the following specifications:\n\n- Storage Class: `standard`\n- Access Mode: `ReadWriteOnce`\n- Storage Request: `2Gi`\n\nThen create a Pod named `data-pod` using the `nginx` image that mounts this PVC as volume with name `data` at `/usr/share/nginx/html`.\n\nEnsure both the PVC and Pod are in the `storage-task` namespace.",
|
||||
"concepts": ["storage", "persistent-volumes", "pods"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "PVC exists with correct specifications",
|
||||
"verificationScriptFile": "q1_s1_validate_pvc.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "Pod exists and is running",
|
||||
"verificationScriptFile": "q1_s2_validate_pod.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "PVC is correctly mounted in the pod",
|
||||
"verificationScriptFile": "q1_s3_validate_mount.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"namespace": "storage-class",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Create a new StorageClass named `fast-local` with the following specifications:\n\n- Provisioner: `rancher.io/local-path`\n- VolumeBindingMode: `WaitForFirstConsumer`\n- Set it as the `default` StorageClass\n\nNote: Ensure any existing default StorageClass is no longer marked as default.",
|
||||
"concepts": ["storage", "storage-classes"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "StorageClass exists with correct specifications",
|
||||
"verificationScriptFile": "q2_s1_validate_sc.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "StorageClass is set as default",
|
||||
"verificationScriptFile": "q2_s2_validate_default.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "No other StorageClass is marked as default",
|
||||
"verificationScriptFile": "q2_s3_validate_no_other_default.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"namespace": "manual-storage",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Create a PersistentVolume named `manual-pv` with the following specifications:\n\n- Storage: `1Gi`\n- Access Mode: `ReadWriteOnce`\n- Host Path: `/mnt/data`\n- Node Affinity: Must run on node `k3d-cluster-agent-0`\n\nThen create a PersistentVolumeClaim named `manual-pvc` that binds to this PV.\n\nFinally, create a Pod named `manual-pod` using the `busybox` image that mounts this PVC at `/data` and runs the command `sleep 3600`. ",
|
||||
"concepts": ["storage", "persistent-volumes", "node-affinity"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "PV exists with correct specifications",
|
||||
"verificationScriptFile": "q3_s1_validate_pv.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "PVC exists and is bound to PV",
|
||||
"verificationScriptFile": "q3_s2_validate_pvc.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "Pod exists, is running, and uses the PVC",
|
||||
"verificationScriptFile": "q3_s3_validate_pod.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "4",
|
||||
"namespace": "scaling",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Create a Deployment named `scaling-app` with the following specifications:\n\n- Image: `nginx`\n- Initial replicas: `2`\n- Resource requests: CPU: `200m`, Memory: `256Mi`\n- Resource limits: CPU: `500m`, Memory: `512Mi`\n\nThen create a `HorizontalPodAutoscaler` for this deployment:\n- Use apiVersion: `autoscaling/v1`\n- Min replicas: `2`\n- Max replicas: `5`\n- Target CPU utilization: `70%`",
|
||||
"concepts": ["autoscaling", "deployments", "resource-management"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Deployment exists with correct specifications",
|
||||
"verificationScriptFile": "q4_s1_validate_deployment.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "HPA exists with correct specifications",
|
||||
"verificationScriptFile": "q4_s2_validate_hpa.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "Deployment has correct resource configurations",
|
||||
"verificationScriptFile": "q4_s3_validate_resources.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "5",
|
||||
"namespace": "scheduling",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Create a deployment named `app-scheduling` with `3` replicas using the `nginx` image that must only run on node `k3d-cluster-agent-1` using node affinity (not node selector).\n\nRequirements:\n- Use `requiredDuringSchedulingIgnoredDuringExecution`\n- Match the node by its hostname\n- Label the target node with `disk=ssd` before creating the deployment",
|
||||
"concepts": ["scheduling", "node-affinity", "deployments"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Node is correctly labeled",
|
||||
"verificationScriptFile": "q5_s1_validate_node_label.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "Deployment exists with correct node affinity",
|
||||
"verificationScriptFile": "q5_s2_validate_affinity.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "All pods are running on the correct node",
|
||||
"verificationScriptFile": "q5_s3_validate_pod_placement.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "6",
|
||||
"namespace": "security",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Configure Pod Security for the `security` namespace and create a secure `nginx` pod with `nginx` image :\n\n1. Create the `security` namespace with Pod Security Admission (PSA) controls\n - Set the namespace to enforce the 'restricted' security profile\n - Use the latest version of the security profile\n\n2. Create a secure pod named `secure-pod` in the `security` namespace with the following specifications:\n - Use the `nginx` image\n - Set the pod-level security context to:\n * Run as a non-root user with UID `1000`\n * Enable `runAsNonRoot`\n\n - Configure container-level security context to:\n * Prevent privilege escalation\n * Run as non-root user (UID `1000`)\n * Drop ALL Linux capabilities\n\n - Add a volume mount:\n * Create an emptyDir volume named 'html'\n * Mount the volume at '/usr/share/nginx/html' \n use default seccomp profile",
|
||||
"concepts": ["security", "pod-security-policy"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Pod Security Policy exists with correct specifications",
|
||||
"verificationScriptFile": "q6_s1_validate_psp.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "Pod exists and complies with the policy",
|
||||
"verificationScriptFile": "q6_s2_validate_pod.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "7",
|
||||
"namespace": "scheduling",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Configure node `k3d-cluster-agent-1` with a taint `key=special-workload`, `value=true` and `effect=NoSchedule`.\n\nThen create a deployment named `toleration-deploy` with `2` replicas using the `nginx` image that can tolerate this taint.\n\nFinally, create another deployment named `normal-deploy` with `2` replicas that should not run on the tainted node.",
|
||||
"concepts": ["scheduling", "taints", "tolerations"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Node is correctly tainted",
|
||||
"verificationScriptFile": "q7_s1_validate_taint.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "Toleration deployment exists and pods are scheduled correctly",
|
||||
"verificationScriptFile": "q7_s2_validate_toleration_deploy.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "Normal deployment exists and pods avoid tainted node",
|
||||
"verificationScriptFile": "q7_s3_validate_normal_deploy.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "8",
|
||||
"namespace": "stateful",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Create a StatefulSet named `web` with `3` replicas using the `nginx` image. Requirements:\n\n- Create a headless service named `web-svc` to expose the StatefulSet\n- Each pod should have a volume mounted at `/usr/share/nginx/html`\n- Use the StorageClass `cold` for dynamic provisioning\n- Volume claim template should request 1Gi storage\n\nEnsure pods are created in sequence and can be accessed using their stable network identity.",
|
||||
"concepts": ["statefulsets", "headless-services", "storage"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "StatefulSet exists with correct specifications",
|
||||
"verificationScriptFile": "q8_s1_validate_statefulset.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "Headless service exists and is configured correctly",
|
||||
"verificationScriptFile": "q8_s2_validate_service.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "PVCs are created and bound correctly",
|
||||
"verificationScriptFile": "q8_s3_validate_storage.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "9",
|
||||
"namespace": "dns-debug",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Set up DNS service discovery testing environment:\n\n1. Create a deployment named `web-app` with `3` replicas using `nginx` image\n\n2. Create a `ClusterIP` service named `web-svc` to expose the deployment\n\n3. Create a Pod named `dns-test` using the `busybox` image that will:\n - Run the command `wget -qO- http://web-svc && wget -qO- http://web-svc.dns-debug.svc.cluster.local && sleep 36000`\n - Verify it can resolve both the service DNS (`web-svc.dns-debug.svc.cluster.local`)\n - Verify it can resolve pod DNS entries\n4. Create a ConfigMap named `dns-config` with custom search domains for the test pod",
|
||||
"concepts": ["dns", "service-discovery", "networking"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Deployment and service exist and are correctly configured",
|
||||
"verificationScriptFile": "q9_s1_validate_deployment_svc.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "DNS test pod exists and can resolve service",
|
||||
"verificationScriptFile": "q9_s2_validate_dns_resolution.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "DNS configuration is correct with custom search domains",
|
||||
"verificationScriptFile": "q9_s3_validate_dns_config.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "10",
|
||||
"namespace": "dns-config",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Set up basic DNS service discovery:\n\n1. Create a deployment named `dns-app` with 2 replicas using the `nginx` image\n2. Create a service named `dns-svc` to expose the deployment\n3. Create a Pod named `dns-tester` using the `infoblox/dnstools` image that:\n - Runs the command to test DNS resolution of the service\n - Verifies both service DNS (`dns-svc.dns-config.svc.cluster.local`)\n - Stores the test results in `/tmp/dns-test.txt` inside the pod\n\nNote: The test results should include both the service DNS resolution and FQDN resolution.",
|
||||
"concepts": ["dns", "service-discovery", "networking"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Deployment and service exist and are correctly configured",
|
||||
"verificationScriptFile": "q10_s1_validate_deployment.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "DNS resolution works correctly",
|
||||
"verificationScriptFile": "q10_s2_validate_dns.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "Test results are properly stored",
|
||||
"verificationScriptFile": "q10_s3_validate_results.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "11",
|
||||
"namespace": "helm-test",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Using Helm:\n\n1. Add the bitnami repository `https://charts.bitnami.com/bitnami`\n2. Install the `nginx` chart from bitnami with release name `web-release` in namespace `helm-test`\n3. Configure the service type as `NodePort` and set the replica count to `2`\n4. Verify the deployment is successful and pods are running",
|
||||
"concepts": ["helm", "package-management"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Helm repository is added correctly",
|
||||
"verificationScriptFile": "q11_s1_validate_repo.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "Helm release is installed with correct configuration",
|
||||
"verificationScriptFile": "q11_s2_validate_release.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "Deployment is running with correct specifications",
|
||||
"verificationScriptFile": "q11_s3_validate_deployment.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "12",
|
||||
"namespace": "kustomize",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Using Kustomize: in the directory `/tmp/exam/kustomize/` \n\n1. Create a base deployment for `nginx` with `2` replicas\n2. Create an overlay that:\n - Adds a label `environment=production`\n - Increases replicas to `3`\n - Adds a ConfigMap named `nginx-config` with key `index.html` and value `Welcome to Production`, mount config map as volume named `nginx-index`\n3. Apply the overlay to create resources in the `kustomize` namespace",
|
||||
"concepts": ["kustomize", "configuration-management"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Kustomization files exist with correct structure",
|
||||
"verificationScriptFile": "q12_s1_validate_files.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "Resources are created with correct overlay modifications",
|
||||
"verificationScriptFile": "q12_s2_validate_resources.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "ConfigMap is created and mounted correctly",
|
||||
"verificationScriptFile": "q12_s3_validate_configmap.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "13",
|
||||
"namespace": "gateway",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Configure Gateway API resources:\n\n1. Create a Gateway named `main-gateway` listening on port `80`\n2. Create an HTTPRoute to route traffic to a backend service:\n - Path: `/app1` should route to service `app1-svc` port `8080`\n - Path: `/app2` should route to service `app2-svc` port `8080`\n3. Create two deployments (`app1` and `app2`) with corresponding services to test the routing",
|
||||
"concepts": ["gateway-api", "networking"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Gateway resource exists and is configured correctly",
|
||||
"verificationScriptFile": "q13_s1_validate_gateway.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "HTTPRoute exists with correct routing rules",
|
||||
"verificationScriptFile": "q13_s2_validate_httproute.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "Backend services and deployments are running",
|
||||
"verificationScriptFile": "q13_s3_validate_backends.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "14",
|
||||
"namespace": "limits",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Configure resource management:\n\n1. Create a LimitRange in namespace `limits` with:\n - Default request: CPU: `100m`, Memory: `128Mi`\n - Default limit: CPU: `200m`, Memory: `256Mi`\n - Max limit: CPU: `500m`, Memory: `512Mi`\n\n2. Create a ResourceQuota for the namespace:\n - Max total CPU: `2`\n - Max total memory: `2Gi`\n - Max number of pods: `5`\n\n3. Create a deployment named `test-limits` with `2` replicas to verify the limits are applied",
|
||||
"concepts": ["resource-management", "limitrange", "resourcequota"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "LimitRange exists with correct specifications",
|
||||
"verificationScriptFile": "q14_s1_validate_limitrange.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "ResourceQuota exists with correct specifications",
|
||||
"verificationScriptFile": "q14_s2_validate_quota.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "Deployment respects resource constraints",
|
||||
"verificationScriptFile": "q14_s3_validate_deployment.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "15",
|
||||
"namespace": "monitoring",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Create a deployment named `resource-consumer` with 3 replicas that:\n\n1. Uses the image `gcr.io/kubernetes-e2e-test-images/resource-consumer:1.5`\n2. Sets resource requests:\n - CPU: `100m`\n - Memory: `128Mi`\n3. Sets resource limits:\n - CPU: `200m`\n - Memory: `256Mi`\n4. Create a `HorizontalPodAutoscaler` for this deployment:\n - Min replicas: `3`\n - Max replicas: `6`\n - Target CPU utilization: `50%`\n\nNote: metrics-server is already installed in the cluster.",
|
||||
"concepts": ["monitoring", "resource-management", "autoscaling"],
|
||||
"verification": [
|
||||
{
|
||||
"id": "1",
|
||||
"description": "Deployment exists with correct resource configuration",
|
||||
"verificationScriptFile": "q15_s1_validate_deployment.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 3
|
||||
},
|
||||
{
|
||||
"id": "2",
|
||||
"description": "HPA is configured correctly",
|
||||
"verificationScriptFile": "q15_s2_validate_hpa.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 2
|
||||
},
|
||||
{
|
||||
"id": "3",
|
||||
"description": "Pods are running and ready",
|
||||
"verificationScriptFile": "q15_s3_validate_pods.sh",
|
||||
"expectedOutput": "0",
|
||||
"weightage": 1
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "16",
|
||||
"namespace": "cluster-admin",
|
||||
"machineHostname": "ckad9999",
|
||||
"question": "Perform the following cluster administration tasks:\n\n1. Create a ServiceAccount named `app-admin` in the `cluster-admin` namespace\n2. Create a Role that allows:\n - `List`, `get`, `watch` operations on `pods` and `deployments`\n - `Create` and `delete` operations on `configmaps`\n - `Update` operations on `deployments`\n3. Bind the Role to the ServiceAccount\n4. Create a test Pod named `admin-pod` that uses this ServiceAccount with:\n - Image: `bitnami/kubectl:latest`\n - Command: `sleep 3600`\n - Mount the ServiceAccount token as a volume\n\nVerify the pod can perform the allowed operations but cannot perform other operations (like creating pods).",
      "concepts": ["rbac", "service-accounts", "security"],
      "verification": [
        {
          "id": "1",
          "description": "ServiceAccount exists with correct configuration",
          "verificationScriptFile": "q16_s1_validate_sa.sh",
          "expectedOutput": "0",
          "weightage": 2
        },
        {
          "id": "2",
          "description": "Role and RoleBinding are configured correctly",
          "verificationScriptFile": "q16_s2_validate_rbac.sh",
          "expectedOutput": "0",
          "weightage": 2
        },
        {
          "id": "3",
          "description": "Test pod is running with correct ServiceAccount",
          "verificationScriptFile": "q16_s3_validate_pod.sh",
          "expectedOutput": "0",
          "weightage": 2
        }
      ]
    },
    {
      "id": "17",
      "namespace": "network",
      "machineHostname": "ckad9999",
      "question": "Configure network policies:\n\n1. Create a deployment named `web` using `nginx` image with label `app=web`\n2. Create a deployment named `api` using `nginx` image with label `app=api`\n3. Create a deployment named `db` using `postgres` image with label `app=db` and environment variable `POSTGRES_HOST_AUTH_METHOD=trust`\n4. Create NetworkPolicies to:\n - Allow web to communicate only with api\n - Allow api to communicate only with db\n - Deny all other traffic between pods",
      "concepts": ["networking", "network-policies", "security"],
      "verification": [
        {
          "id": "1",
          "description": "Deployments exist with correct labels",
          "verificationScriptFile": "q17_s1_validate_deployments.sh",
          "expectedOutput": "0",
          "weightage": 1
        },
        {
          "id": "2",
          "description": "Network policies exist with correct specifications",
          "verificationScriptFile": "q17_s2_validate_policies.sh",
          "expectedOutput": "0",
          "weightage": 2
        }
      ]
    },
    {
      "id": "18",
      "namespace": "upgrade",
      "machineHostname": "ckad9999",
      "question": "Perform a rolling update:\n\n1. Create a deployment named `app-v1` with `4` replicas using `nginx:1.19`\n2. Perform a rolling update to `nginx:1.20` with:\n - Max unavailable: 1\n - Max surge: 1\n3. Record the update\n4. Save the output of `kubectl rollout history deployment app-v1 -n upgrade` to `/tmp/exam/rollout-history.txt`\n5. Roll back to the previous version",
      "concepts": ["deployments", "rolling-updates", "rollback"],
      "verification": [
        {
          "id": "1",
          "description": "Initial deployment exists with correct specifications",
          "verificationScriptFile": "q18_s1_validate_initial.sh",
          "expectedOutput": "0",
          "weightage": 1
        },
        {
          "id": "2",
          "description": "Rolling update performed correctly and history saved",
          "verificationScriptFile": "q18_s2_validate_update.sh",
          "expectedOutput": "0",
          "weightage": 2
        },
        {
          "id": "3",
          "description": "Rollback completed successfully",
          "verificationScriptFile": "q18_s3_validate_rollback.sh",
          "expectedOutput": "0",
          "weightage": 2
        }
      ]
    },
    {
      "id": "19",
      "namespace": "scheduling",
      "machineHostname": "ckad9999",
      "question": "Configure advanced scheduling:\n\n1. Create a pod named `high-priority` with PriorityClass priority `1000`\n2. Create a pod named `low-priority` with PriorityClass priority `100`\n3. Configure pod anti-affinity to ensure these pods don't run on the same node\n4. Create enough resource pressure to trigger pod eviction and observe priority behavior\n\nNote for Testing Resource Pressure:\n- Use a resource-intensive image like `polinux/stress` to generate load\n- Example: Create a pod with command: `stress -c 4 -m 2 --vm-bytes 1G` to consume CPU and memory\n- Deploy multiple instances of resource-heavy pods to simulate node resource exhaustion",
      "concepts": ["scheduling", "priority", "anti-affinity"],
      "verification": [
        {
          "id": "1",
          "description": "PriorityClasses exist with correct priorities",
          "verificationScriptFile": "q19_s1_validate_priority.sh",
          "expectedOutput": "0",
          "weightage": 2
        },
        {
          "id": "2",
          "description": "Pods are created with correct specifications",
          "verificationScriptFile": "q19_s2_validate_pods.sh",
          "expectedOutput": "0",
          "weightage": 2
        },
        {
          "id": "3",
          "description": "Anti-affinity rules are enforced correctly",
          "verificationScriptFile": "q19_s3_validate_antiaffinity.sh",
          "expectedOutput": "0",
          "weightage": 1
        }
      ]
    },
    {
      "id": "20",
      "namespace": "troubleshoot",
      "machineHostname": "ckad9999",
      "question": "A deployment named `failing-app` has been created in the namespace `troubleshoot` but pods are not running. The deployment uses image `nginx:1.25` and should have 3 replicas.\n\nFix the following issues:\n\n1. The deployment is using an incorrect container port (port 8080 instead of 80)\n2. The pods are failing due to insufficient memory (current limit is 64Mi, should be 256Mi)\n3. There's a misconfigured liveness probe checking port 8080 instead of 80\n\nEnsure all pods are running successfully after applying the fixes.",
      "concepts": ["troubleshooting", "debugging", "deployment-configuration"],
      "verification": [
        {
          "id": "1",
          "description": "Container port is correctly set to 80",
          "verificationScriptFile": "q20_s1_validate_port.sh",
          "expectedOutput": "0",
          "weightage": 2
        },
        {
          "id": "2",
          "description": "Memory limits are set correctly to 256Mi",
          "verificationScriptFile": "q20_s2_validate_memory.sh",
          "expectedOutput": "0",
          "weightage": 2
        },
        {
          "id": "3",
          "description": "Liveness probe is configured correctly",
          "verificationScriptFile": "q20_s3_validate_probe.sh",
          "expectedOutput": "0",
          "weightage": 2
        },
        {
          "id": "4",
          "description": "All pods are running successfully",
          "verificationScriptFile": "q20_s4_validate_pods.sh",
          "expectedOutput": "0",
          "weightage": 2
        }
      ]
    }
  ]
}
10 facilitator/assets/exams/cka/002/config.json Normal file
@@ -0,0 +1,10 @@
{
  "lab": "cka-002",
  "workerNodes": 2,
  "answers": "assets/exams/cka/002/answers.md",
  "questions": "assessment.json",
  "totalMarks": 100,
  "lowScore": 60,
  "mediumScore": 75,
  "highScore": 85
}
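The thresholds above are plain numbers out of `totalMarks`. A minimal bash sketch of how a facilitator might map a final score onto these bands; the band names and the helper itself are illustrative assumptions, only the cut-offs come from config.json:

```bash
#!/bin/bash
# Hypothetical helper: classify a score using the cut-offs from config.json.
score_band() {
  local score=$1
  if   [ "$score" -ge 85 ]; then echo "high"
  elif [ "$score" -ge 75 ]; then echo "medium"
  elif [ "$score" -ge 60 ]; then echo "low"
  else                           echo "below-low"
  fi
}
score_band 78   # prints: medium
```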
10 facilitator/assets/exams/cka/002/scripts/setup/q10_setup.sh Normal file
@@ -0,0 +1,10 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace dns-config --dry-run=client -o yaml | kubectl apply -f -

# Create directory for test results if it doesn't exist
mkdir -p /tmp/dns-test

echo "Setup completed for Question 10"
16 facilitator/assets/exams/cka/002/scripts/setup/q11_setup.sh Normal file
@@ -0,0 +1,16 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace helm-test --dry-run=client -o yaml | kubectl apply -f -

# Ensure helm is installed
helm version || {
    echo "Helm is not installed"
    exit 1
}

# Remove bitnami repo if it exists (to test adding it)
helm repo remove bitnami 2>/dev/null || true

echo "Setup completed for Question 11"
18 facilitator/assets/exams/cka/002/scripts/setup/q12_setup.sh Normal file
@@ -0,0 +1,18 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace kustomize --dry-run=client -o yaml | kubectl apply -f -

# Create directory structure for kustomize
mkdir -p /tmp/exam/kustomize/{base,overlays/production}

# Create initial base files
cat > /tmp/exam/kustomize/base/kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
EOF

echo "Setup completed for Question 12"
14 facilitator/assets/exams/cka/002/scripts/setup/q13_setup.sh Normal file
@@ -0,0 +1,14 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace gateway --dry-run=client -o yaml | kubectl apply -f -

# Install Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.8.0/standard-install.yaml

# Wait for CRDs to be ready
kubectl wait --for=condition=established --timeout=60s crd/gateways.gateway.networking.k8s.io
kubectl wait --for=condition=established --timeout=60s crd/httproutes.gateway.networking.k8s.io

echo "Setup completed for Question 13"
11 facilitator/assets/exams/cka/002/scripts/setup/q14_setup.sh Normal file
@@ -0,0 +1,11 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace limits --dry-run=client -o yaml | kubectl apply -f -

# Remove any existing LimitRange or ResourceQuota
kubectl delete limitrange --all -n limits 2>/dev/null || true
kubectl delete resourcequota --all -n limits 2>/dev/null || true

echo "Setup completed for Question 14"
16 facilitator/assets/exams/cka/002/scripts/setup/q15_setup.sh Normal file
@@ -0,0 +1,16 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace monitoring --dry-run=client -o yaml | kubectl apply -f -

# Pre-pull the resource consumer image to speed up deployment
kubectl run pull-resource-consumer --image=gcr.io/kubernetes-e2e-test-images/resource-consumer:1.5 -n monitoring --dry-run=client -o yaml | kubectl apply -f -

# Give the node time to pull the image (the pod itself never completes)
sleep 10

# Clean up the pull pod
kubectl delete pod pull-resource-consumer -n monitoring 2>/dev/null || true

echo "Setup completed for Question 15"
7 facilitator/assets/exams/cka/002/scripts/setup/q16_setup.sh Normal file
@@ -0,0 +1,7 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace cluster-admin --dry-run=client -o yaml | kubectl apply -f -

echo "Setup completed for Question 16"
16 facilitator/assets/exams/cka/002/scripts/setup/q17_setup.sh Normal file
@@ -0,0 +1,16 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace network --dry-run=client -o yaml | kubectl apply -f -

# Ensure NetworkPolicy API is enabled
kubectl get networkpolicies -n network || {
    echo "NetworkPolicy API is not enabled"
    exit 1
}

# Delete any existing network policies
kubectl delete networkpolicy --all -n network 2>/dev/null || true

echo "Setup completed for Question 17"
13 facilitator/assets/exams/cka/002/scripts/setup/q18_setup.sh Normal file
@@ -0,0 +1,13 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace upgrade --dry-run=client -o yaml | kubectl apply -f -

# Create directory for rollout history
mkdir -p /tmp/exam

# Clean up any leftover image pre-pull pods; ignore errors if they do not
# exist, otherwise set -e would abort the script here
kubectl delete pod pull-nginx-1-19 pull-nginx-1-20 -n upgrade 2>/dev/null || true

echo "Setup completed for Question 18"
29 facilitator/assets/exams/cka/002/scripts/setup/q19_setup.sh Normal file
@@ -0,0 +1,29 @@
#!/bin/bash
set -e

# Create namespace (reusing scheduling namespace)
kubectl create namespace scheduling --dry-run=client -o yaml | kubectl apply -f -

# Delete any existing PriorityClasses
kubectl delete priorityclass high-priority low-priority 2>/dev/null || true

# The PriorityClasses below are left commented out: the candidate is expected
# to create them as part of the question.
# kubectl create -f - <<EOF
# apiVersion: scheduling.k8s.io/v1
# kind: PriorityClass
# metadata:
#   name: high-priority
# value: 1000
# globalDefault: false
# description: "High priority class for testing"
# ---
# apiVersion: scheduling.k8s.io/v1
# kind: PriorityClass
# metadata:
#   name: low-priority
# value: 100
# globalDefault: false
# description: "Low priority class for testing"
# EOF

echo "Setup completed for Question 19"
17 facilitator/assets/exams/cka/002/scripts/setup/q1_setup.sh Normal file
@@ -0,0 +1,17 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace storage-task --dry-run=client -o yaml | kubectl apply -f -

# Ensure the standard storage class exists
kubectl get storageclass standard || kubectl create -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
EOF

echo "Setup completed for Question 1"
41 facilitator/assets/exams/cka/002/scripts/setup/q20_setup.sh Normal file
@@ -0,0 +1,41 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace troubleshoot --dry-run=client -o yaml | kubectl apply -f -

# Create the failing deployment
kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: failing-app
  namespace: troubleshoot
spec:
  replicas: 3
  selector:
    matchLabels:
      app: failing-app
  template:
    metadata:
      labels:
        app: failing-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 8080
        resources:
          limits:
            memory: "64Mi"
            cpu: "100m"
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
EOF

echo "Setup completed for Question 20"
18 facilitator/assets/exams/cka/002/scripts/setup/q2_setup.sh Normal file
@@ -0,0 +1,18 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace storage-class --dry-run=client -o yaml | kubectl apply -f -

# Create a dummy default storage class to test removal of default
kubectl create -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default-test
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
EOF

echo "Setup completed for Question 2"
13 facilitator/assets/exams/cka/002/scripts/setup/q3_setup.sh Normal file
@@ -0,0 +1,13 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace manual-storage --dry-run=client -o yaml | kubectl apply -f -

# Create the directory on the node (this assumes we have access to the node)
mkdir -p /mnt/data

# Label the node for identification
kubectl label node k3d-cluster-agent-0 kubernetes.io/hostname=k3d-cluster-agent-0 --overwrite

echo "Setup completed for Question 3"
14 facilitator/assets/exams/cka/002/scripts/setup/q4_setup.sh Normal file
@@ -0,0 +1,14 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace scaling --dry-run=client -o yaml | kubectl apply -f -

# Enable metrics-server if not present
kubectl get deployment metrics-server -n kube-system || {
    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    # Wait for metrics-server to be ready
    kubectl wait --for=condition=available --timeout=180s deployment/metrics-server -n kube-system
}

echo "Setup completed for Question 4"
10 facilitator/assets/exams/cka/002/scripts/setup/q5_setup.sh Normal file
@@ -0,0 +1,10 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace scheduling --dry-run=client -o yaml | kubectl apply -f -

# Ensure the target node exists and is labeled with hostname
kubectl label node k3d-cluster-agent-1 kubernetes.io/hostname=k3d-cluster-agent-1 --overwrite

echo "Setup completed for Question 5"
37 facilitator/assets/exams/cka/002/scripts/setup/q6_setup.sh Normal file
@@ -0,0 +1,37 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace security --dry-run=client -o yaml | kubectl apply -f -

# Enable PodSecurity admission controller if not already enabled
# Note: This might require cluster-level access and might not be possible in all environments

# Create the role and rolebinding for PSP
kubectl create -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: psp-role
  namespace: security
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-role-binding
  namespace: security
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: psp-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: security
EOF

echo "Setup completed for Question 6"
13 facilitator/assets/exams/cka/002/scripts/setup/q7_setup.sh Normal file
@@ -0,0 +1,13 @@
#!/bin/bash
set -e

# Create namespace if not exists (reusing scheduling namespace from Q5)
kubectl create namespace scheduling --dry-run=client -o yaml | kubectl apply -f -

# Ensure the node exists
kubectl get node k3d-cluster-agent-0 || {
    echo "Required node k3d-cluster-agent-0 not found"
    exit 1
}

echo "Setup completed for Question 7"
17 facilitator/assets/exams/cka/002/scripts/setup/q8_setup.sh Normal file
@@ -0,0 +1,17 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace stateful --dry-run=client -o yaml | kubectl apply -f -

# Ensure storage class exists
kubectl get storageclass cold || kubectl create -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cold
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
EOF

echo "Setup completed for Question 8"
13 facilitator/assets/exams/cka/002/scripts/setup/q9_setup.sh Normal file
@@ -0,0 +1,13 @@
#!/bin/bash
set -e

# Create namespace
kubectl create namespace dns-debug --dry-run=client -o yaml | kubectl apply -f -

# Ensure CoreDNS is running
kubectl rollout status deployment/coredns -n kube-system --timeout=30s || {
    echo "CoreDNS is not running properly"
    exit 1
}

echo "Setup completed for Question 9"
@@ -0,0 +1,31 @@
#!/bin/bash
set -e

# Check if deployment exists and has correct replicas
DEPLOY_STATUS=$(kubectl get deployment dns-app -n dns-config -o jsonpath='{.status.replicas},{.status.availableReplicas}' 2>/dev/null || echo "not found")
if [ "$DEPLOY_STATUS" = "not found" ]; then
    echo "Deployment dns-app not found"
    exit 1
fi

REPLICAS=$(echo $DEPLOY_STATUS | cut -d',' -f1)
AVAILABLE=$(echo $DEPLOY_STATUS | cut -d',' -f2)

if [ "$REPLICAS" != "2" ] || [ "$AVAILABLE" != "2" ]; then
    echo "Deployment does not have correct number of replicas"
    exit 1
fi

# Check if service exists and has correct port
SVC_PORT=$(kubectl get svc dns-svc -n dns-config -o jsonpath='{.spec.ports[0].port}' 2>/dev/null || echo "not found")
if [ "$SVC_PORT" = "not found" ]; then
    echo "Service dns-svc not found"
    exit 1
fi

if [ "$SVC_PORT" != "80" ]; then
    echo "Service port is not configured correctly"
    exit 1
fi

exit 0
@@ -0,0 +1,30 @@
#!/bin/bash
set -e

# Check if tester pod exists and is running
POD_STATUS=$(kubectl get pod dns-tester -n dns-config -o jsonpath='{.status.phase}' 2>/dev/null || echo "not found")
if [ "$POD_STATUS" = "not found" ]; then
    echo "DNS tester pod not found"
    exit 1
fi

if [ "$POD_STATUS" != "Running" ]; then
    echo "DNS tester pod is not running"
    exit 1
fi

# Test DNS resolution
TEST_RESULT=$(kubectl exec -n dns-config dns-tester -- nslookup dns-svc 2>/dev/null || echo "failed")
if echo "$TEST_RESULT" | grep -q "failed"; then
    echo "DNS resolution test failed"
    exit 1
fi

# Test FQDN resolution
TEST_RESULT=$(kubectl exec -n dns-config dns-tester -- nslookup dns-svc.dns-config.svc.cluster.local 2>/dev/null || echo "failed")
if echo "$TEST_RESULT" | grep -q "failed"; then
    echo "FQDN DNS resolution test failed"
    exit 1
fi

exit 0
@@ -0,0 +1,24 @@
#!/bin/bash
set -e

# Check if results file exists in the pod
FILE_CHECK=$(kubectl exec -n dns-config dns-tester -- test -f /tmp/dns-test.txt && echo "exists" || echo "not found")
if [ "$FILE_CHECK" = "not found" ]; then
    echo "DNS test results file not found"
    exit 1
fi

# Check if file has content
CONTENT=$(kubectl exec -n dns-config dns-tester -- cat /tmp/dns-test.txt 2>/dev/null || echo "")
if [ -z "$CONTENT" ]; then
    echo "DNS test results file is empty"
    exit 1
fi

# Verify file contains required information
if ! echo "$CONTENT" | grep -q "dns-svc.dns-config.svc.cluster.local"; then
    echo "FQDN resolution results not found in file"
    exit 1
fi

exit 0
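Commands that would produce the state these three DNS checks look for, sketched with an assumed busybox image tag:

```bash
kubectl run dns-tester -n dns-config --image=busybox:1.36 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/dns-tester -n dns-config --timeout=60s
kubectl exec -n dns-config dns-tester -- sh -c \
  'nslookup dns-svc > /tmp/dns-test.txt 2>&1 && \
   nslookup dns-svc.dns-config.svc.cluster.local >> /tmp/dns-test.txt 2>&1'
```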
@@ -0,0 +1,24 @@
#!/bin/bash
set -e

# Check if bitnami repo is added
helm repo list | grep bitnami || {
    echo "Bitnami repository not found in helm repo list"
    exit 1
}

# Check if repo URL is correct
REPO_URL=$(helm repo list | grep bitnami | awk '{print $2}')
if [[ "$REPO_URL" != "https://charts.bitnami.com/bitnami" ]]; then
    echo "Incorrect repository URL. Expected https://charts.bitnami.com/bitnami, got $REPO_URL"
    exit 1
fi

# Check if repo is up to date
helm repo update bitnami || {
    echo "Failed to update bitnami repository"
    exit 1
}

echo "Helm repository validation successful"
exit 0
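The commands this check expects the candidate to have run:

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update bitnami
```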
@@ -0,0 +1,32 @@
#!/bin/bash
set -e

# Check if release exists
helm status web-release -n helm-test || {
    echo "Helm release web-release not found in namespace helm-test"
    exit 1
}

# Check if it's using nginx chart
CHART=$(helm get manifest web-release -n helm-test | grep "chart:" | head -1)
if [[ ! "$CHART" =~ "nginx" ]]; then
    echo "Release is not using nginx chart"
    exit 1
fi

# Check if service type is NodePort
SERVICE_TYPE=$(kubectl get service web-release-nginx -n helm-test -o jsonpath='{.spec.type}')
if [[ "$SERVICE_TYPE" != "NodePort" ]]; then
    echo "Service type is not NodePort. Current type: $SERVICE_TYPE"
    exit 1
fi

# Check replica count
REPLICAS=$(kubectl get deployment web-release-nginx -n helm-test -o jsonpath='{.spec.replicas}')
if [[ "$REPLICAS" != "2" ]]; then
    echo "Incorrect number of replicas. Expected 2, got $REPLICAS"
    exit 1
fi

echo "Helm release validation successful"
exit 0
@@ -0,0 +1,32 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment web-release-nginx -n helm-test || {
    echo "Deployment web-release-nginx not found in namespace helm-test"
    exit 1
}

# Check if all pods are running
READY_PODS=$(kubectl get deployment web-release-nginx -n helm-test -o jsonpath='{.status.readyReplicas}')
if [[ "$READY_PODS" != "2" ]]; then
    echo "Not all pods are ready. Expected 2, got $READY_PODS"
    exit 1
fi

# Check if pods are using correct image
POD_IMAGE=$(kubectl get deployment web-release-nginx -n helm-test -o jsonpath='{.spec.template.spec.containers[0].image}')
if [[ ! "$POD_IMAGE" =~ "nginx" ]]; then
    echo "Pods are not using nginx image. Current image: $POD_IMAGE"
    exit 1
fi

# Check if service is accessible
SERVICE_PORT=$(kubectl get service web-release-nginx -n helm-test -o jsonpath='{.spec.ports[0].nodePort}')
if [[ -z "$SERVICE_PORT" ]]; then
    echo "NodePort not configured for service"
    exit 1
fi

echo "Deployment validation successful"
exit 0
@@ -0,0 +1,52 @@
#!/bin/bash
set -e

# Check base directory structure
BASE_DIR="/tmp/exam/kustomize/base"
if [[ ! -d "$BASE_DIR" ]]; then
    echo "Base directory not found at $BASE_DIR"
    exit 1
fi

# Check overlay directory structure
OVERLAY_DIR="/tmp/exam/kustomize/overlays/production"
if [[ ! -d "$OVERLAY_DIR" ]]; then
    echo "Overlay directory not found at $OVERLAY_DIR"
    exit 1
fi

# Check base kustomization.yaml
if [[ ! -f "$BASE_DIR/kustomization.yaml" ]]; then
    echo "Base kustomization.yaml not found"
    exit 1
fi

# Check base deployment.yaml
if [[ ! -f "$BASE_DIR/deployment.yaml" ]]; then
    echo "Base deployment.yaml not found"
    exit 1
fi

# Check overlay kustomization.yaml
if [[ ! -f "$OVERLAY_DIR/kustomization.yaml" ]]; then
    echo "Overlay kustomization.yaml not found"
    exit 1
fi

# Validate base kustomization.yaml content
# (grep returns non-zero on no match; guard it so set -e does not abort before the error message)
BASE_RESOURCES=$(grep "deployment.yaml" "$BASE_DIR/kustomization.yaml" || true)
if [[ -z "$BASE_RESOURCES" ]]; then
    echo "Base kustomization.yaml does not reference deployment.yaml"
    exit 1
fi

# Validate overlay kustomization.yaml content
OVERLAY_CONTENT=$(cat "$OVERLAY_DIR/kustomization.yaml")
if [[ ! "$OVERLAY_CONTENT" =~ "production" ]] || \
   [[ ! "$OVERLAY_CONTENT" =~ "value: 3" ]]; then
    echo "Overlay kustomization.yaml missing required configurations"
    exit 1
fi

echo "Kustomize files validation successful"
exit 0
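An overlay kustomization.yaml that would pass these checks (it contains the string `production` and a replica patch with `value: 3`); the exact patch style is an assumption, since any form containing those strings satisfies the grep-based validation:

```bash
cat > /tmp/exam/kustomize/overlays/production/kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
commonLabels:
  environment: production
patches:
- target:
    kind: Deployment
    name: nginx
  patch: |-
    - op: replace
      path: /spec/replicas
      value: 3
EOF
```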
@@ -0,0 +1,39 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment nginx -n kustomize || {
    echo "Deployment nginx not found in namespace kustomize"
    exit 1
}

# Check replicas
REPLICAS=$(kubectl get deployment nginx -n kustomize -o jsonpath='{.spec.replicas}')
if [[ "$REPLICAS" != "3" ]]; then
    echo "Incorrect number of replicas. Expected 3, got $REPLICAS"
    exit 1
fi

# Check environment label
ENV_LABEL=$(kubectl get deployment nginx -n kustomize -o jsonpath='{.metadata.labels.environment}')
if [[ "$ENV_LABEL" != "production" ]]; then
    echo "Incorrect environment label. Expected production, got $ENV_LABEL"
    exit 1
fi

# Check if pods are running
READY_PODS=$(kubectl get deployment nginx -n kustomize -o jsonpath='{.status.readyReplicas}')
if [[ "$READY_PODS" != "3" ]]; then
    echo "Not all pods are ready. Expected 3, got $READY_PODS"
    exit 1
fi

# Check if pods have the environment label
POD_LABEL=$(kubectl get pods -n kustomize -l app=nginx -o jsonpath='{.items[0].metadata.labels.environment}')
if [[ "$POD_LABEL" != "production" ]]; then
    echo "Pods do not have correct environment label"
    exit 1
fi

echo "Resources validation successful"
exit 0
@@ -0,0 +1,38 @@
#!/bin/bash
set -e

NAMESPACE="kustomize"

# Find the full name of the nginx-config ConfigMap
CONFIGMAP_NAME=$(kubectl get configmap -n $NAMESPACE --no-headers | awk '/^nginx-config-/ {print $1; exit}')

if [[ -z "$CONFIGMAP_NAME" ]]; then
    echo "ConfigMap starting with nginx-config not found in namespace $NAMESPACE"
    exit 1
fi

echo "Found ConfigMap: $CONFIGMAP_NAME"

# Check ConfigMap content
CONFIG_CONTENT=$(kubectl get configmap "$CONFIGMAP_NAME" -n $NAMESPACE -o jsonpath='{.data.index\.html}')
if [[ "$CONFIG_CONTENT" != "Welcome to Production" ]]; then
    echo "Incorrect ConfigMap content. Expected 'Welcome to Production', got '$CONFIG_CONTENT'"
    exit 1
fi

# Check if ConfigMap is mounted in pods
MOUNT_PATH=$(kubectl get deployment nginx -n $NAMESPACE -o jsonpath='{.spec.template.spec.containers[0].volumeMounts[?(@.name=="nginx-index")].mountPath}')
if [[ -z "$MOUNT_PATH" ]]; then
    echo "ConfigMap not mounted in deployment"
    exit 1
fi

# Check if volume is configured correctly
VOLUME_NAME=$(kubectl get deployment nginx -n $NAMESPACE -o jsonpath="{.spec.template.spec.volumes[?(@.configMap.name==\"$CONFIGMAP_NAME\")].name}")
if [[ -z "$VOLUME_NAME" ]]; then
    echo "ConfigMap volume not configured correctly in deployment"
    exit 1
fi

echo "ConfigMap validation successful"
exit 0
@@ -0,0 +1,32 @@
#!/bin/bash
set -e

# Check if Gateway exists
kubectl get gateway main-gateway -n gateway || {
    echo "Gateway main-gateway not found in namespace gateway"
    exit 1
}

# Check if Gateway is listening on port 80
PORT=$(kubectl get gateway main-gateway -n gateway -o jsonpath='{.spec.listeners[0].port}')
if [[ "$PORT" != "80" ]]; then
    echo "Gateway is not listening on port 80. Current port: $PORT"
    exit 1
fi

# Check protocol
PROTOCOL=$(kubectl get gateway main-gateway -n gateway -o jsonpath='{.spec.listeners[0].protocol}')
if [[ "$PROTOCOL" != "HTTP" ]]; then
    echo "Gateway is not using HTTP protocol. Current protocol: $PROTOCOL"
    exit 1
fi

# Check if Gateway is ready
# READY_STATUS=$(kubectl get gateway main-gateway -n gateway -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
# if [[ "$READY_STATUS" != "True" ]]; then
#     echo "Gateway is not ready. Current status: $READY_STATUS"
#     exit 1
# fi

echo "Gateway validation successful"
exit 0
@@ -0,0 +1,46 @@
#!/bin/bash
set -e

# Check if HTTPRoute exists
kubectl get httproute -n gateway || {
    echo "No HTTPRoute found in namespace gateway"
    exit 1
}

# Check app1 path rule
APP1_PATH=$(kubectl get httproute -n gateway -o jsonpath='{.items[0].spec.rules[?(@.matches[0].path.value=="/app1")].matches[0].path.value}')
if [[ "$APP1_PATH" != "/app1" ]]; then
    echo "Missing or incorrect path rule for /app1"
    exit 1
fi

# Check app2 path rule
APP2_PATH=$(kubectl get httproute -n gateway -o jsonpath='{.items[0].spec.rules[?(@.matches[0].path.value=="/app2")].matches[0].path.value}')
if [[ "$APP2_PATH" != "/app2" ]]; then
    echo "Missing or incorrect path rule for /app2"
    exit 1
fi

# Check backend services
APP1_BACKEND=$(kubectl get httproute -n gateway -o jsonpath='{.items[0].spec.rules[?(@.matches[0].path.value=="/app1")].backendRefs[0].name}')
if [[ "$APP1_BACKEND" != "app1-svc" ]]; then
    echo "Incorrect backend service for /app1. Expected app1-svc, got $APP1_BACKEND"
    exit 1
fi

APP2_BACKEND=$(kubectl get httproute -n gateway -o jsonpath='{.items[0].spec.rules[?(@.matches[0].path.value=="/app2")].backendRefs[0].name}')
if [[ "$APP2_BACKEND" != "app2-svc" ]]; then
    echo "Incorrect backend service for /app2. Expected app2-svc, got $APP2_BACKEND"
    exit 1
fi

# Check backend ports
APP1_PORT=$(kubectl get httproute -n gateway -o jsonpath='{.items[0].spec.rules[?(@.matches[0].path.value=="/app1")].backendRefs[0].port}')
APP2_PORT=$(kubectl get httproute -n gateway -o jsonpath='{.items[0].spec.rules[?(@.matches[0].path.value=="/app2")].backendRefs[0].port}')
if [[ "$APP1_PORT" != "8080" ]] || [[ "$APP2_PORT" != "8080" ]]; then
    echo "Incorrect backend ports. Expected 8080 for both services"
    exit 1
fi

echo "HTTPRoute validation successful"
exit 0
@@ -0,0 +1,52 @@
#!/bin/bash
set -e

# Check app1 deployment and service
kubectl get deployment app1 -n gateway || {
    echo "Deployment app1 not found in namespace gateway"
    exit 1
}

kubectl get service app1-svc -n gateway || {
    echo "Service app1-svc not found in namespace gateway"
    exit 1
}

# Check app2 deployment and service
kubectl get deployment app2 -n gateway || {
    echo "Deployment app2 not found in namespace gateway"
    exit 1
}

kubectl get service app2-svc -n gateway || {
    echo "Service app2-svc not found in namespace gateway"
    exit 1
}

# Check if services are configured correctly
for SVC in app1-svc app2-svc; do
    # Check port
    PORT=$(kubectl get service $SVC -n gateway -o jsonpath='{.spec.ports[0].port}')
    if [[ "$PORT" != "8080" ]]; then
        echo "Service $SVC has incorrect port. Expected 8080, got $PORT"
        exit 1
    fi

    # Check if pods are running
    APP_NAME=${SVC%-svc}
    READY_PODS=$(kubectl get deployment $APP_NAME -n gateway -o jsonpath='{.status.readyReplicas}')
    if [[ -z "$READY_PODS" ]] || [[ "$READY_PODS" -lt 1 ]]; then
        echo "No ready pods found for deployment $APP_NAME"
        exit 1
    fi

    # Check if service endpoints exist
    ENDPOINTS=$(kubectl get endpoints $SVC -n gateway -o jsonpath='{.subsets[0].addresses}')
    if [[ -z "$ENDPOINTS" ]]; then
        echo "No endpoints found for service $SVC"
        exit 1
    fi
done

echo "Backend services validation successful"
exit 0
@@ -0,0 +1,50 @@
#!/bin/bash
set -e

# Check if LimitRange exists
kubectl get limitrange -n limits || {
    echo "No LimitRange found in namespace limits"
    exit 1
}

# Check default request
CPU_REQUEST=$(kubectl get limitrange -n limits -o jsonpath='{.items[0].spec.limits[?(@.type=="Container")].defaultRequest.cpu}')
if [[ "$CPU_REQUEST" != "100m" ]]; then
    echo "Incorrect default CPU request. Expected 100m, got $CPU_REQUEST"
    exit 1
fi

MEM_REQUEST=$(kubectl get limitrange -n limits -o jsonpath='{.items[0].spec.limits[?(@.type=="Container")].defaultRequest.memory}')
if [[ "$MEM_REQUEST" != "128Mi" ]]; then
    echo "Incorrect default memory request. Expected 128Mi, got $MEM_REQUEST"
    exit 1
fi

# Check default limit
CPU_LIMIT=$(kubectl get limitrange -n limits -o jsonpath='{.items[0].spec.limits[?(@.type=="Container")].default.cpu}')
if [[ "$CPU_LIMIT" != "200m" ]]; then
    echo "Incorrect default CPU limit. Expected 200m, got $CPU_LIMIT"
    exit 1
fi

MEM_LIMIT=$(kubectl get limitrange -n limits -o jsonpath='{.items[0].spec.limits[?(@.type=="Container")].default.memory}')
if [[ "$MEM_LIMIT" != "256Mi" ]]; then
    echo "Incorrect default memory limit. Expected 256Mi, got $MEM_LIMIT"
    exit 1
fi

# Check max limit
MAX_CPU=$(kubectl get limitrange -n limits -o jsonpath='{.items[0].spec.limits[?(@.type=="Container")].max.cpu}')
if [[ "$MAX_CPU" != "500m" ]]; then
    echo "Incorrect max CPU limit. Expected 500m, got $MAX_CPU"
    exit 1
fi

MAX_MEM=$(kubectl get limitrange -n limits -o jsonpath='{.items[0].spec.limits[?(@.type=="Container")].max.memory}')
if [[ "$MAX_MEM" != "512Mi" ]]; then
    echo "Incorrect max memory limit. Expected 512Mi, got $MAX_MEM"
    exit 1
fi

echo "LimitRange validation successful"
exit 0
@@ -0,0 +1,39 @@
#!/bin/bash
set -e

# Check if ResourceQuota exists
kubectl get resourcequota -n limits || {
    echo "No ResourceQuota found in namespace limits"
    exit 1
}

# Check CPU quota
CPU_QUOTA=$(kubectl get resourcequota -n limits -o jsonpath='{.items[0].spec.hard.cpu}')
if [[ "$CPU_QUOTA" != "2" ]]; then
    echo "Incorrect CPU quota. Expected 2, got $CPU_QUOTA"
    exit 1
fi

# Check memory quota
MEM_QUOTA=$(kubectl get resourcequota -n limits -o jsonpath='{.items[0].spec.hard.memory}')
if [[ "$MEM_QUOTA" != "2Gi" ]]; then
    echo "Incorrect memory quota. Expected 2Gi, got $MEM_QUOTA"
    exit 1
fi

# Check pod quota
POD_QUOTA=$(kubectl get resourcequota -n limits -o jsonpath='{.items[0].spec.hard.pods}')
if [[ "$POD_QUOTA" != "5" ]]; then
    echo "Incorrect pod quota. Expected 5, got $POD_QUOTA"
    exit 1
fi

# Check if quota is being enforced
QUOTA_STATUS=$(kubectl get resourcequota -n limits -o jsonpath='{.items[0].status.used}')
if [[ -z "$QUOTA_STATUS" ]]; then
    echo "ResourceQuota is not being enforced"
    exit 1
fi

echo "ResourceQuota validation successful"
exit 0
@@ -0,0 +1,55 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment test-limits -n limits || {
    echo "Deployment test-limits not found in namespace limits"
    exit 1
}

# Check replicas
REPLICAS=$(kubectl get deployment test-limits -n limits -o jsonpath='{.spec.replicas}')
if [[ "$REPLICAS" != "2" ]]; then
    echo "Incorrect number of replicas. Expected 2, got $REPLICAS"
    exit 1
fi

# Check if pods are running
READY_PODS=$(kubectl get deployment test-limits -n limits -o jsonpath='{.status.readyReplicas}')
if [[ "$READY_PODS" != "2" ]]; then
    echo "Not all pods are ready. Expected 2, got $READY_PODS"
    exit 1
fi

# Check if pods respect LimitRange
PODS=$(kubectl get pods -n limits -l app=test-limits -o jsonpath='{.items[*].metadata.name}')
for POD in $PODS; do
    # Check resource requests
    CPU_REQUEST=$(kubectl get pod $POD -n limits -o jsonpath='{.spec.containers[0].resources.requests.cpu}')
    if [[ "$CPU_REQUEST" != "100m" ]]; then
        echo "Pod $POD has incorrect CPU request. Expected 100m, got $CPU_REQUEST"
        exit 1
    fi

    MEM_REQUEST=$(kubectl get pod $POD -n limits -o jsonpath='{.spec.containers[0].resources.requests.memory}')
    if [[ "$MEM_REQUEST" != "128Mi" ]]; then
        echo "Pod $POD has incorrect memory request. Expected 128Mi, got $MEM_REQUEST"
        exit 1
    fi

    # Check resource limits
    CPU_LIMIT=$(kubectl get pod $POD -n limits -o jsonpath='{.spec.containers[0].resources.limits.cpu}')
    if [[ "$CPU_LIMIT" != "200m" ]]; then
        echo "Pod $POD has incorrect CPU limit. Expected 200m, got $CPU_LIMIT"
        exit 1
    fi

    MEM_LIMIT=$(kubectl get pod $POD -n limits -o jsonpath='{.spec.containers[0].resources.limits.memory}')
    if [[ "$MEM_LIMIT" != "256Mi" ]]; then
        echo "Pod $POD has incorrect memory limit. Expected 256Mi, got $MEM_LIMIT"
        exit 1
    fi
done

echo "Deployment validation successful"
exit 0
@@ -0,0 +1,51 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment resource-consumer -n monitoring || {
    echo "Deployment resource-consumer not found in namespace monitoring"
    exit 1
}

# Check replicas
REPLICAS=$(kubectl get deployment resource-consumer -n monitoring -o jsonpath='{.spec.replicas}')
if [[ "$REPLICAS" != "3" ]]; then
    echo "Incorrect number of replicas. Expected 3, got $REPLICAS"
    exit 1
fi

# Check image
IMAGE=$(kubectl get deployment resource-consumer -n monitoring -o jsonpath='{.spec.template.spec.containers[0].image}')
if [[ "$IMAGE" != "gcr.io/kubernetes-e2e-test-images/resource-consumer:1.5" ]]; then
    echo "Incorrect image. Expected gcr.io/kubernetes-e2e-test-images/resource-consumer:1.5, got $IMAGE"
    exit 1
fi

# Check resource requests
CPU_REQUEST=$(kubectl get deployment resource-consumer -n monitoring -o jsonpath='{.spec.template.spec.containers[0].resources.requests.cpu}')
if [[ "$CPU_REQUEST" != "100m" ]]; then
    echo "Incorrect CPU request. Expected 100m, got $CPU_REQUEST"
    exit 1
fi

MEM_REQUEST=$(kubectl get deployment resource-consumer -n monitoring -o jsonpath='{.spec.template.spec.containers[0].resources.requests.memory}')
if [[ "$MEM_REQUEST" != "128Mi" ]]; then
    echo "Incorrect memory request. Expected 128Mi, got $MEM_REQUEST"
    exit 1
fi

# Check resource limits
CPU_LIMIT=$(kubectl get deployment resource-consumer -n monitoring -o jsonpath='{.spec.template.spec.containers[0].resources.limits.cpu}')
if [[ "$CPU_LIMIT" != "200m" ]]; then
    echo "Incorrect CPU limit. Expected 200m, got $CPU_LIMIT"
    exit 1
fi

MEM_LIMIT=$(kubectl get deployment resource-consumer -n monitoring -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}')
if [[ "$MEM_LIMIT" != "256Mi" ]]; then
    echo "Incorrect memory limit. Expected 256Mi, got $MEM_LIMIT"
    exit 1
fi

echo "Deployment validation successful"
exit 0
@@ -0,0 +1,39 @@
#!/bin/bash
set -e

# Check if HPA exists
kubectl get hpa -n monitoring resource-consumer || {
    echo "HorizontalPodAutoscaler resource-consumer not found in namespace monitoring"
    exit 1
}

# Check min replicas
MIN_REPLICAS=$(kubectl get hpa -n monitoring resource-consumer -o jsonpath='{.spec.minReplicas}')
if [[ "$MIN_REPLICAS" != "3" ]]; then
    echo "Incorrect minimum replicas. Expected 3, got $MIN_REPLICAS"
    exit 1
fi

# Check max replicas
MAX_REPLICAS=$(kubectl get hpa -n monitoring resource-consumer -o jsonpath='{.spec.maxReplicas}')
if [[ "$MAX_REPLICAS" != "6" ]]; then
    echo "Incorrect maximum replicas. Expected 6, got $MAX_REPLICAS"
    exit 1
fi

# Check target CPU utilization
TARGET_CPU=$(kubectl get hpa resource-consumer -n monitoring -o jsonpath='{.spec.metrics[0].resource.target.averageUtilization}')
if [[ "$TARGET_CPU" != "50" ]]; then
    echo "Incorrect target CPU utilization. Expected 50, got $TARGET_CPU"
    exit 1
fi

# Check if HPA is targeting the correct deployment
TARGET_REF=$(kubectl get hpa -n monitoring resource-consumer -o jsonpath='{.spec.scaleTargetRef.name}')
if [[ "$TARGET_REF" != "resource-consumer" ]]; then
    echo "HPA is not targeting the correct deployment. Expected resource-consumer, got $TARGET_REF"
    exit 1
fi

echo "HPA validation successful"
exit 0
@@ -0,0 +1,44 @@
#!/bin/bash
set -e

# Check if pods are running
READY_PODS=$(kubectl get deployment resource-consumer -n monitoring -o jsonpath='{.status.readyReplicas}')
if [[ "$READY_PODS" != "3" ]]; then
    echo "Not all pods are ready. Expected 3, got $READY_PODS"
    exit 1
fi

# Get all pods
PODS=$(kubectl get pods -n monitoring -l app=resource-consumer -o jsonpath='{.items[*].metadata.name}')

# Check each pod's configuration
for POD in $PODS; do
    # Check if pod is running
    POD_STATUS=$(kubectl get pod $POD -n monitoring -o jsonpath='{.status.phase}')
    if [[ "$POD_STATUS" != "Running" ]]; then
        echo "Pod $POD is not running. Current status: $POD_STATUS"
        exit 1
    fi

    # Check resource requests
    CPU_REQUEST=$(kubectl get pod $POD -n monitoring -o jsonpath='{.spec.containers[0].resources.requests.cpu}')
    if [[ "$CPU_REQUEST" != "100m" ]]; then
        echo "Pod $POD has incorrect CPU request. Expected 100m, got $CPU_REQUEST"
        exit 1
    fi

    MEM_REQUEST=$(kubectl get pod $POD -n monitoring -o jsonpath='{.spec.containers[0].resources.requests.memory}')
    if [[ "$MEM_REQUEST" != "128Mi" ]]; then
        echo "Pod $POD has incorrect memory request. Expected 128Mi, got $MEM_REQUEST"
        exit 1
    fi

    # Check if metrics are being collected
    kubectl top pod $POD -n monitoring > /dev/null || {
        echo "Unable to get metrics for pod $POD"
        exit 1
    }
done

echo "Pods validation successful"
exit 0
@@ -0,0 +1,31 @@
#!/bin/bash
set -e

# Check if ServiceAccount exists
kubectl get serviceaccount app-admin -n cluster-admin || {
    echo "ServiceAccount app-admin not found in namespace cluster-admin"
    exit 1
}

# Check if token is automatically mounted
AUTO_MOUNT=$(kubectl get serviceaccount app-admin -n cluster-admin -o jsonpath='{.automountServiceAccountToken}')
if [[ "$AUTO_MOUNT" == "false" ]]; then
    echo "ServiceAccount token automounting is disabled"
    exit 1
fi

# Check if secret is created for the ServiceAccount
# SECRET_NAME=$(kubectl get serviceaccount app-admin -n cluster-admin -o jsonpath='{.secrets[0].name}')
# if [[ -z "$SECRET_NAME" ]]; then
#     echo "No token secret found for ServiceAccount"
#     exit 1
# fi

# # Verify secret exists
# kubectl get secret $SECRET_NAME -n cluster-admin || {
#     echo "Token secret $SECRET_NAME not found"
#     exit 1
# }

echo "ServiceAccount validation successful"
exit 0
@@ -0,0 +1,54 @@
#!/bin/bash
set -e

# Check if Role exists
kubectl get role app-admin -n cluster-admin || {
    echo "Role app-admin not found in namespace cluster-admin"
    exit 1
}

# Check Role permissions for pods and deployments
RULES=$(kubectl get role app-admin -n cluster-admin -o json)

# Check list, get, watch permissions for pods and deployments
POD_RULES=$(echo "$RULES" | jq -r '.rules[] | select(.resources[] | contains("pods"))')
DEPLOYMENT_RULES=$(echo "$RULES" | jq -r '.rules[] | select(.resources[] | contains("deployments"))')

if [[ ! "$POD_RULES" =~ "list" ]] || [[ ! "$POD_RULES" =~ "get" ]] || [[ ! "$POD_RULES" =~ "watch" ]]; then
    echo "Missing required permissions for pods"
    exit 1
fi

if [[ ! "$DEPLOYMENT_RULES" =~ "list" ]] || [[ ! "$DEPLOYMENT_RULES" =~ "get" ]] || [[ ! "$DEPLOYMENT_RULES" =~ "watch" ]] || [[ ! "$DEPLOYMENT_RULES" =~ "update" ]]; then
    echo "Missing required permissions for deployments"
    exit 1
fi

# Check configmap permissions
CONFIGMAP_RULES=$(echo "$RULES" | jq -r '.rules[] | select(.resources[] | contains("configmaps"))')
if [[ ! "$CONFIGMAP_RULES" =~ "create" ]] || [[ ! "$CONFIGMAP_RULES" =~ "delete" ]]; then
    echo "Missing required permissions for configmaps"
    exit 1
fi

# Check if RoleBinding exists
kubectl get rolebinding app-admin -n cluster-admin || {
    echo "RoleBinding app-admin not found in namespace cluster-admin"
    exit 1
}

# Check if RoleBinding references correct Role and ServiceAccount
ROLE_REF=$(kubectl get rolebinding app-admin -n cluster-admin -o jsonpath='{.roleRef.name}')
if [[ "$ROLE_REF" != "app-admin" ]]; then
    echo "RoleBinding references incorrect Role. Expected app-admin, got $ROLE_REF"
    exit 1
fi

SA_REF=$(kubectl get rolebinding app-admin -n cluster-admin -o jsonpath='{.subjects[0].name}')
if [[ "$SA_REF" != "app-admin" ]]; then
    echo "RoleBinding references incorrect ServiceAccount. Expected app-admin, got $SA_REF"
    exit 1
fi

echo "RBAC validation successful"
exit 0
@@ -0,0 +1,55 @@
#!/bin/bash

# Create a test pod with the app-admin ServiceAccount
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: rbac-test-pod
  namespace: cluster-admin
spec:
  serviceAccountName: app-admin
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sleep", "3600"]
EOF

# Wait for pod to be running
for i in {1..30}; do
    if kubectl get pod rbac-test-pod -n cluster-admin | grep -q Running; then
        break
    fi
    sleep 2
done

# Test pod operations (should succeed)
LIST_PODS=$(kubectl auth can-i list pods --as=system:serviceaccount:cluster-admin:app-admin -n cluster-admin)
GET_PODS=$(kubectl auth can-i get pods --as=system:serviceaccount:cluster-admin:app-admin -n cluster-admin)
WATCH_PODS=$(kubectl auth can-i watch pods --as=system:serviceaccount:cluster-admin:app-admin -n cluster-admin)

# Test deployment operations (should succeed)
LIST_DEPLOY=$(kubectl auth can-i list deployments --as=system:serviceaccount:cluster-admin:app-admin -n cluster-admin)
GET_DEPLOY=$(kubectl auth can-i get deployments --as=system:serviceaccount:cluster-admin:app-admin -n cluster-admin)
UPDATE_DEPLOY=$(kubectl auth can-i update deployments --as=system:serviceaccount:cluster-admin:app-admin -n cluster-admin)

# Test configmap operations (should succeed)
CREATE_CM=$(kubectl auth can-i create configmaps --as=system:serviceaccount:cluster-admin:app-admin -n cluster-admin)
DELETE_CM=$(kubectl auth can-i delete configmaps --as=system:serviceaccount:cluster-admin:app-admin -n cluster-admin)

# Clean up
kubectl delete pod rbac-test-pod -n cluster-admin --force --grace-period=0 2>/dev/null || true

# Check all permissions are correct
if [[ "$LIST_PODS" == "yes" && \
      "$GET_PODS" == "yes" && \
      "$WATCH_PODS" == "yes" && \
      "$LIST_DEPLOY" == "yes" && \
      "$GET_DEPLOY" == "yes" && \
      "$UPDATE_DEPLOY" == "yes" && \
      "$CREATE_CM" == "yes" && \
      "$DELETE_CM" == "yes" ]]; then
    exit 0
else
    exit 1
fi
@@ -0,0 +1,71 @@
#!/bin/bash
set -e

# Check web deployment
kubectl get deployment web -n network || {
    echo "Deployment web not found in namespace network"
    exit 1
}

# Check api deployment
kubectl get deployment api -n network || {
    echo "Deployment api not found in namespace network"
    exit 1
}

# Check db deployment
kubectl get deployment db -n network || {
    echo "Deployment db not found in namespace network"
    exit 1
}

# Check web deployment configuration
WEB_IMAGE=$(kubectl get deployment web -n network -o jsonpath='{.spec.template.spec.containers[0].image}')
if [[ ! "$WEB_IMAGE" =~ "nginx" ]]; then
    echo "Web deployment is not using nginx image"
    exit 1
fi

WEB_LABEL=$(kubectl get deployment web -n network -o jsonpath='{.spec.template.metadata.labels.app}')
if [[ "$WEB_LABEL" != "web" ]]; then
    echo "Web deployment does not have correct label app=web"
    exit 1
fi

# Check api deployment configuration
API_IMAGE=$(kubectl get deployment api -n network -o jsonpath='{.spec.template.spec.containers[0].image}')
if [[ ! "$API_IMAGE" =~ "nginx" ]]; then
    echo "API deployment is not using nginx image"
    exit 1
fi

API_LABEL=$(kubectl get deployment api -n network -o jsonpath='{.spec.template.metadata.labels.app}')
if [[ "$API_LABEL" != "api" ]]; then
    echo "API deployment does not have correct label app=api"
    exit 1
fi

# Check db deployment configuration
DB_IMAGE=$(kubectl get deployment db -n network -o jsonpath='{.spec.template.spec.containers[0].image}')
if [[ ! "$DB_IMAGE" =~ "postgres" ]]; then
    echo "DB deployment is not using postgres image"
    exit 1
fi

DB_LABEL=$(kubectl get deployment db -n network -o jsonpath='{.spec.template.metadata.labels.app}')
if [[ "$DB_LABEL" != "db" ]]; then
    echo "DB deployment does not have correct label app=db"
    exit 1
fi

# Check if all pods are running
for DEPLOY in web api db; do
    READY_PODS=$(kubectl get deployment $DEPLOY -n network -o jsonpath='{.status.readyReplicas}')
    if [[ -z "$READY_PODS" ]] || [[ "$READY_PODS" -lt 1 ]]; then
        echo "Deployment $DEPLOY has no ready pods"
        exit 1
    fi
done

echo "Deployments validation successful"
exit 0
@@ -0,0 +1,33 @@
#!/bin/bash
set -e

# Helper to check if a policy exists
check_policy_exists() {
    local name=$1
    if ! kubectl get networkpolicy "$name" -n network >/dev/null 2>&1; then
        echo "❌ $name not found"
        exit 1
    fi
}

# Helper to check if a policy egress allows traffic to a given app
check_egress_to() {
    local policy=$1
    local app=$2
    local found=$(kubectl get networkpolicy "$policy" -n network -o jsonpath="{.spec.egress[*].to[*].podSelector.matchLabels.app}" | grep -w "$app" || true)
    if [[ -z "$found" ]]; then
        echo "❌ $policy does not allow egress to $app"
        exit 1
    fi
}

# Check required policies
check_policy_exists web-policy
check_policy_exists api-policy

# Check egress rules
check_egress_to web-policy api
check_egress_to api-policy db

echo "✅ NetworkPolicies validation successful"
exit 0
|
||||
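A hedged sketch of a policy shape that passes check_egress_to web-policy api (the names, namespace, and egress pod selector come from the validator; the spec.podSelector label and the omission of ports are illustrative, and a working solution would usually also allow DNS egress; api-policy mirrors this with app: db as the target):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: network
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: api
EOF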
@@ -0,0 +1,54 @@
#!/bin/bash
set -e

echo "🔎 Starting network policy validation..."

NAMESPACE="network"
CLIENT_POD="test-client"
IMAGE="curlimages/curl:8.5.0"

# 1. Ensure test-client exists
if ! kubectl get pod -n $NAMESPACE $CLIENT_POD &>/dev/null; then
  echo "🚀 Creating test-client pod..."
  kubectl run $CLIENT_POD -n $NAMESPACE --image=$IMAGE --restart=Never --command -- sleep 3600
  kubectl wait --for=condition=Ready pod/$CLIENT_POD -n $NAMESPACE --timeout=30s
else
  echo "ℹ️ $CLIENT_POD pod already exists, skipping creation"
fi

# 2. Test service connectivity using curl
echo "🔧 Testing service connectivity with curl..."

test_connection() {
  local target=$1
  local port=$2
  local expected=$3
  local url="http://$target:$port"

  echo "➡️ Testing: $CLIENT_POD ➡ $url (expected: $expected)"

  if kubectl exec -n $NAMESPACE $CLIENT_POD -- curl -s --max-time 2 $url &>/dev/null; then
    if [[ "$expected" == "fail" ]]; then
      echo "❌ FAILED: Connection to $url should be BLOCKED but SUCCEEDED"
      exit 1
    else
      echo "✅ Connection to $url succeeded"
    fi
  else
    if [[ "$expected" == "success" ]]; then
      echo "❌ FAILED: Connection to $url should be ALLOWED but FAILED"
      exit 1
    else
      echo "✅ Connection to $url correctly blocked"
    fi
  fi
}

# Run tests
test_connection api 80 success
test_connection db 5432 success
test_connection web 80 fail
test_connection db 80 fail
test_connection api 5432 fail

echo "🎉 All network policy tests passed successfully"
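The connectivity matrix above assumes Services named api, db, and web that resolve inside the network namespace; a hedged sketch of exposing the earlier deployments on the tested ports (names and ports from the tests, everything else illustrative):

kubectl expose deployment web -n network --port=80
kubectl expose deployment api -n network --port=80
kubectl expose deployment db -n network --port=5432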
@@ -0,0 +1,53 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment app-v1 -n upgrade || {
  echo "Deployment app-v1 not found in namespace upgrade"
  exit 1
}

# Check replicas
REPLICAS=$(kubectl get deployment app-v1 -n upgrade -o jsonpath='{.spec.replicas}')
if [[ "$REPLICAS" != "4" ]]; then
  echo "Incorrect number of replicas. Expected 4, got $REPLICAS"
  exit 1
fi

# Check image
IMAGE=$(kubectl get deployment app-v1 -n upgrade -o jsonpath='{.spec.template.spec.containers[0].image}')
if [[ "$IMAGE" != "nginx:1.19" ]]; then
  echo "Incorrect image. Expected nginx:1.19, got $IMAGE"
  exit 1
fi

# Check if pods are running
READY_PODS=$(kubectl get deployment app-v1 -n upgrade -o jsonpath='{.status.readyReplicas}')
if [[ "$READY_PODS" != "4" ]]; then
  echo "Not all pods are ready. Expected 4, got $READY_PODS"
  exit 1
fi

# Check deployment strategy
STRATEGY=$(kubectl get deployment app-v1 -n upgrade -o jsonpath='{.spec.strategy.type}')
if [[ "$STRATEGY" != "RollingUpdate" ]]; then
  echo "Incorrect deployment strategy. Expected RollingUpdate, got $STRATEGY"
  exit 1
fi

# Check max unavailable
MAX_UNAVAILABLE=$(kubectl get deployment app-v1 -n upgrade -o jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}')
if [[ "$MAX_UNAVAILABLE" != "1" ]]; then
  echo "Incorrect maxUnavailable. Expected 1, got $MAX_UNAVAILABLE"
  exit 1
fi

# Check max surge
MAX_SURGE=$(kubectl get deployment app-v1 -n upgrade -o jsonpath='{.spec.strategy.rollingUpdate.maxSurge}')
if [[ "$MAX_SURGE" != "1" ]]; then
  echo "Incorrect maxSurge. Expected 1, got $MAX_SURGE"
  exit 1
fi

echo "Initial deployment validation successful"
exit 0
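One hedged way to produce a deployment that passes these checks (all values come from the validator; a merge patch is one of several ways to set the strategy fields):

kubectl create deployment app-v1 --image=nginx:1.19 --replicas=4 -n upgrade
kubectl patch deployment app-v1 -n upgrade --type=merge \
  -p '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":1,"maxSurge":1}}}}'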
@@ -0,0 +1,56 @@
#!/bin/bash
set -e

NAMESPACE="upgrade"
DEPLOYMENT="app-v1"
HISTORY_FILE="/tmp/exam/rollout-history.txt"

# Check if deployment exists
kubectl get deployment $DEPLOYMENT -n $NAMESPACE > /dev/null || {
  echo "Deployment $DEPLOYMENT not found in namespace $NAMESPACE"
  exit 1
}

# Check current image
CURRENT_IMAGE=$(kubectl get deployment $DEPLOYMENT -n $NAMESPACE -o jsonpath='{.spec.template.spec.containers[0].image}')
if [[ "$CURRENT_IMAGE" != "nginx:1.19" ]]; then
  echo "Deployment is not using the expected image nginx:1.19. Got $CURRENT_IMAGE"
  exit 1
fi

# Check both images exist in RS history
RS_IMAGES=$(kubectl get rs -n $NAMESPACE -l app=app-v1 -o jsonpath='{range .items[*]}{.spec.template.spec.containers[0].image}{"\n"}{end}')

echo "$RS_IMAGES" | grep -q "nginx:1.19" || {
  echo "ReplicaSet with image nginx:1.19 not found"
  exit 1
}

echo "$RS_IMAGES" | grep -q "nginx:1.20" || {
  echo "ReplicaSet with image nginx:1.20 not found"
  exit 1
}

# Check rollout history file exists
if [[ ! -f "$HISTORY_FILE" ]]; then
  echo "Rollout history file not found at $HISTORY_FILE"
  exit 1
fi

# Check all pods are ready
READY_PODS=$(kubectl get deployment $DEPLOYMENT -n $NAMESPACE -o jsonpath='{.status.readyReplicas}')
if [[ "$READY_PODS" != "4" ]]; then
  echo "Expected 4 ready pods, but got $READY_PODS"
  exit 1
fi

# Ensure no pods are running nginx:1.20
PODS_WITH_20=$(kubectl get pods -n $NAMESPACE -l app=app-v1 -o jsonpath='{range .items[*]}{.metadata.name} {.spec.containers[0].image}{"\n"}{end}' | grep "nginx:1.20" || true)
if [[ -n "$PODS_WITH_20" ]]; then
  echo "Some pods are still running nginx:1.20:"
  echo "$PODS_WITH_20"
  exit 1
fi

echo "✅ Validation successful: rollback confirmed, both images used, and all pods healthy"
exit 0
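A hedged sketch of the upgrade-and-rollback sequence these checks imply (the history file path comes from the validator; the container name nginx is the default that kubectl create deployment derives from the image, so adjust it if the manifest names the container differently):

kubectl set image deployment/app-v1 nginx=nginx:1.20 -n upgrade
kubectl rollout status deployment/app-v1 -n upgrade
mkdir -p /tmp/exam
kubectl rollout history deployment/app-v1 -n upgrade > /tmp/exam/rollout-history.txt
kubectl rollout undo deployment/app-v1 -n upgrade
kubectl rollout status deployment/app-v1 -n upgrade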
@@ -0,0 +1,50 @@
#!/bin/bash
set -e

NAMESPACE="upgrade"
DEPLOYMENT="app-v1"

# Check if deployment exists
kubectl get deployment $DEPLOYMENT -n $NAMESPACE > /dev/null || {
  echo "Deployment $DEPLOYMENT not found in namespace $NAMESPACE"
  exit 1
}

# Check if current image is nginx:1.19
CURRENT_IMAGE=$(kubectl get deployment $DEPLOYMENT -n $NAMESPACE -o jsonpath='{.spec.template.spec.containers[0].image}')
if [[ "$CURRENT_IMAGE" != "nginx:1.19" ]]; then
  echo "Deployment not rolled back. Expected image nginx:1.19, got $CURRENT_IMAGE"
  exit 1
fi

# Get list of ReplicaSets for this deployment
RS_LIST=$(kubectl get rs -n $NAMESPACE -l app=$DEPLOYMENT -o jsonpath='{range .items[*]}{.metadata.name} {.spec.template.spec.containers[0].image}{"\n"}{end}')

# Check if both versions exist
echo "$RS_LIST" | grep -q "nginx:1.19" || {
  echo "ReplicaSet with nginx:1.19 not found"
  exit 1
}

echo "$RS_LIST" | grep -q "nginx:1.20" || {
  echo "ReplicaSet with nginx:1.20 not found"
  exit 1
}

# Check all pods are ready
READY_PODS=$(kubectl get deployment $DEPLOYMENT -n $NAMESPACE -o jsonpath='{.status.readyReplicas}')
if [[ "$READY_PODS" != "4" ]]; then
  echo "Not all pods are ready. Expected 4, got $READY_PODS"
  exit 1
fi

# Check no pods are still using nginx:1.20
BAD_PODS=$(kubectl get pods -n $NAMESPACE -l app=$DEPLOYMENT -o jsonpath='{range .items[*]}{.metadata.name} {.spec.containers[0].image}{"\n"}{end}' | grep "nginx:1.20" || true)
if [[ -n "$BAD_PODS" ]]; then
  echo "Some pods are still using nginx:1.20:"
  echo "$BAD_PODS"
  exit 1
fi

echo "✅ Rollback validation successful"
exit 0
@@ -0,0 +1,40 @@
#!/bin/bash
set -e

# Check high priority class
kubectl get priorityclass high-priority || {
  echo "PriorityClass high-priority not found"
  exit 1
}

# Check low priority class
kubectl get priorityclass low-priority || {
  echo "PriorityClass low-priority not found"
  exit 1
}

# Check high priority value
HIGH_VALUE=$(kubectl get priorityclass high-priority -o jsonpath='{.value}')
if [[ "$HIGH_VALUE" != "1000" ]]; then
  echo "Incorrect priority value for high-priority. Expected 1000, got $HIGH_VALUE"
  exit 1
fi

# Check low priority value
LOW_VALUE=$(kubectl get priorityclass low-priority -o jsonpath='{.value}')
if [[ "$LOW_VALUE" != "100" ]]; then
  echo "Incorrect priority value for low-priority. Expected 100, got $LOW_VALUE"
  exit 1
fi

# Check that neither is set as default
HIGH_DEFAULT=$(kubectl get priorityclass high-priority -o jsonpath='{.globalDefault}')
LOW_DEFAULT=$(kubectl get priorityclass low-priority -o jsonpath='{.globalDefault}')

if [[ "$HIGH_DEFAULT" == "true" ]] || [[ "$LOW_DEFAULT" == "true" ]]; then
  echo "PriorityClasses should not be set as default"
  exit 1
fi

echo "PriorityClass validation successful"
exit 0
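A minimal sketch of creating the two classes (values come from the validator; the descriptions are illustrative, and --global-default is deliberately left at its false default):

kubectl create priorityclass high-priority --value=1000 --description="High priority workloads"
kubectl create priorityclass low-priority --value=100 --description="Low priority workloads"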
@@ -0,0 +1,40 @@
#!/bin/bash
set -e

# Check high priority pod
kubectl get pod high-priority -n scheduling || {
  echo "Pod high-priority not found in namespace scheduling"
  exit 1
}

# Check low priority pod
kubectl get pod low-priority -n scheduling || {
  echo "Pod low-priority not found in namespace scheduling"
  exit 1
}

# Check high priority pod configuration
HIGH_PRIORITY_CLASS=$(kubectl get pod high-priority -n scheduling -o jsonpath='{.spec.priorityClassName}')
if [[ "$HIGH_PRIORITY_CLASS" != "high-priority" ]]; then
  echo "High priority pod not using correct PriorityClass"
  exit 1
fi

# Check low priority pod configuration
LOW_PRIORITY_CLASS=$(kubectl get pod low-priority -n scheduling -o jsonpath='{.spec.priorityClassName}')
if [[ "$LOW_PRIORITY_CLASS" != "low-priority" ]]; then
  echo "Low priority pod not using correct PriorityClass"
  exit 1
fi

# Check if pods are running
for POD in high-priority low-priority; do
  STATUS=$(kubectl get pod $POD -n scheduling -o jsonpath='{.status.phase}')
  if [[ "$STATUS" != "Running" ]]; then
    echo "Pod $POD is not running. Current status: $STATUS"
    exit 1
  fi
done

echo "Pods validation successful"
exit 0
@@ -0,0 +1,42 @@
#!/bin/bash
set -e

# Get pod nodes
HIGH_NODE=$(kubectl get pod high-priority -n scheduling -o jsonpath='{.spec.nodeName}')
LOW_NODE=$(kubectl get pod low-priority -n scheduling -o jsonpath='{.spec.nodeName}')

# Check if pods are on different nodes
if [[ "$HIGH_NODE" == "$LOW_NODE" ]]; then
  echo "Pods are scheduled on the same node: $HIGH_NODE"
  exit 1
fi

# Check anti-affinity configuration for high priority pod
# (jq prints the string "null" when the field is absent, so test for both)
HIGH_AFFINITY=$(kubectl get pod high-priority -n scheduling -o json | jq -r '.spec.affinity.podAntiAffinity')
if [[ -z "$HIGH_AFFINITY" || "$HIGH_AFFINITY" == "null" ]]; then
  echo "High priority pod does not have pod anti-affinity configured"
  exit 1
fi

# Check anti-affinity configuration for low priority pod
LOW_AFFINITY=$(kubectl get pod low-priority -n scheduling -o json | jq -r '.spec.affinity.podAntiAffinity')
if [[ -z "$LOW_AFFINITY" || "$LOW_AFFINITY" == "null" ]]; then
  echo "Low priority pod does not have pod anti-affinity configured"
  exit 1
fi

# Check if anti-affinity is using the correct topology key
HIGH_TOPOLOGY=$(kubectl get pod high-priority -n scheduling -o jsonpath='{.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[0].topologyKey}')
if [[ "$HIGH_TOPOLOGY" != "kubernetes.io/hostname" ]]; then
  echo "High priority pod using incorrect topology key"
  exit 1
fi

LOW_TOPOLOGY=$(kubectl get pod low-priority -n scheduling -o jsonpath='{.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[0].topologyKey}')
if [[ "$LOW_TOPOLOGY" != "kubernetes.io/hostname" ]]; then
  echo "Low priority pod using incorrect topology key"
  exit 1
fi

echo "Anti-affinity validation successful"
exit 0
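A hedged sketch of a pod shape that satisfies the two scripts above (the name, namespace, priorityClassName, and topologyKey come from the validators; the tier: demo label and the image are illustrative; the low-priority pod mirrors this with its own name and priorityClassName):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: high-priority
  namespace: scheduling
  labels:
    tier: demo
spec:
  priorityClassName: high-priority
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              tier: demo
          topologyKey: kubernetes.io/hostname
  containers:
    - name: app
      image: nginx
EOF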
@@ -0,0 +1,32 @@
#!/bin/bash
set -e

# Check if PVC exists
kubectl get pvc data-pvc -n storage-task || {
  echo "PVC data-pvc not found in namespace storage-task"
  exit 1
}

# Validate storage class
STORAGE_CLASS=$(kubectl get pvc data-pvc -n storage-task -o jsonpath='{.spec.storageClassName}')
if [[ "$STORAGE_CLASS" != "standard" ]]; then
  echo "Incorrect storage class. Expected 'standard', got '$STORAGE_CLASS'"
  exit 1
fi

# Validate access mode
ACCESS_MODE=$(kubectl get pvc data-pvc -n storage-task -o jsonpath='{.spec.accessModes[0]}')
if [[ "$ACCESS_MODE" != "ReadWriteOnce" ]]; then
  echo "Incorrect access mode. Expected 'ReadWriteOnce', got '$ACCESS_MODE'"
  exit 1
fi

# Validate storage size
STORAGE_SIZE=$(kubectl get pvc data-pvc -n storage-task -o jsonpath='{.spec.resources.requests.storage}')
if [[ "$STORAGE_SIZE" != "2Gi" ]]; then
  echo "Incorrect storage size. Expected '2Gi', got '$STORAGE_SIZE'"
  exit 1
fi

echo "PVC validation successful"
exit 0
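A minimal manifest sketch that passes these checks (every field here is taken directly from the validator):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  namespace: storage-task
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF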
@@ -0,0 +1,25 @@
#!/bin/bash
set -e

# Check if pod exists
kubectl get pod data-pod -n storage-task || {
  echo "Pod data-pod not found in namespace storage-task"
  exit 1
}

# Check if pod is running
POD_STATUS=$(kubectl get pod data-pod -n storage-task -o jsonpath='{.status.phase}')
if [[ "$POD_STATUS" != "Running" ]]; then
  echo "Pod is not in Running state. Current state: $POD_STATUS"
  exit 1
fi

# Check if pod is using nginx image
POD_IMAGE=$(kubectl get pod data-pod -n storage-task -o jsonpath='{.spec.containers[0].image}')
if [[ "$POD_IMAGE" != *"nginx"* ]]; then
  echo "Pod is not using nginx image. Current image: $POD_IMAGE"
  exit 1
fi

echo "Pod validation successful"
exit 0
@@ -0,0 +1,19 @@
#!/bin/bash
set -e

# Check if volume mount exists
MOUNT_PATH=$(kubectl get pod data-pod -n storage-task -o jsonpath='{.spec.containers[0].volumeMounts[?(@.name=="data")].mountPath}')
if [[ "$MOUNT_PATH" != "/usr/share/nginx/html" ]]; then
  echo "Volume not mounted at correct path. Expected '/usr/share/nginx/html', got '$MOUNT_PATH'"
  exit 1
fi

# Check if volume is using the PVC
VOLUME_PVC=$(kubectl get pod data-pod -n storage-task -o jsonpath='{.spec.volumes[?(@.name=="data")].persistentVolumeClaim.claimName}')
if [[ "$VOLUME_PVC" != "data-pvc" ]]; then
  echo "Pod is not using the correct PVC. Expected 'data-pvc', got '$VOLUME_PVC'"
  exit 1
fi

echo "Volume mount validation successful"
exit 0
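A hedged pod sketch matching the two storage-task validators above; note the volume must be named data because the jsonpath filters select on that name (the container name is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
  namespace: storage-task
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
EOF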
@@ -0,0 +1,28 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment failing-app -n troubleshoot || {
  echo "Deployment failing-app not found in namespace troubleshoot"
  exit 1
}

# Check container port configuration
PORT=$(kubectl get deployment failing-app -n troubleshoot -o jsonpath='{.spec.template.spec.containers[0].ports[0].containerPort}')
if [[ "$PORT" != "80" ]]; then
  echo "Incorrect container port. Expected 80, got $PORT"
  exit 1
fi

# Check if port is correctly configured in pods
PODS=$(kubectl get pods -n troubleshoot -l app=failing-app -o jsonpath='{.items[*].metadata.name}')
for POD in $PODS; do
  POD_PORT=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.spec.containers[0].ports[0].containerPort}')
  if [[ "$POD_PORT" != "80" ]]; then
    echo "Pod $POD has incorrect port configuration. Expected 80, got $POD_PORT"
    exit 1
  fi
done

echo "Container port validation successful"
exit 0
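A hedged sketch of the kind of fix this check accepts, assuming the broken manifest merely declares the wrong containerPort (a JSON patch is one option; editing the deployment works equally well):

kubectl patch deployment failing-app -n troubleshoot --type=json \
  -p '[{"op":"replace","path":"/spec/template/spec/containers/0/ports/0/containerPort","value":80}]'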
@@ -0,0 +1,39 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment failing-app -n troubleshoot || {
  echo "Deployment failing-app not found in namespace troubleshoot"
  exit 1
}

# Check memory limit configuration
MEM_LIMIT=$(kubectl get deployment failing-app -n troubleshoot -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}')
if [[ "$MEM_LIMIT" != "256Mi" ]]; then
  echo "Incorrect memory limit. Expected 256Mi, got $MEM_LIMIT"
  exit 1
fi

# Check if memory limit is correctly applied to pods
PODS=$(kubectl get pods -n troubleshoot -l app=failing-app -o jsonpath='{.items[*].metadata.name}')
for POD in $PODS; do
  POD_MEM_LIMIT=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.spec.containers[0].resources.limits.memory}')
  if [[ "$POD_MEM_LIMIT" != "256Mi" ]]; then
    echo "Pod $POD has incorrect memory limit. Expected 256Mi, got $POD_MEM_LIMIT"
    exit 1
  fi
done

# Check that pods are not being OOMKilled
for POD in $PODS; do
  LAST_STATE=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}')

  if [[ "$LAST_STATE" == "OOMKilled" ]]; then
    echo "Pod $POD was terminated due to OOMKilled"
    exit 1
  fi
done

echo "Memory limit validation successful"
exit 0
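A hedged one-liner that brings the limit in line with the check above (kubectl set resources updates the pod template and triggers a rollout; requests may need the same treatment depending on the broken manifest):

kubectl set resources deployment failing-app -n troubleshoot --limits=memory=256Mi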
@@ -0,0 +1,37 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment failing-app -n troubleshoot || {
  echo "Deployment failing-app not found in namespace troubleshoot"
  exit 1
}

# Check liveness probe configuration
PROBE_PORT=$(kubectl get deployment failing-app -n troubleshoot -o jsonpath='{.spec.template.spec.containers[0].livenessProbe.httpGet.port}')
if [[ "$PROBE_PORT" != "80" ]]; then
  echo "Incorrect liveness probe port. Expected 80, got $PROBE_PORT"
  exit 1
fi

# Check if probe is configured in pods
PODS=$(kubectl get pods -n troubleshoot -l app=failing-app -o jsonpath='{.items[*].metadata.name}')
for POD in $PODS; do
  POD_PROBE_PORT=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.spec.containers[0].livenessProbe.httpGet.port}')
  if [[ "$POD_PROBE_PORT" != "80" ]]; then
    echo "Pod $POD has incorrect liveness probe port. Expected 80, got $POD_PROBE_PORT"
    exit 1
  fi

  # Check if pod is being restarted by a failing liveness probe.
  # (The kubelet records the terminated reason as "Error" when it kills a
  # container for a failed probe, never "LivenessProbe", so restartCount
  # plus that reason is the practical signal here.)
  RESTARTS=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.status.containerStatuses[0].restartCount}')
  LAST_STATE=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}')

  if [[ "$RESTARTS" -gt 0 && "$LAST_STATE" == "Error" ]]; then
    echo "Pod $POD appears to be restarting due to a failing liveness probe"
    exit 1
  fi
done

echo "Liveness probe validation successful"
exit 0
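A hedged sketch of repointing the probe at the port the check expects, assuming the probe itself is otherwise correct:

kubectl patch deployment failing-app -n troubleshoot --type=json \
  -p '[{"op":"replace","path":"/spec/template/spec/containers/0/livenessProbe/httpGet/port","value":80}]'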
@@ -0,0 +1,61 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment failing-app -n troubleshoot || {
  echo "Deployment failing-app not found in namespace troubleshoot"
  exit 1
}

# Check if all pods are running
READY_PODS=$(kubectl get deployment failing-app -n troubleshoot -o jsonpath='{.status.readyReplicas}')
if [[ "$READY_PODS" != "3" ]]; then
  echo "Not all pods are ready. Expected 3, got $READY_PODS"
  exit 1
fi

# Check pod status
PODS=$(kubectl get pods -n troubleshoot -l app=failing-app -o jsonpath='{.items[*].metadata.name}')
for POD in $PODS; do
  # Check pod phase
  PHASE=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.status.phase}')
  if [[ "$PHASE" != "Running" ]]; then
    echo "Pod $POD is not running. Current phase: $PHASE"
    exit 1
  fi

  # Check container ready status
  READY=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.status.containerStatuses[0].ready}')
  if [[ "$READY" != "true" ]]; then
    echo "Container in pod $POD is not ready"
    exit 1
  fi

  # Check for recent restarts
  RESTARTS=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.status.containerStatuses[0].restartCount}')
  if [[ "$RESTARTS" -gt 0 ]]; then
    # Check if the last restart was recent (within last minute)
    LAST_RESTART=$(kubectl get pod $POD -n troubleshoot -o jsonpath='{.status.containerStatuses[0].lastState.terminated.finishedAt}')
    if [[ -n "$LAST_RESTART" ]]; then
      RESTART_TIME=$(date -d "$LAST_RESTART" +%s)
      NOW=$(date +%s)
      DIFF=$((NOW - RESTART_TIME))
      if [[ "$DIFF" -lt 60 ]]; then
        echo "Pod $POD has recent restarts"
        exit 1
      fi
    fi
  fi
done

# Test pod functionality
for POD in $PODS; do
  # Test nginx is serving on port 80
  kubectl exec $POD -n troubleshoot -- curl -s localhost:80 > /dev/null || {
    echo "Pod $POD is not serving content on port 80"
    exit 1
  }
done

echo "Pod validation successful"
exit 0
@@ -0,0 +1,25 @@
#!/bin/bash
set -e

# Check if StorageClass exists
kubectl get storageclass fast-local || {
  echo "StorageClass fast-local not found"
  exit 1
}

# Validate provisioner
PROVISIONER=$(kubectl get storageclass fast-local -o jsonpath='{.provisioner}')
if [[ "$PROVISIONER" != "rancher.io/local-path" ]]; then
  echo "Incorrect provisioner. Expected 'rancher.io/local-path', got '$PROVISIONER'"
  exit 1
fi

# Validate volumeBindingMode
BINDING_MODE=$(kubectl get storageclass fast-local -o jsonpath='{.volumeBindingMode}')
if [[ "$BINDING_MODE" != "WaitForFirstConsumer" ]]; then
  echo "Incorrect volumeBindingMode. Expected 'WaitForFirstConsumer', got '$BINDING_MODE'"
  exit 1
fi

echo "StorageClass validation successful"
exit 0
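A minimal manifest sketch matching this validator (every field is taken from the checks above):

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-local
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
EOF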
@@ -0,0 +1,12 @@
#!/bin/bash
set -e

# Check if StorageClass is marked as default
IS_DEFAULT=$(kubectl get storageclass fast-local -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}')
if [[ "$IS_DEFAULT" != "true" ]]; then
  echo "StorageClass fast-local is not marked as default"
  exit 1
fi

echo "Default StorageClass validation successful"
exit 0
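A hedged sketch of the annotation flips the two default-StorageClass checks look for (the annotation key is the documented storageclass.kubernetes.io/is-default-class; local-path is illustrative, being the usual pre-existing default in k3s/k3d clusters, so substitute whatever class currently holds the annotation):

kubectl patch storageclass fast-local \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl patch storageclass local-path \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'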
@@ -0,0 +1,20 @@
#!/bin/bash
set -e

# Count number of default StorageClasses
DEFAULT_COUNT=$(kubectl get storageclass -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}' | wc -w)

if [[ "$DEFAULT_COUNT" -ne 1 ]]; then
  echo "Found $DEFAULT_COUNT default StorageClasses. Expected exactly 1"
  exit 1
fi

# Verify the only default is our StorageClass
DEFAULT_SC=$(kubectl get storageclass -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}')
if [[ "$DEFAULT_SC" != "fast-local" ]]; then
  echo "Wrong StorageClass is default. Expected 'fast-local', got '$DEFAULT_SC'"
  exit 1
fi

echo "No other default StorageClass validation successful"
exit 0
@@ -0,0 +1,39 @@
#!/bin/bash
set -e

# Check if PV exists
kubectl get pv manual-pv || {
  echo "PV manual-pv not found"
  exit 1
}

# Validate storage size
STORAGE_SIZE=$(kubectl get pv manual-pv -o jsonpath='{.spec.capacity.storage}')
if [[ "$STORAGE_SIZE" != "1Gi" ]]; then
  echo "Incorrect storage size. Expected '1Gi', got '$STORAGE_SIZE'"
  exit 1
fi

# Validate access mode
ACCESS_MODE=$(kubectl get pv manual-pv -o jsonpath='{.spec.accessModes[0]}')
if [[ "$ACCESS_MODE" != "ReadWriteOnce" ]]; then
  echo "Incorrect access mode. Expected 'ReadWriteOnce', got '$ACCESS_MODE'"
  exit 1
fi

# Validate host path
HOST_PATH=$(kubectl get pv manual-pv -o jsonpath='{.spec.hostPath.path}')
if [[ "$HOST_PATH" != "/mnt/data" ]]; then
  echo "Incorrect host path. Expected '/mnt/data', got '$HOST_PATH'"
  exit 1
fi

# Validate node affinity
NODE_HOSTNAME=$(kubectl get pv manual-pv -o jsonpath='{.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[?(@.key=="kubernetes.io/hostname")].values[0]}')
if [[ "$NODE_HOSTNAME" != "k3d-cluster-agent-0" ]]; then
  echo "Incorrect node affinity. Expected 'k3d-cluster-agent-0', got '$NODE_HOSTNAME'"
  exit 1
fi

echo "PV validation successful"
exit 0
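A manifest sketch assembled from the fields this validator reads (nothing else is assumed):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k3d-cluster-agent-0
EOF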
@@ -0,0 +1,39 @@
#!/bin/bash
set -e

# Check if PVC exists
kubectl get pvc manual-pvc -n manual-storage || {
  echo "PVC manual-pvc not found in namespace manual-storage"
  exit 1
}

# Validate storage size
STORAGE_SIZE=$(kubectl get pvc manual-pvc -n manual-storage -o jsonpath='{.spec.resources.requests.storage}')
if [[ "$STORAGE_SIZE" != "1Gi" ]]; then
  echo "Incorrect storage size. Expected '1Gi', got '$STORAGE_SIZE'"
  exit 1
fi

# Validate access mode
ACCESS_MODE=$(kubectl get pvc manual-pvc -n manual-storage -o jsonpath='{.spec.accessModes[0]}')
if [[ "$ACCESS_MODE" != "ReadWriteOnce" ]]; then
  echo "Incorrect access mode. Expected 'ReadWriteOnce', got '$ACCESS_MODE'"
  exit 1
fi

# Check if PVC is bound
PVC_STATUS=$(kubectl get pvc manual-pvc -n manual-storage -o jsonpath='{.status.phase}')
if [[ "$PVC_STATUS" != "Bound" ]]; then
  echo "PVC is not bound. Current status: $PVC_STATUS"
  exit 1
fi

# Verify it's bound to the correct PV
BOUND_PV=$(kubectl get pvc manual-pvc -n manual-storage -o jsonpath='{.spec.volumeName}')
if [[ "$BOUND_PV" != "manual-pv" ]]; then
  echo "PVC is bound to wrong PV. Expected 'manual-pv', got '$BOUND_PV'"
  exit 1
fi

echo "PVC validation successful"
exit 0
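A hedged claim sketch for the checks above; volumeName pins static binding to manual-pv, and the empty storageClassName is an assumption that stops the default class from provisioning a fresh volume instead:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-pvc
  namespace: manual-storage
spec:
  storageClassName: ""
  volumeName: manual-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF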
@@ -0,0 +1,52 @@
#!/bin/bash
set -e

# Check if pod exists
kubectl get pod manual-pod -n manual-storage || {
  echo "Pod manual-pod not found in namespace manual-storage"
  exit 1
}

# Check if pod is running
POD_STATUS=$(kubectl get pod manual-pod -n manual-storage -o jsonpath='{.status.phase}')
if [[ "$POD_STATUS" != "Running" ]]; then
  echo "Pod is not in Running state. Current state: $POD_STATUS"
  exit 1
fi

# Check if pod is using busybox image
POD_IMAGE=$(kubectl get pod manual-pod -n manual-storage -o jsonpath='{.spec.containers[0].image}')
if [[ "$POD_IMAGE" != *"busybox"* ]]; then
  echo "Pod is not using busybox image. Current image: $POD_IMAGE"
  exit 1
fi

# Check if volume is mounted correctly
MOUNT_PATH=$(kubectl get pod manual-pod -n manual-storage -o jsonpath='{.spec.containers[0].volumeMounts[0].mountPath}')
if [[ "$MOUNT_PATH" != "/data" ]]; then
  echo "Volume not mounted at correct path. Expected '/data', got '$MOUNT_PATH'"
  exit 1
fi

# Check if the correct PVC is used
VOLUME_PVC=$(kubectl get pod manual-pod -n manual-storage -o jsonpath='{.spec.volumes[0].persistentVolumeClaim.claimName}')
if [[ "$VOLUME_PVC" != "manual-pvc" ]]; then
  echo "Pod is not using the correct PVC. Expected 'manual-pvc', got '$VOLUME_PVC'"
  exit 1
fi

# Check if command is set correctly
POD_COMMAND=$(kubectl get pod manual-pod -n manual-storage -o jsonpath='{.spec.containers[0].command[0]}')
if [[ "$POD_COMMAND" != "sleep" ]]; then
  echo "Pod command is not set to sleep"
  exit 1
fi

POD_ARGS=$(kubectl get pod manual-pod -n manual-storage -o jsonpath='{.spec.containers[0].command[1]}')
if [[ "$POD_ARGS" != "3600" ]]; then
  echo "Pod sleep duration is not set to 3600"
  exit 1
fi

echo "Pod validation successful"
exit 0
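A hedged pod sketch matching this validator (the container and volume names are illustrative; everything else, including the command and its 3600 argument, comes from the checks above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: manual-pod
  namespace: manual-storage
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: manual-pvc
EOF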
@@ -0,0 +1,51 @@
#!/bin/bash
set -e

# Check if deployment exists
kubectl get deployment scaling-app -n scaling || {
  echo "Deployment scaling-app not found in namespace scaling"
  exit 1
}

# Check replicas
REPLICAS=$(kubectl get deployment scaling-app -n scaling -o jsonpath='{.spec.replicas}')
if [[ "$REPLICAS" != "2" ]]; then
  echo "Incorrect number of replicas. Expected 2, got $REPLICAS"
  exit 1
fi

# Check image
IMAGE=$(kubectl get deployment scaling-app -n scaling -o jsonpath='{.spec.template.spec.containers[0].image}')
if [[ "$IMAGE" != *"nginx"* ]]; then
  echo "Incorrect image. Expected nginx, got $IMAGE"
  exit 1
fi

# Check resource requests
CPU_REQUEST=$(kubectl get deployment scaling-app -n scaling -o jsonpath='{.spec.template.spec.containers[0].resources.requests.cpu}')
if [[ "$CPU_REQUEST" != "200m" ]]; then
  echo "Incorrect CPU request. Expected 200m, got $CPU_REQUEST"
  exit 1
fi

MEM_REQUEST=$(kubectl get deployment scaling-app -n scaling -o jsonpath='{.spec.template.spec.containers[0].resources.requests.memory}')
if [[ "$MEM_REQUEST" != "256Mi" ]]; then
  echo "Incorrect memory request. Expected 256Mi, got $MEM_REQUEST"
  exit 1
fi

# Check resource limits
CPU_LIMIT=$(kubectl get deployment scaling-app -n scaling -o jsonpath='{.spec.template.spec.containers[0].resources.limits.cpu}')
if [[ "$CPU_LIMIT" != "500m" ]]; then
  echo "Incorrect CPU limit. Expected 500m, got $CPU_LIMIT"
  exit 1
fi

MEM_LIMIT=$(kubectl get deployment scaling-app -n scaling -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}')
if [[ "$MEM_LIMIT" != "512Mi" ]]; then
  echo "Incorrect memory limit. Expected 512Mi, got $MEM_LIMIT"
  exit 1
fi

echo "Deployment validation successful"
exit 0
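A minimal command sketch satisfying this validator (all values come from the checks above):

kubectl create deployment scaling-app --image=nginx --replicas=2 -n scaling
kubectl set resources deployment scaling-app -n scaling \
  --requests=cpu=200m,memory=256Mi --limits=cpu=500m,memory=512Mi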
@@ -0,0 +1,39 @@
#!/bin/bash
set -e

# Check if HPA exists
kubectl get hpa -n scaling scaling-app || {
  echo "HorizontalPodAutoscaler scaling-app not found in namespace scaling"
  exit 1
}

# Check min replicas
MIN_REPLICAS=$(kubectl get hpa -n scaling scaling-app -o jsonpath='{.spec.minReplicas}')
if [[ "$MIN_REPLICAS" != "2" ]]; then
  echo "Incorrect minimum replicas. Expected 2, got $MIN_REPLICAS"
  exit 1
fi

# Check max replicas
MAX_REPLICAS=$(kubectl get hpa -n scaling scaling-app -o jsonpath='{.spec.maxReplicas}')
if [[ "$MAX_REPLICAS" != "5" ]]; then
  echo "Incorrect maximum replicas. Expected 5, got $MAX_REPLICAS"
  exit 1
fi

# Check target CPU utilization
TARGET_CPU=$(kubectl get hpa -n scaling scaling-app -o jsonpath='{.spec.metrics[0].resource.target.averageUtilization}')
if [[ "$TARGET_CPU" != "70" ]]; then
  echo "Incorrect target CPU utilization. Expected 70, got $TARGET_CPU"
  exit 1
fi

# Check if HPA is targeting the correct deployment
TARGET_REF=$(kubectl get hpa -n scaling scaling-app -o jsonpath='{.spec.scaleTargetRef.name}')
if [[ "$TARGET_REF" != "scaling-app" ]]; then
  echo "HPA is not targeting the correct deployment. Expected scaling-app, got $TARGET_REF"
  exit 1
fi

echo "HPA validation successful"
exit 0
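A one-line sketch producing an HPA with exactly these fields; on current clusters kubectl autoscale creates an autoscaling/v2 object, which is what the metrics[0].resource.target.averageUtilization jsonpath above assumes:

kubectl autoscale deployment scaling-app -n scaling --min=2 --max=5 --cpu-percent=70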
@@ -0,0 +1,46 @@
#!/bin/bash
set -e

# Check if pods are running
READY_PODS=$(kubectl get deployment scaling-app -n scaling -o jsonpath='{.status.readyReplicas}')
if [[ "$READY_PODS" != "2" ]]; then
  echo "Not all pods are ready. Expected 2, got $READY_PODS"
  exit 1
fi

# Get pod names
PODS=$(kubectl get pods -n scaling -l app=scaling-app -o jsonpath='{.items[*].metadata.name}')

# Check each pod's resource configuration
for POD in $PODS; do
  # Check CPU request
  CPU_REQUEST=$(kubectl get pod $POD -n scaling -o jsonpath='{.spec.containers[0].resources.requests.cpu}')
  if [[ "$CPU_REQUEST" != "200m" ]]; then
    echo "Pod $POD has incorrect CPU request. Expected 200m, got $CPU_REQUEST"
    exit 1
  fi

  # Check memory request
  MEM_REQUEST=$(kubectl get pod $POD -n scaling -o jsonpath='{.spec.containers[0].resources.requests.memory}')
  if [[ "$MEM_REQUEST" != "256Mi" ]]; then
    echo "Pod $POD has incorrect memory request. Expected 256Mi, got $MEM_REQUEST"
    exit 1
  fi

  # Check CPU limit
  CPU_LIMIT=$(kubectl get pod $POD -n scaling -o jsonpath='{.spec.containers[0].resources.limits.cpu}')
  if [[ "$CPU_LIMIT" != "500m" ]]; then
    echo "Pod $POD has incorrect CPU limit. Expected 500m, got $CPU_LIMIT"
    exit 1
  fi

  # Check memory limit
  MEM_LIMIT=$(kubectl get pod $POD -n scaling -o jsonpath='{.spec.containers[0].resources.limits.memory}')
  if [[ "$MEM_LIMIT" != "512Mi" ]]; then
    echo "Pod $POD has incorrect memory limit. Expected 512Mi, got $MEM_LIMIT"
    exit 1
  fi
done

echo "Resource configuration validation successful"
exit 0