The 8-Week IT Proving Ground: From Zero to a Real-World Portfolio
Alright, let's get started. My name is Ramon Rios Jr., and I've spent more than two decades in this industry, building and breaking systems. Over that time, I’ve learned one fundamental truth: you can't learn tech from a book. You can't learn it from a video course. You learn it with your hands, by building something, seeing it fail, and figuring out how to fix it. Theory is a map, but experience is the journey.
This guide is built on that philosophy. Over the next eight weeks, you won't be a student; you'll be a system administrator, a developer, and a security analyst. You're going to build your own server from the ground up, not in some sandboxed, consequence-free environment, but on a real, live Virtual Private Server (VPS) on the internet. You will provision it, harden it, deploy applications on it, automate it, and even respond to a simulated attack.
This isn't a course where you just watch me work. This is a proving ground. You will get your hands dirty. You will make mistakes. You will probably break things. And that's the point. That's how you build the muscle memory and the intuition that separates a technician who follows a script from a true systems professional who understands the "why" behind every command. This is your roadmap to building a portfolio of real, marketable projects that you can speak about with confidence. Let's begin.
Week 1: Laying the Foundation - Your First Server
This Week's Mission
Our mission this week is to get you out of the theoretical and into the practical. We're going to provision your very own cloud server and establish a highly secure way to access it. This server is your digital workshop, the foundation upon which we will build everything else for the next seven weeks. By the end of this week, you will have a tangible piece of internet infrastructure that is yours to command.
Core Concepts
- Virtual Private Server (VPS): Think of a VPS as your own private, remote computer living in a data center. It's a slice of a much larger physical server, but it acts as a completely independent machine that you have full control over. We use a VPS because it's an affordable, scalable way to run a real server without buying physical hardware.
- The Command-Line Interface (CLI): We will be working exclusively through the command line. There's no graphical user interface (GUI), no mouse, no desktop icons. Real server administration happens in the CLI because it's powerful, efficient, and scriptable. Mastering the CLI is a non-negotiable skill for any serious IT professional.
- Root User vs. Sudo User: In Linux, the root user is the absolute administrator with unlimited power to do anything, including accidentally destroying the entire system. For this reason, we will immediately create a new, non-root user for our daily work. This user will be granted administrative powers via the sudo command, which allows them to temporarily elevate their privileges for specific tasks. This is the principle of least privilege in action from day one.
- SSH (Secure Shell): SSH is the encrypted protocol we use to securely connect to and manage our server over the internet. All communication, from the commands you type to the output you receive, is protected from eavesdropping.
- Public/Private Key Cryptography: This is the modern foundation for secure authentication. Instead of a password that can be stolen or guessed, you'll use a cryptographic key pair. Your private key, which you keep secret on your local computer, acts as your identity. Your public key, which you place on the server, acts as the lock. Only your unique private key can open it, granting you access.
Your Project: Provisioning and Securing Your Ubuntu VPS
Follow these steps exactly. The order is critical.
- Provision the VPS: Sign up with a cloud provider like DigitalOcean, Vultr, or Linode. Choose their smallest, cheapest plan. When prompted, select the latest Ubuntu Server LTS (Long-Term Support) release. Once it's launched, your provider will give you the server's public IP address and a temporary root password.
- Initial Login as Root: Open a terminal on your local computer (Terminal on macOS/Linux, or PowerShell/WSL on Windows). Connect to your server for the first time using the root user and the IP address provided:
ssh root@your_server_ip
You'll likely see a warning about host authenticity; type yes to continue. Enter the temporary password your provider gave you. You may be forced to change it immediately. This is the only time we will log in as root.
- Create Your Day-to-Day User: Once logged in as root, we'll create your personal user account. Replace your_username with a name you like:
adduser your_username
You'll be prompted to create a strong password for this new user. After that, we need to grant this user sudo privileges by adding them to the sudo group:
usermod -aG sudo your_username
- Generate Your SSH Key Pair: Now, on your local computer, not the server, generate your SSH key pair. Open a new terminal window and run the following command:
ssh-keygen
Press ENTER to accept the default file location (~/.ssh/id_rsa). When prompted for a passphrase, enter a strong one. This passphrase encrypts your private key on your computer, adding another layer of security.
- Upload Your Public Key to the Server: We need to copy your public key (the "lock") to your new user's account on the server. The easiest way is with the ssh-copy-id command. This will prompt you for the password you created in Step 3:
ssh-copy-id your_username@your_server_ip
- Test Your Key-Based Login: Log out of your root session on the server. Now, try logging in as your new user:
ssh your_username@your_server_ip
It should now log you in without asking for your server password (it may ask for the passphrase you set for your SSH key).
- Disable Password and Root Login: This is the final and most important hardening step. Logged in as your new user, use nano (a simple text editor) to edit the SSH configuration file. You need sudo because this is a system file:
sudo nano /etc/ssh/sshd_config
Find the following lines, uncomment them if necessary (remove the #), and change their values to no:
PasswordAuthentication no
PermitRootLogin no
Press CTRL+X, then Y, then ENTER to save and exit. Finally, restart the SSH service for the changes to take effect:
sudo systemctl restart ssh
Your server is now only accessible via your SSH key.
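Before you close this terminal, confirm the lockdown actually took. Here's a minimal verification sketch, run from your local machine; the exact error wording can vary slightly between OpenSSH versions, but both attempts should be rejected:
# Root login should now be refused outright
ssh root@your_server_ip
# Expected: "Permission denied (publickey)."
# Forcing password authentication proves that keys are now the only way in
ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password your_username@your_server_ip
# Expected: "Permission denied (publickey)."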
AI Mentor Prompts
- "Explain the difference between a public key and a private key in SSH authentication as if I were a beginner."
- "What is the principle of least privilege, and why is it important to create a sudo user instead of just using root?"
- "Walk me through the lines PasswordAuthentication no and PermitRootLogin no in the sshd_config file. What attack vectors do these changes prevent?"
Ramon's Pro-Tip
This first project is more than just a sequence of commands; it's your first lesson in procedural dependency and operational foresight. A beginner might be tempted to immediately disable root and password logins for security. But if you do that before you've created a new sudo user and correctly installed your SSH key, you will permanently lock yourself out of your own server. Your only option will be to destroy it and start over. Consider this your first, low-stakes "break/fix" scenario. It teaches a crucial lesson that every senior engineer knows by heart: think through the consequences of your actions and always ensure you have a path to access before you close the door behind you.
Week 2: Hardening the Gates - Basic Server Security
This Week's Mission
Last week, we built our house and secured the front door with a very strong lock (our SSH key). This week, we're going to install an alarm system, bar the windows, and put a guard on duty. We'll build upon our foundation by locking down our server, controlling who can access what files and what network ports, and setting up an automated system to block malicious actors.
Core Concepts
- Linux File Permissions: Every file and directory in Linux has a set of permissions that control who can read, write to, or execute it. These permissions are defined for three classes of users: the file's owner, members of the file's group, and everyone else.
- File Ownership: Every file is owned by a specific user and a specific group. Commands like chown (change owner) and chgrp (change group) allow administrators to manage this ownership.
- Firewalls: A firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. It acts as a barrier between your server and the internet, allowing you to explicitly define which services (or "ports") are accessible.
- Intrusion Prevention: While a firewall uses a static set of rules, an Intrusion Prevention System (IPS) is a dynamic tool that can actively respond to threats. It monitors logs for suspicious behavior, like repeated failed login attempts, and automatically blocks the offending IP addresses.
Your Project: Implementing Layered Security
- Understanding and Modifying File Permissions: Let's start by exploring the file system.
- Create a test directory and a test file inside it:
mkdir myproject
touch myproject/testfile.txt
- Now, view the permissions using the ls -l command:
ls -l myproject/
You'll see something like -rw-rw-r--. This string represents the permissions. The first character indicates the file type (- for a file, d for a directory). The next nine characters are three sets of three: read (r), write (w), and execute (x) for the owner, group, and others, respectively.
- Practice changing permissions with chmod. Let's remove write permission for the group and all permissions for others on our test file. We can do this using octal (numeric) mode (there's a worked octal example after the permissions table below):
chmod 640 myproject/testfile.txt
- Now, let's add execute permission for the owner using symbolic mode:
chmod u+x myproject/testfile.txt
- Finally, practice changing ownership. This won't do much now since you're the only user, but the syntax is important. To change the owner to root, you would run:
sudo chown root myproject/testfile.txt
- Configuring the Uncomplicated Firewall (UFW):
- First, check UFW's status. It's usually inactive by default:
sudo ufw status
- Before we enable it, we must add a rule to allow SSH traffic. If you don't, you will be locked out of your server when you enable the firewall:
sudo ufw allow ssh
This command works because UFW knows that ssh corresponds to port 22.
- Now, set the default policies. We want to deny all incoming traffic and allow all outgoing traffic. This is a standard security posture:
sudo ufw default deny incoming
sudo ufw default allow outgoing
- With our SSH rule in place, it's safe to enable the firewall:
sudo ufw enable
Confirm with y. Check the status again to see your rules are active.
- Installing and Configuring Fail2Ban:
- UFW provides our static rules. Fail2Ban will be our dynamic guard. Install it using the package manager:
sudo apt update
sudo apt install fail2ban
- The service will start automatically. Now, we need to configure it. The main configuration file is /etc/fail2ban/jail.conf, but we should never edit it directly, as package updates can overwrite it. Instead, we'll create a local copy that overrides the defaults:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
- Now, edit your local configuration file:
sudo nano /etc/fail2ban/jail.local
- Scroll down until you find the [sshd] section; the SSH jail is already enabled by default on Ubuntu. To customize its behavior, go to the [DEFAULT] section near the top of the file, find these lines, and change the values to something more aggressive for our test. Let's ban for 10 minutes (600 seconds) after 3 failed tries:
bantime = 600
maxretry = 3
- Save and exit the file, then restart Fail2Ban to apply the new configuration:
sudo systemctl restart fail2ban
- Testing Fail2Ban:
- This is the fun part. From your local machine, open a new terminal window (keep your current SSH session open!). Try to SSH into your server, but use a deliberately incorrect username:
ssh wronguser@your_server_ip
- It will fail. Do this 3 or 4 times. On the last attempt, it will likely hang or say "Connection refused." You've just been banned by your own server!
- Back in your active SSH session, check the Fail2Ban status for the SSH jail:Bash
sudo fail2ban-client status sshd
You will see a list of banned IP addresses, and yours will be one of them. The ban will automatically expire after the bantime you set.
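If you don't want to wait out the ban on your own IP, fail2ban-client can lift it manually. A quick sketch; the address below is a placeholder for whichever banned IP the status output showed:
# Remove a specific IP from the sshd jail's ban list immediately
sudo fail2ban-client set sshd unbanip 203.0.113.10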
| Permission Type | Meaning for a File | Meaning for a Directory |
| --- | --- | --- |
| Read (r) | Allows viewing the contents of the file. | Allows listing the contents of the directory (e.g., using ls). |
| Write (w) | Allows modifying or deleting the contents of the file. | Allows creating, deleting, or renaming files within the directory. |
| Execute (x) | Allows running the file as a program or script. | Allows entering the directory (e.g., using cd). |
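To tie the octal notation back to this table: each digit is simply read (4) + write (2) + execute (1), summed once for each class of user. Here's a small sketch using GNU stat (which ships with Ubuntu) to see the symbolic and octal views side by side:
# 6 = 4+2 -> rw- (owner), 4 -> r-- (group), 0 -> --- (others)
chmod 640 myproject/testfile.txt
# %A = symbolic permissions, %a = octal, %U:%G = owner and group, %n = file name
stat -c '%A %a %U:%G %n' myproject/testfile.txt
# Example output: -rw-r----- 640 your_username:your_username myproject/testfile.txt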
AI Mentor Prompts
- "Explain the difference between octal (chmod 755) and symbolic (chmod u+x) notation for file permissions. When would I use one over the other?"
- "Why do I need a tool like Fail2Ban if I've already disabled password authentication and am using strong SSH keys?"
- "If I accidentally lock myself out with UFW, what are my recovery options with a cloud VPS provider?"
Ramon's Pro-Tip
A beginner often asks, "If my SSH key is secure, why do I need a firewall and Fail2Ban? Aren't they redundant?" This is where we move from following steps to thinking about security architecture. These tools are not redundant; they form the foundational layers of a defense-in-depth strategy. Your SSH key secures authentication (proving you are who you say you are). The UFW firewall provides network access control (blocking access to all services that don't need to be public, reducing your server's attack surface). Fail2Ban provides automated threat response (actively blocking IPs that exhibit malicious behavior like port scanning). By framing these tools as distinct but complementary layers—Authentication, Access Control, and Threat Response—you elevate your understanding from just a technician to someone who thinks strategically about system security.
Week 3: The Power of Containers - Introduction to Docker
This Week's Mission
This week, we're making a leap into the most transformative technology in modern infrastructure: containers. We're going to move beyond managing a single operating system and learn how to package applications into isolated, portable environments. We'll learn why Docker has revolutionized software development and deployment, and we'll build the blueprint for our first containerized application.
Core Concepts
- Virtual Machines vs. Containers: A Virtual Machine (VM) virtualizes the hardware, running a full guest operating system on top of a host OS. A container, by contrast, virtualizes the operating system itself, allowing multiple containers to share the host OS kernel. This makes containers incredibly lightweight, fast, and efficient.
- Docker Images and Containers: A Docker image is a lightweight, standalone, executable package that includes everything needed to run a piece of software: the code, a runtime, libraries, environment variables, and config files. It's a blueprint. A container is a running instance of an image. You can create many containers from a single image.
- The Dockerfile: A Dockerfile is a simple text file that contains the step-by-step instructions for building a Docker image. Each instruction creates a layer in the image, and Docker uses these layers to build images efficiently.
- Docker Compose: While you can manage single containers with the docker command, most real-world applications consist of multiple services (e.g., a web server, a database, a caching layer). Docker Compose is a tool that uses a simple YAML file to define and run multi-container Docker applications, handling the networking between them automatically.
- Volumes vs. Bind Mounts: Containers are ephemeral, meaning any data written inside them is lost when the container is removed. To persist data, we use volumes (which are managed by Docker) or bind mounts (which map a directory from the host machine into the container).
Your Project: Building a Multi-Container App with Docker Compose
- Install Docker and Docker Compose: First, let's get the tools onto our Ubuntu server:
sudo apt update
sudo apt install docker.io docker-compose -y
You'll also need to add your user to the docker group to run Docker commands without sudo:
sudo usermod -aG docker your_username
You will need to log out and log back in for this change to take effect.
- Create a Simple Python Web App: We'll use the classic Flask/Redis counter example from the Docker documentation, as it perfectly illustrates a multi-service application.
- Create a project directory and enter it: mkdir composetest && cd composetest
- Create a file named app.py:
import time
import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return f'Hello World! I have been seen {count} times.\n'
- Create a file named requirements.txt:
flask
redis
- Create a file named Dockerfile (no extension):
# Use an official Python runtime as a parent image
FROM python:3.10-alpine

# Set the working directory in the container
WORKDIR /code

# Set environment variables
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0

# Copy the requirements file and install dependencies
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

# Copy the rest of the application code
COPY . .

# Tell Docker the container listens on port 5000
EXPOSE 5000

# Define the command to run the app
CMD ["flask", "run"]
- Write the docker-compose.yml file: This file, which we'll name docker-compose.yml, describes our two services: the web app and the Redis database. Notice how the web service can connect to the redis service using the hostname redis. Docker Compose handles this networking for you.
services:
  web:
    build: .
    ports:
      - "8000:5000"
  redis:
    image: "redis:alpine"
- Build and Run the Application: From your project directory, a single command builds your Python image and starts both containers:
docker-compose up --build
You'll see the output from both services. Open a new terminal and use curl http://localhost:8000 on your server to see the hit counter working. To stop the application, go back to the first terminal and press CTRL+C. To remove the containers, run docker-compose down.
- Understanding Data Persistence: If you run docker-compose up again, you'll notice the counter resets to 1. This is because the Redis container's data is ephemeral. Let's fix this with a Docker volume.
- Modify your docker-compose.yml to define a named volume and mount it into the Redis container.
services:
  web:
    build: .
    ports:
      - "8000:5000"
  redis:
    image: "redis:alpine"
    volumes:
      - redis-data:/data

volumes:
  redis-data:
- Now run docker-compose up -d (the -d runs it in the background). Visit the page a few times to increment the counter. Then, run docker-compose down. Finally, run docker-compose up -d one more time. When you visit the page, you'll see the counter has picked up where it left off. The data is now persistent.
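If you want to watch the persistence mechanism work, here's a short verification sketch; it assumes the service and volume names from the docker-compose.yml above:
# The named volume now shows up under Docker's management
docker volume ls
# Ask Redis directly for the current counter value
docker-compose exec redis redis-cli get hits
# Destroy and recreate the containers, then check again (give Redis a moment to start)
docker-compose down
docker-compose up -d
docker-compose exec redis redis-cli get hits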
| Feature | Docker Volumes | Bind Mounts |
| --- | --- | --- |
| Management | Fully managed by the Docker daemon. | Managed by the user on the host operating system. |
| Location | Stored in a dedicated area on the host filesystem managed by Docker (/var/lib/docker/volumes/). | Can be any file or directory on the host system. |
| Use Case | Preferred for production data. Ideal for databases, application state, and any data that should persist beyond the container's lifecycle. | Ideal for development. Useful for mounting source code into a container so changes on the host are immediately reflected, or for sharing configuration files. |
| Portability | More portable and OS-agnostic. The volume is part of the Docker ecosystem. | Less portable, as it depends on a specific directory structure on the host machine. |
AI Mentor Prompts
- "Explain the difference between a Docker image and a container like I'm five."
- "In a Dockerfile, what is the difference between the RUN command and the CMD command?"
- "Why is it generally better to use a Docker volume instead of a bind mount for a production database container?"
Ramon's Pro-Tip
Pay close attention to what we did in this week's project. We could have installed Redis inside the same container as our Python app, but we didn't. By separating the web application (web service) from the data store (redis service), you've just implemented your first microservice architecture. This is a fundamental design pattern in modern software. Each service has a single responsibility and can be updated, scaled, or replaced independently of the others. Docker Compose makes this separation incredibly easy to manage, hiding the networking complexity. So this week wasn't just about learning Docker syntax; it was an implicit lesson in how to design resilient, scalable applications. This mental model is crucial for anyone moving into DevOps or cloud engineering.
Week 4: Building with Blueprints - Deploying Your First Containerized App
This Week's Mission
It's time to apply our new Docker skills to a real-world, professional-grade application. We're moving beyond "hello world" examples to deploy a powerful security tool that you can actually use to monitor and protect the very server we're building. This week is about taking a complex, multi-component application and taming it with containers.
Core Concepts
- Container Security Best Practices: We'll go beyond just running containers and start thinking about how to run them securely. This includes using trusted images, running processes as non-root users, and not granting unnecessary privileges.
- Docker Networking: We'll reinforce our understanding of how Docker Compose creates isolated networks for our applications, allowing services to communicate securely without exposing unnecessary ports to the outside world.
- Persistent Data Management: Real applications have critical data that must survive container restarts and updates. We'll see how a complex application uses Docker volumes to manage its persistent state.
- Reading Documentation: A huge part of being an IT pro is the ability to take an official deployment guide and adapt it to your own environment. We'll practice that skill this week by using the official Wazuh Docker project as our foundation.
Your Project: Containerizing a Security Tool (Wazuh)
For this project, I recommend we deploy Wazuh. It's a powerful, open-source Security Information and Event Management (SIEM) tool. Deploying it not only gives you a fantastic portfolio piece but also a functional security monitoring system for your own server.
- Clone the Official Wazuh Docker Repository: We won't be writing the docker-compose.yml from scratch. Instead, we'll use the official one provided by the Wazuh team. This is a common real-world scenario:
git clone https://github.com/wazuh/wazuh-docker.git -b v4.7.4 --depth=1
cd wazuh-docker/single-node
- Analyze the docker-compose.yml file: Before running anything, open the docker-compose.yml file in nano. You'll see it's more complex than our simple Flask app. Identify the three main services that make up the Wazuh stack:
- wazuh.manager: The core server that analyzes agent data.
- wazuh.indexer: The database that stores and indexes all the security event data.
- wazuh.dashboard: The web interface you'll use to view alerts and manage the system.
Take note of the ports section. You'll see that the dashboard is exposed on port 443 (HTTPS), and the manager exposes ports for agent communication. Also, look at the volumes section. You'll see they use named volumes to persist all the critical configuration and data for each component.
- Generate SSL Certificates: The Wazuh stack components communicate with each other using TLS encryption. We need to generate the certificates for them. The repository includes a handy script for this.
docker-compose -f generate-certs.yml run --rm generator
- Implement Security Best Practices: Now, let's think like security professionals.
- Network Isolation: Notice that all three services are in the same docker-compose.yml file. This means Docker Compose will automatically create a dedicated, isolated network for them. The indexer's database port, for example, is not exposed to the public internet; only the other containers on the same Docker network can reach it. This is a critical security feature.
- Least Privilege: We are not running these containers with the --privileged flag. This is crucial because privileged containers have nearly root-level access to the host machine, effectively breaking container isolation.
- Trusted Images: We are using the official wazuh/ images from Docker Hub. Always use official or verified images when possible to avoid running containers with malicious code embedded in them.
- Deploy and Access the Application: With our analysis complete, it's time to launch the stack. The -d flag will run it in detached mode (in the background).
docker-compose up -d
This will take a few minutes as it downloads the images and starts the containers. You can check the progress with docker-compose ps. Once all services show a "healthy" or "running" state, your deployment is complete.
- Log In and Explore: Open a web browser and navigate to https://your_server_ip. You'll likely get a browser warning about an untrusted certificate because we generated a self-signed one. Proceed past the warning. The default credentials for the Wazuh dashboard are admin and SecretPassword. Change this password immediately! Once you're in, you'll see that a Wazuh agent is already installed and reporting in—it's monitoring the Docker host itself. Your project is complete.
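A few commands I lean on to confirm a stack like this is genuinely healthy, using the service names defined in the Wazuh docker-compose.yml:
# All three containers should report an "Up" (ideally "healthy") state
docker-compose ps
# Follow the dashboard's logs while it finishes initializing
docker-compose logs -f wazuh.dashboard
# One-off snapshot of CPU and memory usage; the indexer is the hungriest component
docker stats --no-stream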
AI Mentor Prompts
- "What are the most common security mistakes people make when using Docker, and how can I avoid them?"
- "Explain the concept of a SIEM like Wazuh. What kind of data does it collect, and what can I do with that information?"
- "In the Wazuh docker-compose.yml, I see restart: always. What does this do and what are the other restart policies available in Docker?"
Ramon's Pro-Tip
This week's project creates a powerful, self-reinforcing learning loop. You aren't just deploying some random application; you're deploying a security tool that you can immediately use to monitor the very server you are building. In Week 2, you set up Fail2Ban, which logs brute-force attempts to /var/log/auth.log. Now, you can go into your new Wazuh dashboard, find the agent monitoring your host, and see the alerts from that very log file. This immediately connects your previous work (basic hardening) to your current work (advanced monitoring). Using the tool (Wazuh) reinforces the importance of the foundational security measures you've already implemented. This transforms the curriculum from a linear sequence of tasks into an integrated ecosystem. You're building a system and the tools to manage it simultaneously, which is the essence of a holistic, hands-on approach.
Week 5: The Automation Engine - Mastering n8n Workflows
This Week's Mission
So far, we've been the ones actively managing our server—checking logs, running commands. This week, we flip the script. Our mission is to build a system that tells us when something is wrong, instead of waiting for us to find it. We will learn the power of workflow automation, creating a system that proactively monitors itself and alerts us to potential problems. This is the first major step toward building truly resilient, "hands-off" infrastructure.
Core Concepts
- Workflow Automation: This is the concept of connecting different applications and services together to perform tasks automatically. It's built on a simple model of "triggers" (an event that starts the workflow) and "actions" (what the workflow does in response).
- Webhooks: A webhook is one of the most common ways for applications to communicate. It's essentially a URL that an application can send a simple HTTP request to in order to trigger an action in another application. It's like a doorbell for software.
- API Credentials: To connect to services that require authentication (like sending an email through a provider or posting to a private social media account), automation tools need credentials, often in the form of an API key or token. Managing these securely is crucial.
- Proactive Monitoring: This is a fundamental shift in mindset. Instead of reactively logging in to check on a server's health, we build automated systems that constantly monitor key metrics (like disk space or CPU usage) and alert us before a small issue becomes a critical failure.
Your Project: Automated Server Health Alerts with n8n and Discord
- Deploy n8n: n8n is a powerful, open-source workflow automation tool that we can easily run in Docker.
- Create a new directory for it: mkdir n8n && cd n8n.
- Create a docker-compose.yml file:
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    volumes:
      - n8n-data:/home/node/.n8n

volumes:
  n8n-data:
Notice we are using a named volume, n8n-data, to ensure our workflows and credentials are saved permanently.
- Start it up: docker-compose up -d.
- Set up a Discord Webhook: We need a place for our alerts to go. A private Discord server is perfect for this.
- Create a new, free Discord server for yourself. Create a text channel called #alerts.
- Go into the channel's settings (Edit Channel > Integrations > Webhooks), create a new webhook, give it a name like "Server Monitor," and copy the Webhook URL. Keep this URL safe; anyone with it can post messages to your channel.
- Build the n8n Workflow:
- Access your n8n instance in a browser at http://your_server_ip:5678. You'll be asked to set up an owner account.
- Create a new, blank workflow.
- The first node is your trigger. Click the + and search for the "Webhook" node. This will automatically generate a unique URL for your workflow. Copy the "Test" URL for now.
- Click the + after the Webhook node to add an action. Search for and select the "Discord" node.
- In the Discord node's settings:
- For "Authentication," select "Webhook URL."
- Paste the Discord webhook URL you copied in Step 2 into the "Webhook URL" field.
- In the "Content" field, we'll use an expression to pull data from the webhook. Type: Alert from server: {{ $json.body.message }}. This tells Discord to display the value of the message key from the JSON data we're about to send.
- Save and activate your workflow.
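Before wiring up the server-side script, you can confirm the workflow end to end with a single curl call from your server. The URL below is only a placeholder for the Test URL you copied from the Webhook node; keep in mind the test URL only listens while the n8n editor is waiting for a test event, and once the workflow is activated you'll switch to its production URL:
# The JSON body deliberately matches the expression used in the Discord node: {{ $json.body.message }}
curl -X POST -H "Content-Type: application/json" \
  -d '{"message": "Test alert from curl"}' \
  "http://your_server_ip:5678/webhook-test/your-unique-id"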
- Create a Health Check Script: Now, back on your server's command line, we'll create a simple script to check the server's health. Let's make a script that checks root disk usage.
- Create the script file: nano health_check.sh.
- Paste the following code, replacing YOUR_N8N_WEBHOOK_URL with the URL from your n8n Webhook node.
#!/bin/bash
# The URL for your n8n webhook
WEBHOOK_URL="YOUR_N8N_WEBHOOK_URL"
# The threshold for disk usage percentage
THRESHOLD=80
# Get the current usage of the root filesystem
CURRENT_USAGE=$(df / | grep / | awk '{ print $5 }' | sed 's/%//g')
if [ "$CURRENT_USAGE" -gt "$THRESHOLD" ]; then
  # If usage is over the threshold, send an alert
  MESSAGE="CRITICAL: Root disk space is at ${CURRENT_USAGE}%!"
  curl -X POST -H "Content-Type: application/json" -d "{\"message\": \"${MESSAGE}\"}" "$WEBHOOK_URL"
fi
- Make the script executable: chmod +x health_check.sh.
- Schedule with Cron: Finally, we'll use cron, the built-in Linux task scheduler, to run our script automatically.
- Open the crontab editor: crontab -e.
- Add the following line to the bottom of the file, making sure to replace the path with the actual path to your script. This will run the script every 5 minutes.
*/5 * * * * /home/your_username/health_check.sh
- Save and exit. Your automated monitoring is now live! To test it, you can temporarily set the THRESHOLD in your script to a low number (like 10), run it manually (./health_check.sh), and you should see the alert pop up in your Discord channel instantly.
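If the five asterisks look cryptic, here is the same crontab entry annotated, with an optional log redirect added (my own habit, not a requirement) so that failures leave a trail you can read later:
# Field order: minute  hour  day-of-month  month  day-of-week  command
# "*/5" in the minute field means "every 5 minutes"; "*" matches every value
*/5 * * * * /home/your_username/health_check.sh >> /home/your_username/health_check.log 2>&1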
AI Mentor Prompts
- "Give me a simple bash script that uses curl to send a JSON payload to a webhook URL."
- "Explain the syntax of a crontab entry. What do the five stars mean?"
- "What are some other key server health metrics I could monitor besides disk space? Give me examples of commands to check them."
Ramon's Pro-Tip
This project is a subtle but powerful introduction to the core principles of Site Reliability Engineering (SRE). A traditional, reactive sysadmin logs into a server to see if anything is wrong. An SRE builds systems that tell them when something is wrong. By creating an automated check (cron script) and a real-time notification pipeline (n8n to Discord), you are learning to build observability into your infrastructure. You're shifting your mindset from manual, reactive work to building automated, proactive systems. This is a far more valuable and modern skill. The goal isn't just to keep a server running; it's to build a system that alerts you to problems before they become catastrophic failures.
Week 6: Trial by Fire - Getting Hacked and Bouncing Back
This Week's Mission
This is the week you've been preparing for. All the foundational work, the containerization, the automation—it all leads to this. We are going to intentionally break our server, diagnose the problem under pressure, and then restore it from a backup. This is not a drill. It's a controlled fire where you get to be the firefighter. This is the week that theory is burned away and only practical, hard-won experience remains.
Core Concepts
- Incident Response: This is the methodical approach to addressing and managing the aftermath of a security breach or system failure. The basic steps are to identify the problem, contain the damage, eradicate the cause, and recover the system.
- System Diagnostics: When a system is misbehaving, you need to know how to ask it what's wrong. We'll use fundamental Linux tools like top, htop, and log files to find the root cause of a performance issue.
- Backup and Recovery Strategy: Prevention is ideal, but recovery is essential. A solid backup strategy is your ultimate safety net. We'll put the famous 3-2-1 Rule (3 copies of your data, on 2 different media, with 1 copy offsite) into practice.
- System Resilience: This is the understanding that things will break. A resilient system isn't one that never fails; it's one that can be recovered quickly and reliably after a failure.
Your Project: The "Get Hacked" Scenario
This project is broken into four distinct parts: Prepare, Break, Analyze, and Fix.
- Part 1: The Backup Plan (Prepare):
- Early in the week, before the "attack," we must establish our backup plan. A backup is useless if it's taken after the disaster.
- Let's choose a critical piece of data to protect: the named volume for our n8n workflows from last week. First, find where Docker stores this volume on the host. You can find the Mountpoint by running:
docker volume inspect n8n_n8n-data
- Now, we'll use the tar command to create a compressed archive of that entire directory. We'll name it with today's date.
sudo tar -czvf n8n-backup-$(date +%F).tar.gz /var/lib/docker/volumes/n8n_n8n-data/_data
- This creates our backup. To simulate the "1 copy offsite" part of the 3-2-1 rule, you must now get this file off the server. Use a tool like scp (Secure Copy) or FileZilla to download the n8n-backup-....tar.gz file to your local computer. This is your lifeline.
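For reference, a minimal scp example for that offsite copy, run from your local machine and assuming the archive was created in your home directory on the server:
# Pull the dated backup archive down into the current local directory
scp "your_username@your_server_ip:~/n8n-backup-*.tar.gz" .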
- Part 2: The "Attack" (Break):
- It's time to simulate a common attack: a malicious script that consumes all your server's resources, like a cryptominer. We'll use a simple but brutally effective command called yes. It does nothing but output the letter 'y' infinitely, which will max out a CPU core.
- First, find out how many CPU cores your server has: nproc.
- Now, for each core, run the following command. For example, if you have 2 cores, run it twice. The & puts the process in the background.
yes > /dev/null &
yes > /dev/null &
- Within seconds, your server's CPU usage will be at 100%. If you extended last week's n8n health check to watch CPU usage, that alert should fire and notify you in Discord. The server will become sluggish and difficult to use. The attack is live.
- Part 3: The Diagnosis (Analyze):
- Your mission is to SSH into your struggling server and figure out what's happening. The connection might be slow, but it should work.
- Once you're in, run the top command (or htop if you've installed it). You will immediately see two processes named yes at the very top of the list, each consuming close to 100% of a CPU core.
- Note the Process ID (PID) for each of the yes processes from the first column.
- To stop the attack, you must terminate these processes using the kill command.
kill PID_of_first_yes_process
kill PID_of_second_yes_process
- Run top again. The yes processes should be gone, and your CPU usage should return to normal. You have successfully neutralized the immediate threat.
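For future incidents where the culprit isn't so obvious, here are a couple of alternatives worth keeping in your back pocket; pkill is simply a faster way of doing what you just did with kill:
# List the top CPU consumers without opening an interactive tool
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 5
# Terminate every process named "yes" in one command instead of PID by PID
pkill yes
# Escalate to SIGKILL only if a process ignores the default signal
# pkill -9 yes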
- Part 4: The Recovery (Fix):
- The attacker is gone, but what if they damaged something? We'll simulate data corruption by deleting the contents of our n8n data directory. This is a destructive command. Double-check your path.
sudo rm -rf /var/lib/docker/volumes/n8n_n8n-data/_data/*
- If you try to access n8n now, it will be broken or reset to its initial state.
- It's time to use our backup. First, upload the n8n-backup-....tar.gz file from your local computer back to your server's home directory using scp or FileZilla.
- Now, extract the archive, restoring the files to their original location.
sudo tar -xzvf n8n-backup-....tar.gz -C /
The -C / flag tells tar to extract the files relative to the root directory, putting everything back exactly where it belongs.
- Restart your n8n container (docker-compose -f /path/to/n8n/docker-compose.yml restart n8n) and access the web UI. Your workflows and settings should be fully restored. You have successfully recovered from the incident.
AI Mentor Prompts
- "In the top command output, what do the 'us', 'sy', and 'id' CPU time values mean, and why are they useful for diagnosing performance issues?"
- "Explain the flags in the command tar -czvf. What does each letter do?"
- "What is the difference between a full backup, an incremental backup, and a differential backup?"
Ramon's Pro-Tip
This week is the emotional and psychological core of this entire curriculum. Reading about incident response in a textbook is abstract and boring. Experiencing a "live" incident, even a simulated one where the system you've carefully built for weeks becomes unresponsive, creates a genuine sense of urgency. Successfully diagnosing the issue with top and killing the rogue processes will give you a massive confidence boost. It proves you can handle pressure and solve real problems. Then, successfully restoring from the backup you so diligently prepared transforms the idea of "backups" from a theoretical chore into a tangible, proven lifesaver. You will never forget the feeling of fixing a broken system you built yourself. This experience solidifies the "build, break, fix" philosophy in a way no lecture ever could. This is the week you start to feel like a real sysadmin.
Week 7: The Business Connection - Deploying a Real-World ERP
This Week's Mission
So far, our projects have been focused on infrastructure and IT tools. This week, we bridge the gap between technical skills and business value. Our mission is to deploy a complex, multi-tiered business application—an Enterprise Resource Planning (ERP) system—and expose it to the world securely. We will put it behind a reverse proxy and enable HTTPS with a valid SSL certificate. This project proves you can handle the kind of production-grade deployments that businesses run on.
Core Concepts
- Enterprise Resource Planning (ERP): An ERP system is a type of software that organizations use to manage day-to-day business activities such as accounting, procurement, project management, risk management, and supply chain operations. It's the digital backbone of a company. We're deploying Odoo, a popular open-source ERP.
- Reverse Proxy: A reverse proxy is a server that sits in front of one or more web servers, intercepting requests from clients. It's a critical component in production environments for several reasons: it can provide SSL/TLS termination (handling all the encryption work), load balance traffic across multiple backend servers, and add a layer of security by hiding the identity of your backend servers.
- SSL/TLS Certificates: These are the digital certificates that enable HTTPS (the 'S' stands for 'Secure'). They encrypt the data transmitted between a user's browser and your server, ensuring privacy and data integrity. In today's web, HTTPS is non-negotiable.
- Let's Encrypt: Let's Encrypt is a non-profit Certificate Authority that provides free, automated SSL/TLS certificates. Its arrival revolutionized web security by making it easy and free for anyone to enable HTTPS.
Your Project: Deploying and Securing Odoo
- Deploy Odoo with Docker Compose: We'll use a pre-existing Docker Compose configuration to get Odoo and its PostgreSQL database running quickly.
- Create a directory for your Odoo project: mkdir odoo && cd odoo.
- Create a docker-compose.yml file. This configuration is based on common Odoo deployment patterns.
services:
  odoo:
    image: odoo:19.0
    restart: always
    environment:
      - HOST=db
      - USER=odoo
      - PASSWORD=myodoo_password
    volumes:
      - odoo-data:/var/lib/odoo
    depends_on:
      - db
  db:
    image: postgres:16
    restart: always
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=myodoo_password
      - POSTGRES_USER=odoo
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  odoo-data:
  db-data:
- Important: Note that we are not exposing any ports for Odoo or the database in this file. The reverse proxy will be the only way to access the application.
- Set up a Domain Name: To get a valid SSL certificate, you need a real domain name. If you don't have one, go to a registrar like Namecheap and buy a cheap one (they can be had for a few dollars a year). Once you have it, go to your registrar's DNS settings and create an "A" record that points your domain (e.g., my-odoo-project.com) to your server's public IP address.
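Caddy can only obtain a certificate once the outside world can resolve your domain to your server, so it's worth checking propagation before launching anything. A quick sketch using dig, which comes from Ubuntu's dnsutils package:
# Should print your server's public IP once the A record has propagated
dig +short yourdomain.com
# If dig isn't installed: sudo apt install dnsutils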
- Deploy a Caddy Reverse Proxy: We will use Caddy as our reverse proxy because it is incredibly simple and handles HTTPS automatically. Nginx is also a great tool, but Caddy is perfect for beginners.
- Add the Caddy service to your docker-compose.yml file.
# ... (keep the odoo and db services as they are)
  caddy:
    image: caddy:2
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data

volumes:
  odoo-data:
  db-data:
  caddy-data:
- Now, create a file named Caddyfile in the same directory. This is Caddy's configuration file. Replace yourdomain.com with the domain you set up:
yourdomain.com {
    reverse_proxy odoo:8069
}
That's it. This configuration tells Caddy to listen for traffic for yourdomain.com, automatically provision a Let's Encrypt SSL certificate for it, and proxy all requests to the odoo container on port 8069 (Odoo's default port).
- Launch and Verify the Deployment:
- Now, run docker-compose up -d from your odoo directory. This will start all three containers: the database, the Odoo application, and the Caddy reverse proxy.
- Wait a minute or two for the DNS changes to propagate and for Caddy to acquire the certificate.
- Open a web browser and navigate to https://yourdomain.com. You should see the Odoo setup screen, served securely over HTTPS with a valid padlock icon in your browser. The project is complete when you can create your master password, set up a database, and log into the Odoo dashboard.
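Two quick checks I run after a deployment like this, using the service name from the docker-compose.yml above; the exact HTTP status Odoo returns may vary, but there should be no certificate errors:
# Inspect the response headers; a clean TLS handshake means automatic HTTPS worked
curl -I https://yourdomain.com
# Watch Caddy's logs if the certificate doesn't appear within a couple of minutes
docker-compose logs -f caddy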
AI Mentor Prompts
- "Explain what a reverse proxy does and why it's a critical component for security and performance in a production web application."
- "How does Caddy's automatic HTTPS feature work with Let's Encrypt? What are the ACME challenge types?"
- "What is the business purpose of an ERP system like Odoo? What are some of the key modules it might include?"
Ramon's Pro-Tip
This week's project is designed to teach a crucial lesson: the ultimate purpose of infrastructure is to enable business operations. As tech professionals, it's easy to get lost in the tools and forget this. Up to this point, our projects have been for us—the IT pros. Deploying an ERP forces you to think from the perspective of a non-technical user. Why is this application important? Because it manages the company's sales, inventory, and accounting. This context gives meaning to our technical tasks. The database isn't just a database; it holds the company's financial records. The web server isn't just a web server; it's the primary interface for employees to do their jobs. This week fundamentally shifts your perspective. You learn to see yourself not just as a "Linux person" or a "Docker person," but as a technology professional whose job is to support and enhance the business. That business-centric mindset is what employers are desperately looking for, and it's what separates the hobbyists from the highly-paid professionals.
Week 8: Launching Your Career - From Projects to Paycheck
This Week's Mission
You've made it. For the last seven weeks, you've been in the trenches, building, hardening, breaking, and fixing. You have accumulated an incredible amount of hands-on experience. But that experience is useless if you can't communicate it. This final week, our mission is to translate your hard-won technical skills into a professional portfolio that gets you noticed and gets you hired. We will craft your resume, optimize your LinkedIn profile, and prepare you to talk about your projects with confidence in an interview.
Core Concepts
- Impact-Oriented Resumes: The difference between a good resume and a great one is the shift from listing responsibilities ("I was responsible for...") to quantifying achievements ("I accomplished X, resulting in Y"). We'll focus on demonstrating impact.
- Keyword Optimization: Recruiters and automated Applicant Tracking Systems (ATS) search for specific keywords. We need to ensure your online presence is seeded with the right terms for the jobs you want.
- The STAR Method: This is a simple, structured way to answer behavioral interview questions ("Tell me about a time when..."). It stands for Situation, Task, Action, Result. It helps you tell a compelling story about your experience.
- Articulating Technical Concepts: It's not enough to know what you did. You must be able to explain why you did it. We'll practice explaining the reasoning behind our technical decisions.
Your Project: Building Your Professional Brand
- Translating Projects into Resume Bullet Points: Let's go through our weekly projects and turn them into powerful, action-oriented bullet points for a "Projects" section on your resume.
- Weeks 1 & 2: Server Foundation & Hardening
- Before: "Set up an Ubuntu server."
- After: "Provisioned and hardened a public-facing Ubuntu Linux server from scratch, implementing layered security including SSH key-only authentication, a UFW firewall with a default-deny policy, and Fail2Ban for automated intrusion prevention against brute-force attacks."
- Weeks 3 & 4: Containerization with Docker
- Before: "Used Docker and deployed Wazuh."
- After: "Deployed and managed multi-container applications using Docker Compose, containerizing a full SIEM stack (Wazuh) for security monitoring. Leveraged named volumes for data persistence and enforced security best practices including network segmentation and the use of official, trusted images."
- Weeks 5 & 6: Automation & Incident Response
- Before: "Set up alerts and fixed a problem."
- After: "Engineered a proactive server monitoring solution using n8n and Discord webhooks, creating automated alerts for critical system metrics. Successfully diagnosed and remediated a simulated high-CPU incident, identifying and terminating rogue processes using top and restoring corrupted application data from a tar backup."
- Week 7: Production-Grade ERP Deployment
- Before: "Installed Odoo and used Caddy."
- After: "Orchestrated the production deployment of an Odoo ERP system in Docker, fronted by a Caddy reverse proxy that provided automatic SSL/TLS certificate acquisition and renewal via Let's Encrypt for secure HTTPS access."
- Weeks 1 & 2: Server Foundation & Hardening
- Optimizing Your LinkedIn Profile: Your LinkedIn profile is your digital resume and professional storefront. Let's make it work for you.
- Photo: Get a clean, professional-looking headshot. No exceptions. Profiles with photos get far more engagement.
- Headline: This is the most important real estate. Don't just put "Student." Use keywords.
- Example: "Aspiring Systems Administrator | Linux, Docker, & Automation | Building a Hands-On Project Portfolio"
- Summary (About Section): Tell your story. Briefly explain your motivation for getting into IT and describe this 8-week journey. Mention the key technologies you learned (Ubuntu, Docker, UFW, Fail2Ban, n8n, Caddy, Odoo, Wazuh) and state that you are actively seeking an entry-level role in system administration, DevOps, or cybersecurity.
- Experience/Projects: Create a "Project" entry for this 8-week program. Use the bullet points we crafted in the previous step. If you can, create a public GitHub repository with your configuration files (docker-compose.yml, Caddyfile, health_check.sh) and link to it from your profile. This is tangible proof of your work.
- Preparing for the Interview: An interview is where you bring your resume to life. Use your projects as the source material for your answers.
- The "Tell me about a time you had to troubleshoot a problem" question: This is a gift. You don't need to invent a story. You have a real one. Use the STAR method to describe the Week 6 "Get Hacked" scenario.
- Situation: "As part of my personal development lab, I was running several containerized services on a Linux VPS. I had set up automated monitoring, which alerted me to a sustained 100% CPU spike that was making the server unresponsive."
- Task: "My task was to diagnose the root cause of the performance degradation, neutralize the issue, and ensure all systems were returned to a fully functional state."
- Action: "I SSH'd into the machine and used the top command to identify several rogue 'yes' processes consuming all available CPU resources. I recorded their Process IDs and terminated them using the kill command. I then discovered that the application data for my n8n service had been corrupted."
- Result: "By terminating the malicious processes, I immediately restored server performance. I then successfully restored the corrupted data from a previously created tar backup that I had stored offsite, bringing the n8n service back online with no data loss in under 15 minutes. This experience validated my incident response and disaster recovery procedures."
- The "Why should we hire you?" question:
- Answer: "Because I don't just have theoretical knowledge; I have practical, hands-on experience. I've built a secure, containerized server environment from the ground up. I've deployed not just simple web apps, but complex business and security systems like Odoo and Wazuh. I've built my own automation for monitoring, and I've proven I can stay calm under pressure to diagnose and fix a critical system failure. This 8-week project demonstrates my ability to learn quickly, solve real-world problems, and my passion for building resilient, secure systems."
- The "Tell me about a time you had to troubleshoot a problem" question: This is a gift. You don't need to invent a story. You have a real one. Use the STAR method to describe the Week 6 "Get Hacked" scenario.
AI Mentor Prompts
- "Take this resume bullet point and rephrase it using the STAR method: 'Fixed a server that was running slow'."
- "Generate five common interview questions for an entry-level DevOps or Junior System Administrator role."
- "Review my LinkedIn summary for keyword optimization. The roles I'm targeting are 'Junior Linux Administrator' and 'Cloud Support Associate'."
Ramon's Pro-Tip
This final week is the critical bridge between your skills and your employment. Many talented technical people fail at this stage. They can build a nuclear reactor, but they can't explain how it works in a job interview. Technical ability is only half the battle; the other half is effective communication and professional marketing. This curriculum doesn't just end; it culminates in a launchpad. You finish not just with a server, but with a portfolio, a story, and the confidence to tell that story in a way that resonates with hiring managers. You've done the work. Now go out there and show them what you've built. Good luck.