Initial commit of Wallarm deployment toolkit

This commit is contained in:
SechPoint 2026-03-11 12:03:08 +00:00
parent 27329d3afa
commit 14dadd0735
10 changed files with 736 additions and 5 deletions

.DS_Store vendored Normal file

README.md

@@ -1,8 +1,122 @@
# 🛡️ Wallarm Deployment Toolkit

This repository contains automated scripts for deploying the Wallarm Filtering Node in a range of environments. Whether you are targeting a virtual machine (NGINX dynamic module) or a containerized environment (Docker/Podman), these scripts aim for a "Bank-Grade" configuration.

**Repository:** `https://git.sechpoint.app/customer-engineering/wallarm`
---
## 🚦 Step 1: Network Pre-Check (Crucial)
Before attempting any installation, you **must** verify that the server has egress access to Wallarm's Cloud and the official NGINX repositories. Banks often block these by default.
1. **Navigate to the script:**
```bash
cd ~/wallarm/
```
2. **Run the audit:**
```bash
chmod +x network-pre-check.sh
sudo ./network-pre-check.sh
```
3. **Result:** If you see any **RED** "FAIL" messages, contact the network team to whitelist the listed IPs/Hostnames on Port 443. **Do not proceed until all checks are GREEN.**
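If the script cannot be run yet, the egress requirement can be spot-checked by hand. A minimal sketch, assuming the EU cloud endpoint `api.wallarm.com` (US accounts use `us1.api.wallarm.com`) and the official NGINX package host:

```shell
#!/bin/bash
# Manual fallback for the pre-check: probe one HTTPS endpoint per call.
# Hostnames are assumptions taken from the deploy scripts in this repo.
check_egress() {
  local host="$1"
  if curl -s --connect-timeout 5 "https://$host" > /dev/null 2>&1; then
    echo "OK   $host:443"
  else
    echo "FAIL $host:443"
  fi
}

check_egress api.wallarm.com   # Wallarm Cloud API (EU)
check_egress nginx.org         # official NGINX packages
```

Any `FAIL` line here corresponds to a RED result from the script and needs the same Port 443 firewall exception.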
---
## 🛠️ Step 2: Choose Your Deployment Method
Depending on your architecture, choose **one** of the following methods.
### Option A: Virtual Machine (Native NGINX Module)
*Best for: Maximum performance, high-traffic production, or existing NGINX servers.*
1. **Configure:** Open the install script and set your `TOKEN` and `USE_CASE` (in-line or out-of-band).
```bash
nano ~/wallarm/installation-vm/install.sh
```
2. **Run:**
```bash
chmod +x ~/wallarm/installation-vm/install.sh
sudo ~/wallarm/installation-vm/install.sh
```
### Option B: Containerized (Docker / Podman)
*Best for: Rapid PoC, testing, or environments where you don't want to modify the host OS packages.*
1. **Configure:** Open the install script and set your `TOKEN` and `UPSTREAM` IP.
```bash
nano ~/wallarm/installation-docker/install.sh
```
2. **Run:**
```bash
chmod +x ~/wallarm/installation-docker/install.sh
sudo ~/wallarm/installation-docker/install.sh
```
**Scaling the containerized (Docker / Podman) deployment**
You can run multiple instances on one host by giving each a unique `NODE_NAME` and non-conflicting `TRAFFIC_PORT`/`MONITOR_PORT` values. This lets you front different backends from a single machine.
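For example, a hypothetical second instance protecting a different backend could be configured like this before re-running the installer (all values illustrative):

```shell
# Second-instance values for the containerized installer (illustrative).
NODE_NAME="wallarm-02"    # must be unique per instance
TRAFFIC_PORT="8001"       # must not collide with wallarm-01's 8000
MONITOR_PORT="9001"       # must not collide with wallarm-01's 9000
UPSTREAM_IP="10.0.0.15"   # the second application's backend IP
```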
---
## ⚙️ Configuration Variables
Inside each `install.sh` script, you will find a **Configuration** section at the top. You must update these:
|**Variable**|**Description**|**Example**|
|---|---|---|
|`NODE_NAME`|A unique identifier for the instance (useful for containerized deployments).|`wallarm-prod-01`|
|`TRAFFIC_PORT`|The host port where the node listens for incoming application traffic.|`8000`|
|`MONITOR_PORT`|The host port used to expose Wallarm metrics (Prometheus/JSON format).|`9000`|
|`TOKEN`|The unique API Token generated in the Wallarm Console.|`vPHB+Ygn...`|
|`REGION`|The Wallarm Cloud location associated with your account (`EU` or `US`).|`US`|
|`UPSTREAM`|The internal IP or hostname of the application server being protected.|`10.0.0.14`|
|`USE_CASE`|(VM only) Sets the mode: `in-line` (active filtering) or `out-of-band` (monitoring).|`in-line`|
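Because the installers read these variables blindly, a short guard at the top of your edited script can catch the two most common mistakes: an unchanged token and a bad region. This is a sketch, not part of the shipped scripts:

```shell
#!/bin/bash
# Sanity-check the user-supplied variables before doing anything destructive.
TOKEN="YOUR_NODE_TOKEN_HERE"   # replace with the token from the Wallarm Console
REGION="US"

validate_config() {
  if [[ -z "$TOKEN" || "$TOKEN" == "YOUR_NODE_TOKEN_HERE" ]]; then
    echo "ERROR: TOKEN is still the placeholder"; return 1
  fi
  case "$REGION" in
    US|EU) return 0 ;;
    *) echo "ERROR: REGION must be US or EU, got '$REGION'"; return 1 ;;
  esac
}

validate_config || echo "Fix the Configuration section before running the installer."
```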
---
### Deployment Flow Overview
When you deploy the node using these variables, the traffic typically flows as follows:
1. **Incoming Traffic:** Hits the `TRAFFIC_PORT`.
2. **Filtering:** The node uses the `TOKEN` to sync security rules from the Wallarm Cloud (based on your `REGION`).
3. **Forwarding:** Valid traffic is sent to the `UPSTREAM` IP.
4. **Monitoring:** System health and security metrics are pulled via the `MONITOR_PORT`.
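Concretely, with the default ports the two externally visible endpoints are derived like this (a sketch; `localhost` assumes you are checking from the node host itself):

```shell
# Build the two endpoint URLs from the configured ports.
TRAFFIC_PORT=8000
MONITOR_PORT=9000
TRAFFIC_URL="http://localhost:$TRAFFIC_PORT/"
STATUS_URL="http://localhost:$MONITOR_PORT/wallarm-status"
echo "application traffic -> $TRAFFIC_URL"
echo "node metrics        -> $STATUS_URL"
# On a live node, probe them with e.g.:
#   curl -s -o /dev/null -w "%{http_code}\n" "$TRAFFIC_URL"
#   curl -s "$STATUS_URL"
```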
---
## 🧪 Post-Installation Test
Once the script finishes, verify the WAF is inspecting traffic by sending a harmless "attack" (a path-traversal probe):
```bash
# Replace localhost with your server IP if testing remotely
curl -I http://localhost/etc/passwd
```
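In monitoring mode the probe above is forwarded rather than blocked, so the HTTP response alone proves little; the hit shows up in the node metrics instead. A sketch for pulling the request counter out of the status endpoint (the `requests` field name is an assumption based on the health check in the interactive installer):

```shell
# Extract the "requests" counter from wallarm-status JSON on stdin.
parse_requests() {
  grep -o '"requests":[0-9]*' | head -n1 | cut -d: -f2
}

# On a live node (MONITOR_PORT=9000 assumed):
#   curl -s http://localhost:9000/wallarm-status | parse_requests
# Re-send the test "attack" and the counter should increase.
echo '{"requests":42,"attacks":1}' | parse_requests   # prints 42
```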
## 📊 Sizing & Performance Specs
For a standard deployment (Postanalytics + NGINX on one host), we recommend the following:
### Resource Allocation & Performance
| **Resource** | **Primary Consumer** | **Performance Note** |
| ------------------- | ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| **CPU** (2 Cores) | **NGINX & Postanalytics** | One core typically handles the NGINX filtering, while the second manages `wstore` (local data analytics). This setup safely handles ~1,000 RPS. |
| **RAM** (4 GB)      | **`wstore` (Tarantool)**  | Wallarm uses an in-memory circular buffer for request analysis. At 4 GB, you can store roughly 15–20 minutes of request metadata for analysis.   |
| **Storage** (50 GB) | **Log Buffering** | Required for local attack logs and the postanalytics module. SSD is mandatory to prevent I/O wait times from slowing down request processing. |
---
### Scaling Indicators
If you notice your traffic increasing beyond these specs, here is when you should consider upgrading:
- **High CPU Usage (>70%):** Usually indicates you have reached the RPS limit for your core count or are processing highly complex, nested payloads (like large JSON or Base64-encoded data).
- **Memory Pressure:** If you see `wstore` dropping data before the 15-minute mark, it means your traffic volume (data per minute) is exceeding the 4GB buffer.
- **Disk Latency:** If you see "I/O Wait" in your system monitoring, it will directly cause latency for your end users, as NGINX cannot clear its buffers fast enough.
> [!TIP]
> **Performance Note:** One CPU core typically handles **500 RPS**. In a 2-core setup, one core is dedicated to NGINX (filtering) and one core to `wstore` (local analytics). For production spikes, we recommend 2x headroom.
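The disk-latency indicator can be watched with plain `vmstat` (from procps, assumed installed); the `wa` column is CPU time stalled on I/O:

```shell
# Print the I/O-wait percentage from the last sample of `vmstat` output.
# Column 16 ("wa") holds I/O wait in the standard vmstat layout.
iowait_pct() {
  awk 'NR > 2 { wa = $16 } END { print wa }'
}

# On a live host, sample over three seconds:
#   vmstat 1 3 | iowait_pct
```

Sustained values above roughly 10% are the "I/O Wait" symptom described above and usually mean the 50 GB volume is not actually on SSD.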


@@ -0,0 +1,120 @@
#!/bin/bash
# ==============================================================================
# Wallarm PoC: Multi-Instance Safe Deployer (Podman/Docker)
# ==============================================================================
# --- Instance Configuration ---
NODE_NAME="wallarm-01"
TRAFFIC_PORT="8000"
MONITOR_PORT="9000"
# --- UPSTREAM SETTINGS ---
UPSTREAM_IP="10.0.0.14" # Internal Application IP
UPSTREAM_PORT="6042" # Internal Application Port
# --- CLOUD SETTINGS ---
TOKEN="YOUR_NODE_TOKEN_HERE"
REGION="EU" # US or EU
# --- Colors ---
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m'
echo -e "${YELLOW}🔍 PHASE 0: Pre-Flight Connectivity Checks...${NC}"
# 1. Root Check
[[ $EUID -ne 0 ]] && { echo -e "${RED}❌ ERROR: Run as root.${NC}"; exit 1; }
# 2. Specific Upstream Port Check
echo -n "Verifying connectivity to $UPSTREAM_IP on port $UPSTREAM_PORT... "
if ! timeout 3 bash -c "cat < /dev/null > /dev/tcp/$UPSTREAM_IP/$UPSTREAM_PORT" 2>/dev/null; then
echo -e "${RED}FAILED${NC}"
echo -e "${RED}❌ ERROR: The VM cannot reach the application on port $UPSTREAM_PORT.${NC}"
echo -e "${YELLOW}Action: Ask the bank's Network Team to open egress to $UPSTREAM_IP:$UPSTREAM_PORT.${NC}"
exit 1
else
echo -e "${GREEN}OK${NC}"
fi
# 3. Wallarm Cloud Check
API_HOST=$( [[ "$REGION" == "US" ]] && echo "us1.api.wallarm.com" || echo "api.wallarm.com" )
if ! curl -s --connect-timeout 5 "https://$API_HOST" > /dev/null; then
echo -e "${RED}❌ ERROR: Cannot reach Wallarm Cloud ($API_HOST). Check Proxy settings.${NC}"
exit 1
fi
# --- PHASE 1: Engine Setup ---
if [ -f /etc/redhat-release ]; then
ENGINE="podman"
dnf install -y epel-release
dnf install -y podman podman-docker podman-compose wget curl
systemctl enable --now podman.socket
# Open OS Firewalld for incoming traffic
firewall-cmd --permanent --add-port=$TRAFFIC_PORT/tcp
firewall-cmd --permanent --add-port=$MONITOR_PORT/tcp
firewall-cmd --reload
elif [ -f /etc/debian_version ]; then
ENGINE="docker"
apt update && apt install -y docker.io docker-compose wget curl
systemctl enable --now docker
else
echo -e "${RED}❌ Unsupported OS${NC}"; exit 1
fi
COMPOSE_CMD=$([[ "$ENGINE" == "podman" ]] && echo "podman-compose" || echo "docker-compose")
# --- PHASE 2: Instance Workspace ---
INSTANCE_DIR="/opt/wallarm/$NODE_NAME"
mkdir -p "$INSTANCE_DIR"
# Generate Nginx Config using the specific Upstream Port
cat <<EOF > "$INSTANCE_DIR/nginx.conf"
server {
listen 80;
wallarm_mode monitoring;
location / {
proxy_pass http://$UPSTREAM_IP:$UPSTREAM_PORT;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
}
server {
listen 90;
location /wallarm-status {
wallarm_status on;
allow all;
}
}
EOF
# Compose File with SELinux ( :Z ) flag
cat <<EOF > "$INSTANCE_DIR/conf.yml"
version: '3.8'
services:
$NODE_NAME:
image: docker.io/wallarm/node:4.10-latest
container_name: $NODE_NAME
restart: always
ports:
- "$TRAFFIC_PORT:80"
- "$MONITOR_PORT:90"
environment:
- WALLARM_API_TOKEN=$TOKEN
- WALLARM_API_HOST=$API_HOST
volumes:
- ./nginx.conf:/etc/nginx/http.d/default.conf:ro,Z
EOF
# --- PHASE 3: Launch ---
echo -e "${YELLOW}🚀 Launching Wallarm Instance...${NC}"
cd "$INSTANCE_DIR" || exit 1
$COMPOSE_CMD -f conf.yml up -d
echo -e "\n${GREEN}✅ DEPLOYMENT COMPLETE${NC}"
echo -e "External Port: $TRAFFIC_PORT -> Internal: $UPSTREAM_IP:$UPSTREAM_PORT"
echo -e "View real-time logs: $ENGINE logs -f $NODE_NAME"


@@ -0,0 +1,129 @@
#!/bin/bash
# ==============================================================================
# Wallarm PoC: Interactive "KISS" Deployer (Keystone Bank Edition)
# ==============================================================================
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m'
# Must run as root: installs packages and writes under /opt/wallarm
[[ $EUID -ne 0 ]] && { echo -e "${RED}❌ ERROR: Run as root.${NC}"; exit 1; }
clear
echo -e "${YELLOW}====================================================${NC}"
echo -e "${YELLOW} Wallarm Guided Instance Deployer (US Cloud) ${NC}"
echo -e "${YELLOW}====================================================${NC}\n"
# --- 1. THE ID ---
echo -e "Existing Instances in /opt/wallarm/:"
ls /opt/wallarm/ 2>/dev/null || echo "None"
echo ""
read -p "Enter Instance ID number (e.g., 1, 2, 3): " INSTANCE_ID
[[ "$INSTANCE_ID" =~ ^[0-9]+$ ]] || { echo -e "${RED}❌ ERROR: Instance ID must be a number.${NC}"; exit 1; }
NODE_NAME=$(printf "wallarm-%02d" "$INSTANCE_ID")
TRAFFIC_PORT=$((8000 + INSTANCE_ID))
MONITOR_PORT=$((9000 + INSTANCE_ID))
# --- 2. CONFIGURATION ---
read -p "Enter Upstream IP (App Server): " UPSTREAM_IP
read -p "Enter Upstream Port [default 80]: " UPSTREAM_PORT
UPSTREAM_PORT=${UPSTREAM_PORT:-80}
# Hardcoded US Endpoints
API_HOST="us1.api.wallarm.com"
DATA_NODE="node-data0.us1.wallarm.com"
read -p "Paste Wallarm Token (US Cloud): " TOKEN
# --- 3. PRE-FLIGHT VALIDATION ---
echo -e "\n${YELLOW}🔍 Starting Pre-Flight Connectivity Checks...${NC}"
# A. Internal Check
echo -n "Checking App Server ($UPSTREAM_IP:$UPSTREAM_PORT)... "
if ! timeout 2 bash -c "cat < /dev/null > /dev/tcp/$UPSTREAM_IP/$UPSTREAM_PORT" 2>/dev/null; then
echo -e "${RED}FAILED${NC}"; exit 1
else
echo -e "${GREEN}OK${NC}"
fi
# B. Wallarm API Check
echo -n "Checking Wallarm API ($API_HOST)... "
if ! curl -s --connect-timeout 5 "https://$API_HOST" > /dev/null; then
echo -e "${RED}FAILED${NC}"; exit 1
else
echo -e "${GREEN}OK${NC}"
fi
# C. Wallarm Data Node Check (Critical for events)
echo -n "Checking Wallarm Data Node ($DATA_NODE)... "
if ! timeout 3 bash -c "cat < /dev/null > /dev/tcp/$DATA_NODE/443" 2>/dev/null; then
echo -e "${RED}FAILED${NC}"
echo -e "${RED}❌ ERROR: Data transmission to Wallarm is blocked.${NC}"
echo -e "${YELLOW}Action: Whitelist IPs 34.96.64.17 and 34.110.183.149 on Port 443.${NC}"; exit 1
else
echo -e "${GREEN}OK${NC}"
fi
# --- 4. ENGINE SETUP ---
if [ -f /etc/redhat-release ]; then
ENGINE="podman"
dnf install -y epel-release podman podman-docker podman-compose wget curl &>/dev/null
systemctl enable --now podman.socket &>/dev/null
firewall-cmd --permanent --add-port=$TRAFFIC_PORT/tcp --add-port=$MONITOR_PORT/tcp &>/dev/null
firewall-cmd --reload &>/dev/null
elif [ -f /etc/debian_version ]; then
    ENGINE="docker"
    apt update && apt install -y docker.io docker-compose wget curl &>/dev/null
    systemctl enable --now docker &>/dev/null
else
    echo -e "${RED}❌ Unsupported OS${NC}"; exit 1
fi
COMPOSE_CMD=$([[ "$ENGINE" == "podman" ]] && echo "podman-compose" || echo "docker-compose")
# --- 5. WORKSPACE & CONFIG ---
INSTANCE_DIR="/opt/wallarm/$NODE_NAME"
mkdir -p "$INSTANCE_DIR"
cat <<EOF > "$INSTANCE_DIR/nginx.conf"
server {
listen 80;
wallarm_mode monitoring;
location / {
proxy_pass http://$UPSTREAM_IP:$UPSTREAM_PORT;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
}
}
server { listen 90; location /wallarm-status { wallarm_status on; allow all; } }
EOF
cat <<EOF > "$INSTANCE_DIR/conf.yml"
version: '3.8'
services:
$NODE_NAME:
image: docker.io/wallarm/node:4.10-latest
container_name: $NODE_NAME
restart: always
ports: ["$TRAFFIC_PORT:80", "$MONITOR_PORT:90"]
environment:
- WALLARM_API_TOKEN=$TOKEN
- WALLARM_API_HOST=$API_HOST
volumes: ["./nginx.conf:/etc/nginx/http.d/default.conf:ro,Z"]
EOF
# --- 6. LAUNCH ---
echo -e "${YELLOW}🚀 Launching $NODE_NAME...${NC}"
cd "$INSTANCE_DIR" || exit 1
$COMPOSE_CMD -f conf.yml up -d
# --- 7. VERIFICATION ---
echo -e "\n${YELLOW}⏳ Waiting for handshake...${NC}"
sleep 5
if curl -s "http://localhost:$MONITOR_PORT/wallarm-status" | grep -q "requests"; then
echo -e "${GREEN}✅ SUCCESS: $NODE_NAME IS LIVE AND INSPECTING TRAFFIC${NC}"
else
echo -e "${RED}⚠️ WARNING: Handshake slow. Check: $ENGINE logs $NODE_NAME${NC}"
fi
echo -e "--------------------------------------------------"
echo -e "Traffic URL: http://<VM_IP>:$TRAFFIC_PORT"
echo -e "--------------------------------------------------"



@@ -0,0 +1,63 @@
#!/bin/bash
# ==============================================================================
# Wallarm Pre-Flight Check
# Purpose: Validate Environment before Container Deployment
# ==============================================================================
UPSTREAM_IP="10.0.0.14"
UPSTREAM_PORT="80"
WALLARM_API="api.wallarm.com"
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m'
echo -e "${YELLOW}🔍 Starting Pre-Flight Checks...${NC}\n"
# 1. Check Root
[[ $EUID -ne 0 ]] && echo -e "${RED}❌ Fail: Must run as root${NC}" || echo -e "${GREEN}✅ Pass: Root privileges${NC}"
# 2. Check OS (CentOS/RHEL focus)
if [ -f /etc/redhat-release ]; then
echo -e "${GREEN}✅ Pass: CentOS/RHEL detected ($(cat /etc/redhat-release))${NC}"
else
echo -e "${YELLOW}⚠️ Warn: Not a RedHat-based system. Script 1 may need tweaks.${NC}"
fi
# 3. Check SELinux Status (getenforce is absent on most Debian/Ubuntu systems)
SE_STATUS=$(getenforce 2>/dev/null || echo "Not installed")
if [ "$SE_STATUS" == "Enforcing" ]; then
echo -e "${YELLOW}⚠️ Note: SELinux is Enforcing. Ensure volume mounts use the :Z flag.${NC}"
else
echo -e "${GREEN}✅ Pass: SELinux is $SE_STATUS${NC}"
fi
# 4. Check Upstream Connectivity (The most important check)
echo -n "Checking connectivity to Upstream ($UPSTREAM_IP:$UPSTREAM_PORT)... "
if nc -zv -w5 "$UPSTREAM_IP" "$UPSTREAM_PORT" &>/dev/null; then
    echo -e "${GREEN}✅ Connected${NC}"
else
    echo -e "${RED}❌ FAILED: Cannot reach Upstream app. Check Routing/Firewalls.${NC}"
fi
# 5. Check Wallarm Cloud Connectivity
echo -n "Checking connectivity to Wallarm API ($WALLARM_API)... "
curl -s --connect-timeout 5 "https://$WALLARM_API" &>/dev/null
RC=$?  # capture immediately; a later [ ] test would overwrite $?
if [ $RC -eq 0 ]; then  # exit 0 means the TLS connection on port 443 succeeded
    echo -e "${GREEN}✅ Connected${NC}"
else
    echo -e "${RED}❌ FAILED: Cannot reach Wallarm Cloud. Check Proxy/Egress.${NC}"
fi
# 6. Check Port Availability
for PORT in 8000 9000; do
if lsof -Pi :$PORT -sTCP:LISTEN -t >/dev/null ; then
echo -e "${RED}❌ FAILED: Port $PORT is already in use.${NC}"
else
echo -e "${GREEN}✅ Pass: Port $PORT is free${NC}"
fi
done
echo -e "\n${YELLOW}Pre-flight complete. If all are GREEN, proceed to deployment.${NC}"

network-pre-check.sh Normal file

@@ -0,0 +1,45 @@
#!/bin/bash
# 1. Define Backend
APP_SERVER="10.0.0.14:80"
echo "🛠️ Configuring Wallarm Inline Proxy..."
# 2. Write the configuration
sudo bash -c "cat << 'EOF' > /etc/nginx/sites-available/default
server {
listen 80 default_server;
server_name _;
wallarm_mode monitoring;
location / {
proxy_pass http://$APP_SERVER;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
location /wallarm-status {
wallarm_status on;
wallarm_mode off;
allow 127.0.0.1;
deny all;
}
}
EOF"
# 3. Ensure the site is enabled (Ubuntu requirement)
sudo ln -sf /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default
# 4. Test and Reload
echo "🔍 Testing Nginx..."
if sudo nginx -t; then
sudo systemctl restart nginx
echo "✅ SUCCESS: Proxying to $APP_SERVER"
curl -X GET "http://localhost" -H "accept: application/json"
curl -I "http://localhost/etc/passwd"
else
echo "❌ ERROR: Nginx config invalid."
exit 1
fi

vm-deployment/install.sh Normal file

@@ -0,0 +1,136 @@
#!/bin/bash
# ==============================================================================
# Wallarm Native Deployer: NGINX Dynamic Module (Official Repo)
# Supports: RHEL/Alma/Rocky (9.x) & Ubuntu/Debian
# ==============================================================================
# --- User Configuration ---
USE_CASE="in-line" # Options: "in-line" or "out-of-band"
TOKEN="YOUR_NODE_TOKEN_HERE" # Node token from the Wallarm Console (never commit a real token)
REGION="EU" # US or EU
UPSTREAM="10.0.0.14"
# --- Colors ---
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m'
# --- ROOT CHECK ---
if [[ $EUID -ne 0 ]]; then
echo -e "${RED}❌ ERROR: Run as root.${NC}"; exit 1
fi
# --- PHASE 0: Official NGINX Repo Setup ---
echo -e "${YELLOW}🛠️ Step 0: Setting up Official NGINX Repository...${NC}"
if [ -f /etc/redhat-release ]; then
yum install -y yum-utils
cat <<EOF > /etc/yum.repos.d/nginx.repo
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/\$releasever/\$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF
yum install -y nginx
elif [ -f /etc/debian_version ]; then
apt update && apt install -y curl gnupg2 ca-certificates lsb-release ubuntu-keyring
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor | tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
CODENAME=$(lsb_release -cs)
DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/mainline/$DISTRO/ $CODENAME nginx" | tee /etc/apt/sources.list.d/nginx.list
apt update && apt install -y nginx
else
echo -e "${RED}❌ Unsupported OS${NC}"; exit 1
fi
systemctl enable --now nginx
# --- PHASE 1: Wallarm All-In-One Installer ---
echo -e "${YELLOW}📦 Step 1: Running Wallarm All-in-One Installer...${NC}"
API_HOST=$( [[ "$REGION" == "US" ]] && echo "us1.api.wallarm.com" || echo "api.wallarm.com" )
# Download the latest installer (4.10 branch)
curl -fO https://meganode.wallarm.com/native/all-in-one/wallarm-4.10.10.x86_64-linux.sh || { echo -e "${RED}❌ ERROR: Installer download failed.${NC}"; exit 1; }
chmod +x wallarm-4.10.10.x86_64-linux.sh
./wallarm-4.10.10.x86_64-linux.sh \
--no-interactive \
--token "$TOKEN" \
--host "$API_HOST" \
--nginx-bundle
# --- PHASE 2: Logic-Based Configuration ---
echo -e "${YELLOW}⚙️ Step 2: Building NGINX Config for $USE_CASE Mode...${NC}"
# Ensure module is loaded
if ! grep -q "load_module" /etc/nginx/nginx.conf; then
sed -i '1i load_module modules/ngx_http_wallarm_module.so;' /etc/nginx/nginx.conf
fi
if [[ "$USE_CASE" == "in-line" ]]; then
# Standard Reverse Proxy with Blocking capability
cat <<EOF > /etc/nginx/conf.d/wallarm-proxy.conf
server {
listen 80;
server_name _;
wallarm_mode monitoring; # Change to 'block' after testing
location / {
proxy_pass http://$UPSTREAM;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
}
}
EOF
elif [[ "$USE_CASE" == "out-of-band" ]]; then
# OOB (Passive) Mode using Nginx Mirror
cat <<EOF > /etc/nginx/conf.d/wallarm-proxy.conf
server {
listen 80;
server_name _;
location / {
# Mirror traffic to a background internal location for Wallarm
mirror /mirror;
proxy_pass http://$UPSTREAM;
}
location = /mirror {
internal;
# Wallarm processes mirrored traffic here
wallarm_mode monitoring;
wallarm_upstream_connect_timeout 2s;
proxy_pass http://127.0.0.1:1; # Dummy upstream
}
}
EOF
fi
# Add Wallarm Monitoring status location (standard for both)
cat <<EOF > /etc/nginx/conf.d/wallarm-status.conf
server {
listen 90;
server_name localhost;
location /wallarm-status {
wallarm_status on;
wallarm_mode off;
allow 127.0.0.1;
deny all;
}
}
EOF
# --- PHASE 3: Validation ---
echo -e "${YELLOW}🚀 Step 3: Validating and Restarting...${NC}"
nginx -t && systemctl restart nginx
echo -e "\n${GREEN}✅ DEPLOYMENT SUCCESSFUL ($USE_CASE)${NC}"
echo -e "--------------------------------------------------"
echo -e "NGINX Version: $(nginx -v 2>&1)"
echo -e "Wallarm Status: curl http://localhost:90/wallarm-status"
echo -e "--------------------------------------------------"

wallarm/notes.md Normal file

@@ -0,0 +1,4 @@
app1:
- url: shorty.sechpoint.app
- ip: 10.0.0.14:80
-