This commit is contained in:
SechPoint 2026-03-13 08:48:33 +00:00
parent 78c392b8de
commit 88d51d1e4f
6 changed files with 95 additions and 503 deletions

README.md

@ -6,117 +6,14 @@ This repository contains automated scripts to deploy the Wallarm Filtering Node
---
## 🚦 Step 1: Mandatory Pre-Flight Diagnostic
Before attempting any installation, you **must** verify the environment. Banks often enforce strict egress filtering and block access to Wallarm's Cloud and the official NGINX repositories by default. The diagnostic script verifies sudo access, required tools, and connectivity to the Wallarm Cloud endpoints.
1. **Navigate to the script directory:**
```bash
cd ~/wallarm/
```
2. **Run the audit:**
```bash
chmod +x network-pre-check.sh
sudo ./network-pre-check.sh
```
3. **Result:** If you see any **RED** "FAIL" messages, contact the network team to whitelist the listed IPs/Hostnames on Port 443. **Do not proceed until all checks are GREEN.**
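If a check fails, the network team will need the exact endpoints. As a convenience, the hostnames below mirror the ones the bundled pre-flight script tests; confirm the authoritative list for your region in the Wallarm Console:

```bash
# Hostnames to whitelist on port 443 (mirrors the pre-flight script's lists;
# verify against your Wallarm Console region before filing the firewall request)
EU_HOSTS=("api.wallarm.com" "node-data0.eu1.wallarm.com" "node-data1.eu1.wallarm.com")
US_HOSTS=("us1.api.wallarm.com" "node-data0.us1.wallarm.com" "node-data1.us1.wallarm.com")
printf '%s\n' "${EU_HOSTS[@]}" "${US_HOSTS[@]}"
```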
---
## 🛠️ Step 2: Choose Your Deployment Method
Depending on your architecture, choose **one** of the following methods.
### Option A: Virtual Machine (Native NGINX Module)
*Best for: Maximum performance, high-traffic production, or existing NGINX servers.*
1. **Configure:** Open the install script and set your `TOKEN` and `USE_CASE` (in-line or out-of-band).
```bash
nano ~/wallarm/installation-vm/install.sh
```
2. **Run:**
```bash
chmod +x ~/wallarm/installation-vm/install.sh
sudo ~/wallarm/installation-vm/install.sh
```
### Option B: Containerized (Docker / Podman)
*Best for: Rapid PoC, testing, or environments where you don't want to modify the host OS packages.*
1. **Configure:** Open the install script and set your `TOKEN` and `UPSTREAM` IP.
```bash
nano ~/wallarm/installation-docker/install.sh
```
2. **Run:**
```bash
chmod +x ~/wallarm/installation-docker/install.sh
sudo ~/wallarm/installation-docker/install.sh
```
**Scaling the containerized (Docker / Podman) deployment**
You can run multiple instances on a single host by giving each one a unique `NODE_NAME` and distinct `TRAFFIC_PORT`/`MONITOR_PORT` values. This lets one host serve several different backends.
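A minimal sketch of the per-instance derivation, mirroring how the guided deployer computes these values from an instance ID:

```bash
# Derive a unique name and port pair per instance (instance ID 2 shown as an example)
INSTANCE_ID=2
NODE_NAME=$(printf "wallarm-%02d" "$INSTANCE_ID")
TRAFFIC_PORT=$((8000 + INSTANCE_ID))
MONITOR_PORT=$((9000 + INSTANCE_ID))
echo "$NODE_NAME $TRAFFIC_PORT $MONITOR_PORT"   # -> wallarm-02 8002 9002
```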
---
## ⚙️ Configuration Variables
Inside each `install.sh` script, you will find a **Configuration** section at the top. You must update these:
|**Variable**|**Description**|**Example**|
|---|---|---|
|`NODE_NAME`|A unique identifier for the instance (useful for containerized deployments).|`wallarm-prod-01`|
|`TRAFFIC_PORT`|The host port where the node listens for incoming application traffic.|`8000`|
|`MONITOR_PORT`|The host port used to expose Wallarm metrics (Prometheus/JSON format).|`9000`|
|`TOKEN`|The unique API Token generated in the Wallarm Console.|`vPHB+Ygn...`|
|`REGION`|The Wallarm Cloud location associated with your account (`EU` or `US`).|`US`|
|`UPSTREAM`|The internal IP or hostname of the application server being protected.|`10.0.0.14`|
|`USE_CASE`|(VM only) Sets the mode: `in-line` (active filtering) or `out-of-band` (monitoring).|`in-line`|
---
### Deployment Flow Overview
When you deploy the node using these variables, the traffic typically flows as follows:
1. **Incoming Traffic:** Hits the `TRAFFIC_PORT`.
2. **Filtering:** The node uses the `TOKEN` to sync security rules from the Wallarm Cloud (based on your `REGION`).
3. **Forwarding:** Valid traffic is sent to the `UPSTREAM` IP.
4. **Monitoring:** System health and security metrics are pulled via the `MONITOR_PORT`.
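The `REGION` variable determines which cloud endpoint the node syncs against. A sketch of the mapping as the install scripts perform it:

```bash
# REGION -> API endpoint resolution (same pattern used inside the install scripts)
REGION="US"   # or "EU"
API_HOST=$( [[ "$REGION" == "US" ]] && echo "us1.api.wallarm.com" || echo "api.wallarm.com" )
echo "$API_HOST"   # -> us1.api.wallarm.com
```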
---
## 🧪 Post-Installation Test
Once the script finishes, verify the WAF is working by sending a "fake" attack:
```bash
# Replace localhost with your server IP if testing remotely
curl -I http://localhost/etc/passwd
```
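How to interpret the result depends on the mode: in `in-line` (blocking) mode the node typically returns **403 Forbidden** for the test attack, while in `monitoring` mode the request passes through and the attack only appears in the Wallarm Console events list. A small sketch of that interpretation (the 403 convention is an assumption based on default blocking behavior):

```bash
# Interpret the HTTP status of the test attack (assumes blocking mode returns 403)
check_waf() {
    local code=$1
    if [ "$code" -eq 403 ]; then echo "BLOCKED"; else echo "ALLOWED"; fi
}
# Typical usage against a node listening on TRAFFIC_PORT 8000:
# code=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:8000/etc/passwd")
check_waf 403   # -> BLOCKED
```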
## 📊 Sizing & Performance Specs
For a standard deployment (Postanalytics + NGINX on one host), we recommend the following:
### Resource Allocation & Performance
| **Resource** | **Primary Consumer** | **Performance Note** |
| ------------------- | ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| **CPU** (2 Cores) | **NGINX & Postanalytics** | One core typically handles the NGINX filtering, while the second manages `wstore` (local data analytics). This setup safely handles ~1,000 RPS. |
| **RAM** (4 GB) | **`wstore` (Tarantool)** | Wallarm uses an in-memory circular buffer for request analysis. At 4 GB, you can store roughly 15–20 minutes of request metadata for analysis. |
| **Storage** (50 GB) | **Log Buffering** | Required for local attack logs and the postanalytics module. SSD is mandatory to prevent I/O wait times from slowing down request processing. |
---
### Scaling Indicators
If you notice your traffic increasing beyond these specs, here is when you should consider upgrading:
- **High CPU Usage (>70%):** Usually indicates you have reached the RPS limit for your core count or are processing highly complex, nested payloads (like large JSON or Base64-encoded data).
- **Memory Pressure:** If you see `wstore` dropping data before the 15-minute mark, it means your traffic volume (data per minute) is exceeding the 4GB buffer.
- **Disk Latency:** If you see "I/O Wait" in your system monitoring, it will directly cause latency for your end users, as NGINX cannot clear its buffers fast enough.
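The memory-pressure rule above can be sanity-checked with back-of-envelope arithmetic. The 256 MB/min ingest rate below is an illustrative assumption, not official sizing guidance:

```bash
# Estimate how many minutes of request metadata fit in the wstore buffer.
# Inputs: buffer size in MB, metadata ingest rate in MB per minute (assumed).
estimate_buffer_minutes() {
    local buffer_mb=$1 mb_per_min=$2
    echo $(( buffer_mb / mb_per_min ))
}
estimate_buffer_minutes 4096 256   # 4 GB buffer at ~256 MB/min -> 16 minutes
```

If the result drops well below the 15-minute analysis window, either scale RAM or reduce the volume of traffic mirrored to the node.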
> [!TIP]
> **Performance Note:** One CPU core typically handles **500 RPS**. In a 2-core setup, one core is dedicated to NGINX (filtering) and one core to `wstore` (local analytics). For production spikes, we recommend 2x headroom.
```bash
# Download and run the pre-flight test
curl -sL "https://git.sechpoint.app/customer-engineering/wallarm/-/raw/main/pre-deployment-test.sh" -o pre-deployment-test.sh
chmod +x pre-deployment-test.sh
./pre-deployment-test.sh
```


@ -1,129 +0,0 @@
#!/bin/bash
# ==============================================================================
# Wallarm PoC: Interactive "KISS" Deployer (Keystone Bank Edition)
# ==============================================================================
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m'
clear
echo -e "${YELLOW}====================================================${NC}"
echo -e "${YELLOW} Wallarm Guided Instance Deployer (US Cloud) ${NC}"
echo -e "${YELLOW}====================================================${NC}\n"
# --- 1. THE ID ---
echo -e "Existing Instances in /opt/wallarm/:"
ls /opt/wallarm/ 2>/dev/null || echo "None"
echo ""
read -p "Enter Instance ID number (e.g., 1, 2, 3): " INSTANCE_ID
[[ "$INSTANCE_ID" =~ ^[0-9]+$ ]] || { echo -e "${RED}❌ Instance ID must be a number.${NC}"; exit 1; }
NODE_NAME=$(printf "wallarm-%02d" "$INSTANCE_ID")
TRAFFIC_PORT=$((8000 + INSTANCE_ID))
MONITOR_PORT=$((9000 + INSTANCE_ID))
# --- 2. CONFIGURATION ---
read -p "Enter Upstream IP (App Server): " UPSTREAM_IP
read -p "Enter Upstream Port [default 80]: " UPSTREAM_PORT
UPSTREAM_PORT=${UPSTREAM_PORT:-80}
# Hardcoded US Endpoints
API_HOST="us1.api.wallarm.com"
DATA_NODE="node-data0.us1.wallarm.com"
read -s -p "Paste Wallarm Token (US Cloud): " TOKEN; echo
# --- 3. PRE-FLIGHT VALIDATION ---
echo -e "\n${YELLOW}🔍 Starting Pre-Flight Connectivity Checks...${NC}"
# A. Internal Check
echo -n "Checking App Server ($UPSTREAM_IP:$UPSTREAM_PORT)... "
if ! timeout 2 bash -c "cat < /dev/null > /dev/tcp/$UPSTREAM_IP/$UPSTREAM_PORT" 2>/dev/null; then
echo -e "${RED}FAILED${NC}"; exit 1
else
echo -e "${GREEN}OK${NC}"
fi
# B. Wallarm API Check
echo -n "Checking Wallarm API ($API_HOST)... "
if ! curl -s --connect-timeout 5 "https://$API_HOST" > /dev/null; then
echo -e "${RED}FAILED${NC}"; exit 1
else
echo -e "${GREEN}OK${NC}"
fi
# C. Wallarm Data Node Check (Critical for events)
echo -n "Checking Wallarm Data Node ($DATA_NODE)... "
if ! timeout 3 bash -c "cat < /dev/null > /dev/tcp/$DATA_NODE/443" 2>/dev/null; then
echo -e "${RED}FAILED${NC}"
echo -e "${RED}❌ ERROR: Data transmission to Wallarm is blocked.${NC}"
echo -e "${YELLOW}Action: Whitelist IPs 34.96.64.17 and 34.110.183.149 on Port 443.${NC}"; exit 1
else
echo -e "${GREEN}OK${NC}"
fi
# --- 4. ENGINE SETUP ---
if [ -f /etc/redhat-release ]; then
ENGINE="podman"
dnf install -y epel-release podman podman-docker podman-compose wget curl &>/dev/null
systemctl enable --now podman.socket &>/dev/null
firewall-cmd --permanent --add-port=$TRAFFIC_PORT/tcp --add-port=$MONITOR_PORT/tcp &>/dev/null
firewall-cmd --reload &>/dev/null
else
ENGINE="docker"
apt update && apt install -y docker.io docker-compose wget curl &>/dev/null
systemctl enable --now docker &>/dev/null
fi
COMPOSE_CMD=$([[ "$ENGINE" == "podman" ]] && echo "podman-compose" || echo "docker-compose")
# --- 5. WORKSPACE & CONFIG ---
INSTANCE_DIR="/opt/wallarm/$NODE_NAME"
mkdir -p "$INSTANCE_DIR"
cat <<EOF > "$INSTANCE_DIR/nginx.conf"
server {
listen 80;
wallarm_mode monitoring;
location / {
proxy_pass http://$UPSTREAM_IP:$UPSTREAM_PORT;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
}
}
server { listen 90; location /wallarm-status { wallarm_status on; allow all; } }
EOF
cat <<EOF > "$INSTANCE_DIR/conf.yml"
version: '3.8'
services:
$NODE_NAME:
image: docker.io/wallarm/node:4.10-latest
container_name: $NODE_NAME
restart: always
ports: ["$TRAFFIC_PORT:80", "$MONITOR_PORT:90"]
environment:
- WALLARM_API_TOKEN=$TOKEN
- WALLARM_API_HOST=$API_HOST
volumes: ["./nginx.conf:/etc/nginx/http.d/default.conf:ro,Z"]
EOF
# --- 6. LAUNCH ---
echo -e "${YELLOW}🚀 Launching $NODE_NAME...${NC}"
cd "$INSTANCE_DIR"
$COMPOSE_CMD -f conf.yml up -d
# --- 7. VERIFICATION ---
echo -e "\n${YELLOW}⏳ Waiting for handshake...${NC}"
sleep 5
if curl -s "http://localhost:$MONITOR_PORT/wallarm-status" | grep -q "requests"; then
echo -e "${GREEN}✅ SUCCESS: $NODE_NAME IS LIVE AND INSPECTING TRAFFIC${NC}"
else
echo -e "${RED}⚠️ WARNING: Handshake slow. Check: $ENGINE logs $NODE_NAME${NC}"
fi
echo -e "--------------------------------------------------"
echo -e "Traffic URL: http://<VM_IP>:$TRAFFIC_PORT"
echo -e "--------------------------------------------------"


@ -1,120 +0,0 @@
#!/bin/bash
# ==============================================================================
# Wallarm PoC: Multi-Instance Safe Deployer (Podman/Docker)
# ==============================================================================
# --- Instance Configuration ---
NODE_NAME="wallarm-01"
TRAFFIC_PORT="8000"
MONITOR_PORT="9000"
# --- UPSTREAM SETTINGS ---
UPSTREAM_IP="10.0.0.14" # Internal Application IP
UPSTREAM_PORT="6042" # Internal Application Port
# --- CLOUD SETTINGS ---
TOKEN="YOUR_NODE_TOKEN_HERE"
REGION="EU" # US or EU
# --- Colors ---
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m'
echo -e "${YELLOW}🔍 PHASE 0: Pre-Flight Connectivity Checks...${NC}"
# 1. Root Check
[[ $EUID -ne 0 ]] && { echo -e "${RED}❌ ERROR: Run as root.${NC}"; exit 1; }
# 2. Specific Upstream Port Check
echo -n "Verifying connectivity to $UPSTREAM_IP on port $UPSTREAM_PORT... "
if ! timeout 3 bash -c "cat < /dev/null > /dev/tcp/$UPSTREAM_IP/$UPSTREAM_PORT" 2>/dev/null; then
echo -e "${RED}FAILED${NC}"
echo -e "${RED}❌ ERROR: The VM cannot reach the application on port $UPSTREAM_PORT.${NC}"
echo -e "${YELLOW}Action: Ask the bank's Network Team to open egress to $UPSTREAM_IP:$UPSTREAM_PORT.${NC}"
exit 1
else
echo -e "${GREEN}OK${NC}"
fi
# 3. Wallarm Cloud Check
API_HOST=$( [[ "$REGION" == "US" ]] && echo "us1.api.wallarm.com" || echo "api.wallarm.com" )
if ! curl -s --connect-timeout 5 "https://$API_HOST" > /dev/null; then
echo -e "${RED}❌ ERROR: Cannot reach Wallarm Cloud ($API_HOST). Check Proxy settings.${NC}"
exit 1
fi
# --- PHASE 1: Engine Setup ---
if [ -f /etc/redhat-release ]; then
ENGINE="podman"
dnf install -y epel-release
dnf install -y podman podman-docker podman-compose wget curl
systemctl enable --now podman.socket
# Open OS Firewalld for incoming traffic
firewall-cmd --permanent --add-port=$TRAFFIC_PORT/tcp
firewall-cmd --permanent --add-port=$MONITOR_PORT/tcp
firewall-cmd --reload
elif [ -f /etc/debian_version ]; then
ENGINE="docker"
apt update && apt install -y docker.io docker-compose wget curl
systemctl enable --now docker
else
echo -e "${RED}❌ Unsupported OS${NC}"; exit 1
fi
COMPOSE_CMD=$([[ "$ENGINE" == "podman" ]] && echo "podman-compose" || echo "docker-compose")
# --- PHASE 2: Instance Workspace ---
INSTANCE_DIR="/opt/wallarm/$NODE_NAME"
mkdir -p "$INSTANCE_DIR"
# Generate Nginx Config using the specific Upstream Port
cat <<EOF > "$INSTANCE_DIR/nginx.conf"
server {
listen 80;
wallarm_mode monitoring;
location / {
proxy_pass http://$UPSTREAM_IP:$UPSTREAM_PORT;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
}
server {
listen 90;
location /wallarm-status {
wallarm_status on;
allow all;
}
}
EOF
# Compose File with SELinux ( :Z ) flag
cat <<EOF > "$INSTANCE_DIR/conf.yml"
version: '3.8'
services:
$NODE_NAME:
image: docker.io/wallarm/node:4.10-latest
container_name: $NODE_NAME
restart: always
ports:
- "$TRAFFIC_PORT:80"
- "$MONITOR_PORT:90"
environment:
- WALLARM_API_TOKEN=$TOKEN
- WALLARM_API_HOST=$API_HOST
volumes:
- ./nginx.conf:/etc/nginx/http.d/default.conf:ro,Z
EOF
# --- PHASE 3: Launch ---
echo -e "${YELLOW}🚀 Launching Wallarm Instance...${NC}"
cd "$INSTANCE_DIR"
$COMPOSE_CMD -f conf.yml up -d
echo -e "\n${GREEN}✅ DEPLOYMENT COMPLETE${NC}"
echo -e "External Port: $TRAFFIC_PORT -> Internal: $UPSTREAM_IP:$UPSTREAM_PORT"
echo -e "View real-time logs: $ENGINE logs -f $NODE_NAME"


@ -1,63 +0,0 @@
#!/bin/bash
# ==============================================================================
# Wallarm Pre-Flight Check
# Purpose: Validate Environment before Container Deployment
# ==============================================================================
UPSTREAM_IP="10.0.0.14"
UPSTREAM_PORT="80"
WALLARM_API="api.wallarm.com"
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m'
echo -e "${YELLOW}🔍 Starting Pre-Flight Checks...${NC}\n"
# 1. Check Root
[[ $EUID -ne 0 ]] && echo -e "${RED}❌ Fail: Must run as root${NC}" || echo -e "${GREEN}✅ Pass: Root privileges${NC}"
# 2. Check OS (CentOS/RHEL focus)
if [ -f /etc/redhat-release ]; then
echo -e "${GREEN}✅ Pass: CentOS/RHEL detected ($(cat /etc/redhat-release))${NC}"
else
echo -e "${YELLOW}⚠️ Warn: Not a RedHat-based system. Script 1 may need tweaks.${NC}"
fi
# 3. Check SELinux Status
SE_STATUS=$(getenforce 2>/dev/null || echo "Not installed")
if [ "$SE_STATUS" == "Enforcing" ]; then
echo -e "${YELLOW}⚠️ Note: SELinux is Enforcing. Ensure volume mounts use the :Z flag.${NC}"
else
echo -e "${GREEN}✅ Pass: SELinux is $SE_STATUS${NC}"
fi
# 4. Check Upstream Connectivity (The most important check)
echo -n "Checking connectivity to Upstream ($UPSTREAM_IP:$UPSTREAM_PORT)... "
if nc -zv -w5 "$UPSTREAM_IP" "$UPSTREAM_PORT" &>/dev/null; then
echo -e "${GREEN}✅ Connected${NC}"
else
echo -e "${RED}❌ FAILED: Cannot reach Upstream app. Check Routing/Firewalls.${NC}"
fi
# 5. Check Wallarm Cloud Connectivity
echo -n "Checking connectivity to Wallarm API ($WALLARM_API)... "
curl -s --connect-timeout 5 "https://$WALLARM_API" &>/dev/null
CURL_RC=$?  # capture immediately; a later test would overwrite $?
if [ $CURL_RC -eq 0 ] || [ $CURL_RC -eq 45 ]; then # exit 45 is common without auth, but shows port 443 is open
echo -e "${GREEN}✅ Connected${NC}"
else
echo -e "${RED}❌ FAILED: Cannot reach Wallarm Cloud. Check Proxy/Egress.${NC}"
fi
# 6. Check Port Availability
for PORT in 8000 9000; do
if lsof -Pi :$PORT -sTCP:LISTEN -t >/dev/null ; then
echo -e "${RED}❌ FAILED: Port $PORT is already in use.${NC}"
else
echo -e "${GREEN}✅ Pass: Port $PORT is free${NC}"
fi
done
echo -e "\n${YELLOW}Pre-flight complete. If all are GREEN, proceed to deployment.${NC}"


@ -6,99 +6,106 @@ GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m'
# --- Configuration & Globals ---
EU_DATA_NODES=("api.wallarm.com" "node-data0.eu1.wallarm.com" "node-data1.eu1.wallarm.com")
US_DATA_NODES=("us1.api.wallarm.com" "node-data0.us1.wallarm.com" "node-data1.us1.wallarm.com")
# --- Functions ---
print_header() {
echo -e "${YELLOW}=== Sechpoint Wallarm Pre-Flight Diagnostic ===${NC}"
echo "Use this tool to verify environment readiness before deployment."
echo "-------------------------------------------------------"
}
check_proxy() {
echo -e "${YELLOW}[1/5] Checking Environment Proxies...${NC}"
if [ -n "$https_proxy" ] || [ -n "$HTTPS_PROXY" ]; then
echo -e "${GREEN}[INFO]${NC} Proxy detected: ${https_proxy:-$HTTPS_PROXY}"
else
echo -e "[INFO] No system proxy detected."
fi
}
get_user_input() {
read -p "Enter Application Server IP (to be protected) [127.0.0.1]: " APP_HOST </dev/tty
APP_HOST=${APP_HOST:-127.0.0.1}
read -p "Enter Application Server Port [8080]: " APP_PORT </dev/tty
APP_PORT=${APP_PORT:-8080}
}
check_sudo() {
echo -e "\n${YELLOW}[2/5] Checking Sudo & OS Status...${NC}"
echo "Verifying sudo permissions (you may be prompted for your password)..."
if sudo -v; then
echo -e "${GREEN}[PASS]${NC} Sudo access confirmed."
else
echo -e "${RED}[FAIL]${NC} Sudo access DENIED. You must be a sudoer to install Wallarm."
fi
if [ -f /etc/os-release ]; then
( . /etc/os-release; echo "OS: $PRETTY_NAME" )
fi
}
check_tools() {
echo -e "\n${YELLOW}[3/5] Verifying Required Tools...${NC}"
local tools=("curl" "wget" "gpg" "grep")
for tool in "${tools[@]}"; do
if command -v "$tool" &> /dev/null; then
echo -e "${GREEN}[PASS]${NC} $tool is installed."
else
echo -e "${RED}[FAIL]${NC} $tool is MISSING."
fi
done
}
# The core connectivity logic
test_endpoint() {
local target=$1
# -skI = silent, insecure (ignore certs), head-only
curl -skI --connect-timeout 5 "https://$target" > /dev/null 2>&1
local rc=$?  # capture immediately so later tests do not clobber $?
# Exit codes 45/52 can still occur when port 443 is reachable, so treat them as open.
if [[ "$rc" =~ ^(0|45|52)$ ]]; then
echo -e "${GREEN}[PASS]${NC} Reached $target"
else
echo -e "${RED}[FAIL]${NC} BLOCKED: $target"
fi
}
check_wallarm_cloud() {
echo -e "\n${YELLOW}[4/5] Testing Wallarm Cloud Connectivity (Port 443)...${NC}"
echo "--- EU Cloud ---"
for node in "${EU_DATA_NODES[@]}"; do test_endpoint "$node"; done
echo -e "\n--- US Cloud ---"
for node in "${US_DATA_NODES[@]}"; do test_endpoint "$node"; done
}
check_internal_app() {
echo -e "\n${YELLOW}[5/5] Testing Internal App Connectivity...${NC}"
# We test the TCP handshake only.
curl -vsk --connect-timeout 5 "http://$APP_HOST:$APP_PORT" > /dev/null 2>&1
local exit_code=$?
# Exit codes 0, 52 (empty reply), 22 (4xx/5xx), 56 (reset), and 35 (TLS) all imply
# the port is OPEN; 7 (refused) and 28 (timeout) are the main failure triggers.
if [[ "$exit_code" =~ ^(0|52|22|56|35)$ ]]; then
echo -e "${GREEN}[PASS]${NC} TCP connection established to $APP_HOST:$APP_PORT"
else
echo -e "${RED}[FAIL]${NC} CANNOT REACH App at $APP_HOST:$APP_PORT (curl exit: $exit_code)"
echo "       Check firewalls or verify that the service is running on the app server."
fi
}
# --- Execution ---
print_header
check_proxy
get_user_input
check_sudo
check_tools
check_wallarm_cloud
check_internal_app
echo -e "\n${YELLOW}-------------------------------------------------------${NC}"
echo -e "${YELLOW}PRE-FLIGHT COMPLETE. PLEASE SCREENSHOT THIS OUTPUT.${NC}"