# 🛡️ Wallarm Deployment Toolkit
This repository contains automated scripts to deploy the Wallarm Filtering Node in various environments. Whether you are using a virtual machine (NGINX Dynamic Module) or a containerized environment (Docker/Podman), these scripts ensure a "Bank-Grade" configuration.
Repository: https://git.sechpoint.app/customer-engineering/wallarm
## 🚦 Step 1: Network Pre-Check (Crucial)
Before attempting any installation, you must verify that the server has egress access to Wallarm's Cloud and the official NGINX repositories. Banks often block these by default.
- Navigate to the script:

  ```bash
  cd ~/wallarm/
  ```

- Run the audit:

  ```bash
  chmod +x network-pre-check.sh
  sudo ./network-pre-check.sh
  ```

- Result: If you see any RED "FAIL" messages, ask the network team to whitelist the listed IPs/hostnames on port 443. Do not proceed until all checks are GREEN.
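If you need a quick manual spot-check before (or after) running the full script, a minimal egress probe can be sketched in bash. Note that the hostnames below are illustrative examples of typical Wallarm Cloud and NGINX endpoints, not the authoritative list that `network-pre-check.sh` verifies:

```shell
#!/usr/bin/env bash
# Minimal egress probe sketch. The hostnames are examples only;
# network-pre-check.sh remains the source of truth for the full list.
check_egress() {
  local host="$1" port="${2:-443}"
  # Bash's /dev/tcp pseudo-device opens a TCP connection; timeout
  # bounds the wait so a silently dropped packet doesn't hang forever.
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "PASS ${host}:${port}"
  else
    echo "FAIL ${host}:${port}"
  fi
}

check_egress us1.api.wallarm.com   # Wallarm Cloud (US) - example host
check_egress nginx.org             # NGINX repository - example host
```

A `FAIL` here means the same thing as a RED result from the full script: the host/port pair needs to be whitelisted before you proceed.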
## 🛠️ Step 2: Choose Your Deployment Method
Depending on your architecture, choose one of the following methods.
### Option A: Virtual Machine (Native NGINX Module)
Best for: Maximum performance, high-traffic production, or existing NGINX servers.
- Configure: Open the install script and set your `TOKEN` and `USE_CASE` (`in-line` or `out-of-band`):

  ```bash
  nano ~/wallarm/installation-vm/install.sh
  ```

- Run:

  ```bash
  chmod +x ~/wallarm/installation-vm/install.sh
  sudo ~/wallarm/installation-vm/install.sh
  ```
### Option B: Containerized (Docker / Podman)
Best for: Rapid PoC, testing, or environments where you don't want to modify the host OS packages.
- Configure: Open the install script and set your `TOKEN` and `UPSTREAM` (the backend IP):

  ```bash
  nano ~/wallarm/installation-docker/install.sh
  ```

- Run:

  ```bash
  chmod +x ~/wallarm/installation-docker/install.sh
  sudo ~/wallarm/installation-docker/install.sh
  ```
#### Scaling the containerized (Docker / Podman) deployment
You can run multiple instances by changing the NODE_NAME and TRAFFIC_PORT/MONITOR_PORT variables. This allows you to serve different backends on one host.
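As a sketch, two independent instances on one host might look like the following. The `wallarm/node` image name and the `WALLARM_API_TOKEN` / `NGINX_BACKEND` environment variables are assumptions based on Wallarm's public Docker documentation; the `install.sh` script in this repo is the source of truth for the exact invocation:

```shell
# Sketch: two nodes on one host, distinguished by container name
# (NODE_NAME) and host port (TRAFFIC_PORT), each protecting a
# different backend. Image name and env vars are assumptions from
# Wallarm's Docker docs; adapt to match install.sh.
docker run -d --name wallarm-prod-01 \
  -e WALLARM_API_TOKEN="$TOKEN" \
  -e NGINX_BACKEND="10.0.0.14" \
  -p 8000:80 \
  wallarm/node

docker run -d --name wallarm-prod-02 \
  -e WALLARM_API_TOKEN="$TOKEN" \
  -e NGINX_BACKEND="10.0.0.15" \
  -p 8001:80 \
  wallarm/node
```

Because each container binds its own host port, the two instances never conflict, and each can front a different backend.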
## ⚙️ Configuration Variables
Inside each install.sh script, you will find a Configuration section at the top. You must update these:
| Variable | Description | Example |
|---|---|---|
| `NODE_NAME` | A unique identifier for the instance (useful for containerized deployments). | `wallarm-prod-01` |
| `TRAFFIC_PORT` | The host port where the node listens for incoming application traffic. | `8000` |
| `MONITOR_PORT` | The host port used to expose Wallarm metrics (Prometheus/JSON format). | `9000` |
| `TOKEN` | The unique API token generated in the Wallarm Console. | `vPHB+Ygn...` |
| `REGION` | The Wallarm Cloud location associated with your account (`EU` or `US`). | `US` |
| `UPSTREAM` | The internal IP or hostname of the application server being protected. | `10.0.0.14` |
| `USE_CASE` | (VM only) Sets the mode: `in-line` (active filtering) or `out-of-band` (monitoring). | `in-line` |
## Deployment Flow Overview
When you deploy the node using these variables, the traffic typically flows as follows:
- **Incoming Traffic:** Hits the `TRAFFIC_PORT`.
- **Filtering:** The node uses the `TOKEN` to sync security rules from the Wallarm Cloud (based on your `REGION`).
- **Forwarding:** Valid traffic is sent to the `UPSTREAM` IP.
- **Monitoring:** System health and security metrics are pulled via the `MONITOR_PORT`.
## 🧪 Post-Installation Test
Once the script finishes, verify the WAF is working by sending a "fake" attack:
```bash
# Replace localhost with your server IP if testing remotely
curl -I http://localhost/etc/passwd
```
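The expected result depends on the node mode. As a convenience sketch (not part of the toolkit), a small helper can translate the HTTP status into a verdict, assuming an in-line node blocks this path-traversal probe with a 403:

```shell
# Interpret the probe's HTTP status code. ASSUMPTION: in in-line mode
# the node blocks this probe with 403; in out-of-band (monitoring)
# mode the request passes through to the upstream unblocked.
verdict() {
  case "$1" in
    403)             echo "BLOCKED - WAF is filtering in-line" ;;
    200|301|302|404) echo "NOT BLOCKED - check USE_CASE / node mode" ;;
    *)               echo "UNEXPECTED status: $1" ;;
  esac
}

# Usage (adjust host/port to your TRAFFIC_PORT):
# verdict "$(curl -s -o /dev/null -w '%{http_code}' http://localhost/etc/passwd)"
```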
## 📊 Sizing & Performance Specs
For a standard deployment (Postanalytics + NGINX on one host), we recommend the following:
### Resource Allocation & Performance
| Resource | Primary Consumer | Performance Note |
|---|---|---|
| CPU (2 cores) | NGINX & postanalytics | One core typically handles NGINX filtering, while the second manages `wstore` (local data analytics). This setup safely handles ~1,000 RPS. |
| RAM (4 GB) | `wstore` (Tarantool) | Wallarm uses an in-memory circular buffer for request analysis. At 4 GB, you can store roughly 15–20 minutes of request metadata for analysis. |
| Storage (50 GB) | Log buffering | Required for local attack logs and the postanalytics module. SSD is mandatory to prevent I/O wait times from slowing down request processing. |
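The 15–20 minute retention figure can be sanity-checked with back-of-envelope arithmetic. The ~4.5 KB of metadata per request used below is an assumed average for illustration, not a measured Wallarm value:

```shell
# Estimate wstore circular-buffer retention at a given request rate.
# ASSUMPTION: ~4.5 KB of metadata per request; real sizes vary with
# payload complexity, which is why the README quotes a 15-20 min range.
buffer_kb=$((4 * 1024 * 1024))   # 4 GB buffer, in KB
rps=1000                         # sustained requests per second
per_req_kb_x10=45                # 4.5 KB per request, x10 for integer math
kb_per_min=$((rps * 60 * per_req_kb_x10 / 10))
minutes=$((buffer_kb / kb_per_min))
echo "Estimated retention at ${rps} RPS: ~${minutes} minutes"
```

Doubling the RPS (or the per-request metadata size) halves the retention window, which is exactly the "memory pressure" signal described under Scaling Indicators.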
### Scaling Indicators
If your traffic grows beyond these specs, consider upgrading when you see:

- **High CPU usage (>70%):** Usually indicates you have reached the RPS limit for your core count, or that you are processing highly complex, nested payloads (such as large JSON or Base64-encoded data).
- **Memory pressure:** If `wstore` drops data before the 15-minute mark, your traffic volume (data per minute) is exceeding the 4 GB buffer.
- **Disk latency:** "I/O wait" in your system monitoring translates directly into latency for end users, because NGINX cannot clear its buffers fast enough.
> **Tip:** One CPU core typically handles ~500 RPS. In a 2-core setup, one core is dedicated to NGINX (filtering) and one to `wstore` (local analytics). For production spikes, we recommend 2x headroom.