Introduction

Stop copying nginx configs from Stack Overflow. Half of them are for nginx 1.14 and the other half have security holes. Here's how to actually understand what you're configuring.

Most people need three things from Nginx: reverse proxy their app, terminate TLS, and serve static files without burning backend CPU. Everything else -- caching, rate limiting, load balancing -- is situational. But those three? Non-negotiable for any production deploy. And most people get at least one of them wrong because they never learned how Nginx actually processes a request.

So that is where we start. The mental model first, then the configs.

Nginx Architecture and Configuration Files

Nginx processes a request through a hierarchy. A request comes in. Nginx looks at the Host header and matches it against server_name directives across all server blocks. That picks the server block. Within the block, the URI gets matched against location directives: an exact (=) match wins outright; otherwise Nginx remembers the longest matching prefix, then checks the regex locations in order -- and a matching regex beats the prefix unless that prefix is marked ^~. Whichever location wins gets to handle the request. Directives set at the http level cascade down into server blocks, which cascade down into location blocks, unless explicitly overridden. Once you see this tree structure, most confusing Nginx behavior stops being confusing.
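A minimal sketch of that precedence (the paths and directives here are made up for illustration):

location = /health { return 200; }     # exact match: wins outright for /health
location /assets/  { root /var/www; }  # longest-prefix candidate
location ~* \.png$ { expires 30d; }    # regex, checked after prefixes are collected

A request for /assets/logo.png matches both the prefix and the regex, and the regex wins. If you want the prefix to take priority, declare it as location ^~ /assets/ -- that tells Nginx to skip the regex check whenever this prefix is the longest match.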

The architecture under the hood: one master process, a handful of worker processes (one per CPU core), and each worker handles thousands of connections through an event loop. Not a thread-per-connection model like Apache. A few megabytes of memory for thousands of concurrent connections.

Config file lives at /etc/nginx/nginx.conf.

/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    # Performance
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    # Include site configurations
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

worker_processes auto spawns one worker per CPU core. Leave it. The events block controls connection handling. use epoll picks the fastest event notification mechanism on Linux. Everything web-related lives inside the http block, and the include lines at the bottom pull in per-site configs from separate files.

You create config files in /etc/nginx/sites-available/ and symlink the ones you want active into /etc/nginx/sites-enabled/. Disabling a site is just removing the symlink.
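On a Debian-style layout, that workflow is a handful of commands (example.com stands in for your own site file):

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

# Disabling the site later removes only the symlink:
sudo rm /etc/nginx/sites-enabled/example.com
sudo nginx -t && sudo systemctl reload nginx

The file in sites-available stays put either way, so re-enabling is just recreating the link.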

And before you reload anything: sudo nginx -t.

Always.

A syntax error in an Nginx config does not just break the site you changed -- if Nginx ever restarts with that config (a reboot, a crash, a systemctl restart), it fails to start and every site on the machine goes down.

Server Blocks and Virtual Hosts

One Nginx instance, multiple domains. Each server block defines a domain, a document root, and the rules for that domain's requests. Apache calls them virtual hosts.

/etc/nginx/sites-available/example.com
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html index.htm;

    # Main location block
    location / {
        try_files $uri $uri/ =404;
    }

    # Custom error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;

    # Static asset caching
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff2)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Deny access to hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}

Nothing to explain here that the config does not already say. listen 80 on IPv4 and IPv6. server_name matches the Host header. try_files tries the URI as a file, then a directory, then 404. Static asset caching on common extensions. Deny hidden files. Symlink it into sites-enabled and reload.

One thing that has bitten me: try_files makes perfect sense for static sites, but people leave it in when they switch to a reverse proxy setup. Then requests that should hit your backend get 404'd by Nginx before they ever reach your app.
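If you want both behaviors -- files served directly when they exist, everything else handed to the app -- the named-location fallback covers it. A sketch, reusing the node_backend upstream defined in the reverse proxy section below:

location / {
    try_files $uri @app;
}

location @app {
    proxy_pass http://node_backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

try_files checks the filesystem first and only falls through to @app when nothing matches, so static assets never touch the backend.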

A reload with bad syntax will not crash the running server. It will silently keep the old configuration. Which leads to twenty minutes of "why aren't my changes working" before you check the error log.

Reverse Proxy Configuration

Your Node.js app on port 3000 should not face the public internet. Neither should your Django app on 8000 or your Go binary on 8080.

Nginx buffers slow clients so your app threads are not sitting idle waiting on a phone with bad cell signal to finish uploading a request body. You get TLS termination in one place, centralized logging, rate limiting, and horizontal scaling -- add more app servers behind the same Nginx instance later. But honestly, the biggest reason is that application servers are terrible at handling raw internet traffic. They were not built for it. Nginx was.

/etc/nginx/sites-available/myapp.com
upstream node_backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 80;
    server_name myapp.com www.myapp.com;

    # Proxy settings
    location / {
        proxy_pass http://node_backend;
        proxy_http_version 1.1;

        # Required headers for proper proxying
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }

    # Serve static files directly (bypass proxy)
    location /static/ {
        alias /var/www/myapp/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
}

If you've seen one proxy_pass, you've seen them all. The interesting part here is the upstream block -- use it even with a single backend, because keepalive 32 enables connection pooling. Without it, Nginx opens a fresh TCP connection for every request. With it, connections get reused. And adding a second backend later is one line.

The proxy_set_header lines are the part people forget. Without X-Real-IP and X-Forwarded-For, your app sees every request coming from 127.0.0.1. Useless for logging. Useless for rate limiting. X-Forwarded-Proto tells your app whether the original request was HTTP or HTTPS -- get this wrong and your redirects will loop forever.

The WebSocket headers (Upgrade and Connection) are there because you will need them. Hot module reloading, Socket.io, any real-time feature -- dead without those two lines. And Nginx gives zero useful error output when they are missing. Just a silent dropped connection.
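One refinement worth knowing: sending Connection "upgrade" on every request can also defeat upstream keepalive for plain HTTP traffic. The map-based pattern from the official Nginx WebSocket guide sets the header only when the client actually sends Upgrade. A sketch -- the map goes at the http level, and the two proxy_set_header lines replace the ones in the config above:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;

With a keepalive upstream you can map '' to an empty string instead of close, so idle pooled connections stay open.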

Load Balancing Strategies

Three built-in strategies.

Round-robin (the default) cycles through servers. Fine when your backends are roughly equivalent hardware. Least connections sends each request to whichever server has the fewest active connections right now -- if one endpoint takes 50ms and another takes several seconds, round-robin creates hotspots, but least-conn adapts. IP hash routes the same client IP to the same backend every time. Sticky sessions without external state. The downside: clients behind a shared NAT all look like one IP, so distribution gets lopsided.

Load Balancing Configuration
# Round-robin with weighted servers
upstream app_cluster {
    server 10.0.1.10:3000 weight=3;   # gets 3x traffic
    server 10.0.1.11:3000 weight=2;   # gets 2x traffic
    server 10.0.1.12:3000 weight=1;   # gets 1x traffic
    server 10.0.1.13:3000 backup;     # only used if others fail
    keepalive 64;
}

# Least connections strategy
upstream api_cluster {
    least_conn;
    server 10.0.2.10:8080 max_fails=3 fail_timeout=30s;
    server 10.0.2.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.2.12:8080 max_fails=3 fail_timeout=30s;
    keepalive 32;
}

# IP hash for session persistence
upstream session_cluster {
    ip_hash;
    server 10.0.3.10:5000;
    server 10.0.3.11:5000;
    server 10.0.3.12:5000;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://app_cluster;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection "";

        # Retry on failure
        proxy_next_upstream error timeout http_502 http_503;
        proxy_next_upstream_tries 3;
    }

    location /api/ {
        proxy_pass http://api_cluster;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
    }
}

weight when your servers are not identical. An 8-core machine should get more traffic than a 4-core one. backup is the last resort -- only receives requests when every other server in the group is down.

max_fails and fail_timeout handle passive health checking. Three errors in 30 seconds and Nginx pulls that server from rotation. Not as good as active health checks (Nginx Plus territory), but sufficient.

proxy_next_upstream is the part that actually saves your users. First backend returns a 502? Nginx retries on a different server. They never see it. But cap proxy_next_upstream_tries or a cascade of failing backends turns into an avalanche of retry traffic that makes everything worse.

SSL/TLS with Let's Encrypt

No TLS, no deploy.

Install Certbot: sudo apt install certbot python3-certbot-nginx. Get a certificate: sudo certbot --nginx -d example.com -d www.example.com. Certbot modifies your Nginx config automatically. But its defaults prioritize compatibility over security, so here is what each directive in a hardened config actually does.

ssl_protocols TLSv1.2 TLSv1.3 drops TLS 1.0 and 1.1, which have been officially deprecated since 2021 and have known vulnerabilities. No reason to support them. The cipher list prioritizes ECDHE ciphers for forward secrecy: even if your private key leaks in the future, past traffic stays encrypted. ssl_session_cache shared:SSL:10m lets workers share TLS session data so returning visitors skip the full handshake. ssl_stapling is the other performance win -- without it, a browser that checks revocation has to contact the certificate authority itself to validate your certificate. With stapling, Nginx fetches the OCSP response and bundles it into the handshake. One fewer round-trip. And Strict-Transport-Security with the preload flag tells browsers to always use HTTPS, even if the user types http://. Be careful with this one. Once you are on the preload list, removing your domain takes months.

SSL/TLS Configuration
# Redirect all HTTP traffic to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    # $host (not $server_name) preserves whichever hostname was requested
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # Certificate files (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # TLS protocol and cipher configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # SSL session caching for performance
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # OCSP stapling (faster certificate validation)
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 1.1.1.1 8.8.8.8 valid=300s;
    resolver_timeout 5s;

    # HSTS (force browsers to use HTTPS for 1 year)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

    # ... your location blocks go here
}

Certificates expire every 90 days. Certbot installs a systemd timer for auto-renewal. Verify with sudo certbot renew --dry-run and forget about it.
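On Debian and Ubuntu you can also confirm the timer is actually scheduled:

sudo systemctl list-timers certbot.timer

If nothing shows up (some installs use a cron entry in /etc/cron.d instead), the dry-run command is still the definitive check that renewal will work.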

Caching and Compression

Two wins that stack. Caching stores backend responses in Nginx so repeat requests never touch your application. Compression shrinks what goes over the wire.

Caching and Compression Configuration
# Define cache zone in the http block
proxy_cache_path /var/cache/nginx
    levels=1:2
    keys_zone=app_cache:10m
    max_size=1g
    inactive=60m
    use_temp_path=off;

server {
    # ... SSL and server_name config ...

    # Gzip compression (text-based types only; woff2 is already compressed)
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;
    gzip_min_length 256;
    gzip_types
        text/plain
        text/css
        text/javascript
        application/json
        application/javascript
        application/xml
        application/xml+rss
        image/svg+xml;

    # Proxy caching for API responses
    location /api/ {
        proxy_pass http://node_backend;
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;
        proxy_cache_lock on;

        # Cache key based on request URI
        proxy_cache_key "$scheme$request_method$host$request_uri";

        # Add cache status header for debugging
        add_header X-Cache-Status $upstream_cache_status;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Skip cache for authenticated requests
    location /api/user/ {
        proxy_pass http://node_backend;
        proxy_cache_bypass 1;
        proxy_no_cache 1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

proxy_cache_use_stale is the part that matters most here. Backend goes down? Nginx serves the last cached response instead of an error page. Your app is dead but your users still see something useful. proxy_cache_lock prevents the thundering herd -- cache expires, a hundred requests arrive, only one goes to the backend. The other ninety-nine wait.

On gzip: do not set gzip_comp_level above 5. Difference between 5 and 9? Under 3% size reduction for roughly double the CPU cost. gzip_min_length of 256 bytes avoids compressing tiny responses where the gzip overhead actually makes them larger. Only text-based MIME types. Gzipping a JPEG wastes CPU and changes nothing.

X-Cache-Status returns HIT, MISS, BYPASS, or EXPIRED. Useful in staging. Strip it in production if you do not want to leak implementation details.
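Checking it from the outside is one curl (the path here is hypothetical):

curl -sI https://myapp.com/api/posts | grep -i x-cache-status

The first request should report MISS, and a repeat within the 10-minute proxy_cache_valid window should report HIT. If every request says MISS, a Set-Cookie or Cache-Control header from the backend is the usual culprit -- Nginx refuses to cache those responses by default.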

Rate Limiting and Security Headers

Add these before you ship. Not after. "We'll add security later" means you will not add security.

Rate Limiting and Security Headers
# Define rate limit zones (in http block)
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;

# Connection limiting
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    # ... SSL config ...

    # Global connection limit
    limit_conn addr 100;

    # Request size limits
    client_max_body_size 10m;
    client_body_timeout 12s;
    client_header_timeout 12s;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com;" always;
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

    # Hide Nginx version
    server_tokens off;

    # General rate limit
    location / {
        limit_req zone=general burst=20 nodelay;
        proxy_pass http://node_backend;
    }

    # Strict rate limit for login endpoint
    location /auth/login {
        limit_req zone=login burst=3 nodelay;
        proxy_pass http://node_backend;
    }

    # Higher rate limit for API
    location /api/ {
        limit_req zone=api burst=50 nodelay;
        proxy_pass http://node_backend;
    }

    # Block common attack patterns
    location ~* (\.php|\.asp|\.aspx|\.jsp|\.cgi)$ {
        return 444;
    }

    # Block access to sensitive files
    location ~* (\.env|\.git|\.htaccess|\.htpasswd|wp-config) {
        deny all;
        return 404;
    }
}

Three rate limit zones targeting different endpoints: general browsing at 10 req/s, login at 5 req/min (you do not need fast logins), and API at 30 req/s. $binary_remote_addr is the compact binary form of the client IP. Saves memory versus the string version.

Without burst, every request over the limit gets an instant 503. A browser loading a single page fires 15 parallel requests for CSS, JS, and images. That is already over a 10 req/s limit. burst=20 queues the overflow, nodelay processes them immediately. Normal browsing works. A script hammering at 100 req/s hits the wall.
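You can watch the limiter kick in with a quick loop (the host is hypothetical):

for i in $(seq 1 40); do
    curl -s -o /dev/null -w '%{http_code}\n' https://myapp.com/
done

Roughly the first twenty-odd requests return 200 (the rate plus the burst allowance) and the rest come back 503 until the bucket drains. If 503 is misleading in your dashboards, limit_req_status lets you change the rejection code.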

Content-Security-Policy is the most important header in this block. It whitelists exactly which origins can serve scripts, styles, images, and fonts. Even if an attacker injects a script tag, the browser refuses to run it. The rest -- X-Frame-Options, X-Content-Type-Options, Referrer-Policy -- are one-liners that close well-known attack vectors. Just add them.

server_tokens off hides the Nginx version. Minor. No upside to broadcasting it though.

The final two location blocks: cheap defense. If you do not run PHP, block .php requests. Block .env, .git, and .htaccess unconditionally. return 444 is Nginx-specific -- drops the connection with zero response body. Faster than sending an error page to a bot, and it gives the scanner nothing to work with.

Before deploying: nginx -t passes. SSL Labs gives you an A. Your security headers are in place. Your logs go somewhere you'll actually check. Your rate limits will not lock out normal users. You can explain what every directive in your config does.

Done.

Anurag Sinha

Full Stack Developer & Technical Writer

Anurag is a full stack developer and technical writer. He covers web technologies, backend systems, and developer tools for the Codertronix community.