
Damn Small XSS Scanner

« on: September 20, 2018, 11:07:57 am »

An excellent, functional XSS scanner that supports both POST and GET parameters. It is written in Python and has no more than 100 lines of code.


It also supports HTTP proxies, and it offers options to set a custom User-Agent, Referer and Cookie header.
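
Since the whole tool is a single file, you can also drive it from your own Python 2 code instead of the command line. Here is a minimal sketch of that idea (it assumes dsxs.py sits in the same directory; init_options() and scan_page() are the functions defined in the source under SRC below, and the cookie value is just a placeholder):

Code: Python
# run_dsxs.py - hypothetical wrapper, not part of DSXS itself
import dsxs

# same knobs as the CLI switches: --proxy, --cookie, --user-agent, --referer
dsxs.init_options(proxy="http://127.0.0.1:8080", cookie="PHPSESSID=abc123")

# scan_page() returns True when at least one possible XSS issue was reported
result = dsxs.scan_page("http://testphp.vulnweb.com/search.php?test=query",
                        data="searchFor=foobar")
print "possible vulnerabilities found: %s" % result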

Usage and options:

Code: Bash
$ python dsxs.py -h
Damn Small XSS Scanner (DSXS) < 100 LoC (Lines of Code) #v0.2d
 by: Miroslav Stampar (@stamparm)

Usage: dsxs.py [options]

Options:
  --version          show program's version number and exit
  -h, --help         show this help message and exit
  -u URL, --url=URL  Target URL (e.g. "http://www.target.com/page.htm?id=1")
  --data=DATA        POST data (e.g. "query=test")
  --cookie=COOKIE    HTTP Cookie header value
  --user-agent=UA    HTTP User-Agent header value
  --referer=REFERER  HTTP Referer header value
  --proxy=PROXY      HTTP proxy address (e.g. "http://127.0.0.1:8080")


Usage examples:

Code: Bash
$ python dsxs.py -u "http://testphp.vulnweb.com/search.php?test=query" --data="searchFor=foobar"
Damn Small XSS Scanner (DSXS) < 100 LoC (Lines of Code) #v0.2d
 by: Miroslav Stampar (@stamparm)

* scanning GET parameter 'test'
* scanning POST parameter 'searchFor'
 (i) POST parameter 'searchFor' appears to be XSS vulnerable (">.xss.<", outside of tags, no filtering)

scan results: possible vulnerabilities found

Code: Bash
$ python dsxs.py -u "http://public-firing-range.appspot.com/address/location.hash/replace"
Damn Small XSS Scanner (DSXS) < 100 LoC (Lines of Code) #v0.2d
 by: Miroslav Stampar (@stamparm)

 (i) page itself appears to be XSS vulnerable (DOM)
  (o) ...<script>
      var payload = window.location.hash.substr(1);location.replace(payload);

    </script>...
 (x) no usable GET/POST parameters found

scan results: possible vulnerabilities found
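
If you want to try the scanner against something you control first, a deliberately vulnerable page is enough. The following is purely illustrative and not part of DSXS: a tiny Python 2 server that reflects the GET parameter q into the HTML without any escaping, which dsxs.py should then flag.

Code: Python
# vulnerable_echo.py - intentionally XSS-vulnerable demo page (Python 2), local testing only
import BaseHTTPServer
import urlparse

class EchoHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # reflect the "q" parameter back into the page with no escaping (the bug)
        params = urlparse.parse_qs(urlparse.urlparse(self.path).query)
        value = params.get("q", [""])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write("<html><body>You searched for: %s</body></html>" % value)

if __name__ == "__main__":
    # run this, then in another terminal: python dsxs.py -u "http://127.0.0.1:8000/?q=test"
    BaseHTTPServer.HTTPServer(("127.0.0.1", 8000), EchoHandler).serve_forever()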


Requirements:

- Python 2.6.x or 2.7.x


SRC:

Code: Python
#!/usr/bin/env python
import cookielib, optparse, random, re, string, urllib, urllib2, urlparse

NAME, VERSION, AUTHOR, LICENSE = "Damn Small XSS Scanner (DSXS) < 100 LoC (Lines of Code)", "0.2h", "Miroslav Stampar (@stamparm)", "Public domain (FREE)"

SMALLER_CHAR_POOL    = ('<', '>')                                                           # characters used for XSS tampering of parameter values (smaller set - for avoiding possible SQLi errors)
LARGER_CHAR_POOL     = ('\'', '"', '>', '<', ';')                                           # characters used for XSS tampering of parameter values (larger set)
GET, POST            = "GET", "POST"                                                        # enumerator-like values used for marking current phase
PREFIX_SUFFIX_LENGTH = 5                                                                    # length of random prefix/suffix used in XSS tampering
COOKIE, UA, REFERER = "Cookie", "User-Agent", "Referer"                                     # optional HTTP header names
TIMEOUT = 30                                                                                # connection timeout in seconds
DOM_FILTER_REGEX = r"(?s)<!--.*?-->|\bescape\([^)]+\)|\([^)]+==[^(]+\)|\"[^\"]+\"|'[^']+'"  # filtering regex used before DOM XSS search

REGULAR_PATTERNS = (                                                                        # each (regular pattern) item consists of (r"context regex", (prerequisite unfiltered characters), "info text", r"content removal regex")
    (r"\A[^<>]*%(chars)s[^<>]*\Z", ('<', '>'), "\".xss.\", pure text response, %(filtering)s filtering", None),
    (r"<!--[^>]*%(chars)s|%(chars)s[^<]*-->", ('<', '>'), "\"<!--.'.xss.'.-->\", inside the comment, %(filtering)s filtering", None),
    (r"(?s)<script[^>]*>[^<]*?'[^<']*%(chars)s|%(chars)s[^<']*'[^<]*</script>", ('\'', ';'), "\"<script>.'.xss.'.</script>\", enclosed by <script> tags, inside single-quotes, %(filtering)s filtering", r"\\'"),
    (r'(?s)<script[^>]*>[^<]*?"[^<"]*%(chars)s|%(chars)s[^<"]*"[^<]*</script>', ('"', ';'), "'<script>.\".xss.\".</script>', enclosed by <script> tags, inside double-quotes, %(filtering)s filtering", r'\\"'),
    (r"(?s)<script[^>]*>[^<]*?%(chars)s|%(chars)s[^<]*</script>", (';',), "\"<script>.xss.</script>\", enclosed by <script> tags, %(filtering)s filtering", None),
    (r">[^<]*%(chars)s[^<]*(<|\Z)", ('<', '>'), "\">.xss.<\", outside of tags, %(filtering)s filtering", r"(?s)<script.+?</script>|<!--.*?-->"),
    (r"<[^>]*=\s*'[^>']*%(chars)s[^>']*'[^>]*>", ('\'',), "\"<.'.xss.'.>\", inside the tag, inside single-quotes, %(filtering)s filtering", r"(?s)<script.+?</script>|<!--.*?-->|\\"),
    (r'<[^>]*=\s*"[^>"]*%(chars)s[^>"]*"[^>]*>', ('"',), "'<.\".xss.\".>', inside the tag, inside double-quotes, %(filtering)s filtering", r"(?s)<script.+?</script>|<!--.*?-->|\\"),
    (r"<[^>]*%(chars)s[^>]*>", (), "\"<.xss.>\", inside the tag, outside of quotes, %(filtering)s filtering", r"(?s)<script.+?</script>|<!--.*?-->|=\s*'[^']*'|=\s*\"[^\"]*\""),
)

DOM_PATTERNS = (                                                                            # each (dom pattern) item consists of r"recognition regex"
    r"(?s)<script[^>]*>[^<]*?(var|\n)\s*(\w+)\s*=[^;]*(document\.(location|URL|documentURI)|location\.(href|search)|window\.location)[^;]*;[^<]*(document\.write(ln)?\(|\.innerHTML\s*=|eval\(|setTimeout\(|setInterval\(|location\.(replace|assign)\(|setAttribute\()[^;]*\2.*?</script>",
    r"(?s)<script[^>]*>[^<]*?(document\.write\(|\.innerHTML\s*=|eval\(|setTimeout\(|setInterval\(|location\.(replace|assign)\(|setAttribute\()[^;]*(document\.(location|URL|documentURI)|location\.(href|search)|window\.location).*?</script>",
)

_headers = {}                                                                               # used for storing dictionary with optional header values

def _retrieve_content(url, data=None):
    try:
        req = urllib2.Request("".join(url[i].replace(' ', "%20") if i > url.find('?') else url[i] for i in xrange(len(url))), data, _headers)
        retval = urllib2.urlopen(req, timeout=TIMEOUT).read()
    except Exception, ex:
        retval = ex.read() if hasattr(ex, "read") else getattr(ex, "msg", str())
    return retval or ""

def _contains(content, chars):
    content = re.sub(r"\\[%s]" % re.escape("".join(chars)), "", content) if chars else content
    return all(char in content for char in chars)

def scan_page(url, data=None):
    retval, usable = False, False
    url, data = re.sub(r"=(&|\Z)", "=1\g<1>", url) if url else url, re.sub(r"=(&|\Z)", "=1\g<1>", data) if data else data
    original = re.sub(DOM_FILTER_REGEX, "", _retrieve_content(url, data))
    dom = max(re.search(_, original) for _ in DOM_PATTERNS)
    if dom:
        print " (i) page itself appears to be XSS vulnerable (DOM)"
        print "  (o) ...%s..." % dom.group(0)
        retval = True
    try:
        for phase in (GET, POST):
            current = url if phase is GET else (data or "")
            for match in re.finditer(r"((\A|[?&])(?P<parameter>[\w\[\]]+)=)(?P<value>[^&#]*)", current):
                found, usable = False, True
                print "* scanning %s parameter '%s'" % (phase, match.group("parameter"))
                prefix, suffix = ("".join(random.sample(string.ascii_lowercase, PREFIX_SUFFIX_LENGTH)) for i in xrange(2))
                for pool in (LARGER_CHAR_POOL, SMALLER_CHAR_POOL):
                    if not found:
                        tampered = current.replace(match.group(0), "%s%s" % (match.group(0), urllib.quote("%s%s%s%s" % ("'" if pool == LARGER_CHAR_POOL else "", prefix, "".join(random.sample(pool, len(pool))), suffix))))
                        content = (_retrieve_content(tampered, data) if phase is GET else _retrieve_content(url, tampered)).replace("%s%s" % ("'" if pool == LARGER_CHAR_POOL else "", prefix), prefix)
                        for regex, condition, info, content_removal_regex in REGULAR_PATTERNS:
                            filtered = re.sub(content_removal_regex or "", "", content)
                            for sample in re.finditer("%s([^ ]+?)%s" % (prefix, suffix), filtered, re.I):
                                context = re.search(regex % {"chars": re.escape(sample.group(0))}, filtered, re.I)
                                if context and not found and sample.group(1).strip():
                                    if _contains(sample.group(1), condition):
                                        print " (i) %s parameter '%s' appears to be XSS vulnerable (%s)" % (phase, match.group("parameter"), info % dict((("filtering", "no" if all(char in sample.group(1) for char in LARGER_CHAR_POOL) else "some"),)))
                                        found = retval = True
                                    break
        if not usable:
            print " (x) no usable GET/POST parameters found"
    except KeyboardInterrupt:
        print "\r (x) Ctrl-C pressed"
    return retval

def init_options(proxy=None, cookie=None, ua=None, referer=None):
    global _headers
    _headers = dict(filter(lambda _: _[1], ((COOKIE, cookie), (UA, ua or NAME), (REFERER, referer))))
    urllib2.install_opener(urllib2.build_opener(urllib2.ProxyHandler({'http': proxy})) if proxy else None)

if __name__ == "__main__":
    print "%s #v%s\n by: %s\n" % (NAME, VERSION, AUTHOR)
    parser = optparse.OptionParser(version=VERSION)
    parser.add_option("-u", "--url", dest="url", help="Target URL (e.g. \"http://www.target.com/page.php?id=1\")")
    parser.add_option("--data", dest="data", help="POST data (e.g. \"query=test\")")
    parser.add_option("--cookie", dest="cookie", help="HTTP Cookie header value")
    parser.add_option("--user-agent", dest="ua", help="HTTP User-Agent header value")
    parser.add_option("--referer", dest="referer", help="HTTP Referer header value")
    parser.add_option("--proxy", dest="proxy", help="HTTP proxy address (e.g. \"http://127.0.0.1:8080\")")
    options, _ = parser.parse_args()
    if options.url:
        init_options(options.proxy, options.cookie, options.ua, options.referer)
        result = scan_page(options.url if options.url.startswith("http") else "http://%s" % options.url, options.data)
        print "\nscan results: %s vulnerabilities found" % ("possible" if result else "no")
    else:
        parser.print_help()
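
As for how the detection in scan_page() works: each parameter value is tampered with a payload of the form random prefix + shuffled special characters + random suffix, and the response is then searched for that marker to see which characters came back unfiltered (and in which HTML context, via REGULAR_PATTERNS). The snippet below is only a simplified, standalone illustration of that idea against a fake response, not the exact logic above:

Code: Python
# detection_idea.py - simplified sketch of the DSXS reflection check (fake response, illustration only)
import random
import re
import string

PREFIX_SUFFIX_LENGTH = 5
CHAR_POOL = ('\'', '"', '>', '<', ';')   # the "larger" pool used by the scanner

prefix = "".join(random.sample(string.ascii_lowercase, PREFIX_SUFFIX_LENGTH))
suffix = "".join(random.sample(string.ascii_lowercase, PREFIX_SUFFIX_LENGTH))
payload = prefix + "".join(random.sample(CHAR_POOL, len(CHAR_POOL))) + suffix
print "payload appended to the parameter value: %r" % payload

# pretend this is the HTML that came back after sending the payload;
# here the application stripped < and > but let quotes and ; through
response = "<p>results for %s%s%s</p>" % (prefix, "'\";", suffix)

match = re.search("%s([^ ]+?)%s" % (prefix, suffix), response, re.I)
if match:
    reflected = match.group(1)
    survived = [c for c in CHAR_POOL if c in reflected]
    print "reflected between the markers: %r" % reflected
    print "unfiltered characters: %s" % survived   # drives the "no"/"some" filtering verdict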

Repo: https://github.com/stamparm/DSXS

I hope you find it useful!
ANTRAX
« Last edited: September 20, 2018, 11:11:20 am by ANTRAX »


 
