Showing results for tags 'seq'.

Found 1 result

  1. Server WordPress backup-file scanner (posted by YASSINOX.TN), cleaned up and ported to Python 3:

         import re
         import urllib.request

         lista = []
         # Common filenames under which wp-config.php backups are left exposed.
         backup = [
             'wp-config.php~',
             'wp-config.php.bak',
             'wp-config.bak',
             'wp-config.php-bak',
             '/wp-content/uploads/blog-backup.txt',
         ]

         def unique(seq):
             # Order-preserving de-duplication.
             seen = set()
             return [seen.add(x) or x for x in seq if x not in seen]

         def grabwp(ip):
             # Query Bing for WordPress sites hosted on the given IP
             # (result pages starting at offsets 1, 11 and 21, 50 hits each).
             try:
                 page = 1
                 print('\n')
                 while page <= 21:
                     bing = ("http://www.bing.com/search?q=ip%3A" + ip
                             + "+?page_id=&count=50&first=" + str(page))
                     readbing = urllib.request.urlopen(bing).read().decode('utf-8', 'ignore')
                     findwebs = re.findall('<h2><a href="(.*?)"', readbing)
                     for wpnoclean in findwebs:
                         # Keep only the part of the URL before ?page_id=.
                         findwp = re.findall(r'(.*?)\?page_id=', wpnoclean)
                         lista.extend(findwp)
                     page = page + 10
             except IndexError:
                 pass

         def searchbackup(site, config):
             # Report the URL if the candidate backup file exposes DB credentials.
             try:
                 read = urllib.request.urlopen(site + "/" + config).read().decode('utf-8', 'ignore')
                 if re.findall("USER", read):
                     print("BACKUP FILE > " + site + "/" + config)
             except Exception:
                 pass

         def scan():
             for site in unique(lista):
                 for config in backup:
                     searchbackup(site, config)

         print("\\!/ Server Wordpress Backup Files Scanner By YASSINOX.TN !/")
         print('')
         ip = input("Server IP address : ")
         grabwp(ip)
         final = unique(lista)
         print("Done ! Grabbed " + str(len(final)) + " Wordpress Sites On This Server")
         print("---------------------------------------------------")
         scan()
         print("---------------------------------------------------")
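The order-preserving `unique()` helper in the script above (which is what matches the `seq` tag) can be exercised on its own; a minimal sketch:

```python
def unique(seq):
    # Keep the first occurrence of each element, preserving order.
    # set.add() returns None (falsy), so `seen.add(x) or x` evaluates to x
    # while recording x as seen as a side effect.
    seen = set()
    return [seen.add(x) or x for x in seq if x not in seen]

print(unique(["a", "b", "a", "c", "b"]))  # ['a', 'b', 'c']
```

Unlike `list(set(seq))`, this keeps the elements in their first-seen order, which matters here because the scanner probes sites in the order Bing returned them.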