
Leaderboard

Popular Content

Showing content with the highest reputation on 05/18/18 in all areas

  1. 2 points
  2. Do some research on the keyword "sinucidere" (suicide), maybe we find gecko! @Nytro, do you know something you're not telling us?
    2 points
  3. Description
     NetRipper is a post-exploitation tool targeting Windows systems. It uses API hooking to intercept network traffic and encryption-related functions from a low-privileged user, so it can capture both plain-text traffic and encrypted traffic before encryption/after decryption. NetRipper was released at Defcon 23, Las Vegas, Nevada.

     Abstract
     The post-exploitation phase of a penetration test can be challenging when the tester only has low privileges on a fully patched, well-configured Windows machine. This work presents a technique that helps the tester find useful information by sniffing the network traffic of applications on the compromised machine, despite those low privileges. Furthermore, encrypted traffic is captured before it is handed to the encryption layer, so all traffic (clear-text and encrypted) can be sniffed. The technique is implemented in a tool called NetRipper, which uses API hooking to perform the actions above. It was designed primarily for penetration tests, but the concept can also be used to monitor employees' network traffic or to analyze a malicious application. A rough Python sketch of the capture-before-encryption idea follows this entry.

     https://github.com/NytroRST
    1 point
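
A rough Python sketch of NetRipper's core idea, capturing traffic before it is encrypted, for readers who want to see the concept in code. This is not NetRipper's actual implementation (NetRipper injects a DLL and hooks Windows APIs from native code); it is only an in-process analogue that monkey-patches ssl.SSLSocket.sendall so the plaintext of outgoing HTTPS requests is logged before the TLS layer encrypts it. It assumes the standard-library ssl backend and that requests is installed; the example.com URL is just a placeholder.

import ssl

import requests  # assumption: installed, and using the stdlib ssl.SSLSocket

# Keep a reference to the real send routine so traffic is still forwarded.
_orig_sendall = ssl.SSLSocket.sendall

def _logging_sendall(self, data, *args, **kwargs):
    # Log a prefix of the plaintext before it reaches the encryption layer.
    print("[hook] %d plaintext bytes: %r" % (len(data), bytes(data)[:80]))
    return _orig_sendall(self, data, *args, **kwargs)

# "Hook" the TLS send path for every SSLSocket created in this process.
ssl.SSLSocket.sendall = _logging_sendall

# Any HTTPS request made from this process now passes through the hook,
# so the HTTP request line and headers are printed in clear text.
requests.get("https://example.com/")
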
  4. OLX.ro scraper (screenshot: http://a.pomf.se/pjmwvx.png): gets the seller name, phone number, Yahoo! and Skype IDs from listings, where available. A short design note on main() follows this entry.

"""
OLX.ro scraper
Gets name, phone no., Yahoo! & Skype addresses, where applicable
http://a.pomf.se/pjmwvx.png
"""
import re
import json

import requests
from bs4 import BeautifulSoup as b

pages = 1  # How many pages should be scraped

# Category URL, a.k.a. where to get the ads from
catURL = "http://olx.ro/electronice-si-electrocasnice/laptop-calculator/"

# Links to the Ajax requests
ajaxNum = "http://olx.ro/ajax/misc/contact/phone/"
ajaxYah = "http://olx.ro/ajax/misc/contact/communicator/"
ajaxSky = "http://olx.ro/ajax/misc/contact/skype/"


def getName(link):
    # Get the seller name from the ad page
    page = requests.get(link)
    soup = b(page.text)
    match = soup.find(attrs={"class": "block color-5 brkword xx-large"})
    name = re.search(">(.+)<", str(match)).group(1)
    return name


def getPhoneNum(aID):
    # Get the phone number
    resp = requests.get("%s%s/" % (ajaxNum, aID)).text
    try:
        resp = json.loads(resp).get("value")
    except ValueError:
        return  # No phone number
    if "span" in resp:  # Multiple phone numbers
        nums = b(resp).find_all(text=True)
        for num in nums:
            if num != " ":
                return num
    else:
        return resp


def getYahoo(aID):
    # Get the Yahoo! ID
    resp = requests.get("%s%s/" % (ajaxYah, aID)).text
    try:
        resp = json.loads(resp).get("value")
    except ValueError:
        return  # No Yahoo! ID
    else:
        return resp


def getSkype(aID):
    # Get the Skype ID
    resp = requests.get("%s%s/" % (ajaxSky, aID)).text
    try:
        resp = json.loads(resp).get("value")
    except ValueError:
        return  # No Skype ID
    else:
        return resp


def main():
    for pageNum in range(1, pages + 1):
        print("Page %d." % pageNum)
        page = requests.get(catURL + "?page=" + str(pageNum))
        soup = b(page.text)
        links = soup.findAll(attrs={"class": "marginright5 link linkWithHash "
                                             "detailsLink"})
        for a in links:
            aID = re.search(r'ID(.+)\.', a['href']).group(1)
            print("ID: %s" % aID)
            print("\tName: %s" % getName(a['href']))
            if getPhoneNum(aID) is not None:
                print("\tPhone: %s" % getPhoneNum(aID))
            if getYahoo(aID) is not None:
                print("\tYahoo: %s" % getYahoo(aID))
            if getSkype(aID) is not None:
                print("\tSkype: %s" % getSkype(aID))


if __name__ == "__main__":
    main()

     Tocmai scraper: https://rstforums.com/forum/98245-tocmai-ro-scraper-nume-oras-numar-telefon.rst
    1 point
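
One design note on main() in the scraper above: each getter is called twice per ad, once for the None check and once for printing, so every contact field costs two Ajax requests. A minimal sketch of a helper that reuses the getters defined above and hits each endpoint only once (the name print_contacts is my own, not part of the original script):

def print_contacts(aID):
    # Call each Ajax endpoint once per ad and reuse the result, instead of
    # calling the same getter twice for the check and for the print.
    for label, getter in (("Phone", getPhoneNum),
                          ("Yahoo", getYahoo),
                          ("Skype", getSkype)):
        value = getter(aID)
        if value is not None:
            print("\t%s: %s" % (label, value))

Inside the "for a in links" loop, the three if blocks then collapse into a single print_contacts(aID) call.
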
  5. Start yourself a blog, man: https://rstforums.com/forum/blogs/ and write there until you drop. There's no need to pester everyone here with it.
    1 point