pr00f
Active Members - Posts: 1207 - Days Won: 11
Everything posted by pr00f
-
(video) Learning Python Web Penetration Testing
pr00f replied to QuoVadis's topic in Video tutorials
Reading material https://eev.ee/blog/2016/07/31/python-faq-why-should-i-use-python-3/ -
I took advantage of one of the deals above to grab a .shop domain (domain hoarding ftw) with a single query on their site. Since then (two days), I've received the following (three) emails:

From: Domain Notice <final-notice@mailontheway.ml>
Subj: <domain>.shop, might be temporarily De-Activated for security reasons. <name>: This is your Final-Notice of Domain Listing !
Body: [...] This Notice for: will expire at 11:59PM EST, 23 - December - 2016 Act now! [...]

From: Sales Team <Info@archive12.pw>
Subj: <name> - Get Wordpress Website for <domain>.shop @149
Body: [...] We are offering website at an affordable cost under USD149. [...]

From: Go-Daddy <allen@logooinflux.ml>
Subj: <name>, Urgent Reminder: Activate Your Logo Coupon for <domain>.shop
Body: [...] Activate Your Coupon Now to Get Your Logo for Only $29.96 [...]

They're scumbags who spam around aimlessly and hand over/sell your info to every illiterate idiot running free-TLD domains. Conclusion/tl;dr: screw Moniker, they sell your info to third parties.
-
Well, WhatsApp is encrypted, nobody can find out what you're writing :^)
-
Even though they can be found separately, I made an archive with everything from the Humble Bundle. https://www.humblebundle.com/books/unix-book-bundle https://mega.nz/#!owdyTISD!d2af-rcMdtnUdsyiFYC29WhUAlr1x7qtCbDjU1BoRHk
-
The hype was real, while the incident was not. http://www.acunetix.com/official-statement-alleged-acunetix-website-defacement-incident/
-
Another day, another Data Breach! Now, Russia's biggest social networking site VK.com is the latest in the line of historical data breaches targeting social networking websites.

The same hacker who previously sold data dumps from MySpace, Tumblr, LinkedIn, and Fling.com, is now selling more than 100 Million VK.com records for just 1 Bitcoin (approx. US$580).

The database contains information like full names (first names and last names), email addresses, plain-text passwords, location information, phone numbers and, in some cases, secondary email addresses. Yes, plain-text passwords. According to Peace, the passwords were already in plain text when the VK.com was hacked. So, if the site still stores passwords in cleartext today, this could be a real security risk for its users.

The hacker, named Peace (or Peace_of_mind), is selling the dataset -- which is over 17 gigabytes in size -- on The Real Deal dark web marketplace for a mere 1 Bitcoin.

Source: https://thehackernews.com/2016/06/vk-com-data-breach.html
-
http://exploitgate.com/acunetixs-website-got-hacked-croatian-hackers/
-
https://twitter.com/micahflee/status/717433872560992256
-
Bruteforcing WordPress is trivial - https://github.com/vlad-s/random/blob/master/scripts/wp_brute.py
Sorting the emails and phone numbers is trivially done (`sort file | uniq`).
Geolocating an IP address was discussed earlier - https://rstforums.com/forum/topic/101273-cumpar-script-dns/?do=findComment&comment=630929
Extracting URLs from text files is simple, either regex-based (for plain text) or by implementing something that pulls out <a> tags, which is very easy - https://github.com/vlad-s/random/blob/master/scrapers/url_scraper.php
As for the rest (links from Google), there are APIs that only need a minimum of knowledge to implement. I know money is tight and the ransomware scheme doesn't fly around here, but don't sell cucumbers to the gardener.
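For the regex-based URL extraction, a minimal Python sketch (the pattern and the command-line wrapper are my own rough approximation, not the linked url_scraper.php):

import re
import sys

# Rough http(s) URL pattern; enough for pulling links out of plain text,
# not a full RFC 3986 validator.
URL_RE = re.compile(r'https?://[^\s\'"<>]+')

def extract_urls(path):
    """Yield every http(s) URL found in the given text file."""
    with open(path, errors='ignore') as handle:
        for line in handle:
            for url in URL_RE.findall(line):
                yield url

if __name__ == '__main__':
    for url in extract_urls(sys.argv[1]):
        print(url)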
-
read() or readlines() is a no-no/bad habit with large files, since they keep all the data in memory. You get away easily with `for line in file`. http://i.imgur.com/UAcyqm1.png
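A minimal illustration of the difference (the file name and the 'ERROR' filter are made up for the example):

import os

PATH = 'huge.log'  # hypothetical large file, purely for illustration

def count_matches_bad(path):
    # readlines() materialises every line in memory at once:
    # fine for small files, painful for multi-gigabyte ones.
    with open(path) as handle:
        return sum('ERROR' in line for line in handle.readlines())

def count_matches_good(path):
    # Iterating the file object reads one line at a time,
    # so memory use stays flat no matter how big the file is.
    with open(path) as handle:
        return sum('ERROR' in line for line in handle)

if __name__ == '__main__':
    if os.path.exists(PATH):
        print(count_matches_good(PATH))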
-
https://rstforums.com/forum/topic/98489-wospi-world-wide-web-word-crawler-for-generating-wordlists/ Feel free to fork :).
-
Source: https://sysdig.com/blog/fishing-for-hackers/

$ sysdig -r trace.scap.gz -A -c echo_fds fd.filename=.sloboz.pdf
------ Write 3.89KB to /run/shm/.sloboz.pdf

#!/usr/bin/perl
####################################################################################################################
####################################################################################################################
## Undernet Perl IrcBot v1.02012 bY DeBiL @RST Security Team  ## [ Help ] ##########################################
## Stealth MultiFunctional IrcBot Writen in Perl              ######################################################
## Teste on every system with PERL instlled                   ## !u @system   ##
##                                                            ## !u @version  ##
## This is a free program used on your own risk.              ## !u @channel  ##
## Created for educational purpose only.                      ## !u @flood    ##
## I'm not responsible for the illegal use of this program.   ## !u @utils    ##
####################################################################################################################
## [ Channel ] #################### [ Flood ] ################################## [ Utils ] #########################
####################################################################################################################
## !u !join <#channel>       ## !u @udp1 <ip> <port> <time>                ## !su @conback <ip> <port>       ##
## !u !part <#channel>       ## !u @udp2 <ip> <packet size> <time>         ## !u @downlod <url+path> <file>  ##
## !u !uejoin <#channel>     ## !u @udp3 <ip> <port> <time>                ## !u @portscan <ip>              ##
## !u !op <channel> <nick>   ## !u @tcp <ip> <port> <packet size> <time>   ## !u @mail <subject> <sender>    ##
## !u !deop <channel> <nick> ## !u @http <site> <time>                     ##    <recipient> <message>       ##
...
-
grabbit.py
Python script for grabbing email or IP addresses (optionally with port) from a given file.

Installation
Clone the GitHub repo:
git clone https://github.com/vlad-s/grabbit

Usage

""" grabbit.py grabs email/ip(:port) strings from a given file """
from __future__ import print_function  # pylint needs this for py3k
from socket import inet_aton  # non regex ip validation
from os import access, R_OK  # file access validation
from sys import stdout  # write to stdout if no file specified
import re
import argparse

__author__ = "Vlad <vlad at vlads dot me>"
__version__ = "0.1"
__license__ = "GPL v3"
__description__ = "python script for grabbing email or ip addresses \
(optional with port) from a given file."

PARSER = argparse.ArgumentParser(description=__description__)
GROUP = PARSER.add_mutually_exclusive_group()
GROUP.add_argument('--email', help='match an email address', action='store_true')
GROUP.add_argument('--ip', help='match an ip address', action='store_true')
GROUP.add_argument('--ip-port', help='match an ip:port', action='store_true')
PARSER.add_argument('-s', '--separator', help='separator used when data is \
column separated using one or more characters')
PARSER.add_argument('-w', '--write', help='file to write in (default stdout)')
PARSER.add_argument('file', help='the file to look in')
ARGS = PARSER.parse_args()

if not (ARGS.email or ARGS.ip or ARGS.ip_port):
    print("You have to select an option.")
    exit(1)

if not access(ARGS.file, R_OK):
    print("Can't open the file, exiting.")
    exit(1)

if ARGS.write is not None:
    try:
        OUT = open(ARGS.write, 'w')
    except OSError:
        print("Can't write to file, permission error, exiting.")
        exit(1)
else:
    OUT = stdout

if ARGS.separator is not None:
    SEP = ARGS.separator.encode('utf-8').decode('unicode_escape')
else:
    SEP = None

VALIDMAIL = re.compile(r'^[^@ ]+@[^@]+\.[^@]+$')


def is_valid_ip(ip_address):
    """ Returns the validity of an IP address """
    if not re.match(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', ip_address):
        return False  # first we need a valid ip form
    try:
        inet_aton(ip_address)  # check if it's a valid ip address
    except OSError:
        return False
    return True


for line in open(ARGS.file, 'rb'):
    # split on the given separator, or on whitespace when none was given
    line = line.strip().split(SEP.encode('utf-8') if SEP else None)
    if ARGS.email:
        found = [OUT.write(s.decode('utf-8') + '\n') for i, s in enumerate(line)
                 if VALIDMAIL.match(s.decode('utf-8'))]
        OUT.flush()
    else:
        for string in line:
            string = string.decode('utf-8')
            if ARGS.ip_port and len(string.split(':')) == 2:  # IP:Port
                ip, port = string.split(':')
                if is_valid_ip(ip) and 0 < int(port) < 65535:
                    OUT.write('{}:{}\n'.format(ip, port))
                    OUT.flush()
            elif ARGS.ip:
                if is_valid_ip(string):
                    OUT.write(string + '\n')
                    OUT.flush()

Source (w/ shameful advertising): https://github.com/vlad-s/grabbit
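A hypothetical invocation, assuming a colon-separated dump file (both file names are made up for the example):

$ python grabbit.py --email -s ':' -w emails.txt dump.txt
$ python grabbit.py --ip-port dump.txt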
-
PrestaShop would do the job, but only if you go with the click-here, click-there approach. I haven't worked with it very much, but as far as the logical structure goes, the code is ugly as hell.
-
Python 3 https://docs.python.org/3.3/reference/lexical_analysis.html#string-literals Python 2 https://docs.python.org/2/reference/lexical_analysis.html#string-literals
-
some distro with XFCE
-
Because you're not even capable of putting in a single echo. $_SERVER["SERVER_ADDR"] is the server's IP address, i.e. an entry in $_SERVER. Things that were already explained to you earlier by me and Byte.
-
I needed a crawler that generates wordlists based on the content of a page/website, and all I found was cewl, but it's written in Ruby and I don't like the idea of gems (I'm more of a jam guy), so I decided to write one myself. Maybe it's not as good/featureful as cewl (or others?), but it works, and it works well. It requires requests and BeautifulSoup. Source, as well as information about setting it up: https://github.com/vlad-s/wospi

""" wospi 0.1
word spider whose sole purpose is to crawl for strings and generate a wordlist """

__author__ = "Vlad <vlad at vlads dot me>"
__version__ = "0.1"
__license__ = "GPL v2"

# pylint: disable=import-error
# pylint can't find BeautifulSoup (installed with pip)
import requests
import argparse
from threading import Thread
from bs4 import BeautifulSoup


class WordSpider(object):
    """ Main class """

    def __init__(self, output, url):
        self.min_length = 4
        self.user_agent = "wospi (v0.1) word spiderbro"
        self.with_strip = False
        self.output = output
        self.url = url
        self.data_dict = {"words": [], "urls": [], "strip": ".,\"'"}
        try:
            self.outfile = open(self.output, "w")
        except IOError:
            print "Can't write the file. Do you have write access?"
            exit(1)

    def url_magic(self, url, depth):
        """ Do the URL boogie all night long """
        domain = self.url.split("/")[0] + "//" + self.url.split("/")[2]
        if url.startswith("/"):
            crawl_url = domain + url
        elif url.startswith(domain):
            crawl_url = url
        else:
            return
        if crawl_url not in self.data_dict.get("urls"):
            self.data_dict.get("urls").append(crawl_url)
            link_worker = Thread(target=self.request,
                                 args=(crawl_url, int(depth) - 1))
            link_worker.start()

    def request(self, url, depth):
        """ Do request, get content, spread the word """
        if depth < 0:
            exit(1)
        if url.startswith("/"):
            url_split = url.split("/")
            url = url_split[0] + "//" + url_split[2]
        print "[+] URL: %s" % url
        headers = {"user-agent": self.user_agent}
        try:
            req = requests.get(url, headers=headers, timeout=3)
        except requests.ConnectionError:
            print "[+] Connection error, returning."
            return
        except requests.HTTPError:
            print "[+] Invalid HTTP response, returning."
            return
        except requests.Timeout:
            print "[+] Request timed out, returning."
            return
        except requests.TooManyRedirects:
            print "[+] Too many redirections, returning."
            return
        if "text/html" not in req.headers.get("content-type"):
            print "[+] Content type is not text/html, returning."
            return
        soup = BeautifulSoup(req.text, "html.parser")
        for invalid_tags in soup(["script", "iframe", "style"]):
            invalid_tags.extract()
        for link in soup.find_all("a"):
            if not isinstance(link.get("href"), type(None)):
                self.url_magic(link.get("href"), depth)
        data_worker = Thread(target=self.parse_data, args=(soup.get_text(), ))
        data_worker.start()

    def parse_data(self, data):
        """ Parse the data after request """
        data = data.replace("\r\n", " ").replace("\n", " ").split()
        for word in data:
            word = word.encode("utf-8")
            if word not in self.data_dict.get("words"):
                if len(word) >= self.min_length:
                    if self.with_strip == True:
                        stripped = word
                        for char in self.data_dict.get("strip"):
                            stripped = stripped.strip(char)
                    self.data_dict.get("words").append(word)
                    self.outfile.write(word + "\n")
                    if self.with_strip == True and stripped != word:
                        self.data_dict.get("words").append(stripped)
                        self.outfile.write(stripped + "\n")

    def run(self, depth=0):
        """ Run, scraper, run! """
        self.request(self.url, depth)


if __name__ == "__main__":
    PARSER = argparse.ArgumentParser(description="word scraper/wordlist generator")
    PARSER.add_argument("--min-length", type=int, default=4,
                        help="minimum word length, defaults to 4")
    PARSER.add_argument("--user-agent", help="user agent to use on requests")
    PARSER.add_argument("--with-strip", action="store_true",
                        help="also store the stripped word")
    PARSER.add_argument("--write", "-w", required=True, dest="file",
                        help="file to write the content in")
    PARSER.add_argument("--depth", default=0,
                        help="crawling depth, defaults to 0")
    PARSER.add_argument("url", type=str, help="url to scrape")
    ARGS = PARSER.parse_args()
    SCRAPER = WordSpider(ARGS.file, ARGS.url)
    if ARGS.min_length is not None:
        SCRAPER.min_length = ARGS.min_length
    if ARGS.user_agent is not None:
        SCRAPER.user_agent = ARGS.user_agent
    if ARGS.with_strip == True:
        SCRAPER.with_strip = True
    SCRAPER.run(ARGS.depth)
-
$ php -r "var_dump(\$_server);"
PHP Notice: Undefined variable: _server in Command line code on line 1
NULL
$ php -r "var_dump(\$_SERVER);" | head -1
array(47) {

Notice the difference?
-
var_dump($_SERVER);
-
Kali, Debian, or Ubuntu?
pr00f replied to theandruala's topic in Operating systems and hardware discussions
I linked the wiki, it's godlike compared to the rest. -
Kali, Debian, or Ubuntu?
pr00f replied to theandruala's topic in Operating systems and hardware discussions
At the same time it's much more straightforward than Debian in some cases, and it comes with very useful stuff preinstalled, like Xorg, XFCE, even KDE for eye candy. Bonus: https://wiki.archlinux.org/ -
Kali, Debian, or Ubuntu?
pr00f replied to theandruala's topic in Operating systems and hardware discussions
Slackware. It's more stable than Debian stable. -
^ it's more BSD than Linux. Rice, rice, baby