Everything posted by sleed

1. @ For memory dumps: https://zeltser.com/memory-acquisition-with-dumpit-for-dfir-2/
  2. https://github.com/FabioBaroni/awesome-exploit-development https://github.com/wtsxDev/reverse-engineering https://www.vulnhub.com/ https://ctflearn.com/index.php https://hackinparis.com/archives/
3. I know they filmed in the Dacia area of Timisoara. But @Zatarra is right, it's about skiddies. I recommend an interesting film:
  4. =)))))))))))))))))) @QuoVadis +1
5. This is probably why he said those things: http://www.mlive.com/news/us-world/index.ssf/2017/02/paypal_increasing_several_fees.html PayPal has raised its fees. Are they a little desperate?
6. Spectre Example Code: https://gist.githubusercontent.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9e3d4bb6/raw/41bf9bd0e7577fe3d7b822bbae1fec2e818dcdd6/spectre.c Note: the gist's line #define CACHE_HIT_THRESHOLD(80) does not compile as-is; to avoid the error, put a space between CACHE_HIT_THRESHOLD and (80).
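For reference, the reason the space matters: without it, C treats "(80)" as a macro parameter list, and "80" is not a valid parameter name, so compilation fails. With the space it is an ordinary object-like macro:

  /* Broken: function-like macro with "80" as its parameter list - does not compile: */
  /* #define CACHE_HIT_THRESHOLD(80) */

  /* Fixed: object-like macro that simply expands to (80). */
  #define CACHE_HIT_THRESHOLD (80)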
7. Security Advisories & Responses

Title: CPU Side-Channel Information Disclosure Vulnerabilities
URL: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180104-cpusidechannel
Description: On January 3, 2018 researchers disclosed three vulnerabilities that take advantage of the implementation of speculative execution of instructions on many modern microprocessor architectures to perform side-channel information disclosure attacks. These vulnerabilities could allow an unprivileged local attacker, in specific circumstances, to read privileged memory belonging to other processes or memory allocated to the operating system kernel. Cisco will release software updates that address this vulnerability.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Cisco Security Advisory: CPU Side-Channel Information Disclosure Vulnerabilities

Advisory ID: cisco-sa-20180104-cpusidechannel
Revision: 1.0
For Public Release: 2018 January 4 22:20 GMT
Last Updated: 2018 January 4 22:20 GMT
CVE ID(s): CVE-2017-5715, CVE-2017-5753, CVE-2017-5754

+---------------------------------------------------------------------

Summary
=======

On January 3, 2018 researchers disclosed three vulnerabilities that take advantage of the implementation of speculative execution of instructions on many modern microprocessor architectures to perform side-channel information disclosure attacks. These vulnerabilities could allow an unprivileged local attacker, in specific circumstances, to read privileged memory belonging to other processes or memory allocated to the operating system kernel.

The first two vulnerabilities, CVE-2017-5753 and CVE-2017-5715, are collectively known as Spectre; the third vulnerability, CVE-2017-5754, is known as Meltdown. The vulnerabilities are all variants of the same attack and differ in the way speculative execution is exploited.

In order to exploit any of these vulnerabilities, an attacker must be able to run crafted code on an affected device. Although the underlying CPU and OS combination in a product may be affected by these vulnerabilities, the majority of Cisco products are closed systems that do not allow customers to run custom code on the device and are therefore not vulnerable; there is no vector to exploit them. Only Cisco devices that allow the customer to execute customized code side-by-side with the Cisco code on the same microprocessor are considered vulnerable.

A Cisco product that may be deployed as a virtual machine or a container, even while not being directly affected by any of these vulnerabilities, could be targeted by such attacks if the hosting environment is vulnerable. Cisco recommends that customers harden their virtual environments and ensure that all security updates are installed.

Cisco will release software updates that address this vulnerability. This advisory is available at the following link: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180104-cpusidechannel
8. Reading privileged memory with a side-channel

Posted by Jann Horn, Project Zero

We have discovered that CPU data cache timing can be abused to efficiently leak information out of mis-speculated execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts. Variants of this issue are known to affect many modern processors, including certain processors by Intel, AMD and ARM. For a few Intel and AMD CPU models, we have exploits that work against real software. We reported this issue to Intel, AMD and ARM on 2017-06-01 [1].

So far, there are three known variants of the issue:

Variant 1: bounds check bypass (CVE-2017-5753)
Variant 2: branch target injection (CVE-2017-5715)
Variant 3: rogue data cache load (CVE-2017-5754)

Before the issues described here were publicly disclosed, Daniel Gruss, Moritz Lipp, Yuval Yarom, Paul Kocher, Daniel Genkin, Michael Schwarz, Mike Hamburg, Stefan Mangard, Thomas Prescher and Werner Haas also reported them; their [writeups/blogposts/paper drafts] are at:

Spectre (variants 1 and 2)
Meltdown (variant 3)

During the course of our research, we developed the following proofs of concept (PoCs):

A PoC that demonstrates the basic principles behind variant 1 in userspace on the tested Intel Haswell Xeon CPU, the AMD FX CPU, the AMD PRO CPU and an ARM Cortex A57 [2]. This PoC only tests for the ability to read data inside mis-speculated execution within the same process, without crossing any privilege boundaries.

A PoC for variant 1 that, when running with normal user privileges under a modern Linux kernel with a distro-standard config, can perform arbitrary reads in a 4GiB range [3] in kernel virtual memory on the Intel Haswell Xeon CPU. If the kernel's BPF JIT is enabled (non-default configuration), it also works on the AMD PRO CPU. On the Intel Haswell Xeon CPU, kernel virtual memory can be read at a rate of around 2000 bytes per second after around 4 seconds of startup time. [4]

A PoC for variant 2 that, when running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific (now outdated) version of Debian's distro kernel [5] running on the host, can read host kernel memory at a rate of around 1500 bytes/second, with room for optimization. Before the attack can be performed, some initialization has to be performed that takes roughly between 10 and 30 minutes for a machine with 64GiB of RAM; the needed time should scale roughly linearly with the amount of host RAM. (If 2MB hugepages are available to the guest, the initialization should be much faster, but that hasn't been tested.)

A PoC for variant 3 that, when running with normal user privileges, can read kernel memory on the Intel Haswell Xeon CPU under some precondition. We believe that this precondition is that the targeted kernel memory is present in the L1D cache.

For interesting resources around this topic, look down into the "Literature" section.

A warning regarding explanations about processor internals in this blogpost: This blogpost contains a lot of speculation about hardware internals based on observed behavior, which might not necessarily correspond to what processors are actually doing.

We have some ideas on possible mitigations and provided some of those ideas to the processor vendors; however, we believe that the processor vendors are in a much better position than we are to design and evaluate mitigations, and we expect them to be the source of authoritative guidance.
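All of these PoCs ultimately read out leaked bits by timing data cache accesses: a load of a cache line that was touched during mis-speculated execution is measurably faster than a load of a line that was not. As a rough illustration of that primitive - my own minimal sketch, not code from the post, assuming an x86 CPU and a compiler that provides x86intrin.h; a real PoC would add serialization and careful calibration:

  #include <stdint.h>
  #include <stdio.h>
  #include <x86intrin.h>

  /* Time a single load in TSC cycles. */
  static uint64_t access_time(volatile uint8_t *p) {
      unsigned int aux;
      uint64_t t0 = __rdtscp(&aux);
      (void)*p;                      /* the load being timed */
      uint64_t t1 = __rdtscp(&aux);
      return t1 - t0;
  }

  int main(void) {
      static uint8_t probe[4096];
      probe[0] = 1;                  /* make sure the page is mapped */
      _mm_clflush(&probe[0]);        /* evict: the next load is a cache miss */
      uint64_t miss = access_time(&probe[0]);
      uint64_t hit  = access_time(&probe[0]);  /* now cached: much faster */
      printf("miss: %llu cycles, hit: %llu cycles\n",
             (unsigned long long)miss, (unsigned long long)hit);
      return 0;
  }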
The PoC code and the writeups that we sent to the CPU vendors will be made available at a later date.

Tested Processors

Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz (called "Intel Haswell Xeon CPU" in the rest of this document)
AMD FX(tm)-8320 Eight-Core Processor (called "AMD FX CPU" in the rest of this document)
AMD PRO A8-9600 R7, 10 COMPUTE CORES 4C+6G (called "AMD PRO CPU" in the rest of this document)
An ARM Cortex A57 core of a Google Nexus 5x phone [6] (called "ARM Cortex A57" in the rest of this document)

Glossary

retire: An instruction retires when its results, e.g. register writes and memory writes, are committed and made visible to the rest of the system. Instructions can be executed out of order, but must always retire in order.

logical processor core: A logical processor core is what the operating system sees as a processor core. With hyperthreading enabled, the number of logical cores is a multiple of the number of physical cores.

cached/uncached data: In this blogpost, "uncached" data is data that is only present in main memory, not in any of the cache levels of the CPU. Loading uncached data will typically take over 100 cycles of CPU time.

speculative execution: A processor can execute past a branch without knowing whether it will be taken or where its target is, therefore executing instructions before it is known whether they should be executed. If this speculation turns out to have been incorrect, the CPU can discard the resulting state without architectural effects and continue execution on the correct execution path. Instructions do not retire before it is known that they are on the correct execution path.

mis-speculation window: The time window during which the CPU speculatively executes the wrong code and has not yet detected that mis-speculation has occurred.

Variant 1: Bounds check bypass

This section explains the common theory behind all three variants and the theory behind our PoC for variant 1 that, when running in userspace under a Debian distro kernel, can perform arbitrary reads in a 4GiB region of kernel memory in at least the following configurations:

Intel Haswell Xeon CPU, eBPF JIT is off (default state)
Intel Haswell Xeon CPU, eBPF JIT is on (non-default state)
AMD PRO CPU, eBPF JIT is on (non-default state)

The state of the eBPF JIT can be toggled using the net.core.bpf_jit_enable sysctl.

Theoretical explanation

The Intel Optimization Reference Manual says the following regarding Sandy Bridge (and later microarchitectural revisions) in section 2.3.2.3 ("Branch Prediction"):

Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch true execution path is known.

In section 2.3.5.2 ("L1 DCache"):

Loads can: [...] Be carried out speculatively, before preceding branches are resolved. Take cache misses out of order and in an overlapped manner.

Intel's Software Developer's Manual [7] states in Volume 3A, section 11.7 ("Implicit Caching (Pentium 4, Intel Xeon, and P6 Family Processors)"):

Implicit caching occurs when a memory element is made potentially cacheable, although the element may never have been accessed in the normal von Neumann sequence. Implicit caching occurs on the P6 and more recent processor families due to aggressive prefetching, branch prediction, and TLB miss handling.
Implicit caching is an extension of the behavior of existing Intel386, Intel486, and Pentium processor systems, since software running on these processor families also has not been able to deterministically predict the behavior of instruction prefetch.

Consider the code sample below. If arr1->length is uncached, the processor can speculatively load data from arr1->data[untrusted_offset_from_caller]. This is an out-of-bounds read. That should not matter because the processor will effectively roll back the execution state when the branch has executed; none of the speculatively executed instructions will retire (e.g. cause registers etc. to be affected).

  struct array {
      unsigned long length;
      unsigned char data[];
  };
  struct array *arr1 = ...;
  unsigned long untrusted_offset_from_caller = ...;
  if (untrusted_offset_from_caller < arr1->length) {
      unsigned char value = arr1->data[untrusted_offset_from_caller];
      ...
  }

However, in the following code sample, there's an issue. If arr1->length, arr2->data[0x200] and arr2->data[0x300] are not cached, but all other accessed data is, and the branch conditions are predicted as true, the processor can do the following speculatively before arr1->length has been loaded and the execution is re-steered:

load value = arr1->data[untrusted_offset_from_caller]
start a load from a data-dependent offset in arr2->data, loading the corresponding cache line into the L1 cache

  struct array {
      unsigned long length;
      unsigned char data[];
  };
  struct array *arr1 = ...; /* small array */
  struct array *arr2 = ...; /* array of size 0x400 */
  /* >0x400 (OUT OF BOUNDS!) */
  unsigned long untrusted_offset_from_caller = ...;
  if (untrusted_offset_from_caller < arr1->length) {
      unsigned char value = arr1->data[untrusted_offset_from_caller];
      unsigned long index2 = ((value&1)*0x100)+0x200;
      if (index2 < arr2->length) {
          unsigned char value2 = arr2->data[index2];
      }
  }

After the execution has been returned to the non-speculative path because the processor has noticed that untrusted_offset_from_caller is bigger than arr1->length, the cache line containing arr2->data[index2] stays in the L1 cache. By measuring the time required to load arr2->data[0x200] and arr2->data[0x300], an attacker can then determine whether the value of index2 during speculative execution was 0x200 or 0x300 - which discloses whether arr1->data[untrusted_offset_from_caller]&1 is 0 or 1.

To be able to actually use this behavior for an attack, an attacker needs to be able to cause the execution of such a vulnerable code pattern in the targeted context with an out-of-bounds index. For this, the vulnerable code pattern must either be present in existing code, or there must be an interpreter or JIT engine that can be used to generate the vulnerable code pattern. So far, we have not actually identified any existing, exploitable instances of the vulnerable code pattern; the PoC for leaking kernel memory using variant 1 uses the eBPF interpreter or the eBPF JIT engine, which are built into the kernel and accessible to normal users.

A minor variant of this could be to instead use an out-of-bounds read to a function pointer to gain control of execution in the mis-speculated path. We did not investigate this variant further.

Attacking the kernel

This section describes in more detail how variant 1 can be used to leak Linux kernel memory using the eBPF bytecode interpreter and JIT engine.
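For background on the delivery mechanism discussed in this section: eBPF bytecode reaches kernel context by being loaded through the bpf(2) syscall and attached to a socket, and it runs when data is sent through that socket. Here is a minimal, benign sketch of that attach path - my own illustration, not the PoC's code; the filter simply accepts every packet, and it assumes a Linux system recent enough to define SO_ATTACH_BPF:

  #include <linux/bpf.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void) {
      /* Trivial eBPF socket filter: r0 = -1 (accept the packet); exit. */
      struct bpf_insn insns[] = {
          { BPF_ALU64 | BPF_MOV | BPF_K, BPF_REG_0, 0, 0, -1 },
          { BPF_JMP | BPF_EXIT, 0, 0, 0, 0 },
      };
      union bpf_attr attr;
      memset(&attr, 0, sizeof(attr));
      attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
      attr.insns     = (uint64_t)(unsigned long)insns;
      attr.insn_cnt  = sizeof(insns) / sizeof(insns[0]);
      attr.license   = (uint64_t)(unsigned long)"GPL";
      int prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
      if (prog_fd < 0) { perror("BPF_PROG_LOAD"); return 1; }

      int socks[2];
      if (socketpair(AF_UNIX, SOCK_DGRAM, 0, socks) < 0) { perror("socketpair"); return 1; }
      /* Attach the verified program to one end of the socket pair... */
      setsockopt(socks[0], SOL_SOCKET, SO_ATTACH_BPF, &prog_fd, sizeof(prog_fd));
      /* ...and trigger its execution in kernel context by sending data. */
      write(socks[1], "trigger", 7);
      return 0;
  }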
While there are many interesting potential targets for variant 1 attacks, we chose to attack the Linux in-kernel eBPF JIT/interpreter because it provides more control to the attacker than most other JITs.

The Linux kernel supports eBPF since version 3.18. Unprivileged userspace code can supply bytecode to the kernel that is verified by the kernel and then:

either interpreted by an in-kernel bytecode interpreter
or translated to native machine code that also runs in kernel context using a JIT engine (which translates individual bytecode instructions without performing any further optimizations)

Execution of the bytecode can be triggered by attaching the eBPF bytecode to a socket as a filter and then sending data through the other end of the socket.

Whether the JIT engine is enabled depends on a run-time configuration setting - but at least on the tested Intel processor, the attack works independent of that setting.

Unlike classic BPF, eBPF has data types like data arrays and function pointer arrays into which eBPF bytecode can index. Therefore, it is possible to create the code pattern described above in the kernel using eBPF bytecode. eBPF's data arrays are less efficient than its function pointer arrays, so the attack will use the latter where possible.

Both machines on which this was tested have no SMAP, and the PoC relies on that (but it shouldn't be a precondition in principle).

Additionally, at least on the Intel machine on which this was tested, bouncing modified cache lines between cores is slow, apparently because the MESI protocol is used for cache coherence [8]. Changing the reference counter of an eBPF array on one physical CPU core causes the cache line containing the reference counter to be bounced over to that CPU core, making reads of the reference counter on all other CPU cores slow until the changed reference counter has been written back to memory. Because the length and the reference counter of an eBPF array are stored in the same cache line, this also means that changing the reference counter on one physical CPU core causes reads of the eBPF array's length to be slow on other physical CPU cores (intentional false sharing).

The attack uses two eBPF programs. The first one tail-calls through a page-aligned eBPF function pointer array prog_map at a configurable index. In simplified terms, this program is used to determine the address of prog_map by guessing the offset from prog_map to a userspace address and tail-calling through prog_map at the guessed offsets. To cause the branch prediction to predict that the offset is below the length of prog_map, tail calls to an in-bounds index are performed in between. To increase the mis-speculation window, the cache line containing the length of prog_map is bounced to another core. To test whether an offset guess was successful, it can be tested whether the userspace address has been loaded into the cache.

Because such straightforward brute-force guessing of the address would be slow, the following optimization is used: 2^15 adjacent userspace memory mappings [9], each consisting of 2^4 pages, are created at the userspace address user_mapping_area, covering a total area of 2^31 bytes. Each mapping maps the same physical pages, and all mappings are present in the pagetables. This permits the attack to be carried out in steps of 2^31 bytes. For each step, after causing an out-of-bounds access through prog_map, only one cache line each from the first 2^4 pages of user_mapping_area have to be tested for cached memory.
Because the L3 cache is physically indexed, any access to a virtual address mapping a physical page will cause all other virtual addresses mapping the same physical page to become cached as well.

When this attack finds a hit—a cached memory location—the upper 33 bits of the kernel address are known (because they can be derived from the address guess at which the hit occurred), and the low 16 bits of the address are also known (from the offset inside user_mapping_area at which the hit was found). The remaining part of the address of user_mapping_area is the middle. The remaining bits in the middle can be determined by bisecting the remaining address space: Map two physical pages to adjacent ranges of virtual addresses, each virtual address range the size of half of the remaining search space, then determine the remaining address bit-wise.

At this point, a second eBPF program can be used to actually leak data. In pseudocode, this program looks as follows:

  uint64_t bitmask = <runtime-configurable>;
  uint64_t bitshift_selector = <runtime-configurable>;
  uint64_t prog_array_base_offset = <runtime-configurable>;
  uint64_t secret_data_offset = <runtime-configurable>;
  // index will be bounds-checked by the runtime,
  // but the bounds check will be bypassed speculatively
  uint64_t secret_data = bpf_map_read(array=victim_array, index=secret_data_offset);
  // select a single bit, move it to a specific position, and add the base offset
  uint64_t progmap_index = (((secret_data & bitmask) >> bitshift_selector) << 7) + prog_array_base_offset;
  bpf_tail_call(prog_map, progmap_index);

This program reads 8-byte-aligned 64-bit values from an eBPF data array "victim_map" at a runtime-configurable offset and bitmasks and bit-shifts the value so that one bit is mapped to one of two values that are 2^7 bytes apart (sufficient to not land in the same or adjacent cache lines when used as an array index). Finally it adds a 64-bit offset, then uses the resulting value as an offset into prog_map for a tail call.

This program can then be used to leak memory by repeatedly calling the eBPF program with an out-of-bounds offset into victim_map that specifies the data to leak and an out-of-bounds offset into prog_map that causes prog_map + offset to point to a userspace memory area. Misleading the branch prediction and bouncing the cache lines works the same way as for the first eBPF program, except that now, the cache line holding the length of victim_map must also be bounced to another core.

Variant 2: Branch target injection

This section describes the theory behind our PoC for variant 2 that, when running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific version of Debian's distro kernel running on the host, can read host kernel memory at a rate of around 1500 bytes/second.

Basics

Prior research (see the Literature section at the end) has shown that it is possible for code in separate security contexts to influence each other's branch prediction. So far, this has only been used to infer information about where code is located (in other words, to create interference from the victim to the attacker); however, the basic hypothesis of this attack variant is that it can also be used to redirect execution of code in the victim context (in other words, to create interference from the attacker to the victim; the other way around).
The basic idea for the attack is to target victim code that contains an indirect branch whose target address is loaded from memory and flush the cache line containing the target address out to main memory. Then, when the CPU reaches the indirect branch, it won't know the true destination of the jump, and it won't be able to calculate the true destination until it has finished loading the cache line back into the CPU, which takes a few hundred cycles. Therefore, there is a time window of typically over 100 cycles in which the CPU will speculatively execute instructions based on branch prediction.

Haswell branch prediction internals

Some of the internals of the branch prediction implemented by Intel's processors have already been published; however, getting this attack to work properly required significant further experimentation to determine additional details. This section focuses on the branch prediction internals that were experimentally derived from the Intel Haswell Xeon CPU.

Haswell seems to have multiple branch prediction mechanisms that work very differently:

A generic branch predictor that can only store one target per source address; used for all kinds of jumps, like absolute jumps, relative jumps and so on.
A specialized indirect call predictor that can store multiple targets per source address; used for indirect calls.
(There is also a specialized return predictor, according to Intel's optimization manual, but we haven't analyzed that in detail yet. If this predictor could be used to reliably dump out some of the call stack through which a VM was entered, that would be very interesting.)

Generic predictor

The generic branch predictor, as documented in prior research, only uses the lower 31 bits of the address of the last byte of the source instruction for its prediction. If, for example, a branch target buffer (BTB) entry exists for a jump from 0x4141.0004.1000 to 0x4141.0004.5123, the generic predictor will also use it to predict a jump from 0x4242.0004.1000. When the higher bits of the source address differ like this, the higher bits of the predicted destination change together with it—in this case, the predicted destination address will be 0x4242.0004.5123—so apparently this predictor doesn't store the full, absolute destination address.

Before the lower 31 bits of the source address are used to look up a BTB entry, they are folded together using XOR. Specifically, the following bits are folded together:

  bit A        bit B
  0x40.0000    0x2000
  0x80.0000    0x4000
  0x100.0000   0x8000
  0x200.0000   0x1.0000
  0x400.0000   0x2.0000
  0x800.0000   0x4.0000
  0x2000.0000  0x10.0000
  0x4000.0000  0x20.0000

In other words, if a source address is XORed with both numbers in a row of this table, the branch predictor will not be able to distinguish the resulting address from the original source address when performing a lookup. For example, the branch predictor is able to distinguish source addresses 0x100.0000 and 0x180.0000, and it can also distinguish source addresses 0x100.0000 and 0x180.8000, but it can't distinguish source addresses 0x100.0000 and 0x140.2000 or source addresses 0x100.0000 and 0x180.4000. In the following, this will be referred to as aliased source addresses.

When an aliased source address is used, the branch predictor will still predict the same target as for the unaliased source address. This indicates that the branch predictor stores a truncated absolute destination address, but that hasn't been verified.
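To make the aliasing rule concrete: folding each "bit A" into its "bit B" yields a canonical value that is identical exactly for aliased addresses. The helper below is my own illustration derived from the table and the examples above, not code from the post:

  #include <stdint.h>
  #include <stdio.h>

  /* Fold the low 31 bits of a source address; two addresses alias in the
     generic predictor (per the table above) iff their folded values match. */
  static uint32_t fold_source(uint64_t addr) {
      static const uint32_t pairs[][2] = {
          { 0x400000,   0x2000   }, { 0x800000,   0x4000   },
          { 0x1000000,  0x8000   }, { 0x2000000,  0x10000  },
          { 0x4000000,  0x20000  }, { 0x8000000,  0x40000  },
          { 0x20000000, 0x100000 }, { 0x40000000, 0x200000 },
      };
      uint32_t a = (uint32_t)(addr & 0x7fffffff); /* only low 31 bits matter */
      for (int i = 0; i < 8; i++)
          if (a & pairs[i][0]) {   /* clear bit A, toggle bit B */
              a &= ~pairs[i][0];
              a ^= pairs[i][1];
          }
      return a;
  }

  int main(void) {
      /* Matches the examples above: 0x140.2000 aliases 0x100.0000,
         while 0x180.0000 does not. */
      printf("%d\n", fold_source(0x1000000) == fold_source(0x1402000)); /* 1 */
      printf("%d\n", fold_source(0x1000000) == fold_source(0x1800000)); /* 0 */
      return 0;
  }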
Based on observed maximum forward and backward jump distances for different source addresses, the low 32-bit half of the target address could be stored as an absolute 32-bit value with an additional bit that specifies whether the jump from source to target crosses a 2^32 boundary; if the jump crosses such a boundary, bit 31 of the source address determines whether the high half of the instruction pointer should increment or decrement.

Indirect call predictor

The inputs of the BTB lookup for this mechanism seem to be:

The low 12 bits of the address of the source instruction (we are not sure whether it's the address of the first or the last byte) or a subset of them.
The branch history buffer state.

If the indirect call predictor can't resolve a branch, it is resolved by the generic predictor instead. Intel's optimization manual hints at this behavior: "Indirect Calls and Jumps. These may either be predicted as having a monotonic target or as having targets that vary in accordance with recent program behavior."

The branch history buffer (BHB) stores information about the last 29 taken branches - basically a fingerprint of recent control flow - and is used to allow better prediction of indirect calls that can have multiple targets.

The update function of the BHB works as follows (in pseudocode; src is the address of the last byte of the source instruction, dst is the destination address):

  void bhb_update(uint58_t *bhb_state, unsigned long src, unsigned long dst) {
      *bhb_state <<= 2;
      *bhb_state ^= (dst & 0x3f);
      *bhb_state ^= (src & 0xc0) >> 6;
      *bhb_state ^= (src & 0xc00) >> (10 - 2);
      *bhb_state ^= (src & 0xc000) >> (14 - 4);
      *bhb_state ^= (src & 0x30) << (6 - 4);
      *bhb_state ^= (src & 0x300) << (8 - 8);
      *bhb_state ^= (src & 0x3000) >> (12 - 10);
      *bhb_state ^= (src & 0x30000) >> (16 - 12);
      *bhb_state ^= (src & 0xc0000) >> (18 - 14);
  }

Some of the bits of the BHB state seem to be folded together further using XOR when used for a BTB access, but the precise folding function hasn't been understood yet.

The BHB is interesting for two reasons. First, knowledge about its approximate behavior is required in order to be able to accurately cause collisions in the indirect call predictor. But it also permits dumping out the BHB state at any repeatable program state at which the attacker can execute code - for example, when attacking a hypervisor, directly after a hypercall. The dumped BHB state can then be used to fingerprint the hypervisor or, if the attacker has access to the hypervisor binary, to determine the low 20 bits of the hypervisor load address (in the case of KVM: the low 20 bits of the load address of kvm-intel.ko).

Reverse-Engineering Branch Predictor Internals

This subsection describes how we reverse-engineered the internals of the Haswell branch predictor. Some of this is written down from memory, since we didn't keep a detailed record of what we were doing.

We initially attempted to perform BTB injections into the kernel using the generic predictor, using the knowledge from prior research that the generic predictor only looks at the lower half of the source address and that only a partial target address is stored. This kind of worked - however, the injection success rate was very low, below 1%. (This is the method we used in our preliminary PoCs for method 2 against modified hypervisors running on Haswell.)

We decided to write a userspace test case to be able to more easily test branch predictor behavior in different situations.
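Before describing those experiments, one practical note: the bhb_update() pseudocode above is nearly valid C, so its mixing behavior can be explored directly in userspace. A minimal driver - my own sketch, modeling the uint58_t state as a uint64_t masked to 58 bits, which is an assumption:

  #include <stdint.h>
  #include <stdio.h>

  /* bhb_update() from above, with uint58_t modeled as a masked uint64_t. */
  static void bhb_update(uint64_t *bhb_state, unsigned long src, unsigned long dst) {
      *bhb_state <<= 2;
      *bhb_state ^= (dst & 0x3f);
      *bhb_state ^= (src & 0xc0) >> 6;
      *bhb_state ^= (src & 0xc00) >> (10 - 2);
      *bhb_state ^= (src & 0xc000) >> (14 - 4);
      *bhb_state ^= (src & 0x30) << (6 - 4);
      *bhb_state ^= (src & 0x300) << (8 - 8);
      *bhb_state ^= (src & 0x3000) >> (12 - 10);
      *bhb_state ^= (src & 0x30000) >> (16 - 12);
      *bhb_state ^= (src & 0xc0000) >> (18 - 14);
      *bhb_state &= (1ULL << 58) - 1;   /* keep 58 bits of state */
  }

  int main(void) {
      uint64_t bhb = 0;
      /* Feed a toy sequence of (src, dst) pairs; with a 2-bit shift per
         update, only the last 29 taken branches leave any trace. */
      for (int i = 0; i < 64; i++)
          bhb_update(&bhb, 0x41410000UL + 16 * i, 0x42420000UL + 16 * i);
      printf("BHB state: 0x%016llx\n", (unsigned long long)bhb);
      return 0;
  }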
Based on the assumption that branch predictor state is shared between hyperthreads [10], we wrote a program of which two instances are each pinned to one of the two logical processors running on a specific physical core, where one instance attempts to perform branch injections while the other measures how often branch injections are successful. Both instances were executed with ASLR disabled and had the same code at the same addresses. The injecting process performed indirect calls to a function that accesses a (per-process) test variable; the measuring process performed indirect calls to a function that tests, based on timing, whether the per-process test variable is cached, and then evicts it using CLFLUSH. Both indirect calls were performed through the same callsite. Before each indirect call, the function pointer stored in memory was flushed out to main memory using CLFLUSH to widen the speculation time window. Additionally, because of the reference to "recent program behavior" in Intel's optimization manual, a bunch of conditional branches that are always taken were inserted in front of the indirect call.

In this test, the injection success rate was above 99%, giving us a base setup for future experiments.

We then tried to figure out the details of the prediction scheme. We assumed that the prediction scheme uses a global branch history buffer of some kind. To determine the duration for which branch information stays in the history buffer, a conditional branch that is only taken in one of the two program instances was inserted in front of the series of always-taken conditional jumps, then the number of always-taken conditional jumps (N) was varied. The result was that for N=25, the processor was able to distinguish the branches (misprediction rate under 1%), but for N=26, it failed to do so (misprediction rate over 99%). Therefore, the branch history buffer had to be able to store information about at least the last 26 branches.

The code in one of the two program instances was then moved around in memory. This revealed that only the lower 20 bits of the source and target addresses have an influence on the branch history buffer. Testing with different types of branches in the two program instances revealed that static jumps, taken conditional jumps, calls and returns influence the branch history buffer the same way; non-taken conditional jumps don't influence it; the address of the last byte of the source instruction is the one that counts; IRETQ doesn't influence the history buffer state (which is useful for testing because it permits creating program flow that is invisible to the history buffer).

Moving the last conditional branch before the indirect call around in memory multiple times revealed that the branch history buffer contents can be used to distinguish many different locations of that last conditional branch instruction. This suggests that the history buffer doesn't store a list of small history values; instead, it seems to be a larger buffer in which history data is mixed together. However, a history buffer needs to "forget" about past branches after a certain number of new branches have been taken in order to be useful for branch prediction. Therefore, when new data is mixed into the history buffer, this can not cause information in bits that are already present in the history buffer to propagate downwards - and given that, upwards combination of information probably wouldn't be very useful either.
Given that branch prediction also must be very fast, we concluded that it is likely that the update function of the history buffer left-shifts the old history buffer, then XORs in the new state (see diagram). If this assumption is correct, then the history buffer contains a lot of information about the most recent branches, but only contains as many bits of information as are shifted per history buffer update about the last branch about which it contains any data. Therefore, we tested whether flipping different bits in the source and target addresses of a jump followed by 32 always-taken jumps with static source and target allows the branch prediction to disambiguate an indirect call. [11]

With 32 static jumps in between, no bit flips seemed to have an influence, so we decreased the number of static jumps until a difference was observable. The result with 28 always-taken jumps in between was that bits 0x1 and 0x2 of the target and bits 0x40 and 0x80 of the source had such an influence; but flipping both 0x1 in the target and 0x40 in the source or 0x2 in the target and 0x80 in the source did not permit disambiguation. This shows that the per-insertion shift of the history buffer is 2 bits and shows which data is stored in the least significant bits of the history buffer. We then repeated this with decreased amounts of fixed jumps after the bit-flipped jump to determine which information is stored in the remaining bits.

Reading host memory from a KVM guest

Locating the host kernel

Our PoC locates the host kernel in several steps. The information that is determined and necessary for the next steps of the attack consists of:

lower 20 bits of the address of kvm-intel.ko
full address of kvm.ko
full address of vmlinux

Looking back, this is unnecessarily complicated, but it nicely demonstrates the various techniques an attacker can use. A simpler way would be to first determine the address of vmlinux, then bisect the addresses of kvm.ko and kvm-intel.ko.

In the first step, the address of kvm-intel.ko is leaked. For this purpose, the branch history buffer state after guest entry is dumped out. Then, for every possible value of bits 12..19 of the load address of kvm-intel.ko, the expected lowest 16 bits of the history buffer are computed based on the load address guess and the known offsets of the last 8 branches before guest entry, and the results are compared against the lowest 16 bits of the leaked history buffer state.

The branch history buffer state is leaked in steps of 2 bits by measuring misprediction rates of an indirect call with two targets. One way the indirect call is reached is from a vmcall instruction followed by a series of N branches whose relevant source and target address bits are all zeroes. The second way the indirect call is reached is from a series of controlled branches in userspace that can be used to write arbitrary values into the branch history buffer. Misprediction rates are measured as in the section "Reverse-Engineering Branch Predictor Internals", using one call target that loads a cache line and another one that checks whether the same cache line has been loaded. With N=29, mispredictions will occur at a high rate if the controlled branch history buffer value is zero because all history buffer state from the hypercall has been erased. With N=28, mispredictions will occur if the controlled branch history buffer value is one of 0<<(28*2), 1<<(28*2), 2<<(28*2), 3<<(28*2) - by testing all four possibilities, it can be detected which one is right.
Then, for decreasing values of N, the four possibilities are {0|1|2|3}<<(28*2) | (history_buffer_for(N+1) >> 2). By repeating this for decreasing values for N, the branch history buffer value for N=0 can be determined.

At this point, the low 20 bits of kvm-intel.ko are known; the next step is to roughly locate kvm.ko. For this, the generic branch predictor is used, using data inserted into the BTB by an indirect call from kvm.ko to kvm-intel.ko that happens on every hypercall; this means that the source address of the indirect call has to be leaked out of the BTB.

kvm.ko will probably be located somewhere in the range from 0xffffffffc0000000 to 0xffffffffc4000000, with page alignment (0x1000). This means that the first four entries in the table in the section "Generic Predictor" apply; there will be 2^4-1=15 aliasing addresses for the correct one. But that is also an advantage: It cuts down the search space from 0x4000 to 0x4000/2^4=1024.

To find the right address for the source or one of its aliasing addresses, code that loads data through a specific register is placed at all possible call targets (the leaked low 20 bits of kvm-intel.ko plus the in-module offset of the call target plus a multiple of 2^20) and indirect calls are placed at all possible call sources. Then, alternatingly, hypercalls are performed and indirect calls are performed through the different possible non-aliasing call sources, with randomized history buffer state that prevents the specialized prediction from working. After this step, there are 2^16 remaining possibilities for the load address of kvm.ko.

Next, the load address of vmlinux can be determined in a similar way, using an indirect call from vmlinux to kvm.ko. Luckily, none of the bits which are randomized in the load address of vmlinux are folded together, so unlike when locating kvm.ko, the result will directly be unique. vmlinux has an alignment of 2MiB and a randomization range of 1GiB, so there are still only 512 possible addresses. Because (as far as we know) a simple hypercall won't actually cause indirect calls from vmlinux to kvm.ko, we instead use port I/O from the status register of an emulated serial port, which is present in the default configuration of a virtual machine created with virt-manager.

The only remaining piece of information is which one of the 16 aliasing load addresses of kvm.ko is actually correct. Because the source address of an indirect call to kvm.ko is known, this can be solved using bisection: Place code at the various possible targets that, depending on which instance of the code is speculatively executed, loads one of two cache lines, and measure which one of the cache lines gets loaded.

Identifying cache sets

The PoC assumes that the VM does not have access to hugepages. To discover eviction sets for all L3 cache sets with a specific alignment relative to a 4KiB page boundary, the PoC first allocates 25600 pages of memory. Then, in a loop, it selects random subsets of all remaining unsorted pages such that the expected number of sets for which an eviction set is contained in the subset is 1, reduces each subset down to an eviction set by repeatedly accessing its cache lines and testing whether the cache lines are always cached (in which case they're probably not part of an eviction set) and attempts to use the new eviction set to evict all remaining unsorted cache lines to determine whether they are in the same cache set [12].
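The "reduce and test" step boils down to a timing check: bring a target line into the cache, touch every line in a candidate set, and see whether reloading the target is slow. A minimal sketch of that check - my own illustration, assuming x86 and a placeholder threshold that would need per-machine calibration; building real candidate sets requires the page-allocation dance described above:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <x86intrin.h>

  #define THRESHOLD 120  /* cycles; placeholder, calibrate per machine */

  static uint64_t timed_load(volatile uint8_t *p) {
      unsigned int aux;
      uint64_t t0 = __rdtscp(&aux);
      (void)*p;
      return __rdtscp(&aux) - t0;
  }

  /* Does accessing every line in cand[] evict target from the cache? */
  static int evicts_target(volatile uint8_t *target,
                           volatile uint8_t **cand, int n) {
      (void)*target;                 /* load the target into the cache */
      for (int i = 0; i < n; i++)    /* walk the candidate lines */
          (void)*cand[i];
      return timed_load(target) > THRESHOLD;  /* slow reload => evicted */
  }

  int main(void) {
      /* Toy harness: a few arbitrary pages will usually NOT form an
         eviction set; this only demonstrates the measurement itself. */
      enum { N = 64, PAGE = 4096 };
      volatile uint8_t *cand[N];
      for (int i = 0; i < N; i++) {
          cand[i] = aligned_alloc(PAGE, PAGE);
          cand[i][0] = 1;            /* make sure the page is mapped */
      }
      volatile uint8_t *target = aligned_alloc(PAGE, PAGE);
      target[0] = 1;
      printf("evicts: %d\n", evicts_target(target, cand, N));
      return 0;
  }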
Locating the host-virtual address of a guest page

Because this attack uses a FLUSH+RELOAD approach for leaking data, it needs to know the host-kernel-virtual address of one guest page. Alternative approaches such as PRIME+PROBE should work without that requirement.

The basic idea for this step of the attack is to use a branch target injection attack against the hypervisor to load an attacker-controlled address and test whether that caused the guest-owned page to be loaded. For this, a gadget that simply loads from the memory location specified by R8 can be used - R8-R11 still contain guest-controlled values when the first indirect call after a guest exit is reached on this kernel build.

We expected that an attacker would need to either know which eviction set has to be used at this point or brute-force it simultaneously; however, experimentally, using random eviction sets works, too. Our theory is that the observed behavior is actually the result of L1D and L2 evictions, which might be sufficient to permit a few instructions worth of speculative execution.

The host kernel maps (nearly?) all physical memory in the physmap area, including memory assigned to KVM guests. However, the location of the physmap is randomized (with a 1GiB alignment), in an area of size 128PiB. Therefore, directly bruteforcing the host-virtual address of a guest page would take a long time. It is not necessarily impossible; as a ballpark estimate, it should be possible within a day or so, maybe less, assuming 12000 successful injections per second and 30 guest pages that are tested in parallel; but not as impressive as doing it in a few minutes.

To optimize this, the problem can be split up: First, brute-force the physical address using a gadget that can load from physical addresses, then brute-force the base address of the physmap region. Because the physical address can usually be assumed to be far below 128PiB, it can be brute-forced more efficiently, and brute-forcing the base address of the physmap region afterwards is also easier because then address guesses with 1GiB alignment can be used.

To brute-force the physical address, the following gadget can be used:

  ffffffff810a9def: 4c 89 c0              mov rax,r8
  ffffffff810a9df2: 4d 63 f9              movsxd r15,r9d
  ffffffff810a9df5: 4e 8b 04 fd c0 b3 a6  mov r8,QWORD PTR [r15*8-0x7e594c40]
  ffffffff810a9dfc: 81
  ffffffff810a9dfd: 4a 8d 3c 00           lea rdi,[rax+r8*1]
  ffffffff810a9e01: 4d 8b a4 00 f8 00 00  mov r12,QWORD PTR [r8+rax*1+0xf8]
  ffffffff810a9e08: 00

This gadget permits loading an 8-byte-aligned value from the area around the kernel text section by setting R9 appropriately, which in particular permits loading page_offset_base, the start address of the physmap. Then, the value that was originally in R8 - the physical address guess minus 0xf8 - is added to the result of the previous load, 0xf8 is added to it, and the result is dereferenced.

Cache set selection

To select the correct L3 eviction set, the attack from the following section is essentially executed with different eviction sets until it works.

Leaking data

At this point, it would normally be necessary to locate gadgets in the host kernel code that can be used to actually leak data by reading from an attacker-controlled location, shifting and masking the result appropriately and then using the result of that as offset to an attacker-controlled address for a load. But piecing gadgets together and figuring out which ones work in a speculation context seems annoying.
So instead, we decided to use the eBPF interpreter, which is built into the host kernel - while there is no legitimate way to invoke it from inside a VM, the presence of the code in the host kernel's text section is sufficient to make it usable for the attack, just like with ordinary ROP gadgets.

The eBPF interpreter entry point has the following function signature:

  static unsigned int __bpf_prog_run(void *ctx, const struct bpf_insn *insn)

The second parameter is a pointer to an array of statically pre-verified eBPF instructions to be executed - which means that __bpf_prog_run() will not perform any type checks or bounds checks. The first parameter is simply stored as part of the initial emulated register state, so its value doesn't matter.

The eBPF interpreter provides, among other things:

multiple emulated 64-bit registers
64-bit immediate writes to emulated registers
memory reads from addresses stored in emulated registers
bitwise operations (including bit shifts) and arithmetic operations

To call the interpreter entry point, a gadget that gives RSI and RIP control given R8-R11 control and controlled data at a known memory location is necessary. The following gadget provides this functionality:

  ffffffff81514edd: 4c 89 ce              mov rsi,r9
  ffffffff81514ee0: 41 ff 90 b0 00 00 00  call QWORD PTR [r8+0xb0]

Now, by pointing R8 and R9 at the mapping of a guest-owned page in the physmap, it is possible to speculatively execute arbitrary unvalidated eBPF bytecode in the host kernel. Then, relatively straightforward bytecode can be used to leak data into the cache.

Variant 3: Rogue data cache load

Basically, read Anders Fogh's blogpost: https://cyber.wtf/2017/07/28/negative-result-reading-kernel-memory-from-user-mode/

In summary, an attack using this variant of the issue attempts to read kernel memory from userspace without misdirecting the control flow of kernel code. This works by using the code pattern that was used for the previous variants, but in userspace. The underlying idea is that the permission check for accessing an address might not be on the critical path for reading data from memory to a register, where the permission check could have significant performance impact. Instead, the memory read could make the result of the read available to following instructions immediately and only perform the permission check asynchronously, setting a flag in the reorder buffer that causes an exception to be raised if the permission check fails.

We do have a few additions to make to Anders Fogh's blogpost:

"Imagine the following instruction executed in usermode mov rax,[somekernelmodeaddress] It will cause an interrupt when retired, [...]"

It is also possible to already execute that instruction behind a high-latency mispredicted branch to avoid taking a page fault. This might also widen the speculation window by increasing the delay between the read from a kernel address and delivery of the associated exception.

"First, I call a syscall that touches this memory. Second, I use the prefetcht0 instruction to improve my odds of having the address loaded in L1."

When we used prefetch instructions after doing a syscall, the attack stopped working for us, and we have no clue why. Perhaps the CPU somehow stores whether access was denied on the last access and prevents the attack from working if that is the case?

"Fortunately I did not get a slow read suggesting that Intel null’s the result when the access is not allowed."
That (read from kernel address returns all-zeroes) seems to happen for memory that is not sufficiently cached but for which pagetable entries are present, at least after repeated read attempts. For unmapped memory, the kernel address read does not return a result at all.

Ideas for further research

We believe that our research provides many remaining research topics that we have not yet investigated, and we encourage other public researchers to look into these. This section contains an even higher amount of speculation than the rest of this blogpost - it contains untested ideas that might well be useless.

Leaking without data cache timing

It would be interesting to explore whether there are microarchitectural attacks other than measuring data cache timing that can be used for exfiltrating data out of speculative execution.

Other microarchitectures

Our research was relatively Haswell-centric so far. It would be interesting to see details e.g. on how the branch prediction of other modern processors works and how well it can be attacked.

Other JIT engines

We developed a successful variant 1 attack against the JIT engine built into the Linux kernel. It would be interesting to see whether attacks against more advanced JIT engines with less control over the system are also practical - in particular, JavaScript engines.

More efficient scanning for host-virtual addresses and cache sets

In variant 2, while scanning for the host-virtual address of a guest-owned page, it might make sense to attempt to determine its L3 cache set first. This could be done by performing L3 evictions using an eviction pattern through the physmap, then testing whether the eviction affected the guest-owned page.

The same might work for cache sets - use an L1D+L2 eviction set to evict the function pointer in the host kernel context, use a gadget in the kernel to evict an L3 set using physical addresses, then use that to identify which cache sets guest lines belong to until a guest-owned eviction set has been constructed.

Dumping the complete BTB state

Given that the generic BTB seems to only be able to distinguish 2^(31-8) or fewer source addresses, it seems feasible to dump out the complete BTB state generated by e.g. a hypercall in a timeframe around the order of a few hours. (Scan for jump sources, then for every discovered jump source, bisect the jump target.) This could potentially be used to identify the locations of functions in the host kernel even if the host kernel is custom-built.

The source address aliasing would reduce the usefulness somewhat, but because target addresses don't suffer from that, it might be possible to correlate (source,target) pairs from machines with different KASLR offsets and reduce the number of candidate addresses based on KASLR being additive while aliasing is bitwise. This could then potentially allow an attacker to make guesses about the host kernel version or the compiler used to build it based on jump offsets or distances between functions.

Variant 2: Leaking with more efficient gadgets

If sufficiently efficient gadgets are used for variant 2, it might not be necessary to evict host kernel function pointers from the L3 cache at all; it might be sufficient to only evict them from L1D and L2.

Various speedups

In particular the variant 2 PoC is still a bit slow. This is probably partly because:

It only leaks one bit at a time; leaking more bits at a time should be doable.
It heavily uses IRETQ for hiding control flow from the processor.
It would be interesting to see what data leak rate can be achieved using variant 2.

Leaking or injection through the return predictor

If the return predictor also doesn't lose its state on a privilege level change, it might be useful for either locating the host kernel from inside a VM (in which case bisection could be used to very quickly discover the full address of the host kernel) or injecting return targets (in particular if the return address is stored in a cache line that can be flushed out by the attacker and isn't reloaded before the return instruction). However, we have not performed any experiments with the return predictor that yielded conclusive results so far.

Leaking data out of the indirect call predictor

We have attempted to leak target information out of the indirect call predictor, but haven't been able to make it work.

Vendor statements

The following statements were provided to us regarding this issue from the vendors to whom Project Zero disclosed this vulnerability:

Intel: No current statement provided at this time.

AMD: AMD provided the following link: http://www.amd.com/en/corporate/speculative-execution

ARM: Arm recognises that the speculation functionality of many modern high-performance processors, despite working as intended, can be used in conjunction with the timing of cache operations to leak some information as described in this blog. Correspondingly, Arm has developed software mitigations that we recommend be deployed. Specific details regarding the affected processors and mitigations can be found at this website: https://developer.arm.com/support/security-update Arm has included a detailed technical whitepaper as well as links to information from some of Arm's architecture partners regarding their specific implementations and mitigations.

Literature

Note that some of these documents - in particular Intel's documentation - change over time, so quotes from and references to it may not reflect the latest version of Intel's documentation.

https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf: Intel's optimization manual has many interesting pieces of optimization advice that hint at relevant microarchitectural behavior; for example:

"Placing data immediately following an indirect branch can cause a performance problem. If the data consists of all zeros, it looks like a long stream of ADDs to memory destinations and this can cause resource conflicts and slow down branch recovery. Also, data immediately following indirect branches may appear as branches to the branch predication [sic] hardware, which can branch off to execute other data pages. This can lead to subsequent self-modifying code problems."

"Loads can: [...] Be carried out speculatively, before preceding branches are resolved."

"Software should avoid writing to a code page in the same 1-KByte subpage that is being executed or fetching code in the same 2-KByte subpage of that is being written. In addition, sharing a page containing directly or speculatively executed code with another processor as a data page can trigger an SMC condition that causes the entire pipeline of the machine and the trace cache to be cleared. This is due to the self-modifying code condition."

"if mapped as WB or WT, there is a potential for speculative processor reads to bring the data into the caches"

"Failure to map the region as WC may allow the line to be speculatively read into the processor caches (via the wrong path of a mispredicted branch)."
https://software.intel.com/en-us/articles/intel-sdm: Intel's Software Developer Manuals.

http://www.agner.org/optimize/microarchitecture.pdf: Agner Fog's documentation of reverse-engineered processor behavior and relevant theory was very helpful for this research.

http://www.cs.binghamton.edu/~dima/micro16.pdf and https://github.com/felixwilhelm/mario_baslr: Prior research by Dmitry Evtyushkin, Dmitry Ponomarev and Nael Abu-Ghazaleh on abusing branch target buffer behavior to leak addresses that we used as a starting point for analyzing the branch prediction of Haswell processors. Felix Wilhelm's research based on this provided the basic idea behind variant 2.

https://arxiv.org/pdf/1507.06955.pdf: The rowhammer.js research by Daniel Gruss, Clémentine Maurice and Stefan Mangard contains information about L3 cache eviction patterns that we reused in the KVM PoC to evict a function pointer.

https://xania.org/201602/bpu-part-one: Matt Godbolt blogged about reverse-engineering the structure of the branch predictor on Intel processors.

https://www.sophia.re/thesis.pdf: Sophia D'Antoine wrote a thesis that shows that opcode scheduling can theoretically be used to transmit data between hyperthreads.

https://gruss.cc/files/kaiser.pdf: Daniel Gruss, Moritz Lipp, Michael Schwarz, Richard Fellner, Clémentine Maurice, and Stefan Mangard wrote a paper on mitigating microarchitectural issues caused by pagetable sharing between userspace and the kernel.

https://www.jilp.org/: This journal contains many articles on branch prediction.

http://blog.stuffedcow.net/2013/01/ivb-cache-replacement/: This blogpost by Henry Wong investigates the L3 cache replacement policy used by Intel's Ivy Bridge architecture.

Source: https://googleprojectzero.blogspot.ro/2018/01/reading-privileged-memory-with-side.html

### MELTDOWN ATTACK AND SPECTRE ###
https://meltdownattack.com/
9. https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/

Kernel page-table isolation (KPTI, previously called KAISER) is a hardening technique in the Linux kernel that improves security by better isolating user space and kernel space memory. KPTI was merged into Linux kernel version 4.15, to be released in early 2018, and backported into Linux kernel 4.14.10. Windows implemented an identical feature in build 17035 (RS4). Prior to KPTI, whenever executing user space code (applications), Linux would also keep its entire kernel memory mapped in the page tables.

https://www.youtube.com/watch?time_continue=1792&v=ewe3-mUku94
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5925
https://www.reddit.com/r/Amd/comments/7nqwoe/apparently_amds_request_to_be_excluded_from_the/

The effects are still being benchmarked, however we're looking at a ballpark figure of five to 30 per cent slow down, depending on the task and the processor model. More recent Intel chips have features – such as PCID – to reduce the performance hit.
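Whether a given machine already has the mitigation can be checked from userspace on kernels that expose the sysfs vulnerability files (added in 4.15); a minimal sketch in C, assuming that file exists on the running kernel:

  #include <stdio.h>

  int main(void) {
      /* Present on Linux >= 4.15; typically prints e.g. "Mitigation: PTI". */
      FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/meltdown", "r");
      if (!f) { perror("meltdown sysfs entry"); return 1; }
      char buf[256];
      if (fgets(buf, sizeof buf, f))
          printf("meltdown: %s", buf);
      fclose(f);
      return 0;
  }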
10. Looking for a Magento 2 developer. For more details, please send me a PM.
11. @ A colleague took out a $10,000 bank loan to invest in Ripple. He kept telling me that Ripple will be accepted by many banks and that many banks and payment systems already use it. Maybe this is useful to you; Google "Ripple".
  12. sleed

Servicii SEO [SEO services]

Please send me a few of your past results and the keywords you targeted. Thanks.
13. Flextronics

Specifically, you will:
• Ensure support for the McAfee Anti-Virus solution, IDS/IPS solution and Data Loss Prevention. The support level includes L1, L2 and L3 support and implementation work;
• Ensure support for McAfee Endpoint Encryption and BitLocker;
• Be involved in discovering virus outbreaks and work with different IT teams and our AV vendor to handle the outbreak, clean up and perform root cause analysis;
• Monitor and analyze security events, perform risk assessment and response, and resolve these events in conjunction with global, regional and local IT teams;
• Perform security assessments at the network and application layer, and participate in the development of processes and requirements for systems prior to their acceptance into production;
• Manage different IT security incidents and provide IT security engineering and integration services to internal customers;
• Work with different technical monitoring, scanning and remediation tools such as AppScan, WebInspect, Internet Scanner, Nessus, Nmap.

Skills you will need:
• Minimum 2 years of working experience in the IT industry and excellent computer skills;
• General knowledge of security architecture, products and concepts;
• English - advanced level: able to communicate effectively and appropriately with other levels of support from global teams and with people of other cultures;
• Ability to understand security alerts, establish facts and draw valid conclusions to solve issues;
• Knowledge of Windows, Unix, TCP/IP, Firewalls, IDS. Understanding of security technologies including firewalls, VPN, IDS/IPS, as well as knowledge of SOX IT controls; Windows Server Update Services versions 2 and 3;
• Knowledge of Microsoft AD;
• Knowledge of SQL queries.

www.flextronics.com
Location: Timisoara

For more details, contact me via PM and I will put you in touch with the recruiters.
  14. Hi. I need an integrated traffic-shaping solution. I'm not interested in solutions like Juniper SRX, Riverbed Steelhead, or other hardware appliances. Unix/Linux-based only, for multiple LANs across multiple countries. What have you worked with, and what do you recommend? Processing roughly ~788G packets, 577T bytes.
  15. A white hat hacker last week announced the discovery of more than a half-dozen security flaws in some software Facebook used on its corporate network. While performing penetration testing of some third-party software in a network appliance Facebook used, Orange Tsai, a security researcher for Devcore, discovered seven vulnerabilities that attackers could use to compromise a system, as well as a backdoor script left by someone else who'd penetrated the network. The researcher was conducting tests as part of Facebook's bug bounty program. After reporting the findings to Facebook, he received US$10,000 for his efforts. The company no longer uses the software Tsai tested, and it was never part of the systems that run Facebook, including the systems that host the data people share on the site, the company said. As for the traces of a backdoor the researcher found, "we conducted a thorough investigation and determined that the activity Orange detected actually was another security researcher that was also participating in our bug bounty program and who was testing the same third-party software," said a statement provided to TechNewsWorld by Facebook spokesperson Jay Nancarrow.

Little Harm

Facebook's explanation of the back door makes the discovery relatively benign, noted Ben Desjardins, director of security solution marketing at Radware. "Facebook is claiming the proxy login page was actually set up by another white hat hacker, essentially saying two ethical hackers bumped into each other while trying to penetrate the network," he told TechNewsWorld. "If so, it's likely little or no harm was done." Even if the vulnerabilities Tsai found had led to compromised credentials, it would have been difficult for black hats to authenticate themselves on Facebook's systems because of two-factor authentication, which typically requires a code sent to a mobile phone in addition to a username and password to log in to a system. "Without two-factor authentication, a hacker could use stolen credentials to navigate the network and traverse to all the critical servers," said Ajit Sancheti, CEO of Preempt Security. "Credential theft drives a majority of data breaches," he told TechNewsWorld. "If my credentials are compromised and someone is able to get in to my network, then they'll have access that will get them to most places on a network."

Serious Vulnerabilities

Nevertheless, the seven vulnerabilities discovered in the software in the Accellion Secure File Transfer appliance Facebook used are nothing to be ignored, noted Jean-Philippe Taggart, a senior security researcher with Malwarebytes Labs. "I would classify these vulnerabilities as serious indeed," he told TechNewsWorld. "What was even more worrisome is that this researcher found evidence of another compromise, performed by a malicious actor in the form of malicious toolsets. He analyzed these and showed that they were attempting to harvest credentials," Taggart added. "The ultimate goal would have been establishing a beachhead into the internal Facebook network," he noted. "Then the natural progression would be to pivot through the network while attempting to gather credentials and exfiltrate valuable information."

Bigger Threats Ahead?

Hackers need not penetrate Facebook's corporate servers to steal valuable intellectual property, noted Danny Rogers, CEO of Terbium Labs. "We've seen elements of Facebook source code leaked to the Internet," he told TechNewsWorld. "Most of it is inadvertently leaked by Facebook developers."
Developers often post snips of code online when seeking help from other developers in solving a programming problem, Rogers said. "People can piece together those snips into significant chunks of Facebook source that includes things like database credentials, which can be used to develop more serious exploits," he said. Companies don't have to be in the social media business to learn from Tsai's methods and Facebook's support of the researcher, Taggart noted. Enterprises should set up bug bounty programs and hire penetration testers to check on the strength of defenses. "Having a completely external entity look at your infrastructure is the closest you can get to the mindset of an actual attacker," he said. "This exercise allowed Facebook to better secure this application and boot out a genuine malicious actor who was intent on collecting Facebook staff credentials." Source
  16. I certainly didn't expect it would go this far when I built Have I been pwned (HIBP) a few years ago, but I've just loaded the 100th data breach into the system. This brings it to a grand total of 336,724,945 breached accounts that have been loaded in over the years, another figure I honestly didn't expect to see. But there's something a bit different about this 100th data breach - it was provided to me by the site that was breached themselves. It was self-submitted, if you like. Usually, a site is breached and the data floats around the web whilst the impacted organisation either has no clue what happened or they stonewall and avoid admitting the incident. Just yesterday I wrote If I Can Verify Data Breaches, so Can Those Who Are Breached where I chastised organisations such as the Philippines Electoral Committee and Naughty America for still not acknowledging breach authenticity weeks after the incident. As much as ethics are lacking when hackers break into these systems and put the people in there at risk, so too are they lacking in the organisations that refuse to admit the incident and focus on protecting their members. Recently, I received an email that included this request: I am an admin / dev of a gaming forum with ~ 80,000 accounts that had a db breach a few weeks back and we'd like to add our breach to the site listing. Now as you can imagine, I often have what you might call "interesting" interactions with various people who pop up out of the blue and want to talk about data breaches, but it turns out that this one was precisely what it suggests at face value. The site is TruckersMP and it's a trucking simulator: News of the breach was published on their website on Feb 25 at 19:39 which is 2 hours and 9 minutes after they first discovered the incident. That discovery was only 30 minutes after the incident took place. The succinct blog post explains what happens and then offers an apology, all within a few hours of the event. I was curious though as to why they'd reach out and offer the data to HIBP. We had a bit of email to and fro (which included me verifying I was indeed chatting with an admin of the site and that the data they provided was legitimate) and they had this to say on why they provided me with the data: We're decently security minded and feel a responsibility and duty to inform our users when such a breach happens. All of the members of the team agreed it'd be ok to be added to the list with the notion that we'd like to see other sites do the same as well; given the unfortunate chance. For a while now, I've had a few ideas forming about how I can use HIBP in conjunction with breached organisations to better support those who have accounts compromised, but I honestly wasn't expecting this. Perhaps I've just become a little cynical after seeing literally hundreds of "we take security seriously" statements from organisations which clearly didn't and to see a response like this where they're not trying to spin the story to their own advantage or misconstrue facts is heartening. If only those with nation state budgets or billion dollar revenues could act so responsibly. Source
  17. So, challenge accepted: @ErrataRob you're up for writing the blog post "detecting TrueCrypt/encrypted blob transfers" on the wire… - the grugq (@thegrugq) March 29, 2016

tl;dr: The NSA should be able to go back through its rolling 90-day backlog of Internet metadata and find all other terrorist cells using this method.

From what we can piece together from the NYTimes article, it appears that ISIS is passing around TrueCrypt container files as a way of messaging. This is really weird. It has the property of security through obscurity: it evades detection for a while, because we'd never consider that ISIS would do such a strange thing. But it has the bad property that once discovered, it becomes easier to track. With the keys found on the USB drive, we can now start decrypting things that were a mystery before. We are going off of very little information at the moment, but let's imagine some fictional things. First, we need to figure out what is meant by a file-sharing or hosting site in Turkey. Such hosting sites are all over the place, as you can find with a little googling. Their primary purpose is to exchange copyrighted material (movies, music, games, ebooks) and porn. They are also a convenient way to host viruses that I'll trick you into loading via phishing emails. Half of these appear to use SSL during file transfers. In such cases, there's not much we can do to detect this particular transfer on the wire. However, we aren't completely out of luck. Presumably, the containers created by the terrorists were always the same size, such as 1 megabyte. We can monitor SSL connections and detect transactions of this size, uploaded by a customer in Europe and downloaded by a customer in Syria or Iraq, with just one download. Presumably, this is something the NSA can track down. According to Snowden, they keep metadata of all TCP transfers in places like Turkey, Syria, and Iraq. These logs are supposed to go back 90 days. Thus, a creative analyst should be able to sit down at the console and start making queries to tease out such info. We are looking for the IP addresses in Europe making a few small uploads, and the IP addresses in Syria and Iraq making many downloads. With a 90-day backlog, this should go back to the start of the year, before the Brussels bombing, and catch any active terrorists.

The next thing to do would be to update the code of their sniffers to detect this on the wire. I created a TrueCrypt container file and uploaded it. Here's what I saw sniffing the packets: a normal POST command, where this is the contents of the POST, starting at the "WebKitFormboundary". The thing about TrueCrypt containers is that they are completely random. The first 64 bytes are random salt, followed by the encrypted contents of everything else. If they do it right, it's impossible to distinguish TrueCrypt from purely random data (such as the output of /dev/urandom on Linux). But luckily for us intelligence agents, TrueCrypt is rare in that property. Compressed files and encrypted ZIP files are also supposed to be random, except they have headers identifying themselves. They've all got non-random bits to them, so while I can't easily identify TrueCrypt, I can easily identify everything that's not TrueCrypt. Thus, if the NSA has a sniffer eavesdropping next to these non-SSL file-upload sites, they can do the following.
First of all, they can classify all known file types somebody would be uploading (images, movies, virus code, ZIP files, RAR files, etc.). Of the remaining, they would then apply a simple entropy measurement that tests the randomness of a file (a minimal sketch appears at the end of this post). This will weed out things like text files, or anything else of an unknown format. (Back in the day, my intrusion prevention system did this, applying entropy tests to SSH and SSL connections once they were established, in order to discover exploits that would later send unencrypted data on these sessions, like the GOBBLES SSH exploit.) I have no experience eavesdropping on file-upload sites, but I imagine the remaining files would be fairly small and manageable. Note that at this point, the NSA can start capturing the sessions so that later, when they capture terrorists and grab their keys, they can decrypt old files. (This is one of the flaws of this terrorist dead-drop system: no "forward security".) So these are the thoughts so far. I'm sure I'll be tweeting back and forth with @thegrugq and will think of some more ideas. I'll update this later.

Update: It would be the sniffers associated with the NSA's XKeyScore system that would need to be updated to detect this. Presumably, this system can already track file uploads/downloads like this, and use file types as one of the search criteria when making queries. One bit of code that would be useful to add to the sniffers would be some that automatically tries to password-guess TrueCrypt container files. When it sees completely random bytes at the start of an upload, it can try to decrypt them using known passwords, and see if the result produces "TRUE" in the first few bytes, which is the string TrueCrypt uses to identify its files once decrypted. As passwords/passphrases are collected, they can be disseminated out to the sniffers, which can then identify these files in particular being transferred.

Notes: Instead of a keyfile used by the terrorists, I used a 5 lower-case letter password. You should be able to copy the bytes above into something that'll crack the password. Source
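To make the entropy test concrete, here is a minimal sketch that computes the Shannon entropy of a file in bits per byte. The 7.9 bits/byte cutoff is an illustrative assumption (encrypted or random data sits close to 8.0, while structured formats fall well below), not a figure from the post.

/* Shannon entropy of a file, in bits per byte. Near-8.0 results are
 * consistent with encrypted/random data such as a TrueCrypt container;
 * known formats with headers score lower. Build with: gcc entropy.c -lm */
#include <stdio.h>
#include <math.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned long counts[256] = {0}, total = 0;
    int c;
    while ((c = fgetc(f)) != EOF) { counts[c]++; total++; }
    fclose(f);
    if (total == 0) { puts("empty file"); return 0; }

    double entropy = 0.0;
    for (int i = 0; i < 256; i++) {
        if (counts[i] == 0) continue;
        double p = (double)counts[i] / (double)total;
        entropy -= p * log2(p);          /* sum of -p * log2(p) */
    }
    printf("%.4f bits/byte%s\n", entropy,
           entropy > 7.9 ? " (likely encrypted or random)" : "");
    return 0;
}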
  18. OK kids, this is cool. Know a hacker or computer club or school that could use some free, community-contributed labs? From the website (pivotproject.org): “People who earn great jobs in cyber security have mastered both academics and hands-on skills. But where can people with a wide variety of skill levels get hands-on practice with real-world cyber security problems? On January 12, the PIVOT project goes live to help meet that need. PIVOT makes it possible for students and others, all over the world, to build their hands-on skills in a fun, challenging, real-world cyber environment. PIVOT provides exciting hands-on labs and challenges for student groups and associated faculty, completely free. Through a variety of engaging downloadable materials, participants build their hands-on skills to help them pivot from academic studies to their future cyber security careers.” To kick things off there’s a contest to get things moving and gather feedback: “We’re launching PIVOT with a special contest and over a dozen prizes so you can help make PIVOT even better. Prizes include gift cards, club pizza feasts, t-shirts, and more! To participate in the contest and help us make PIVOT even better, all you need to do is have your group work through your choice of at least two of our current labs, and then have a student leader or faculty member fill out our contest form by February 15, 2016. The contest form gathers information about your experiences with the labs and recommendations for additional PIVOT challenges. From all submitted entries, we’ll select the top 5 with the most useful input to receive our grand prizes. Then, from all submitted entries, we’ll select another 10 at random to receive a prize.” Please check out PIVOT Project and spread the word, it is off to a great start but now we need to build the community. Source
  19. In-brief: technology analyst firm Gartner Inc. predicts worldwide spending on IoT security will exceed half a billion dollars in just two years. Technology analyst firm Gartner Inc. has some eye-popping numbers out on the growth of the market for Internet of Things-focused security products and services. According to the firm, worldwide spending on IoT security will exceed half a billion dollars in just two years. IoT security spending is predicted to reach $348 million this year, a 24% increase over 2015. And that aggressive growth is expected to continue for the foreseeable future. Spending on IoT security is expected to reach $547 million in 2018, then increase at a faster rate after 2020, as improved skills, organizational change and more scalable service options improve execution. From Gartner: "The market for IoT security products is currently small but it is growing as both consumers and businesses start using connected devices in ever greater numbers," said Ruggero Contu, research director at Gartner. "Gartner forecasts that 6.4 billion connected things will be in use worldwide in 2016, up 30 percent from 2015, and will reach 11.4 billion by 2018. However, considerable variation exists among different industry sectors as a result of different levels of prioritization and security awareness." Source
  20. As mentioned in an update to my post on the HNAP bug in the DIR-890L, the same bug was reported earlier this year in the DIR-645, and a patch was released. D-Link has now released a patch for the DIR-890L as well. The patches for both the DIR-645 and DIR-890L are identical, so I'll only examine the DIR-890L here. Although I focused on command injection in my previous post, this patch addresses multiple security bugs, all of which stem from the use of strstr to validate the HNAP SOAPAction header:

- Use of unauthenticated user data in a call to system (command injection)
- Use of unauthenticated user data in a call to sprintf (stack overflow)
- Unauthenticated users can execute privileged HNAP actions (such as changing the admin password)

Remember, D-Link has acknowledged all of the above in their security advisories, and thus were clearly aware of all these attack vectors. So, did they remove the sprintf stack overflow?

sprintf(cmd_buf, "sh %s%s.sh > /dev/console", "/var/run", SOAPAction);

Nope. Did they remove the call to system?

system(cmd_buf);

Of course not! Are they using strcmp instead of strstr to validate the SOAPAction header?

if(strstr(SOAPAction, "http://purenetworks.com/HNAP1/GetDeviceSettings") != NULL)

Pfft, why bother? (A minimal sketch of why substring matching fails is included after this post.) Their fix to all these fundamental problems is to use the access function to verify that the SOAPAction is a valid, expected action, by ensuring that the file /etc/templates/hnap/<SOAPAction>.php exists: a call to sprintf(), followed by a call to access(). OK, that does at least prevent users from supplying arbitrary data to sprintf and system. However, they've added another sprintf to the code before the call to access; their patch to prevent an unauthenticated sprintf stack overflow includes a new unauthenticated sprintf stack overflow. But here's the kicker: this patch does nothing to prevent unauthenticated users from executing completely valid administrative HNAP actions, because all it does is ensure that the HNAP action is valid. That's right, their patch doesn't even address all the bugs listed in their own security advisory! But I guess nobody really cares that any unauthenticated user can query information about hosts on the internal network, view/change system settings, or reset the router to its factory defaults:

$ wget --header="SOAPAction: http://purenetworks.com/HNAP1/GetDeviceSettings/SetFactoryDefault" http://192.168.0.1/HNAP1

Source
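As promised above, a minimal, hypothetical sketch of why strstr-based validation fails: any SOAPAction that merely contains the expected URL passes the check, so appended shell metacharacters still reach the command line. The buffer size and the printf stand-in for system() are my assumptions, not D-Link's actual code.

/* strstr() matches substrings, so an action string with an injected
 * command appended still "validates". printf() stands in for the
 * firmware's system(cmd_buf) call. */
#include <stdio.h>
#include <string.h>

static void handle_hnap(const char *soapaction)
{
    char cmd_buf[512];

    /* Substring check, as in the vulnerable firmware */
    if (strstr(soapaction,
               "http://purenetworks.com/HNAP1/GetDeviceSettings") != NULL) {
        /* Mirrors the firmware's sprintf; unbounded on attacker input */
        sprintf(cmd_buf, "sh %s%s.sh > /dev/console", "/var/run", soapaction);
        printf("would execute: %s\n", cmd_buf);
    } else {
        printf("rejected: %s\n", soapaction);
    }
}

int main(void)
{
    /* Passes the check despite the injected `reboot` */
    handle_hnap("http://purenetworks.com/HNAP1/GetDeviceSettings/`reboot`");
    return 0;
}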
  21. We often get asked for things we can do to help users keep their mobile devices secure. Here's a quick list of some simple things you can do to ensure that your mobile devices are running with at least some security. All of these steps are free and raise the bar on both unauthorized use of your device and the integrity of the applications you're running on them. Our goal here is not to make your device impenetrable to attack, but instead to raise the bar.

Security Tips for Android Devices
- Turn on disk encryption (not explicitly tied to PIN/screen lock).
- Use biometrics for unlocking, normally with a longer passcode (instead of a simpler 4-character PIN).
- Disable developer access (off by default).
- Disable third-party app store access (off by default, but very common).
- Evaluate and uninstall apps with excessive permissions using Android Permission Apps or other tools.
- Install Android platform updates when they become available.
- Compare your Android version to recent releases. Is your phone getting updates? If not, it's time for a new phone. (This is hard, because most users will find that Android phones are poorly supported and require more frequent replacements, which end up being more costly than iOS devices over time.)
- Do your research before you buy a new phone. Nexus has the best record for security update delivery and support, followed by Samsung, and then by LG. Everyone else is the pits for security updates.
- Turn on "Android Device Manager" for remote location services for lost devices, or a third-party "Find my Android" tool if your Android device doesn't support this feature.
- Periodically erase your network settings to forget about old, insecure WiFi networks you don't use anymore.
- When plugging in USB, don't say yes to "Trust this PC" when prompted, unless it is a personally owned system.
- Set a strong Google password; better still, enable two-factor authentication.
- Complain to your cell phone carrier about unwanted applications on the device and loss of control. There's no challenge currently, so the carriers do what they want.

Security Tips for iOS Devices
- Make sure you update iOS when new updates come out.
- Periodically erase your network settings to forget about old, insecure WiFi networks you don't use anymore.
- Make sure "Find my iPhone" is turned on for locating or wiping lost devices.
- Use TouchID with a longer passcode in lieu of a 4-digit PIN.
- When plugging in USB, don't say yes to "Trust this Computer" when prompted, unless it is a personally owned system.
- Turn off iCloud backup unless you are comfortable with your pictures being stored in the cloud.
- Use iTunes to make a backup with a password, both to encrypt it and to capture all your settings.
- Set a strong Apple iTunes password.
- Review the Settings | Privacy settings, revoking permissions from apps that are unnecessarily greedy with permissions.

Security Tips for Both iOS and Android Devices
- Disable wireless and leave it off unless you're actively using it.
- Install a VPN (proXPN, Private Internet Access, etc.) for when you need to use Wi-Fi, and always use the VPN when connecting to Wi-Fi.
- Only use known Wi-Fi connections; beware of free public Wi-Fi.
- Don't leave your device unattended; treat it like your wallet.
- Use caution lending your device to others; they can quickly make unauthorized changes.
- Disable premium rate messages via your cell carrier! If you manage cell phones for the organization, turn it off for all.
- Uninstall unused apps.
- Factory reset phones before returning them for service.

Source
  22. With the recent release of the E-ISAC and SANS ICS Defense Use Case (DUC) #5, which analyzed the cyber-attack that impacted Ukraine on December 23, 2015, I wondered how NERC CIP might have helped. I want to preface this analysis with an acknowledgement that the Ukrainian event was wholly contained at the distribution level of their electric system. Except in very limited and specific situations, similar facilities in the United States would not have been under NERC regulation and therefore not subject to the CIP standards. The Ukrainian event clearly shows us that while the higher-voltage transmission systems are the backbone of an electric system, they don't serve much practical good without the underlying distribution systems. NERC CIP gets a bad rap from many who claim that regulation doesn't equal security. I actually agree, but when it comes to CIP Version 5/6 it's hard to make the case that CIP isn't making a difference where applicable and when implemented in a holistic manner that balances the cyber security and compliance obligations. Below, I'll try to map the known Ukrainian events and the ICS Cyber Kill Chain to specific cyber security requirements in NERC CIP that might have helped if this had been an in-scope control center controlling in-scope substations.

Stage 1 - Reconnaissance: This step is characterized by the attacker passively seeking information about the intended target and is perhaps the hardest to detect. In this stage, attackers try to identify employees of interest, business locations, technologies in use, etc., in hopes of finding vulnerabilities. CIP-011, Information Protection, is the CIP standard that would try to limit the disclosure of information that could lead to a compromise. It's unknown how much the attackers were aided by publicly available information, but certainly any efforts to protect BES Cyber System Information would at least make this stage more difficult.

Stage 1 - Targeting/Weaponization: In the Ukrainian case, Microsoft Office documents were weaponized with embedded BlackEnergy 3 and were delivered to company employees with administrative or IT network responsibilities. The specific assets used to access corporate email would likely be out of scope for NERC CIP (assuming proper network isolation and segmentation has taken place), but malicious code prevention measures such as AV, whitelisting, intrusion prevention systems, and monitoring included in CIP-007, System Security Management, might have detected something out of the ordinary. The cyber security awareness and cyber security training requirements in CIP-004, Personnel & Training, might also have helped the recipients recognize the phishing attack and prevented it from succeeding.

Stage 1 - C2: In this stage the attackers were able to establish a foothold and secure command and control capabilities. But those weren't needed long, as they quickly obtained legitimate credentials harvested from directory servers on the corporate network, giving them cloaked access into the ICS network. Again, depending on the configuration of the directory services, these corporate systems would likely have been outside the scope of NERC CIP. But assuming that the SCADA environment would have existed within a CIP-005 required Electronic Security Perimeter (ESP), they would have had to identify all inbound and outbound access permissions to the ESP. Additionally, they would have had to use an Intermediate System, which most certainly would have been in scope for NERC CIP, requiring all applicable CIP-007 and CIP-005 measures including multi-factor authentication for Interactive Remote Access.

Stage 2 - ICS-Specific Attack Development, Testing and Delivery: In the Ukrainian case, the attackers planted a modified version of KillDisk software on specific systems, selected because the unavailability of those systems after the actual attack would hinder recovery. They also delivered custom malicious firmware updates to serial-to-Ethernet devices needed to communicate with field devices. Whitelisting, active configuration management, security event monitoring, and alerting measures that could be implemented to meet various CIP-007 and CIP-010 requirements may have prevented (or at least detected) and provided early warnings of unauthorized software being installed. Remote access to the operator HMIs provided attackers with the direct ability to use the SCADA HMI software to impersonate an operator and initiate breaker-open commands. Implementation of the ports and services hardening requirements of CIP-007 may have identified remote access as an unneeded service requiring that it be disabled. In at least one location the attackers reconfigured a UPS to cause a localized power outage to further hamper recovery efforts. This is one glaring gap in the NERC standards that I believe should be revisited, as currently HVAC, UPS, and other support systems are not the focus of compliance monitoring unless those systems are within an ESP. While not directly capable of impacting the Bulk Electric System, the dependence on these secondary systems deserves some degree of protection.

Post-attack Response and Recovery: A thorough Cyber Security Incident Response Plan with identified roles and responsibilities, as required by CIP-008, may have helped in earlier identification of the incident. Pure speculation on my part, but it's possible that there were signs that might have been overlooked, and a process for identification may have helped. Likewise, a response plan that had been regularly tested as required in CIP-008 may have provided the opportunity to identify weaknesses or gaps in the plan. While not much has been written about the recovery efforts post-attack, it's safe to assume that well-documented recovery procedures and the availability of verified backup sources required by CIP-009 would have aided in restoration efforts. A well-documented recovery plan that included availability of system spares, or information for obtaining spares, may have also helped in quicker replacement of the serial-to-Ethernet devices that were unrecoverable and rendered useless by the malicious firmware.

Beginning with the Beginning: Accepted security dogma dictates that in order to protect your systems you must understand what systems you have and the role they play. Application of a defined methodology similar to the bright-line criteria in CIP-002 would have helped to identify the systems critical to operating the Ukrainian electric system and therefore deserving of a higher level of protection. And finally, the Security Management Controls in CIP-003 would have forced the thoughtful consideration and development of cyber security policies. The requirements in the other standards that require plan development and review may have alerted the Ukrainian teams to weaknesses needing to be addressed. I don't pretend that NERC CIP is perfect - far from it. But the fact that it is making a difference, at least with the systems to which it is applicable, is undeniable. That also isn't meant to suggest that a system subject to NERC CIP can't be compromised; its effectiveness will always depend on specific design considerations and will be subject to human error, system malfunction, and attacker ingenuity. With regard to Ukraine, if nothing else, had measures similar to those required by NERC CIP been implemented, they might have slowed the attackers down or tripped them into making a detectable misstep. By examining events like what happened in Ukraine and understanding how NERC CIP may or may not have helped at each stage, we might identify what we need to do differently. We can also have conversations about whether or not we are protecting all the right assets. While not required by regulation, there is nothing that precludes U.S. entities from implementing CIP-like cyber security measures in similar distribution networks. The Ukrainian event should give distribution asset owners cause to consider adopting similar cyber security measures, and I hope this analysis demonstrates that NERC CIP-like cyber security measures can make a difference. Source
  23. Web-based crime gangs are smuggling weapons into Britain by splitting them into component parts and delivering them via parcel couriers, the National Crime Agency has warned. Taking advantage of the huge expansion in courier firms delivering goods ordered online, underworld armourers based overseas are able to post firearms ordered by criminals in the UK. The NCA is now working with delivery firms to help them spot suspicious packages, but admits that the sheer volume of deliveries means that it is impossible to screen effectively. David Armond, the NCA's Deputy Director General, said: "It would be like holding back the sea to try to expect every parcel to be searched, but we are working with industry to plug vulnerabilities." Lynne Owens, the NCA's director-general, also warned that a Brexit vote could jeopardise the agency's intelligence-sharing and cooperation with other countries. "At the NCA, one of the things we do that is different to other law enforcement agencies is to have a significant overseas network," she said. "We work with 150 partners globally, and to tackle crime effectively we have to be able to cooperate closely with others. It would be more difficult if we could not share information in an agile way, and at the moment that happens within the European Union mechanism." She stressed that she was not expressing a view on whether Britain should stay in Europe, but merely spelling out some of the possible bureaucratic consequences. Source
  24. A longtime reader recently asked: "How do online fraudsters get the 3-digit card verification value (CVV or CVV2) code printed on the back of customer cards if merchants are forbidden from storing this information?" The answer: if not via phishing, probably by installing a Web-based keylogger at an online merchant so that all data that customers submit to the site is copied and sent to the attacker's server. Kenneth Labelle, a regional director at insurer Burns-Wilcox.com, wrote: "So, I am trying to figure out how card-not-present transactions are possible after a breach due to the CVV. If the card information was stolen via the point-of-sale system, then the hacker should not have access to the CVV because it's not on the magnetic strip. So how in the world are they committing card-not-present fraud when they don't have the CVV number? I don't understand how that is possible with the CVV code being used in online transactions." First off, "dumps" - credit and debit card accounts that are stolen from hacked point-of-sale systems via skimmers or malware on cash register systems - retail for about $20 apiece on average in the cybercrime underground. Each dump can be used to fabricate a new physical clone of the original card, and thieves typically use these counterfeits to buy goods from big-box retailers that they can easily resell, or to extract cash at ATMs. However, when cyber crooks wish to defraud online stores, they don't use dumps. That's mainly because online merchants typically require the CVV, and criminal dumps sellers don't bundle CVVs with their dumps. Instead, online fraudsters turn to "CVV shops," shadowy cybercrime stores that sell packages of cardholder data, including customer name, full card number, expiration, CVV2 and ZIP code. These CVV bundles are far cheaper than dumps - typically between $2 and $5 apiece - in part because they are useful mainly for online transactions, but probably also because overall they are more complicated to "cash out," or make money from. The vast majority of the time, this CVV data has been stolen by Web-based keyloggers. This is a relatively uncomplicated program that behaves much like a banking Trojan does on an infected PC, except it's designed to steal data from Web server applications. PC Trojans like ZeuS, for example, siphon information using two major techniques: snarfing passwords stored in the browser, and conducting "form grabbing" - capturing any data entered into a form field in the browser before it can be encrypted in the Web session and sent to whatever site the victim is visiting. Web-based keyloggers also can do form grabbing, ripping out form data submitted by visitors - including names, addresses, phone numbers, credit card numbers and card verification codes - as customers are submitting the data during the online checkout process. These attacks drive home one immutable point about malware's role in subverting secure connections: whether resident on a Web server or on an end-user computer, if either endpoint is compromised, it's "game over" for the security of that Web session. With PC banking Trojans, it's all about surveillance on the client side pre-encryption, whereas what the bad guys are doing with these Web site attacks involves sucking down customer data post- or pre-encryption (depending on whether the data was incoming or outgoing). If you're responsible for maintaining or securing Web sites, it might be a good idea to get involved in one or more local groups that seek to help administrators.
Professionals and semi-professionals are welcome at local chapter meetings of OWASP, CitySec, ISSA or Security Bsides meetups. Source