
Leaderboard

Popular Content

Showing content with the highest reputation on 07/04/17 in all areas

  1. How I found a bug in Intel Skylake processors
July 3, 2017, by Xavier Leroy

Instructors of "Introduction to programming" courses know that students are willing to blame the failures of their programs on anything. Sorting routine discards half of the data? "That might be a Windows virus!" Binary search always fails? "The Java compiler is acting funny today!" More experienced programmers know very well that the bug is generally in their code: occasionally in third-party libraries; very rarely in system libraries; exceedingly rarely in the compiler; and never in the processor. That's what I thought too, until recently. Here is how I ran into a bug in Intel Skylake processors while trying to debug mysterious OCaml failures.

The first sighting

In late April 2016, shortly after OCaml 4.03.0 was released, a Serious Industrial OCaml User (SIOU) contacted me privately with bad news: one of their applications, written in OCaml and compiled with OCaml 4.03.0, was crashing randomly. Not at every run, but once in a while it would segfault, at different places within the code. Moreover, the crashes were only observed on their most recent computers, those running Intel Skylake processors. (Skylake is the nickname for what was the latest generation of Intel processors at the time. The latest generation at the time of this writing is nicknamed Kaby Lake.)

Many OCaml bugs have been reported to me in the last 25 years, but this report was particularly troubling. Why Skylake processors only? Indeed, I couldn't reproduce the crashes using SIOU's binary on my computers at Inria, which were all running older Intel processors. Why the lack of reproducibility? SIOU's application was single-threaded and did no network I/O, only file I/O, so its execution should have been perfectly deterministic, and whatever bug caused the segfault should have caused it at every run and at the same place in the code.

My first guess was flaky hardware at SIOU: a bad memory chip? Overheating? Speaking from personal experience, those things happen and can result in a computer that boots and runs a GUI just fine, then crashes under load. So, I suggested that SIOU run a memory test, underclock their processor, and disable hyperthreading (HT) while they were at it. The HT suggestion was inspired by an earlier report of a Skylake bug involving AVX vector arithmetic, which would show up only with HT enabled (see description).

SIOU didn't take my suggestions well, arguing (correctly) that they were running other CPU- and memory-intensive tests on their Skylake machines and only the ones written in OCaml would crash. Clearly, they thought their hardware was perfect and the bug was in my software. Great. I still managed to cajole them into running a memory test, which came back clean, but my suggestion about turning HT off was ignored. (Too bad, because this would have saved us much time.)

In parallel, SIOU was conducting an impressive investigation, varying the version of OCaml, the C compiler used to compile OCaml's runtime system, and the operating system. The verdict came as follows. OCaml: 4.03, including early betas, but not 4.02.3. C compiler: GCC, but not Clang. OS: Linux and Windows, but not MacOS. Since MacOS uses Clang and they used a GCC-based Windows port, the finger was being firmly pointed at OCaml 4.03 and GCC.
Surely, SIOU reasoned, in the OCaml 4.03 runtime system there is a piece of bad C code -- an undefined behavior, as we say in the business -- causing GCC to generate machine code that crashes, as C compilers are allowed to do in the presence of undefined behaviors. That would not be the first time GCC treats undefined behaviors in the least helpful way possible; see for instance this security hole and this broken benchmark.

The explanation above was plausible but still failed to account for the random nature of the crashes. When GCC generates bizarre code based on an undefined behavior, it still generates deterministic code. The only source of randomness I could think of is Address Space Layout Randomization (ASLR), an OS feature that causes absolute memory addresses to change from run to run. The OCaml runtime system uses absolute addresses in some places, e.g. to index into a hash table of memory pages. However, the crashes remained random after turning ASLR off, in particular when running under the GDB debugger.

We were now in early May 2016, and it was my turn to get my hands dirty, as SIOU subtly hinted by giving me a shell account on their famous Skylake machine. My first attempt was to build a debug version of OCaml 4.03 (to which I planned to add even more debugging instrumentation later) and rebuild SIOU's application with this version of OCaml. Unfortunately, this debug version would not trigger the crash. Instead, I worked from the executable provided by SIOU, first interactively under GDB (which nearly drove me crazy, as I sometimes had to wait an hour to trigger the crash again), then using a little OCaml script that ran the program 1000 times and saved the core dumps produced at every crash. Debugging the OCaml runtime system is no fun, but post-mortem debugging from core dumps is atrocious.

Analysis of 30 core dumps showed the segfaults to occur in seven different places: two within the OCaml GC and five within the application. The most popular place, with 50% of the crashes, was the mark_slice function from OCaml's GC. In all cases, the OCaml heap was corrupted: a well-formed data structure contained a bad pointer, i.e. a pointer that doesn't point to the first field of a Caml block but instead points to the header or into the middle of a Caml block, or even to invalid memory (already freed). The 15 crashes in mark_slice were all caused by a pointer two words ahead in a block of size 4.

All those symptoms were consistent with familiar mistakes, such as the ocamlopt compiler forgetting to register a memory root with the GC. However, those mistakes would cause reproducible crashes, depending only on the allocation and GC patterns. I completely failed to see what kind of memory management bug in OCaml could cause random crashes!

For lack of a better idea, I then listened again to the voice at the back of my head that was whispering "hardware bug!". I had a vague impression that the crashes happened more frequently the more the machine was loaded, as would be the case if it were just an overheating issue. To test this theory, I modified my OCaml script to run N copies of SIOU's program in parallel. For some runs I also disabled the OCaml memory compactor, resulting in a bigger memory footprint and more GC activity.
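The post describes this test driver only in prose. Purely as an illustration of the idea (this is not the author's actual OCaml script), a minimal Python sketch with a placeholder binary name might look like this:

    import subprocess

    # Hypothetical reconstruction: run N copies of the test binary in parallel,
    # many times over, and count runs killed by a signal (a negative return
    # code, e.g. -11 for SIGSEGV).
    PROGRAM = ["./siou_app"]   # placeholder name, not the real binary
    BATCHES = 1000
    N = 4                      # parallel copies per batch

    failures = 0
    for batch in range(BATCHES):
        procs = [subprocess.Popen(PROGRAM, stdout=subprocess.DEVNULL) for _ in range(N)]
        for i, p in enumerate(procs):
            p.wait()
            if p.returncode < 0:
                failures += 1
                print(f"batch {batch}, copy {i}: killed by signal {-p.returncode}")

    print(f"{failures} crashes out of {BATCHES * N} runs")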
The results were not what I expected, but striking nonetheless:

    N    system load     w/ default options    w/ compactor turned off
    1    3 + epsilon     0 failures            0 failures
    2    4 + epsilon     1 failure             3 failures
    4    6 + epsilon     12 failures           19 failures
    8    10 + epsilon    17 failures           23 failures
    16   18 + epsilon    16 failures

The number of failures given above is for 1000 runs of the test program. Notice the jump between N = 2 and N = 4? And the plateau for higher values of N? To explain those numbers, I need to give more information on the test Skylake machine. It has 4 physical cores and 8 logical cores, since HT is enabled. Two of the cores were busy with two long-running tests (not mine) in the background, but otherwise the machine was not doing much, hence the system load was 2 + N + epsilon, where N is the number of tests I ran in parallel.

When there are no more than 4 active processes at the same time, the OS scheduler spreads them evenly between the 4 physical cores of the machine and tries hard not to schedule two processes on the two logical cores of the same physical core, because that would result in underutilization of the resources of the other physical cores. This is the case here for N = 1 and also, most of the time, for N = 2. When the number of active processes grows above 4, the OS starts taking advantage of HT by scheduling processes onto the two logical cores of the same physical core. This is the case for N = 4 here. It's only when all 8 logical cores of the machine are busy that the OS performs traditional time-sharing between processes. This is the case for N = 8 and N = 16 in our experiment.

It was now evident that the crashes happened only when hyperthreading kicked in, or more precisely when the OCaml program was running alongside another hyperthread (logical core) on the same physical core of the processor. I wrote SIOU back with a summary of my findings, imploring them to entertain my theory that it all had to do with hyperthreading. This time they listened and turned hyperthreading off on their machine. The crashes were then gone for good: two days of testing in a loop showed no issues whatsoever.

Problem solved? Yes! Happy ending? Not yet. Neither I nor SIOU tried to report the issue to Intel or others: SIOU because they were satisfied with the workaround of compiling OCaml with Clang, and because they did not want any publicity of the "SIOU's products crash randomly!" kind; I because I was tired of this problem, didn't know how to report those things (Intel doesn't have a public issue tracker like the rest of us), and suspected it was a problem with the specific machines at SIOU (e.g. a batch of flaky chips that got put in the wrong speed bin by accident).

The second sighting

The year 2016 went by without anyone else reporting that the sky (or, more exactly, the Skylake) was falling with OCaml 4.03, so I gladly forgot about this little episode at SIOU (and went on making horrible puns). Then, on January 6th 2017, Enguerrand Decorne and Joris Giovannangeli at Ahrefs (another Serious Industrial OCaml User, member of the Caml Consortium to boot) reported mysterious random crashes with OCaml 4.03.0: this is PR#7452 on the Caml bug tracker. In the repro case they provided, it's the ocamlopt.opt compiler itself that sometimes crashes or produces nonsensical output while compiling a large source file. This is not particularly surprising, since ocamlopt.opt is itself an OCaml program compiled with the ocamlopt.byte compiler, but it made the issue easier to discuss and reproduce.
The public comments on PR#7452 show rather well what happened next, and the Ahrefs people wrote a detailed story of their bug hunt as a blog post. So, I'll only highlight the turning points of the story.

Twelve hours after opening the PR, and already 19 comments into the discussion, Enguerrand Decorne reports that "every machine on which we were able to reproduce the issue was running a CPU of the Intel Skylake family". The day after, I mention the 2016 random crash at SIOU and suggest disabling hyperthreading. The day after that, Joris Giovannangeli confirms that the crash cannot be reproduced when hyperthreading is disabled. In parallel, Joris discovers that the crash happens only if the OCaml runtime system is built with gcc -O2, but not with gcc -O1. In retrospect, this explains the absence of crashes with the debug OCaml runtime and with OCaml 4.02, as both are built with gcc -O1 by default.

I go out on a limb and post the following comment:

    Is it crazy to imagine that gcc -O2 on the OCaml 4.03 runtime produces a specific instruction sequence that causes hardware issues in (some steppings of) Skylake processors with hyperthreading? Perhaps it is crazy. On the other hand, there was already one documented hardware issue with hyperthreading and Skylake (link).

Mark Shinwell contacts some colleagues at Intel and manages to push a report through Intel customer support. Then, nothing happens for five months, until...

The revelation

On May 26th 2017, user "ygrek" posts a link to the following changelog entry from the Debian "intel-microcode" package:

    * New upstream microcode datafile 20170511 [...]
    * Likely fix nightmare-level Skylake erratum SKL150. Fortunately, either this erratum is very-low-hitting, or gcc/clang/icc/msvc won't usually issue the affected opcode pattern and it ends up being rare.
      SKL150 - Short loops using both the AH/BH/CH/DH registers and the corresponding wide register *may* result in unpredictable system behavior. Requires both logical processors of the same core (i.e. sibling hyperthreads) to be active to trigger, as well as a "complex set of micro-architectural conditions"

SKL150 was documented by Intel in April 2017 and is described on page 65 of the 6th Generation Intel® Processor Family - Specification Update. Similar errata go under the names SKW144, SKX150, SKZ7 for variants of the Skylake architecture, and KBL095, KBW095 for the newer Kaby Lake architecture. "Nightmare-level" is not part of the Intel description but sounds about right.

Despite the rather vague description ("complex set of micro-architectural conditions", you don't say!), this erratum rings a bell: hyperthreading required? Check! Triggers pseudo-randomly? Check! Does not involve floating-point or vector instructions? Check! Plus, a microcode update that works around this erratum is available, nicely packaged by Debian, and ready to apply to our test machines.

A few hours later, Joris Giovannangeli confirms that the crash is gone after upgrading the microcode. I run more tests on my shiny new Skylake-based workstation (courtesy of Inria's procurement) and come to the same conclusion: a test that crashes in less than 10 minutes with the old microcode runs for 2.5 days without problems with the updated microcode. Another reason to believe that SKL150 is the culprit is that the problematic code pattern outlined in this erratum is generated by GCC when compiling the OCaml run-time system.
For example, in byterun/major_gc.c, function sweep_slice, we have C code like this:

    hd = Hd_hp (hp);
    /*...*/
    Hd_hp (hp) = Whitehd_hd (hd);

After macro-expansion, this becomes:

    hd = *hp;
    /*...*/
    *hp = hd & ~0x300;

Clang compiles this code the obvious way, using only full-width registers:

    movq    (%rbx), %rax
    [...]
    andq    $-769, %rax        # imm = 0xFFFFFFFFFFFFFCFF
    movq    %rax, (%rbx)

However, GCC prefers to use the %ah 8-bit register to operate on bits 8 to 15 of the full register %rax, leaving the other bits unchanged:

    movq    (%rdi), %rax
    [...]
    andb    $252, %ah
    movq    %rax, (%rdi)

The two code sequences are functionally equivalent. One possible reason for GCC's choice is that it is more compact: the 8-bit constant $252 fits in one byte of code, while the 32-bit-extended-to-64-bit constant $-769 needs four bytes of code. At any rate, the code generated by GCC does use both %rax and %ah, and, depending on optimization level and bad luck, such code could end up in a loop small enough to trigger the SKL150 bug. So, in the end, it was a hardware bug. Told you so!

Epilogue

Intel released microcode updates for Skylake and Kaby Lake processors that fix or work around the issue. Debian has detailed instructions to check whether your Intel processor is affected and how to obtain and apply the microcode updates.

The timing for the publication of the bug and the release of the microcode updates was just right, because several projects written in OCaml were starting to observe mysterious random crashes, for example Lwt, Coq, and Coccinelle. The hardware bug is also making the rounds on technical websites; see for example Ars Technica, HotHardware, Tom's Hardware, and Hacker News.

Source: http://gallium.inria.fr/blog/intel-skylake-bug/
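As a quick side note on the claim that the two instruction sequences are functionally equivalent: both clear bits 8 and 9 of the 64-bit value, i.e. compute hd & ~0x300. The small illustrative Python check below (mine, not from the article) confirms this on random inputs.

    import random

    # Illustrative only: compare the full-width AND with the AH-byte AND.
    MASK64 = (1 << 64) - 1

    def and_full_width(x):
        # andq $-769, %rax : 64-bit AND with 0xFFFFFFFFFFFFFCFF
        return x & (~0x300 & MASK64)

    def and_ah_byte(x):
        # andb $252, %ah : AND only bits 8..15 with 0xFC, leave the rest unchanged
        ah = ((x >> 8) & 0xFF) & 0xFC
        return (x & ~(0xFF << 8) & MASK64) | (ah << 8)

    for _ in range(10_000):
        v = random.getrandbits(64)
        assert and_full_width(v) == and_ah_byte(v)
    print("both forms agree on 10,000 random 64-bit values")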
    3 points
  2. Basically it's an addressing problem that gets fixed with the microcode update. The real issue is different: there are tens of thousands of active servers running Kaby Lake (e.g. Intel 7700), and I won't even start on Skylake. I doubt distributions like CentOS will push out a release quickly.
    3 points
  3. 150 lei per day + food + a pack of Plugarul + half a bottle of vodka
    3 points
  4. Symmetric Encryption

The only way to encrypt today is authenticated encryption, or "AEAD". ChaCha20-Poly1305 is faster in software than AES-GCM; AES-GCM will be faster than ChaCha20-Poly1305 with AES-NI. Poly1305 is also easier than GCM for library designers to implement safely. AES-GCM is the industry standard.

Use, in order of preference:
- The NaCl/libsodium default
- ChaCha20-Poly1305
- AES-GCM

Avoid:
- AES-CBC
- AES-CTR by itself
- Block ciphers with 64-bit blocks, such as Blowfish
- OFB mode
- RC4, which is comically broken

Symmetric Key Length

See The Physics of Brute Force to understand why 256-bit keys are more than sufficient. But remember: your AES key is far less likely to be broken than your public key pair, so the latter key size should be larger if you're going to obsess about this.

Use:
- Minimum: 128-bit keys
- Maximum: 256-bit keys

Avoid:
- Constructions with huge keys
- Cipher "cascades"
- Key sizes under 128 bits

Symmetric Signatures

If you're authenticating but not encrypting, as with API requests, don't do anything complicated. There is a class of crypto implementation bugs that arises from how you feed data to your MAC, so, if you're designing a new system from scratch, Google "crypto canonicalization bugs". Also, use a secure compare function.

Use:
- HMAC

Avoid:
- HMAC-MD5
- HMAC-SHA1
- Custom "keyed hash" constructions
- Complex polynomial MACs
- Encrypted hashes
- Anything CRC

Hashing/HMAC Algorithm

If you can get away with it, you want to use hashing algorithms that truncate their output and so sidestep length extension attacks. Meanwhile: it's less likely that you'll upgrade from SHA-2 to SHA-3 than it is that you'll upgrade from SHA-2 to BLAKE2, which is faster than SHA-3, and SHA-2 looks great right now, so get comfortable and cuddly with SHA-2.

Use, in order of preference:
- HMAC-SHA-512/256
- HMAC-SHA-512/224
- HMAC-SHA-384
- HMAC-SHA-224
- HMAC-SHA-512
- HMAC-SHA-256

Alternately, use in order of preference:
- BLAKE2
- SHA3-512
- SHA3-256

Avoid:
- HMAC-SHA-1
- HMAC-MD5
- MD6
- EDON-R

Random IDs

When creating random IDs, numbers, URLs, nonces, initialization vectors, or anything else that needs to be random, you should always use /dev/urandom.

Use:
- /dev/urandom

Create:
- 256-bit random numbers

Avoid:
- Userspace random number generators
- /dev/random

Password Hashing

When using scrypt for password hashing, be aware that it is very sensitive to its parameters, making it possible to end up weaker than bcrypt, and that it suffers from a time-memory trade-off (source #1 and source #2). When using bcrypt, make sure to use the following construction to prevent the leading NULL byte problem and the 72-character password limit: bcrypt(base64(sha-512(password))). (A short sketch of this construction appears after the lists below.) I'd wait a few years, until 2020 or so, before implementing any of the Password Hashing Competition candidates, such as Argon2. They just haven't had the time to mature yet.

Use, in order of preference:
- scrypt
- bcrypt
- sha512crypt
- sha256crypt
- PBKDF2

Avoid:
- Plaintext
- Naked SHA-2, SHA-1, MD5
- Complex homebrew algorithms
- Any encryption algorithm
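To make the bcrypt(base64(sha-512(password))) construction above concrete, here is a minimal sketch assuming the third-party Python bcrypt package; the pre-hash means the whole passphrase influences the result, and the base64 step keeps NUL bytes out of bcrypt's input.

    import base64
    import hashlib

    import bcrypt  # third-party package, assumed installed: pip install bcrypt

    def _prehash(password: str) -> bytes:
        # base64(sha-512(password)): no NUL bytes, and the whole passphrase is mixed in.
        # bcrypt still truncates the 88-character base64 string at 72 bytes, but what
        # remains is a high-entropy digest prefix, which is fine.
        return base64.b64encode(hashlib.sha512(password.encode("utf-8")).digest())

    def hash_password(password: str) -> bytes:
        return bcrypt.hashpw(_prehash(password), bcrypt.gensalt())

    def check_password(password: str, stored: bytes) -> bool:
        return bcrypt.checkpw(_prehash(password), stored)

    if __name__ == "__main__":
        stored = hash_password("correct horse battery staple")
        print(check_password("correct horse battery staple", stored))  # True
        print(check_password("wrong guess", stored))                   # False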
Asymmetric Encryption

It's time to stop using vanilla RSA, and start using NaCl/libsodium. Of all the cryptographic "best practices", this is the one you're least likely to get right on your own. NaCl/libsodium has been designed to prevent you from making stupid mistakes, it's highly favored among the cryptographic community, and it focuses on modern, highly secure cryptographic primitives. It's time to start using ECC. Here are several reasons you should stop using RSA and switch to elliptic curve software:

- Progress in attacking RSA -- really, all the classic multiplicative group primitives, including DH and DSA and presumably ElGamal -- is proceeding faster than progress against elliptic curves.
- RSA (and DH) drag you towards "backwards compatibility" (i.e. downgrade-attack compatibility) with insecure systems. Elliptic curve schemes generally don't need to be vigilant about accidentally accepting 768-bit parameters.
- RSA begs implementors to encrypt directly with its public key primitive, which is usually not what you want to do: not only does accidentally designing with RSA encryption usually forfeit forward secrecy, but it also exposes you to new classes of implementation bugs. Elliptic curve systems don't promote this particular foot-gun.
- The weight of correctness/safety in elliptic curve systems falls primarily on cryptographers, who must provide a set of curve parameters optimized for security at a particular performance level; once that happens, there aren't many knobs for implementors to turn that can subvert security. The opposite is true in RSA. Even if you use RSA-OAEP, there are additional parameters to supply and things you have to know to get right.

If you have to use RSA, do use RSA-OAEP. But don't use RSA. Use ECC.

Use:
- NaCl/libsodium

Avoid:
- RSA-PKCS1v15
- RSAES-OAEP
- RSASSA-PSS with MGF1+SHA256
- Really, anything RSA
- ElGamal
- OpenPGP, OpenSSL, BouncyCastle, etc.

Asymmetric Key Length

As with symmetric encryption, asymmetric encryption key length is a vital security parameter. Academic, private, and government organizations provide different recommendations, with mathematical formulas to approximate the minimum key size requirement for security. See BlueKrypt's Cryptographic Key Length Recommendation for other recommendations and dates. To protect data up through 2020, it is recommended to meet the following minimum requirements for asymmetric key lengths:

    Method            RSA    ECC    D-H Key    D-H Group
    Lenstra/Verheul   1881   161    151        1881
    Lenstra Updated   1387   163    163        1387
    ECRYPT II         1776   192    192        1776
    NIST              2048   224    224        2048
    ANSSI             2048   200    200        2048
    BSI               3072   256    256        3072

See also the NSA Fact Sheet Suite B Cryptography and RFC 3766 for additional recommendations and math algorithms for calculating strengths based on calendar year. Personally, I don't see any problem with using a 2048-bit RSA/DH group and 256-bit ECC/DH key lengths. So, my recommendation would be:

Use:
- 256-bit minimum for ECC/DH keys
- 2048-bit minimum for RSA/DH group

Avoid:
- Not following the above recommendations

Asymmetric Signatures

In the last few years there has been a major shift away from conventional DSA signatures and towards misuse-resistant "deterministic" signature schemes, of which EdDSA and RFC 6979 are the best examples. You can think of these schemes as "user-proofed" responses to the PlayStation 3 ECDSA flaw, in which reuse of a random number leaked secret keys. Use deterministic signatures in preference to any other signature scheme. (A sketch follows the lists below.)

Use, in order of preference:
- NaCl/libsodium
- Ed25519
- RFC 6979 (deterministic DSA/ECDSA)

Avoid:
- RSA-PKCS1v15
- RSASSA-PSS with MGF1+SHA256
- Really, anything RSA
- Vanilla ECDSA
- Vanilla DSA
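As a small illustration of the deterministic-signature advice above, here is a hedged sketch using PyNaCl (the Python binding to libsodium, assumed installed) to sign and verify a message with Ed25519:

    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    # Generate a fresh Ed25519 key pair (the signing key stays secret).
    signing_key = SigningKey.generate()
    verify_key = signing_key.verify_key

    message = b"attack at dawn"
    signed = signing_key.sign(message)   # deterministic: same key + message -> same signature

    try:
        verify_key.verify(signed)        # returns the original message on success
        print("signature ok")
    except BadSignatureError:
        print("signature rejected")

    # A tampered message must fail verification.
    try:
        verify_key.verify(b"attack at noon", signed.signature)
        print("this should not happen")
    except BadSignatureError:
        print("tampered message rejected")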
Diffie-Hellman

This is the trickiest one. Here is roughly the set of considerations:

- If you can just use NaCl, use NaCl. You don't even have to care what NaCl does.
- If you can use a very trustworthy library, use Curve25519; it's the modern ECDH curve with the best software support and the most analysis. People really beat the crap out of Curve25519 when they tried to get it standardized for TLS. There are stronger curves, but none supported as well as Curve25519. But don't implement Curve25519 yourself or port the C code for it.
- If you can't use a very trustworthy library for ECDH but can for DH, use DH-2048 with a standard 2048-bit group, like Colin says, but only if you can hardcode the DH parameters. But don't use conventional DH if you need to negotiate parameters or interoperate with other implementations.
- If you have to do handshake negotiation or interoperate with older software, consider using NIST P-256, which has very widespread software support. Hardcoded-param DH-2048 is safer than NIST P-256, but NIST P-256 is safer than negotiated DH. But only if you have very trustworthy library support, because NIST P-256 has some pitfalls. P-256 is probably the safest of the NIST curves; don't go down to P-224. Isn't crypto fun?
- If your threat model is criminals, prefer DH-1024 to sketchy curve libraries. If your threat model is governments, prefer sketchy curve libraries to DH-1024. But come on, find a way to one of the previous recommendations.

It sucks that DH (really, "key agreement") is such an important crypto building block, but it is.

Use, in order of preference:
- NaCl/libsodium
- 2048-bit Diffie-Hellman Group #14

Avoid:
- Conventional DH
- SRP
- J-PAKE
- Handshakes and negotiation
- Elaborate key negotiation schemes that only use block ciphers
- srand(time())

Website security

By "website security", we mean "the library you use to make your web server speak HTTPS". Believe it or not, OpenSSL is still probably the right decision here, if you can't just delegate this to Amazon and use HTTPS elastic load balancers, which makes it their problem, not yours.

Use:
- OpenSSL, LibreSSL, or BoringSSL if you run your own site
- Amazon AWS Elastic Load Balancing if Amazon does

Avoid:
- PolarSSL
- GnuTLS
- MatrixSSL

Client-server application security

What happens when you design your own custom RSA protocol is that 1-18 months afterwards, hopefully sooner but often later, you discover that you made a mistake and your protocol had virtually no security. A good example is Salt Stack: Salt managed to deploy e=1 RSA. It seems a little crazy to recommend TLS given its recent history:

- The Logjam DH negotiation attack
- The FREAK export cipher attack
- The POODLE CBC oracle attack
- The RC4 fiasco
- The CRIME compression attack
- The Lucky13 CBC padding oracle timing attack
- The BEAST CBC chained IV attack
- Heartbleed
- Renegotiation
- Triple Handshakes
- Compromised CAs

Here's why you should still use TLS for your custom transport problem:

- Many of these attacks only work against browsers, because they rely on the victim accepting and executing attacker-controlled JavaScript in order to generate repeated known/chosen plaintexts.
- Most of these attacks can be mitigated by hardcoding TLS 1.2+, ECDHE and AES-GCM. That sounds tricky, and it is, but it's less tricky than designing your own transport protocol with ECDHE and AES-GCM! (A rough sketch of such a hardened client follows the lists below.)
- In a custom transport scenario, you don't need to depend on CAs: you can self-sign a certificate and ship it with your code, just like Colin suggests you do with RSA keys.

Use:
- TLS

Avoid:
- Designing your own encrypted transport, which is a genuinely hard engineering problem
- Using TLS but in a default configuration, like, with "curl"
- Using "curl"
- IPSEC
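And here is a rough sketch of the "hardcode TLS 1.2+ and ship your own certificate" idea for a custom client, using Python's standard ssl module; the host name and certificate file are placeholders, and the TLSVersion setting requires Python 3.7+.

    import socket
    import ssl

    # Trust only the certificate we ship with the application (self-signed is
    # fine in a custom transport scenario), and refuse anything below TLS 1.2.
    HOST, PORT = "backend.example.internal", 8443             # placeholders
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.minimum_version = ssl.TLSVersion.TLSv1_2          # Python 3.7+
    context.load_verify_locations("pinned_server_cert.pem")   # the shipped certificate
    context.check_hostname = True                             # default for TLS_CLIENT, kept explicit

    with socket.create_connection((HOST, PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            print("negotiated", tls_sock.version(), tls_sock.cipher())
            tls_sock.sendall(b"ping\n")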
Online backups

Of course, you should host your own backups in house. The best security is the security where others just don't get access to your data. There are many tools to do this, all of which should be using OpenSSH or TLS for the transport. If you do use an online backup service, use Tarsnap. It's withstood the test of time.

Use:
- Tarsnap

Avoid:
- Google
- Apple
- Microsoft
- Dropbox
- Amazon S3

Source: https://gist.github.com/atoponce/07d8d4c833873be2f68c34f9afc5a78a
    2 points
  5. That's just your opinion. And mine too, to be honest. But not HR's. You see, if everything were rosy, the world would be full of enthusiasts with big dreams...
    1 point
  6. Why wouldn't they pay you that? Everyone talks about experience as if it were the most "sacred" thing a person must have. Experience in IT is relative: you can't compare five years of fixing the odd bug here and there on a small project, with half the code you wrote being copy-pasted from Stack Overflow, to someone who spent two years on a large project interacting with lots of developers, testers and analysts, on a project where you couldn't afford to push garbage into production code because the architect would call you out on it. I think someone who is just starting out, someone who has invested personal time and even money to improve their knowledge and obtain certifications, is much more valuable. It shows enthusiasm, passion and determination. Whether you have five years of experience or one, most of the time at a new job you end up on a project completely different from what you were doing before, and the transition period is the same for both. As for salary, it depends on how you sell yourself. I think you can easily start from 4,000 in Bucharest, but it depends on what other benefits you get, the company's name, and the workload. One more thing: I don't think you actually have those certifications. If you did, you wouldn't be asking on a forum how much you could earn. If you want to take them, hit the books and good luck.
    1 point
  7. If you have no work experience or at least some personal projects, anything at all, nobody will pay you 5000 RON. But from there it can grow quite a lot.
    1 point
  8. 5000 - 17000 RON, depending on how much you know, where you get hired and how well you sell yourself.
    1 point
  9. WSUXploit

Written by Marcio Almeida to weaponize the use of WSUSpect Proxy, created by Paul Stone and Alex Chapman in 2015 and publicly released by Context Information Security.

Summary

This is a weaponized MiTM exploit script to inject 'fake' updates into non-SSL WSUS traffic. It is based on the WSUSpect Proxy application that was introduced to the public in the Black Hat USA 2015 presentation 'WSUSpect – Compromising the Windows Enterprise via Windows Update'. Please read the white paper and the presentation slides listed below:

White paper: http://www.contextis.com/documents/161/CTX_WSUSpect_White_Paper.pdf
Slides: http://www.contextis.com/documents/162/WSUSpect_Presentation.pdf

Source: https://github.com/pimps/wsuxploit
    1 point
  10. PPEE (puppy) is a professional PE file explorer for reversers, malware researchers and those who want to statically inspect PE files in more detail. Puppy is free and tries to be small, fast, nimble and as friendly as your puppy.

Download v1.09 (Visual C++ 2010 Redistributable Package required)

Features

- Puppy is robust against malformed and crafted PE files, which makes it handy for reversers, malware researchers and those who want to inspect PE files in more detail.
- All directories in a PE file are supported, including Export, Import, Resource, Exception, Certificate (relies on Windows API), Base Relocation, Debug, TLS, Load Config, Bound Import, IAT, Delay Import and CLR.
- Both PE32 and PE64 support
- Examine YARA rules against the opened file
- VirusTotal and OPSWAT Metadefender query report
- Statically analyze Windows native and .NET executables
- Robust parsing of exe, dll, sys, scr, drv, cpl, ocx and more
- Edit almost every data structure
- Easily dump sections, resources and .NET assembly directories
- Entropy and MD5 calculation of the sections and resource items
- View strings (URL, registry, suspicious, ...) embedded in files
- Detect common resource types
- Extract artifacts remaining in the PE file
- Anomaly detection
- Right-click for copy, search on the web, whois and dump
- Built-in hex editor
- Explorer context menu integration
- Descriptive information for data members
- Refresh, Save and Save As menu commands
- Drag and drop support
- List view columns can sort data in an appropriate way
- Open file from the command line
- Checksum validation
- Plugin enabled

Link: https://www.mzrst.com/
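PPEE itself is a GUI tool, but the same kind of static inspection can also be scripted. As a rough, unrelated illustration, here is a short sketch using the third-party Python pefile module to list section entropy and imported DLLs of a PE file:

    import sys

    import pefile  # third-party module, assumed installed: pip install pefile

    def inspect(path):
        pe = pefile.PE(path)
        print("Machine: 0x%x, sections: %d" % (pe.FILE_HEADER.Machine, len(pe.sections)))

        # Section names, raw sizes and entropy (high entropy often hints at packing).
        for section in pe.sections:
            name = section.Name.rstrip(b"\x00").decode(errors="replace")
            print("%-10s raw=%-8d entropy=%.2f"
                  % (name, section.SizeOfRawData, section.get_entropy()))

        # Imported DLLs and a few of their named functions.
        for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
            funcs = [imp.name.decode() for imp in entry.imports[:5] if imp.name]
            print(entry.dll.decode(), "->", ", ".join(funcs))

    if __name__ == "__main__":
        inspect(sys.argv[1])   # e.g. python inspect_pe.py sample.exe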
    1 point
  11. The missing IMEI problem: you fix it with this.
    1 point
  12. https://www.glassdoor.com/Salaries/it-security-salary-SRCH_KO0,11.htm That said, in Romania there are still communist-era leftovers, and the certification might not help you. Also read: https://brightside.me/wonder-curiosities/8-tricks-employers-use-to-pay-you-less-347310/?utm_source=fb_brightside&utm_medium=fb_organic&utm_campaign=fb_gr_brightside
    1 point
  13. If the PC doesn't recognize the driver, it's the charging port... and if that's not it either, it's the board.
    1 point
  14. Puzzle No. 1 – Monday 3 July

We've got something new for you today: a puzzle that's been set for us by GCHQ, the kind of thing they use to recruit staff. We'll be setting a new puzzle every day, so why not have a go and see if you could make it as a GCHQ codebreaker.

Puzzle: Take the digits 1, 2, 3 up to 9 in numerical order and put either a plus sign, a minus sign, or neither between the digits to make a sum that adds up to 100. For example, one way of achieving this is 1 + 2 + 34 - 5 + 67 - 8 + 9 = 100, which uses six plusses and minuses. What is the fewest number of plusses and minuses you need to do this?

The answer will be published on the Today website from 6am on Tuesday 4 July.

source
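A puzzle like this is easy to brute-force, which makes for a nice worked example. The short Python sketch below (my own, not from the article) tries every way of placing '+', '-' or nothing in the eight gaps between the digits 1..9 and reports an expression hitting 100 with the fewest signs; the answer itself is left to the script.

    from itertools import product

    DIGITS = "123456789"
    best = None          # (number of signs, expression)

    # Put '+', '-' or nothing in each of the 8 gaps between the digits 1..9.
    for seps in product(["+", "-", ""], repeat=8):
        expr = DIGITS[0] + "".join(sep + d for sep, d in zip(seps, DIGITS[1:]))
        if eval(expr) == 100:
            signs = sum(1 for s in seps if s)
            if best is None or signs < best[0]:
                best = (signs, expr)

    print("fewest plusses and minuses:", best[0])
    print("one such sum:", best[1], "= 100")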
    1 point
  15. Whitepaper: Fully Undetectable Malware

Term paper by Alessandro Groppo, Institute of Higher Education "Camillo Olivetti", 2016/2017 school year.

Download / View: https://www.scribd.com/document/352790073/Fully-Undetectable-Malware
Mirror: https://dl.packetstormsecurity.net/papers/general/fully-undetectable-malware.pdf
    1 point
  16. He's a hacker, he gathers data, social engineering. I heard his goal is to put shemales on the index page.
    1 point
  17.
    1 point
  18. I've got a blonde for you guys to hit up, you'll find her on nimfomane, top escorts bucuresti, and her name is andreea :)))
    1 point
  19. It's simple: everything costs money! If you're interested in InfoSec and you're currently earning 1200 RON..... invest in yourself! In InfoSec you earn a lot more. Let's be serious: you can do research on your own any time, but no matter how much you want to, you can't cover every point of view by yourself. At DefCamp you'll meet people with the same interests as you, and one discussion leading to another might give you a point of view you hadn't thought of (bang!). Guess my point, or else, stop complaining! :

    import random


    def neuron(x, y):
        # Keep drawing random inputs and weights until the activation lands
        # just above the threshold.
        answer = False
        while answer is False:
            input_list = generate_input(x)
            weight_list = generate_weight(y)
            winner_input = [input_list[i:i + 8] for i in range(0, len(input_list), 8)]
            if len(input_list) % 2 != 0 or len(weight_list) % 2 != 0 or len(input_list) != len(weight_list):
                print("Wrong values, dumb ass!")
                return
            threshold = 5
            activation = 0.00
            i = 0
            while i < len(input_list):
                activation += input_list[i] * weight_list[i]
                i += 1
            if threshold <= activation < threshold + 0.0001:
                print(winner_input)
                print(activation)
                answer = True


    def generate_input(x):
        # x random bits (0 or 1)
        input_list = []
        i = 0
        while i < x:
            y = random.random()
            input_list.append(int(round(y)))
            i += 1
        return input_list


    def generate_weight(x):
        # x random weights in [0, 1)
        weight_list = []
        i = 0
        while i < x:
            y = random.random()
            weight_list.append(float(y))
            i += 1
        return weight_list


    if __name__ == '__main__':
        neuron(12, 12)
    -1 points
  20. Hi, I know this topic is a bit old, but I need someone who knows how to send thousands and thousands of reports against a YouTube channel. I pay very well for this :))) If anyone is interested, contact me in private; it's a channel with over 100K subscribers.
    -1 points