Everything posted by M2G
-
If I add another 500 RON I can get an S3, but I don't want to spend more because I'm not paying 20 million (old lei) for a phone. I need it primarily for development, and since I'm buying it anyway I also need it for myself, so it should have a good design and run smoothly. What do you mean it starts to lag? It's the same hardware inside; if it runs well from the start it will also run well "after a while". I'm not going to keep hundreds of apps on it to bog down the system, only what I use and find useful. Anyone else?
-
I want to buy an Android phone, and the Xperia SL from Sony caught my eye. Sony Xperia SL Black - TelefonulTau.eu I started searching the web and read several reviews, but I noticed many people complain about the battery life (1750 mAh). Battery life also depends a lot on the user. I'm not a mobile-gaming addict; I'll use it mostly for music, photos, calls, SMS, some browsing, and development. I'm posting in the hope that among so many users here, a few own this phone and can share their opinion of it. If possible, also give me other suggestions for an Android smartphone under 1600 RON. The Nexus 4 is pretty much out, because it will take a while to reach the country, and once it arrives the price will have doubled. Quad-core phones don't impress me; that's more marketing than anything useful/performant. I don't like HTCs, they look ugly. I want to buy by mid-December at the latest. Thx.
-
A new protocol appears that improves the WiFi signal by up to 700%!
M2G replied to Nytro's topic in Stiri securitate
More available wireless networks to hack into. -
Declare the namespace before declaring the class. You have Public Class frmMain twice; delete the one below the imports. Move Namespace TempGraphic after the imports. Next time you can use this: Convert C# to VB.NET - A free code conversion tool - developer Fusion
-
Why the hell do you put a password on things like this? For a crypter I'd understand, but for something like this it makes no sense. And what the fuck is titan crypt?
-
Welcome. We're not interested in your Messenger ID, and there's no reason to leave it here. If you want to communicate with the people here, do it through quality posts.
-
You're just confusing him with that form, he's only in 9th grade. Code should be as readable and self-explanatory as possible. The folks at Google used to say that well-written code needs no comments; it explains itself through the names of its variables and the way it is organized.
-
When you have questions about future homework, send me a PM with the problem and I'll explain it. Just because I'm in a good mood today.
-
Introduction

This month's issue kicks off a two-part series on web application security flaw discovery and prevention, beginning with Arachni. As this month's topic is another case of mailing lists facilitating great toolsmith topics, I'll begin by recommending a few you should join if you haven't already. The Web Application Security Consortium mailing list is a must, as are the SecurityFocus lists. I favor their Penetration Testing and Web Application Security lists, but they have many others as well. As you can imagine, these two make sense for me given my focus on web application security and penetration testing, and it was via SecurityFocus that I received news of the latest release of Arachni. Arachni is a high-performance, modular, open source web application security scanning framework written in Ruby. It was refreshing to discover a web app scanner I had not yet tested. I spend a lot of time with the likes of Burp, ZAP, and Watobo, but strongly advocate expanding the arsenal. Arachni's developer/creator is Tasos "Zapotek" Laskos, who kindly provided details on this rapidly maturing tool and project. Via email, Tasos indicated that to date, Arachni's role has been that of an experiment/learning-exercise hybrid, mainly focused on doing things a little bit differently. He's glad to say that the fundamental project goals have been achieved: Arachni is fast, relatively simple, quite accurate, open source, and quite flexible in the ways in which it can be deployed. In addition, as of late, stability and testing have been given top priority in order to ensure that the framework won't exhibit performance degradation as the code base expands.
With a strong foundation laid and a clear road map, future plans for Arachni include pushing the envelope: version 0.4.2 will include improved distributed, high-performance scan features such as the new distributed crawler (under current development) and a new, cleaner, more stable and attractive web user interface, as well as general code clean-up. Version 0.5 is where a lot of interesting work will take place, as the Arachni team will be attempting to break some new ground with native DOM and JavaScript support, with the intent of allowing a depth/level of analysis beyond what's generally possible today from either open source or commercial systems. According to Tasos, most, if not all, current scanners rely on external browser engines to perform their duties, bringing with them a few penalties (performance hits, loss of control, limited inspection capabilities, design compromises, etc.), which Arachni will be able to avoid. This kind of functionality, especially from an open and flexible system, will greatly benefit web application testing in general, not just in a security-related context. Arachni success stories include incredibly cool features such as WAF Realtime Virtual Patching. At OWASP AppSec DC 2012, Trustwave SpiderLabs' Ryan Barnett discussed the concept of dynamic application security testing (DAST) exporting data that is then imported into a web application firewall (WAF) for targeted remediation. In addition to calling the Arachni scanner an "absolutely awesome web application scanner framework," Ryan describes how to integrate export data from Arachni with ModSecurity, the WAF for which he is the OWASP ModSecurity Core Rule Set (CRS) project leader. Take note here, as next month in toolsmith we're going to discuss ModSecurity for IIS as part two of this series and will follow Ryan's principles for DAST to WAF.
Other Arachni successes include highly customized scripted audits and easy incorporation into testing platforms (by virtue of its distributed features). Tasos has received a lot of positive feedback and has been pleasantly surprised that there has not been one unsatisfied user, even in Arachni's early, immature phases. Many users come to Arachni out of frustration with the currently available tools and are quite happy with the results after giving it a try, given that Arachni offers a decent alternative while simplifying web application security assessment tasks. Arachni benefits from excellent documentation and support via its wiki; be sure to give it a good once-over before beginning installation and use.

Installing Arachni

On an Ubuntu 12.10 instance, I first made sure I had all dependencies met via sudo apt-get install build-essential libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev libyaml-dev zlib1g-dev ruby1.9.1-dev ruby1.9.1. For developers' sake, this includes Gem support, so thereafter one need only issue sudo gem install arachni to install Arachni. However, the preferred method is use of the appropriate system packages from the latest downloads page. While Arachni features robust CLI use, for presentation's sake we'll describe Arachni use with the Web UI. Start it via arachni_web_autostart, which will initiate a Dispatcher and the UI server. The last step is to point your browser to http://localhost:4567, accept the default settings, and begin use.

Arachni in use

Of interest as you begin Arachni use is the Dispatcher, which spawns RPC instances and allows you to attach to, pause, resume, and shut down Arachni instances. This is extremely important for users who wish to configure Arachni instances in a high-performance grid (think a web application security scanning cluster with a master and slave configuration).
Per the wiki, "this allows scan-time to be severely decreased, by as much as n times less under ideal circumstances, where n equals the number of running instances." You can configure Arachni's web UI to run under SSL and provide HTTP Basic authentication if you wish to lock use down. Refer to the wiki entry for the web user interface for more details. Before beginning a simple scan (one Dispatcher), let's quickly review Arachni's modules and plugins; each has a tab in Arachni's primary UI view. The 45 modules are divided into Audit (22) and Recon (23) options, where the audit modules actively test the web application via inputs such as parameters, forms, cookies, and headers, while the recon modules passively test the web application, focusing on server configuration, responses, and specific directories and files. I particularly like the additional SSN and credit card number disclosure modules, as they are helpful for OSINT, as well as the Backdoor module, which looks to determine if the web application you're assessing is already owned. Of note from the Audit options is the Trainer module, which probes all inputs of a given page in order to uncover new input vectors and trains Arachni by analyzing the server responses. Arachni modules are all enabled by default. Arachni plugins offer preconfigured auto-logins (great when spidering), proxy settings, and notification options, along with some pending plugins supported in the CLI version but not yet ready for the Web UI as of v0.4.1.1. To start a scan, navigate to the Start a scan tab and confirm that a Dispatcher is running. You should see the likes of @localhost:7331 (host and port) along with the number of running scans, as well as RAM and CPU usage. Then paste a URL into the URL form and select Launch Scan, as seen in Figure 1. While the scan is running, you can monitor the Dispatcher status via the Dispatchers tab, as seen in Figure 2.
From the Dispatchers view you can choose to Attach to the running Instance (there will be multiples if you've configured a high-performance grid), which will give a real-time view of the scan statistics, percentage of completion for the running instance, scanner output, and results for findings discovered, as seen in Figure 3. Dispatchers provide Instances; Instances perform the scans. Once the scan is complete, as you might imagine, the completed results report will be available to you in the Reports tab. As an example I chose the HTML output, but realize that you can also select JSON, text, YAML, and XML, as well as binary output such as Metareport, Marshal report, and even Arachni Framework Reporting. Figure 4 represents the HTML-based results of a scan against NOWASP Mutillidae. Even the reports are feature-rich, with a summary tab with graphs and issues, remedial guidance, plugin results, along with a sitemap and configuration settings. The results are accurate, too; in my preliminary testing I found very few false positives. When Arachni isn't definitive about results, it even goes so far as to label the result "untrusted (and may in fact be false positives) because at the time they were identified the server was exhibiting some kind of anomalous behavior or there was 3rd part interference (like network latency for example)." Nice; I love truth and transparency in my test results. I am really excited to see Arachni work at scale. I intend to test it very broadly on large applications using a high-performance grid. This is definitely one project I'll keep squarely on my radar screen as it matures through its 0.4.2 and 0.5 releases. Source
-
-
-
Welcome, nice introduction.
-
I tried to visit their site and it threw an error. From that I realized the server uses .NET 2 => a rather old version of .NET => they use older technologies => probably problems adapting to newer technologies => a high probability the application is written in .NET. Look for .NET Reflector and try to reverse the source code. If it's not obfuscated, you might be able to figure out how the calculations are done, or make changes and rebuild the binary. See if you can find any useful information with WinSpy++ 1.7 | Catch22
-
https://rstcenter.com/forum/sisteme-de-operare-discutii-hardware.rst https://rstcenter.com/forum/electronica.rst Good luck with it!
-
In this post, I will present DLL injection by means of automatically unpacking Citadel. But first, the most important question:

WHAT IS DLL INJECTION AND REASONS FOR INJECTING A DLL

DLL injection is one way of executing code in the context of another process. There are other techniques to execute code in another process, but essentially this is the easiest way to do it. Since a DLL brings nifty features like automatic relocation of code and good testability, you don't have to reinvent the wheel and do everything on your own. But why should you want to inject a DLL into a foreign process? There are lots of reasons. As you are within the process address space, you have full control over the process. You can read and write arbitrary memory locations, set hooks, etc. with unsurpassed performance. You could basically do the same with a debugger, but it is way more convenient to do it in an injected DLL. Some showcases are:

creation of cheats, trainers
extracting passwords and encryption keys
unpacking packed/encrypted executables

To me, especially the opportunity of unpacking and decrypting malware is very interesting. During my day job, I had to unpack a bunch of Citadel binaries in order to prepare them for static analysis. Basically, most Citadel samples are packed by the same packer or family of packers. In the following, I will shortly summarize how it works.

THE CITADEL PACKER

In order to evade anti-virus detection, the authors of the Citadel packer (and other commercial malware) have devised an interesting unpacking procedure. Roughly, it can be summarized in the following stages: First, the unpacker stub does some inconspicuous-looking stuff in order to thwart AV detection. The code is slightly obfuscated, but not so strongly as to raise suspicion. Actually, the code that is being executed decrypts parts of the executable and jumps to it by self-modifying code. In the snippet below, you see how exactly the code is modified.
The first instruction of the function that is supposedly called is changed to a jump to the newly decrypted code.

mov [ebp+var_1], 0F6h
mov al, [ebp+var_1]
mov ecx, ptr_to_function
xor al, 0A1h
sub al, 6Eh
mov [ecx], al ; =0xE9
mov ecx, ptr_to_function
...
mov [ecx+1], eax ; delta to decrypted code
...
call eax

As you can see (after doing some math), an unconditional near jmp is inserted right at the beginning of the function to be called. Hence, by calling a supposedly normal function, the decrypted code is executed. The decrypted stub allocates some memory and copies the whole executable to that memory. Then it does some relocation (as the base address has changed) and executes the entry point of the executable. In the following code excerpt, you can see the generic calculation of the entry point:

mov edx, [ebp+newImageBase]
mov ecx, [edx+3Ch] ; e_lfanew
add ecx, edx ; get PE header
...
mov ebx, [ecx+28h] ; get AddressOfEntryPoint
add ebx, edx ; add imageBase
...
mov [ebp+vaOfEntryPoint], ebx
...
mov ebx, [ebp+vaOfEntryPoint]
...
call ebx

Here, the next stage begins. At first glance it seems the same code is executed twice, but naturally, there's a deviation in control flow. For example, the packer authors had to make sure that the encrypted code doesn't get decrypted twice. For that, they declared a global variable which in this sample initially holds the value 0x6E6C82B7. Upon first execution, the variable alreadyDecrypted is set to zero.

mov eax, alreadyDecrypted
cmp eax, 6E6C82B7h
jnz dontInitialize
...
mov alreadyDecrypted, 0
dontInitialize:
...

In the decryption function, that variable is checked for zero, as you can see/calculate in the following snippet:

mov [ebp+const_0DF2EF03], 0DF2EF03h
mov edi, 75683572h
mov esi, 789ADA71h
mov eax, [ebp+const_0DF2EF03]
mov ecx, alreadyDecrypted
xor eax, edi
sub eax, esi
cmp eax, ecx ; eax = 0
jnz dontDecrypt

Once more, you see the obfuscation employed by the packer.
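The constant arithmetic in the two obfuscated snippets above is easy to replay. Here is a small sketch (mine, not from the original post) that mimics the 8-bit (AL) and 32-bit (EAX) register wrap-around and confirms that the patched byte is 0xE9 (the opcode of a near jmp) and that the decryption guard really compares alreadyDecrypted against zero:

```python
# Replays the constant arithmetic from the Citadel unpacker disassembly above.
# The masks emulate 8-bit (AL) and 32-bit (EAX) register wrap-around.

# First snippet: the byte written over the called function's first instruction.
al = 0xF6                   # mov al, [ebp+var_1]
al ^= 0xA1                  # xor al, 0A1h
al = (al - 0x6E) & 0xFF     # sub al, 6Eh
print(hex(al))              # 0xe9 -> opcode of an unconditional near jmp

# Second snippet: the value compared against alreadyDecrypted.
eax = 0x0DF2EF03            # mov eax, [ebp+const_0DF2EF03]
eax ^= 0x75683572           # xor eax, edi
eax = (eax - 0x789ADA71) & 0xFFFFFFFF  # sub eax, esi
print(hex(eax))             # 0x0 -> the guard checks for zero
```

Working the values through by hand gives the same result the post derives: 0xF6 ^ 0xA1 = 0x57, and 0x57 - 0x6E wraps to 0xE9.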
Then, a lengthy function is executed that takes care of the actual unpacking process. It comprises the following steps:

gather chunks of the packed program from the executable memory space
BASE64-decode it
decompress it
write it section by section to the original executable's memory space, effectively overwriting all of the original code
fix imports etc.

After that, the OEP (original entry point) is called. The image below depicts a typical OEP of Citadel. Note that after a call to some initialization function, the first API function it calls is SetErrorMode.

Weaknesses

What are possible points to attack the unpacking process? Basically, you can grab the unpacked binary at two points: first, when it is completely unpacked on the heap but not yet written to the original executable's image space, and second, once Citadel has reached its OEP. The second option is the most common and generic one when unpacking binaries, so I will explain that one. Naturally, you could also write a static unpacker, and perhaps one of my future posts will deal with that. One of the largest weaknesses is the memory allocations and the setting of the appropriate access rights. As a matter of fact, in order to write to the original executable's memory, the Citadel unpacker grants RWE access to the whole image space. Hence, it has no problems accessing and executing all data and code contained in it. If you set a breakpoint on VirtualProtect, you will see what I mean. There are very distinct calls to this function, and the one setting the appropriate rights to the whole image space really sticks out. After a little research, I found two articles dealing with the unpacking process of the Citadel packer (here and here), but neither seems aware that the technique presented in the following is really easy to implement. Once you have reached the VirtualProtect call that changes the access rights to RWE, you can change the flags to RW-only; hence, execution of the unpacked binary will not be possible.
So, once the unpacker tries to jump to the OEP, an exception will be raised due to missing execution rights. Now that we know the correct location where to break the packer, how do we unpack Citadel automatically? Here DLL injection enters the scene. The basic idea is very simple:

start the Citadel binary in suspended state
inject a DLL
this DLL sets a hook on VirtualProtect, changing RWE to RW at the correct place
as backup, a hook on SetErrorMode is set; hence, when encountering unknown packers, the binary won't be executed for too long
resume the process

Some other things have to be taken care of, like correctly dumping the process and rebuilding imports, but these are out of the scope of this article. If you encounter them yourself and don't know how to handle them, just ask me ;-) It seems not too easy to find a decent DLL injector, especially one that injects a DLL before the program starts (if there is one around, please tell me). As I could not find an injector capable of injecting right at program start, I coded my own. You can find it at my GitHub page. It uses code from Jan Newger, so kudos to him. I'm particularly fond of using test-driven development employing the googletest framework.

Conclusion

The presented technique works very well against the Citadel unpacker (sorry, I don't know any other name for that unpacker). So far, I've encountered about 50 samples, and almost all can be unpacked using this technique. Furthermore, all unpackers that overwrite the original executable's image space can be unpacked this way. In future posts, I will evaluate this technique against other packers. Fun fact: it seems Brian Krebs coded Citadel after all: if you supply the command line option -z to a Citadel sample, you will see the following message box: Source
-
-
-
It's been just a few days since NIST approved Keccak as the winner of the SHA-3 competition, and it likely will be some time before we begin seeing the new hash algorithm popping up in common products and services. However, some in the cryptography community say it may not be a bad idea to start making plans to move away from the older SHA-1 algorithm fairly soon, given the quickly dropping cost of compute power. The SHA family of hash algorithms was introduced nearly 20 years ago, and various versions of it have been used as the government's approved secure hash algorithm since then. The National Institute of Standards and Technology began a search for a replacement algorithm five years ago, and Keccak emerged as the winner this week. In a message on the mailing list dedicated to the SHA-3 competition, Jesse Walker, a co-author of Skein, one of the finalists in the competition, showed some quick calculations based on the current and future costs of commodity compute power and came up with some interesting conclusions about how soon we might see a practical attack that can produce a hash collision on SHA-1. First, using a known attack as a starting point and some calculations of how many cycles one can get from a given processor right now, Walker calculated the value of a "commodity server year" as 2^63 cycles/year in 2015 and 2^65 cycles/year in 2018, assuming that Moore's Law continues to hold true for the next decade or so. He then computed the number of such years it would take to carry out the Stevens attack and found that by 2015 it would take 2^11 commodity server years to execute the attack. In 2018 the time needed for the attack could be as little as 2^7 commodity server years. That's a major decrease in the amount of time needed for a complex attack and, as Walker pointed out in his message, could put the attack within reach of some well-funded attackers within a few years.
Given that Amazon charges about $0.04 per hour to rent time on a commodity server, Walker estimates that the monetary cost of this attack would be about $173,000 by 2018, assuming again that Moore's Law remains valid. The cost could drop to $43,000 by 2021. "A collision attack is therefore well within the range of what an organized crime syndicate can practically budget by 2018, and a university research project by 2021," Walker wrote in the message, which Bruce Schneier, also a co-author of Skein, published on his blog, Schneier on Security. SHA-1 is the older of the two SHA versions still in use, and several known attacks against it have been published over the years. SHA-1 was phased out in favor of the stronger SHA-2 several years ago. Schneier said in his post that the calculations Walker provides should show that now is the time to move away from any remaining SHA-1 implementations. "The point is that we in the community need to start the migration away from SHA-1 and to SHA-2/SHA-3 now," he wrote. Source
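The back-of-the-envelope rental math is easy to reproduce. The sketch below is my own naive constant-price version (assuming roughly 8,766 hours per year and the quoted $0.04/hour, with no further price decline), so it brackets rather than exactly reproduces Walker's published dollar figures, which fold in his finer-grained yearly estimates:

```python
# Rough cost model: renting N commodity server-years at a flat hourly rate.
HOURS_PER_YEAR = 8766   # 365.25 days * 24 hours
RATE = 0.04             # USD per server-hour, Amazon's quoted price

def attack_cost(server_years: float) -> float:
    """Dollar cost of renting the given number of commodity server-years."""
    return server_years * HOURS_PER_YEAR * RATE

# 2^11 server-years comes out around $700K; 2^7 server-years lands in the
# tens of thousands of dollars, the ballpark of the later-year estimates.
print(round(attack_cost(2**11)))
print(round(attack_cost(2**7)))
```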
-
You can grab the hash_extender tool on Github! https://github.com/iagox86/hash_extender

A while back, my friend @mogigoma and I were doing a capture-the-flag contest at https://stripe-ctf.com. One of the levels of the contest required us to perform a hash length extension attack. I had never even heard of the attack at the time, and after some reading I realized that not only is it a super cool (and conceptually easy!) attack to perform, there is also a total lack of good tools for performing said attack! After hours of adding the wrong number of null bytes or incorrectly adding length values, I vowed to write a tool to make this easy for myself and anybody else who's trying to do it. So, after a couple weeks of work, here it is! Now I'm gonna release the tool, and hope I didn't totally miss a good tool that does the same thing! It's called hash_extender, and implements a length extension attack against every algorithm I could think of:

MD4
MD5
RIPEMD-160
SHA-0
SHA-1
SHA-256
SHA-512
WHIRLPOOL

I'm more than happy to extend this to cover other hashing algorithms as well, provided they are "vulnerable" to this attack — MD2, SHA-224, and SHA-384 are not. Please contact me if you have other candidates and I'll add them ASAP!

The attack

An application is susceptible to a hash length extension attack if it prepends a secret value to a string, hashes it with a vulnerable algorithm, and entrusts the attacker with both the string and the hash, but not the secret. Then, the server relies on the secret to decide whether or not the data returned later is the same as the original data. It turns out, even though the attacker doesn't know the value of the prepended secret, he can still generate a valid hash for {secret || data || attacker_controlled_data}! This is done by simply picking up where the hashing algorithm left off; it turns out, 100% of the state needed to continue a hash is in the output of most hashing algorithms!
We simply load that state into the appropriate hash structure and continue hashing. TL;DR: given a hash that is composed of a string with an unknown prefix, an attacker can append to the string and produce a new hash that still has the unknown prefix.

Example

Let's look at a step-by-step example. For this example: The server sends data and signature to the attacker. The attacker guesses that H is MD5 simply by its length (it's the most common 128-bit hashing algorithm), based on the source, or the application's specs, or any way they are able to. Knowing only data, H, and signature, the attacker's goal is to append append to data and generate a valid signature for the new data. And that's easy to do! Let's see how.

Padding

Before we look at the actual attack, we have to talk a little about padding. When calculating H(secret + data), the string (secret + data) is padded with a '1' bit and some number of '0' bits, followed by the length of the string. That is, in hex, the padding is a 0x80 byte followed by some number of 0x00 bytes and then the length. The number of 0x00 bytes, the number of bytes reserved for the length, and the way the length is encoded depend on the particular algorithm and blocksize. With most algorithms (including MD4, MD5, RIPEMD-160, SHA-0, SHA-1, and SHA-256), the string is padded until its length is congruent to 56 bytes (mod 64). Or, to put it another way, it's padded until the length is 8 bytes less than a full (64-byte) block (the 8 bytes being the size of the encoded length field). There are two hashes implemented in hash_extender that don't use these values: SHA-512 uses a 128-byte blocksize and reserves 16 bytes for the length field, and WHIRLPOOL uses a 64-byte blocksize and reserves 32 bytes for the length field. The endianness of the length field is also important: MD4, MD5, and RIPEMD-160 are little-endian, whereas the SHA family and WHIRLPOOL are big-endian. Trust me, that distinction cost me days of work!
In our example, length(secret || data) = length("secretdata") is 10 (0x0a) bytes, or 80 (0x50) bits. So, we have 10 bytes of data ("secretdata"), 46 bytes of padding (80 00 00 ...), and an 8-byte little-endian length field (50 00 00 00 00 00 00 00), for a total of 64 bytes (or one block). Put together, it looks like this:

0000 73 65 63 72 65 74 64 61 74 61 80 00 00 00 00 00 secretdata......
0010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0030 00 00 00 00 00 00 00 00 50 00 00 00 00 00 00 00 ........P.......

Breaking down the string, we have:

"secret" = secret
"data" = data
80 00 00 ... — the 46 bytes of padding, starting with 0x80
50 00 00 00 00 00 00 00 — the bit length in little endian

This is the exact data that H hashed in the original example.

The attack

Now that we have the data that H hashes, let's look at how to perform the actual attack. First, let's just append append to the string. Easy enough! Here's what it looks like:

0000 73 65 63 72 65 74 64 61 74 61 80 00 00 00 00 00 secretdata......
0010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0030 00 00 00 00 00 00 00 00 50 00 00 00 00 00 00 00 ........P.......
0040 61 70 70 65 6e 64 append

The hash of that block is what we ultimately want to a) calculate, and b) get the server to calculate. The value of that block of data can be calculated in two ways:

by sticking it in a buffer and performing H(buffer)
by starting at the end of the first block, using the state we already know from signature, and hashing append starting from that state

The first method is what the server will do, and the second is what the attacker will do. Let's look at the server first, since it's the easier example.
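The block layout above is easy to generate programmatically. Here is a small sketch (mine, not from the post) that builds the Merkle–Damgård-style padding used by MD5 and reproduces the secretdata block byte for byte:

```python
import struct

def md5_padding(message_len: int) -> bytes:
    """MD5-style padding: an 0x80 byte, zero bytes until the length is
    56 mod 64, then the 64-bit *bit* length. MD5 encodes the length
    little-endian; the SHA family would use big-endian instead."""
    zeros = (55 - message_len) % 64
    return b"\x80" + b"\x00" * zeros + struct.pack("<Q", message_len * 8)

block = b"secretdata" + md5_padding(len(b"secretdata"))
assert len(block) == 64                     # exactly one 64-byte block
assert block[10] == 0x80                    # padding starts right after the data
assert block[-8:] == struct.pack("<Q", 80)  # 10 bytes = 80 bits, little-endian
print(block.hex())
```

Changing the length encoding to `struct.pack(">Q", ...)` and the modulus to the larger blocksizes would give the SHA-512/WHIRLPOOL variants described above.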
Server's calculation

We know the server will prepend secret to the string, so we send it the string minus the secret value:

0000 64 61 74 61 80 00 00 00 00 00 00 00 00 00 00 00 data............
0010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0030 00 00 50 00 00 00 00 00 00 00 61 70 70 65 6e 64 ..P.......append

Don't be fooled by this being exactly 64 bytes (the blocksize) — that's only happening because secret and append are the same length. Perhaps I shouldn't have chosen that as an example, but I'm not gonna start over! The server will prepend secret to that string, creating:

0000 73 65 63 72 65 74 64 61 74 61 80 00 00 00 00 00 secretdata......
0010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0030 00 00 00 00 00 00 00 00 50 00 00 00 00 00 00 00 ........P.......
0040 61 70 70 65 6e 64 append

And hashes it to the following value. For those of you playing along at home, you can prove this works by copying and pasting this into a terminal:

echo '
#include <stdio.h>
#include <openssl/md5.h>

int main(int argc, const char *argv[])
{
  MD5_CTX c;
  unsigned char buffer[MD5_DIGEST_LENGTH];
  int i;

  MD5_Init(&c);
  MD5_Update(&c, "secret", 6);
  MD5_Update(&c, "data"
      "\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
      "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
      "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
      "\x00\x00\x00\x00"
      "\x50\x00\x00\x00\x00\x00\x00\x00"
      "append", 64);
  MD5_Final(buffer, &c);
  for (i = 0; i < 16; i++) {
    printf("%02x", buffer[i]);
  }
  printf("\n");
  return 0;
}' > hash_extension_1.c

gcc -o hash_extension_1 hash_extension_1.c -lssl -lcrypto

./hash_extension_1

All right, so the server is going to be checking the data we send against the signature 6ee582a1669ce442f3719c47430dadee. Now, as the attacker, we need to figure out how to generate that signature!
Client's calculation

So, how do we calculate the hash of the data shown above without actually having access to secret? Well, first, we need to look at what we have to work with: data, append, H, and H(secret || data). We need to define a new function, H', which uses the same hashing algorithm as H, but whose starting state is the final state of H(secret || data), i.e., signature. Once we have that, we simply calculate H'(append), and the output of that function is our hash. It sounds easy (and is!); have a look at this code:

echo '
#include <stdio.h>
#include <openssl/md5.h>

int main(int argc, const char *argv[])
{
  int i;
  unsigned char buffer[MD5_DIGEST_LENGTH];
  MD5_CTX c;

  MD5_Init(&c);
  MD5_Update(&c, "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", 64);
  c.A = htonl(0x6036708e); /* <-- This is the hash we already had */
  c.B = htonl(0xba0d11f6);
  c.C = htonl(0xef52ad44);
  c.D = htonl(0xe8b74d5b);
  MD5_Update(&c, "append", 6); /* This is the appended data. */
  MD5_Final(buffer, &c);
  for (i = 0; i < 16; i++) {
    printf("%02x", buffer[i]);
  }
  printf("\n");
  return 0;
}' > hash_extension_2.c

gcc -o hash_extension_2 hash_extension_2.c -lssl -lcrypto

./hash_extension_2

The output is, just like before, 6ee582a1669ce442f3719c47430dadee, so we know the signature is right. The difference is, we didn't use secret at all! What's happening!? Well, we create a MD5_CTX structure from scratch, just like normal. Then we take the MD5 of 64 'A's. We take the MD5 of a full (64-byte) block of 'A's to ensure that any internal values — other than the state of the hash itself — are set to what we expect. Then, after that is done, we replace c.A, c.B, c.C, and c.D with the values that were found in signature: 6036708eba0d11f6ef52ad44e8b74d5b. This puts the MD5_CTX structure in the same state as it finished in originally, and means that anything else we hash — in this case append — will produce the same output as it would have had we hashed it the usual way.
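Python's hashlib does not expose the internal state the way OpenSSL's MD5_CTX does, so replaying the same trick in Python needs an MD5 whose starting state and length counter can be overridden. The sketch below is my own minimal implementation (not part of the original post): it forges H(secret || data || padding || append) from the signature alone, then checks the forgery against a straight hashlib computation of the full string, which is what the server would do:

```python
import hashlib
import math
import struct

M32 = 0xFFFFFFFF
# Round constants and shift amounts from the MD5 specification (RFC 1321).
K = [int(abs(math.sin(i + 1)) * 2**32) & M32 for i in range(64)]
S = [7, 12, 17, 22] * 4 + [5, 9, 14, 20] * 4 \
  + [4, 11, 16, 23] * 4 + [6, 10, 15, 21] * 4

def _rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & M32

def _compress(state, block):
    a0, b0, c0, d0 = state
    M = struct.unpack("<16I", block)
    A, B, C, D = state
    for i in range(64):
        if i < 16:   F, g = (B & C) | (~B & D), i
        elif i < 32: F, g = (D & B) | (~D & C), (5 * i + 1) % 16
        elif i < 48: F, g = B ^ C ^ D, (3 * i + 5) % 16
        else:        F, g = C ^ (B | ~D), (7 * i) % 16
        F = (F + A + K[i] + M[g]) & M32
        A, D, C, B = D, C, B, (B + _rotl(F, S[i])) & M32
    return tuple((x + y) & M32 for x, y in zip((a0, b0, c0, d0), (A, B, C, D)))

def md5_from_state(data, state, processed_len):
    """Continue MD5 over `data`, starting from `state`, with `processed_len`
    bytes (a multiple of the blocksize) already counted toward the length."""
    total = processed_len + len(data)
    data += b"\x80" + b"\x00" * ((55 - total) % 64) + struct.pack("<Q", total * 8)
    for off in range(0, len(data), 64):
        state = _compress(state, data[off:off + 64])
    return struct.pack("<4I", *state)

# --- The attack: we know data, append, len(secret) and the signature only ---
secret, data, append = b"secret", b"data", b"append"
signature = hashlib.md5(secret + data).digest()   # simulates the server handout

# Glue padding for the first block, exactly as in the hex dumps above.
glue = b"\x80" + b"\x00" * ((55 - len(secret) - len(data)) % 64) \
     + struct.pack("<Q", (len(secret) + len(data)) * 8)
state = struct.unpack("<4I", signature)           # resume from the signature
forged = md5_from_state(append, state, len(secret) + len(data) + len(glue))

# The server prepends the secret and hashes everything; the forgery matches.
assert forged == hashlib.md5(secret + data + glue + append).digest()
print(forged.hex())
```

Note that the forger never touches `secret` after the signature is issued; deleting the first `hashlib.md5(secret + data)` line and pasting in a captured signature gives the same result.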
We use htonl() on the values before setting the state variables because MD5 — being little-endian — outputs its values in little-endian as well.

Result

So, now we have this string:

0000 64 61 74 61 80 00 00 00 00 00 00 00 00 00 00 00 data............
0010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0030 00 00 50 00 00 00 00 00 00 00 61 70 70 65 6e 64 ..P.......append

And this signature for H(secret || data || append): 6ee582a1669ce442f3719c47430dadee

And we can generate the signature without ever knowing what the secret was! So, we send the string to the server along with our new signature. The server will prepend the secret, hash the result, and come up with the exact same hash we did (victory!).

This example took me hours to write. Why? Because I made about a thousand mistakes writing the code. Too many NUL bytes, not enough NUL bytes, wrong endianness, wrong algorithm, used bytes instead of bits for the length, and all sorts of other stupid problems. The first time I worked on this type of attack, I spent from 2300h till 0700h trying to get it working, and didn't figure it out till after sleeping (and with Mak's help). And don't even get me started on how long it took to port this attack to MD5. Endianness can die in a fire.

Why is it so difficult? Because this is crypto, and crypto is immensely complicated and notoriously difficult to troubleshoot. There are lots of moving parts, lots of side cases to remember, and it's never clear why something is wrong, just that the result isn't right. What a pain!

So, I wrote hash_extender. hash_extender is (I hope) the first free tool that implements this type of attack. It's easy to use and implements this attack for every algorithm I could think of.
Here's an example of its use:

$ ./hash_extender --data data --secret 6 --append append --signature 6036708eba0d11f6ef52ad44e8b74d5b --format md5
Type: md5
Secret length: 6
New signature: 6ee582a1669ce442f3719c47430dadee
New string: 64617461800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005000000000000000617070656e64

If you're unsure about the hash type, you can let it try different types by leaving off the --format argument. I recommend using the --table argument as well if you're trying multiple algorithms:

$ ./hash_extender --data data --secret 6 --append append --signature 6036708eba0d11f6ef52ad44e8b74d5b --out-data-format html --table
md4 89df68618821cd4c50dfccd57c79815b data80000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000P00000000000000append
md5 6ee582a1669ce442f3719c47430dadee data80000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000P00000000000000append

There are plenty of options for how you format inputs and outputs, including HTML (where you use %NN notation), CString (where you use \xNN notation, as well as \r, \n, \t, etc.), hex (such as how the hashes were specified above), etc. By default, I tried to choose what I felt were the most reasonable options:

Input data: raw
Input hash: hex
Output data: hex
Output hash: hex

Here's the help page for reference:

Defense

So, as a programmer, how do you solve this? It's actually pretty simple. There are two ways:

Don't trust a user with encrypted data or signatures, if you can avoid it.
If you can't avoid it, then use HMAC instead of trying to do it yourself. HMAC is designed for this.

HMAC is the real solution. HMAC is designed for securely hashing data with a secret key. As usual, use constructs designed for what you're doing rather than doing it yourself. That's the key to all crypto! [pun intended]

Source
-
Posted in tutorials because it also explains what happens behind the scenes, beyond the part with the kid. Source
-
Ethernet Cat5e - 1 Giga
M2G replied to fedorffixxzz's topic in Operating Systems and Hardware Discussions
It was concluded that it's not optimal to perform error correction on packets when they arrive corrupted. If there is noise and a given packet is transmitted with errors, the receiver requests that packet again and the source retransmits it. This happens because it's faster to retransmit the packet than to correct it. A few years ago, when network speeds were lower, correction algorithms were used, but that is no longer the practice. This is the case for TCP/IP. With UDP, packets are simply lost, and you can see this in video streaming: the "little squares" that appear on screen from time to time are pixels that should be there but aren't. Those are in fact lost packets. As for the wires, the materials they are built from limit how much data you can send over them at any given moment. I can't give you an exact answer, because many factors influence the maximum transmission rate through a cable. For example, a cable with better shielding can reach higher performance, but the extra insulation layer makes it harder to bend, which is something system administrators have to think about when designing a network. Maximum speeds are also influenced by cable length, the amount of copper in the wires, and many other aspects.
Ethernet Cat5e - 1 Giga
M2G replied to fedorffixxzz's topic in Operating Systems and Hardware Discussions
Doesn't it seem logical to use more wires the more bandwidth you have? Where would those bits travel if not through wires? You can't reach speeds of 1 Gbps over 4 wires because of the physical limitations of the wires. That's why more are needed.
I told you to use a dictionary, not an array. You put the command in the dictionary's key. In a dictionary you can look up keys with O(1) efficiency. After that you can write something like this:

if (commandDictionary.ContainsKey("download"))
{
    string[] parameters = (string[])commandDictionary["download"];
    // parameters now holds all the parameters for the download command,
    // and you can do whatever you want with them
}
else
{
    // if the command doesn't exist, display a message
    Console.WriteLine("Command not found");
}
-
You said it sort of imitates cmd, which is why I said console. You can make it so that pressing Tab no longer moves focus to the next element. Or, more simply, use a combination such as Ctrl+Tab and the problem is solved. I already answered that you don't need to check whether a certain element exists in the array. If you read about array.Length and how to write loops with foreach, you'll understand. The elements will be iterated from the first to the last; it doesn't matter how many there are.
-
You display them in the console when you press Tab, for example. If you have the command, you can get the value for that key. Since the value is an array, you iterate over it with a foreach, so it doesn't matter how many elements it has; it will display them all. It works whether you have one element or 1000. Alternatively, you can use a for loop that runs up to array.Length, i.e., up to the number of elements in the array. You don't need to know how many there are. You can also set it up so that each Tab press shows one more element from the list, like on Linux.
-
You use String.Split to split the string on spaces. You keep the results in an array (or whatever you want), knowing that the command is in the first position and the parameters in the following ones. You can store the commands in a Dictionary, with the command name as the key and an array as the value. That array represents the parameters. When you enter the command, you can make it display all the parameter possibilities, i.e., that array and its elements, combined however you like.
-
Great news, guys: it was just announced that the upcoming Linux 3.7 kernel will incorporate support for multiple ARM System on Chip (SoC) platforms. Having all ARM platforms supported in a single kernel package is amazing news for everyone, from end users to Android and ARM-based hardware manufacturers. “This is a pretty significant branch. It's the introduction of the first multiplatform support on ARM,” said Olof Johansson in the Git commit page. “And with this (and the later branch) merged, it is now possible to build one kernel that contains support for highbank, vexpress, mvebu, socfpga, and picoxcell.” Until now, ARM platforms have been kept separate in the Linux kernel, each in its own directory under mach-<mach>/include/mach/*, and each had to list the device trees to compile for its boards in mach-<mach>/Makefile.boot. “They now need to move out to a common location instead, and this branch moves a large number of those out to include/linux/platform_data. It's a one-time move and once it settles in, we should be good for quite a while,” stated Olof Johansson, the lead developer for the Chrome OS kernel (x86/ARM). When Linux kernel 3.7 is out, it will initially support the following ARM platforms: VExpress, Highbank, SoC FPGA, Picoxcell and Mvebu. Support for more ARM platforms will be added in future releases of the Linux kernel. As you all know already, Linux kernel 3.6 was released just a few days ago, bringing new features for the Btrfs filesystem, support for suspending to memory and disk at the same time, TCP small queues, TCP "Fast Open" mode, support for safe swapping over NBD/NFS, better support for EXT4 quotas, support for the PCIe D3cold power state, VFIO, support for the SMBv2 protocol, and much more. Source
-
-