Everything posted by Matt
-
These days malware is becoming more advanced. Malware analysts use a lot of debugging software and tooling to analyze malware and spyware, and malware authors in turn use techniques to detect the presence of automated analysis systems such as debuggers and virtual machines. In this article we will explore some of the commonly used techniques and practices for evading malware debugging software and sandboxes.

Tools required:
Immunity Debugger
C/C++ compiler (MSVC or GCC)
Virtual machine (VMware or VirtualBox)

Introduction to debuggers. A debugger is a piece of software used to analyze and instrument executable files. In order to analyze and intercept machine code, debuggers use the system calls and APIs provided by the operating system. To intercept a single block of code, debuggers use a single-stepping operation, which is turned on by setting the Trap Flag (TF) in the EFLAGS register. Debuggers also use several types of breakpoints in order to stop at a particular memory address:
1. Software breakpoints
2. Hardware breakpoints
3. Memory breakpoints
4. Conditional breakpoints

Software breakpoints are breakpoints where the debugger replaces the original instruction byte with an INT3 instruction (opcode 0xCC); executing it raises a breakpoint exception, which is passed back to the debugger to handle. In Immunity Debugger you can view your software breakpoints by pressing ALT + B:

Breakpoints
Address    Module    Active   Disassembly                               Comment
00401FF0   extracto  Always   JE SHORT extracto.00401FF7
00401FFC   extracto  Always   MOV EBP,ESP
0040200A   extracto  Always   CALL DWORD PTR DS:[<&KERNEL32.ExitPr

Hardware breakpoints use the four debug address registers provided by the processor to intercept execution at a particular address. These registers are DR0, DR1, DR2 and DR3; the appropriate bits in the DR7 register are then flipped to enable the breakpoint and to set its type and length. Once a hardware breakpoint has been set and is reached, the CPU raises an INT 1 interrupt (the single-stepping/debug exception), and the debugger sets up an appropriate handler to catch it.

Memory breakpoints: for memory breakpoints, guard pages are used. The debugger marks the page containing the address as a guard page and sets up a handler; when that page is accessed, an exception is raised and the handler is called. Debuggers support several kinds of memory breakpoints:
1. memory breakpoint on BYTE access
2. memory breakpoint on WORD access
3. memory breakpoint on DWORD access

Conditional breakpoints: conditional breakpoints are managed by the debugger, and the break is reported to the user only if a certain condition is met. For example, you can set up a conditional breakpoint in Immunity Debugger with the following condition:

[ESP] == 0x0077ff89

which will only trigger when the value at the top of the stack is 0x0077ff89. Conditional breakpoints are especially useful when you want to monitor calls to a specific API only when it is called with certain parameters.

Debugging API on Windows. Windows provides a debugging API by default, which debuggers use to debug applications; it is known as the Windows debugging API. The following is sample code to debug an application using the Windows debugging API (globals such as pi, pEntryPoint, OrgByte, rtAddr and fChance, and helpers such as GetEP, putBP and GetFileNameFromHandle, are defined elsewhere in the full listing):

void EnterDebugLoop(const LPDEBUG_EVENT DebugEv)
{
    DWORD dwContinueStatus = DBG_CONTINUE; // exception continuation
    char buffer[100];
    CONTEXT lcContext;

    for (;;)
    {
        // Wait for a debugging event to occur. The second parameter indicates
        // that the function does not return until a debugging event occurs.
        WaitForDebugEvent(DebugEv, INFINITE);

        // Process the debugging event code.
        switch (DebugEv->dwDebugEventCode)
        {
        case EXCEPTION_DEBUG_EVENT:
            // Process the exception code. When handling exceptions, remember
            // to set the continuation status parameter (dwContinueStatus).
            // This value is used by the ContinueDebugEvent function.
            switch (DebugEv->u.Exception.ExceptionRecord.ExceptionCode)
            {
            case EXCEPTION_ACCESS_VIOLATION:
                // First chance: pass this on to the system.
                // Last chance: display an appropriate error.
                break;

            case EXCEPTION_BREAKPOINT:
                if (!fChance)
                {
                    dwContinueStatus = DBG_CONTINUE; // exception continuation
                    fChance = 1;
                    break;
                }
                lcContext.ContextFlags = CONTEXT_ALL;
                GetThreadContext(pi.hThread, &lcContext);
                ReadProcessMemory(pi.hProcess, (LPCVOID)lcContext.Esp, (LPVOID)&rtAddr, sizeof(void *), NULL);

                if (DebugEv->u.Exception.ExceptionRecord.ExceptionAddress == pEntryPoint)
                {
                    printf("\n%s\n", "Entry Point Reached");
                    WriteProcessMemory(pi.hProcess, DebugEv->u.Exception.ExceptionRecord.ExceptionAddress, &OrgByte, 0x01, NULL);
                    lcContext.ContextFlags = CONTEXT_ALL;
                    GetThreadContext(pi.hThread, &lcContext);
                    lcContext.Eip--; // Move back one byte
                    SetThreadContext(pi.hThread, &lcContext);
                    FlushInstructionCache(pi.hProcess, DebugEv->u.Exception.ExceptionRecord.ExceptionAddress, 1);
                    dwContinueStatus = DBG_CONTINUE; // exception continuation
                    putBP();
                    break;
                }
                // First chance: display the current instruction and register values.
                break;

            case EXCEPTION_DATATYPE_MISALIGNMENT:
                // First chance: pass this on to the system.
                // Last chance: display an appropriate error.
                dwContinueStatus = DBG_CONTINUE;
                break;

            case EXCEPTION_SINGLE_STEP:
                printf("%s", "Single stepping event ");
                dwContinueStatus = DBG_CONTINUE;
                break;

            case DBG_CONTROL_C:
                // First chance: pass this on to the system.
                // Last chance: display an appropriate error.
                break;

            default:
                // Handle other exceptions.
                break;
            }
            break;

        case CREATE_THREAD_DEBUG_EVENT:
            //dwContinueStatus = OnCreateThreadDebugEvent(DebugEv);
            break;

        case CREATE_PROCESS_DEBUG_EVENT:
            printf("%s", GetFileNameFromHandle(DebugEv->u.CreateProcessInfo.hFile));
            break;

        case EXIT_THREAD_DEBUG_EVENT:
            // Display the thread's exit code.
            //dwContinueStatus = OnExitThreadDebugEvent(DebugEv);
            break;

        case EXIT_PROCESS_DEBUG_EVENT:
            // Display the process's exit code.
            //dwContinueStatus = OnExitProcessDebugEvent(DebugEv);
            return;

        case LOAD_DLL_DEBUG_EVENT:
        {
            char *sDLLName;
            sDLLName = GetFileNameFromHandle(DebugEv->u.LoadDll.hFile);
            printf("\nDLL Loaded = %s Base Address 0x%p\n", sDLLName, DebugEv->u.LoadDll.lpBaseOfDll);
            //dwContinueStatus = OnLoadDllDebugEvent(DebugEv);
            break;
        }

        case UNLOAD_DLL_DEBUG_EVENT:
            // Display a message that the DLL has been unloaded.
            //dwContinueStatus = OnUnloadDllDebugEvent(DebugEv);
            break;

        case OUTPUT_DEBUG_STRING_EVENT:
            // Display the output debugging string.
            //dwContinueStatus = OnOutputDebugStringEvent(DebugEv);
            break;

        case RIP_EVENT:
            //dwContinueStatus = OnRipEvent(DebugEv);
            break;
        }

        // Resume executing the thread that reported the debugging event.
        ContinueDebugEvent(DebugEv->dwProcessId, DebugEv->dwThreadId, dwContinueStatus);
    }
}
); printf("Passed Argument is %s\n", OrgName); pEntryPoint = GetEP(fp); // GET the entry Point of the Application fclose(fp); ReadProcessMemory(pi.hProcess ,pEntryPoint, &OrgByte, 0x01, NULL); // read the original byte at the entry point WriteProcessMemory(pi.hProcess ,pEntryPoint,"\xcc", 0x01, NULL); // Replace the byte at entry point with int 0xcc EnterDebugLoop(&debug_event); // User-defined function, not API return 0; } int main(int argc ,char **argv) { DEBUG_EVENT debug_event = {0}; STARTUPINFO si; FILE *fp = fopen(argv[1], "rb"); ZeroMemory( &si, sizeof(si) ); si.cb = sizeof(si); ZeroMemory( ?, sizeof(pi) ); CreateProcess ( argv[1], NULL, NULL, NULL, FALSE, DEBUG_ONLY_THIS_PROCESS, NULL,NULL, &si, ? ); printf("Passed Argument is %s\n", OrgName); pEntryPoint = GetEP(fp); // GET the entry Point of the Application fclose(fp); ReadProcessMemory(pi.hProcess ,pEntryPoint, &OrgByte, 0x01, NULL); // read the original byte at the entry point WriteProcessMemory(pi.hProcess ,pEntryPoint,"\xcc", 0x01, NULL); // Replace the byte at entry point with int 0xcc EnterDebugLoop(&debug_event); // User-defined function, not API return 0; } Anti-debugging techniques. Now in order to frustrate the malware analyst, malware can be detected in the presence of debuggers and show up in unexpected events. In order to detect the presence of a debugger, malware can either read some values or it can use API present to detect if the malware is being debugged or not. One of the simple debugger detection tricks includes using the winAPI function known as Now in order to frustrate the malware analyst, malware can be detected in the presence of debuggers and show up in unexpected events. In order to detect the presence of a debugger, malware can either read some values or it can use API present to detect if the malware is being debugged or not. One of the simple debugger detection tricks includes using the winAPI function known as KERNEL32.IsDebuggerPresent. #define WIN32_LEAN_AND_MEAN #include <windows.h> #include <stdio.h> int main(int argc, char **argv) { if (IsDebuggerPresent()) { MessageBox(HWND_BROADCAST, "Debugger Detected", ""Debugger Detected"", MB_OK); exit(); } MessageBox(HWND_BROADCAST, "Debugger Not Detected", ""Debugger Not Detected"", MB_OK); return 0; } Detecting a debugger using PEB: When the process is created using CreateProcess API, and if the creation flag is set as DEBUG_ONLY_THIS_PROCESS then a special field is set in the PEB data structure in the memory. #define WIN32_LEAN_AND_MEAN #include <windows.h> #include <stdio.h> int __naked detectDebugger() { __asm { ASSUME FS:NOTHING MOV EAX,DWORD PTR FS:[18] MOV EAX,DWORD PTR DS:[EAX+30] MOVZX EAX,BYTE PTR DS:[EAX+2] RET } } int main(int argc, char **argv) { if (detectDebugger()) { MessageBox(HWND_BROADCAST, "Debugger Detected", ""Debugger Detected"", MB_OK); exit(); } MessageBox(HWND_BROADCAST, "Debugger Not Detected", ""Debugger Not Detected"", MB_OK); return 0; } Detection using HEAP flags: When a program is run under a debugger, and is created using the debug process creation flags. The heap flags are changed. These Flags exit at a different location depending upon the version of the operating system. On Windows NT based systems these flags exist at 0x0c offset from heap base. ON Windows Vista based systems and later they exist at location 0x40 offset from the heap base. These two flags initialized are 'Force flags' and 'flags'. 
The ProcessHeap base pointer points to a _HEAP structure, defined as follows (reference: http://www.nirsoft.net/kernel_struct/vista/HEAP.html):

typedef struct _HEAP
{
    HEAP_ENTRY Entry;
    ULONG SegmentSignature;
    ULONG SegmentFlags;
    LIST_ENTRY SegmentListEntry;
    PHEAP Heap;
    PVOID BaseAddress;
    ULONG NumberOfPages;
    PHEAP_ENTRY FirstEntry;
    PHEAP_ENTRY LastValidEntry;
    ULONG NumberOfUnCommittedPages;
    ULONG NumberOfUnCommittedRanges;
    WORD SegmentAllocatorBackTraceIndex;
    WORD Reserved;
    LIST_ENTRY UCRSegmentList;
    ULONG Flags;
    ULONG ForceFlags;
    ULONG CompatibilityFlags;
    ULONG EncodeFlagMask;
    HEAP_ENTRY Encoding;
    ULONG PointerKey;
    ULONG Interceptor;
    ULONG VirtualMemoryThreshold;
    ULONG Signature;
    ULONG SegmentReserve;
    ULONG SegmentCommit;
    ULONG DeCommitFreeBlockThreshold;
    ULONG DeCommitTotalFreeThreshold;
    ULONG TotalFreeSize;
    ULONG MaximumAllocationSize;
    WORD ProcessHeapsListIndex;
    WORD HeaderValidateLength;
    PVOID HeaderValidateCopy;
    WORD NextAvailableTagIndex;
    WORD MaximumTagIndex;
    PHEAP_TAG_ENTRY TagEntries;
    LIST_ENTRY UCRList;
    ULONG AlignRound;
    ULONG AlignMask;
    LIST_ENTRY VirtualAllocdBlocks;
    LIST_ENTRY SegmentList;
    WORD AllocatorBackTraceIndex;
    ULONG NonDedicatedListLength;
    PVOID BlocksIndex;
    PVOID UCRIndex;
    PHEAP_PSEUDO_TAG_ENTRY PseudoTagEntries;
    LIST_ENTRY FreeLists;
    PHEAP_LOCK LockVariable;
    LONG *CommitRoutine;
    PVOID FrontEndHeap;
    WORD FrontHeapLockCount;
    UCHAR FrontEndHeapType;
    HEAP_COUNTERS Counters;
    HEAP_TUNING_PARAMETERS TuningParameters;
} HEAP, *PHEAP;

The following C program can be used to detect the presence of a debugger using the heap flags:

#include <stdio.h>

int main(int argc, char *argv[])
{
    unsigned int var;

    __asm
    {
        MOV EAX, FS:[0x30]     // PEB
        MOV EAX, [EAX + 0x18]  // PEB->ProcessHeap
        MOV EAX, [EAX + 0x0C]  // heap Flags field (NT-based offset)
        MOV var, EAX
    }

    if (var != 2)
    {
        printf("Debugger Detected");
    }
    return 0;
}

Virtual machine detection or emulation detection. Malware samples are usually analyzed in an isolated environment such as a virtual machine. To thwart analysis inside a virtual machine, malware includes anti-VM protection, or it simply exits when it finds itself running in an isolated environment. The following techniques can be used to detect whether a sample is running inside a VM:
1. Timing based
2. Artifact based

Timing-based detection. "The Time Stamp Counter (TSC) is a 64-bit register present on all x86 processors since the Pentium. It counts the number of cycles since reset" (Wikipedia). The RDTSC instruction stores the result in EDX:EAX. If the code is being emulated or single-stepped, the difference between two consecutive readings changes dramatically: on a real host machine the delta is usually less than 100, but if the code is emulated the difference will be huge.

#include <stdio.h>

int main(int argc, char *argv[])
{
    unsigned int time1 = 0;
    unsigned int time2 = 0;

    __asm
    {
        RDTSC
        MOV time1, EAX
        RDTSC
        MOV time2, EAX
    }

    if ((time2 - time1) > 100)
    {
        printf("%s", "VM Detected");
        return 0;
    }
    printf("%s", "VM not present");
    return 0;
}

The above program uses the time-stamp counter instruction to detect the presence of a virtual machine or emulator.

Artifact-based detection. Malware also leverages the file, network, and device artifacts left behind by a virtual-machine configuration, checking for their presence to detect a debugger or virtual environment. A good example is registry artifacts: VMware creates registry entries for its virtual disk controller, which can be located in the registry under the following key.
HKLM\SYSTEM\CurrentControlSet\Services\Disk\Enum\0 holds a value such as "SCSI\Disk&Ven_VMware_&Prod_VMware_Virtual_S&Rev_1.0\4&XXX&XXX"

#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char  lszValue[100] = {0};
    DWORD dwSize = sizeof(lszValue);
    HKEY  hKey;

    RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                 "SYSTEM\\CurrentControlSet\\Services\\Disk\\Enum",
                 0L, KEY_READ, &hKey);
    RegQueryValueEx(hKey, "0", NULL, NULL, (LPBYTE)lszValue, &dwSize);

    printf("%s", lszValue);
    if (strstr(lszValue, "VMware"))
    {
        printf("VMware Detected");
    }
    RegCloseKey(hKey);
    return 0;
}

Source: resources.infosecinstitute.com
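As an addendum to the article above: the hardware breakpoint mechanism described at the start (DR0-DR3 plus the control bits in DR7) can be exercised through the documented thread-context API. The following is a minimal sketch, not taken from the original article; the helper name is ours and it assumes a 32-bit target thread that is suspended or stopped on a debug event:

#define WIN32_LEAN_AND_MEAN
#include <windows.h>

// Arm a hardware execution breakpoint at dwAddr using DR0/DR7.
BOOL SetHwExecBreakpoint(HANDLE hThread, DWORD dwAddr)
{
    CONTEXT ctx = {0};
    ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;

    if (!GetThreadContext(hThread, &ctx))
        return FALSE;

    ctx.Dr0  = dwAddr;           // breakpoint address
    ctx.Dr7 &= ~(0xFUL << 16);   // R/W0 = 00 (break on execute), LEN0 = 00 (1 byte)
    ctx.Dr7 |= 0x1;              // L0   = 1  (enable DR0 for this thread)

    // When EIP reaches dwAddr the CPU raises the INT 1 debug exception,
    // which the debug loop above sees as EXCEPTION_SINGLE_STEP.
    return SetThreadContext(hThread, &ctx);
}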
-
Charging your iPhone just got a bit riskier. At Black Hat USA Wednesday in Las Vegas, a trio of researchers from The Georgia Institute of Technology demonstrated how to abuse USB functionality of Apple iPhones to compromise the device. Using a Beagleboard, the researchers built a proof-of-concept malicious charger they refer to as Mactans. "After pairing, Mactans can do anything that can be done through the USB connection," said Yeongjin Jang, a PhD student at Georgia Tech who was joined on stage with fellow researchers Billy Lau and Chengyu Song. That includes creating a developer provisioning profile and adding applications onto the device without the user's permission. To do this, the researchers had to first steal the UDID [unique device identifier] for the device, which Jang described as "trivial." Once the new provisioning profile is created and deployed on the phone, a malicious application can be loaded by the attacker. In the case of their demo, they replaced a legitimate version of the Facebook app with a malicious one that they secretly loaded onto the phone in roughly a minute. Though Jang explained that the app is still sandboxed, it can still call private APIs and be used for a number of nefarious tasks, including taking screenshots of the victim's password as it is being entered or even placing telephone calls at the behest of the attacker. There are a few possible attack scenarios for Mactans, explained Lau. For example, USB outlets in airports or hotels could be targeted. In addition, state-sponsored attackers that are well-financed could build a device that looks like a regular charger but actually is malicious, he said. The device does not need to be jailbroken for Mactans to work. However, if the device is locked while it is charging, the Mactans attack will not work, according to Jang. Following the disclosure of the attack, Apple implemented a feature in iOS7 to notify users when they plug in any USB device that attempts to establish a data connection. Sursa Securityweek.com
-
New figures show that Google Play overtook the Apple App Store in app downloads by 10% in the second quarter of the year, yet Apple's revenues were still 2.3 times higher. Given that most people around the world use Android devices, it is not surprising that the number of apps downloaded from Google Play has surpassed the number downloaded from the Apple App Store. Even so, Apple wins when it comes to profit. A new App Annie Index report reveals that during the second quarter of the year Google Play exceeded the Apple App Store by 10% in terms of app downloads, but Apple generated 2.3 times more revenue than Google Play. "Although Google Play surpassed the iOS App Store, there is still a large gap when it comes to app monetization," App Annie explains on its blog. Apparently, Apple earned a large share of its revenue from gaming apps - almost 75% of total app revenue came from game downloads, up from 70% in the first quarter of the year. In addition, music and social-networking apps helped boost Apple App Store revenue. Almost half of all app revenue comes from users in the US and Japan. The countries that download the most apps from Google Play are the US, South Korea and India, while the countries that download the most apps from the Apple App Store are the US, China and Japan. The countries with the fastest growth in app downloads are Russia, India and Brazil. As for the kind of apps people prefer to download, games continue to rank first on both Google Play and the Apple App Store. Second place on Google Play goes to communication apps, followed by tools. On the Apple App Store, second and third place go to entertainment and photo & video respectively. Last week Google announced that it expects more than 70 million Android tablet activations by the end of this year, a huge increase from the end of 2012, when Google counted almost 10 million activated tablets. With numbers like these, app downloads will keep growing. Google announced that there are currently 1 million apps on Google Play, along with more than 50 billion downloads. Apple, in turn, has impressive figures of its own: earlier this month the App Store passed 50 billion app downloads, with 900,000 apps available. Source: Technology News - CNET News
-
Black Hat 2013 Security researchers have demonstrated how to exploit widely deployed SCADA systems to spoof data to the operator, and remotely control equipment such as pumps in oil pipelines. The exploits were demonstrated live at Black Hat 2013 in Las Vegas on Thursday, and saw security engineers from energy sector process automation company Cimation remotely control the valves within a pretend oil well. The simulation rig consisted of a liquid container that stood in for an oil well, connected to a pump that connected to an isolation valve, which then connected to a simulated tank; the system was controlled by a programmable logic controller (PLC) within a SCADA system. Engineers Eric Forner and Brian Meixell demonstrated a way to remotely control the PLC that sends signals to devices on the simulated pipeline, and were able to turn pumps on and off – which in the real world could cause an oil pipeline to rupture. They also were able to send contrasting data to the Human Machine Interface (HMI) that sends data up to an operator. "It's not rocket science, but it's extremely dangerous," Forner says. "In real life that would be a pipe blowout. That could be oil or acid or anything." The researchers were able to do this because many PLCs are exposed to the internet with public IP addresses, and they frequently don't have Ethernet built-in, but instead have an old Ethernet module that plugs into their backplane. These Ethernet modules typically run an ancient version of Linux and are very easy to exploit, Forner says. "It's usually just an embedded piece of hardware and runs VxWorks or some BusyBOX distro or RTOS, or some of them – God forbid – write their own OSs" Once inside the Ethernet system, the engineers can then start to send commands to the PLC itself. Though companies implement safety logic in their PLCs that is designed to avoid damaging scenarios such as a pump being turned on in an already highly-pressurized system, this can be worked around, they said. Once the researchers gain access to the PLC, they can simply overwrite the logic with new safety logic that lacks these protections, and then enter malicious commands. As of 2012, there are some 93,793 nodes on the public internet listening on port 502, according to the 2012 Internet Census, and the researchers suspect a large number of these are PLCs out in the field. They were also able to spoof data to the Human Machine Interface (HMI) system which allows field workers and remote administrators to monitor the system. HMIs are frequently vulnerable to trivial attacks. "A lot of them are Windows-based machines and woefully out of date, and the reason is you're in production and you never want them to go down. Every day you're not producing oil or some chemical is money down the drain," Forner says. The team was able to start overloading the pretend oil tank, while outputting data to the HMI that said the fluid level in the tank was falling. This would cause an operator to typically pump more into the tank, so even if the underlying PLC has not been compromised, this provides another route of attack. In a final flourish, they uploaded their own binaries to the cracked HMI and had a game of Solitaire. They also discussed ways to get at PLCs not kept on the company network. Hackers can do this by cracking their way into a company's enterprise network, then proceeding down the stack until they reach the PLC. 
In an admission sure to induce brown trousers in people who live near oil pipelines, the researchers said that energy networks are very poorly protected. "A lot of the firewalls that are implemented are put in place because people need to comply to a standard, and they end up leaving all traffic to pass," Meixell says. "Even worse is no firewalls, where everything is on a flat network – anything from your SAP up to your WebSphere can talk directly. It'll be on the same LAN as your PLCs and controller hardware," Forner says with a hint of maniacal glee – an emotion that he radiated throughout the presentation, and which climaxed when the test rig began spraying dyed-green water onto the assembled cheering audience. ® Source: TheRegister.co.uk
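As a footnote to the article above: the attack surface described comes down to unauthenticated industrial protocols listening on routable addresses. Purely as an illustration (this is not part of the researchers' demo, and the target address is a placeholder), the sketch below shows how little it takes to query a single register from a device speaking Modbus/TCP on port 502 – there is no login step in the protocol at all:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <device-ip>\n", argv[0]); return 1; }

    /* Modbus/TCP "read holding registers" request: MBAP header + PDU.
       Transaction ID 0x0001, protocol ID 0, length 6, unit ID 1,
       function code 0x03, start address 0, quantity 1. */
    unsigned char req[] = { 0x00,0x01, 0x00,0x00, 0x00,0x06, 0x01, 0x03, 0x00,0x00, 0x00,0x01 };
    unsigned char resp[260];

    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(502);                  /* standard Modbus/TCP port */
    inet_pton(AF_INET, argv[1], &sa.sin_addr);

    if (connect(s, (struct sockaddr *)&sa, sizeof(sa)) != 0) { perror("connect"); return 1; }

    send(s, req, sizeof(req), 0);                /* no authentication required */
    ssize_t n = recv(s, resp, sizeof(resp), 0);  /* device answers with register data */

    if (n >= 11)
        printf("register 0 = %u\n", (resp[9] << 8) | resp[10]);  /* value follows byte count */

    close(s);
    return 0;
}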
-
Analysis Like many new technologies, the Linux operating system got its big break in high performance computing. There is a symbiotic relationship between Linux and HPC that seems natural and normal today, and the Linux Foundation, which is the steward of the Linux kernel and other important open source projects – and, importantly, the place where Linus Torvalds, the creator of Linux, gets his paycheck – thinks that Linux was more than a phenomenon on HPC iron. The organization goes so far as to say that Linux helped spawn the massive expansion in supercomputing capacity we have seen in the past two decades. It is not a surprise at all that the Linux Foundation would have such a self-serving opinion, which it put forth in a report released this week. That report slices and dices data from the Top500 supercomputer list, which was established 20 years ago and ranks the largest supercomputers by their sustained performance on the Linpack Fortran benchmark test. And as you can see, over the past two decades, Linux has come in and essentially replaced Unix as the operating system of choice for supercomputers. Here is the ranking of operating systems by machine count on the Top500 list since 1993: Linux has come to dominate the top-end of the supercomputing racket in two decades Basically, Unix and Linux have flip-flopped market shares over the two decades that the Top500 ranking has been around. And depending on how you want to look at it, it has been a meteoric rise for Linux and an utter catastrophe for Unix. So how did Linux come to dominate the upper echelons of supercomputing? "By offering a free, flexible, and open source operating system, Linux made it cost effective to design and deliver custom hardware and system architecture designs for the world's top-performing supercomputers," write Libby Clark and Brian Warner of the Linux Foundation. "As a result, the proportion of computers running Linux on the Top500 list saw a meteoric rise starting in the early 2000s to reach more than 95 percent of the machines on the list today." It is certainly true that Cray, Silicon Graphics, IBM, Sun Microsystems, Digital, Convex, and others charged big bucks for their Unix or proprietary operating systems, but Linux took off not just because it was cheap and malleable. Linux and the Message Passing Interface (MPI) method of distributing work and data across a cluster of machines both got their start in 1991, and had Torvalds decided to do something else, we might be talking about the rise of one of the open source BSD Unixes instead of Linux in this story. But Linux did something that BSD did not: it caught the attention of hackers, academics, IT vendors, and became a practical alternative for any of the Unixes for a modestly powered server by the mid-1990s if you knew what you were doing. This is also when Beowulf clustering, based on MPI and other messaging and collective systems software, became a practical alternative to build modestly powered parallel supercomputers. By the end of the 1990s, Linux and MPI were not only cheaper bets, but were safer ones because all of the major systems vendors were promising to port at least one and sometimes more flavors of Linux to their various platforms. And not just x86 platforms, but any captive processors they might etch and weld into systems. 
El Reg would contend that had Linux not existed, another open source Unix operating system would have risen to fill the void that Linux filled, because as the Linux Foundation has correctly pointed out in its rah-rah paper, no one can afford to pay list price for a software license or a support contract for an operating system on a machine that has thousands of nodes. And if the national labs, academia, and enterprises with HPC systems had coalesced around an open source Unix, this could have filtered down to the rest of the systems business from the top. But as it was, Linux came at supercomputing from the bottom, and its champion Linus Torvalds caught the imagination of people at just the time that open source software was ready to go mainstream and challenge both Unix and Windows – and at just a time, during the dot-com boom, when the major server makers (including those with HPC systems) wanted to look cool. This story is always the same. Unix came out of AT&T and academia, and gradually became a safe option by the mid-1980s for workstations and then for this new kind of network-connected system called the server. And Unix machines raised hell and ate into the systems business – including HPC systems – from below. While it is clear that the Top500 supercomputers run Linux for the most part, which Linux is largely a mystery. Take a look at the distribution of operating systems from the June 2013 list, by both system count and by sustained Linpack performance:
Linux, as a group of different OSes, dominates the Top500 supers
Of the 500 machines on the list, 417 of them are running "Linux," which could be any Linux at all. SUSE Linux went through the top hundred machines on the list and reckons that about a third of them are running one or another variant of SUSE Linux Enterprise Server, including the modified versions peddled by Cray and Silicon Graphics. The company tells El Reg that it is very hard to say how many of the remaining 400 machines on the list are running SLES or the development version, OpenSUSE. Only 83 of the machines mention their OS release by name. This is not exactly good data, but even across those 83 machines, there is a wide variety of Linux types, which have been tweaked and tuned by vendors to match their iron – just as you would expect from national labs and academic centers. It would be interesting to see a distribution of Linux distros among the 269 commercial customers on the Top500 list. All but one of them run Linux, but again, we can't see from the Top500 data which particular Linuxes. Are enterprises any more likely to pay for Linux support than the national labs? Do they do what smartasses running small businesses do, which is to buy one or two licenses for support for a few nodes and then leave all the other ones naked? Do they get killer discounts on Linux support contracts, like IBM cooked up with Red Hat last year to put Enterprise Linux on its supercomputers and clusters? If you paid list price for RHEL for a BlueGene/Q system, a rack would cost you $1.1m, or $1,350 per node, per year. But the special pricing IBM was offering dropped it down to a nickel under $44 per node. And that is probably too expensive for some HPC shops. The Top500 is interesting as a leading indicator, and has predicted the decline of Unix in commercial data centers – a slide that is going on as we speak and shows no sign of abating.
Unix used to be about half of server revenues and a relatively small slice of unit shipments a decade and a half ago, and now about 155,000 machines out of the 9.67 million machines sold in 2012 were running Unix, and Unix is about 16 per cent of the $52.5bn in revenues (that is Gartner data). Linux, if you believe the operating system distributions done by IDC, is about a fifth of worldwide revenues and is still growing its share. But it is not growing fast enough to knock Windows off its perch – not by a long shot. Windows has the dominant revenue share at the moment (right around 50 per cent), and accounts by far for the lion's share of server shipments (much more than 50 per cent, but IDC does not say in its publicly accessible data). So, yes, Linux has taken over HPC and it is doing very well on new big-data jobs like Hadoop. And it is also the preferred platform for hyperscale data center giants such as Google and Facebook. But it has a long way to go before it dominates the enterprise data center like it does these markets. But as enterprises want to build applications more like Google and Facebook – and there is every reason to believe that the kinds of scalable, distributed apps that these companies build are the harbinger of the future, much as Beowulf clusters were for parallel supercomputing 15 years ago – we may see Linux, or rather all of the Linuxes as a collective, kick it into overdrive. ® Source: TheRegister.co.uk
-
Black Hat 2013 Security researchers say they have developed a trick to take over Gmail and Outlook.com email accounts by shooting down victims' logout requests - even over a supposedly encrypted connection. And their classic man-in-the-middle attack could be used to compromise electronic ballot boxes to rig elections, we're told. Ben Smyth and Alfredo Pironti of the French National Institute for Research in Computer Science and Control (INRIA) announced they found a way to exploit flaws in Google and Microsoft's web email services using an issue in the TLS (Transport Layer Security) technology, which encrypts and secures website connections. Full details of the attack are yet to be widely disseminated - but it was outlined for the first time in a demonstration at this year's Black Hat hacking convention in Las Vegas on Wednesday. In short, we're told, it uses a TLS truncation attack to block victims' account logout requests so that they unknowingly remain logged in at their PC: when the request to sign out is sent, the attacker injects an unencrypted TCP FIN message to close the connection. The server-side therefore doesn't get the request and is unaware of the abnormal termination. The pair explained : The attack does not rely on installing malware or similar shenanigans: the miscreant pulling off the trick must simply put herself between the victim and the network. That could be achieved, for example, by setting up a naughty wireless hotspot, or plugging a hacker-controlled router or other little box between the PC and the network. The researchers warned that shared machines – even un-compromised computers – cannot guarantee secure access to systems operated by Helios (an electronic voting system), Microsoft (including Account, Hotmail, and MSN), nor Google (including Gmail, YouTube, and Search). "This blocking can be accomplished by a so-called 'man in the middle'," Pironti told El Reg. "Technically, whatever piece of hardware is relaying data between you and Google could decide to stop relaying at some point, and do the [logout] blocking. "In practice, this is very easy to do: with wireless networks (e.g. setting up a rogue access point) or with wired networks (e.g. by adding a router between your cable and the wall plug - alternatively this could be done with custom-built hardware, which could be very small)." Block and tackle Several attacks might be possible as a result of the vulnerability, according to Pironti. "In the context of voting, a single malicious poll station worker could do the attack, voting at his pleasure for any voter. He sets up his man-in-the-middle, then waits for a designated victim to enter the voting booth. The man-in-the-middle device blocks the relevant messages. Then the malicious worker enters the voting booth (e.g. with the excuse to check that the machine is operational) and votes on the victim's behalf." Webmail attacks on shared computers in settings such as libraries are also possible. An attacker simply needs to access a computer after a mark incorrectly believes she has signed out. Unbeknown to the user, the hacker's hardware will have blocked the relevant messages, yet the user must be shown what appears to be a "you've signed out" page - the core element of the con. After that, it's easy for an adversary to use the computer to access the user's email. "We believe this [problem] is due to a poor understanding of the security guarantees that can be derived from TLS and the absence of robust web application design guidelines. 
In publishing our results, we hope to raise awareness of these issues before more advanced exploits, based upon our attack vector, are developed," the researchers concluded. The attack developed by INRIA is apparently possible thanks to a de-synchronisation between the user's and the server's perspective of the application state: the user receives feedback that her sign-out request has been successfully executed, whereas the server is unaware of the user's request. "It follows intuitively that our attack vector could be exploited in other client-server state transitions," Smyth and Pironti explained. Mitigating the attack could be achieved by reliably notifying the user of server-side state changes. "Unfortunately, the HTTP protocol is unsuited to this kind of notification", we're told, so the researchers advocate the use of technologies such as the SPDY networking protocol and AJAX (asynchronous JavaScript and XML, a web development framework). The two researchers shared their findings with Google and Microsoft; the web advertising giant acknowledged the discovery in its application security hall of fame. Smyth and Pironti's presentation of their research was titled Truncating TLS connections to violate beliefs in web applications. The researchers were seemingly able to exploit the Helios electronic voting system to cast ballots on behalf of voters, take full control of Microsoft Live accounts, and gain temporary access to Google accounts. Subtle reasons make Microsoft's webmail service more exposed than its Google equivalent, Pironti explained. "Google happens to be less exposed for two reasons," Pironti told El Reg. "First, our attack relies on a de-synchronisation at the server side: it happens that Google ensures synchronisation every five minutes, which makes our attack [only] work within this five minutes window. Second, Microsoft allows you to change your password without re-typing the old one, so once we access the user account, we can change its password and get full control." Pironti said the research didn't look at other popular webmail systems, such as Yahoo!'s, so he can't say for sure whether they are vulnerable or not. "We suspect many other services are broken, but we didn't look into details," he said. ® Source: TheRegister.co.uk
-
Hackers are using the Google Code developer site to spread malware, according to security firm Z-Scaler. Zscaler ThreatLabZ security researcher Chris Mannon, reported uncovering the scheme, warning that it is a marked development on criminals' usual attack strategy. "Malware writers are now turning to commercial file-hosting sites to peddle their wares. If these legitimate file hosts are not scanning the content they are hosting, it may force network administrators to block the service altogether. The kicker is that this time we see that Google Code seems to have swallowed the bad pill," he wrote. He said businesses using the service should adapt their security protocols accordingly to deal with the new threat. "This incident sets a precedent that no file-hosting service is beyond reproach. Blind trust of specific domains should not be tolerated from an organisational or personal perspective. So set those security privileges to kill and keep one eye open for shady files coming from even a seemingly trusted location." The professional-focused site is one of many hit by cyber criminals in recent months. Other websites that have been recently targeted include the Apple Developer and Nasdaq community forums. Both the attacks were designed to steal users' password information rather than alter them to become malware-distribution tools. Security experts have said the attack is part of a growing trend within the hacker community. FireEye regional technical lead Simon Mullis said he expects to see more similar attacks in the very near future. "We see this all of the time. In many cases we see fragments of multi-stage attacks for specific campaigns hosted across a variety of intermediate locations. Any site with user-editable content can be used to host part of the malware attack lifecycle," he said. "The key part here: if you cannot detect the initial inbound exploit, then the rest of the attack can be hidden or obfuscated using this approach. This technique has been used for years (see Aurora in 2009, Pingbed in 2011 and MiniDuke this year) and the traditional security model and simple discrete sandboxing has no answer for it." Sursa V3.co.uk
-
The authors of the Andromeda botnet are on the verge of releasing a radically updated, more dangerous version of the tool, according to Trend Micro researchers. Trend Micro reported uncovering an advert announcing the upgrade on an unnamed cyber black market, warning businesses to be extra vigilant. "The Andromeda botnet is still active in the wild and not yet dead. In fact, it's about to undergo a major update real soon," read the blog post. "Just recently, however, we've uncovered that there is an ongoing development in the Andromeda botnet. This latest announcement was posted just recently and basically says that Andromeda code is going to be updated heavily. They suspended the sales of plugins to focus more on developing the new version." The authors promised the upgraded version will feature several enhanced features in the post. "The project is undergoing a global modernisation. In the near future there will be a few important but not visible changes," read the hacker's advert, translated from Russian. "We will update the admin principal. All plugins will undergo fundamental changes both in format and structure." The changes will reportedly fix a number of bugs in the hack tool and make it quicker and easier for criminals to use. Trend Micro reported the criminals behind Andromeda also announced a sale on other tools. "Rootkit and Socks5, which are popular plugins, are also now free of charge. Previously, the rootkit was sold $300 and $1,000 for Socks5 with BackConnect," read Trend's statement. The new version's exact release date remains unknown. The Andromeda botnet has been an ongoing problem facing businesses since first appearing in 2011. The current version of Andromeda was discovered in March. Sursa V3.co.uk
-
The head of NASA's famed Jet Propulsion Laboratory (JPL) took to the stage Thursday to share the wisdom he has picked up over years of overseeing the agency's wildly successful Mars exploration programme. Brian Muirhead told an audience of security professionals that his success has come from an environment that tempered big risks with relentless testing, preparation and creativity. Serving as chief engineer at the JPL in Pasadena, Muirhead has helped to lead the teams that built, launched and landed the Sojourner, Spirit and Curiosity Mars rovers, amongst numerous other missions to the red planet. In doing so, Muirhead has helped to spearhead an era in which the US space agency has undertaken ambitious new missions while operating in shorter timeframes and on smaller budgets. That process, says Muirhead, has required a management style that places a premium on creativity and precision. Muirhead said that many of the breakthroughs in the missions have come from building a staff based not on IQ, but on EQ – a combination of drive, judgement and resilience that allows engineers to develop and follow through on new technologies. “I'm looking for EQ, I'm looking for people with that drive and passion,” he explained. Risk management also played a major role in the missions. Prior to the 1996 start of the Discovery missions, only eight of 30 Mars missions had succeeded. While that rate has improved to 15/32, venturing to the red planet remains risky. As the missions progressed, the landings only grew riskier. The descent of the 900kg Curiosity rover consisted of a complex series of manoeuvres which were famously billed as “seven minutes of terror.” To help temper that risk, Muirhead and his team stuck to an extensive testing regimen, ensuring that the many phases and components of the lander and rover were able to function under multiple scenarios. “We let ourselves know, we let the management know and we let the public know that we've done everything we can do,” he said, “but there is still inherent risk in what we do.” Muirhead himself sees management as a mixture of both maintaining a team and making their jobs easier to do, a concept he refers to as 'grease and glue.' “Glue keeps the team together, which is important, but I think more importantly it's the grease: we break the barriers, we cut the red tape.” Source: V3.co.uk
-
Adobe chief security officer Brad Arkin is preaching a unique brand of education which he says has helped to make his company's products more secure and given employees valuable professional skills. Arkin, who joined the company in 2008, has overseen a transition at Adobe which saw the company move from offering its products as boxed discs and digital downloads to hosted cloud services. “It has been a big thing for us; when you are putting software in a box, it is really just the code and you don't have any control over the environment they are putting that code on top of,” he told V3. “When we are writing code for our servers, we control in theory every aspect of it.” With the transition from shipping products to hosting them on servers, the company has had to focus on new areas such as managing and securing servers, protecting infrastructure and preventing attacks on company systems. To help guard the cloud infrastructure and improve the security of Adobe products, Arkin instituted a unique system based on a martial arts structure of 'belt' ranks. By reading security materials and online seminar material developed by security staff, employees earn a “white belt” ranking, a basic competency which can be obtained over a few days. Further on, employees can spend more time studying materials and training over the course of several weeks to get a “green belt” certification, then a “brown belt” programme designed to run six months, and a top “black belt” certification obtainable over the course of a year or more. The structure then plays a vital part in how development teams are assembled. Arkin and his team mandate that each project has a certain number of team members with green and white certifications, as well as brown belt and black belt developers overseeing security. In addition to making products more secure, Arkin says Adobe employees are teaching themselves valuable professional skills. “We went from getting not just the security geeks to do the training, but also the career-oriented people,” he explained. “You go from a less-sexy project to one that is more exciting.” The formula has proven so successful that Adobe has exported its security programme to other firms. The company has joined the Safecoat project, which is now offering Adobe's training materials to other firms for free. Arkin hopes that the model will help other firms to implement best practices and improve the security of their products, particularly those which interact with Adobe's own platforms. He is also calling on the experience of other firms to help Adobe in its transition from software vendor to cloud provider. Arkin said that as he has encountered various hurdles in the company's efforts to take its products online, Silicon Valley neighbours such as Salesforce.com and Netflix have been valuable sources of information. “The good news is we are not the first company to encounter these problems,” he said. “We talk with all these guys and we can cherry pick what works and put that in our environment.” Source: V3.co.uk
-
The researcher behind the discovery of the infamous Android 'Master Key' vulnerability gave his long-awaited technical presentation detailing the high-profile mobile flaw. Bluebox chief technology officer Jeff Forristal said that the flaw was originally discovered while working on a mapping application. In order to project his mapping data onto the Maps application in Android, he resorted to a technique in which code was inserted into the APK of the application. Before long, he realised the trick could have larger implications. “Then I stopped and said I'm pretty sure this is not something I am supposed to be able to do,” Forristal mused. After additional research, the vulnerability was disclosed to Google in February. In the weeks and months that followed, both Google and its OEM partners received and distributed a patch for the flaw. While deployment varied by vendor, Forristal noted that Samsung was particularly diligent in fixing the flaw. “They actually issued an update to fix this bug on an old Gingerbread Samsung device,” he said. “Props that they didn't just fix their new stuff, they went back to fix their old Gingerbread stuff.” Less than a month before Forristal was set to present the flaw at Black Hat, he issued a teaser blog to publicly introduce the flaw. The post touched off a media firestorm and speculation that nearly every Android device was vulnerable. Forristal said that while the hysteria generated by the report was exaggerated, so were counter-claims that the overwhelming majority of users had untrusted application sources disabled and would thus be protected by Google Play. He cited a company study which found some 69 percent of users have the protection disabled. “A lot of people were essentially saying that the number of users who were changing this setting was statistically near zero, they only go to Google Play,” he argued. The Bluebox CTO noted that trusted sources such as Amazon's Android store and enterprise mobile app services require users to disable the untrusted sources protection. Source: V3.co.uk
-
Facebook has announced that it has finished migrating its users to secure browsing, with all 1.15 billion active user accounts now accessing the site over encrypted HTTPS by default. The social network first offered secure browsing as an option in January 2011, and then slowly began making it the default in various regions. It flipped the switch for North American users in November 2012, but it took several more months for it to follow suit for the rest of the world. "Now that https is on by default, virtually all traffic to www.facebook.com and 80% of traffic to m.facebook.com uses a secure connection," Facebook engineer Scott Renfro wrote in a blog post on Wednesday. "Our native apps for Android and iOS have long used https as well." The migration process took as long as it did, Renfro explained, because switching all of Facebook over to secure browsing wasn't as simple as just switching the URL protocol from HTTP to HTTPS. There were a variety of up-front engineering puzzles to solve, such as how to ensure that Facebook's authentication cookies were only visible over secure connections, and how to upgrade users to secure connections "in flight" if they happened to navigate to a Facebook page from an insecure link. Zuck & Co. also needed to give its application-development partners time to upgrade their apps to support HTTPS, because insecure third-party apps would stop working if they were embedded in secure Facebook pages. Typically, developers were given 150 days to switch their apps over. And then there was the problem of mobile devices that lacked full support for HTTPS. Because Facebook dare not scare away mobile users – mobile ads made up 41 per cent of its ad revenue in its most recent quarter – there needed to be a way to downgrade the user's connection to HTTP on phones that couldn't handle the encryption. But the biggest issue, Renfro said, was performance. Secure sessions require extra chitchat between client and server, which can bog down connections if you're in a part of the world where network conditions are poor. To help alleviate the problem, Facebook has been deploying custom load balancers around the world to help route traffic to its data centers, while simultaneously improving the efficiency of its secure session handshaking. The move sees Facebook join a growing number of companies that have made secure connections standard for their online services. Google made HTTPS the default for all web searches in 2011, for example, and Twitter switched to always-on encryption the following year. But there are still additional hurdles ahead. Like many other companies, Facebook has committed to switching to 2048-bit encryption keys for additional security. It also hopes to switch to elliptic-curve cryptography algorithms in the near future, which are more computationally efficient, and to implement stricter session controls as it phases out the option to opt out of HTTPS. "Turning on https by default is a dream come true, and something Facebook's Traffic, Network, Security Infrastructure, and Security teams have worked on for years," Renfro wrote. "We're really happy with how much of Facebook's traffic is now encrypted and are even more excited about the future changes we're preparing to launch." ® Sursa TheRegister.co.uk
-
Opscode, the commercial side of the open source Chef configuration management tool beloved by Google, Facebook, and IBM, has warned customers that a flaw in an unnamed third-party application has left its wiki and ticketing system pwned. "The attacker gained escalated privileges and downloaded the user database for the wiki and ticketing system," the company said in a blog post on Thursday. "The user database that was accessed contained usernames, email addresses, full names, and hashed passwords." "We believe these passwords are adequately secure (the software in question uses the PBKDF2 algorithm), but we will be forcing a password change on the ticketing and wiki systems. If you use this password on other systems, we suggest choosing a new password on those systems as well. We will also contact the affected users via email today." The company was alerted to the attack by internal security monitoring, the attacker has been kicked out, and now a full investigation is underway using forensics the team has gathered. There's no word as to whether the police are involved. Opscode says there's "currently no evidence" that hosted data has been copied or compromised, but it recommends users who use the same username and password for hosted accounts should also change passwords. It's an embarrassing issue for a company that has become something of a cloud and datacenter darling of late, but it could happen to anyone these days and such openness is to be commended. The company promises more details as they become available. ® Sursa TheRegister.co.uk
-
Byte, you can address me when you log in with your old account, the one that probably got banned. Let's be clear for everyone: whether one person makes a topic about it or ten people do, it's the same thing. I've said it before and I'll repeat it: you have no way of knowing what I do, you're just guessing like everyone else on this forum does. Look around the forum and show me what you have created, what tutorials you have written with your own hands, show me posts you have made. Of course this excludes the tutorials created by the old users, meaning topics that are a year and a half or two years old. If you look at each category, I don't know if more than 4 things have been created by a user on RST in the last year. Instead, the nitpickers have shown up, the ones who can't wait for someone to post something, take a wrong step, or agree with a moderator, so they can all jump down his throat. I read all the news regardless of category, and then only the items related to security get posted. Those who comment about me, I ask to mind their own business and do what I do themselves. For me the post count and reputation don't matter. If the post count could stop being displayed altogether, that would be no problem at all. I've never bragged to anyone about having 1k posts and I've never pointed it out. If that bothers some people... I'll see you tomorrow morning for a new round of "tabloid" news. Good luck
-
You really do have a serious problem in your head. I'm not trying to put anyone in a bad light, because I have no reason to; I'm not after anything. The only one doing that is you, but you're too thick-headed to see what the real problem here is. You posted something useful, fine: 2 far-fetched tutorials, still OK, they are tutorials, but the remaining 75 posts you have exist only to contradict people - useless suggestions, remarks about church and science. The conclusion I've reached is this: you post ideas and suggestions and wait for opinions, but as soon as someone tells you "I don't like it" or "I'm against it", that person becomes the blackest soul in your eyes. You struggle to change his opinion, to show him that actually he's the idiot and not you... etc. There's no point in discussing this any further; people have had their say. For me only Nytro's opinion matters: if he tells me to stop, I will; since he said it's OK, I'll keep doing my thing. This is the last post I address to the person "The.Legend", because I don't want to make off-topic posts, and besides, I've decided to stop replying to every idiot.
-
It's simple why people don't go after me: because I don't bother anyone. I don't pick on any moderator, I post something interesting and useful, and if every news item gets 30+ views, that's fine by me. For d33niss: know that I read the news first (and, naturally, understand it) and then post it. If you look at some of the posted news items, there are even another 1-2 links to the same story published on a different site, which shows that I study the specialist sites first and then decide what to post. Why do I post? Because rather than going off-topic or making other crap posts like Legend's, I prefer to post something related to the subject of the forum.
-
Apparently it has become a big problem that I post 10 security news items every morning. No problem, I'll keep posting them.
-
The.Legend, I understand that you have a serious problem in your head, and then with everyone on the forum, but you're already 101%+ pathetic in everything you do. All that crap you're talking about is security news, as the thread title says. The fact that it bothers you that I'm the one posting it is strictly your problem. // It would probably be better if nothing got posted anymore and only the living legend wrote on the forum, since he doesn't like security news, or maybe he just doesn't understand it. Get a real life.
-
Why Audit?

Harriet Beecher Stowe is credited with the quote "Human nature is above all things lazy" – while I prefer to think of myself as 'efficient' rather than lazy, I think the principle is sound. When faced with the choice of executing a task in a difficult or a simple way (with no difference in the outcome), people will naturally choose the simple way. This leaves more physical and mental resources available for the truly challenging things in life. The same is true for most users when selecting a password: to the user, the complexity of the password doesn't seem to matter, while the process of changing a password is time-consuming and cumbersome. Given those two facts, the user is going to opt for a password they are confident they can remember when needed.

To 'fight' human nature and to help ensure our users implement complex passwords, we enforce password complexity. This (we believe) ensures that our users will start using passwords that are hard to crack. But what we forget is the difference between brute-forced passwords and easily guessable passwords.

In Flavio's presentation about the exponential nature of password cracking costs he showed that, from a brute-force point of view, each increase in the length of the password and in the number of possible characters significantly increases the time it takes to grind through all the possible password combinations. But that is only about brute forcing, and there are two things to remember about brute forcing:

1. It is dumb: you are just trying all of the possible combinations until you find a hit.
2. Assuming you know all the characters used and the password length (or the range of lengths the user could have used), you will eventually discover the password.

So a brute-force attack will eventually work, though "eventually" can be a very long time. Jason Fossen (via SANS) has a great spreadsheet that shows how long it would take a set of 100 computers, each cracking 200,000,000 passwords per second (for a combined rate of 20,000,000,000 passwords attempted per second). Assume the password requirement is to have at least one of each of the following:

Lowercase letter [abcdefghijklmnopqrstuvwxyz]
Uppercase letter [ABCDEFGHIJKLMNOPQRSTUVWXYZ]
Number [0123456789]
Special character [!@#$%^&*()_+{}|:"<>?~`-=[]\;',./]

That gives a pool of 93 possible characters. If we assume the minimum password length is 12 characters, it will take approx 2,422,432 days (~6,636 years) to try every possible combination (though you would have to be a pretty unlucky attacker if the last combination your great (great, great, great, etc.) grandchild tried was the one that worked). The average time to crack should be half of that, which reduces it to 1,211,216 days or ~3,318 years – which clearly means the account or the data is most likely useless or non-existent by the time you crack it. If the password fails to crack quickly, most attackers will reallocate their cracking resources to another password. As a result, most security professionals are not concerned about a password being cracked that far in the future.
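To make the exponential growth concrete, here is a minimal illustrative sketch (not from the article) that computes the naive worst-case and average brute-force times for the parameters above. It treats the full 93^12 keyspace; the spreadsheet cited above applies the "at least one of each class" constraint and its own assumptions, so its figures differ from this simple upper bound.

/* Naive brute-force time estimate: full keyspace divided by the guess rate.
 * Parameters follow the article: 93-character pool, 12-character passwords,
 * 100 machines x 200,000,000 guesses/second = 2e10 guesses/second. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pool   = 93.0;   /* possible characters */
    const int    length = 12;     /* minimum password length */
    const double rate   = 20e9;   /* combined guesses per second */

    double combinations = pow(pool, length);
    double seconds      = combinations / rate;
    double days         = seconds / 86400.0;
    double years        = days / 365.25;

    printf("Keyspace     : %.3e combinations\n", combinations);
    printf("Worst case   : %.0f days (~%.0f years)\n", days, years);
    printf("Average case : %.0f days (~%.0f years)\n", days / 2.0, years / 2.0);
    return 0;
}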
But brute forcing assumes users pick completely random combinations, and my premise is that they will not. As a security professional I am less concerned about a password being brute-forceable; I am worried about the password being guessable. By guessable I don't mean plucking it out of the air, but deriving it from a dictionary word with some basic changes. What kinds of changes?

Well, if there is a password length requirement, take a dictionary word and add numbers at the end or beginning to make it the right length. If you need a special character, try one of the characters above the number keys [!@#$%^&*()], as they are the most common, and put it at the beginning or end of the word (those characters require two fingers to press, so they are easiest to type first or last). Do common letter -> number substitutions (replace a's with 4's, e's with 3's, etc.).

Let's say our user has a password of password3 (yes, you guessed it, it is the fourth time they have changed their password) – but then a new security admin starts and emails everyone saying that a password complexity policy will be enforced at the next password change. The new rules are the same as above (the password must contain at least one lowercase letter, uppercase letter, number and special character, and be at least 12 characters long). Damn those security people, don't they know that folks are trying to get shi…work done! So at the next password reset the user has a think and comes up with the following password:

!Password123

This meets all the requirements, has the right characters and is the right length – but for someone doing more intelligent password guessing (versus dumb brute forcing) this is going to be on their guess list. So the password meets the complexity requirements but is not complex to guess. The security guy overhears the worker bragging in the cafeteria that he has 'beaten' the password requirements, and updates the policy: passwords can no longer contain a dictionary word… so the next time the user has to change his password he switches it to:

!P4ssOrd123

Again, it meets the password requirements but is still guessable.

Source: Resources.Infosecinstitute.com
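As a footnote to the article above, here is a minimal, purely illustrative sketch of the kind of "intelligent guessing" it describes: take one base dictionary word and apply the common mangling rules (capitalize, append digits, wrap with a common special character, and do letter-to-number substitutions). Real wordlist manglers in common cracking tools apply far more rules than this.

/* Generate a handful of guesses from one dictionary word using the rules
 * described in the article. The base word "password" is just an example. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Replace a->4, e->3, o->0, s->5 in place. */
static void leetify(char *s)
{
    for (; *s; ++s) {
        switch (tolower((unsigned char)*s)) {
        case 'a': *s = '4'; break;
        case 'e': *s = '3'; break;
        case 'o': *s = '0'; break;
        case 's': *s = '5'; break;
        }
    }
}

int main(void)
{
    const char *word = "password";
    char base[64], leet[64];

    /* Capitalized copy of the base word. */
    snprintf(base, sizeof base, "%c%s",
             toupper((unsigned char)word[0]), word + 1);

    printf("%s123\n", base);      /* pad with digits to reach the length   */
    printf("!%s123\n", base);     /* common special character at the start */
    printf("%s123!\n", base);     /* ...or at the end                      */

    strcpy(leet, base);           /* letter -> number substitutions        */
    leetify(leet);
    printf("!%s123\n", leet);

    return 0;
}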
-
There’s an ongoing debate in the IT field about whether you need certifications or degrees to advance in your career. Frankly, it would be easier to answer the chicken-and-egg problem! Most of the time people will answer "it depends," and there are good reasons why the answer differs depending on how much experience you have, what area within IT you are most interested in, what job you want to apply for, and what career path you have in mind. For most in the field today, however, the answer is usually to do both, because the certification and the degree combined qualify you for a career path for which a certificate or a degree alone wouldn't suffice. A number of bloggers have tried to answer the certification vs. degree question, and the comments on their blogs show the variety of successful paths IT professionals have found (see, for example, the Spiceworks blog and The Night Owl blog).

1. Earn an appropriate certification for your IT field.

An overwhelming number (80.3%) of those who have obtained a certification believe that the time and money spent was a good use of their resources, and 39.7% of respondents believe that certification is the most important investment they can make in their careers. Further, this value is increasing – more than half of respondents somewhat or strongly agree that certifications have increased in value over the past 5 years. More than ¾ of the respondents (77%) somewhat or strongly agreed that holding a certification gets them access to more job opportunities, and over half (54%) reported that they have received a job or a promotion because they held a particular certification. Nearly half of the certified respondents (46.1%) believe that certifications provide an advantage even when competing with those who have greater experience (but are uncertified themselves).

But it's important to ensure that the certification you seek keeps up with current standards. The association responsible for the certification has a responsibility to keep the best interests of its members in mind. For example, in the field of security, the standards are changing rapidly given new technologies and new threats. Make sure that the certification you seek has stayed abreast of current trends and is still valued in the marketplace.

2. Add a degree in your IT field.

Getting a degree might seem like a lot of work. But most education institutions are aware of workplace requirements for their graduates and adjust their programs to meet those requirements. Several education institutions (Capella included) will look at your professional certifications as validation of knowledge you have already obtained in your field and award degree credit accordingly. That limits redundant learning and allows you to complete your degree at reduced cost and in less time. Even more importantly, it lets you advance your career by checking two of the requirements listed most frequently by employers: certification and degree.

The advice for members of the military and their families who are interested in jobs in the government security industry is a good guideline for those in other parts of the IT industry as well.

Associate's Degree: An associate's, or two-year, degree, along with relevant work experience, may be sufficient for entry-level or junior-level positions. However, this level of education can limit your chances for advancement in the field.
In addition, employers hiring for government information security positions often require professional certifications and security clearances.

Bachelor's Degree: You'll have expanded career choices when you earn a bachelor's degree in computer science, computer information systems, or a related field from a four-year college or university. Most government information security jobs require a bachelor's degree, plus professional certifications and security clearances. With experience and additional certifications, you could qualify for advancement.

Master's Degree: If you've set your sights on advanced positions in government information security, such as chief information security officer or senior information security analyst, you'll likely need a master's degree in information technology. This two-year advanced degree can be obtained at certain universities, either immediately upon completing a bachelor's degree program or after you've started working in the field.

Just as you need to be sure that the certification is up to date, you need to be sure that the degree meets your needs. When reviewing a degree program, these questions might provide helpful guidelines. Will you learn not only the most current technical skills but also the people skills you will need to distinguish yourself on the job? Does the degree help you integrate both business and technical skills so that you can solve the complex problems of today's employers? Does the degree help you become innovative and flexible? The value of a degree is in helping you understand the application of your IT skills in today's work environments. Becoming a better problem-solver, especially for those vexing workplace problems without a single correct answer, is a value a degree can provide.

3. Highlight both the degree and the certification in your resume.

Most organizations use either an HR professional or a technology solution to filter resumes for the job. The most frequently used filter is degree level, but in IT, filtering for specific certifications is also common. For more technical organizations, the certification is often a key filter; NextGov makes the same point succinctly.

4. Rinse, wash, and repeat.

In other words, add certifications and degrees as needed for your career path. The more you advance, the more likely you will need to add further certifications and another degree. And as you advance, the certifications or the degrees will help to differentiate you as someone who wants to advance as a manager or someone who wants to become the super-tech. Best of luck to all of you as you "secure" your careers! For more information on how you can turn your CISSP into bachelor's or master's credit, go to Education Alliance.
-
First Half Of 2013 Riddled With Password Breaches

The biggest data breaches in the first half of 2013 demonstrate that despite all the security technologies in place, attackers will find a way to penetrate defenses and access systems containing sensitive data. Security experts have told CRN that it's often a lapse in basic security measures that trips up businesses the most. Attackers take the easiest pathway into a corporate network to steal data and get out. Breaches are often carried out through a phishing email containing a link to an attack website or a malicious file that exploits a vulnerability on the end user's system. From password and third-party breaches to insider threats and nation-state-driven attacks, here's a look at some of the biggest data breaches so far this year, showing that no organization is immune.

Twitter Breach

Twitter recently rolled out support for two-factor authentication to bolster the security of its user base. The company made the move following the announcement of a data breach in February that exposed the usernames, email addresses and encrypted passwords of 250,000 users. The company announced that it had detected unusual network activity. "We discovered one live attack and were able to shut it down in process moments later. However, our investigation has thus far indicated that the attackers may have had access to limited user information," wrote Bob Lord, director of information security at Twitter. "This attack was not the work of amateurs, and we do not believe it was an isolated incident."

Zendesk Breach

Despite all the technologies organizations have in place to protect user data, sometimes a third-party breach exposes information. Zendesk, which provides customer support services for Twitter, Tumblr and Pinterest, announced a data breach in February that impacted its clients. The breach exposed thousands of email addresses and support messages from users of those services. Security experts said the email addresses were valuable to attackers because they could be used in well-designed phishing attacks to bait victims into giving up more information.

New York Times Breach

In January, a highly sophisticated attack targeted reporters at the New York Times in a breach that persisted for months on the newspaper giant's systems. Once inside, the attackers used valid credentials to remain stealthy, slipping past corporate antivirus and other security systems. The attackers used 45 pieces of custom malware and accessed the computers of 53 Times employees. They eventually moved to a domain controller containing the database of hashed passwords of every Times employee. For all intents and purposes, they had the keys to the kingdom, security experts said.

Schnucks Markets Breach

The credit card data of more than 2 million customers of Schnucks Markets, a St. Louis-area grocery store chain, was stolen by cybercriminals. In its breach announcement, the company said that the data was stolen from cards used at 79 of its stores. Malware on the company's systems was designed to sniff network packets, stealing the credit card data before it was encrypted by the company's processing system. The company said it was notified of suspicious activity on March 15. By March 30, the company said, it had contained the breach and purged its systems of the malware.

Facebook Breach

A security vulnerability exposed the email addresses and telephone numbers of an estimated 6 million Facebook users in June.
The bug, uncovered through the company's white-hat hacker "Responsible Disclosure" program, was found in a component of the user's contact list and address book on the social network. In its announcement of the problem, Facebook said the issue stemmed from the way it generated friend recommendations based on contact information uploaded to the social network. The company said no other user information was exposed.

Evernote Breach

Mobile data storage firm Evernote reset the passwords of an estimated 50 million users after detecting that its systems had been infiltrated by attackers. The data breach was detected in March. The company said its security team found suspicious activity on its network that appeared to be a coordinated attempt to access its restricted corporate network. The passwords were protected by one-way encryption, meaning they were hashed and salted, a process that makes them more difficult for an attacker to crack, Evernote said.

LivingSocial Breach

The LivingSocial data breach in April also impacted an estimated 50 million people. The e-commerce startup said the breach exposed the names, email addresses and dates of birth of its users. The company did not disclose how it detected the attack. Credit card data was stored in separate payment processing systems that were segmented from the rest of the company's network, the company said.

Washington State Court System Breach

As many as 160,000 Social Security numbers were exposed after hackers infiltrated the website of the Washington State Administrative Office of the Courts (AOC). In a breach announcement posted in May, the state agency said the breach also included 1 million driver's license numbers. Beyond the seriousness of the data that was stolen, the information was potentially embarrassing and damaging to victims' reputations, security experts said. The data was from people who were booked into a city or county jail in 2011 and 2012 or received a DUI citation between 1989 and 2012, the agency said. The agency discovered the lapse in March.

Department Of Homeland Security Breach

Third-party software used to process background checks on Department of Homeland Security employees contained a vulnerability that exposed the names, Social Security numbers and dates of birth of potentially thousands of employees. The agency began notifying employees in May. In an announcement on its website, the DHS said the flaw had existed since 2009, but that so far there is no evidence any of the data has been used fraudulently. "The Department is also working with the vendor on notification requirements for current contractors, inactive applicants, and former employees and contractors," the agency said.

NSA Surveillance Program Breach

Edward Snowden, the now high-profile Booz Allen government contractor, is receiving widespread headlines for releasing data on the National Security Agency's surveillance program, part of its counterterrorism activities. Security experts told CRN that the breach is an example of the insider threat organizations face. Snowden was with Booz Allen for only three months, assigned to a team in Hawaii. He had access to top-secret data and over time used a thumb drive to take thousands of confidential documents damaging to the NSA. He remains in Moscow, where he has sought political asylum.

Source: CRN.com
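Tying back to the Evernote item above, "hashed and salted" just means the stored value is a one-way digest of a random per-user salt plus the password, so identical passwords don't produce identical records and precomputed tables are useless. Below is a minimal, purely illustrative sketch (not Evernote's actual scheme, which was not disclosed) using OpenSSL's SHA-256; a production system should use a deliberately slow KDF such as bcrypt, scrypt or PBKDF2 rather than a single hash pass.

/* Illustrative salted hash: digest = SHA-256(salt || password).
 * The salt is random per user and stored alongside the digest. */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>
#include <openssl/rand.h>

int main(void)
{
    const char *password = "correct horse";        /* example input only  */
    unsigned char salt[16];
    unsigned char digest[SHA256_DIGEST_LENGTH];
    unsigned char buf[sizeof salt + 256];
    size_t plen = strlen(password);

    if (RAND_bytes(salt, sizeof salt) != 1)        /* random per-user salt */
        return 1;

    memcpy(buf, salt, sizeof salt);                /* salt || password    */
    memcpy(buf + sizeof salt, password, plen);
    SHA256(buf, sizeof salt + plen, digest);

    printf("salt   : ");
    for (size_t i = 0; i < sizeof salt; i++)   printf("%02x", salt[i]);
    printf("\ndigest : ");
    for (size_t i = 0; i < sizeof digest; i++) printf("%02x", digest[i]);
    printf("\n");
    return 0;
}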
-
The Internet Systems Consortium (ISC) has shipped a patch to cover a "critical" denial-of-service (DoS) vulnerability in some versions of BIND, the open-source software that implements the Domain Name System (DNS) protocols for the Internet. According to an advisory issued by the Consortium, a specially crafted query can cause vulnerable installations of BIND to terminate abnormally. "A specially crafted query that includes malformed rdata can cause named to terminate with an assertion failure while rejecting the malformed query," according to the advisory.

The ISC said the issue may have been under remote, in-the-wild exploitation since July 26, 2013. "Crashes have been reported by multiple ISC customers," the group said, warning that there are no known workarounds.

BIND versions affected:
Open source: 9.7.0->9.7.7, 9.8.0->9.8.5-P1, 9.9.0->9.9.3-P1, 9.8.6b1 and 9.9.4b1
Subscription: 9.9.3-S1 and 9.9.4-S1b1

The ISC warned that authoritative and recursive servers are equally vulnerable. "Intentional exploitation of this condition can cause a denial of service in all nameservers running affected versions of BIND 9. Access Control Lists do not provide any protection from malicious clients," it added. "In addition to the named server, applications built using libraries from the affected source distributions may crash with assertion failures triggered in the same fashion," ISC said. The ISC is encouraging BIND users to immediately upgrade to the patched release most closely related to their current version of BIND.

Source: Securityweek.com
-
Panda Gold Protection is a multi-platform product from Panda that runs on PCs, tablets and any other mobile device. Feature-wise it is similar to Panda Global Protection: it offers antivirus, a firewall, PC optimization, encryption and secure file deletion, online storage space (20 GB), anti-spam, online privacy protection, a password manager and more. You can now get a free 6-month license for this product using the link below:
http://dl2.comss.ru/download/GL14PROMO6M.exe
-
Bad news for all Facebook users
Matt replied to Kwelwild's topic in Stiri securitate
As if the two ads that already show up on Facebook weren't annoying enough, now they want to cram in even more. I think they will get burned with these ads at some point; it's getting way too much...
-
"main" este obligatoriu prezent in orice cod, fara el nu se poate executa programul.