Everything posted by Nytro
-
Manzano - A tool for the Symbolic Execution of Linux binaries

About Symbolic Execution:
- Dynamically explore all program branches.
- Inputs are considered symbolic variables.
- Symbols remain uninstantiated and become constrained at execution time.
- At a conditional branch operating on symbolic terms, the execution is forked.
- Each feasible branch is taken, and the appropriate constraints logged.

Download: http://ekoparty.org/archive/2013/charlas/Manzano.pdf
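The fork-at-branch idea in the bullets above can be shown in miniature. This is a hypothetical toy sketch, not Manzano's implementation: two conditionals on a symbolic input "x" are explored down both sides, and each path logs the constraints it accumulated.

```c
#include <string.h>

/* Toy sketch of symbolic-execution forking (hypothetical, not Manzano's
 * code).  Two conditionals on a symbolic input "x" yield four feasible
 * paths; each path records the constraints that were taken. */
#define MAX_PATHS 8

typedef struct {
    char constraints[2][16];  /* one logged constraint per branch depth */
    int n;
} Path;

static Path paths[MAX_PATHS];
static int npaths = 0;

/* At each branch depth, "fork": take both sides and log the constraint. */
static void explore(int depth, Path p) {
    static const char *cond[2][2] = {
        { "x > 10",     "x <= 10"     },  /* branch 1 */
        { "x % 2 == 0", "x % 2 != 0"  },  /* branch 2 */
    };
    if (depth == 2) {          /* leaf: one fully constrained path */
        paths[npaths++] = p;
        return;
    }
    for (int side = 0; side < 2; side++) {
        Path q = p;            /* fork the execution state */
        strcpy(q.constraints[q.n++], cond[depth][side]);
        explore(depth + 1, q);
    }
}
```

Calling `explore(0, (Path){0})` enumerates all four feasible paths; a real engine would hand each path's constraint set to an SMT solver to produce a concrete input.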
-
[h=2]Intel unveils new LTE module[/h]
Written by Fudzilla staff

Getting there, slowly

Intel has added another module to its growing portfolio of 4G LTE solutions. The XMM 7160 is said to be one of the smallest LTE solutions out there and it's already shipping in Samsung's Intel-based Galaxy Tab 3. However, it is still a discrete module, and Intel will not have an integrated LTE solution until 2014, if then. Intel expects the first LTE-enabled phones based on its SoCs to show up next year, but they will not have integrated LTE.

Unlike its previous solutions, the XMM 7160 supports HSPA and GSM in addition to 15 LTE bands. It also supports VoLTE services. Intel plans to introduce an even more advanced LTE module next year, with support for new advanced LTE features used by some networks. Intel is also offering the M.2 LTE module, which packs practically the same hardware into a standard mini PCIe package.

Until Intel launches its first truly integrated LTE solution, Qualcomm will continue to dominate the LTE space, virtually unopposed. Intel is not alone, though: Nvidia, MediaTek and Broadcom are all behind Qualcomm and struggling to catch up. However, Samsung and Apple are still absent from the list and rely on discrete solutions, not to mention countless smaller players, so there's still a market for modules like the XMM 7160.

Source: Intel unveils new LTE module

Why did I post this? It is not known for certain how Intel's Anti-Theft works. Since it can operate over GSM, doesn't that mean the processor has a "hidden" GSM module? This news makes me even more paranoid and makes me believe this theory even more.
-
The Battle for Power on the Internet

We're in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals. Initially, the Internet empowered the second side. It gave them a place to coordinate and communicate efficiently, and made them seem unbeatable. But now, the more traditional institutional powers are winning, and winning big. How these two sides fare in the long term, and the fate of the rest of us who don't fall into either group, is an open question -- and one vitally important to the future of the Internet.

In the Internet's early days, there was a lot of talk about its "natural laws" -- how it would upend traditional power blocks, empower the masses, and spread freedom throughout the world. The international nature of the Internet circumvented national laws. Anonymity was easy. Censorship was impossible. Police were clueless about cybercrime. And bigger changes seemed inevitable. Digital cash would undermine national sovereignty. Citizen journalism would topple traditional media, corporate PR, and political parties. Easy digital copying would destroy the traditional movie and music industries. Web marketing would allow even the smallest companies to compete against corporate giants. It really would be a new world order.

This was a utopian vision, but some of it did come to pass. Internet marketing has transformed commerce. The entertainment industries have been transformed by things like MySpace and YouTube, and are now more open to outsiders. Mass media has changed dramatically, and some of the most influential people in the media have come from the blogging world. There are new ways to organize politically and run elections.
Crowdfunding has made tens of thousands of projects possible to finance, and crowdsourcing made more types of projects possible. Facebook and Twitter really did help topple governments.

But that is just one side of the Internet's disruptive character. The Internet has emboldened traditional power as well.

On the corporate side, power is consolidating, a result of two current trends in computing. First, the rise of cloud computing means that we no longer have control of our data. Our e-mail, photos, calendars, address books, messages, and documents are on servers belonging to Google, Apple, Microsoft, Facebook, and so on. And second, we are increasingly accessing our data using devices that we have much less control over: iPhones, iPads, Android phones, Kindles, ChromeBooks, and so on. Unlike traditional operating systems, those devices are controlled much more tightly by the vendors, who limit what software can run, what they can do, how they're updated, and so on. Even Windows 8 and Apple's Mountain Lion operating system are heading in the direction of more vendor control.

I have previously characterized this model of computing as "feudal." Users pledge their allegiance to more powerful companies who, in turn, promise to protect them from both sysadmin duties and security threats. It's a metaphor that's rich in history and in fiction, and a model that's increasingly permeating computing today.

Medieval feudalism was a hierarchical political system, with obligations in both directions. Lords offered protection, and vassals offered service. The lord-peasant relationship was similar, with a much greater power differential. It was a response to a dangerous world.

Feudal security consolidates power in the hands of the few. Internet companies, like lords before them, act in their own self-interest. They use their relationship with us to increase their profits, sometimes at our expense. They act arbitrarily. They make mistakes.
They're deliberately -- and incidentally -- changing social norms. Medieval feudalism gave the lords vast powers over the landless peasants; we're seeing the same thing on the Internet.

It's not all bad, of course. We, especially those of us who are not technical, like the convenience, redundancy, portability, automation, and shareability of vendor-managed devices. We like cloud backup. We like automatic updates. We like not having to deal with security ourselves. We like that Facebook just works -- from any device, anywhere.

Government power is also increasing on the Internet. There is more government surveillance than ever before. There is more government censorship than ever before. There is more government propaganda, and an increasing number of governments are controlling what their users can and cannot do on the Internet. Totalitarian governments are embracing a growing "cyber sovereignty" movement to further consolidate their power. And the cyberwar arms race is on, pumping an enormous amount of money into cyber-weapons and consolidated cyber-defenses, further increasing government power.

In many cases, the interests of corporate and government powers are aligning. Both corporations and governments benefit from ubiquitous surveillance, and the NSA is using Google, Facebook, Verizon, and others to get access to data it couldn't otherwise. The entertainment industry is looking to governments to enforce its antiquated business models. Commercial security equipment from companies like BlueCoat and Sophos is being used by oppressive governments to surveil and censor their citizens. The same facial recognition technology that Disney uses in its theme parks can also identify protesters in China and Occupy Wall Street activists in New York. Think of it as a public/private surveillance partnership.

What happened? How, in those early Internet years, did we get the future so wrong?
The truth is that technology magnifies power in general, but rates of adoption are different. The unorganized, the distributed, the marginal, the dissidents, the powerless, the criminal: they can make use of new technologies very quickly. And when those groups discovered the Internet, suddenly they had power. But later, when the already-powerful big institutions finally figured out how to harness the Internet, they had more power to magnify. That's the difference: the distributed were more nimble and were faster to make use of their new power, while the institutional were slower but were able to use their power more effectively. So while the Syrian dissidents used Facebook to organize, the Syrian government used Facebook to identify dissidents to arrest.

All isn't lost for distributed power, though. For institutional power, the Internet is a change in degree, but for distributed power it's a qualitative one. The Internet gives decentralized groups -- for the first time -- the ability to coordinate. This can have incredible ramifications, as we saw in the SOPA/PIPA debate, Gezi, Brazil, and the rising use of crowdfunding. It can invert power dynamics, even in the presence of surveillance, censorship, and use control. But aside from political coordination, the Internet allows for social coordination as well -- to unite, for example, ethnic diasporas, gender minorities, sufferers of rare diseases, and people with obscure interests.

This isn't static: technological advances continue to provide advantage to the nimble. I discussed this trend in my book Liars and Outliers. If you think of security as an arms race between attackers and defenders, any technological advance gives one side or the other a temporary advantage. But most of the time, a new technology benefits the nimble first. They are not hindered by bureaucracy -- and sometimes not by laws or ethics either. They can evolve faster. We saw it with the Internet.
As soon as the Internet started being used for commerce, a new breed of cybercriminal emerged, immediately able to take advantage of the new technology. It took police a decade to catch up. And we saw it on social media, as political dissidents made use of its organizational powers before totalitarian regimes did.

This delay is what I call a "security gap." It's greater when there's more technology, and in times of rapid technological change. Basically, if there are more innovations to exploit, there will be more damage resulting from society's inability to keep up with exploiters of all of them. And since our world is one in which there's more technology than ever before, and a faster rate of technological change than ever before, we should expect to see a greater security gap than ever before. In other words, there will be an increasing time period during which nimble distributed powers can make use of new technologies before slow institutional powers can make better use of those technologies.

This is the battle: quick vs. strong. To return to medieval metaphors, you can think of a nimble distributed power -- whether marginal, dissident, or criminal -- as Robin Hood; and ponderous institutional powers -- both government and corporate -- as the feudal lords.

So who wins? Which type of power dominates in the coming decades?

Right now, it looks like traditional power. Ubiquitous surveillance means that it's easier for the government to identify dissidents than it is for the dissidents to remain anonymous. Data monitoring means it's easier for the Great Firewall of China to block data than it is for people to circumvent it. The way we all use the Internet makes it much easier for the NSA to spy on everyone than it is for anyone to maintain privacy. And even though it is easy to circumvent digital copy protection, most users still can't do it.

The problem is that leveraging Internet power requires technical expertise.
Those with sufficient ability will be able to stay ahead of institutional powers. Whether it's setting up your own e-mail server, effectively using encryption and anonymity tools, or breaking copy protection, there will always be technologies that can evade institutional powers. This is why cybercrime is still pervasive, even as police savvy increases; why technically capable whistleblowers can do so much damage; and why organizations like Anonymous are still a viable social and political force. Assuming technology continues to advance -- and there's no reason to believe it won't -- there will always be a security gap in which technically advanced Robin Hoods can operate.

Most people, though, are stuck in the middle. These are people who don't have the technical ability to evade the large governments and corporations, avoid the criminal and hacker groups who prey on us, or join any resistance or dissident movements. These are the people who accept default configuration options, arbitrary terms of service, NSA-installed back doors, and the occasional complete loss of their data. These are the people who get increasingly isolated as government and corporate power align. In the feudal world, these are the hapless peasants.

And it's even worse when the feudal lords -- or any powers -- fight each other. As anyone watching Game of Thrones knows, peasants get trampled when powers fight: when Facebook, Google, Apple, and Amazon fight it out in the market; when the US, EU, China, and Russia fight it out in geopolitics; or when it's the US vs. "the terrorists" or China vs. its dissidents.

The abuse will only get worse as technology continues to advance. In the battle between institutional power and distributed power, more technology means more damage. We've already seen this: cybercriminals can rob more people more quickly than criminals who have to physically visit everyone they rob.
Digital pirates can make more copies of more things much more quickly than their analog forebears. And we'll see it in the future: 3D printers mean that the computer restriction debate will soon involve guns, not movies. Big data will mean that more companies will be able to identify and track you more easily.

It's the same problem as the "weapons of mass destruction" fear: terrorists with nuclear or biological weapons can do a lot more damage than terrorists with conventional explosives. And by the same token, terrorists with large-scale cyberweapons can potentially do more damage than terrorists with those same bombs.

It's a numbers game. Very broadly, because of the way humans behave as a species and as a society, every society is going to have a certain amount of crime. And there's a particular crime rate society is willing to tolerate. With historically inefficient criminals, we were willing to live with some percentage of criminals in our society. As technology makes each individual criminal more powerful, the percentage we can tolerate decreases. Again, remember the "weapons of mass destruction" debate: as the amount of damage each individual terrorist can do increases, we need to do increasingly more to prevent even a single terrorist from succeeding.

The more destabilizing the technologies, the greater the rhetoric of fear, and the stronger institutional powers will get. This means increasingly repressive security measures, even if the security gap means that such measures become increasingly ineffective. And it will squeeze the peasants in the middle even more.

Without the protection of his own feudal lord, the peasant was subject to abuse both by criminals and by other feudal lords. But both corporations and the government -- and often the two in cahoots -- are using their power to their own advantage, trampling on our rights in the process.
And without the technical savvy to become Robin Hoods ourselves, we have no recourse but to submit to whatever the ruling institutional power wants.

So what happens as technology increases? Is a police state the only effective way to control distributed power and keep our society safe? Or do the fringe elements inevitably destroy society as technology increases their power? Probably neither doomsday scenario will come to pass, but figuring out a stable middle ground is hard. These questions are complicated, and dependent on future technological advances that we cannot predict. But they are primarily political questions, and any solutions will be political.

In the short term, we need more transparency and oversight. The more we know of what institutional powers are doing, the more we can trust that they are not abusing their authority. We have long known this to be true in government, but we have increasingly ignored it in our fear of terrorism and other modern threats. This is also true for corporate power. Unfortunately, market dynamics will not necessarily force corporations to be transparent; we need laws to do that. The same is true for decentralized power; transparency is how we'll differentiate political dissidents from criminal organizations.

Oversight is also critically important, and is another long-understood mechanism for checking power. This can be a combination of things: courts that act as third-party advocates for the rule of law rather than rubber-stamp organizations, legislatures that understand the technologies and how they affect power balances, and vibrant public-sector press and watchdog groups that analyze and debate the actions of those wielding power.

Transparency and oversight give us the confidence to trust institutional powers to fight the bad side of distributed power, while still allowing the good side to flourish.
For if we're going to entrust our security to institutional powers, we need to know they will act in our interests and not abuse that power. Otherwise, democracy fails.

In the longer term, we need to work to reduce power differences. The key to all of this is access to data. On the Internet, data is power. To the extent the powerless have access to it, they gain in power. To the extent that the already powerful have access to it, they further consolidate their power. As we look to reducing power imbalances, we have to look at data: data privacy for individuals, mandatory disclosure laws for corporations, and open government laws.

Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today's Internet feudalism is both ad hoc and one-sided. Those in power have a lot of rights, but increasingly few responsibilities or limits. We need to rebalance this relationship. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people. In addition to reining in government power, we need similar restrictions on corporate power: a new Magna Carta focused on the institutions that abuse power in the 21st century.

Today's Internet is a fortuitous accident: a combination of an initial lack of commercial interests, government benign neglect, military requirements for survivability and resilience, and computer engineers building open systems that worked simply and easily. Corporations have turned the Internet into an enormous revenue generator, and they're not going to back down easily. Neither will governments, which have harnessed the Internet for political control.
We're at the beginning of some critical debates about the future of the Internet: the proper role of law enforcement, the character of ubiquitous surveillance, the collection and retention of our entire life's history, how automatic algorithms should judge us, government control over the Internet, cyberwar rules of engagement, national sovereignty on the Internet, limitations on the power of corporations over our data, the ramifications of information consumerism, and so on.

Data is the pollution problem of the information age. All computer processes produce it. It stays around. How we deal with it -- how we reuse and recycle it, who has access to it, how we dispose of it, and what laws regulate it -- is central to how the information age functions. And I believe that just as we look back at the early decades of the industrial age and wonder how society could ignore pollution in its rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we dealt with the rebalancing of power resulting from all this new data.

This won't be an easy period for us as we try to work these issues out. Historically, no shift in power has ever been easy. Corporations have turned our personal data into an enormous revenue generator, and they're not going to back down. Neither will governments, who have harnessed that same data for their own purposes. But we have a duty to tackle this problem.

I can't tell you what the result will be. These are all complicated issues, and require meaningful debate, international cooperation, and innovative solutions. We need to decide on the proper balance between institutional and decentralized power, and how to build tools that amplify what is good in each while suppressing the bad.

This essay previously appeared in the Atlantic.

Source: https://www.schneier.com/blog/archives/2013/10/the_battle_for_1.html
-
Verifying Windows Kernel Vulnerabilities

Dave Weinstein | October 30, 2013

Outside of the Pwn2Own competitions, HP's Zero Day Initiative (ZDI) does not require that researchers provide us with exploits. ZDI analysts evaluate each submitted case, and as part of that analysis we may choose to take the vulnerability to a full exploit. For kernel-level vulnerabilities (either in the OS itself, or in device drivers), one of the vulnerability classes that we find is often termed a 'write-what-where'[1]. For ease of analysis it was worth writing a basic framework to wrap any given 'write-what-where' vulnerability and demonstrate an exploit against the operating system. There are three basic steps to taking our arbitrary write and turning it into an exploit, and we'll explore each of them in turn.

The Payload: Disabling Windows Access Checks

At the heart of the Windows access control system is the function nt!SeAccessCheck. This determines whether or not we have the right to access any object (file, process, etc.) in the OS. The technique we're going to use was first described by Greg Hoglund in 1999[2], and a variant of this technique was used by John Heasman in 2006[3]; it is the latter that we'll use as our jumping-off point.

The key here is the AccessMode parameter. If the security check is being done on behalf of a user process, then the OS checks for the correct privileges. However, if the security check is being done on behalf of the kernel, then it always succeeds. Our goal, then, is to dynamically patch the Windows kernel such that it always considers the AccessMode setting to indicate that the call is on behalf of the kernel.

On Windows XP, this is a fairly straightforward task.
If we examine the kernel in IDA (in this case, we're looking at ntkrnlpa.exe), we find the following code early in the implementation of SeAccessCheck:

PAGE:005107BC  xor  ebx, ebx
PAGE:005107BE  cmp  [ebp+AccessMode], bl
PAGE:005107C1  jnz  short loc_5107EC

Since KernelMode is defined as 0 in wdm.h, all we have to do to succeed in all cases is to NOP out the conditional jump after the compare. At that point, all access checks succeed.

On later versions of the OS, things are slightly more complicated. The function nt!SeAccessCheck calls nt!SeAccessCheckWithHint, and it is the latter that we'll need to patch. We'll see why this makes things more complicated when we look at how to gather the information needed to execute the attack. If we look at Windows 8.1, we can see that instead of dropping through to the KernelMode functionality, we branch to it:

.text:00494613 loc_494613:
.text:00494613  cmp  [ebp+AccessMode], al
.text:00494616  jz   loc_494B28

All we need to do is replace the conditional branch with an unconditional branch, and we again make every call to SeAccessCheck appear to come from the kernel.

Now that we have a target, we still have one more, slight problem to overcome. The memory addresses we need to overwrite are in read-only pages. We could change the settings on those pages, but there is an easier solution. On the x86 and x64 processors, there is a flag in Control Register 0 which determines whether or not supervisor-mode code (i.e. Ring 0 code, which is to say, our exploit) pays attention to the read-only status of memory. To quote Barnaby Jack[4]: "Disable the WP bit in CR0. Perform code and memory overwrites. Re-enable WP bit."

There is one additional complication to our manipulation of the WP bit. We need to set the processor affinity for our exploit, to make sure that we stay on the core with the processor settings we chose.
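The XP-era patch itself is pure byte surgery: overwrite the two-byte `jnz short` (0x75 imm8) with NOPs. A user-mode sketch of that step, operating on a local copy of the bytes rather than the live kernel image (which the real exploit reaches through the write-what-where primitive with WP cleared), might look like this:

```c
#include <stddef.h>

/* Sketch of the XP-era patch: NOP out a short conditional jump
 * (jnz short = 0x75 imm8) so the KernelMode path is always taken.
 * Here we only edit a local buffer; in the real exploit these bytes
 * live in read-only kernel pages, hence the CR0.WP dance. */
static int nop_out_short_jump(unsigned char *code, size_t len, size_t off) {
    if (off + 2 > len || code[off] != 0x75)   /* expect a jnz short here */
        return -1;
    code[off]     = 0x90;                     /* nop */
    code[off + 1] = 0x90;                     /* nop */
    return 0;
}
```

Checking for the 0x75 opcode before writing is cheap insurance that the pattern scan landed where we expected.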
While this is likely unnecessary for our manipulation of the WP bit, it is more of an issue in more complicated exploits that require us to disable SMEP (more on that later). Either way, it doesn't hurt, and all we have to do is make a simple call:

SetProcessAffinityMask(GetCurrentProcess(), (DWORD_PTR) 1);

Now, one thing you'll note in the exploit code is that we don't actually know what the patch is, or where it is going. The exploit just takes information that was already provided, and applies it. We'll actually determine that information as part of the exploit research.

The Attack: Passing control to the exploit

We have code ready to run in ring 0. We need two things to make it work: the information it requires about the OS configuration, and a means to transfer control to our code while the processor is in supervisor mode. Since the primitive that we have to work with is a 'write-what-where', we are going to use that to overwrite a function in the HalDispatchTable. This technique, described by Ruben Santamarta in 2007[5], will allow us to divert execution flow to our exploit code.

The function we're going to hook is hal!HaliQuerySystemInformation. This is a function that is called by an undocumented Windows function, NtQueryIntervalProfile[5][6], and the invoking function is not commonly used. To understand why this is crucial, we need to briefly talk about the layout of memory in Windows.

Windows divides memory into two ranges; kernel memory is located above MmUserProbeAddress, and user memory is located below it. Memory in the kernel is common to all processes (although generally inaccessible to the process code itself), while memory in userland is different for each process loaded. Since our exploit code is going to be in user memory, but we are hooking a kernel function pointer, if any other process calls NtQueryIntervalProfile it will almost certainly crash the operating system.
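The hook-trigger-restore sequence can be sketched entirely in user mode. The names below are stand-ins (there is no real HalDispatchTable here), and the write-what-where primitive is simulated by a plain pointer store; the point is the ordering: save the original pointer, overwrite it, trigger the call, and have the payload restore the pointer as its very first act so no other caller lands in our code.

```c
/* User-mode sketch of the HalDispatchTable hook (hypothetical names).
 * The "write-what-where" is simulated by an ordinary pointer store. */
typedef int (*query_fn)(void);

static query_fn dispatch_table[4];   /* stand-in for nt!HalDispatchTable    */
static query_fn saved_original;      /* looked up before the overwrite      */
static int payload_ran;

static int original_handler(void) { return 0; }

static int payload(void) {
    dispatch_table[1] = saved_original;  /* step 1: restore the pointer     */
    payload_ran = 1;                     /* step 2: do the actual work      */
    return 0;
}

/* Simulated primitive: write `what` to `where`. */
static void write_what_where(query_fn *where, query_fn what) {
    *where = what;
}

static void run_exploit(void) {
    saved_original = dispatch_table[1];
    write_what_where(&dispatch_table[1], payload);
    dispatch_table[1]();                 /* trigger, as NtQueryIntervalProfile
                                            would through the real table     */
}
```

Restoring the pointer inside the payload, before anything else, is what keeps the window in which another process could call through the hooked entry as short as possible.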
Because of this, the first step in our exploit is to restore the original function pointer. As with our earlier example, you can see that we're relying on external information as to where the function pointer entry is, and what the original value should be.

For flexibility, our prototype for WriteWhatWhere() also includes the original value for the address, if known. Finding the addresses we need for both the exploit and the exploit hook is the final step.

The Research: Determining the OS Configuration

In this case, we're assuming that we are looking for a local elevation-of-privilege. We have the ability to run arbitrary code on the system as some user, and our goal is to turn that into a complete system compromise. Determining the OS configuration is much more difficult in the case of a remote attack against a kernel vulnerability.

We've determined that we need to know the following pieces of information:

- The address of nt!HalDispatchTable
- The address of hal!HaliQuerySystemInformation
- The address of the code in nt!SeAccessCheck (or the related helper function) we need to patch
- The value to patch

Additionally, we can also look up the original value, which would let us have a different exploit that restored the original functionality. After all, once we've done what we need to do, why leave the door open?

What we'll need to know are the base addresses of two kernel modules: the hardware abstraction layer (HAL) and the NT kernel itself. In order to get those, we'll need to again use an undocumented function -- in this case, NtQuerySystemInformation[6][7]. Since we know that we're going to need two NT functions, we'll go ahead and create the prototypes and simply load them directly from the NT DLL.

The next step is to determine which versions of these modules are in use, and where they are actually located in memory.
We do this by using NtQuerySystemInformation to pull in the set of loaded modules, and then searching for the possible names for the modules we need.

Our next step is, in almost every case other than this, a bad idea[8]. We're going to use a highly deprecated feature of LoadLibraryEx and load duplicate copies of the two modules we found. With this flag set, we won't load any referenced modules and we won't execute any code, but we will be able to use GetProcAddress() to search the modules. This is exactly what we want, because we're going to be using these loaded modules as our source to search for what we need in the actual running kernel code.

At this point, we have almost everything we're going to need to find the offsets we require. We have both the base address of our copies of the kernel modules and the actual base addresses on the system, so we can convert a relative virtual address (RVA) from our copy into an actual system address. And we have read access to copies of the code, so we can scan the code to look for identifiers for the functions we need. The only thing left is actually a stock Windows call, and we'll use GetVersionEx() to determine what version of Windows is running.

Some things are easy, because the addresses are exported. But for most of what we need, we're actually going to have to search. We have two functions to search for, one of which (hal!HaliQuerySystemInformation) does not have an exported symbol, and the other is either nt!SeAccessCheck or a function directly called by it. We'll look at the last case, because that lets us look at how we handle both exported functions and those that are purely private.
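The rebasing arithmetic described above is small enough to write out. This is just the math from the text, with made-up example addresses: an address found in our safely loaded copy becomes an RVA (offset from the copy's base), and that RVA plus the real load base gives the address in the running kernel.

```c
#include <stdint.h>

/* Convert an address found in our loaded *copy* of a module into the
 * corresponding address in the actually running kernel image:
 *   rva  = addr_in_copy - copy_base
 *   live = real_base + rva                                          */
static uintptr_t rebase(uintptr_t addr_in_copy,
                        uintptr_t copy_base,
                        uintptr_t real_base) {
    uintptr_t rva = addr_in_copy - copy_base;
    return real_base + rva;
}
```

The same helper works in both directions (swap the two bases), which is handy when turning an exported symbol's live address into an offset to start scanning from in the copy.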
First we take a look at nt!SeAccessCheck, and then at the portion of nt!SeAccessCheckWithHint that we're going to patch. Now, in practice, these two functions are adjacent to each other, but we're going to go ahead and use the public function to track down the reference to the internal function, and then scan the internal function for our patch location.

The code to do that relies on a helper routine, PatternScan, which, given a pointer, a scan size, a scan pattern, and a scan pattern size, finds the start of the pattern (or NULL if no pattern could be found). We search first for the relative jump to nt!SeAccessCheckWithHint, and extract the offset. We use that to calculate the actual start of nt!SeAccessCheckWithHint in our copy of the module, and then we scan for the identifying pattern of the conditional branch we need to replace. Once we find the location, we can determine the actual address by converting it first to an RVA and then rebasing it off of the actual loaded kernel image. Finally, the replacement value is OS-version dependent as well; in this case the replacement for the JZ (0x0F 0x84) is a NOP (0x90) and a JMP (0xE9).

By gathering the information we need from the copied version of the system modules, we're able to have the same framework target multiple versions of the Windows operating system. By searching for patterns within the target functions, we are more resistant to changes in the OS that aren't directly in the functions we're looking for.

Some final complications

Everything we've done so far will work, up until we get to Windows 8, or more specifically, the NT 6.2 kernel. For convenience, we have the actual code of the exploit running in user memory. With the Ivy Bridge architecture, Intel introduced a feature called Supervisor Mode Execution Prevention (SMEP)[9].
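A PatternScan helper matching the contract described above (the article's own implementation isn't shown, so this is a sketch against that description) is essentially a bounded memory search:

```c
#include <stddef.h>
#include <string.h>

/* Sketch of the PatternScan helper described in the text: given a region
 * and a byte pattern, return a pointer to the first occurrence of the
 * pattern, or NULL if it could not be found. */
static unsigned char *pattern_scan(unsigned char *start, size_t scan_size,
                                   const unsigned char *pattern,
                                   size_t pat_size) {
    if (pat_size == 0 || pat_size > scan_size)
        return NULL;
    for (size_t i = 0; i + pat_size <= scan_size; i++) {
        if (memcmp(start + i, pattern, pat_size) == 0)
            return start + i;
    }
    return NULL;
}
```

One pleasant detail of the JZ-to-NOP+JMP replacement: a six-byte `jz rel32` (0x0F 0x84) ending at address A and a `nop; jmp rel32` (0x90 0xE9) pair starting at the same place also end at A, so the rel32 displacement can be reused unchanged and only the two opcode bytes need patching.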
If SMEP is enabled, the processor faults if we attempt to execute instructions from user-mode addresses while the processor is in supervisor-mode. The moment control passes from our hooked function pointer in the kernel to our code, we get an exception. Windows supports SMEP as of Windows 8/Server 2012, and the feature is enabled by default on processors that support it. To get around this, we either need to move our exploit into executable kernel memory (something that is also made more difficult in the NT 6.2 kernel)[10] or disable SMEP separately[11][12]. The final problem for us was introduced in Windows 8.1. In order to get Windows 8.1 to tell us the real version of the Operating System, we need to take additional steps. According to MSDN[13]: With the manifest included, we’re able to correctly detect Windows 8.1, and adjust our search parameters appropriately when determining offsets.

Conclusion

There is, of course, one piece missing. While a framework to prove exploitation is useful, we still need to have an arbitrary ‘write-what-where’ for this to work. We use this framework internally to validate these flaws, so if you happen to find a new one, you can always submit it to us (with or without an exploit payload) at Zero Day Initiative. We’d love to hear from you.

Endnotes

[1] The earliest formal reference I can find to this terminology is in Gerardo Richarte’s paper “About Exploits Writing” (G-CON 1, 2002) where he divides the primitive into a “write-anything-somewhere” and “write-anything-anywhere”. In this case, our “write-what-where” is a “write-anything-anywhere”.
[2] Greg Hoglund, “A *REAL* NT Rootkit, patching the NT Kernel” (Phrack 55, 1999)
[3] John Heasman, “Implementing and Detecting an ACPI BIOS Rootkit” (Black Hat Europe, 2006)
[4] Barnaby Jack, “Remote Windows Kernel Exploitation – Step In To the Ring 0” (Black Hat USA, 2005) [White Paper]
[5] Ruben Santamarta, “Exploiting Common Flaws in Drivers” (2007)
[6] Although it does not cover newer versions of the Windows OS, Windows NT/2000 Native API Reference (Gary Nebbett, 2000) is still an excellent reference for internal Windows API functions and structures.
[7] Alex Ionescu, “I Got 99 Problems But a Kernel Pointer Ain’t One” (RECon, 2013)
[8] Raymond Chen, “LoadLibraryEx(DONT_RESOLVE_DLL_REFERENCES) is fundamentally flawed” (The Old New Thing)
[9] Varghese George, Tom Piazza, and Hong Jiang, “Intel Next Generation Microarchitecture Codename Ivy Bridge” (IDF, 2011)
[10] Ken Johnson and Matt Miller, “Exploit Mitigation Improvements in Windows 8” (Black Hat USA, 2012)
[11] Artem Shishkin, “Intel SMEP overview and partial bypass on Windows 8” (Positive Research Center)
[12] Artem Shishkin and Ilya Smit, “Bypassing Intel SMEP on Windows 8 x64 using Return-oriented Programming” (Positive Research Center)
[13] MSDN, “Operating system version changes in Windows 8.1 and Windows Server 2012 R2”

Additional Reading

Enrico Perla and Massimiliano Oldani, A Guide to Kernel Exploitation: Attacking the Core (Syngress, 2010)
bugcheck and skape, “Kernel-mode Payloads on Windows” (Uninformed Volume 3, 2006)
skape and Skywing, “A Catalog of Windows Local Kernel-mode Backdoor Techniques” (Uninformed Volume 8, 2007)
mxatone, “Analyzing local privilege escalations in win32k” (Uninformed Volume 10, 2008)

Sursa: Verifying Windows Kernel Vulnerabilities - HP Enterprise Business Community
-
HTTP Request Hijacking

Posted by Yair Amit

Preface

This post contains details about a coding pitfall I recently identified in many iOS applications, which we call HTTP Request Hijacking (HRH). Adi Sharabani, Skycure’s CEO, and I will be presenting the problem, its ramifications, and some fix suggestions to developers later today at RSA Europe (14:10 – 15:00 | Room: G102). If you are an iOS developer in a hurry to fix this issue, feel free to jump over to the “Remediation” section. We’ve created a quick-and-easy solution that will automatically protect all vulnerable iOS apps.

Overview

Nowadays almost all mobile applications interact with a server to send or retrieve data, whether it’s information to display or commands to be executed. Many of these applications are susceptible to a simple attack, in which the attacker can persistently alter the server URL from which the app loads its data (e.g., instead of loading the data from real.site, the attacker makes the app persistently load the data from attacker.site). While the problem is generic and can occur in any application that interacts with a server, the implications of HRH for news and stock-exchange apps are particularly interesting. It is commonplace for people to read the news through their smartphones and tablets, and to trust what they read. If a victim’s app is successfully attacked, she is no longer reading the news from a genuine news provider, but instead phoney news supplied by the attacker’s server. Upon testing a variety of high-profile apps, we found many of them vulnerable. This brings us to a philosophical question: when someone gets up in the morning and reads news via her iPhone, how sure can she be that the reports she reads are genuine and not fake ones planted by a hacker?

HTTP Request Hijacking

The problem in a nutshell

The problem essentially revolves around the impact of HTTP redirection caching in mobile applications.
Many iOS applications cache HTTP status code 301 when it is received over the network as a response. While the 301 Moved Permanently HTTP response has valuable uses, it also has severe security ramifications for mobile apps, as it could allow a malicious attacker to persistently alter and remotely control the way the application functions, without any reasonable way for the victim to know about it. Whereas browsers have an address bar, most mobile apps do not visually indicate the server they connect to, making HRH attacks seamless, with a very low probability of being identified by the victims. HTTP Request Hijacking attacks start with a man-in-the-middle scenario. When the vulnerable app sends a request to its designated server (e.g., http://www.real.site/resource), the attacker captures it and returns a 301 HTTP redirection to an attacker-controlled server (e.g., http://www.attacker.site/resource). Let's take a look at the RFC definition of the 301 Moved Permanently HTTP response:

10.3.2 301 Moved Permanently

The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.

Source: RFC 2616, Fielding et al.

The 301 HTTP redirection issued by the attacker is therefore kept in the app’s cache, changing the original code’s behavior from then on. Instead of retrieving data from http://www.real.site, the app now loads data from http://www.attacker.site, even after the MiTM attack is over and the attacker is long gone.

How it all began

One evening, Assaf Hefetz and Roy Iarchy, two Skycure engineers, called me over and told me they had come across a weird redirection bug in our product.
We started discussing it when it suddenly hit me that this “bug” might in fact be a widespread vulnerability waiting to be discovered! A few days later we had a “white night” (working into the night) that resulted in a working PoC of an attack against a well-known iOS application. We went on to test a bunch of high-profile applications, and were amazed to find that about half of them were susceptible to HRH attacks. Focusing on leading app store news apps, we found many of them vulnerable and easy to exploit. Unlike most vulnerabilities, where a responsible disclosure could be made in private to the vendor in charge of the vulnerable app, we soon realized that HTTP Request Hijacking affects a staggering number of iOS applications, rendering the attempt to alert vendors individually virtually impossible. We therefore chose to reveal the problem, along with clear and detailed fix instructions, to empower developers to fix their code quickly and efficiently, before hackers attempt to exploit it.

The Skycure Journal: A Responsible Disclosure case study

As part of our Responsible Disclosure policy, we decided not to name specific vulnerable apps we are aware of as long as they are not fixed. Therefore, for the sake of discussing the technical nature of the problem and our proposed fix, we created a sample news application, which we called “The Skycure Journal”. You can clone its code through . While very basic, The Skycure Journal operates in a similar way to most major news apps: it loads a feed of news from a server (http://skycure-journal.herokuapp.com/) in JSON format, parses it, and then displays it to the reader. Img.
1 – Skycure Journal App

Here’s a relevant snippet from the code:

- (void)fetchArticles {
    NSURL *serverUrl = [NSURL URLWithString:@"http://skycure-journal.herokuapp.com"];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:serverUrl];
    [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
    self.connection = [[NSURLConnection alloc] initWithRequest:request delegate:self];
}

This code, variations of which can be found in many iOS apps, is susceptible to a network-based HRH attack. By capturing the request to http://skycure-journal.herokuapp.com/ (via a MiTM attack, for example) and returning a 301 Moved Permanently HTTP response that directs the victim’s app to an attacker-controlled server (e.g., http://ATTACKER/SJ_headlines.json), the Skycure Journal logic will persistently change to loading data from the attacker’s server, no matter where and when the victim uses it in the future. Without touching the application binary, HRH makes it behave as if the code had been altered to this:

NSURL *serverUrl = [NSURL URLWithString:@"http://ATTACKER/SJ_headlines.json"];

A quick demo

Imagine the following scenario: a victim walks into Starbucks, connects to the Wi-Fi and uses her favorite apps. Everything looks and behaves as normal; however, an attacker sitting at a nearby table performs a silent HRH attack on her apps. The next day, she wakes up at home and logs in to read the news, but she’s now reading the attacker’s news! Ariel Sakin, Skycure’s head of engineering, and Igal Kreichman, Skycure engineer, created a really cool demo showing how the attack appears from the attacker’s and victim’s perspectives. While cache attacks have been thoroughly discussed in the past, they were perceived more as a browser problem than a native-app problem.
The reason is that when performing a classical cache poisoning attack (e.g., returning a fake JSON/XML response with cache-control directives) on native apps, the impact is very limited. In such attacks, since the cached response is static by nature (as long as the native app does not rely on an embedded browser to render it), the attacker would not be able to persistently view, control or manipulate the app’s traffic. On the other hand, HRH attacks give the attacker remote and persistent control over the victim’s app.

HRH limitations and advanced techniques

The aforementioned attack has two limitations:

The attacker needs to be physically near the victim for the initial poisoning (the next steps of an HRH attack can be carried out against the victim regardless of geolocation).
The attack works only against HTTP traffic (well, almost only).

In a previous post, we uncovered the ramifications of malicious profiles. It is interesting to note that by luring a victim into installing a malicious profile that contains a root CA, an attacker can mount HRH attacks on SSL traffic as well. Combining the malicious profiles threat we uncovered with this new threat of HTTP Request Hijacking generates a troubling scenario: even after the malicious profile is identified and removed from the device, attacked apps continue to interact seamlessly with the attacker’s server instead of the real server, without the victim’s knowledge.

Remediation

For app developers

HRH affects a large proportion of iOS apps and we want to help ensure as many as possible are properly protected before exploits of this vulnerability start to appear. There are two main approaches for tackling HRH:

Option A

Make sure the app interacts with its designated server(s) via an encrypted protocol (e.g., HTTPS instead of HTTP). As described earlier in the post, this is an effective mitigation for HRH, but not a fix.

Option B

Assaf Hefetz, of our R&D team, has come up with this cool and simple fix for vulnerable apps.
Step 1

Create a new subclass of NSURLCache that avoids caching 301 redirections. Leave the rest of the logic intact.

@interface HRHResistantURLCache : NSURLCache
@end

@implementation HRHResistantURLCache
- (void)storeCachedResponse:(NSCachedURLResponse *)cachedResponse forRequest:(NSURLRequest *)request {
    if (301 == [(NSHTTPURLResponse *)cachedResponse.response statusCode]) {
        return;
    }
    [super storeCachedResponse:cachedResponse forRequest:request];
}
@end

Step 2

Set the new cache to be used by the app, making sure you place the initialization code before any request in your code.

#define DEFAULT_MEMORY_CAPACITY 512000
#define DEFAULT_DISK_CAPACITY 10000000

HRHResistantURLCache *myCache = [HRHResistantURLCache alloc];
[myCache initWithMemoryCapacity:DEFAULT_MEMORY_CAPACITY diskCapacity:DEFAULT_DISK_CAPACITY diskPath:@"MyCache.db"];
[NSURLCache setSharedURLCache:myCache];

For CIOs/CISOs

We see a significant increase in attacks mounted via the networks around us. Some affect apps; others the entire device. Skycure is dedicated to providing companies with a solution that detects and protects employee devices from a variety of trending mobile security threats. If you are interested, why not start a free trial.

For iOS users

If you believe you’ve been subject to an HRH attack, remove the app and then reinstall it, to ensure the attack is removed. Then please drop us a line at security@skycure.com to tell us about it. It is of course always recommended to keep your apps fully up to date, so that when fixes are released, you’ll have them installed on your device at the earliest opportunity. Do this either manually or by enabling auto-update in iOS 7.

Future work

In this write-up we’ve discussed the impact of 301 HTTP responses on mobile applications. Note that other redirection responses might also prove to be problematic, such as the 308 HTTP response (still a draft at the time of writing).
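The defensive idea here is not tied to Objective-C, either. As an illustration only (this sketch is mine, not part of the original post, and the class name is made up), a Java client could apply an equivalent guard by installing a java.net.ResponseCache whose put() refuses to store 301 responses:

```java
import java.io.IOException;
import java.net.CacheRequest;
import java.net.CacheResponse;
import java.net.HttpURLConnection;
import java.net.ResponseCache;
import java.net.URI;
import java.net.URLConnection;
import java.util.List;
import java.util.Map;

/** Sketch of a cache that stores everything except permanent (301) redirects. */
public class HRHResistantResponseCache extends ResponseCache {

    /** The guard itself: permanent redirects must never be cached. */
    public static boolean shouldCache(int statusCode) {
        return statusCode != 301;
    }

    @Override
    public CacheResponse get(URI uri, String method, Map<String, List<String>> headers) {
        return null; // lookup logic omitted in this sketch
    }

    @Override
    public CacheRequest put(URI uri, URLConnection conn) throws IOException {
        if (conn instanceof HttpURLConnection
                && !shouldCache(((HttpURLConnection) conn).getResponseCode())) {
            return null; // returning null tells the runtime not to cache this response
        }
        return null; // real storage logic omitted in this sketch
    }
}

// Installed once at startup, before any request is issued:
// ResponseCache.setDefault(new HRHResistantResponseCache());
```

As in the iOS fix, the key point is that the guard runs on every response, so a poisoned 301 never becomes persistent state.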
HRH isn’t necessarily a problem of iOS applications alone; it may apply to mobile applications of other operating systems too. Sursa: Skycure HTTP Request Hijacking
-
[h=3]Unauthorized Access: Bypassing PHP strcmp()[/h]While playing Codegate CTF 2013 this weekend, I had the opportunity to complete Web 200, which was very interesting. So, let's get our hands dirty. The main page asks you to provide a valid One-Time-Password in order to log in:

A valid password can be obtained by selecting the "OTP issue" option; we can see its source code (provided during the challenge) below:

include("./otp_util.php");
echo "your ID : ".$_SERVER["REMOTE_ADDR"]."";
echo "your password : ".make_otp($_SERVER["REMOTE_ADDR"])."";
$time = 20 - (time() - ((int)(time()/20))*20);
echo "you can login with this password for $time secs.";

A temporary password is calculated based on my external IP (208.54.39.160) and will last 20 seconds or less; below is the result:

So, then I clicked on the "Login" option (see first image above) and the POST data below was sent:

id=208.54.39.160&ps=69b9a663b7cafaca2d96c6d1baf653832f9d929b

This gave me access to the web site (line 6 in the code below). But we cannot reach line 9 (see code below) in order to get the flag, since the IP in the "id" parameter was different. Let's analyze the script that handles the Login Form (login_ok.php):

1. $flag = file_get_contents($flag_file);
2. if (isset($_POST["id"]) && isset($_POST["ps"])) {
3.     $password = make_otp($_POST["id"]);
4.     sleep(3); // do not bruteforce
5.     if (strcmp($password, $_POST["ps"]) == 0) {
6.         echo "welcome, ".$_POST["id"];
8.         if ($_POST["id"] == "127.0.0.1") {
9.             echo "Flag:".$flag;
           }
       } else {
           echo "alert('login failed..')";
       }
   }

Test case 1: Spoofing the Client IP Address

So, the first thing that came to my mind in order to get the flag (line 9) was to send "127.0.0.1" in the "id" parameter, so let's analyze the function make_otp(), which calculates the password:

$flag_file = "flag.txt";
function make_otp($user) {
    // access for 20 secs.
    $time = (int)(time()/20);
    $seed = md5(file_get_contents($flag_file)).md5($_SERVER['HTTP_USER_AGENT']);
    $password = sha1($time.$user.$seed);
    return $password;
}

As we can see in the code above, the function make_otp() receives the "id" parameter in the $user variable, which is used to calculate the password. Following this approach, we will not be able to pass line 5, since we would need a password for the IP 127.0.0.1 and we can only request passwords based on our external IP via the "OTP issue" option, as explained above. So, how can we get one?

What if we try to find a vulnerability in the code behind the "OTP issue" option? Since "OTP issue" reads the IP from the environment variable "REMOTE_ADDR", we could try to spoof our external IP address as if we were connecting from 127.0.0.1, but unfortunately this is not a good option: although spoofing could be possible, it is only a one-way communication, so we would not get a response from the server. At this point, we need to discard this approach.

Test case 2: Bruteforcing the password

Looking at the make_otp() function shown above, the only data we do not know in the password calculation process is the content of $flag_file (obviously). Even assuming the content of that file is fewer than 4-5 characters, and that we therefore have a chance to bruteforce the MD5 hash, we would only have 20 seconds to guess it; due to the sleep(3) command (see line 4 above), we could only try about 6 passwords before the password expires. We can therefore definitely drop the bruteforcing approach off the table.

Test case 3: Bypassing the strcmp() function

After analyzing the two cases described above, I started "googling" for "strcmp php vulnerabilities" but did not find anything. Then, by looking at the PHP documentation, I realized this function has only three possible return values:

int strcmp ( string $str1 , string $str2 )

Returns < 0 if str1 is less than str2; > 0 if str1 is greater than str2, and 0 if they are equal.
Obviously, we need to find a way to force strcmp() to return 0 so that we can bypass line 5 (see above) without even knowing the password. So, I started wondering: what would the return value be if there is an error during the comparison? I prepared a quick test comparing str1 with an array (or an object) instead of another string:

$fields = array(
    'id' => '127.0.0.1',
    'ps' => 'bar'
);
$a = "danux";
if (strcmp($a, $fields) == 0) {
    echo "This is zero!!";
} else {
    echo "This is not zero";
}

And got the warning below from PHP:

PHP Warning: strcmp() expects parameter 2 to be string, array given in ...

But guess what? Voila! It also prints the string "This is zero!!" In other words, strcmp() returns NULL on error, and NULL loosely compares equal to 0, as if both values were equal. So, the last but not least step is to send an array in the "ps" POST parameter so that we can bypass line 5. After some research and help from my friend Joe B., I learned I can send an array this way:

id=127.0.0.1&ps[]=a

Notice that instead of sending "&ps=a", I also send the square brackets [] in the parameter name, which will send an array object!! Also, notice that I am sending "id=127.0.0.1" so that I can get to line 9. And after sending this POST request...
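As a cross-language aside (this contrast is my addition, not part of the original writeup): the trick above depends on loose comparison of untrusted input. In a strictly typed language such as Java, the analogous check cannot be coerced into a false match, because String.equals() simply returns false for any non-String argument instead of erroring into an "equal" result:

```java
import java.util.Arrays;
import java.util.List;

public class StrictCompareDemo {

    /** A login check in the spirit of line 5 of login_ok.php, but type-safe. */
    public static boolean passwordMatches(String expected, Object supplied) {
        // String.equals() returns false for any non-String argument,
        // so an array smuggled in instead of a string can never "match".
        return expected.equals(supplied);
    }

    public static void main(String[] args) {
        List<String> smuggledArray = Arrays.asList("a");
        System.out.println(passwordMatches("secret", "secret"));       // true
        System.out.println(passwordMatches("secret", smuggledArray));  // false, not an error
    }
}
```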
-
Demystifying Java Internals (An introduction) Volume 1

Java is a technology that makes it easy to develop distributed applications, which are programs that can be executed by multiple computers across a network, whether a local or a wide area network. Java has expanded the Internet's role from an arena for communications to a network on which full-fledged applications can be executed. Ultimately, this open source technology enables network programming across diverse platforms. This article illustrates the following contents in detail:

Genesis of Java
Java and the World Wide Web
Beauty of Java: The “Bytecode”
Java Framework Configuration
Java Features
Summary

Before Java, the Internet was used primarily for sharing information, though developers soon realized that the World Wide Web could meet some business needs. The WWW is a technology that treats Internet resources as linked, and it has revolutionized the way people access information. The web has enabled Internet users to access Internet services without learning sophisticated, cryptic commands. Through the web, corporations can easily provide product information and even sell merchandise. Java technology takes this a step further by making it possible to offer fully interactive applications via the web. In particular, Java programs can be embedded into web documents, turning static pages into applications that run on the user’s computer. Java has the potential to change the function of the Internet, much as the web has changed the way people access the Internet. In other words, not only will the network provide information, it will also serve as an operating system.

Genesis of Java

In 1990, Java was conceived by James Gosling, Patrick Naughton, and Ed Frank at Sun Microsystems. This language was initially known as Oak. Oak preserved the familiar syntax of C++ but omitted the potentially dangerous features, such as pointer arithmetic, operator overloading, and explicit resource references.
Oak incorporated memory management directly into the language, freeing the programmer to concentrate on the task to be performed by the program. As Oak matured, the WWW was growing dramatically, and the team at Sun realized that Oak was perfectly suited to Internet programming. Thus, in 1994, they completed work on a product known as WebRunner, an early browser written in Oak. WebRunner was later renamed HotJava, and it demonstrated the power of Oak as an Internet development tool. Finally, in 1995, Oak was renamed Java and introduced by Sun. Since then, Java’s rise in popularity has been dramatic. Java is related to C++, which is a direct descendant of C, and much of the character of Java is inherited from those two languages. From C, Java derives its syntax. Java is truly an object-oriented, case-sensitive programming language, and many of Java's OOP features were influenced by C++. The original impetus for Java was not the Internet. Instead, the primary motivation was the need for a platform-independent language that could be used to create software to be embedded in various consumer electronic devices. The trouble with C and C++ is that they are designed to be compiled for a specific target, so an easier and more cost-efficient solution was Java technology, a truly open source technology.

Java and the World Wide Web

Today, the web acts as a convenient transport mechanism for Java programs, and the web’s ubiquity has popularized Java as an Internet development tool. Java expands the universe of objects that can move about freely in cyberspace. In a network, two very broad categories of objects are transmitted between a server and your personal computer: passive information and dynamic, active programs. For example, when you read your e-mail, you are viewing passive data. However, a second type of object can be transmitted to your computer: a dynamic, self-executing program.
For example, a program might be provided by the server to properly display the data that the server is sending. A dynamic network program presents serious problems in the areas of security and portability. As you will see, Java addresses those concerns effectively by introducing a new form of program: the applet. Java primarily defines two types of programs: applets and applications. An applet is an application designed to be transmitted over the Internet and executed by a Java-compatible web browser. An applet is a tiny Java program that can be dynamically downloaded across the network; it is a kind of program that can react to user input and be changed dynamically. On the other hand, an application runs on your computer under that computer's operating system, such as an application created in the C or C++ language. Java technology provides portable code execution across diverse platforms. Many types of computers and operating systems are in use throughout the world, and many are connected to the Internet. For programs to be dynamically downloaded to all the various types of platforms connected to the Internet, some means of generating portable executable code is needed.

Beauty of Java: The “Bytecode”

To resolve security and portability issues, the Java compiler does not produce native executable code. Rather, it produces Bytecode, which is a highly optimized set of instructions designed to be executed by the Java run-time system, known as the Java Virtual Machine (JVM). The JVM is essentially an interpreter for Bytecode. Translating a Java program into Bytecode makes it much easier to run the program on a wide variety of platforms, such as Linux, Windows, and Mac. The only requirement is that the JVM be implemented for each platform. Once the run-time package exists for a given system, any Java program can execute on it. Although the details of the JVM will differ from platform to platform, all JVMs interpret the same Java Bytecode.
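To make the portability claim concrete (this tiny example is mine, not from the article): the same compiled class file can report the platform it happens to be running on, while its Bytecode stays unchanged:

```java
public class PlatformProbe {

    /** Returns a description of the JVM/OS this (unchanged) class file runs on. */
    public static String describe() {
        return "Java " + System.getProperty("java.version")
             + " on " + System.getProperty("os.name")
             + " (" + System.getProperty("os.arch") + ")";
    }

    public static void main(String[] args) {
        // The same .class file prints different values on Linux, Windows, or Mac,
        // because only the JVM underneath changes.
        System.out.println(describe());
    }
}
```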
Thus, the interpretation of Bytecode is a feasible solution for creating truly portable programs. In fact, most modern languages, such as C++ and C#, are designed to be compiled, not interpreted, mostly because of concerns about performance: when a program is interpreted, it generally runs substantially slower than it would if compiled to executable code. Even so, the use of Bytecode enables the Java run-time system to execute programs much faster than you might expect. To execute Java Bytecode, the JVM uses a class loader to fetch the Bytecodes from a network or disk. Each class file is fed to a Bytecode verifier that ensures the class is formatted correctly and will not corrupt memory when it is executed. The JIT compiler converts the Bytecodes to native code instructions on the user’s machine immediately before execution. The JIT compiler runs on the user's machine and is transparent to the user; the resulting native code instructions do not need to be ported because they are already at their destination. The following figure illustrates how the JIT compiler works.

Prerequisite

In order to write and execute a program written in the Java language, we are supposed to configure our workstation with the following software:

Java Development Kit (JDK)
Java Virtual Machine (JVM)
Eclipse Juno IDE
Tomcat Web Server (required for Servlets and JSP)
Notepad++ (optional)

Java Framework Configurations

A Java program can be built and compiled either with a third-party tool, such as Eclipse Juno, or with the plain Java Development Environment tools; the latter require some configuration on the user's machine in order to run programs, while a third-party tool doesn't. As per the open source nature of Java, such development tools are freely available from the Oracle Technology Network for Java Developers | Oracle Technology Network | Oracle website. The following segments specify the configuration of each particular tool in detail.

Java Development Kit

The JDK,
originally called the Java Development Environment, can create and display graphical applications. The JDK consists of a library of standard classes (the core Java API) and a collection of utilities for building, testing, and documenting Java programs. You need the core Java API to access the core functionality of the Java language. The core Java API includes underlying language constructs, as well as graphics, network, garbage collection, and file input/output capabilities. Here, the JDK utilities are outlined:

[TABLE]
[TR][TD]JDK Utilities[/TD][TD]Description[/TD][/TR]
[TR][TD]javac[/TD][TD]The Java compiler; converts source code into Bytecode.[/TD][/TR]
[TR][TD]java[/TD][TD]The Java interpreter; executes Java application Bytecodes.[/TD][/TR]
[TR][TD]javah[/TD][TD]Generates a C header file that can be used to make a C routine that calls a Java method.[/TD][/TR]
[TR][TD]javap[/TD][TD]Used to disassemble Java class files; displays accessible data and functions.[/TD][/TR]
[TR][TD]appletviewer[/TD][TD]Executes Java applet classes.[/TD][/TR]
[TR][TD]jdb[/TD][TD]The Java debugger; allows stepping through the program.[/TD][/TR]
[TR][TD]javadoc[/TD][TD]Creates HTML documentation based on source code comments.[/TD][/TR]
[TR][TD]keytool[/TD][TD]Used for security key generation and management.[/TD][/TR]
[TR][TD]rmic[/TD][TD]Creates classes that support remote method invocation.[/TD][/TR]
[TR][TD]rmiregistry[/TD][TD]Starts the registry used to gain access to RMI objects.[/TD][/TR]
[TR][TD]rmid[/TD][TD]Starts the RMI activation system daemon.[/TD][/TR]
[TR][TD]serialver[/TD][TD]This serialization utility permits versioning of persisted objects.[/TD][/TR]
[TR][TD]jar[/TD][TD]Allows multiple Java classes and resources to be distributed in one compressed file.[/TD][/TR]
[TR][TD]jarsigner[/TD][TD]Implements digital signing for JAR and class files.[/TD][/TR]
[TR][TD]native2ascii[/TD][TD]Converts Unicode characters to international encoding schemes.[/TD][/TR]
[/TABLE]

After installing and configuring the JDK, you will see how these tools are applied to build and run a Java application, as illustrated in the following figure:

First Hello World Java Program

Java source code can be written with a simple text editor such as Notepad; the best IDE is Eclipse Juno, which provides numerous development templates. As you can see, the following console-based Java program simply prints a “Hello World” text on the screen. The code defines a Java class called HelloWorld, which contains a single method called main(). When the Java interpreter tries to execute the HelloWorld class, it will look for a method called main(), and the VM will execute this function to run the program.

/* HelloWorld.java sample */
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}

Save the file as “HelloWorld.java” somewhere on disk. Remember, the file name must match the class name and carry the *.java extension. To compile the sample program, execute the compiler, javac, specifying the name of the source file on the command prompt. The javac compiler creates a file called HelloWorld.class that contains the Bytecode version of the program. To actually run the program, you must use the Java interpreter, called java. To do so, pass the class name as a command-line argument. The following figure depicts the whole life cycle of the Java compilation process:

We can gather from the aforementioned sample that a Java program is first compiled and later interpreted. The following illustrates the various tool configurations employed during the compilation process. By adding a few comments to your Java source code, you make it possible for javadoc to automatically generate HTML documentation for your code.
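For instance, a minimal class documented with Javadoc comments might look like this (the Greeter class is a made-up example of mine, not from the original article):

```java
/**
 * A tiny greeting utility used to demonstrate Javadoc comments.
 * Running "javadoc Greeter.java" turns these comments into HTML pages.
 */
public class Greeter {

    /**
     * Builds a greeting for the given name.
     *
     * @param name the person to greet
     * @return the greeting text
     */
    public static String greet(String name) {
        return "Hello, " + name + "!";
    }

    /**
     * Entry point: prints a sample greeting.
     *
     * @param args command-line arguments (unused)
     */
    public static void main(String[] args) {
        System.out.println(greet("World"));
    }
}
```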
After adding the comments, use the javadoc command and it will generate a couple of files:

It is possible to examine the Bytecode of a compiled class file and identify its accessible variables and functions. The javap utility creates a report that shows not only what functions and variables are available, but also what the code actually does, although at a very low level, as in the following:

Java Features

In this section, we will look briefly at the major characteristics that make Java such a powerful development tool. These include cross-platform code execution, multi-threading, security, and object-oriented features, such as:

Architecture-Neutral

One of the main problems facing programmers is that no guarantee exists that if you write a program today, it will run tomorrow—even on the same machine. Processor and operating system upgrades, and changes in core system resources, can make a program malfunction. Java resolves all these issues by introducing "write once, run anywhere" functionality.

Object-Oriented

Java is an object-oriented language: that is, it has the facilities of OOP incorporated into the language. We can write reusable, extensible, and maintainable software with OOP support. Java supports all object-oriented mechanisms, such as abstraction, inheritance, polymorphism, and encapsulation.

Interpreted and High Performance

Java enables the creation of cross-platform programs by compiling into an intermediate representation called Bytecode. This code can be interpreted on any system that provides a JVM. Java is specially designed to perform well on very low-power CPUs, and Java Bytecode can be translated directly into native machine code for very high performance by using a JIT compiler.

Multi-Threading Support

Java was designed to meet the real-world requirement of creating interactive network programs. To accomplish this, Java supports multithreaded programming, which allows you to write programs that do many things simultaneously. 
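As a brief sketch of this feature (the class and thread names here are illustrative, and Java 8+ lambda syntax is assumed), two threads can be started with the standard Thread API and awaited with join():

```java
// Minimal multithreading sketch: two threads run the same task
// concurrently; join() blocks until each one has finished.
public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " step " + i);
            }
        };
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();  // both threads now execute task concurrently
        t2.start();
        t1.join();   // wait for both to complete before continuing
        t2.join();
        System.out.println("done");
    }
}
```

The interleaving of the two workers' output varies from run to run, which is exactly the concurrency the article describes.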
The Java run-time system comes with an elegant yet sophisticated solution for multi-process synchronization that enables you to construct smoothly running interactive systems.

Distributed Applications

Java is designed for the distributed environment of the Internet, because it handles TCP/IP protocols. This allows objects on two different computers to execute a procedure remotely. Java revived these interfaces in a package called RMI (Remote Method Invocation). This feature brings an unparalleled level of abstraction to client/server programming.

Strictly Typed Language

Java provides robust programming by restricting you in a few key areas, forcing you to find your mistakes early, at program design time. Java frees you from having to worry about many of the most common causes of programming errors. Because Java is a strictly typed language, it checks your code at compile time. However, it also checks your code at run time.

Security

Security is probably the main problem facing Internet developers. Users are typically afraid of two things: confidential information being compromised and their computer systems being corrupted or destroyed by hackers. Java's built-in security addresses both of these concerns. The Java security model has three primary components: the class loader, the Bytecode verifier, and the SecurityManager class. We will dig deeper into these concepts in later articles.

Final Note

This article introduced you to the history of Java, its effect on the WWW, and the underlying Java architecture. It explained the role of the Java Development Kit in writing program code. We have seen how to write a simple Java console application using JDK utilities such as java, javac, javadoc, etc. Finally, we came to an understanding of why Java is so popular among the developer community by laying out its advantages over other languages. After reading this article, one can easily start programming in Java. 
By Ajay Yadav|October 22nd, 2013 Sursa: Demystifying Java Internals (An introduction) - InfoSec Institute
-
Owasp Xenotix Xss Exploit Framework V4.5 Video Documentation
Nytro posted a topic in Tutoriale video
Owasp Xenotix Xss Exploit Framework V4.5 Video Documentation Description: OWASP Xenotix XSS Exploit Framework is an advanced Cross Site Scripting (XSS) vulnerability detection and exploitation framework. It provides zero-false-positive scan results with its unique Triple Browser Engine (Trident, WebKit, and Gecko) embedded scanner. It is claimed to have the world's second-largest collection of XSS payloads, with about 1,500+ distinctive payloads for effective XSS vulnerability detection and WAF bypass. It is incorporated with a feature-rich Information Gathering module for target reconnaissance. The Exploit Framework includes highly offensive XSS exploitation modules for Penetration Testing and Proof of Concept creation. Version 4.5 Download: https://www.owasp.org/index.php/OWASP_Xenotix_XSS_Exploit_Framework#tab=Downloads Official Website: OWASP Xenotix XSS Exploit Framework v4.5 Relesed ‹ OpenSecurity Sursa: Owasp Xenotix Xss Exploit Framework V4.5 Video Documentation
Brucon 0x05 - David Perez, Jose Pico - Geolocation Of Gsm Mobile Devices Description: Geolocation of mobile devices (MS) by the network has always been considered of interest, for example, to locate people in distress calling an emergency number, and so the GSM standard provides different location services (LCS), some network-based, and some MS-based or MS-assisted. OK, but what if a third party, without access to the network, was interested in knowing the exact position of a particular MS? Could he or she locate it? In this presentation we will show that this is indeed possible, even if the MS does not want to be found, meaning that the device has all its location services deactivated. We will demonstrate a system we designed and built for this purpose, which can be operated in any standard vehicle and can pinpoint the exact location of any target MS in a radius of approximately 2 kilometers around the vehicle. Yet, the main focus of the presentation will not so much be the system itself as it will be the process we followed for its design and implementation. We will describe in detail the many technical difficulties that we encountered along the way and how we tackled them. We believe this can be useful for people embarking on similar research projects. Obviously, a system like this cannot be demonstrated live in the room (it would be quite illegal), but we will show videos of the different consoles of the system, operating in different environments. For More Information please visit : - BruCON 2013 Sursa: Brucon 0x05 - David Perez, Jose Pico - Geolocation Of Gsm Mobile Devices
-
Louisville Infosec 2013 - Attacking Ios Applications - Karl Fosaaen Description: This presentation will cover the basics of attacking iOS applications (and their back ends) using a web proxy to intercept, modify, and repeat HTTP/HTTPS requests. From setting up the proxy to pulling data from the backend systems, this talk will be a great primer for anyone interested in testing iOS applications at the HTTP protocol level. There will be a short (2 minute) primer on setting up the intercepting proxy, followed by three practical examples showing how to intercept data headed to the phone, how to modify data heading to the application server, and how to pull extra data from application servers to further an attack. All of these examples will focus on native iOS apps (Game Center and Passbook) and/or functionality (Passbook Passes). Karl is a senior security consultant at NetSPI. This role has allowed Karl to work in a variety of industries, including financial services, health care, and hardware manufacturing. Karl specializes in network and web application penetration testing. In his spare time, Karl helps out as an OPER at THOTCON and a swag goon at DEF CON. For More Information please visit : - Louisville Metro InfoSec - Theme: Mobile Security Louisville Infosec 2013 Videos Sursa: Louisville Infosec 2013 - Attacking Ios Applications - Karl Fosaaen
-
HACKING EXPOSED™ 6: NETWORK SECURITY SECRETS & SOLUTIONS
STUART MCCLURE, JOEL SCAMBRAY, GEORGE KURTZ

CONTENTS

Foreword
Acknowledgments
Preface
Introduction

Part I: Casing the Establishment

Case Study: IAAAS—It's All About Anonymity, Stupid
  Tor-menting the Good Guys

Chapter 1: Footprinting
  What Is Footprinting?
    Why Is Footprinting Necessary?
  Internet Footprinting
    Step 1: Determine the Scope of Your Activities
    Step 2: Get Proper Authorization
    Step 3: Publicly Available Information
    Step 4: WHOIS & DNS Enumeration
    Step 5: DNS Interrogation
    Step 6: Network Reconnaissance
  Summary

Chapter 2: Scanning
  Determining If the System Is Alive
  Determining Which Services Are Running or Listening
    Scan Types
    Identifying TCP and UDP Services Running
    Windows-Based Port Scanners
    Port Scanning Breakdown
  Detecting the Operating System
    Active Stack Fingerprinting
    Passive Stack Fingerprinting
  Summary

Chapter 3: Enumeration
  Basic Banner Grabbing
  Enumerating Common Network Services
  Summary

Part II: System Hacking

Case Study: DNS High Jinx—Pwning the Internet

Chapter 4: Hacking Windows
  Overview
    What's Not Covered
  Unauthenticated Attacks
    Authentication Spoofing Attacks
    Remote Unauthenticated Exploits
  Authenticated Attacks
    Privilege Escalation
    Extracting and Cracking Passwords
    Remote Control and Back Doors
    Port Redirection
    Covering Tracks
    General Countermeasures to Authenticated Compromise
  Windows Security Features
    Windows Firewall
    Automated Updates
    Security Center
    Security Policy and Group Policy
    Bitlocker and the Encrypting File System (EFS)
    Windows Resource Protection
    Integrity Levels, UAC, and LoRIE
    Data Execution Prevention (DEP)
    Service Hardening
    Compiler-based Enhancements
    Coda: The Burden of Windows Security
  Summary

Chapter 5: Hacking Unix
  The Quest for Root
    A Brief Review
    Vulnerability Mapping
    Remote Access vs. Local Access
  Remote Access
    Data-Driven Attacks
    I Want My Shell
    Common Types of Remote Attacks
  Local Access
  After Hacking Root
    What Is a Sniffer?
    How Sniffers Work
    Popular Sniffers
    Rootkit Recovery
  Summary

Part III: Infrastructure Hacking

Case Study: Read It and WEP

Chapter 6: Remote Connectivity and VoIP Hacking
  Preparing to Dial Up
  War-Dialing
    Hardware
    Legal Issues
    Peripheral Costs
    Software
  Brute-Force Scripting—The Homegrown Way
    A Final Note About Brute-Force Scripting
  PBX Hacking
  Voicemail Hacking
  Virtual Private Network (VPN) Hacking
    Basics of IPSec VPNs
  Voice over IP Attacks
    Attacking VoIP
  Summary

Chapter 7: Network Devices
  Discovery
    Detection
  Autonomous System Lookup
    Normal traceroute
    traceroute with ASN Information
    show ip bgp
  Public Newsgroups
  Service Detection
  Network Vulnerability
    OSI Layer 1
    OSI Layer 2
    OSI Layer 3
    Misconfigurations
    Route Protocol Hacking
    Management Protocol Hacking
  Summary

Chapter 8: Wireless Hacking
  Wireless Footprinting
    Equipment
    War-Driving Software
    Wireless Mapping
  Wireless Scanning and Enumeration
    Wireless Sniffers
    Wireless Monitoring Tools
  Identifying Wireless Network Defenses and Countermeasures
    SSID
    MAC Access Control
  Gaining Access (Hacking 802.11)
    SSID
    MAC Access Control
  WEP
    Attacks Against the WEP Algorithm
    Tools That Exploit WEP Weaknesses
  LEAP
  WPA
    Attacks Against the WPA Algorithm
  Additional Resources
  Summary

Chapter 9: Hacking Hardware
  Physical Access: Getting in the Door
  Hacking Devices
  Default Configurations
    Owned Out of the Box
    Standard Passwords
    Bluetooth
  Reverse Engineering Hardware
    Mapping the Device
    Sniffing Bus Data
    Firmware Reversing
    JTAG
  Summary

Part IV: Application and Data Hacking

Case Study: Session Riding

Chapter 10: Hacking Code
  Common Exploit Techniques
    Buffer Overflows and Design Flaws
    Input Validation Attacks
  Common Countermeasures
    People: Changing the Culture
    Process: Security in the Development Lifecycle (SDL)
    Technology
    Recommended Further Reading
  Summary

Chapter 11: Web Hacking
  Web Server Hacking
    Sample Files
    Source Code Disclosure
    Canonicalization Attacks
    Server Extensions
    Buffer Overflows
    Web Server Vulnerability Scanners
  Web Application Hacking
    Finding Vulnerable Web Apps with Google
    Web Crawling
    Web Application Assessment
  Common Web Application Vulnerabilities
  Summary

Chapter 12: Hacking the Internet User
  Internet Client Vulnerabilities
    A Brief History of Internet Client Hacking
    JavaScript and Active Scripting
    Cookies
    Cross-Site Scripting (XSS)
    Cross-Frame/Domain Vulnerabilities
    SSL Attacks
    Payloads and Drop Points
    E-Mail Hacking
    Instant Messaging (IM)
    Microsoft Internet Client Exploits and Countermeasures
    General Microsoft Client-Side Countermeasures
    Why Not Use Non-Microsoft Clients?
  Socio-Technical Attacks: Phishing and Identity Theft
    Phishing Techniques
  Annoying and Deceptive Software: Spyware, Adware, and Spam
    Common Insertion Techniques
    Blocking, Detecting, and Cleaning Annoying and Deceptive Software
  Malware
    Malware Variants and Common Techniques
  Summary

Part V: Appendixes

Download: http://www50.zippyshare.com/v/55127603/file.html
-
Mozilla Fixes 10 Vulnerabilities with Firefox 25 by Chris Brook

Mozilla released the 25th version of its mobile and desktop Firefox browser yesterday, fixing 10 vulnerabilities, five of them critical. The United States Computer Emergency Readiness Team (US-CERT) warned yesterday the vulnerabilities could let an attacker execute arbitrary code, bypass access restrictions, obtain sensitive information and cause a denial-of-service (DoS) condition.

According to the Mozilla Foundation Security Advisory, the critical fixes address a few problems, namely a series of use-after-free bugs and memory bugs in the JavaScript engine that can open the system up to attackers and lead to a crash. While not critical, another bug discovered by security researcher Cody Crews was patched that could have let an attacker append an iFrame into an embedded PDF object. The result could have led to the disclosure of local system files and the bypassing of security restrictions.

According to the company's bug-tracking database Bugzilla, 565 bugs in total were fixed in Firefox 25.0. While Mozilla's Thunderbird mail client (24.1) and Seamonkey (2.22) Internet application suite were also updated yesterday, most of the bugs fixed were only at risk of being exploited in the Firefox browser or Firefox "browser-like contexts." Since scripting is disabled in Thunderbird and Seamonkey, they are less likely to be exploited.

Mozilla's mobile version got an upgrade yesterday as well, bringing some existing security features from the desktop browser to Android devices. One of those features, mixed content blocking, introduced in the main Firefox browser back in August, should protect users from man-in-the-middle attacks and eavesdroppers on HTTPS pages. The feature reduces the threat of insecure images, audio and JavaScript on HTTPS pages by blocking them by default. 
The latest mobile build also supports guest browsing, making it easier for users to lend their device to others without worrying about revealing any sensitive bookmarks or history. Both the guest browsing and mixed content blocking features were introduced in the beta version of the mobile browser back in September but officially went live in the stable version yesterday. Per usual, both versions of Firefox, for mobile and desktop, along with updated versions of Thunderbird and Seamonkey, are available at their respective download pages. Sursa: Mozilla Fixes 10 Vulnerabilities with Firefox 25 | Threatpost | The First Stop For Security News
-
Critical vulnerability in Twitter allows attacker to upload Unrestricted Files
Pierluigi Paganini, The Hacker News - Tuesday, October 29, 2013

Security expert Ebrahim Hegazy, Cyber Security Analyst Consultant at Q-CERT, has found a serious vulnerability in Twitter that allows an attacker to upload files of any extension, including PHP. When an application fails to validate, or improperly validates, file types before uploading them to the system, the flaw is called an Unrestricted File Upload vulnerability. Such flaws allow an attacker to upload and execute arbitrary code on the target system, which could result in execution of arbitrary HTML and script code or full system compromise.

According to Ebrahim, when a developer creates a new application for Twitter on dev.twitter.com, they have the option to upload an image for that application. While uploading the image, the Twitter server checks the uploaded files and accepts only certain image extensions, such as PNG and JPG; other extensions are rejected. But in a video proof of concept he demonstrated a vulnerability that allowed him to bypass this validation and successfully upload .htaccess and .php files to the twimg.com server.

Twimg.com works as a CDN (content delivery network), which means that every time an attacker uploads a file, it is hosted on a different server or subdomain of twimg.com. CDNs usually do not allow scripting engines to run, so in a normal scenario, successfully uploading .htaccess and PHP files to a server that supports PHP would mean remote code execution on that server. In the case of Twitter, the vulnerability could still be abused:

- To turn twimg.com into a botnet command server by hosting a text file with commands; infected machines would connect to that file to take their commands. Since twimg.com is a domain trusted by users, this would not grab attention.
- To host malicious files.
- At a minimum, to upload a text page with defacement content and then add the affected subdomains of twimg.com as a mirror on Zone-h.org, which would damage Twitter's reputation.

Twitter recognized the criticality of the Unrestricted File Upload vulnerability and added Hegazy's name to its Hall of Fame. I personally reached out to Ebrahim Hegazy, who revealed to me that he had also found an open redirection vulnerability in Twitter on 15th September, which has also been fixed. I conclude with a personal consideration: it's a shame Twitter doesn't have a bounty program; in my opinion it is fundamental to incentivize hackers toward ethical disclosure of bugs. An attack against a social network could have serious repercussions for users and for the platform's reputation, and if hackers sell knowledge of a flaw on the black market, a growing number of cyber criminals could benefit from it.

Sursa: http://thehackernews.com/2013/10/critical-vulnerability-in-twitter.html
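The class of bug described here comes down to how the server validates an uploaded filename. A minimal sketch (hypothetical code, not Twitter's actual implementation) contrasting a strict extension whitelist with the kind of naive check that such bypasses defeat:

```python
import os

# Extensions the hypothetical upload endpoint is willing to store.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif"}

def is_allowed_upload(filename: str) -> bool:
    """Reject any filename whose final extension is not whitelisted.

    Note: even a correct extension check is not sufficient on its own;
    the stored file must also be served from a domain/path where no
    scripting engine will ever execute it (as a CDN normally ensures).
    """
    _, ext = os.path.splitext(filename.lower())
    return ext in ALLOWED_EXTENSIONS

def naive_check(filename: str) -> bool:
    """A broken check that only looks for an image extension *somewhere*
    in the name -- the class of mistake this article describes."""
    return any(e[1:] in filename.lower() for e in ALLOWED_EXTENSIONS)

print(is_allowed_upload("avatar.png"))   # accepted
print(is_allowed_upload("shell.php"))    # rejected
print(is_allowed_upload(".htaccess"))    # rejected (no extension at all)
print(naive_check("shell.png.php"))      # naive check fooled: True
```

The strict check keys off the final extension only, so `shell.png.php` is rejected, while the substring-based check accepts it.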
-
Article written by (I know, now you'll actually read it): Swati Khandelwal - Working at 'The Hacker News'. Social Media Lover and Gadgets Girl. Speaker, Cyber Security Expert and Technical Writer. (Google+ Profile)

Security researchers Adi Sharabani and Yair Amit have disclosed details about a widespread vulnerability in iOS apps that could allow hackers to force the apps to send and receive data from the hackers' own servers rather than the legitimate ones they were coded to connect to. Speaking about the issue at RSA Conference Europe 2013 in Amsterdam, the researchers provided details on the vulnerability, which stems from a commonly used approach to URL caching. The demonstration shows that insecure public networks can give potential attackers stealth access to our iOS apps using HTTP request hijacking (HRH) methods.

The researchers put together a short video demonstration in which they use what is called a 301 directive to redirect the traffic flow from an app to the app maker's server over to the attacker's server. There are two limitations: the attacker needs to be physically near the victim for the initial poisoning, and the flaw works only against HTTP traffic.

"A victim walks into Starbucks, connects to the Wi-Fi and uses her favorite apps," explains an example. "Everything looks and behaves as normal, however an attacker is sitting at a nearby table and performs a silent HRH attack on her apps. The next day, she wakes up at home and logs in to read the news, but she's now reading the attacker's news!"

They estimate that at least 10,000 iOS apps in the Apple App Store are vulnerable to the hack. As a result, apps that display news, stock quotes, social media content, or even some online banking details can be manipulated to display fraudulent information and intercept data sent by the end user. Victims can uninstall apps to scrub their devices clean, and Skycure has released app code that prevents the web caching from taking place. It may be a while until developers get this fix implemented, so connect to public networks with extreme caution.

Sursa: iOS apps vulnerable to HTTP Request Hijacking attacks over WiFi - The Hacker News
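The attack works because many URL-loading stacks cache a 301 "Moved Permanently" response and honor it on every later request without re-contacting the origin. A toy model (an assumption for illustration, not Skycure's proof of concept; the URLs are made up) showing how one poisoned response on hostile Wi-Fi keeps redirecting the app afterwards:

```python
# Toy client that caches 301 redirects permanently, as many HTTP
# libraries do by default.
class RedirectCachingClient:
    def __init__(self):
        self.redirect_cache = {}  # origin URL -> cached 301 Location

    def get(self, url, network):
        """network maps URL -> ("200", body) or ("301", location)."""
        if url in self.redirect_cache:          # honor a cached 301 silently
            url = self.redirect_cache[url]
        status, payload = network[url]
        if status == "301":
            self.redirect_cache[url] = payload  # remembered "permanently"
            status, payload = network[payload]
        return payload

app_url = "http://api.example-news.com/feed"    # hypothetical app endpoint

# On the attacker's Wi-Fi, a man-in-the-middle answers with a 301.
hostile_network = {
    app_url: ("301", "http://attacker.example/feed"),
    "http://attacker.example/feed": ("200", "attacker's news"),
}
# At home the real server is reachable again, but it is never consulted.
home_network = {
    app_url: ("200", "legitimate news"),
    "http://attacker.example/feed": ("200", "attacker's news"),
}

client = RedirectCachingClient()
print(client.get(app_url, hostile_network))  # attacker's news (poisoned)
print(client.get(app_url, home_network))     # still attacker's news
```

A fresh client that never saw the hostile network fetches the legitimate feed, which is exactly why the flaw persists only on devices poisoned once.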
-
Facebook Tests Software to Track Your Cursor on Screen
By Steve Rosenbush

Facebook Inc. is testing technology that would greatly expand the scope of data that it collects about its users, the head of the company's analytics group said Tuesday. The social network may start collecting data on minute user interactions with its content, such as how long a user's cursor hovers over a certain part of its website, or whether a user's newsfeed is visible at a given moment on the screen of his or her mobile phone, Facebook analytics chief Ken Rudin said Tuesday during an interview.

Mr. Rudin said the captured information could be added to a data analytics warehouse that is available for use throughout the company for an endless range of purposes - from product development to more precise targeting of advertising. Facebook collects two kinds of data, demographic and behavioral. The demographic data - such as where a user lives or went to school - documents a user's life beyond the network. The behavioral data - such as one's circle of Facebook friends, or "likes" - is captured in real time on the network itself. The ongoing tests would greatly expand the behavioral data that is collected, according to Mr. Rudin. The tests are part of a broader technology testing program, and Facebook should know within months whether it makes sense to incorporate the new data collection into the business, he said.

New types of data Facebook may collect include "did your cursor hover over that ad ... and was the newsfeed in a viewable area," Mr. Rudin said. "It is a never-ending phase. I can't promise that it will roll out. We probably will know in a couple of months," said Mr. Rudin, a Silicon Valley veteran who arrived at Facebook in April 2012 from Zynga Inc., where he was vice president of analytics and platform technologies. As the head of analytics, Mr. Rudin is preparing the company's infrastructure for a massive increase in the volume of its data.

Facebook isn't the first company to contemplate recording such activity. Shutterstock Inc., a marketplace for digital images, records literally everything that its users do on the site. Shutterstock uses the open-source Hadoop distributed file system to analyze data such as where visitors to the site place their cursors and how long they hover over an image before they make a purchase. "Today, we are looking at every move a user makes, in order to optimize the Shutterstock experience... All these new technologies can process that," Shutterstock founder and CEO Jon Oringer told the Wall Street Journal in March.

Facebook also is a major user of Hadoop, an open-source framework that is used to store large amounts of data on clusters of inexpensive machines. Facebook designs its own hardware to store its massive data analytics warehouse, which has grown 4,000 times during the last four years to a current level of 300 petabytes. The company uses a modified version of Hadoop to manage its data, according to Mr. Rudin, with additional software layers on top that rank the value of data and make sure it is accessible.

The data in the analytics warehouse - which is separate from the company's user data, the volume of which has not been disclosed - is used in the targeting of advertising. As the company captures more data, it can help marketers target their advertising more effectively - assuming, of course, that the data is accessible. "Instead of a warehouse of data, you can end up with a junkyard of data," said Mr. Rudin, who spoke to CIO Journal during a break at the Strata and Hadoop World Conference in New York. He said that he has led a project to index that data, essentially creating an internal search engine for the analytics warehouse.

Sursa: Facebook Considers Vast Increase in Data Collection - Digits - WSJ
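The kind of micro-interaction data described ("did your cursor hover over that ad... was the newsfeed in a viewable area") reduces to streams of tiny events that the warehouse aggregates per element. A purely illustrative sketch - the event fields and names are assumptions, not Facebook's schema:

```python
from collections import defaultdict

def aggregate_hover_stats(events):
    """Aggregate micro-interaction events per on-screen element.

    events: iterable of (user_id, element_id, hover_ms, was_viewable)
    tuples -- the hover-duration and viewability signals the article
    says Facebook is testing.
    """
    stats = defaultdict(lambda: {"hovers": 0, "total_ms": 0, "viewable": 0})
    for user_id, element_id, hover_ms, was_viewable in events:
        s = stats[element_id]
        s["hovers"] += 1               # how often the cursor paused here
        s["total_ms"] += hover_ms      # cumulative hover time
        s["viewable"] += int(was_viewable)
    return dict(stats)

events = [
    ("u1", "ad_42", 1200, True),
    ("u2", "ad_42", 300, True),
    ("u2", "newsfeed", 50, False),
]
print(aggregate_hover_stats(events)["ad_42"])
# {'hovers': 2, 'total_ms': 1500, 'viewable': 2}
```

At Facebook's scale this aggregation would run over Hadoop rather than in-process, but the shape of the computation is the same.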
-
Deep C and C++

Dear C/C++ programmers, read: http://www.slideshare.net/olvemaudal/deep-c
-
Researcher Finds Method to Insert Malicious Firmware Into Currency Validator
by Dennis Fisher

If espionage is the world's second-oldest profession, counterfeiting may be in the running to be third on that list. People have been trying to forge currency for just about as long as currency has been circulating, and anti-counterfeiting methods have tried to keep pace with the state of the art. The anti-counterfeiting technology in use today of course relies on computers and software, and like all software, it has bugs, as researchers at IOActive discovered when they reverse-engineered the firmware in a popular Euro currency verifier and found that they could insert their own firmware and force the machine to verify any piece of paper as a valid Euro note.

Ruben Santamarta, a researcher at IOActive in Spain, decided to have a look at the firmware in a machine called the Secureuro, which is widely used in that country to verify Euro notes in a variety of settings. After watching some videos from the vendor Inves on the machine's operation and reading through the machine's documentation, Santamarta came to the conclusion that some of the security claims the vendor makes were somewhat specious.

"Unfortunately, some of these claims are not completely true and others are simply false. It is possible to understand how Secureuro works; we can access the firmware and EEPROM without even needing hardware hacking. Also, there is no encryption system protecting the firmware," Santamarta said in his analysis of the firmware. "My intention is not to forge a banknote that could pass as legitimate, that is a criminal offense. My sole purpose is to explain how I identified the code behind the validation in order to create 'trojanized' firmware that accepts even a simple piece of paper as a valid currency. We are not exploiting a vulnerability in the device, just a design feature."

In that regard, Santamarta succeeded. He began by downloading the firmware for the Secureuro from the vendor's site and then performing a detailed analysis of the code to see how it works and what the important functions are. Among the interesting functions he came across was the counter that tracks the number of invalid banknotes the machine has seen.

"Wait, hold on a second, the number of invalid banknotes is being stored in a three byte counter in the EEPROM, starting at position 0xE. Are you thinking what I'm thinking? We should look for the opposite operation. Where is that counter being incremented? That path would hopefully lead us to the part of code where a banknote is considered valid or invalid. Keep calm and 'EEPROM_write'... Bingo!" Santamarta wrote.

Digging a bit further, Santamarta discovered that there are two functions that assign a value to a given banknote: one assigns a preliminary value and the second assigns a final value for each note. He determined that the firmware may be processing some of the security features of a note, such as the ink or a hologram, with one function and then processing another set with the second function. He also identified a separate function that performs analog-to-digital conversion of input.

"This function receives the input pin from the ADC conversion as a parameter. As expected, it is invoked to complete the conversion of six different pins; inside a timer. The remaining three digital signals with information about the distances can also be obtained easily," he said. "The last step was to buy the physical device. I modified the original firmware to accept our home-made IOActive currency, and... what do you think happened? The impact is obvious. An attacker with temporary physical access to the device could install customized firmware and cause the device to accept counterfeit money. Taking into account the types of places where these devices are usually deployed (shops, malls, offices, etc.) this scenario is more than feasible."

So Santamarta's technique could enable an attacker to load his own malicious firmware onto a target device and validate counterfeit money. Euros, like other widely circulated currencies, have a number of security and anti-counterfeiting features, and Santamarta's research shows that it's not necessary to circumvent those in order to pass counterfeit notes. The easier method is to attack the validator itself, rather than the notes.

Sursa: Researcher Finds Method to Insert Malicious Firmware Into Currency Validator | Threatpost | The First Stop For Security News
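The detail Santamarta keys on - a three-byte invalid-note counter in EEPROM starting at offset 0xE - is easy to model. A sketch of reading and incrementing such a counter from a raw dump (the byte order is an assumption; the article doesn't state it, so big-endian is shown and little-endian would just change the `"big"` argument):

```python
EEPROM_COUNTER_OFFSET = 0xE   # position stated in Santamarta's write-up
EEPROM_COUNTER_SIZE = 3       # three-byte counter

def read_invalid_note_counter(eeprom: bytes) -> int:
    """Decode the invalid-banknote counter from a raw EEPROM dump."""
    raw = eeprom[EEPROM_COUNTER_OFFSET:EEPROM_COUNTER_OFFSET + EEPROM_COUNTER_SIZE]
    return int.from_bytes(raw, "big")

def increment_invalid_note_counter(eeprom: bytearray) -> None:
    """The write path Santamarta searched for ('EEPROM_write'),
    wrapping at the 24-bit limit of a three-byte counter."""
    value = (read_invalid_note_counter(bytes(eeprom)) + 1) % (1 << 24)
    eeprom[EEPROM_COUNTER_OFFSET:EEPROM_COUNTER_OFFSET + EEPROM_COUNTER_SIZE] = \
        value.to_bytes(EEPROM_COUNTER_SIZE, "big")

# Toy 32-byte EEPROM dump with the counter bytes set to 00 01 02.
dump = bytearray(32)
dump[0xE:0x11] = bytes([0x00, 0x01, 0x02])
print(hex(read_invalid_note_counter(bytes(dump))))  # 0x102
increment_invalid_note_counter(dump)
print(hex(read_invalid_note_counter(bytes(dump))))  # 0x103
```

Spotting where such a counter is written is exactly how he located the accept/reject decision point in the firmware.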
-
Interview: Bjarne Stroustrup Discusses C++
Bjarne Stroustrup and William Wong

C and C++ are the main programming languages for embedded computing and for a host of other application areas as well. C++ was developed by Bjarne Stroustrup, who was gracious enough to answer a few of my questions about C++ and its development. He not only developed the initial version of C++ but has been actively involved in updating it through the latest standard, C++11. He has also taught and written about C++, with books like "Programming: Principles and Practice Using C++."

Wong: How did you get started designing C++?

Stroustrup: I needed a tool to help me on a project where I needed hardware access, high performance for systems programming tasks, and help with handling complexity. The project was to "split" a Unix kernel into parts that could run on a multi-processor or a high-performance local network. At the time (1979/80), no language could meet all three requirements, so I added Simula-like classes to C. The earliest designs added function argument checking and conversion (what later became C function prototypes), constructors and destructors, and simple inheritance. Interestingly, my earliest paper on "C with Classes," as it was called in the early years, used macros to implement a simple form of generic programming - later I found that didn't scale and I had to add templates. Over the next few years, my design (and implementation) was refined until the commercial release of C++ in 1985.

Performance and hardware access were very important in the early years - as they still are. I considered it essential to be able to do everything C could do and to do it at least as efficiently. For example, early on, I found that to handle copy constructors, I had introduced a 3% overhead for returning structure values compared to C. I considered that unacceptable, and by the end of the week the overhead was gone. Function call inlining was introduced to be able to handle interrupts in an embedded processor without the overhead of a function call and without forcing programmers to abandon classes to handle interrupts. The idea that facilities shouldn't just be expressive and elegant, but also needed to be affordable in the most demanding applications, was very important to me.

Wong: What were/are the major design goals for C++?

Stroustrup: A way of mapping C++ language features to machine facilities that is at least as direct and efficient as C's, plus abstraction mechanisms that allow programmers to express ideas beyond what I can imagine, with no overhead compared to hand-crafted code. This implies static (compile-time) type checking. C++ was and is meant to be a tool for professionals and for people who take programming seriously. It can be and is used by novices, but the too-often-heard complaint that "C++ isn't for everybody and not every project is easily done using C++" is based on a serious miscomprehension. There can be no programming language that is perfect for everybody and for everything. C++ doesn't try to be everything for everybody, but it is rather good at the tasks for which it was designed - mostly systems programming, software infrastructure, and resource-constrained applications. C++ dominates the fields where its strengths are needed. The fact that you can write a simple web app more easily using JavaScript or Ruby does not bother me. Basically, C++ was not primarily designed for tasks of medium complexity, medium performance, and medium reliability, written by programmers of medium skills and experience. Obviously, it can be used for that and is widely used for that, but that's not its area of specific strength compared to many other languages.
I documented my design aims and the constraints on the design in my book "The Design and Evolution of C++" and in my two History of Programming Languages conference papers, but briefly, I aim for zero overhead compared to hand-crafted code when using abstractions, a machine model very similar to that of C, an extremely flexible set of abstraction mechanisms, and static type safety. The overall aim is to help produce better code for challenging real-world tasks, and "to make programming more pleasant for the serious programmer." Obviously, I can't get all of what I would like in every situation, so C++ isn't perfect. However, despite innumerable attempts to displace it, C++ is still the best match for its stringent design criteria and for a huge range of real-world problems.

Wong: You have been working with the standard since it started. How has it changed over time and where is it headed?

Stroustrup: That's hard to say. Formal standardization is a very hard and often tedious task. The people involved are usually clever and experienced, but they come from very diverse backgrounds and often represent major industrial interests, so constructing a consensus can be very difficult and time consuming. Creating a consensus is essential because there is no way to force implementers to provide a faithful version of the standard and no way of forcing programmers to use novel features. Only by providing something almost universally agreed to be genuinely useful can we make progress. A standards committee is no place for single-issue fanatics. My estimate is that close to 100 organizations and maybe 300+ individuals are involved. That's two to three times the numbers in the early years. We have had close to 100 people attending recent meetings. Our plan is to issue a new standard with minor new features and corrections in 2014, C++14.
This will almost certainly happen because we have already had national body votes on the changes, and implementations of every new feature are available somewhere. I expect to be using C++14 in 2014. After that, there is a plan for C++17 in 2017. That will be a more major upgrade, so I can't be quite as optimistic about the timing.

The ISO C++ standards committee has no resources (money, time, developers). The members actually have to pay $1200 a year to participate. If you don't like what we do, don't just complain. Don't just say "I want a GUI library" or "there is no proper support for task-based concurrency" or whatever. We know that. Instead, come help us to get the work done right. We are always short of experienced application builders, so novel features have a tendency to overly reflect the interests of implementers and library builders.

Wong: Many embedded designers choose to use C because "it is simpler and closer to the hardware" than C++. Do you think that the complexity of C++ should be a deterrent to embedded designers?

Stroustrup: No. C isn't simpler for C-style programming than C++ is, nor "closer to the hardware," nor indeed more efficient. I have yet to see a program that can be written better in C than in C++. I don't believe such a program could exist. By "better" I mean smaller, more efficient, or more maintainable. The myth that "C is simpler and more efficient" has caused (and causes) untold numbers of beginners to concentrate on obscure workarounds and difficult-to-master techniques, rather than learning how to use more powerful supporting features. Many then fall in love with their obscure and complex code, considering it a sign of expert knowledge. It is amazing what people can fail to learn when they are told that it is difficult and useless. The only reason I know of to use "plain C" rather than a suitable subset of C++ is lack of tool support on a given platform.

We can't just blame the beginners/novices/students, though. The presentation and teaching of C++ has been a constant problem. Almost a decade ago, when I first was to teach programming to freshman engineering students, I looked at the textbooks using C++ and despaired. More precisely, I did not despair, I was furious! There were (and are) books teaching every little obscure detail of C before getting to the far-easier-to-use C++ alternatives, and deeming those alternatives "advanced" to scare off all but the most determined student. Seriously, how could a standard-library vector be as hard to use well as a built-in array? How could using qsort() be simpler than using the more general and efficient sort()? C++ provides better notational support and stronger type checking than C does. This can lead to faster object code.

Other books presented (and present) C++ as a somewhat failed attempt to be a "pure object-oriented programming language" and force almost every operation into class hierarchies (a la Java) with lots of inheritance and virtual functions. The result is verbose code with unnatural couplings, and lots of casting. To add insult to injury, such code also tends to be slow. As I said: if that's C++, I don't like it either! I responded by writing a textbook for college freshmen and determined self-learners: "Programming: Principles and Practice Using C++." It does not assume previous programming experience, though it has been popular with programmers wanting to know what C++ is about. That book is a bit long for experienced programmers wanting a quick look at what C++11 is like. For that I recommend "A Tour of C++," which presents all the major features of ISO C++ and its standard library in just 180 pages! C++11 is completely supported by Clang and GCC, and partially by Microsoft C++ and many other implementations. I fear that C++11 support may still be a bit spotty on less mainstream systems.

Wong: C++11 has added a number of features including lambdas and threading support.
How has your view of features like these changed over time?

Stroustrup: I have wanted thread support in the standard for at least 15 years. Now I finally have it, after having had to use nice, but non-standard, thread libraries for decades. C++11's contribution to concurrent programming is the memory model (now also adopted by C) and making traditional threads-and-locks level programming portable and type-safe. That's major. No macros and void**s are needed to share information and pass information between threads. For some, the facilities for lock-free programming are also important.

I have been concerned about lambdas for a while (maybe a decade), looking for a variant of the (inherently good) idea that would do more good than harm in the context of C++. The performance of the library versions was not sufficiently good, but the performance of the current implementations of C++11 lambdas (in current compilers) is as good as that of equivalent for-loop bodies. For example:

double sum = 0;
for (int i = 0; i < v.size(); ++i) sum += v[i]; // array style

and

double sum = 0;
for (auto p = v.begin(); p != v.end(); ++p) sum += *p; // pointer style

and

double sum = 0;
for_each(v.begin(), v.end(), [&](double d) { sum += d; }); // algorithm style

These alternatives now run at exactly the same speed on all major C++ compilers. That is, we can now choose between those styles based on aesthetics, maintainability, etc. There are quite a few places where I use lambdas in preference to alternatives. For example:

sort(v, [](const string& a, const string& b) { return a > b; }); // sort in reverse order

That said, lambdas are a new and powerful feature. Such features are always overused until the community learns what pays off in the long run. In my opinion, it is often worthwhile to separately define a function or a function object so that the operation can have a specific name, can be used from multiple places in a program, and so that there is space for a helpful comment.
Lambdas can easily become an exercise in write-only code. This is not the place to teach C++11, but let me give just one example:

template<typename C, typename V>
vector<Value_type<C>*> find_all(C& cont, V v) // find all occurrences of v in cont
{
    vector<Value_type<C>*> res;
    for (auto& x : cont)
        if (x == v)
            res.push_back(&x);
    return res;
}

Here, I use several new features. The new for-loop, a range-for loop, is read "for all x in cont" and simplifies traversing the container cont. The auto& x declaration says that the type of x should be a reference to the type of the elements in the initializer of x, that is, a reference to the type of the elements in cont. The loop collects the addresses of all occurrences of v in cont in the vector of pointers res.

So far, the new features here have been merely "syntactic sugar," but these are rather nice and useful notational improvements. The real novelty here is the return statement: note that I return a potentially huge vector by value. In C++98, that would typically cause a copy of all the elements of res, potentially many thousands of elements. That would be a serious performance bug. In C++11, vector has a "move constructor," so that rather than copying elements, the representation of res (merely three pointers) is "stolen" for use in the caller and an empty vector is left behind. After all, we are just about to return from find_all() and won't be able to use res again. Thus, returning a vector by value costs at most six word assignments, independently of the number of elements. Move constructors are a simple facility available to every programmer and used by all standard-library containers implemented as handles. This allows us to return large objects from a function without messing around with memory management (explicitly) using pointers and free store.
We can test find_all() like this:

void test()
{
    string m {"Mary had a little lamb"};
    for (const auto p : find_all(m, 'a'))       // p is a char*
        if (*p != 'a')
            cerr << "string bug!\n";

    vector<string> v {"Mary", "lamb", "Mary", "mary", "wolf"};
    for (const auto p : find_all(v, "Mary"))    // p is a string*
        if (*p != "Mary")
            cerr << "vector<string> bug!\n";
}

Feel free to compare this to a hand-crafted version without templates or C++11 features. For more information, read "A Tour of C++." For all the details see "The C++ Programming Language (Fourth Edition)."

Wong: What common mistakes do you see new C++ developers making?

Stroustrup: They think they have to choose between "efficiency" and elegance, so they either stick to a small low-level subset ("for efficiency") or build bloated "do-everything" designs (deeming those elegant). My ideal is for the most efficient code also to be the most elegant. That happens when you have a perfect fit between a problem and a solution. Obviously, you can't always achieve that and you rarely achieve it at the first try, but it happens often enough for the ideal to be of practical relevance. Before dismissing C++ features, such as classes and templates, for demanding tasks, learn the basics of their use and try them out. Measure, rather than guessing about performance! Do not feel obliged to craft huge hierarchies or write complicated template meta-programs. Some of the most powerful C++ features are quite simple and lead to good object code. One of the best ways to get efficient code is to write simple source code.

Wong: What do you like to do for fun?

Stroustrup: Travel to interesting places, run, take photographs, listen to music, read (literature and history), spend time with family and friends. Of course, some of my programming is great fun also, but I guess you weren't asking about work. Research and advanced system building is fun.
As they say “I can’t believe we are getting paid for doing this!” Sursa: Interview: Bjarne Stroustrup Discusses C++ | Dev Tools content from Electronic Design
-
[h=1]Car hackers mess with speedos, odometers, alarms and locks[/h]
By Darren Pauli on Oct 29, 2013 5:10 PM

If you lend your car to Ted Sumers and Grayson Zulauf, there's a good chance it'll never work the same again. You might even get it back on a tow truck. The duo weren't street hoons but rather a pair of capable hardware hackers who know how to mess with a car's speedometer, brakes and alert systems. They were the latest in a burgeoning scene of academics and security boffins who, along with a thriving but fragmented assortment of rev-head hobbyist geeks, are battering the digital fabric powering modern-day cars.

When Sumers and Zulauf began their research, they did not let the lack of computer documentation, the exorbitant costs of proprietary computer analysis kits or tight-lipped mechanics stop them. Speaking at the Breakpoint security conference in Melbourne, the researchers from automotive startups Automatic and Motiv Power Systems told how, together with Chris Hoder of Microsoft, the trio set off to discover how the digital bits flew around the Controller Area Networks (CANs) embedded in many cars in use today. With physical access to the cars, the men were able to make vehicles appear to drive slower than their actual speed, and manipulate brakes, alarms and door locks. They could also increase a car's odometer and, with further research, wind it back. Other researchers have accessed car networks via Bluetooth and developed ways to compromise autos through firmware.

The capabilities of CAN hacking are vast. In August, researchers Charlie Miller and Chris Valasek tapped into CANs to cut the brakes of a Ford Escape and caused the wheel of a Ford Focus to jerk out of the hands of a driver at high speed. Other hobbyists have used CAN bus hacking to alter functions such as the fuel injection levels of cars, with some creating legitimate car customisation businesses using their skills.

Criminals too have benefitted. Sumers said in recent years a criminal gang sold a device they created to unlock the doors of pricey Audis via a port that, remarkably, could be accessed from an exterior panel on the vehicle. A spate of car thefts resulted until their arrest.

But much of the important information needed to start hacking CANs was not available, the researchers said. This forced them, in their bid to deliver a project for the University of Dartmouth and open the field to others, to effectively "start from ground zero". After initial trial and error - and the purchase of a second car - they gained access to the CAN and began fuzzing it to identify which arbitration ID packets were sent to particular components of the vehicle, such as the speedometer, brakes and dashboard indicators. "The car started making horrible noises - every light on the dashboard was blinking," Zulauf said. "If you do this yourself you might want to use a friend's car," Sumers added.

Like other consumer products such as smart TVs and network-connected white goods, most vehicles were designed for functionality and paid little heed to security. The trio, for example, found that CAN transmissions contained no authentication requirements, because car designers built the architecture with the intent that it would operate in a closed, secure environment. Sumers said manufacturers would be unlikely to introduce authentication because it would slow down the CANs too much. Indeed, Ford and Toyota have not moved to fix the weaknesses, saying that the hacking scenarios were academic and did not place drivers at risk. The CANs also ran as single networks within all but the high-end vehicles - notably the "over-engineered" German autos - meaning that the fuzzing efforts were easier.

Their fuzzing process worked by a system of sniffing, searching and probing using Travis Goodspeed's GoodTHOPTER10. They would monitor CAN packets, find those that appeared interesting, and then seek out what systems within the cars they controlled.

Breakpoint 2013: Inside car CAN hacking, by Darren Pauli on Mixcloud

The trio were keen for others to enter the field and have produced a $25 open-source, modular hardware tool for reading CANs that has the capabilities of tools worth tens of thousands. "The more people play with it, the better it will get," Sumers said, adding that he was keen to learn how different CAN implementations were.

Copyright © SC Magazine, Australia

Sursa: http://www.scmagazine.com.au/News/362297,car-hackers-mess-with-speedos-odometers-alarms-and-locks.aspx
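The sniff-then-probe loop the researchers describe can be sketched in a few lines. This is a hypothetical simulation, not the GoodTHOPTER10 tooling: frames are modeled as an 11-bit arbitration ID plus up to 8 data bytes, and the fuzzer replays one sniffed ID with varying payloads to see which component of the car reacts.

```python
import struct

def encode_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack a simplified CAN 2.0A frame: 11-bit ID, length byte, data.
    (Real frames also carry CRC/ACK fields handled by the controller.)"""
    if not 0 <= arbitration_id < 0x800:
        raise ValueError("standard CAN IDs are 11 bits")
    if len(data) > 8:
        raise ValueError("CAN data field is at most 8 bytes")
    return struct.pack(">HB", arbitration_id, len(data)) + data

def fuzz_payloads(arbitration_id: int, count: int = 4):
    """Yield frames for one sniffed ID with varying payloads, the way the
    researchers probed which ID drives the speedometer or dashboard."""
    for value in range(count):
        payload = struct.pack(">Q", value)  # 8-byte counter payload
        yield encode_frame(arbitration_id, payload)

# Suppose sniffing suggested ID 0x3E9 changes with vehicle speed (made up).
frames = list(fuzz_payloads(0x3E9))
print(len(frames), frames[0].hex())
```

On a real bus each frame would be handed to a CAN interface instead of collected in a list; since the protocol has no sender authentication, any node that can transmit can impersonate any component, which is exactly the weakness the article describes.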
-
Table of Contents

Cryptography and Computation
- Practical Covertly Secure MPC for Dishonest Majority - Or: Breaking the SPDZ Limits - Ivan Damgård, Marcel Keller, Enrique Larraia, Valerio Pastro, Peter Scholl, and Nigel P. Smart
- Practical and Employable Protocols for UC-Secure Circuit Evaluation over Zn - Jan Camenisch, Robert R. Enderlein, and Victor Shoup
- Privacy-Preserving Accountable Computation - Michael Backes, Dario Fiore, and Esfandiar Mohammadi

Measurement and Evaluation
- Verifying Web Browser Extensions' Compliance with Private-Browsing Mode - Benjamin S. Lerner, Liam Elberty, Neal Poole, and Shriram Krishnamurthi
- A Quantitative Evaluation of Privilege Separation in Web Browser Designs - Xinshu Dong, Hong Hu, Prateek Saxena, and Zhenkai Liang
- Estimating Asset Sensitivity by Profiling Users - Youngja Park, Christopher Gates, and Stephen C. Gates

Applications of Cryptography
- Practical Secure Logging: Seekable Sequential Key Generators - Giorgia Azzurra Marson and Bertram Poettering
- Request-Based Comparable Encryption - Jun Furukawa
- Ensuring File Authenticity in Private DFA Evaluation on Encrypted Files in the Cloud - Lei Wei and Michael K. Reiter

Code Analysis
- HI-CFG: Construction by Binary Analysis and Application to Attack Polymorphism - Dan Caselden, Alex Bazhanyuk, Mathias Payer, Stephen McCamant, and Dawn Song
- AnDarwin: Scalable Detection of Semantically Similar Android Applications - Jonathan Crussell, Clint Gibler, and Hao Chen
- BISTRO: Binary Component Extraction and Embedding for Software Security Applications - Zhui Deng, Xiangyu Zhang, and Dongyan Xu

Network Security
- Vulnerable Delegation of DNS Resolution - Amir Herzberg and Haya Shulman
- Formal Approach for Route Agility against Persistent Attackers - Jafar Haadi Jafarian, Ehab Al-Shaer, and Qi Duan
- Plug-and-Play IP Security: Anonymity Infrastructure instead of PKI - Yossi Gilad and Amir Herzberg

Formal Models and Methods
- Managing the Weakest Link: A Game-Theoretic Approach for the Mitigation of Insider Threats - Aron Laszka, Benjamin Johnson, Pascal Schöttle, Jens Grossklags, and Rainer Böhme
- Automated Security Proofs for Almost-Universal Hash for MAC Verification - Martin Gagné, Pascal Lafourcade, and Yassine Lakhnech
- Bounded Memory Protocols and Progressing Collaborative Systems - Max Kanovich, Tajana Ban Kirigin, Vivek Nigam, and Andre Scedrov
- Universally Composable Key-Management - Steve Kremer, Robert Künnemann, and Graham Steel

Protocol Analysis
- A Cryptographic Analysis of OPACITY (Extended Abstract) - Özgür Dagdelen, Marc Fischlin, Tommaso Gagliardoni, Giorgia Azzurra Marson, Arno Mittelbach, and Cristina Onete
- Symbolic Probabilistic Analysis of Off-Line Guessing - Bruno Conchinha, David Basin, and Carlos Caleiro
- ASICS: Authenticated Key Exchange Security Incorporating Certification Systems - Colin Boyd, Cas Cremers, Michèle Feltz, Kenneth G. Paterson, Bertram Poettering, and Douglas Stebila

Privacy Enhancing Models and Technologies
- Efficient Privacy-Enhanced Familiarity-Based Recommender System - Arjan Jeckmans, Andreas Peter, and Pieter Hartel
- Privacy-Preserving User Data Oriented Services for Groups with Dynamic Participation - Dmitry Kononchuk, Zekeriya Erkin, Jan C.A. van der Lubbe, and Reginald L. Lagendijk
- Privacy-Preserving Matching of Community-Contributed Content - Mishari Almishari, Paolo Gasti, Gene Tsudik, and Ekin Oguz

E-voting and Privacy
- Ballot Secrecy and Ballot Independence Coincide - Ben Smyth and David Bernhard
- Election Verifiability or Ballot Privacy: Do We Need to Choose? - Édouard Cuvelier, Olivier Pereira, and Thomas Peters
- Enforcing Privacy in the Presence of Others: Notions, Formalisations and Relations - Naipeng Dong, Hugo Jonker, and Jun Pang

Malware Detection
- Mining Malware Specifications through Static Reachability Analysis - Hugo Daniel Macedo and Tayssir Touili
- Patrol: Revealing Zero-Day Attack Paths through Network-Wide System Object Dependencies
536 Jun Dai, Xiaoyan Sun, and Peng Liu XVI Table of Contents Measuring and Detecting Malware Downloads in Live Network Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556 Phani Vadrevu, Babak Rahbarinia, Roberto Perdisci, Kang Li, and Manos Antonakakis Access Control Automated Certification of Authorisation Policy Resistance . . . . . . . . . . . 574 Andreas Griesmayer and Charles Morisset Fine-Grained Access Control System Based on Outsourced Attribute-Based Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592 Jin Li, Xiaofeng Chen, Jingwei Li, Chunfu Jia, Jianfeng Ma, and Wenjing Lou Purpose Restrictions on Information Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610 Michael Carl Tschantz, Anupam Datta, and Jeannette M. Wing Distributed Shuffling for Preserving Access Confidentiality . . . . . . . . . . . . 628 Sabrina De Capitani di Vimercati, Sara Foresti, Stefano Paraboschi, Gerardo Pelosi, and Pierangela Samarati Attacks Range Extension Attacks on Contactless Smart Cards . . . . . . . . . . . . . . . . 646 Yossef Oren, Dvir Schirman, and Avishai Wool CellFlood: Attacking Tor Onion Routers on the Cheap . . . . . . . . . . . . . . . . 664 Marco Valerio Barbera, Vasileios P. Kemerlis, Vasilis Pappas, and Angelos D. Keromytis Nowhere to Hide: Navigating around Privacy in Online Social Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682 Mathias Humbert, Th´eophile Studer, Matthias Grossglauser, and Jean-Pierre Hubaux Current Events: Identifying Webpages by Tapping the Electrical Outlet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700 Shane S. 
Clark, Hossen Mustafa, Benjamin Ransford, Jacob Sorber, Kevin Fu, and Wenyuan Xu Language-Based Protection Eliminating Cache-Based Timing Attacks with Instruction-Based Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718 Deian Stefan, Pablo Buiras, Edward Z. Yang, Amit Levy, David Terei, Alejandro Russo, and David Mazi`eres Table of Contents XVII Data-Confined HTML5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736 Devdatta Akhawe, Frank Li, Warren He, Prateek Saxena, and Dawn Song KQguard: Binary-Centric Defense against Kernel Queue Injection Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755 Jinpeng Wei, Feng Zhu, and Calton Pu Run-Time Enforcement of Information-Flow Properties on Android (Extended Abstract) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775 Limin Jia, Jassim Aljuraidan, Elli Fragkaki, Lujo Bauer, Michael Stroucken, Kazuhide Fukushima, Shinsaku Kiyomoto, and Yutaka Miyake Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793 Da, not bad
-
https://www.exodusintel.com/files/Aaron_Portnoy-Bypassing_All_Of_The_Things.pdf
-
Software Defense: mitigating heap corruption vulnerabilities
swiat | 29 Oct 2013 4:10 AM

Heap corruption vulnerabilities are the most common type of vulnerability that Microsoft addresses through security updates today. These vulnerabilities typically occur as a result of programming mistakes that make it possible to write beyond the bounds of a heap buffer (a spatial issue) or to place a heap-allocated object in an unexpected state, such as by using the object after it has been freed (a temporal issue). Over time, attackers have developed a number of techniques to help them exploit various types of heap corruption vulnerabilities. Starting with Windows XP Service Pack 2, Microsoft began introducing hardening changes to the Windows heap manager that were designed to make it more difficult to exploit heap corruption vulnerabilities. In this blog post, we will review some of the general methods that have been used to exploit and mitigate heap corruption vulnerabilities and highlight hardening changes that have been made in Windows 8 and Windows 8.1 to further complicate exploitation. For more background on the Windows 8 heap architecture, please refer to the Channel 9 interview on the Windows 8 heap manager.

Heap corruption exploitation, then and now

In a previous blog post, we covered the history of heap-based exploitation and mitigation techniques from Windows XP through Windows 7. That post showed that prior to Windows Vista, most of the research on heap corruption exploitation techniques focused on corrupting heap metadata in order to achieve more powerful exploitation primitives (such as the ability to write an arbitrary value to any location in memory). One of the reasons attackers focused on corrupting heap metadata is that it was always present and therefore could enable application-independent (generic) exploitation techniques.
The release of Windows Vista changed the landscape of heap exploitation through numerous heap hardening changes that addressed nearly all of the heap metadata corruption exploitation techniques that were known at the time. As a consequence, attackers have largely shifted their focus toward exploitation techniques that rely on corrupting application-specific data stored on the heap. For example, attackers will attempt to use a heap corruption vulnerability to corrupt the C++ virtual table pointer of an object on the heap, or to corrupt the base or length field of a heap-allocated array, in order to gain the ability to read or write to any location in memory. There has been additional research on heap metadata corruption post-Windows Vista, and there are a small number of known real-world exploits that have relied on these metadata corruption techniques [1,2,3,4], but as this blog post will show, all of the publicly known exploitation techniques that rely on metadata corruption have been addressed in Windows 8.1.

Heap corruption mitigations

The heap manager in Windows 8 and Windows 8.1 builds on the hardening changes of previous Windows releases by incorporating new security features that mitigate not only metadata corruption techniques but also less generic techniques that rely on corrupting application-specific data. These new security features can be broken down into the following threat categories: heap integrity checks, guard pages, and allocation order randomization. All of the security features introduced in Windows 8 have been inherited by Windows 8.1.

Heap integrity checks

The heap manager in Windows 8 and Windows 8.1 includes a number of new integrity checks that are designed to detect heap metadata corruption and terminate an application safely if corruption is detected. This section describes some of the noteworthy integrity checks that have been added.
Catch-all exception handling blocks have been removed

Previous versions of the Windows heap made use of catch-all exception handling blocks in certain cases where exceptions were considered non-fatal. This had the potential to make it easier for attackers to exploit heap corruption issues, in particular by allowing an attacker multiple attack attempts. These catch-all blocks have therefore been removed from the heap in Windows 8, meaning such exceptions now lead to safe termination of the application.

HEAP handle can no longer be freed

The HEAP handle is an internal data structure that is used to maintain the state associated with a given heap. Prior to Windows 8, an attacker could use a heap-based memory corruption vulnerability to coerce the heap into freeing the HEAP handle data structure. After doing this, the attacker could force the heap to reallocate the memory that previously stored the HEAP handle state. This in turn allowed an attacker to corrupt internal heap metadata, including certain function pointer fields. The Windows 8 heap mitigates this attack by preventing a HEAP handle from being freed.

HEAP CommitRoutine encoded by a global key

The HEAP handle data structure includes a function pointer field called CommitRoutine that is called when memory regions within the heap are committed. Starting with Windows Vista, this field was encoded using a random value that was also stored as a field in the HEAP handle data structure. While this mitigated trivial corruption of only the CommitRoutine function pointer, it did not mitigate the case where an attacker could corrupt both the CommitRoutine and the field that stored the encoding key. The Windows 8 heap mitigates this attack by using a global key to encode the CommitRoutine function pointer rather than a key that is stored within the HEAP handle data structure.
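The idea behind the global-key encoding can be illustrated with a small model (a toy Python sketch, not the actual heap manager code; all names, addresses, and the XOR scheme here are illustrative assumptions, not Microsoft's implementation):

```python
import secrets

# Process-wide random key chosen once at startup. Modeling the Windows 8
# change: the key is global rather than stored inside the HEAP structure,
# so corrupting a single HEAP handle cannot also reveal the key.
GLOBAL_KEY = secrets.randbits(64)

def encode_pointer(ptr):
    """Encode a function pointer before storing it in heap metadata."""
    return ptr ^ GLOBAL_KEY

def decode_pointer(encoded):
    """Decode a stored pointer before calling through it."""
    return encoded ^ GLOBAL_KEY

commit_routine = 0x7FFA12345678      # pretend address of CommitRoutine
stored = encode_pointer(commit_routine)

# Legitimate use round-trips correctly:
assert decode_pointer(stored) == commit_routine

# An attacker who overwrites the stored value without knowing GLOBAL_KEY
# redirects execution to an unpredictable address rather than the one
# they chose (with overwhelming probability for a 64-bit key):
attacker_value = 0x41414141
assert decode_pointer(attacker_value) != attacker_value
```

The design point is that the decode step turns a controlled overwrite into an uncontrolled one: without the key, the attacker cannot compute which encoded value decodes to their target address.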
Extended block header validation

Each heap allocation returned by the Windows heap has a header that describes the allocation's size, flags, and other attributes. In some cases, the Windows heap may flag an allocation as having an extended block header, which informs the heap that there is additional metadata associated with the allocation. In previous versions of Windows, an attacker could corrupt the header of an allocation and make it appear as if the allocation had an extended block header. This could then be used by an attacker to force the heap to free another allocation that is currently in use by the program. The Windows 8 heap mitigates this attack by performing additional validation on extended block headers to ensure that they are correct.

Blocks cannot be allocated if they are already busy

Some of the attacks that have been proposed by security researchers rely on reallocating memory that is already in use by the program (e.g. [3]). This can allow an attacker to corrupt the state of an in-use heap allocation, such as a C++ object, and thereby gain control of the instruction pointer. The Windows 8 heap mitigates this attack by verifying that an allocation is not already flagged as in-use ("busy") when it is about to be allocated. If a block is flagged as in-use, the heap takes steps to safely terminate the process.

Encoded FirstAllocationOffset and BlockStride

One of the exploitation techniques proposed in [4] involved corrupting heap metadata (FirstAllocationOffset and BlockStride) that is used by the Low Fragmentation Heap (LFH) to calculate the address of an allocation within a subsegment. By corrupting these fields, an attacker can trick the heap into returning an address that is outside the bounds of a subsegment and potentially enable corruption of other in-use heap allocations.
The heap manager in Windows 8.1 addresses this attack by encoding the FirstAllocationOffset and BlockStride fields in order to limit an attacker's ability to deterministically control the calculation of allocation addresses by the LFH.

Guard pages

One of the ways that the Windows 8 heap better protects application data and heap metadata is through the use of guard pages. In this context, a guard page is an inaccessible page of memory that causes an access violation if an application attempts to read from it or write to it. Placing a guard page between certain types of sub-regions within the heap helps to partition the heap and localize any memory corruption that may occur. In an ideal setting, the Windows heap would encapsulate all allocations in guard pages in a manner similar to full-page heap verification. Unfortunately, this type of protection is not feasible for performance reasons. Instead, the Windows 8 heap uses guard pages to isolate certain types of sub-regions within the heap. In particular, guard pages are enabled for the following types of sub-regions:

- Large allocations. In cases where an application attempts to allocate memory larger than 512K (on 32-bit) or 1MB (on 64-bit), the memory allocation request is passed directly to the virtual memory allocator and the size is updated to allocate extra space for a guard page. This ensures that all large allocations have a trailing guard page.
- Heap segments. The Windows heap allocates large chunks of memory, known as heap segments, which are divided up as an application allocates memory. The Windows 8 heap adds a trailing guard page to all heap segments when they are allocated.
- Maximally-sized subsegments. Each heap segment may contain one or more subsegments that are used by the frontend allocator (the Low Fragmentation Heap, or LFH) to allocate blocks of the same size.
Once a certain threshold has been reached for allocating blocks of a given size, the LFH will begin allocating maximally-sized subsegments, which are subsegments that contain the maximum number of blocks possible for a given size. The Windows 8 heap adds a trailing guard page to these maximally-sized subsegments. For 32-bit applications, guard pages are inserted probabilistically to minimize the amount of virtual address space that is consumed.

Allocation order randomization

One of the behaviors that attackers rely on when exploiting heap buffer overruns is that there must be a way to reliably position certain heap allocations adjacent to one another. This requirement stems from the fact that an attacker needs to know how many bytes must be written in order to corrupt a target allocation on the heap (while minimizing collateral damage to the heap that could cause the application, and hence the attack, to be terminated). Attackers typically try to ensure that allocations are immediately adjacent to each other through techniques often referred to as heap massaging or heap normalization. These techniques attempt to bring the heap into a state where new allocations are placed at a desired location with respect to one another. In Windows 8, a new security feature has been added to the LFH which randomizes the order of allocations. This means that allocations made through the LFH are no longer guaranteed to be placed immediately adjacent to one another, even after an attacker has attempted to normalize the heap. This prevents an attacker from reliably assuming that an allocation containing a target object will be positioned after the allocation that they are able to overflow.
While an attacker may attempt to increase the reliability of their attack by corrupting more data or allocating more target objects, they run the risk of destabilizing the process by corrupting other heap state, or of terminating it by touching a guard page as described in the previous section. This is a good example of several mitigations working together: neither is foolproof on its own, but combined they result in increasingly complex requirements for a successful attack.

Although allocation order randomization helps make the internal layout of the heap nondeterministic, there are limits to how far it goes. First and foremost, the performance of the Windows heap is critical, as it is used as a general-purpose memory allocator by the vast majority of applications that run on Windows. As a side effect, allocation order randomization is currently limited to randomizing allocations within individual LFH subsegments (which account for the majority of allocations made by applications). This means backend allocations have no inherent entropy and may therefore be subject to deterministic allocation patterns, as noted in [5]. In addition to performance, there are also inherent limits to the effectiveness of allocation order randomization. If an attacker can read the contents of heap memory, they may be able to overcome the effects of randomization. Similarly, allocation order randomization is not designed to strongly mitigate heap vulnerabilities related to object lifetime issues, such as use-after-free vulnerabilities, because an attacker will generally be able to allocate enough replacement objects to overcome the randomization. We'll discuss other mitigations targeted at use-after-free issues, which are increasingly preferred by exploit writers, later in this series.
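The effect of allocation order randomization within a subsegment can be sketched with a toy simulation (illustrative Python, not the real LFH; the function name, block count, and per-subsegment shuffle are assumptions made for the sketch):

```python
import random

def new_subsegment_order(nblocks, rng=random):
    """Model of a freshly committed LFH subsegment: rather than handing
    out free blocks sequentially, hand them out in an order randomized
    once when the subsegment is created."""
    slots = list(range(nblocks))
    rng.shuffle(slots)
    return slots

# Two subsegments of 0x20 blocks each contain the same set of blocks,
# but almost never hand them out in the same order. An attacker who
# "massages" the heap can therefore no longer assume the allocation
# placed right after an overflowable buffer holds the target object.
order_a = new_subsegment_order(0x20)
order_b = new_subsegment_order(0x20)
assert sorted(order_a) == sorted(order_b) == list(range(0x20))
```

In this model the randomization costs only one shuffle per subsegment, which mirrors why the feature is cheap enough for a general-purpose allocator while still breaking adjacency assumptions.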
Conclusion

The hardening changes made to the Windows heap manager in Windows 8 and Windows 8.1 are designed to make it more difficult and costly to exploit heap corruption vulnerabilities. This has been accomplished by adding integrity checks to the metadata used by the heap, by protecting application data stored on the heap through the use of guard pages, and by randomizing the order of allocations. These mitigations do not make heap corruption vulnerabilities impossible to exploit, but they do have an impact on the time it takes to develop an exploit and on how reliable an exploit will be. Both of these factors play a role in determining whether or not an attacker will develop an exploit for a vulnerability. That said, the fact that heap corruption vulnerabilities are the most common vulnerability class that we address through security updates means it is likely that we will continue to see additional research into new exploitation techniques for heap vulnerabilities. As such, we will continue to look for ways to harden the Windows heap to further increase the difficulty of developing reliable exploits for heap corruption vulnerabilities.

- Matt Miller

References

[1] Ben Hawkes. Attacking the Vista Heap. Black Hat USA, Aug 2008.
[2] Ben Hawkes. Attacking the Vista Heap. Ruxcon, Nov 2008.
[3] Chris Valasek. Modern Heap Exploitation using the Low Fragmentation Heap. SyScan Taipei, Nov 2011.
[4] Chris Valasek. Windows 8 Heap Internals. Black Hat USA, Aug 2012.
[5] Zhenhua Liu. Advanced Heap Manipulation in Windows 8. Black Hat Europe, Mar 2013.

Source: Software Defense: mitigating heap corruption vulnerabilities - Security Research & Defense - Site Home - TechNet Blogs

PS: Matt Miller is "skape". I don't expect you to know who he is, but it's worth looking him up.
-
A proxy DDoSer? So you'd be DoSing yourself? As for the Facebook bot, note that Zatarra already made a nice one.
-
You should all visit the forum more often.