Initially identified fifteen years ago, and clearly articulated in a Microsoft Security Advisory, DLL hijacking is the practice of having a vulnerable application load a malicious library (allowing for the execution of arbitrary code) rather than the legitimate one, by placing the malicious DLL at a preferential location in the Dynamic-Link Library Search Order - the pre-defined order in which Microsoft Windows searches for a DLL when the developer has not specified a full path. Despite published advice on secure development practices to mitigate this threat being available for several years, it remains a problem today: it is an ideal place for malicious code to hide and persist, and it takes advantage of the security context of the loading program.

How can DLL hijacking be detected?

Okay - so it's up to developers to be more careful about how they load their libraries, but in the meantime how can we detect whether our systems have been compromised in this way? To achieve this I have been experimenting with a new methodology (well, at least it's new to me!) for detecting active attacks of this nature on vulnerable systems, and have written a program which does the following:

1. Iterate through each running process on the system, identifying all the DLLs which they have loaded
2. For each DLL, inspect all the locations where a malicious DLL could be placed
3. If a DLL with the same name appears in multiple locations in the search order, perform an analysis based on which location is currently loaded and highlight the possibility of a hijack to the user
4. Additionally: check each DLL to see whether it has been digitally signed, and give the user the option to ignore all signed DLLs

During testing I have found that DLL hijacking isn't always malicious; in fact there are a whole bunch of digitally signed libraries which sit in the base directory of an application (perhaps they act differently to their generic counterparts?). Accordingly, in order to reduce the amount of noise returned by the tool, I implemented the '/unsigned' parameter, which I would recommend you use the first time you run it. This ignores cases where both the DLL which has actually been loaded and the others found in the search order are all signed (and therefore more likely to be legitimate) - if you want to dig deep, feel free to leave this off!

By default, the tool will only display results where the library being examined was loaded from one of the 'DLL search order' paths, as otherwise it implies it was safely loaded from an alternative location. Unfortunately, this excludes the 'Current Working Directory' (due to the lack of an API to retrieve this data and undocumented internal memory structure changes between Windows versions). If you want to override this you can, with the /verbose option (realistically, in conjunction with /unsigned to reduce the noise). This would be useful if you are looking for 'remote system, current working directory' style attacks, as it displays entries with multiple potential DLLs irrespective of where the loaded one came from.

Demonstration of tool in action

To test the tool, I created a vulnerable executable which does a single action: LoadLibrary(L"dll_hijack_test_dll.dll");. This sits alongside the associated DLL which, on being loaded, writes a message to the screen and sleeps forever to keep the program running.
I put the DLL in two locations on the system:

1. The path to the executable
2. The Windows System directory (C:\Windows\System32)

This represents a common DLL hijacking attack in which the attacker places the malicious DLL in the directory the program is launched from, which is searched before the Windows System directory (where, in this case, the legitimate DLL would be).

Image 1. The demo program running with the DLL loaded

The image above shows the demo running, alongside the properties page from Process Hacker, which shows the DLL as being loaded. At this point we run dll_hijack_detect.exe, which produces the following result:

Image 2. Output from dll_hijack_detect.exe on demo system

Video demonstration

As we can see, it has successfully identified the hijack and informed the user!

Sounds great! Where can I download a copy?

I have released the source code and binaries, all of which are available from my GitHub. In addition you will find a copy of the dll_hijack_test executable and DLL, so you can try it out for yourself!

Source
-
Introduction

In this last part of the Website Hacking series, we are going to list 18 common web vulnerabilities and flaws and briefly provide solutions to them. Some of them are described for the first time in the Website Hacking series, and some we have discussed before in greater depth.

1. Saving all user input

If you are using a framework, for example a PHP framework, you might be tempted to save all user input to your model or database since it has already been validated and escaped. Let us say that you are using CakePHP and have included a registration form using CakePHP's Form helper.

SNIPPET 1

Now, you might be tempted to save all data from CakePHP's $this->request->data array as is, if you have not read the docs carefully or viewed some of the examples provided there (the live blog site).

SNIPPET 2

You just save all the data and thank the framework creators. However, there are at least two things you did wrong. Firstly, $this->request->data does not contain escaped/sanitized input, just the input from the superglobals, so you should use CakePHP's h() function to prevent people from inserting tags in their username, like this: h($this->request->data).

However, escaping is not enough, and saving everything is the wrong approach in the first place. If you save all user input in your Model (database), the user can add new input fields directly in his browser and try to guess columns in your users table for which you have not provided an input in the website's form. For example, many CakePHP applications have a "role" column set to user/admin or something similar (it is used in the docs as well). The user can just open his Developer Tools, find the registration form (or right click and select Inspect Element), click on Edit as HTML and add a new input like this:

<input name="data[user][role]" type="text">

Or whatever the current way for forms to interact with your Models is, guess column names and insert values into them. One way to solve this is to rename the column that defines the user's roles and permissions to something unpredictable. However, that is not the safest approach you can take. You can either insert the data into the database manually, which ensures no extra columns will be saved:

SNIPPET 3

Or alternatively, you could still save all user data but explicitly set the values of columns not found in the form:

SNIPPET 4
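To illustrate the same whitelisting idea in plain PHP, here is a minimal sketch (this is not CakePHP's API; the table and column names, and the assumption that $pdo is an existing PDO connection, are illustrative only). Only the columns the form is allowed to set are taken from the request, and sensitive columns such as role are set explicitly on the server side.

<?php
// Illustrative only: keep just the columns the registration form may set,
// drop everything else the client might have injected.
$allowed = ['username', 'email', 'password'];
$data = array_intersect_key($_POST, array_flip($allowed));

// Sensitive columns such as "role" are never taken from user input.
$data['role'] = 'user';

// $pdo is assumed to be an existing PDO connection; the table and column
// names are assumptions for this sketch.
$stmt = $pdo->prepare(
    'INSERT INTO users (username, email, password, role) VALUES (?, ?, ?, ?)'
);
$stmt->execute([
    $data['username'] ?? '',
    $data['email'] ?? '',
    password_hash($data['password'] ?? '', PASSWORD_DEFAULT),
    $data['role'],
]);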
2. Allowing user access to assets

Many sites work with user input and user data and store this data. Clients can see where their assets are stored, so there is no need for them to guess. For example, a client could see that the images he uploaded were stored in /uploads/{username}, because the images he uploaded were loaded into the page from that directory; so if he knows the usernames of other people, he could just change the directory name to another user and browse through all of that user's data without having to brute-force directory names. The first way to tackle this issue that we discussed before (adding Options All -Indexes to the .htaccess file) is not enough. It would prevent users from browsing directories and opening whatever they want, but they would still know the directory exists, and they can still guess directory names because the server returns a 403 Forbidden (which shows something exists at that path). Furthermore, they could guess file names from patterns that the file names follow and open them. Therefore, you need to block access to the files in your uploads directory.

If you are storing text files (let us say users can keep notes and view/edit them whenever they want) you could add the following rule to your .htaccess:

RewriteEngine On
RewriteRule ^uploads/.*\.(txt|doc)$ - [F,L,NC]

The F flag returns a 403 Forbidden response, the L flag stops the subsequent rules from being processed, and NC makes the match case-insensitive.

Figure 1: The page with only directory listing disabled.
Figure 2: The page with only directory listing disabled.

You cannot browse directories, but if each user has a notes.txt file, you can easily view a user's notes by knowing only their username.

Figure 3: Trying to access the notes with both directory listing disabled and access to the files controlled.

If you use the rewrite rule to stop users from browsing other users' notes, your back-end would still be able to access the notes, show them to their owners or edit them. For example:

SNIPPET 5

Where the $user variable would come from a session in a real-world application.

3. Running a basic WordPress installation

A common mistake here is not limiting the login attempts on your wp-admin page. This allows anyone to brute-force your credentials and destroy your blog/site. It becomes even easier because most people keep 'admin' as their master username, which leaves only the password to be brute-forced to gain full access to the WP website. Another mistake is that the wp-admin login page is often left without any form of CAPTCHA or protection against bots. Combined with no limit on login attempts, this equals certain death of your online presence at some point in the future. You should avoid all three of these things, and also change the default wp-admin path to something different (obfuscation).

4. Relying too much on IP addresses while having weak bot protection

Most ISPs provide dynamic IP addresses, and the IP address you have banned or stored may already be obsolete in less than a day. Furthermore, it is often not very difficult to change your IP address - use a proxy, release it from the router or from the OS, change locations. There are myriad ways to do it. To prevent bots from causing undesired consequences, it is better to use alternative measures - enhance your CAPTCHAs, add inputs only bots will fill out, require JavaScript/cookies to be enabled to submit a form, and so on.

5. Improper redirects

Let us say that you have a redirect page or a GET value (for simplicity's sake) that redirects users to another page of your site or to another website. If you forget to disallow redirects to third-party websites - or, in case you do allow those, if you do not create a warning page that tells the user where they are going and that they are leaving your site before redirecting - attackers can easily abuse your site by handing out links that seem to point to your site but actually redirect users to malicious websites.

if (isset($_GET['redirect'])) {
    header("Location: " . $_GET['redirect']);
}

If we have something as simple as this, then users can easily be fooled into entering bad sites by following a URL like this:

http://localhost:8079/articles/Website%20Hacking%20Part%20VII/?redirect=http://www.somemalicioussitehere.com
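One way to harden the redirect above is to follow only relative paths on your own site or absolute URLs whose host appears on an explicit whitelist, and to send everything else through an interstitial warning page. The sketch below is illustrative only: the whitelisted host and the /redirect-warning.php page are assumptions, not part of any existing site.

<?php
// Illustrative sketch of a safer redirect handler.
$target = $_GET['redirect'] ?? '/';
$allowedHosts = ['www.example.com'];   // your own domain(s) - assumption

$host = parse_url($target, PHP_URL_HOST);

if ($host === null && isset($target[0]) && $target[0] === '/' && ($target[1] ?? '') !== '/') {
    // A plain relative path on this site (not a protocol-relative "//host" URL).
    header('Location: ' . $target);
} elseif (in_array($host, $allowedHosts, true)) {
    // An absolute URL, but only to a host we explicitly trust.
    header('Location: ' . $target);
} else {
    // Anything else: show the user where they are about to be taken first
    // (the warning page name is an assumption).
    header('Location: /redirect-warning.php?to=' . urlencode($target));
}
exit;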
6. Cross-Site Request Forgery

If your site allows users to add comments/posts and insert tags such as <img> that load a third-party image, they can provide a link that is not actually an image but will fool the browsers of the clients reading those comments into loading the resource, thereby performing an action on another website if those users are authenticated on it. For example, if Facebook could be made to follow someone/something on their network with a simple GET request carrying a couple of parameters, or with a particular URL, we could have added an image whose source pointed to that URL, and any user who was logged in and viewed our comment would have followed an arbitrary person. Of course, this would not work in this particular situation.

7. Insecure file handling

A common mistake is to trust that a file does not contain something inappropriate. Code can be disguised as an image, so checking the file extension is not enough. At the very least, the MIME type should also be checked. Also, ASCII/text files should be escaped before being displayed.

Here is an example of such a vulnerability:

SNIPPET 6

The vulnerability arises when at some point we display the contents of the .txt file on our page:

SNIPPET 7

If the file we submit contains the following code:

<script> alert(document.cookie); </script>

then the visiting user's cookies for that website will be shown in an alert.

8. Displaying and trusting HTTP headers

These can be modified by users and can be malicious. For example, if you display the client's User-Agent header, it might have been changed to contain code which would then be executed in your back-end. The same goes for the Referer header, so it should not be used on its own to decide whether the user can access a particular page (for example, checking whether the referrer is the login page and assuming the user has logged in successfully because he was redirected to the members area's index page from the login page).

9. Information disclosure

Your live apps should not be in debug mode. Errors should not be shown to visitors.

10. Directory traversal

If you are using a parameter that opens different files on your website based on user input, your back-end should strip or escape special characters such as the . (dot) and / (slash) from the input, and preferably use whitelisting.

11. Using HTTP for semi-confidential data

A common flaw is using HTTP for sites that include mechanisms such as registration/login. Even widely used online marketplaces in Bulgaria, such as OLX.bg, use plain HTTP. Using HTTP makes it easy for potential attackers on your network to sniff your traffic and get your credentials with no real effort. For example, if you log in to OLX while on a shared Wi-Fi network, you are at risk.

12. Sessions can be stolen

Sessions can be stolen, allowing the attacker to log in as someone else. There are multiple vectors of defense here - such as checking the IP address and the user agent, regenerating the session ID, and adding extra cookies that are checked alongside the session.

13. Be careful which third-party libraries, CDNs and plugins you use

They might simply be outdated, opening a wide variety of security holes, or they might be malicious - giving the shady library's creator access to your server.

14. Bots are everywhere

Take care of malicious bots not by banning their IPs but by enhancing CAPTCHAs, adding hidden form fields that real users would not fill out, and requiring JavaScript or cookies to be enabled to submit a form.

15. Use HTTP-only cookies

This reduces the impact of some other attacks, such as XSS, because scripts cannot read cookies flagged HttpOnly.

16. Hashing

Hash your passwords and avoid the MD5 and SHA-1 algorithms (see https://community.qualys.com/blogs/securitylabs/2014/09/09/sha1-deprecation-what-you-need-to-know and "Why does some popular software still use md5?" on Information Security Stack Exchange). Use salts to prevent attacks with rainbow tables.
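As a concrete illustration of the hashing advice above, PHP (5.5+) ships with a built-in password hashing API that generates a per-password salt and currently uses bcrypt by default, so neither MD5/SHA-1 nor hand-rolled salting is needed. A minimal sketch, assuming $storedHash is the hash previously saved for the user:

<?php
// Registration: store the hash, never the plain-text password.
$hash = password_hash($_POST['password'], PASSWORD_DEFAULT);
// ... save $hash to the users table.

// Login: $storedHash is the value loaded from the database (assumption).
if (password_verify($_POST['password'], $storedHash)) {
    // Optionally upgrade the hash if the default algorithm or cost has changed.
    if (password_needs_rehash($storedHash, PASSWORD_DEFAULT)) {
        $newHash = password_hash($_POST['password'], PASSWORD_DEFAULT);
        // ... update the stored hash in the database.
    }
    // Credentials valid: log the user in.
}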
17. XSS

Always escape input unless you really, really trust the source (for example, an admin panel). You can either remove tags or display them as entities, depending on your needs. In PHP:

strip_tags($input, $allowedTags);
htmlspecialchars($input, ENT_QUOTES);
htmlentities($input);

18. SQL Injection

Use prepared statements, or do not run any query that is not hardcoded without sanitizing its inputs (in PHP: use the PDO class, or sanitize with mysqli_real_escape_string($conn, $str) if you are using mysqli; do not use the mysql_* functions).

Conclusion

This was the last part of the Website Hacking series. We have introduced some new vulnerabilities and briefly discussed them, and we have summarized our points for everything that we have talked about so far. We hope that you will now feel more confident when deploying your web apps by putting these strategies to use.

Source