Everything posted by Nytro

  1. Unhide is a forensic tool that finds processes and TCP/UDP ports hidden by rootkits, LKMs, or other hiding techniques. Unhide runs on Unix/Linux and Windows systems. It implements six main techniques. Features: Compare /proc vs /bin/ps output. Compare info gathered from /bin/ps with info gathered by walking through the procfs (unhide-linux version only). Compare info gathered from /bin/ps with info gathered from syscalls (syscall scanning). Full occupation of the PID space (PID brute-forcing; unhide-linux version only). Compare /bin/ps output vs /proc, procfs walking, and syscalls (unhide-linux version only). Reverse search: verify that all threads seen by ps are also seen by the kernel. Quick compare of /proc, procfs walking, and syscalls vs /bin/ps output (unhide-linux version only); it is about 20 times faster than tests 1+2+3 but may give more false positives. URL: http://www.unhide-forensics.info Via: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
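The first technique above (comparing the PID set visible in /proc with what /bin/ps reports) can be sketched in a few lines of Python. This is only an illustration of the idea, not unhide's actual implementation; note that processes which exit between the two snapshots will also show up as candidates (false positives):

```python
import os
import shutil
import subprocess

def pids_from_procfs():
    # Every running process appears as a numeric directory under /proc
    return {int(d) for d in os.listdir('/proc') if d.isdigit()}

def pids_from_ps():
    # Ask /bin/ps for the PIDs it can see (one per line, no header)
    out = subprocess.run(['ps', '-e', '-o', 'pid='],
                         capture_output=True, text=True).stdout
    return {int(tok) for tok in out.split()}

if shutil.which('ps'):
    # PIDs visible in /proc but missing from ps output are hiding candidates;
    # short-lived processes that exited between snapshots also land here
    suspects = pids_from_procfs() - pids_from_ps()
    print(sorted(suspects))
```

A PID that stays in this set across several runs deserves a closer look before being called a rootkit.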
  2. "The 'Big Brother' laws and the prepaid SIM card law do not extend monitoring and do not allow access to the content of telephone or electronic communications without a warrant from a judge," assures SRI director George Maior. In an exclusive interview for Ziare.com, George Maior explained the intentions behind the new form of the data retention law and the prepaid card law, and what data the cybersecurity law will allow access to without a judge's warrant: "We cannot act preventively in this era with the tools of Sherlock Holmes." The SRI director responded to accusations that Romania's secret services are too powerful and too little controlled, and offered clarifications in the undercover officers scandal: "there is a regime of incompatibilities that we hold very dear in operating this exceptional weapon." Full article: http://www.sri.ro/fisiere/discursuriinterviuri/Interviu_ianuarie_2015.pdf Question: Isn't the prepaid card law an extension of monitoring as well? Answer: Monitoring of private conversations is not being extended. There simply has to be a record of those who buy these anonymized cards, as there is in very many European states. I wouldn't say there is a standard, but there is a practice. Go to Germany or Great Britain and try to get such a card. Two people have confirmed that they recently bought cards in Germany and Great Britain without an ID. So these people are LYING. If laws like this get approved, not only in Romania but in other states too, it means "Charlie" was staged, and other attacks invented, for greater control over the population. I know it's just a conspiracy theory, but think about it. // Muie garda
  3. The tags are there for SEO. They do seem like a lot, though. Can anyone tell us whether this is OK or not?
  4. 5 Benefits of a penetration test January 5, 2015 Adrian Furtuna Penetration testing projects are definitely fun for passionate pentesters. However, the question is: what are the real benefits of a pentest for the client company? What is the real value of a penetration test? Many clients have misconceptions and false assumptions about penetration testing, and they engage in this type of project for the wrong reasons, such as: After a penetration test I will be safe; A penetration test will find all of my vulnerabilities; I've heard that pentesting is 'sexy', so I would like one myself. Companies that do penetration tests for these reasons do not get the real benefits of this service and are practically throwing their money away. From my perspective, a penetration test has the following true benefits for the client company. Full article: 5 Benefits of a penetration test – Security Café
  5. The previous Likes will not be recovered. I'll take care of the homepage when I have a bit of time.
  6. I left only Likes and Dislikes in the sidebar. I think that's enough.
  7. Yes. I'm trying to move them and it's not working. The pages may end up looking really ugly.
  8. I've set it so you can give a Like OR a Dislike, not both. I'll see what else the plugin allows. "Unlike" seems to be available only in the Pro version.
  9. [h=3]Mobile eavesdropping via SS7 and first reaction from telecoms[/h] Mobile network operators and manufacturers have finally said a few words about the vulnerabilities in SS7 technology that allow an intruder to track subscribers, tap conversations and perform other serious attacks. We reported some of these vulnerabilities and attack schemes in May 2014 at Positive Hack Days IV, as well as here on our blog. In December 2014, these SS7 threats were brought to public attention again at the Chaos Communication Congress in Hamburg, where German researchers showed some new ways to intercept mobile phone calls using SS7. The research covered more than 20 networks worldwide, including T-Mobile in the United States. Meanwhile, the Washington Post reports that the GSMA did not respond to queries seeking comment about the vulnerabilities in question. For the Post's August article on location tracking systems that use SS7, GSMA officials acknowledged problems with the network and said it was due to be replaced over the next decade because of a growing list of security and technical issues. The reply from T-Mobile was more abstract: "T-Mobile remains vigilant in our work with other mobile operators, vendors and standards bodies to promote measures that can detect and prevent these attacks." We also found the first official reaction from Huawei: Huawei has obtained the vulnerability information from open channels and launched a technical analysis. Again, not much said. But it's better than nothing, considering that the SS7 problem is not new: it traces back to the 1970s. In the early 2000s the SIGTRAN specification was developed; it allowed transferring SS7 messages over IP networks, but the security flaws in the upper levels of the SS7 protocols remained. Telecom engineers had been warning since 2001 that subscriber location tracking and fraud schemes using SS7 are possible.
For obvious reasons, providers didn't want the public to know about these vulnerabilities. However, it's believed that law enforcement agencies have used SS7 vulnerabilities to spy on mobile networks for years. In 2014, it was found that there are private companies providing the whole range of the above-mentioned services to anyone who wants them. For example, this is how the SkyLock service provided by the American company Verint works: the Washington Post notes that Verint does not use its capabilities against American and Israeli citizens, "but several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations". A more detailed description of this tracking technology and other SS7 attacks can be found in our report "Vulnerabilities in SS7 mobile networks", published in 2014. The data presented in that report were gathered by Positive Technologies experts in 2013 and 2014 while consulting on security analysis for several large mobile operators, and are supported by practical research into the detected vulnerabilities and features of the SS7 network. During network security testing, Positive Technologies experts managed to perform attacks such as discovering a subscriber's location, disrupting a subscriber's availability, SMS interception, USSD request forgery (and transfer of funds as a result of this attack), voice call redirection, conversation tapping, and disrupting a mobile switch's availability. The testing revealed that even the top 10 telecom companies are vulnerable to these attacks. Moreover, there are known cases of such attacks being performed at the international level, including discovering a subscriber's location and tapping conversations from other countries. Common features of these attacks: The intruder doesn't need sophisticated equipment. We used a common computer with a Linux OS and an SDK for generating SS7 packets, which is publicly available on the web.
Upon performing one attack using SS7 commands, the intruder is able to perform the rest of the attacks using the same methods. For instance, if the intruder has managed to determine a subscriber's location, only one step is left for SMS interception, transfer of funds, etc. Attacks are based on legitimate SS7 messages: you cannot simply filter these messages, because that may negatively affect the whole service. An alternative way to solve the problem is presented in the final clause of this research. Read the full PDF report here. Posted by Positive Research at 12:21 AM Sursa: http://blog.ptsecurity.com/2015/01/mobile-eavesdropping-via-ss7-and-first.html
  10. We were using vBSEO, but I never liked it: vBSEO's Vulnerability Leads to Remote Code Execution | Sucuri Blog I installed 3 new plugins from DragonByte: - Advanced Post Thanks / Like (replacement for the vBSEO Likes) - Advanced User Tagging (it had been installed for a while, I updated it) - DragonByte SEO (replacement for vBSEO, with a similar link structure) So a whole bunch of problems may show up, both functional and security-related. If you find a problem, you can post it or send me a PM. An XSS gets you VIP. For anything else, we'll talk.
  11. Nytro

    Forum problem?!?

    The problem is only with the Likes; I'm working on it. I hope it will be fixed by tonight.
  12. These are exactly the ones that people from the forum reported to Talpashit for vBulletin 4. Those guys are idiots.
  13. Test. @Nytro @Nytrofdgdfgfdg
  14. Why do you need 6.7? Here is the changelog: https://www.hex-rays.com/products/ida/6.7/
  15. I've set up a new mobile theme for you. You should be able to browse the site more easily now.
  16. The first version is perfect. Hex-Rays is worth every penny.
  17. Tested, it works: a decompiler for x64. Thanks!
  18. Point, Click, Root. More than 300 exploits! Presented at Black Hat 2014. Version 3.3.3. More than 300 exploits. A military-grade professional security tool. Exploit Pack comes into the scene when you need to execute a pentest in a real environment; it provides you with all the tools needed to gain access and persist through the use of remote reverse agents. Remote Persistent Agents: reverse a shell and escalate privileges. Exploit Pack provides a complete set of features to create your own custom agents; you can include exploits or deploy your own personalized shellcode directly into the agent. Write your own exploits: use Exploit Pack as a learning platform. Quick exploit development: extend your capabilities and code your own custom exploits using the Exploit Wizard and the built-in Python editor, modded to fulfill the needs of an exploit writer. Sursa: Exploit Pack
  19. ExecutedProgramsList [h=4]Description[/h] ExecutedProgramsList is a simple tool that displays a list of programs and batch files that you previously executed on your system. For every program, ExecutedProgramsList displays the .exe file, the created/modified time of the .exe file, and the current version information of the program (product name, product version, company name) if it's available. For some of the programs, the last execution time of the program is also displayed. [h=4]System Requirements[/h] This utility works on any version of Windows, from Windows XP up to Windows 8. Both 32-bit and 64-bit systems are supported. [h=4]Data Source[/h] The list of previously executed programs is collected from the following data sources: Registry key: HKEY_CURRENT_USER\Classes\Local Settings\Software\Microsoft\Windows\Shell\MuiCache Registry key: HKEY_CURRENT_USER\Microsoft\Windows\ShellNoRoam\MUICache Registry key: HKEY_CURRENT_USER\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Compatibility Assistant\Persisted Registry key: HKEY_CURRENT_USER\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Compatibility Assistant\Store Windows Prefetch folder (C:\Windows\Prefetch) Sursa: ExecutedProgramsList - Shows programs previously executed on your system
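The last data source above, the Prefetch folder, is easy to enumerate yourself. Here is a hedged Python sketch of the idea, demonstrated against a throwaway directory since the real C:\Windows\Prefetch only exists on Windows; the function name is illustrative, not part of any tool:

```python
import datetime
import os
import tempfile

def list_prefetch(prefetch_dir):
    # Each .pf file in the Prefetch folder roughly corresponds to one executed
    # program; its modification time approximates the last execution time
    entries = []
    for entry in os.scandir(prefetch_dir):
        if entry.name.lower().endswith('.pf'):
            mtime = datetime.datetime.fromtimestamp(entry.stat().st_mtime)
            entries.append((entry.name, mtime))
    return sorted(entries)

# Demo against a throwaway directory; on a live Windows system you would
# point this at C:\Windows\Prefetch instead
demo = tempfile.mkdtemp()
open(os.path.join(demo, 'NOTEPAD.EXE-AB12CD34.pf'), 'w').close()
open(os.path.join(demo, 'README.TXT'), 'w').close()
for name, mtime in list_prefetch(demo):
    print(name, mtime)
```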
  20. [h=1]Crypto200 with The POODLE Attack[/h] Tetcon is one of the biggest security conferences in Vietnam. There are various talks, in both Vietnamese and English. This year, for the first time, the organizers decided to host a hacking challenge: a Capture The Flag (CTF)! While the CTF was running, I solved 3 tasks: getit, next and "Who let the dog out?". The first two tasks are not very hard; you should try them yourself. In this post, I would like to talk about "Who let the dog out?". It's about a cryptography attack; in particular, the POODLE attack. And the author of this task is Thai Duong (thaidn), one of the experts who discovered this attack. Download: [TetCON CTF 2015] Crypto200 with The POODLE Attack
  21. Cryptography Exercises Contents:
1 Source Coding 3
2 Caesar Cipher 4
3 Ciphertext-only Attack 5
4 Classification of Cryptosystems - Network Nodes 6
5 Properties of the modulo Operation 10
6 Vernam Cipher 11
7 Public-Key Algorithms 14
8 Double Encryption 15
9 Vigenere Cipher and Transposition 16
10 Permutation Cipher 20
11 Substitution Cipher 21
12 Substitution + Transposition 25
13 Affine Cipher 27
14 Perfect Secrecy 28
15 Feistel Cipher 38
16 Block Cipher 45
17 Digital Encryption Standard (DES) 46
18 Primitive Element 53
19 Diffie-Hellman Key Exchange 54
20 Pohlig-Hellman Asymmetric Encryption 58
21 ElGamal 59
22 RSA System 61
23 Euclid's Algorithm 65
24 Protocol Failure 66
25 Complexity 67
26 Authentication 68
27 Protocols 71
28 Hash Functions 73
29 Cipher Modes 78
30 Pseudo-Random Number Generators 79
31 Linear Feedback Shift Register 80
32 Challenge-Response 87
33 Application of Error-Correcting Codes in Biometric Authentication 89
34 General Problems 91
Download: http://www.iem.uni-due.de/~vinck/crypto/problems-crypto.pdf
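The exercise list opens with the Caesar cipher; as a warm-up, here is a minimal Python version (shift each letter by a fixed amount, wrapping within the alphabet):

```python
def caesar(text, shift):
    # Shift every letter by `shift` positions, wrapping around the alphabet;
    # non-letters (spaces, digits, punctuation) pass through unchanged
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

print(caesar('Attack at dawn', 3))    # 'Dwwdfn dw gdzq'
print(caesar('Dwwdfn dw gdzq', -3))   # decrypting is just the negative shift
```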
  22. Analyzing Man-in-the-Browser (MITB) Attacks The Matrix is real and living inside your browser. How, you ask? In the form of malware that targets your financial institutions. The criminals creating this malware do not even have to target the institution itself, but rather your Internet browser: by changing what you see in the browser, the attackers gain the ability to steal any information that you enter and to display whatever they choose. This has become known as the Man-in-the-Browser (MITB) attack. Download: https://www.sans.org/reading-room/whitepapers/forensics/analyzing-man-in-the-browser-mitb-attacks-35687
  23. (This post is joint work with @joystick; see also his blog here.) Motivated by our previous findings, we performed some more tests on the IOBluetoothHCIController service of the latest version of Mac OS X (Yosemite 10.10.1), and we found five additional security issues. The issues have been reported to Apple Security and, since the deadline we agreed upon with them has expired, we now disclose details & PoCs for four of them (the last one was reported a few days later and is still under investigation by Apple). All the issues are in the class IOBluetoothHCIController, implemented in the IOBluetoothFamily kext (md5 e4123caff1b90b81d52d43f9c47fec8f). [h=3]Issue 1 (crash-issue1.c)[/h] Many callback routines handled by IOBluetoothHCIController blindly dereference pointer arguments without checking them. The caller of these callbacks, IOBluetoothHCIUserClient::SimpleDispatchWL(), may actually pass NULL pointers, which are eventually dereferenced. More precisely, every user-space argument handled by SimpleDispatchWL() consists of a value and a size field (see crash-issue1.c for details). When a user-space client provides an argument with a NULL value but a large size, a subsequent call to IOMalloc(size) fails, returning a NULL pointer that is eventually passed to callees, causing the NULL pointer dereference. The PoC we provide targets the method DispatchHCICreateConnection(), but the very same approach can be used to cause a kernel crash using other callback routines (basically any other callback that receives one or more pointer arguments). At first, we ruled out this issue as a mere local DoS. However, as discussed here, Yosemite only partially prevents mapping the NULL page from user-space, so it is still possible to exploit NULL pointer dereferences to mount LPE attacks.
For instance, the following code can be used to map page zero:

Mac:tmp $ cat zeropage.c
#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <mach/vm_map.h>
#include <stdio.h>

int main(int argc, char **argv) {
    mach_vm_address_t addr = 0;
    vm_deallocate(mach_task_self(), 0x0, 0x1000);
    int r = mach_vm_allocate(mach_task_self(), &addr, 0x1000, 0);
    printf("%08llx %d\n", addr, r);
    *((uint32_t *)addr) = 0x41414141;
    printf("%08x\n", *((uint32_t *)addr));
}
Mac:tmp $ llvm-gcc -Wall -o zeropage{,.c} -Wl,-pagezero_size,0 -m32
Mac:tmp $ ./zeropage
00000000 0
41414141
Mac:tmp $

Trying the same without the -m32 flag results in the 64-bit Mach-O being blocked at load time by the OS with the message "Cannot enforce a hard page-zero for ./zeropage" (unless you do it as "root", but then what's the point?). [h=3]Issue 2 (crash-issue2.c)[/h] As shown in the screenshot below, IOBluetoothHCIController::BluetoothHCIChangeLocalName() is affected by an "old-school" stack-based buffer overflow, due to a bcopy(src, dest, strlen(src)) call where src is fully controlled by the attacker. To the best of our knowledge, this bug cannot be directly exploited due to the existing stack canary protection. However, it may still be useful to mount an LPE attack if used in conjunction with a memory leak vulnerability, leveraged to disclose the canary value.

[Screenshot: Issue 2, a plain stack-based buffer overflow]

[h=3]Issue 3 (crash-issue3.c)[/h] IOBluetoothHCIController::TransferACLPacketToHW() receives as an input parameter a pointer to an IOMemoryDescriptor object. The function carefully checks that the supplied pointer is non-NULL; however, regardless of the outcome of this test, it then dereferences the pointer (see the figure below; the attacker-controlled input parameter is stored in register r15).
The IOMemoryDescriptor object is created by the caller (DispatchHCISendRawACLData()) using the IOMemoryDescriptor::withAddress() constructor. As this constructor is provided with a user-controlled value, it may fail and return a NULL pointer. See the discussion of Issue 1 regarding the exploitability of NULL pointer dereferences on Yosemite.

[Screenshot: Issue 3, the module checks if r15 is NULL, but dereferences it anyway]

[h=3]Issue 4 (lpe-issue1.c)[/h] In this case, the problem is due to a missing sanity check on the arguments of the following function:

IOReturn BluetoothHCIWriteStoredLinkKey(
    uint32_t req_index,
    uint32_t num_of_keys,
    BluetoothDeviceAddress *p_device_addresses,
    BluetoothKey *p_bluetooth_keys,
    BluetoothHCINumLinkKeysToWrite *outNumKeysWritten);

The first parameter, req_index, is used to find an HCI request in the queue of allocated HCI requests (thus this exploit first requires filling this queue with possibly fake requests). The second integer parameter (num_of_keys) is used to calculate the total size of the other inputs, respectively pointed to by p_device_addresses and p_bluetooth_keys.
As shown in the screenshot below, these values are not checked before being passed to the function IOBluetoothHCIController::SendHCIRequestFormatted(), which has the following prototype:

IOReturn SendHCIRequestFormatted(uint32_t req_index, uint16_t inOpCode,
    uint64_t outResultsSize, void *outResultBuf,
    const char *inFormat, ...);

[Screenshot: Issue 4, an exploitable buffer overflow]

The passed format string "HbbNN" will eventually cause size_of_addresses bytes to be copied from p_device_addresses to outResultBuf in reverse order (the 'N' format consumes two arguments: the first is a size, the second a pointer to read from). If the calculated size_of_addresses is big enough (i.e., if we provide a big enough num_of_keys parameter), the copy overflows outResultBuf, trashing everything above it, including a number of function pointers in the vtable of an HCI request object. These pointers are overwritten with attacker-controlled data (i.e., the data pointed to by p_bluetooth_keys) and called before returning to userspace, so we can divert execution wherever we want. As a PoC, lpe-issue1.c exploits this bug and attempts to call a function located at the attacker-controlled address 0x4141414142424242. Please note that the attached PoC requires some more tuning before it can cleanly return to user-space, since more than one vtable pointer is corrupted during the overflow and needs to be fixed with valid pointers.

[Screenshot: Execution of our "proof-of-concept" exploit (Issue 4)]

[h=3]Notes[/h] All the PoCs we provide in this post are not "weaponized", i.e., they do not contain a real payload, nor do they attempt to bypass existing security features of Yosemite (e.g., kASLR and SMEP).
If you're interested in bypass techniques (as you probably are, if you made it here), Ian Beer of Google Project Zero covered pretty much all of it in a very thorough blog post. In his case, he used a leak in the IOKit registry to calculate the kslide and defeat kASLR, while he used an in-kernel ROP chain to bypass SMEP. More recently, @i0n1c posted here about how kASLR is fundamentally broken on Mac OS X at the moment. [h=3]Conclusions[/h] Along with the last issue identified, we shared with Apple our conclusions on this kext: based on the issues we identified, we speculate there are many other crashes and LPE vulnerabilities in it. Ours, however, is just a best-effort analysis done in our spare time, and given the very small effort it took us to identify the vulnerabilities, we would suggest a serious security evaluation of the whole kext code. [h=3]Disclosure timeline[/h]
02/11: Notification of issues 1, 2 and 3.
23/11: No answer received from Apple; notification of issue 4. As no answer had been received since the first contact, we proposed December 2 as a possible disclosure date.
25/11: Apple answers, requesting more time. We propose to move the disclosure date to January 12.
27/11: Apple accepts the new deadline.
05/12: We contact Apple asking for the status of the vulnerabilities.
06/12: Apple says they're still "investigating the issue".
23/12: Notification of a new issue (#5), proposing January 23 as a tentative disclosure date.
06/01: Apple asks for more time for issue #5. We propose to move its disclosure date to February 23 and remind them of our intention to disclose the 4 previous issues on January 12.
12/01: No answer from Apple; disclosing the first 4 issues.
Sursa: Roberto Paleari's blog: Time to fill OS X (Blue)tooth: Local privilege escalation vulnerabilities in Yosemite
  24. A Call for Better Coordinated Vulnerability Disclosure Chris Betz 11 Jan 2015 6:49 PM For years our customers have been in the trenches against cyberattacks in an increasingly complex digital landscape. We’ve been there with you, as have others. And we aren’t going anywhere. Forces often seek to undermine and disrupt technology and people, attempting to weaken the very devices and services people have come to depend on and trust. Just as malicious acts are planned, so too are counter-measures implemented by companies like Microsoft. These efforts aim to protect everyone against a broad spectrum of activity ranging from phishing scams that focus on socially engineered trickery, to sophisticated attacks by persistent and determined adversaries. (And yes, people have a role to play – strong passwords, good policies and practices, keeping current to the best of your ability, detection and response, etc. But we’ll save those topics for another day). With all that is going on, this is a time for security researchers and software companies to come together and not stand divided over important protection strategies, such as the disclosure of vulnerabilities and the remediation of them. In terms of the software industry at large and each player’s responsibility, we believe in Coordinated Vulnerability Disclosure (CVD). This is a topic that the security technology profession has debated for years. Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks. Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree. Releasing information absent context or a stated path to further protections, unduly pressures an already complicated technical environment. 
It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a “fix” before it is disclosed to the public, including those who would use the vulnerability to orchestrate an attack. We are in this latter camp. CVD philosophy and action is playing out today as one company - Google - has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well-known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so. Specifically, we asked Google to work with us to protect customers by withholding details until Tuesday, January 13, when we will be releasing a fix. Although following through keeps to Google’s announced timeline for disclosure, the decision feels less like principles and more like a “gotcha”, with customers the ones who may suffer as a result. What’s right for Google is not always right for customers. We urge Google to make protection of customers our collective primary goal. Microsoft has long believed coordinated disclosure is the right approach and minimizes risk to customers. We believe those who fully disclose a vulnerability before a fix is broadly available are doing a disservice to millions of people and the systems they depend upon. Other companies and individuals believe that full disclosure is necessary because it forces customers to defend themselves, even though the vast majority take no action, being largely reliant on a software provider to release a security update. Even for those able to take preparatory steps, risk is significantly increased by publicly announcing information that a cybercriminal could use to orchestrate an attack, which assumes those who would take action are even aware of the issue.
Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a “fix” has been provided to customers, and even after a “fix” is made publicly available only a very small number are ever exploited. Conversely, the track record of vulnerabilities publicly disclosed before fixes are available for affected products is far worse, with cybercriminals more frequently orchestrating attacks against those who have not or cannot protect themselves. Another aspect of the CVD debate has to do with timing – specifically the amount of time that is acceptable before a researcher broadly communicates the existence of a vulnerability. Opinion on this point varies widely. Our approach, and one that we have advocated others adopt, is that researchers work with the vendor to deliver an update that protects customers prior to releasing details of the vulnerability. There are certainly cases where lack of response from a vendor(s) challenges that plan, but still the focus should be on protecting customers. You can see our values in action through our own security experts who find and report vulnerabilities in many companies’ products, some of which we receive credit for, and many that go unrecognized publicly. We don’t believe it would be right to have our security researchers find vulnerabilities in competitors’ products, apply pressure that a fix should take place in a certain timeframe, and then publicly disclose information that could be used to exploit the vulnerability and attack customers before a fix is created. Responding to security vulnerabilities can be a complex, extensive and time-consuming process. As a software vendor this is an area in which we have years of experience.
Some of the complexity in the timing discussion is rooted in the variety of environments that we as security professionals must consider: real-world impact in customer environments, the number of supported platforms the issue exists in, and the complexity of the fix. Vulnerabilities are not all made equal, nor according to a well-defined measure. And an update to an online service can have different complexity and dependencies than a fix to a software product, a decade-old software platform on which tens of thousands have built applications, or hardware devices. Thoughtful collaboration takes these attributes into account. To arrive at a place where important security strategies protect customers, we must work together. We appreciate and recognize the positive collaboration, information sharing and results-orientation underway with many security players today. We ask that researchers privately disclose vulnerabilities to software providers, working with them until a fix is made available before sharing any details publicly. It is in that partnership that customers benefit the most. Policies and approaches that limit or ignore that partnership do not benefit the researchers, the software vendors, or our customers. It is a zero-sum game where all parties end up injured. Let’s face it, no software is perfect. It is, after all, made by human beings. Microsoft has a responsibility to work in our customers’ best interest to address security concerns quickly, comprehensively, and in a manner that continues to enable the vast ecosystem that provides technology to positively impact people’s lives. Software is organic, usage patterns and practices change, and new systems are built on top of products that test (and in some cases exceed) the limits of their original design. In many ways that’s the exciting part of software within the rapidly evolving world that we live in. Stating these points isn’t in any way an abdication of responsibility.
It is our job to build the best possible software that we can, and to protect it continuously to the very best of our ability. We’re all in. Chris Betz Senior Director, MSRC Trustworthy Computing [Note: In our own CVD policy (available at microsoft.com/cvd), we do mention exceptions for cases in which we might release an advisory about a vulnerability in a third party’s software before an update is ready, including when the technical details have become publicly known, when there is evidence of exploitation of an unpatched vulnerability, and when the vendor fails to respond to requests for discussion.] Sursa: A Call for Better Coordinated Vulnerability Disclosure - MSRC - Site Home - TechNet Blogs
  25. The pitfalls of allowing file uploads on your website These days a lot of websites allow users to upload files, but many don't know about the pitfalls of letting users (potential attackers) upload files, even valid files. What's a valid file? Usually, the restriction is on two parameters: the uploaded file's extension and the uploaded Content-Type. For example, the web application could check that the extension is "jpg" and the Content-Type "image/jpeg" to make sure it's impossible to upload malicious files. Right? The problem is that plugins like Flash don't care about extension or Content-Type. If a file is embedded using an <object> tag, it will be executed as a Flash file as long as the content of the file looks like a valid Flash file. But wait a minute! Shouldn't the Flash be executed within the domain that embeds the file using the <object> tag? Yes and no. If a Flash file (a bogus image file) is uploaded on victim.com and then embedded at attacker.com, the Flash file can execute JavaScript within the domain of attacker.com. However, when the Flash file sends requests, it is allowed to read files within the domain of victim.com. This basically means that if a website allows file uploads without validating the content of the files, an attacker can bypass any CSRF protection on the website.
The attack

Based on these facts we can create an attack scenario like this:

1. An attacker creates a malicious Flash (SWF) file
2. The attacker changes the file extension to JPG
3. The attacker uploads the file to victim.com
4. The attacker embeds the file on attacker.com using an <object> tag with type "application/x-shockwave-flash"
5. The victim visits attacker.com, which loads the file via the embedded <object> tag
6. The attacker can now send and receive arbitrary requests to victim.com using the victim's session
7. The attacker sends a request to victim.com and extracts the CSRF token from the response

A payload could look like this:

<object style="height:1px;width:1px;" data="http://victim.com/user/2292/profilepicture.jpg" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=read&u=http://victim.com/secret_file.txt"></object>

The fix

The good news is that there's a fairly easy way to prevent Flash from doing this. Flash won't execute the file if the server sends a Content-Disposition header like so:

Content-Disposition: attachment; filename="image.jpg"

So if you allow file uploads or print arbitrary user data in your service, you should always validate the contents as well as send a Content-Disposition header where applicable. Another way to remediate issues like this is to host the uploaded files on a separate domain (like websiteusercontent.com).

Other uses

But the fun doesn't stop at file uploads! Since the only requirement of this attack is that the attacker can control the data at some location on the target domain (regardless of Content-Type), there's more than one way to perform it. One way is to abuse a JSONP API. Usually, the attacker can control the output of a JSONP endpoint by changing the callback parameter. However, if an attacker uses an entire Flash file as the callback, it can be used just like an uploaded file in this attack. 
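Both mitigations named above can be sketched in a few lines of Python. This is a hedged illustration, not Detectify's code: the Content-Disposition value follows the article, while the X-Content-Type-Options header and the specific callback allowlist regex are our own assumptions about reasonable hardening:

```python
import re

# Mitigation 1 (from the article): serve user uploads with a
# Content-Disposition "attachment" header so Flash refuses to
# execute the bytes in-page.
def upload_headers(stored_name: str) -> list:
    return [
        ("Content-Disposition", f'attachment; filename="{stored_name}"'),
        # Extra hardening (our assumption, not from the article):
        # stop browsers from MIME-sniffing the response.
        ("X-Content-Type-Options", "nosniff"),
    ]

# Mitigation 2 (the article's tl;dr): allowlist JSONP callback names so
# the callback parameter can never smuggle an entire SWF file. The exact
# pattern is our choice: a short identifier of letters, digits, "_" and ".".
CALLBACK_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_.]{0,63}$")

def safe_callback(name: str) -> bool:
    return bool(CALLBACK_RE.match(name))
```

Usage: `safe_callback("jQuery123_cb")` is accepted, while a callback beginning with the SWF header bytes (`"CWS\x07..."`) is rejected because the control characters fall outside the allowlist.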
A payload could look like this:

<object style="height:1px;width:1px;" data="http://victim.com/user/jsonp?callback=CWS%07%0E000x%9C%3D%8D1N%C3%40%10E%DF%AE%8D%BDI%08%29%D3%40%1D%A0%A2%05%09%11%89HiP%22%05D%8BF%8E%0BG%26%1B%D9%8E%117%A0%A2%DC%82%8A%1Br%04X%3B%21S%8C%FE%CC%9B%F9%FF%AA%CB7Jq%AF%7F%ED%F2%2E%F8%01%3E%9E%18p%C9c%9Al%8B%ACzG%F2%DC%BEM%EC%ABdkj%1E%AC%2C%9F%A5%28%B1%EB%89T%C2Jj%29%93%22%DBT7%24%9C%8FH%CBD6%29%A3%0Bx%29%AC%AD%D8%92%FB%1F%5C%07C%AC%7C%80Q%A7Nc%F4b%E8%FA%98%20b%5F%26%1C%9F5%20h%F1%D1g%0F%14%C1%0A%5Ds%8D%8B0Q%A8L%3C%9B6%D4L%BD%5F%A8w%7E%9D%5B%17%F3%2F%5B%DCm%7B%EF%CB%EF%E6%8D%3An%2D%FB%B3%C3%DD%2E%E3d1d%EC%C7%3F6%CD0%09" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=alert&u=http://victim.com/secret_file.txt"></object>

tl;dr: Send Content-Disposition headers for uploaded files and validate your JSONP callback names, or put the uploaded files on a separate domain.

And as always, if you want to know whether your website has issues like these, try a Detectify scan! That's it for now.

Written by: Mathias, Frans

Source: Detectify Blog – The pitfalls of allowing file uploads on your...