Everything posted by Usr6

  1. Encrypted information has been accessed during a data breach at the password management service OneLogin. The breach affects "all customers served by our US data centre", and the perpetrators had "the ability to decrypt encrypted data", according to The Register. Those affected have been advised to visit a registration-only support page outlining the steps they need to take. Security experts said the breach was "embarrassing" and showed every company was open to attack.

     OneLogin is a single sign-on service, allowing users to access multiple apps and sites with just one password. In 2013, the company had 700 business customers and passed 12 million licensed users. Apps and sites integrated into the service include Amazon Web Services, Microsoft Office 365, Slack, Cisco Webex, Google Analytics and LinkedIn.

     "We have since blocked this unauthorized access, reported the matter to law enforcement, and are working with an independent security firm to determine how the unauthorized access happened," chief information security officer Alvaro Hoyos said on the company's blog. "We are actively working to determine how best to prevent such an incident from occurring in the future."

     Users who log in to the site have been given a list of steps designed to minimise the risk to their data. These include:
       • forcing a password reset for all users
       • generating new security credentials and certificates for apps and sites
       • recycling secrets stored in OneLogin's secure notes

     Some customers have criticised OneLogin for requiring users to log in to see the list. The company has not yet responded to a BBC request for comment. In its email to customers, OneLogin said that "because this is still an active investigation involving law enforcement, there are certain details we can't comment on at this time. We understand how frustrating this might be and thank you for your patience while we continue the investigation."
     'Strong passwords'

     "Companies need to understand the risks of using cloud-based systems," Professor Bill Buchanan of Edinburgh Napier University told the BBC. "Increasingly they need to encrypt sensitive information before they put it within cloud systems, and watch that their encryption keys are not distributed to malicious agents. It is almost impossible to decrypt data that uses strong encryption, unless the encryption key has been generated from a simple password," he said.

     IT security consultant Ben Schlabs told the BBC it was likely the compromised data included passwords protected using "hashing" - converting the data into fixed-length strings of characters or numbers. "The security of the data would then depend on the strength of the passwords, and of the password hashes," he said. "I would happily store my properly encrypted password safe in any cloud service, because you don't know my password for that safe and I trust encryption." The strongest encryption system "hasn't been broken yet, and there's no sign that it should be," he said.

     Source: http://www.bbc.com/news/technology-40118699
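     To illustrate the hashing Schlabs describes: a salted, slow hash such as PBKDF2 maps any password to a fixed-length digest, so the attacker's remaining effort depends on the strength of the password itself. This is only a generic sketch, not OneLogin's actual scheme:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2 turns a variable-length password into a fixed-length digest;
    # the iteration count makes each brute-force guess expensive.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
digest = hash_password("correct horse battery staple", salt)

print(len(digest))                                                    # 32
print(digest == hash_password("correct horse battery staple", salt))  # True
print(digest == hash_password("hunter2", salt))                       # False
```

     Note that a stolen hash database like this still falls to dictionary attacks against weak passwords, which is exactly why the breach advice includes a forced password reset.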
  2. Udemy - Windows Server 2016 System Administration for Beginners
     https://www.udemy.com/windows-server-2016/?couponCode=FREEUCOUPONCOURSES
  3. Welcome to this tutorial series on ARM assembly basics. This is the preparation for the follow-up tutorial series on ARM exploit development (not published yet). Before we can dive into creating ARM shellcode and building ROP chains, we need to cover some ARM assembly basics first. The following topics will be covered step by step.

     ARM Assembly Basics Tutorial Series:
       • Part 1: Introduction to ARM Assembly
       • Part 2: Data Types and Registers
       • Part 3: ARM Instruction Set
       • Part 4: Memory Instructions: Loading and Storing Data
       • Part 5: Load and Store Multiple
       • Part 6: Conditional Execution and Branching
       • Part 7: Stack and Functions

     To follow along with the examples, you will need an ARM-based lab environment. If you don't have an ARM device (like a Raspberry Pi), you can set up your own lab environment in a virtual machine using QEMU and the Raspberry Pi distro by following this tutorial. If you are not familiar with basic debugging with GDB, you can get the basics in this tutorial.

     Why ARM?

     This tutorial is generally for people who want to learn the basics of ARM assembly, especially those of you who are interested in exploit writing on the ARM platform. You might have already noticed that ARM processors are everywhere around you. When I look around me, I can count far more devices that feature an ARM processor in my house than Intel processors. This includes phones, routers, and not to forget the IoT devices whose sales seem to be exploding these days. That said, the ARM processor has become one of the most widespread CPU cores in the world, which brings us to the fact that, like PCs, IoT devices are susceptible to improper input validation abuse such as buffer overflows. Given the widespread usage of ARM-based devices and the potential for misuse, attacks on these devices have become much more common. Yet we have more experts specialized in x86 security research than we have for ARM, although ARM assembly language is perhaps the easiest assembly language in widespread use.
     So, why aren't more people focusing on ARM? Perhaps because there are more learning resources out there covering exploitation on Intel than there are for ARM. Just think about the great tutorials on Intel x86 exploit writing by the Corelan Team - guidelines like these help people interested in this specific area to gain practical knowledge and the inspiration to learn beyond what is covered in those tutorials. If you are interested in x86 exploit writing, the Corelan tutorials are your perfect starting point. In this tutorial series here, we will focus on assembly basics and exploit writing on ARM.

     ARM processor vs. Intel processor

     There are many differences between Intel and ARM, but the main difference is the instruction set. Intel is a CISC (Complex Instruction Set Computing) processor: it has a larger and more feature-rich instruction set and allows many complex instructions to access memory. It therefore has more operations and addressing modes, but fewer registers than ARM. CISC processors are mainly used in normal PCs, workstations, and servers.

     ARM is a RISC (Reduced Instruction Set Computing) processor and therefore has a simplified instruction set (100 instructions or fewer) and more general-purpose registers than CISC. Unlike Intel, ARM uses instructions that operate only on registers and uses a load/store memory model, which means that only load/store instructions can access memory. Incrementing a 32-bit value at a particular memory address on ARM therefore requires three instructions (load, increment, and store): first load the value at that address into a register, increment it within the register, and store it back to memory from the register.

     The reduced instruction set has its advantages and disadvantages. One of the advantages is that instructions can be executed more quickly, potentially allowing for greater speed (RISC systems shorten execution time by reducing the clock cycles per instruction).
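     The load/increment/store sequence just described can be sketched in ARM assembly (assuming r0 already holds the memory address):

```armasm
ldr r1, [r0]      @ load the 32-bit value at the address in r0 into r1
add r1, r1, #1    @ increment it inside the register
str r1, [r0]      @ store the result back to memory
```

     On a CISC machine the same operation could be a single instruction that reads and writes memory directly.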
     The downside is that fewer instructions mean a greater emphasis on efficiently writing software with the limited instructions that are available. Also important to note is that ARM has two modes, ARM mode and Thumb mode. Thumb mode is intended primarily to increase code density by using 16-bit instead of 32-bit instructions.

     More differences between ARM and x86 are:
       • In ARM, most instructions can be used for conditional execution.
       • The Intel x86 and x86-64 series of processors use the little-endian format.
       • The ARM architecture was little-endian before version 3. Since then, ARM processors have been bi-endian and feature a setting which allows for switchable endianness.

     There are not only differences between Intel and ARM, but also between different ARM versions themselves. This tutorial series is intended to stay as generic as possible so that you get a general understanding of how ARM works. Once you understand the fundamentals, it's easy to learn the nuances for your chosen target ARM version. The examples in this tutorial were created on an ARMv6 (Raspberry Pi 1). Some of the differences include: registers on ARMv6 and ARMv7 start with the letter R (R0, R1, etc.), while ARMv8 registers start with the letter X (X0, X1, etc.). The number of registers might also vary from 30 to around 40 general-purpose registers, of which only 16 are accessible in User Mode.

     The naming of the different ARM versions might also be confusing:

       ARM family   ARM architecture
       ARM7         ARM v4
       ARM9         ARM v5
       ARM11        ARM v6
       Cortex-A     ARM v7-A
       Cortex-R     ARM v7-R
       Cortex-M     ARM v7-M

     Writing Assembly

     Before we can start diving into ARM exploit development, we first need to understand the basics of assembly language programming, which requires a little background knowledge before you can start to appreciate it. But why do we even need ARM assembly? Isn't it enough to write our exploits in a "normal" programming / scripting language?
     It is not, if we want to be able to do reverse engineering, understand the program flow of ARM binaries, build our own ARM shellcode, craft ARM ROP chains, and debug ARM applications. You don't need to know every little detail of the assembly language to be able to do reverse engineering and exploit development, yet some of it is required for understanding the bigger picture. The fundamentals will be covered in this tutorial series. If you want to learn more, you can visit the links listed at the end of this chapter.

     So what exactly is assembly language? Assembly language is just a thin syntax layer on top of machine code: instructions encoded in binary representations, which is what our computer understands. So why don't we just write machine code instead? Well, that would be a pain in the ass. For this reason, we will write assembly, ARM assembly, which is much easier for humans to understand.

     Our computer can't run assembly code itself, because it needs machine code. The tool we will use to assemble the assembly code into machine code is a GNU assembler from the GNU Binutils project named as, which works with source files having the *.s extension. Once you have written your assembly file with the extension *.s, you need to assemble it with as and link it with ld:

       $ as program.s -o program.o
       $ ld program.o -o program

     Another way to compile assembly code is to use GCC, as shown below:

       $ gcc -c program.s -o program.o
       $ gcc program.o -o program

     The GCC approach introduces quite some overhead for the application, such as additional code (libraries), etc. On the other hand, it lets a properly written program exit normally (without SIGSEGV crashes), so in some cases it might be the preferred choice. However, for simplicity reasons, as and ld are used throughout these tutorials, with the proof-of-concept code launched in the debugging (GDB) environment.
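     As a concrete example, here is a minimal program.s you could feed to the as/ld commands shown above. It is a sketch for a 32-bit ARM Linux system (such as the Raspberry Pi 1 used in this series) that does nothing but invoke the exit system call:

```armasm
.global _start        @ ld looks for _start as the entry point

_start:
    mov r0, #0        @ exit status 0
    mov r7, #1        @ syscall number 1 = exit on 32-bit ARM Linux
    svc #0            @ trigger the system call
```

     Because it exits via the syscall itself, this program terminates cleanly even without the GCC-provided runtime.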
     Assembly under the hood

     Let's start at the very bottom and work our way up to the assembly language. At the lowest level, we have our electrical signals on our circuit. Signals are formed by switching the electrical voltage to one of two levels, say 0 volts ('off') or 5 volts ('on'). Because just by looking we can't easily tell what voltage the circuit is at, we choose to write patterns of on/off voltages using visual representations, the digits 0 and 1, not only to represent the idea of an absence or presence of a signal, but also because 0 and 1 are digits of the binary system. We then group sequences of 0s and 1s to form a machine code instruction, which is the smallest working unit of a computer processor. Here is an example of a machine language instruction:

       1110 0001 1010 0000 0010 0000 0000 0001

     So far so good, but we can't remember what each of these patterns (of 0 and 1) means. For this reason, we use so-called mnemonics, abbreviations that help us remember these binary patterns, where each machine code instruction is given a name. These mnemonics often consist of three letters, but this is not obligatory. We can write a program using these mnemonics as instructions. This program is called an assembly language program, and the set of mnemonics that is used to represent a computer's machine code is called the assembly language of that computer. Therefore, assembly language is the lowest level used by humans to program a computer. The operands of an instruction come after the mnemonic(s). Here is an example:

       MOV R2, R1

     Now that we know that an assembly program is made up of textual information called mnemonics, we need to get it converted into machine code. As mentioned above, in the case of ARM assembly, the GNU Binutils project supplies us with a tool called as. The process of using an assembler like as to convert from (ARM) assembly language to (ARM) machine code is called assembling.
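     The two examples above are in fact the same instruction: the 32-bit pattern is the ARM encoding of MOV R2, R1. You can verify the value in Python:

```python
# The machine code example from the text, with the grouping spaces removed.
bits = "1110 0001 1010 0000 0010 0000 0000 0001".replace(" ", "")

value = int(bits, 2)
print(hex(value))  # 0xe1a02001, the ARM encoding of MOV R2, R1
```

     This is exactly the translation an assembler performs for you, one instruction at a time.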
     In summary, we learned that computers understand (respond to) the presence or absence of voltages (signals), and that we can represent multiple signals in a sequence of 0s and 1s (bits). We can use machine code (sequences of signals) to cause the computer to respond in some well-defined way. Because we can't remember what all these sequences mean, we give them abbreviations (mnemonics) and use them to represent instructions. This set of mnemonics is the assembly language of the computer, and we use a program called an assembler to convert code from mnemonic representation to the computer-readable machine code, in the same way a compiler does for high-level languages.

     The rest of the series:
       https://azeria-labs.com/arm-data-types-and-registers-part-2/
       https://azeria-labs.com/arm-instruction-set-part-3/
       https://azeria-labs.com/memory-instructions-load-and-store-part-4/
       https://azeria-labs.com/load-and-store-multiple-part-5/
       https://azeria-labs.com/arm-conditional-execution-and-branching-part-6/
       https://azeria-labs.com/functions-and-the-stack-part-7/

     Source: https://azeria-labs.com/writing-arm-assembly-part-1/
  4. The Nation Is Calling You! Welcome potential member, we are the Shabhacking team. Our operative *R. Sanchez* was caught during a mission and is now being held in an intergalactic prison. ARE YOU GOOD ENOUGH TO GET HIM OUT? https://www.shabak.gov.il/pages/cyber.html#=1 * shabak = "the unseen shield" The Israel Security Agency
  5. @DuTy^ "I'm waiting for it to sneak into ATMs somewhere, then you'll see chaos." You didn't have to wait too long - Bank of China
  6. England:
     https://www.wired.co.uk/article/nhs-cyberattack-ransomware-security
     https://www.theguardian.com/society/2017/may/12/hospitals-across-england-hit-by-large-scale-cyber-attack
  7. Basic Structure of a USB

     The first task is to remove the USB logic board from its enclosure. Oftentimes there is a seam that can be pried open with a plastic spudger tool. The board will likely be held in place by a few plastic latches or with adhesive. Once we have removed the logic board from its enclosure, we can examine the board for any obvious signs of damage. Indicators of damage range from melted components, scorch marks, and bad solder joints to cracks in the logic board. While one could attempt to repair observed damage, we will instead transplant the NAND storage chip to a functioning same-model device.

     For this post, I tore down two USBs I had close at hand: one older unmarked 2 GB USB and a newer 8 GB SanDisk Cruzer. Both devices are pictured below. Both devices are made up of the same basic anatomy. The primary components of a USB that we will concern ourselves with are the USB connector (1), the USB controller (2), and the NAND storage chip (3). The SanDisk has its USB connector integrated with the logic board, as opposed to the soldered-on USB connector more commonly seen with most USBs.

     If the USB connector is damaged, there will likely be obvious damage to the gold-plated pads within the connector or the solder joints connecting it to the logic board. A typical USB has four gold pads, each corresponding to a specific signal: power, data -, data +, and ground. The gold-plated tabs should be straight, flat, and free of any residue. The solder joints where the connector meets the logic board should be holding the connector firmly in place. There should also be continuity between the gold pads and the solder joint where that electrical signal meets the board. If there is any apparent damage or the connector is not secured to the logic board, reflowing the joints may solve the issue. Failing that, one could use hot air (i.e., a hot air rework station) to remove the defective USB connector and replace it with a functioning one.
     The USB controller typically comes in a TQFP (Thin Quad Flat Package) package with leads on all four sides of the chip. Discussing how to diagnose and fix issues with the controller is out of scope here.

     The NAND chip(s) houses all of the data on the USB. These chips are fairly durable and in most scenarios are not damaged. They usually come in one of two packages: TSOP (Thin Small Outline Package) or BGA (Ball Grid Array). In the photos above, both boards have a TSOP-48 NAND memory chip. The number 48 represents the total number of leads on the chip, where the two sides contain 24 leads each. These chips are easier to work with than their BGA counterparts, where the leads are underneath the chip rather than on the side of it. Some USBs have more than one NAND chip; in that case, both NANDs would need to be swapped. In our scenario, we will discuss the steps necessary to transplant the NAND chip from a non-functioning USB to a same-model counterpart. Let's get started.

     Recovering Data from a USB

     With the NAND chip(s) identified, let's discuss how to remove the chip from the board. There are a number of different methods we could employ, but the end goal remains the same: melt the solder joints holding the chip in place long enough to remove it from the board. The most popular method is to use a hot air or IR rework station. Other methods, like using a low-temperature solder, exist and are worth exploring to determine the best tool for the job. Each comes with its own pros and cons. When heating an electronic component to high temperatures, there is always a chance it may be damaged in the process. In addition to that, care needs to be taken to avoid overheating other components on the board. This is especially true when using a hot air gun. With that said, let's continue our discussion on the assumption we have elected to use a hot air rework station. Depending on the composition of the solder used, it will likely melt (reflow) around 190°C.
     When solder reflows, it takes on an observable shiny characteristic. Applying flux to the leads will facilitate the reflow process. The exact temperature, air flow speed, and nozzle to use are setup-dependent. Practice on test devices to get a feel for the appropriate settings. Aim to reflow and remove the chip from the board after 10-20 seconds of sustained heating. If the rework station has a preheater, the board can be heated up to near reflow temperatures to decrease the amount of time high heat needs to be applied. Preheating also allows the board to be heated more evenly, rather than heating a localized area, which may stress and damage the board. With the appropriate temperature determined, apply hot air a few centimeters above the leads, taking turns to hit each side.

     When using tweezers, or some other tool, to lift the chip up (once the leads have reflowed), it is critical not to apply much pressure. If there is resistance, stop and do not continue pulling up on the chip; keep applying heat instead. Ignoring this advice can result in tearing off pads, which will certainly result in numerous headaches.

     Low-temperature solder is much safer for the chip but takes more time and leaves a mess on the board. The process involves applying low-temperature solder liberally to the existing solder joints, creating a horizontal stream of low-temp solder spanning all of the leads on each side of the NAND. Heating this mixture of primarily low-temp solder with just a soldering iron keeps it reflowed for ~10 seconds, long enough to reflow both sides and easily remove the chip. Whichever method is preferred, remove the NAND chips from both the original and donor boards. Make sure to inspect the original chip's leads for solder bridges (these are more likely to occur with the low-temp solder method). Solder bridges occur when one or more leads are connected with solder, causing a short between those leads.
     Any such shorts must be removed prior to swapping the NAND onto the donor board. With the original NAND chip inspected, we can now swap it onto the functioning donor board. As a brief aside, if you have a chip programmer, and it supports your NAND, you can read directly from the chip without needing to perform this last step. Before swapping the NAND onto the donor board, inspect the board to make sure all pads are intact and there are no solder bridges. It is also recommended to either even out or, preferably, remove the solder on each pad with desoldering wick so the chip lays flat and in contact with all pads when placed on the board.

     Let's discuss two different options for soldering the NAND onto the donor board. We can either use the hot air (or IR) rework station or manually solder the chip with a soldering iron. Manually soldering the chip is safer for the chip, as you are not applying heat directly to the chip itself, but this method is more time-consuming. Hot air is just the inverse of the process employed to remove the chip. Once complete, inspect the leads one more time to ensure there are no inadvertent electrical connections between leads. In addition to this, use a multimeter and check for continuity between each lead and the pad it connects to, to ensure all are making sound electrical connections.

     If all went well, reattempt acquisition of the device. Ideally it should now be recognized by your machine and allow you to image it. If that is not the case, reinspect the leads and rule out inadvertent electrical connections. Verify that the host and donor are the same make and model with similar board design. Know that this technique is potentially destructive; therefore, make sure you practice it in test scenarios before applying it to casework.

     Source: https://dpmforensics.com/2017/05/07/extracting-data-from-damaged-usbs/
  8. Discovery tool

     The INTEL-SA-00075 Discovery Tool can be used by local users or an IT administrator to determine whether a system is vulnerable to the exploit documented in Intel Security Advisory INTEL-SA-00075.
     https://downloadcenter.intel.com/download/26755
  9. As you move around the Internet, it is almost impossible not to leave any digital footprints. And generally speaking, being followed on the Internet is not okay with most people. But whereas a degree of privacy is important anywhere you go online, you really need the ability to cover your tracks on public computers and shared computers.

     What is incognito mode?

     Incognito mode - also known as private mode - is a browser mode that gives a user a measure of privacy among other users of the same device or account. In incognito mode, a browser doesn't store your Web surfing history, cookies, download history, or login credentials. What does "doesn't store" mean? Well, as you know, browsers normally remember everything you do online: what you searched for, what pages you visited, what videos you watched, what you shopped for on Amazon, and so on. But in incognito mode, browsers don't save any of that information. You can switch between an incognito window and regular browsing windows, too. You'll be in incognito mode only when you're using the incognito window.

     When should you use incognito mode?

     The simple answer is, you should use incognito mode when you want to keep your Internet activity secret from other people who use the same computer or device. Say, for example, you want to buy a gift for your spouse. You use your home PC to search for the best deals. You close the browser and turn off the PC when you're done. When your spouse uses the computer, say to check e-mail or Facebook, they are likely to see what you searched for, even without looking for it - either in browser history or in targeted ads. If you use incognito mode for your shopping, however, the browser will forget that history and not inadvertently spoil the surprise. Also, in incognito mode you can watch what you like without leaving any tracks. (What? That's how I listen to pop music, obviously.)

     What else does incognito mode conveniently forget?

     Login credentials and other form info.
     In incognito mode, a browser won't save your login name or password. That means you can log in to Facebook on someone else's computer, and when you close the browser, or even the tab, you'll be logged out, and the credentials will not autofill when you or someone else returns to the site. So there's no chance another person will go to facebook.com and inadvertently (or purposely) post from your account. Also, even if that person's regular browser is set to save the data entered in forms (such as name, address, phone number), an incognito window won't save that information.

     Download history. If you download something while incognito, it won't appear in the browser's download history. However, the downloaded files will be available to everyone who uses the PC, unless you delete them. So, be careful with your My Little Pony films.

     Are there other reasons to use incognito mode?

     Incognito browsing is mostly about, well, going incognito. That said, here are a few more considerations.

     Multiple accounts. You can log in to multiple accounts on a Web service simultaneously by using multiple incognito tabs.

     No add-ons. This mode also blocks add-ons by default, which comes in handy in some situations. For example, you want to read the news but the page says "Disable your ad blocker to see this story." Simply open the link in incognito mode - ta da! However, if you want to use add-ons in incognito mode as well, click the menu button (three dots) in the upper right corner, choose More Tools → Extensions, and check the boxes on Allow in incognito for the add-ons you need.

     How do you activate incognito mode?

     In Google Chrome: You can use a keyboard shortcut or click. Press Ctrl + Shift + N in Windows or ⌘ + Shift + N in macOS. Or click the three-dot button in the upper right corner of the browser window and then choose New incognito window. Click here for more info.

     In Mozilla Firefox: Open the menu (three horizontal bars) in the upper right corner and click New Private Window.
     For more info visit this page.

     In Microsoft Edge: Open the menu by clicking the three dots in the upper right corner and choose New InPrivate window. You'll find more on that here.

     In Chrome or Firefox, you can also right-click on a link and choose to open the link in a new incognito or private window. To close this mode, simply close the tab or window. That's it!

     What is incognito mode not suitable for?

     It is always fine to use incognito browsing, but you need to understand what it can't do. The first, very important thing to keep in mind is that incognito mode doesn't make your browsing anonymous. It erases local traces, but your IP address and other information remain trackable. Among those able to see your online activities:
       • your service provider,
       • your boss (if you are using a work computer),
       • the websites you visit.

     If there is any spying software on your computer (a keylogger, for example), it can also see what you are doing. So don't do anything stupid or illegal.

     Second, and just as important, incognito mode doesn't protect you from people who want to steal the data you send to and receive from the Internet. For example, using incognito mode for online banking, shopping, and so on is no safer than using normal mode in your browser. If you do any of those things on a shared or public network, we strongly recommend you use a VPN.

     Last but not least, incognito mode doesn't completely prevent online advertisers from tracking you. Your Web cookies will be deleted at the end of the session, but modern targeted ad systems use a lot more than just cookies. There again, we can help you with our Private Browsing feature.

     Now you know how to buy a present without getting caught - congratulations! Let's go shopping and watch embarrassing videos without fear!

     Source: https://blog.kaspersky.com/incognito-mode-faq/14784/
  10. Every day we see a bunch of new Android applications being published on the Google Play Store, from games, to utilities, to IoT device clients and so forth; almost every single aspect of our life can be somehow controlled with "an app". We have smart houses, smart fitness devices and smart coffee machines ... but is this stuff just smart, or is it secure as well? Reversing an Android application can be a (relatively) easy and fun way to answer this question, and that's why I decided to write this blog post, where I'll try to explain the basics and give you some of my "tricks" to reverse this stuff faster and more effectively. I'm not going to go very deep into technical details; you can learn for yourself how Android works, how the Dalvik VM works and so forth. This is going to be a very basic practical guide instead of a post full of theoretical stuff with no really useful content. Let's start!

      Prerequisites

      In order to follow this introduction to APK reversing there are a few prerequisites:
        • A working brain (I don't take this for granted anymore ...).
        • An Android smartphone (doh!).
        • A basic knowledge of the Java programming language (you understand it if you read it).
        • The JRE installed on your computer.
        • adb installed.
        • The Developer Options and USB Debugging enabled on your smartphone.

      What is an APK?

      An Android application is packaged as an APK (Android Package) file, which is essentially a ZIP file containing the compiled code, the resources, the signature, the manifest and every other file the software needs in order to run. Being a ZIP file, we can start looking at its contents using the unzip command line utility (or any other unarchiver you use):

        unzip application.apk -d application

      Here's what you will find inside an APK.
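      Because an APK is just a ZIP archive, you can also enumerate its contents programmatically. A minimal sketch with Python's zipfile module; the in-memory archive built here is only a stand-in for a real application.apk:

```python
import io
import zipfile

# Build a tiny stand-in "APK" in memory; with a real app you would just
# open it by path: zipfile.ZipFile("application.apk")
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("AndroidManifest.xml", b"\x03\x00\x08\x00")   # binary XML stub
    apk.writestr("classes.dex", b"dex\n035\x00")               # Dalvik bytecode stub
    apk.writestr("META-INF/MANIFEST.MF", b"Manifest-Version: 1.0\n")

with zipfile.ZipFile(buf) as apk:
    names = apk.namelist()

for name in names:
    print(name)
```

      This is handy when triaging many APKs at once, instead of unzipping each one by hand.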
      /AndroidManifest.xml (file)

      This is the binary representation of the XML manifest file describing what permissions the application will request (keep in mind that some of the permissions might be requested at runtime by the app and not declared here), what activities (GUIs) are in there, what services (stuff running in the background with no UI), and what receivers (classes that can receive and handle system events such as the device boot or an incoming SMS). Once decompiled (more on this later), it'll look like this:

        <?xml version="1.0" encoding="utf-8" standalone="no"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.company.appname" platformBuildVersionCode="24" platformBuildVersionName="7.0">
            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
            <uses-permission android:name="android.permission.INTERNET"/>
            <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:supportsRtl="true" android:theme="@style/AppTheme">
                <activity android:name="com.company.appname.MainActivity">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN"/>
                        <category android:name="android.intent.category.LAUNCHER"/>
                    </intent-filter>
                </activity>
            </application>
        </manifest>

      Keep in mind that this is the perfect starting point to isolate the application "entry points", namely the classes you'll reverse first in order to understand the logic of the whole software. In this case, for instance, we would start inspecting the com.company.appname.MainActivity class, it being declared as the main UI of the application.

      /assets/* (folder)

      This folder will contain application-specific files, like wav files the app might need to play, custom fonts, and so on. Reversing-wise it's usually not very important, unless of course you find functional references to such files inside the software.
/res/* (folder)

All the resources, like the activity XML files, images and custom styles, are stored here.

/resources.arsc (file)

This is the "index" of all the resources. Long story short, each resource file is assigned a numeric identifier that the app will use in order to reference that specific entry, and the resources.arsc file maps these files to their identifiers ... nothing very interesting about it.

/classes.dex (file)

This file contains the Dalvik (the virtual machine running Android applications) bytecode of the app; let me explain it better. An Android application is (most of the time) developed in the Java programming language. The Java source files are then compiled into this bytecode, which the Dalvik VM eventually executes ... pretty much what happens to normal Java programs when they're compiled to .class files. Long story short, this file contains the logic, and that's what we're interested in. Sometimes you'll also find a classes2.dex file. This is due to the DEX format having a limit on the number of classes you can declare inside a single dex file; at some point in history Android apps became bigger and bigger, so Google had to adapt the format to support a secondary .dex file where further classes can be declared. From our perspective it doesn't matter: the tools we're going to use are able to detect it and append it to the decompilation pipeline.

/libs/ (folder)

Sometimes an app needs to execute native code: it can be an image processing library, a game engine or whatever. In such cases, those .so ELF libraries will be found inside the libs folder, divided into architecture specific subfolders (so the app will run on ARM, ARM64, x86, etc.).

/META-INF/ (folder)

Every Android application needs to be signed with a developer certificate in order to run on a device; even debug builds are signed with a debug certificate. The META-INF folder contains information about the files inside the APK and about the developer.
Inside this folder, you'll usually find:

- A MANIFEST.MF file with the SHA-1 or SHA-256 hashes of all the files inside the APK.
- A CERT.SF file, pretty much like the MANIFEST.MF, but signed with the developer's RSA key.
- A CERT.RSA file which contains the developer's certificate (public key) along with the signature of the CERT.SF file and digests.

Those files are very important for guaranteeing the APK's integrity and the ownership of the code. Sometimes inspecting such a signature can be very handy to determine who really developed a given APK. If you want to get information about the developer, you can use the openssl command line utility:

openssl pkcs7 -in /path/to/extracted/apk/META-INF/CERT.RSA -inform DER -print

This will print an output like:

PKCS7:
  type: pkcs7-signedData (1.2.840.113549.1.7.2)
  d.sign:
    version: 1
    md_algs:
        algorithm: sha1 (1.3.14.3.2.26)
        parameter: NULL
    contents:
      type: pkcs7-data (1.2.840.113549.1.7.1)
      d.data: <ABSENT>
    cert:
        cert_info:
          version: 2
          serialNumber: 10394279457707717180
          signature:
            algorithm: sha1WithRSAEncryption (1.2.840.113549.1.1.5)
            parameter: NULL
          issuer: C=TW, ST=Taiwan, L=Taipei, O=ASUS, OU=PMD, CN=ASUS AMAX Key/emailAddress=admin@asus.com
          validity:
            notBefore: Jul  8 11:39:39 2013 GMT
            notAfter: Nov 23 11:39:39 2040 GMT
          subject: C=TW, ST=Taiwan, L=Taipei, O=ASUS, OU=PMD, CN=ASUS AMAX Key/emailAddress=admin@asus.com
          key:
            algor:
              algorithm: rsaEncryption (1.2.840.113549.1.1.1)
              parameter: NULL
            public_key: (0 unused bits)
            ...
    ...
    ...

This can be gold for us: for instance, we could use this information to determine whether an app was really signed by (let's say) Google, or whether it was resigned, and therefore modified, by a third party.

How do I get the APK of an app?

Now that we have a basic idea of what we're supposed to find inside an APK, we need a way to actually get the APK file of the application we're interested in. There are two ways: either you install it on your device and use adb to get it, or you use an online service to download it.
Pulling an app with ADB

First of all, let's plug our smartphone into the USB port of our computer and get a list of the installed packages and their namespaces:

adb shell pm list packages

This will list all the packages on your smartphone. Once you've found the namespace of the package you want to reverse (com.android.systemui in this example), let's see what its physical path is:

adb shell pm path com.android.systemui

Finally, we have the APK path:

package:/system/priv-app/SystemUIGoogle/SystemUIGoogle.apk

Let's pull it from the device:

adb pull /system/priv-app/SystemUIGoogle/SystemUIGoogle.apk

And here you go, you have the APK you want to reverse!

Using an Online Service

Multiple online services are available if you don't want to install the app on your device (for instance, if you're reversing malware, you want to get the file first and install it on a clean device only afterwards). Here's a list of the ones I use:

- Apk-DL
- Evozi Downloader
- Apk Leecher

Keep in mind that once you download an APK from these services, it's a good idea to check the developer certificate as previously shown, in order to be 100% sure you downloaded the correct APK and not some repackaged and resigned stuff full of ads and possibly malware.

Network Analysis

Now we start with some tests in order to understand what the app is doing while it's executed.
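Before moving on to traffic, note that the pull workflow from the previous section is easy to script; a sketch assuming adb is on your PATH and a device with USB debugging is attached (only the output-parsing helper is exercised here, the helper names are mine):

```python
import subprocess

def parse_pm_path(output):
    """Extract the APK path from 'adb shell pm path <package>' output,
    which looks like 'package:/system/priv-app/.../SystemUIGoogle.apk'."""
    line = output.strip().splitlines()[0]
    prefix = "package:"
    if not line.startswith(prefix):
        raise ValueError("unexpected pm path output: %r" % line)
    return line[len(prefix):]

def pull_apk(package, dest="."):
    # Assumption: adb is installed and a device is connected.
    out = subprocess.check_output(["adb", "shell", "pm", "path", package])
    apk = parse_pm_path(out.decode())
    subprocess.check_call(["adb", "pull", apk, dest])
    return apk

print(parse_pm_path("package:/system/priv-app/SystemUIGoogle/SystemUIGoogle.apk"))
# => /system/priv-app/SystemUIGoogle/SystemUIGoogle.apk
```

Real usage would be pull_apk("com.android.systemui").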
My first test usually consists of inspecting the network traffic generated by the application itself and, in order to do that, my tool of choice is bettercap ... well, that's why I developed it in the first place.

Make sure you have bettercap installed and that both your computer and the Android device are on the same wifi network, then you can start MITM-ing the smartphone (192.168.1.5 in this example) and see its traffic in realtime from the terminal:

sudo bettercap -T 192.168.1.5 -X

The -X option will enable the sniffer; as soon as you start the app you should see a bunch of HTTP and/or HTTPS servers being contacted. Now you know who the app is sending data to; let's see what data it is sending:

sudo bettercap -T 192.168.1.5 --proxy --proxy-https --no-sslstrip

This will switch from passive sniffing mode to proxying mode: all the HTTP and HTTPS traffic will be intercepted (and, if needed, modified) by bettercap. If the app is correctly using public key pinning (as every application should) you will not be able to see its HTTPS traffic but, unfortunately, in my experience this only happens for a very small number of apps.

From now on, keep triggering actions in the app while inspecting the traffic (you can also use Wireshark in parallel to get a PCAP capture file and inspect it later) and after a while you should have a more or less complete idea of what protocol the app is using and for what purpose.

Static Analysis

During the network analysis we collected a bunch of URLs and packets; we can use this information as our starting point: it's what we will be looking for while performing static analysis on the app. "Static analysis" means that you will not execute the app, but rather just study its code. Most of the time this is all you'll ever need to reverse something. There are different tools you can use for this purpose; let's take a look at the most popular ones.
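As a quick first pass, the URLs collected during network analysis can often be found verbatim inside the raw classes.dex, even before decompiling anything; a crude but handy sketch (the regex is a loose assumption, not a full URL grammar, and the sample bytes below are synthetic):

```python
import re

# printable non-space ASCII after the scheme; at least a few characters long
URL_RE = re.compile(rb'https?://[\x21-\x7e]{4,}')

def urls_in_blob(data):
    """Scan raw bytes (e.g. the contents of classes.dex) for printable URLs."""
    return sorted(set(m.group().decode("ascii", "replace")
                      for m in URL_RE.finditer(data)))

# Real usage: urls_in_blob(open("application/classes.dex", "rb").read())
sample = b"\x00\x07https://api.example.com/v1/track\x00junk\x00http://cdn.example.com/ads\x00"
print(urls_in_blob(sample))
# => ['http://cdn.example.com/ads', 'https://api.example.com/v1/track']
```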
apktool

APKTool is the very first tool you want to use. It is capable of decompiling the AndroidManifest file to its original XML format and the resources.arsc file, and it will also convert the classes.dex (and classes2.dex if present) file to an intermediate language called SMALI, an ASM-like language used to represent the Dalvik VM opcodes in a human readable form. It looks like this:

.super Ljava/lang/Object;

.method public static main([Ljava/lang/String;)V
    .registers 2
    sget-object v0, Ljava/lang/System;->out:Ljava/io/PrintStream;
    const-string v1, "Hello World!"
    invoke-virtual {v0, v1}, Ljava/io/PrintStream;->println(Ljava/lang/String;)V
    return-void
.end method

But don't worry, in most cases this is not the final language you're gonna read to reverse the app. Given an APK, this command line will decompile it:

apktool d application.apk

Once finished, the application folder is created and you'll find all of apktool's output in there. You can also use apktool to decompile an APK, modify it and then recompile it (like I did with the Nike+ app in order to have more debug logs, for instance), but unless the other tools fail to decompile the app, it's unlikely that you'll need to read smali code in order to reverse it. Let's get to the other tools now.

jADX

The jADX suite allows you to simply load an APK and look at its Java source code. What's happening under the hood is that jADX decompiles the APK to smali and then converts the smali back to Java. Needless to say, reading Java code is much easier than reading smali, as I already mentioned. Once the APK is loaded, you'll see a UI like this:

One of the best features of jADX is the string/symbol search (the search button) which will allow you to search for URLs, strings, methods and whatever else you want to find inside the codebase of the app.
Also, there's the Find Usage menu option: just highlight some symbol and right click on it, and this feature will give you a list of every reference to that symbol.

Dex2Jar and JD-Gui

Similar to jADX are the dex2jar and JD-GUI tools. Once installed, you'll use dex2jar to convert an APK to a JAR file:

/path/to/dex2jar/d2j-dex2jar.sh application.apk

Once you have the JAR file, simply open it with JD-GUI and you'll see its Java code, pretty much like in jADX:

Unfortunately JD-GUI is not as feature-rich as jADX, but sometimes when one tool fails you have to try another one and hope for better luck.

JEB

As your last resort, you can try the JEB decompiler. It's a very good piece of software, but unfortunately it's not free; there's a trial version if you want to give it a shot. Here's how it looks:

JEB also features an ARM disassembler (useful when there are native libraries in the APK) and a debugger (very useful for dynamic analysis) but, again, it's not free and it's not cheap.

Static Analysis of Native Binaries

As previously mentioned, sometimes you'll find native libraries (.so shared objects) inside the lib folder of the APK and, while reading the Java code, you'll find native method declarations like the following:

public native String stringFromJNI();

The native keyword means that the method implementation is not inside the dex file; instead, it's declared in and executed from native code through what is called the Java Native Interface, or JNI. Close to native methods you'll also usually find something like this:

System.loadLibrary("hello-jni");

This tells you in which native library the method is implemented. In such cases, you will need an ARM (or x86, if there's an x86 subfolder inside the libs folder) disassembler in order to reverse the native object.

IDA

The very first disassembler and decompiler that every decent reverser should know about is Hex-Rays IDA, the state of the art reversing tool for native code.
Along with an IDA license, you can also buy a decompiler license, in which case IDA will also be able to rebuild pseudo C-like code from the assembly, allowing you to read a higher level representation of the library logic. Unfortunately IDA is a very expensive piece of software and, unless you're reversing native stuff professionally, it's really not worth spending all that money on a single tool ... warez ... ehm ...

Hopper

If you're on a budget but you need to reverse native code, instead of IDA you can give Hopper a try. It's definitely not as good and complete as IDA, but it's much cheaper and will be good enough in most cases. Hopper supports GNU/Linux and macOS (no Windows!) and, just like IDA, has a builtin decompiler which is quite decent considering its price:

Dynamic Analysis

When static analysis is not enough, maybe because the application is obfuscated or the codebase is simply too big and complex to quickly isolate the routines you're interested in, you need to go dynamic. Dynamic analysis simply means that you'll execute the app (like we did while performing network analysis) and somehow trace its execution using different tools, strategies and methods.

Sandboxing

Sandboxing is a black-box dynamic analysis strategy, which means you're not going to actively trace into the application code (like you do while debugging); instead you'll execute the app inside some container that will log the most relevant actions for you and present a report at the end of the execution.

Cuckoo-Droid

Cuckoo-Droid is an Android port of the famous Cuckoo sandbox. Once installed and configured, it'll give you an activity report with all the URLs the app contacted, all the DNS queries, API calls and so forth:

Joe Sandbox

The mobile Joe Sandbox is a great online service that allows you to upload an APK and get its activity report without the hassle of installing or configuring anything.
This is a sample report; as you can see, the kind of information is pretty much the same as Cuckoo-Droid's, plus there's a bunch of heuristics being executed in order to behaviourally correlate the sample to other known applications.

Debugging

If sandboxing is not enough and you need deeper insights into the application's behaviour, you'll need to debug it. Debugging an app, in case you don't know, means attaching to the running process with a debugger, putting breakpoints that allow you to stop the execution and inspect the memory state, and stepping through the code line by line in order to follow the execution graph very closely.

Enabling Debug Mode

When an application is compiled and eventually published to the Google Play Store, it's usually a release build you're looking at, meaning debugging has been disabled by the developer and you can't attach to it directly. In order to enable debugging again, we'll need to use apktool to decompile the app:

apktool d application.apk

Then you'll need to edit the generated AndroidManifest.xml file, adding the android:debuggable="true" attribute to its application XML node:

<?xml version="1.0" encoding="utf-8" standalone="no"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.company.appname" platformBuildVersionCode="24" platformBuildVersionName="7.0">
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.INTERNET"/>
    <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:supportsRtl="true" android:theme="@style/AppTheme" android:debuggable="true"> <!-- !!! NOTICE ME !!!
-->
        <activity android:name="com.company.appname.MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
    </application>
</manifest>

Once you've updated the manifest, let's rebuild the app:

apktool b -d application_path output.apk

Now let's resign it:

git clone https://github.com/appium/sign
java -jar sign/dist/signapk.jar sign/testkey.x509.pem sign/testkey.pk8 output.apk signed.apk

And reinstall it on the device (make sure you uninstalled the original version first):

adb install signed.apk

Now you can proceed to debug the app.

Android Studio

Android Studio is the official Android IDE. Once you have debug mode enabled for your app, you can attach to it directly using this IDE and start debugging:

IDA

If you have an IDA license that supports Dalvik debugging, you can attach to a running process and step through the smali code. This document describes how to do it, but basically the idea is that you upload the ARM debugging server (a native ARM binary) to your device, start it using adb and eventually start your debugging session from IDA.

Dynamic Instrumentation

Dynamic instrumentation means that you want to modify the application's behaviour at runtime, and in order to do so you inject some "agent" into the app that you'll eventually use to instrument it. You might want to do this in order to make the app bypass some checks (for instance, if public key pinning is enforced, you might want to disable it with dynamic instrumentation in order to easily inspect the HTTPS traffic), make it show you information it's not supposed to show (unlock "Pro" features, or debug/admin activities), etc.

Frida

Frida is a great and free tool you can use to inject a whole Javascript engine into a running process on Android, iOS and many other platforms ... but why Javascript?
Because once the engine is injected, you can instrument the app in very cool and easy ways, like this:

from __future__ import print_function
import frida
import sys

# let's attach to the 'hello' process
session = frida.attach("hello")

# now let's create the Javascript we want to inject
script = session.create_script("""
Interceptor.attach(ptr("%s"), {
    onEnter: function(args) {
        send(args[0].toInt32());
    }
});
""" % int(sys.argv[1], 16))

# this function will receive events from the js
def on_message(message, data):
    print(message)

# let's start!
script.on('message', on_message)
script.load()
sys.stdin.read()

In this example we're just inspecting some function argument, but there are hundreds of things you can do with Frida, just RTFM and use your imagination. Here's a list of cool Frida resources, enjoy!

XPosed

Another option we have for instrumenting our app is the XPosed Framework. XPosed is basically an instrumentation layer for the whole Dalvik VM, which requires you to have a rooted phone in order to install it. From the XPosed wiki:

There is a process that is called "Zygote". This is the heart of the Android runtime. Every application is started as a copy ("fork") of it. This process is started by an /init.rc script when the phone is booted. The process start is done with /system/bin/app_process, which loads the needed classes and invokes the initialization methods.

This is where Xposed comes into play. When you install the framework, an extended app_process executable is copied to /system/bin. This extended startup process adds an additional jar to the classpath and calls methods from there at certain places. For instance, just after the VM has been created, even before the main method of Zygote has been called. And inside that method, we are part of Zygote and can act in its context.
The jar is located at /data/data/de.robv.android.xposed.installer/bin/XposedBridge.jar and its source code can be found here. Looking at the class XposedBridge, you can see the main method. This is what I wrote about above; it gets called at the very beginning of the process. Some initializations are done there and the modules are also loaded (I will come back to module loading later).

Once you've installed XPosed on your smartphone, you can start developing your own module (again, follow the project wiki). For instance, here's an example of how you would hook the updateClock method of the SystemUI application in order to instrument it:

package de.robv.android.xposed.mods.tutorial;

import static de.robv.android.xposed.XposedHelpers.findAndHookMethod;
import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class Tutorial implements IXposedHookLoadPackage {
    public void handleLoadPackage(final LoadPackageParam lpparam) throws Throwable {
        if (!lpparam.packageName.equals("com.android.systemui"))
            return;

        findAndHookMethod("com.android.systemui.statusbar.policy.Clock", lpparam.classLoader, "updateClock", new XC_MethodHook() {
            @Override
            protected void beforeHookedMethod(MethodHookParam param) throws Throwable {
                // this will be called before the clock was updated by the original method
            }

            @Override
            protected void afterHookedMethod(MethodHookParam param) throws Throwable {
                // this will be called after the clock was updated by the original method
            }
        });
    }
}

There are already a lot of user-contributed modules you can use, study and modify for your own needs.
Conclusion

I hope you'll find this reference guide useful for your Android reversing adventures. Keep in mind that the most important thing while reversing is not the tool you're using, but how you use it. You'll have to learn how to choose the appropriate tool for your scenario, and that is something you can only learn with experience, so enough reading and start reversing!

Sursa: https://www.evilsocket.net/2017/04/27/Android-Applications-Reversing-101/
  11. Even automated security tool thinks Redmond's snooping operating system is 'malicious'

Webroot's security tools went berserk today, mislabeling key Microsoft Windows system files as malicious and temporarily removing them – knackering countless PCs in the process.

Not only were people's individual copies of the antivirus suite going haywire, but also business editions and installations run by managed service providers (MSPs), meaning companies and organizations relying on the software were hit by the cockup.

Between 1200 and 1500 MST (1800 and 2100 UTC) today, Webroot's gear labeled Windows operating system data as W32.Trojan.Gen – generic-Trojan-infected files, in other words – and moved them into quarantine, rendering affected computers unstable. Files digitally signed by Microsoft were whisked away – but, luckily, not all of them, leaving enough of the OS behind to reboot and restore the quarantined resources.

We understand that all versions of Windows were affected by today's gaffe, and that a kill switch within Webroot's systems kicked in to halt the mass quarantining before any long-lasting damage was done. Webroot boasts it has 30 million users. Its software also, weirdly, misidentified Facebook and Bloomberg's websites this week as phishing sites, blocking access to them.

For those hit by Webroot's assault on Windows files on Monday, there are official fixes suggested for users of the Home edition and Business editions of the antivirus suite.

"We understand that this is a consumer and business issue," a Webroot rep confessed in a post on its support forums. "We understand that MSPs will require a different solution. We are currently working on this universal solution now."

Suffice to say, there is a wedge of furious and confused folks on the support boards, with angry IT admins reporting thousands of endpoints going nuts. Webroot, whose slogan is "smarter cybersecurity," is working on a solution for all.
The timing of the file classification blunder couldn't be worse for at least one employee. Gary Hayslip was hired earlier this month as Webroot's chief information security officer, and this can't be a fun first few weeks on the job.

The biz is also looking to hire a senior software engineer for its Windows line. Based on today's kerfuffle, they might want to consider upping the headcount a bit more in this area to ensure that customers don't get hammered in the same way again, in light of February's little snafu that also left Windows users borked.

A Webroot spokesperson told The Reg: "We know how important internet security is to our customers, and the Webroot team is dedicated to resolving the issue. We will provide updates as soon as they are available." ®

Sursa: https://www.theregister.co.uk/2017/04/25/webroot_windows_wipeout
  12. Judge authorized order allowing US to change data in thousands of infected devices.

Mass hacking seems to be all the rage currently. A vigilante hacker apparently slipped secure code into vulnerable cameras and other insecure networked objects in the "Internet of Things" so that bad guys can't corral those devices into an army of zombie computers, like what happened with the record-breaking Mirai denial-of-service botnet. The Homeland Security Department issued alerts with instructions for fending off similar "Brickerbot malware," so named because it bricks IoT devices.

And perhaps most unusual, the FBI recently obtained a single warrant in Alaska to hack the computers of thousands of victims in a bid to free them from the global botnet, Kelihos.

On April 5, Deborah M. Smith, chief magistrate judge of the US District Court in Alaska, greenlighted this first use of a controversial court order. Critics have since likened it to a license for mass hacking.

The FBI sought the 30-day warrant to liberate victims through a new procedural rule change that took effect in December, amid worries among privacy advocates that the update would open a new door for government abuse. But the first use of the amendments to Rule 41 of the Federal Rules of Criminal Procedure has assuaged fears, at least for the moment, because the feds used their power to kill a botnet.

The Electronic Frontier Foundation, for example, commended the feds for asking a judge to review exactly what data the FBI would and would not touch in victimized devices, which were located across the country. It was a "positive step" toward accountability and transparency in FBI computer break-ins, EFF staff attorney Andrew Crocker said.

This wasn't the first time the government has gained permission from a federal court to jump in and clean infected computers worldwide.
To dismantle Gameover Zeus, once considered the most damaging botnet, the US obtained civil and criminal court orders in federal court in Pittsburgh "authorizing measures to redirect the automated requests by victim computers for additional instructions away from the criminal operators to substitute servers," as well as "to collect dialing, routing, addressing and signaling ("DRAS") information from the infected computers," Justice Department officials said at the time in 2014.

For Kelihos, the feds needed stronger legal standing to free hostage computers because of the peer-to-peer nature of the infection, which demanded more "active measures," says John Bambenek, a manager at Fidelis Cybersecurity who's helping with the botnet cleanup.

The FBI "had to infect machines," convert them into so-called supernodes that distribute connection lists to other victimized computers, and then "poison" all the computers so they would never again try to communicate with hacker-controlled devices, said Bambenek, who also assisted in the 2014 Gameover Zeus cleansing operation.

With the Gameover Zeus botnet, the government wasn't modifying someone else's computer; it was taking over malicious domains the computers were communicating with, he said. With Kelihos, "we were in essence actually changing data," and the Justice Department reasoned that this required the government to assert Rule 41, according to Bambenek.

Often, the feds "use uncertainty as an excuse, or cover, for not getting a warrant," Crocker said. This time, "the government was proceeding with a lot more caution than in some of the other cases." He pointed to the government's warrantless use of secretive cellphone "Stingray" tracking equipment, which continued for many years until the Justice Department released a seven-page legal use policy in 2015.
But concerns remain that authorities might abuse the rule revisions, which empower judges to grant a single warrant for searching or seizing information on any number of devices, regardless of location.

To kill Kelihos

The Rule 41 reboot is the judicial branch's acceptance of the reality that the Internet has no borders, and that criminals increasingly hide their whereabouts through digital obfuscation. Authorities had complained that legal ambiguity, as well as the process of obtaining multiple warrants to probe far-flung devices, was hampering efforts to dismantle botnets like Gameover Zeus and to unmask child pornography users. (Last month, prosecutors in Washington state dropped all charges against a child porn suspect rather than disclose the pre-Rule 41 inner workings of classified intrusion tools that federal investigators used to hack Playpen, a now-shuttered underage exploitation website.)

"The law more generally has not really grappled with government hacking, and this is one of the more explicit references to this kind of activity by the government," Crocker said.

The government says the FBI and hired cybersleuths did not view the contents of any of the machines infected by Kelihos, which spammed e-mail inboxes, stole banking credentials, and dispersed malware all over the Web. The government did collect each victim's IP address and "non-content" routing and signaling information so that Internet Service Providers could notify the victims, the Justice Department said.

What's more, this month's court order limits the FBI's interaction with victimized machines to commands that block an infected computer from performing malicious activities and communicating with other devices in the botnet. In addition, it prohibits the government from seizing contents inside a victim's device and from interrupting its Internet access.
Meanwhile, some criminal defense attorneys say Rule 41 could be enhanced to clearly spell out safeguards for civil liberties during bulk hacks by the government, similar to those imposed for wiretapping in the late 1960s. "Right now, for lack of that kind of control, it will be easier to attack these searches" of personal devices as inadmissible at trial, because lawmakers have not carefully considered privacy protections, said Peter Goldberger, chair of the National Association of Criminal Defense Lawyers Rules of Procedure Committee.

The department announced last week that it had fingered Pyotr Levashov as the alleged operator of the Kelihos botnet. The Russian was indicted by a federal grand jury in Bridgeport, Connecticut.

Sursa: https://arstechnica.com/tech-policy/2017/04/fbi-allays-some-critics-with-first-use-of-new-mass-hacking-warrant/
  13. MaxRecon

Suite for Information Gathering written in Python 3.5. This tool automates some steps of Information Gathering on a target. The Google Hacking module automatically uses the Google Hack Database Tool; this module has been modified to be compatible with Python 3.5.

Installation

pip install -r requirements.txt

If you want to use the customized nmap mode, you must install nmap on your computer.

Usage

Just run: python maxrecon.py. Don't forget to use sudo if you want to use the nmap feature.

Download: https://github.com/santidediego/MaxRecon
  14. This article applies to Python 2.7 specifically, but should be applicable to Python 2.x. Python 2.7 is reaching end of life and will stop being maintained in 2020; it is thus recommended to start learning Python with Python 3.

# Single line comments start with a number symbol.

""" Multiline strings can be written
    using three "s, and are often used
    as comments
"""

####################################################
# 1. Primitive Datatypes and Operators
####################################################

# You have numbers
3  # => 3

# Math is what you would expect
1 + 1   # => 2
8 - 1   # => 7
10 * 2  # => 20
35 / 5  # => 7

# Division is a bit tricky. It is integer division and floors the results
# automatically.
5 / 2  # => 2

# To fix division we need to learn about floats.
2.0         # This is a float
11.0 / 4.0  # => 2.75 ahhh...much better

# Result of integer division truncated down both for positive and negative.
5 // 3      # => 1
5.0 // 3.0  # => 1.0  # works on floats too
-5 // 3     # => -2
-5.0 // 3.0 # => -2.0

# Note that we can also import division module (Section 6 Modules)
# to carry out normal division with just one '/'.
from __future__ import division
11 / 4   # => 2.75  ...normal division
11 // 4  # => 2     ...floored division

# Modulo operation
7 % 3  # => 1

# Exponentiation (x to the yth power)
2 ** 4  # => 16

# Enforce precedence with parentheses
(1 + 3) * 2  # => 8

# Boolean Operators
# Note "and" and "or" are case-sensitive
True and False  # => False
False or True   # => True

# Note using Bool operators with ints
0 and 2     # => 0
-5 or 0     # => -5
0 == False  # => True
2 == True   # => False
1 == True   # => True

# negate with not
not True   # => False
not False  # => True

# Equality is ==
1 == 1  # => True
2 == 1  # => False

# Inequality is !=
1 != 1  # => False
2 != 1  # => True

# More comparisons
1 < 10  # => True
1 > 10  # => False
2 <= 2  # => True
2 >= 2  # => True

# Comparisons can be chained!
1 < 2 < 3  # => True
2 < 3 < 2  # => False

# Strings are created with " or '
"This is a string."
'This is also a string.'

# Strings can be added too!
"Hello " + "world!"  # => "Hello world!"

# Strings can be added without using '+'
"Hello " "world!"  # => "Hello world!"

# ... or multiplied
"Hello" * 3  # => "HelloHelloHello"

# A string can be treated like a list of characters
"This is a string"[0]  # => 'T'

# You can find the length of a string
len("This is a string")  # => 16

# String formatting with %
# Even though the % string operator will be deprecated in Python 3.1 and removed
# later at some time, it may still be good to know how it works.
x = 'apple'
y = 'lemon'
z = "The items in the basket are %s and %s" % (x, y)

# A newer way to format strings is the format method.
# This method is the preferred way
"{} is a {}".format("This", "placeholder")
"{0} can be {1}".format("strings", "formatted")
# You can use keywords if you don't want to count.
"{name} wants to eat {food}".format(name="Bob", food="lasagna")

# None is an object
None  # => None

# Don't use the equality "==" symbol to compare objects to None
# Use "is" instead
"etc" is None  # => False
None is None   # => True

# The 'is' operator tests for object identity. This isn't
# very useful when dealing with primitive values, but is
# very useful when dealing with objects.

# Any object can be used in a Boolean context.
# The following values are considered falsey:
#    - None
#    - zero of any numeric type (e.g., 0, 0L, 0.0, 0j)
#    - empty sequences (e.g., '', (), [])
#    - empty containers (e.g., {}, set())
#    - instances of user-defined classes meeting certain conditions
#      see: https://docs.python.org/2/reference/datamodel.html#object.__nonzero__
#
# All other values are truthy (using the bool() function on them returns True).
bool(0)   # => False
bool("")  # => False

####################################################
# 2. Variables and Collections
####################################################

# Python has a print statement
print "I'm Python. Nice to meet you!"  # => I'm Python. Nice to meet you!
# Simple way to get input data from console
input_string_var = raw_input("Enter some data: ")  # Returns the data as a string
input_var = input("Enter some data: ")  # Evaluates the data as python code
# Warning: Caution is recommended for input() method usage
# Note: In Python 3, the old input() behaviour is gone and raw_input() is renamed to input()

# No need to declare variables before assigning to them.
some_var = 5  # Convention is to use lower_case_with_underscores
some_var  # => 5

# Accessing a previously unassigned variable is an exception.
# See Control Flow to learn more about exception handling.
some_other_var  # Raises a NameError

# if can be used as an expression
# Equivalent of C's '?:' ternary operator
"yahoo!" if 3 > 2 else 2  # => "yahoo!"

# Lists store sequences
li = []
# You can start with a prefilled list
other_li = [4, 5, 6]

# Add stuff to the end of a list with append
li.append(1)  # li is now [1]
li.append(2)  # li is now [1, 2]
li.append(4)  # li is now [1, 2, 4]
li.append(3)  # li is now [1, 2, 4, 3]
# Remove from the end with pop
li.pop()  # => 3 and li is now [1, 2, 4]
# Let's put it back
li.append(3)  # li is now [1, 2, 4, 3] again.

# Access a list like you would any array
li[0]  # => 1
# Assign new values to indexes that have already been initialized with =
li[0] = 42
li[0]  # => 42
li[0] = 1  # Note: setting it back to the original value
# Look at the last element
li[-1]  # => 3

# Looking out of bounds is an IndexError
li[4]  # Raises an IndexError

# You can look at ranges with slice syntax.
# (It's a closed/open range for you mathy types.)
li[1:3]  # => [2, 4]
# Omit the beginning
li[2:]  # => [4, 3]
# Omit the end
li[:3]  # => [1, 2, 4]
# Select every second entry
li[::2]  # => [1, 4]
# Reverse a copy of the list
li[::-1]  # => [3, 4, 2, 1]
# Use any combination of these to make advanced slices
# li[start:end:step]

# Remove arbitrary elements from a list with "del"
del li[2]  # li is now [1, 2, 3]

# You can add lists
li + other_li  # => [1, 2, 3, 4, 5, 6]
# Note: values for li and for other_li are not modified.

# Concatenate lists with "extend()"
li.extend(other_li)  # Now li is [1, 2, 3, 4, 5, 6]

# Remove first occurrence of a value
li.remove(2)  # li is now [1, 3, 4, 5, 6]
li.remove(2)  # Raises a ValueError as 2 is not in the list

# Insert an element at a specific index
li.insert(1, 2)  # li is now [1, 2, 3, 4, 5, 6] again

# Get the index of the first item found
li.index(2)  # => 1
li.index(7)  # Raises a ValueError as 7 is not in the list

# Check for existence in a list with "in"
1 in li  # => True

# Examine the length with "len()"
len(li)  # => 6

# Tuples are like lists but are immutable.
tup = (1, 2, 3)
tup[0]  # => 1
tup[0] = 3  # Raises a TypeError

# You can do all those list thingies on tuples too
len(tup)  # => 3
tup + (4, 5, 6)  # => (1, 2, 3, 4, 5, 6)
tup[:2]  # => (1, 2)
2 in tup  # => True

# You can unpack tuples (or lists) into variables
a, b, c = (1, 2, 3)  # a is now 1, b is now 2 and c is now 3
d, e, f = 4, 5, 6  # you can leave out the parentheses
# Tuples are created by default if you leave out the parentheses
g = 4, 5, 6  # => (4, 5, 6)
# Now look how easy it is to swap two values
e, d = d, e  # d is now 5 and e is now 4

# Dictionaries store mappings
empty_dict = {}
# Here is a prefilled dictionary
filled_dict = {"one": 1, "two": 2, "three": 3}

# Look up values with []
filled_dict["one"]  # => 1

# Get all keys as a list with "keys()"
filled_dict.keys()  # => ["three", "two", "one"]
# Note - Dictionary key ordering is not guaranteed.
# Your results might not match this exactly.
# Get all values as a list with "values()"
filled_dict.values()  # => [3, 2, 1]
# Note - Same as above regarding key ordering.

# Get all key-value pairs as a list of tuples with "items()"
filled_dict.items()  # => [("one", 1), ("two", 2), ("three", 3)]

# Check for existence of keys in a dictionary with "in"
"one" in filled_dict  # => True
1 in filled_dict  # => False

# Looking up a non-existing key is a KeyError
filled_dict["four"]  # KeyError

# Use "get()" method to avoid the KeyError
filled_dict.get("one")   # => 1
filled_dict.get("four")  # => None
# The get method supports a default argument when the value is missing
filled_dict.get("one", 4)   # => 1
filled_dict.get("four", 4)  # => 4
# note that filled_dict.get("four") is still => None
# (get doesn't set the value in the dictionary)

# set the value of a key with a syntax similar to lists
filled_dict["four"] = 4  # now, filled_dict["four"] => 4

# "setdefault()" inserts into a dictionary only if the given key isn't present
filled_dict.setdefault("five", 5)  # filled_dict["five"] is set to 5
filled_dict.setdefault("five", 6)  # filled_dict["five"] is still 5

# Sets store ...
well, sets (which are like lists but cannot contain duplicates)
empty_set = set()
# Initialize a "set()" with a bunch of values
some_set = set([1, 2, 2, 3, 4])  # some_set is now set([1, 2, 3, 4])
# order is not guaranteed, even though it may sometimes look sorted
another_set = set([4, 3, 2, 2, 1])  # another_set is now set([1, 2, 3, 4])

# Since Python 2.7, {} can be used to declare a set
filled_set = {1, 2, 2, 3, 4}  # => {1, 2, 3, 4}

# Add more items to a set
filled_set.add(5)  # filled_set is now {1, 2, 3, 4, 5}

# Do set intersection with &
other_set = {3, 4, 5, 6}
filled_set & other_set  # => {3, 4, 5}

# Do set union with |
filled_set | other_set  # => {1, 2, 3, 4, 5, 6}

# Do set difference with -
{1, 2, 3, 4} - {2, 3, 5}  # => {1, 4}

# Do set symmetric difference with ^
{1, 2, 3, 4} ^ {2, 3, 5}  # => {1, 4, 5}

# Check if set on the left is a superset of set on the right
{1, 2} >= {1, 2, 3}  # => False

# Check if set on the left is a subset of set on the right
{1, 2} <= {1, 2, 3}  # => True

# Check for existence in a set with in
2 in filled_set   # => True
10 in filled_set  # => False

####################################################
# 3. Control Flow
####################################################

# Let's just make a variable
some_var = 5

# Here is an if statement. Indentation is significant in python!
# prints "some_var is smaller than 10"
if some_var > 10:
    print "some_var is totally bigger than 10."
elif some_var < 10:  # This elif clause is optional.
    print "some_var is smaller than 10."
else:  # This is optional too.
    print "some_var is indeed 10."

"""
For loops iterate over lists
prints:
    dog is a mammal
    cat is a mammal
    mouse is a mammal
"""
for animal in ["dog", "cat", "mouse"]:
    # You can use {0} to interpolate formatted strings. (See above.)
print "{0} is a mammal".format(animal) """ "range(number)" returns a list of numbers from zero to the given number prints: 0 1 2 3 """ for i in range(4): print i """ "range(lower, upper)" returns a list of numbers from the lower number to the upper number prints: 4 5 6 7 """ for i in range(4, 8): print i """ While loops go until a condition is no longer met. prints: 0 1 2 3 """ x = 0 while x < 4: print x x += 1 # Shorthand for x = x + 1 # Handle exceptions with a try/except block # Works on Python 2.6 and up: try: # Use "raise" to raise an error raise IndexError("This is an index error") except IndexError as e: pass # Pass is just a no-op. Usually you would do recovery here. except (TypeError, NameError): pass # Multiple exceptions can be handled together, if required. else: # Optional clause to the try/except block. Must follow all except blocks print "All good!" # Runs only if the code in try raises no exceptions finally: # Execute under all circumstances print "We can clean up resources here" # Instead of try/finally to cleanup resources you can use a with statement with open("myfile.txt") as f: for line in f: print line #################################################### # 4. Functions #################################################### # Use "def" to create new functions def add(x, y): print "x is {0} and y is {1}".format(x, y) return x + y # Return values with a return statement # Calling functions with parameters add(5, 6) # => prints out "x is 5 and y is 6" and returns 11 # Another way to call functions is with keyword arguments add(y=6, x=5) # Keyword arguments can arrive in any order. 
# You can define functions that take a variable number of
# positional args, which will be interpreted as a tuple by using *
def varargs(*args):
    return args

varargs(1, 2, 3)  # => (1, 2, 3)

# You can define functions that take a variable number of
# keyword args, as well, which will be interpreted as a dict by using **
def keyword_args(**kwargs):
    return kwargs

# Let's call it to see what happens
keyword_args(big="foot", loch="ness")  # => {"big": "foot", "loch": "ness"}

# You can do both at once, if you like
def all_the_args(*args, **kwargs):
    print args
    print kwargs

"""
all_the_args(1, 2, a=3, b=4) prints:
    (1, 2)
    {"a": 3, "b": 4}
"""

# When calling functions, you can do the opposite of args/kwargs!
# Use * to expand positional args and use ** to expand keyword args.
args = (1, 2, 3, 4)
kwargs = {"a": 3, "b": 4}
all_the_args(*args)            # equivalent to all_the_args(1, 2, 3, 4)
all_the_args(**kwargs)         # equivalent to all_the_args(a=3, b=4)
all_the_args(*args, **kwargs)  # equivalent to all_the_args(1, 2, 3, 4, a=3, b=4)

# you can pass args and kwargs along to other functions that take args/kwargs
# by expanding them with * and ** respectively
def pass_all_the_args(*args, **kwargs):
    all_the_args(*args, **kwargs)
    print varargs(*args)
    print keyword_args(**kwargs)

# Function Scope
x = 5

def set_x(num):
    # Local var x not the same as global variable x
    x = num  # => 43
    print x  # => 43

def set_global_x(num):
    global x
    print x  # => 5
    x = num  # global var x is now set to 6
    print x  # => 6

set_x(43)
set_global_x(6)

# Python has first class functions
def create_adder(x):
    def adder(y):
        return x + y
    return adder

add_10 = create_adder(10)
add_10(3)  # => 13

# There are also anonymous functions
(lambda x: x > 2)(3)  # => True
(lambda x, y: x ** 2 + y ** 2)(2, 1)  # => 5

# There are built-in higher order functions
map(add_10, [1, 2, 3])  # => [11, 12, 13]
map(max, [1, 2, 3], [4, 2, 1])  # => [4, 2, 3]
filter(lambda x: x > 5, [3, 4, 5, 6, 7])  # => [6, 7]

# We can use list comprehensions for nice maps and filters
[add_10(i) for
i in [1, 2, 3]]  # => [11, 12, 13]
[x for x in [3, 4, 5, 6, 7] if x > 5]  # => [6, 7]

# You can construct set and dict comprehensions as well.
{x for x in 'abcddeef' if x in 'abc'}  # => {'a', 'b', 'c'}
{x: x ** 2 for x in range(5)}  # => {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}

####################################################
# 5. Classes
####################################################

# We subclass from object to get a class.
class Human(object):

    # A class attribute. It is shared by all instances of this class
    species = "H. sapiens"

    # Basic initializer, this is called when this class is instantiated.
    # Note that the double leading and trailing underscores denote objects
    # or attributes that are used by python but that live in user-controlled
    # namespaces. You should not invent such names on your own.
    def __init__(self, name):
        # Assign the argument to the instance's name attribute
        self.name = name

        # Initialize property
        self.age = 0

    # An instance method. All methods take "self" as the first argument
    def say(self, msg):
        return "{0}: {1}".format(self.name, msg)

    # A class method is shared among all instances
    # They are called with the calling class as the first argument
    @classmethod
    def get_species(cls):
        return cls.species

    # A static method is called without a class or instance reference
    @staticmethod
    def grunt():
        return "*grunt*"

    # A property is just like a getter.
    # It turns the method age() into a read-only attribute
    # of the same name.
    @property
    def age(self):
        return self._age

    # This allows the property to be set
    @age.setter
    def age(self, age):
        self._age = age

    # This allows the property to be deleted
    @age.deleter
    def age(self):
        del self._age

# Instantiate a class
i = Human(name="Ian")
print i.say("hi")  # prints out "Ian: hi"

j = Human("Joel")
print j.say("hello")  # prints out "Joel: hello"

# Call our class method
i.get_species()  # => "H. sapiens"

# Change the shared attribute
Human.species = "H. neanderthalensis"
i.get_species()  # => "H.
neanderthalensis" j.get_species() # => "H. neanderthalensis" # Call the static method Human.grunt() # => "*grunt*" # Update the property i.age = 42 # Get the property i.age # => 42 # Delete the property del i.age i.age # => raises an AttributeError #################################################### # 6. Modules #################################################### # You can import modules import math print math.sqrt(16) # => 4 # You can get specific functions from a module from math import ceil, floor print ceil(3.7) # => 4.0 print floor(3.7) # => 3.0 # You can import all functions from a module. # Warning: this is not recommended from math import * # You can shorten module names import math as m math.sqrt(16) == m.sqrt(16) # => True # you can also test that the functions are equivalent from math import sqrt math.sqrt == m.sqrt == sqrt # => True # Python modules are just ordinary python files. You # can write your own, and import them. The name of the # module is the same as the name of the file. # You can find out which functions and attributes # defines a module. import math dir(math) # If you have a Python script named math.py in the same # folder as your current script, the file math.py will # be loaded instead of the built-in Python module. # This happens because the local folder has priority # over Python's built-in libraries. #################################################### # 7. Advanced #################################################### # Generators # A generator "generates" values as they are requested instead of storing # everything up front # The following method (*NOT* a generator) will double all values and store it # in `double_arr`. For large size of iterables, that might get huge! 
def double_numbers(iterable):
    double_arr = []
    for i in iterable:
        double_arr.append(i + i)
    return double_arr

# Running the following would mean we'll double all values first and return all
# of them back to be checked by our condition
for value in double_numbers(range(1000000)):  # `test_non_generator`
    print value
    if value > 5:
        break

# We could instead use a generator to "generate" the doubled value as the item
# is being requested
def double_numbers_generator(iterable):
    for i in iterable:
        yield i + i

# Running the same code as before, but with a generator, now allows us to iterate
# over the values and double them one by one as they are being consumed by
# our logic. Hence as soon as we see a value > 5, we break out of the
# loop and don't need to double most of the values sent in (MUCH FASTER!)
for value in double_numbers_generator(xrange(1000000)):  # `test_generator`
    print value
    if value > 5:
        break

# BTW: did you notice the use of `range` in `test_non_generator` and `xrange`
# in `test_generator`?
# Just as `double_numbers_generator` is the generator version of `double_numbers`,
# `xrange` is the generator version of `range`.
# `range` would return a list with 1000000 values for us to use, whereas
# `xrange` would generate 1000000 values for us as we request / iterate over them

# Just as you can create a list comprehension, you can create generator
# comprehensions as well.
values = (-x for x in [1, 2, 3, 4, 5])
for x in values:
    print(x)  # prints -1 -2 -3 -4 -5 to console/terminal

# You can also cast a generator comprehension directly to a list.
values = (-x for x in [1, 2, 3, 4, 5])
gen_to_list = list(values)
print(gen_to_list)  # => [-1, -2, -3, -4, -5]

# Decorators
# A decorator is a higher order function, which accepts and returns a function.
# Simple usage example – the add_apples decorator will add an 'Apple' element
# to the fruits list returned by the get_fruits target function.
def add_apples(func):
    def get_fruits():
        fruits = func()
        fruits.append('Apple')
        return fruits
    return get_fruits

@add_apples
def get_fruits():
    return ['Banana', 'Mango', 'Orange']

# Prints out the list of fruits with the 'Apple' element in it:
# Banana, Mango, Orange, Apple
print ', '.join(get_fruits())

# In this example, beg wraps say. Beg will call say. If say_please is True
# then it will change the returned message.
from functools import wraps

def beg(target_function):
    @wraps(target_function)
    def wrapper(*args, **kwargs):
        msg, say_please = target_function(*args, **kwargs)
        if say_please:
            return "{} {}".format(msg, "Please! I am poor :(")
        return msg
    return wrapper

@beg
def say(say_please=False):
    msg = "Can you buy me a beer?"
    return msg, say_please

print say()                 # Can you buy me a beer?
print say(say_please=True)  # Can you buy me a beer? Please! I am poor :(

Sursa: https://learnxinyminutes.com/docs/python/
15. Some of the answers received: copyright: @badluck copyright: @AndreiCM They are not the correct answers, but they are too good not to be posted.
16. Mass mailing and targeted campaigns that use common file formats to host or exploit code have been, and remain, a very popular attack vector: in other words, a malicious PDF or MS Office document received via e-mail or opened through a browser plug-in. In regards to malicious PDF files, the security industry saw a significant increase in vulnerabilities after the second half of 2008, which might be related to Adobe Systems' release of the specifications, format structure and functionality of PDF files.

Most enterprise network perimeters are protected and contain several security filters and mechanisms that block threats. However, a malicious PDF or MS Office document might be very successful passing through firewalls, intrusion prevention systems, anti-spam, anti-virus and other security controls. By reaching the victim's mailbox, this attack vector will leverage social engineering techniques to lure the user to click/open the document. Then, for example, if the user opens a malicious PDF file, it typically executes JavaScript that exploits a vulnerability when Adobe Reader parses the crafted file. This might cause the application to corrupt memory on the stack or heap, causing it to run arbitrary code known as shellcode. This shellcode normally downloads and executes a malicious file from the Internet. The Internet Storm Center handler Bojan Zdrnja wrote a good summary about one of these shellcodes. In some circumstances the vulnerability could be exploited without even opening the file, just by having the malicious file on the hard drive, as described by Didier Stevens.

From a 100-foot view, a PDF file is composed of a header, body, reference table and trailer. One key component is the body, which might contain all kinds of content-type objects that make parsing attractive for vulnerability researchers and exploit developers. The language is very rich and complex, which means the same information can be encoded and obfuscated in many ways.
For example, within objects there are streams that can be used to store data of any type or size. These streams are compressed, and the PDF standard supports several algorithms, called Filters, including ASCIIHexDecode, ASCII85Decode, LZWDecode, FlateDecode, RunLengthDecode, CCITTFaxDecode and DCTDecode. PDF files can contain multimedia content and support JavaScript and ActionScript through Flash objects. Usage of JavaScript is a popular vector of attack because it can be hidden in the streams using different techniques, making detection harder. In case the PDF file contains JavaScript, the malicious code is used to trigger a vulnerability and to execute shellcode. All these features and capabilities translate into a huge attack surface!

From a security incident response perspective, knowing how to do a detailed analysis of such malicious files can be quite useful. When analyzing this kind of file, an incident handler can determine the worst it can do, its capabilities and key characteristics. Furthermore, it can help to be better prepared to identify future security incidents and how to contain, eradicate and recover from those threats.

So, which steps could an incident handler or malware analyst perform to analyze such files? In the case of a malicious PDF file there are 5 steps. Using the REMnux distro, the steps are described by Lenny Zeltser as being:

1. Find and extract JavaScript
2. Deobfuscate JavaScript
3. Extract the shellcode
4. Create a shellcode executable
5. Analyze shellcode and determine what it does

A summary of tools and techniques using REMnux to analyze malicious documents is described in the cheat sheet compiled by Lenny, Didier and others. In order to practice these skills and to illustrate an introduction to the tools and techniques, below is the analysis of a malicious PDF using these steps. The other day I received one of those emails that was part of a mass mailing campaign.
The email contained an attachment with a malicious PDF file that took advantage of the Adobe Reader JavaScript engine to exploit CVE-2013-2729. This vulnerability, found by Felipe Manzano, exploits an integer overflow in several versions of Adobe Reader when parsing BMP files compressed with RLE8 encoding in PDF forms. The file on VirusTotal was only detected by 6 of the 55 AV engines. Let's go through each one of the mentioned steps to find information on the malicious PDF's key characteristics and capabilities.

1st Step – Find and extract JavaScript

One technique is using Didier Stevens' suite of tools to analyze the content of the PDF and look for suspicious elements. One of those tools is pdfid, which can show several keywords used in PDF files that could be used to exploit vulnerabilities. The previously mentioned cheat sheet contains some of these keywords. In this case the first observation shows the PDF file contains 6 objects and 2 streams. No JavaScript is mentioned, but it contains /AcroForm and /XFA elements. This means the PDF file contains XFA forms, which might indicate it is malicious.

Then, looking deeper, we can use pdf-parser.py to display the contents of the 6 objects. The output was reduced for the sake of brevity, but in this case Object 2 is the /XFA element that references Object 1, which contains a compressed and rather suspicious stream. Following this indicator, pdf-parser.py allows us to show the contents of an object and pass the stream through one of the supported filters (FlateDecode, ASCIIHexDecode, ASCII85Decode, LZWDecode and RunLengthDecode only) with the --filter switch. The --raw switch shows the output in an easier-to-read way. The output of the command is redirected to a file. Looking at the contents of this file we get the decompressed stream. When inspecting this file you will see several lines of JavaScript that weren't in the original PDF file.
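The decompression that pdf-parser.py applies to /FlateDecode streams can be sketched in a few lines of Python. This is a minimal illustration with a stand-in payload (the script tag below is hypothetical, not the actual exploit stream); it relies only on the fact that /FlateDecode is plain zlib/deflate data:

```python
import zlib

# Stand-in for the bytes carved out between the "stream"/"endstream"
# markers of a /FlateDecode object (hypothetical content, for illustration).
stream_data = zlib.compress(
    b'<script contentType="application/x-javascript">var sc = "...";</script>'
)

# /FlateDecode is zlib/deflate, so a single call recovers the content
decoded = zlib.decompress(stream_data)
print(decoded)
```

In a real analysis the compressed bytes would of course come from the PDF object itself rather than being produced by `zlib.compress`.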
If this document is opened by a victim, the /XFA keyword will execute this malicious code. Another fast method to find out whether the PDF file contains JavaScript and other malicious elements is to use the peepdf.py tool written by Jose Miguel Esparza. Peepdf is a tool to analyze PDF files, helping to show objects/streams, encode/decode streams, modify all of them, obtain different versions, show and modify metadata, and execute JavaScript and shellcodes. When running the malicious PDF file against the latest version of the tool, it can show very useful information about the PDF structure and its contents, and even detect which vulnerability it triggers in case it has a signature for it.

2nd Step – Deobfuscate JavaScript

The second step is to deobfuscate the JavaScript, which can contain several layers of obfuscation. In this case quite some manual cleanup of the extracted code was needed just to get the code isolated. The object.raw file contained 4 JavaScript elements between <script xxxx contentType="application/x-javascript"> tags and 1 image in base64 format in an <image> tag. This JavaScript code between tags needs to be extracted and placed into a separate file. The same can be done for the chunk of base64 data, which when decoded will produce a 67Mb BMP file. The JavaScript in this case was rather cryptic, but there are tools and techniques that help interpret and execute the code. In this case I used another tool called js-didier.pl, which is Didier's version of the JavaScript interpreter SpiderMonkey. It is essentially a JavaScript interpreter without the browser plugins that you can run from the command line. This allows running and analyzing malicious JavaScript in a safe and controlled manner. The js-didier tool, just like SpiderMonkey, will execute the code and print the results into files named eval.00x.log. I got some errors on one of the variables due to the manual cleanup, but it was enough to produce several eval log files with interesting results.
3rd Step – Extract the shellcode

The third step is to extract the shellcode from the deobfuscated JavaScript. In this case the eval.005.log file contained the deobfuscated JavaScript. The file, among other things, contains 2 variables encoded as Unicode strings. This is one trick used to hide or obfuscate shellcode; typically you find shellcode in JavaScript encoded in this way. These Unicode encoded strings need to be converted into binary. To perform this, isolate the Unicode encoded strings into a separate file and convert the Unicode (\u) notation to hex (\x) notation. This can be done with a series of Perl regular expressions using a REMnux script called unicode2hex-escaped. The resulting file will contain the shellcode in hex format ("\xeb\x06\x00\x00..") that will be used in the next step to convert it into a binary.

4th Step – Create a shellcode executable

Next, with the shellcode encoded in hexadecimal format, we can produce a Windows binary that runs the shellcode. This is achieved using a script called shellcode2exe.py, written by Mario Vilas and later tweaked by Anand Sastry. As Lenny states, "The shellcode2exe.py script accepts shellcode encoded as a string or as raw binary data, and produces an executable that can run that shellcode. You load the resulting executable file into a debugger to examine it. This approach is useful for analyzing shellcode that's difficult to understand without stepping through it with a debugger."

5th Step – Analyze the shellcode and determine what it does

The final step is to determine what the shellcode does. To analyze the shellcode you could use a disassembler or a debugger. In this case, a static analysis of the shellcode using the strings command shows several API calls used by the shellcode. It also shows a URL pointing to an executable that will be downloaded if this shellcode gets executed. We now have a strong IOC that can be used to take additional steps in order to hunt for evil and defend the networks.
This URL can be used as evidence to identify whether machines have been compromised and attempted to download the malicious executable. At the time of this analysis the file was no longer there, but it is known to be a variant of the GameOver Zeus malware.

The steps followed are manual, but with practice they are repeatable. They just represent a short introduction to the multifaceted world of analyzing malicious documents. Many other techniques and tools exist and much deeper analysis can be done. The focus was to demonstrate the 5 steps that can be used as a framework to discover indicators of compromise that will reveal machines compromised by the same bad guys. However, using these 5 steps many other questions could be answered. Using the mentioned and other tools and techniques within the 5 steps, we can gain a better practical understanding of how malicious documents work and which methods are used by Evil.

Two great resources for this type of analysis are the Malware Analyst's Cookbook: Tools and Techniques for Fighting Malicious Code book by Michael Ligh and the SANS FOR610: Reverse-Engineering Malware: Malware Analysis Tools and Techniques course authored by Lenny Zeltser.

Sursa: https://countuponsecurity.com/2014/09/22/malicious-documents-pdf-analysis-in-5-steps/
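The Unicode-to-hex conversion performed in the 3rd step by the unicode2hex-escaped script can also be sketched in Python. This is only a hedged illustration of the underlying idea, not the REMnux script itself: each \uXXXX escape holds a 16-bit value that is emitted low byte first, which is how the shellcode ends up little-endian in memory:

```python
import re

def unicode_escapes_to_bytes(js_text):
    """Convert shellcode stored as \\uXXXX escapes in deobfuscated
    JavaScript into the raw little-endian bytes it represents."""
    out = bytearray()
    for match in re.finditer(r'\\u([0-9a-fA-F]{4})', js_text):
        value = int(match.group(1), 16)
        out.append(value & 0xFF)         # low byte first
        out.append((value >> 8) & 0xFF)  # high byte second
    return bytes(out)

# The first dword of the shellcode quoted above ("\xeb\x06\x00\x00")
# corresponds to the escapes \u06eb\u0000:
shellcode = unicode_escapes_to_bytes(r'\u06eb\u0000')
print(shellcode == b'\xeb\x06\x00\x00')  # => True
```

The resulting bytes could then be written to disk and fed to shellcode2exe.py as raw binary input.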
  17. LDL-YYS ( https://www.elearnsecurity.com/affiliate/redeem?code=LDL-YYS )
18. If I told you this could be a phishing site, would you believe me? tl;dr: check out the proof-of-concept

Punycode makes it possible to register domains with foreign characters. It works by converting each domain label to an alternative format using only ASCII characters. For example, the domain "xn--s7y.co" is equivalent to "短.co".

From a security perspective, Unicode domains can be problematic because many Unicode characters are difficult to distinguish from common ASCII characters. It is possible to register domains such as "xn--pple-43d.com", which is equivalent to "аpple.com". It may not be obvious at first glance, but "аpple.com" uses the Cyrillic "а" (U+0430) rather than the ASCII "a" (U+0041). This is known as a homograph attack.

Fortunately, modern browsers have mechanisms in place to limit IDN homograph attacks. The page "IDN in Google Chrome" highlights the conditions under which an IDN is displayed in its native Unicode form. In Chrome and Firefox, the Unicode form will be hidden if a domain label contains characters from multiple different languages. The "аpple.com" domain as described above will appear in its Punycode form as "xn--pple-43d.com" to limit confusion with the real "apple.com".

Chrome's (and Firefox's) homograph protection mechanism unfortunately fails if every character is replaced with a similar character from a single foreign language. The domain "аррӏе.com", registered as "xn--80ak6aa92e.com", bypasses the filter by only using Cyrillic characters. You can check this out yourself in the proof-of-concept using Chrome or Firefox. In many instances, the font in Chrome and Firefox makes the two domains visually indistinguishable. It becomes impossible to identify the site as fraudulent without carefully inspecting the site's URL or SSL certificate. This program nicely demonstrates the difference between the two sets of characters. Internet Explorer and Safari are fortunately not vulnerable.
Screenshots: Chrome, Firefox, Firefox SSL This bug was reported to Chrome and Firefox on January 20, 2017 and was fixed in the trunk of Chrome 59 (currently in Canary) on March 24, 2017. The problem remains unaddressed in Firefox as they remain undecided whether it is within their scope. The Bugzilla issue was initially marked "RESOLVED" and "WONTFIX", though it has since been reopened, made public, and given the "sec-low" keyword. Our IDN threat model specifically excludes whole-script homographs, because they can't be detected programmatically and our "TLD whitelist" approach didn't scale in the face of a large number of new TLDs. If you are buying a domain in a registry which does not have proper anti-spoofing protections (like .com), it is sadly the responsibility of domain owners to check for whole-script homographs and register them. A simple way to limit the damage from bugs such as this is to always use a password manager. In general, users must be very careful and pay attention to the URL when entering personal information. I hope Firefox will consider implementing a fix to this problem since this can cause serious confusion even for those who are extremely mindful of phishing. You can follow me on Twitter @Xudong_Zheng Sursa: https://www.xudongz.com/blog/2017/idn-phishing/
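The whole-script homograph from the article can be reproduced with Python's built-in 'punycode' codec (RFC 3492, the encoding behind the "xn--" ASCII-compatible form). A small sketch, not the browsers' actual IDN display logic:

```python
# -*- coding: utf-8 -*-

# The five Cyrillic code points that render like "apple"
spoof = u'\u0430\u0440\u0440\u04cf\u0435'  # а, р, р, ӏ, е

# The ASCII-compatible encoding is "xn--" plus the punycode of the label
ace = 'xn--' + spoof.encode('punycode').decode('ascii')
print(ace)  # the registered domain label from the article

# Visually similar, yet not a single code point in common with "apple"
print(spoof == u'apple')  # => False
```

Running this shows why the spoof works: the Unicode label and "apple" compare unequal, while only the unfamiliar "xn--…" form reveals the difference.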
  19. A simple tool, nothing special, but it makes it fast and easy to create backdoored Office documents for exploitation using Metasploit modules. It targets Microsoft Office on Windows or Mac and OpenOffice on Linux, and supports macro attacks and a buffer overflow in Word. Works on Kali Rolling, Parrot and Backbox.

Download: https://github.com/Screetsec/Microsploit
  20. New toys: https://github.com/misterch0c/shadowbroker/
  21. Shellter is a dynamic shellcode injection tool, and the first truly dynamic PE infector ever created. It can be used to inject shellcode into native Windows applications (currently 32-bit applications only). The shellcode can be something of your own or something generated through a framework such as Metasploit.

Shellter takes advantage of the original structure of the PE file and doesn't apply any modifications such as changing memory access permissions in sections (unless the user wants), adding an extra section with RWE access, or anything else that would look dodgy under an AV scan.

Main Features:

Compatible with Windows x86/x64 (XP SP3 and above) & Wine/CrossOver for Linux/Mac.
Portable – no setup is required.
Doesn't require extra dependencies (python, .net, etc.).
No static PE templates, framework wrappers etc.
Supports any 32-bit payload (generated either by Metasploit or custom ones by the user).
Compatible with all types of encoding by Metasploit.
Compatible with custom encoding created by the user.
Stealth Mode – preserves original functionality.
Multi-payload PE infection.
Proprietary encoding + user-defined encoding sequence.
Dynamic thread context keys.
Supports reflective DLL loaders.
Embedded Metasploit payloads.
Junk code polymorphic engine.
Thread-context-aware polymorphic engine.
User can use custom polymorphic code of his own.
Takes advantage of dynamic thread context information for anti-static analysis.
Detects self-modifying code.
Traces single- and multi-threaded applications.
Fully dynamic injection locations based on the execution flow.
Disassembles and shows the user the available injection points.
User chooses what to inject, when, and where.
Command line support.
Free

Download: https://www.shellterproject.com/download/