Everything posted by Usr6

  1. The code for this post, as well as the post itself, is on GitHub. This post is part 1 of a 3-part series: Part 1: Parsing, Part 2: Generate an NFA, Part 3: Evaluate an NFA. Until recently, regular expressions seemed magical to me. I never understood how you could determine if a string matched a given regular expression. Now I know! Here’s how I implemented a basic regular expression engine in under 200 lines of code. The Spec Implementing full regular expressions is rather cumbersome, and worse, doesn’t teach you much. The version we’ll implement is just enough to learn without being tedious. Our regular expression language will support: . to match any character, | for alternation (abc|cde matches abc or cde), + to match one or more of the previous pattern, * to match 0 or more of the previous pattern, and ( and ) for grouping. While this is a small set of options, we’ll still be able to make some cute regexes, like m (t|n| ) | b to match Star Wars subtitles without matching Star Trek ones, or (..)* to match the set of all even-length strings. The Plan of Attack We’ll evaluate regular expressions in 3 phases: (1) parse the regular expression into a syntax tree, (2) convert the syntax tree into a state machine, and (3) evaluate the state machine against our string. We’ll use a state machine called an NFA to evaluate regular expressions (more on that later). At a high level, the NFA will represent our regex. As we consume inputs, we’ll move from state to state in the NFA. If we get to a point where we can’t follow an allowed transition, the regular expression doesn’t match the string. This approach was originally demonstrated by Ken Thompson, one of the original authors of Unix. In his 1968 CACM paper he outlines the implementation of a text editor and includes this as a regular expression evaluator. The only difference is that his article is written in 7094 machine code. Things used to be way more hard core. This algorithm is a simplification of how non-backtracking engines like RE2 evaluate regular expressions in provably linear time. It’s notably different from the regex engines found in Python and Java that use backtracking. When given certain inputs, they’ll run virtually forever on small strings. Ours will run in O(length(input) * length(expression)). My approach roughly follows the strategy Russ Cox outlines in his excellent blog post. Representing a Regular Expression Let’s step back and think about how to represent a regular expression. Before we can hope to evaluate a regular expression, we need to convert it into a data structure the computer can operate on. While strings have a linear structure, regular expressions have a natural hierarchy. Let’s consider the string abc|(c|(de)). If you were to leave it as a string, you’d have to backtrack and jump around as you tried to keep track of the different sets of parentheses while evaluating the expression. One solution is converting it to a tree, which a computer can easily traverse. For example, b+a would become: To represent the tree, we’ll want to create a hierarchy of classes. For example, our Or class will need to have two subtrees to represent its two sides. From the spec, there are 4 different regular expression components we’ll need to recognize: +, *, |, and character literals like ., a and b. In addition, we’ll also need to be able to represent when one expression follows another. 
Here are our classes:

        abstract class RegexExpr
        // ., a, b
        case class Literal(c: Char) extends RegexExpr
        // a|b
        case class Or(expr1: RegexExpr, expr2: RegexExpr) extends RegexExpr
        // ab -> Concat(a,b); abc -> Concat(a, Concat(b, c))
        case class Concat(first: RegexExpr, second: RegexExpr) extends RegexExpr
        // a*
        case class Repeat(expr: RegexExpr) extends RegexExpr
        // a+
        case class Plus(expr: RegexExpr) extends RegexExpr

Parsing a Regular Expression To get from a string to a tree representation, we’ll use a conversion process known as “parsing.” I’m not going to talk about building parsers in a high level of detail. Rather, I’ll give just enough to point you in the right direction if you wanted to write your own. Here, I’ll give an overview of how I parsed regular expressions with Scala’s parser combinator library. Scala’s parser library will let us write a parser by just writing the set of rules that describe our language. It uses a lot of obtuse symbols unfortunately, but I hope you’ll be able to look through the noise to see the gist of what’s happening. When implementing a parser we need to consider order of operations. Just as “PEMDAS” applies to arithmetic, a different set of rules apply to regular expressions. We can express this more formally with the idea of an operator “binding” to the characters near it. Different operators “bind” with different strengths – as an analogy, * binds more tightly than + in expressions like 5+6*4. In regular expressions, * binds more tightly than |. If we were to represent parsing this as a tree, the weakest operators end up on top. It follows that we should parse the weakest operators first, followed by the stronger operators. When parsing, you can imagine it as extracting an operator, adding it to the tree, then recursing on the remaining 2 parts of the string. In regular expressions the order of binding strength (strongest to weakest) is: character literals & parentheses, then + and *, then “concatenation” (a followed by b), and finally |. Since we have 4 levels of binding strength, we need 4 different types of expressions. We named them (somewhat arbitrarily): lit, lowExpr (+, *), midExpr (concatenation), and highExpr (|). Let’s jump into the code. First we’ll make a parser for the most basic level, a single character:

        object RegexParser extends RegexParsers {
          def charLit: Parser[RegexExpr] = ("""\w""".r | ".") ^^ { char => Literal(char.head) }

Let’s take a moment to explain the syntax. This defines a parser that will build a RegexExpr. The right hand side says: “Find something that matches \w (any word character) or a period. If you do, turn it into a Literal.” Parentheses must be defined at the lowest level of the parser since they are the strongest binding. However, you need to be able to put anything in parentheses. We can accomplish this with the following code:

          def parenExpr: Parser[RegexExpr] = "(" ~> highExpr <~ ")"
          def lit: Parser[RegexExpr] = charLit | parenExpr

Here, we’ll define * and +:

          def repeat: Parser[RegexExpr] = lit <~ "*" ^^ { case l => Repeat(l) }
          def plus: Parser[RegexExpr] = lit <~ "+" ^^ { case p => Plus(p) }
          def lowExpr: Parser[RegexExpr] = repeat | plus | lit

Next, we’ll define concatenation, the next level up:

          def concat: Parser[RegexExpr] = rep(lowExpr) ^^ { case list => listToConcat(list) }
          def midExpr: Parser[RegexExpr] = concat | lowExpr

Finally, we’ll define or:

          def or: Parser[RegexExpr] = midExpr ~ "|" ~ midExpr ^^ { case l ~ "|" ~ r => Or(l, r) }

Lastly, we’ll define highExpr. highExpr is an or, the weakest binding operator, or if there isn’t an or, a midExpr. 
          def highExpr: Parser[RegexExpr] = or | midExpr

Finally, a touch of helper code to finish it off:

          def listToConcat(list: List[RegexExpr]): RegexExpr = list match {
            case head :: Nil => head
            case head :: rest => Concat(head, listToConcat(rest))
          }

          def apply(input: String): Option[RegexExpr] = {
            parseAll(highExpr, input) match {
              case Success(result, _) => Some(result)
              case failure : NoSuccess => None
            }
          }
        }

That’s it! If you take this Scala code you’ll be able to generate parse trees for any regular expression that meets the spec. The resulting data structures will be trees. Now that we can convert our regular expressions into syntax trees, we’re well underway to being able to evaluate them. I wrote about the next steps in part 2. As always, the code is on GitHub. Sursa: https://rcoh.me/posts/no-magic-regular-expressions/
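As a quick sanity check of the parser above (assuming the RegexParser object is compiled with the scala-parser-combinators library on the classpath; the ParserDemo wrapper below is just an illustrative name, not part of the original post), the trees it produces look like this:

        // Hypothetical driver for the RegexParser defined above.
        object ParserDemo {
          def main(args: Array[String]): Unit = {
            println(RegexParser("a"))      // Some(Literal(a))
            println(RegexParser("ab"))     // Some(Concat(Literal(a),Literal(b)))
            println(RegexParser("a|b"))    // Some(Or(Literal(a),Literal(b)))
            println(RegexParser("(..)*"))  // Some(Repeat(Concat(Literal(.),Literal(.))))
          }
        }

Note how the weakest operator (|) ends up at the root of its tree, exactly as described above.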
  2. Shellen is an interactive shellcoding environment. If you want a handy tool to write shellcodes, then shellen may be your friend. It can also be used simply as an assembly/disassembly tool. It uses the keystone and capstone engines for all provided operations. Shellen works only on Python 3; support for Python 2 may appear in the future. Link: https://github.com/merrychap/shellen
  3. Fun stuff

  4. Below is a snapshot of IDA Pro piracy, circa 2006. The data was collected from users who initiated update requests with known pirate keys (no data was ever collected surreptitiously from either legit or pirate users). Markers aren't sized in proportion to the number of requests coming from specific locations. http://www.datarescue.com/idabase/
  5. Qualifiers 2018: starts on Wednesday 7th of March 2018 10:00h and ends on Thursday 8th of March 2018 18:00h Link: https://www.cybersecuritychallenge.be/
  6. Prerequisites Machine Learning Crash Course does not presume or require any prior knowledge in machine learning. However, to understand the concepts presented and complete the exercises, we recommend that students meet the following prerequisites: Mastery of intro-level algebra. You should be comfortable with variables and coefficients, linear equations, graphs of functions, and histograms. (Familiarity with more advanced math concepts such as logarithms and derivatives is helpful, but not required.) Proficiency in programming basics, and some experience coding in Python. Programming exercises in Machine Learning Crash Course are coded in Python using TensorFlow. No prior experience with TensorFlow is required, but you should feel comfortable reading and writing Python code that contains basic programming constructs, such as function definitions/invocations, lists and dicts, loops, and conditional expressions. Link: https://developers.google.com/machine-learning/crash-course/
  7. While most in the security industry know what encryption is, many lack a basic understanding of how it is used in malware—especially ransomware. Because of this, we thought it would be beneficial to do an introductory primer on encryption mechanisms and how they are exploited for malicious purposes. We will start with an introduction to encryption in general, and follow up with the main methods ransomware uses to encrypt files. In part two of this blog series, we’ll use a recent ransomware variant detected by Malwarebytes as Ransom.ShiOne to highlight the key weaknesses in encryption to look out for when trying to decrypt files. What is encryption? In the simplest of terms, encryption is the process of encoding information so that only authorized parties can access it, and those who are not authorized cannot. In computing, encryption is the method by which data is converted from a readable format (plaintext) to an encoded one (ciphertext) that can only be decoded by another entity if they have access to a decryption key. While encryption was long used by the military to facilitate secret communication, today it is used to secure data in transit and in storage, as well as to authenticate and verify identities. Unfortunately, encryption is also used for malicious purposes, as is the case with ransomware. Evaluating the encryption If a malware analyst wants to effectively evaluate a malicious encryption, he needs to observe the encryption on the machine that is creating or receiving the encrypted data. If you have access to any process running before it has performed the encryption phase, you will typically have enough data to be able to decrypt or to simply view the natively decrypted data. Being the observer is the only chance you have to recover files without needing to crack the encryption. However, for a ransomware attack, being an observer usually isn’t possible. After the malware finishes running and sends off the encryption keys, it is too late. When the malware shuts down, you can no longer be the observer. Now you must rely on analysis and hope there is a flaw in the encryption. What exactly does it mean to be an observer of decryption and encryption? Someone once asked me: Why do malware authors not always encrypt communication to a C2 server? My answer to him was that malware is public, in that it can run on random victim systems throughout the world. As reverse engineers, we always have access to the binary and are able to inspect the software on the lowest, most detailed level. A secure communication is meaningless at this level because we will be looking at the system before encryption and after decryption. An SSL or https communication received on the client side (victim computer) will be decrypted in memory for the data to be processed in whichever way the malware intends. At this point, we will always be able to inspect memory and pull the decrypted communication out in its raw form, so it really doesn’t hide anything when a server sends encrypted data down to a malware that is being analyzed. This same logic applies to many use cases in ransomware and file encryption. If we are looking at a ransomware and it is generating its encryption keys locally, we can observe the keys in memory immediately after creation, save those out, and use those keys from memory to decrypt files after the ransomware runs, provided that we are able to determine the algorithm used for the encryption. 
If during the process of the ransomware running and encrypting files the user dumped its memory, he has a chance of being an observer, and a likely possibility of recovering files. This is unfortunately not a typical scenario, as a victim’s first instinct is not to create a memory dump while continuing to allow the process to run. But for a theoretical example, it technically can work. Ransomware algorithms Over the years, we have come across many algorithms used to hold victims’ files hostage. The most common of these involve the use of standard, public, and proven algorithms for asymmetric encryption. But occasionally we see custom encryption (which is likely weaker) or even simple, obfuscated methods to hold files hostage. Years ago, when I would come across ransom malware, it would typically use alternative methods for holding the victim’s computer ransom, rather than encrypting all files on the drive. I have seen everything from file hiding and custom crypto to Master Boot Record (MBR) rewriting. File obfuscation In the case of file obfuscation, the ransomware would simply move or hide targeted files—documents and anything else it believes the victim would care about—and ask for a ransom to recover the files. In this case, recovery is trivial. You can simply reverse engineer the code that is performing the hiding and then undo the actions it took. Here’s an example: A fake pop-up claims your hard drive is corrupt, and asks you to call and pay for “support” to recover the files. This particular malware I analyzed displayed a pop-up window (shown below), and simply moved all the documents and desktop files into a new folder in a hidden location. The fix for this was looking at the code to see which files it moved and to what location. Custom cryptos When dealing with custom cryptos, typically a file is passed through an algorithm that modifies the file in a standard way. A simple example would be every byte in the file being XORed by a constant or cycling set of bytes. In these cases, you can almost think of the algorithm itself as the key. If you can reverse the algorithm, you can figure out exactly how it is modifying or encrypting the file. Then, simply reversing the steps will give you the original file. In contrast, when faced with asymmetric cryptography, the algorithm itself will not provide you with enough to be able to decrypt a file. MBR rewriting In the third scenario, the MBR would be rewritten with a small program that requires a password or serial number for access. The malware would force a reboot of the computer, and before the system would load Windows, it would first prompt the user with instructions on how to pay the ransom to receive a passcode. In this scenario, reversing the serial or password validation algorithm within the boot record (essentially creating a keygen) would provide you with the ability to know which password would allow access. Alternatively, rewriting that portion of the hard drive with the factory boot record would also disable the lock on your computer. Aside from reversing the algorithm, the only difficulty here would be knowing how to rewrite the MBR yourself in order to restore the original code to this section of the drive. Below is an example of an MBR locker. Notice that no ID is asked of you, which means there is nothing unique to the specific blockage, and a single static unlock code is likely all that is needed. 
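To make the “custom cryptos” case above concrete, here is a minimal sketch (in Scala; the object and value names are made up for illustration, this is not code from any specific ransomware) of a repeating-key XOR scheme. Because XOR is its own inverse, running the exact same routine a second time recovers the original bytes, which is why the algorithm itself effectively acts as the key:

        // Repeating-key XOR "encryption": applying it twice restores the input.
        object XorObfuscation {
          def xorWithKey(data: Array[Byte], key: Array[Byte]): Array[Byte] =
            data.zipWithIndex.map { case (b, i) => (b ^ key(i % key.length)).toByte }

          def main(args: Array[String]): Unit = {
            val original  = "important document".getBytes("UTF-8")
            val key       = Array[Byte](0x13, 0x37, 0x42)   // cycling key bytes
            val encrypted = xorWithKey(original, key)
            val decrypted = xorWithKey(encrypted, key)      // same operation reverses it
            println(new String(decrypted, "UTF-8"))         // prints: important document
          }
        }

Once an analyst reverse engineers a routine like this out of the binary, recovery of the files is immediate, which is exactly the weakness described above.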
Now, these three alternative methods are not quite using encryption in the standard sense of the word, but I am mentioning them here because they show that a custom written, closed-source obfuscation is sometimes trivial to break. The reason most criminals use standardized, public, open-source encryption algorithms to encrypt files for ransom is that they are tested and reliably secure. That means you can know every detail about the encryption algorithm but it won’t matter—you can’t decrypt the ciphertext by any method other than having the decryption key. Why is this important? Encryption using standard, open-source code is built off of the relationship between the encryption keys. The algorithms used simply derive two separate keys that are related. This concept is called asymmetric cryptography, and it is the method of encryption that the vast majority of ransomware authors use today. Asymmetric cryptography basics Asymmetric encryption involves generating two keys that are completely different; however, they share a relationship. One key (the public key) is used to encrypt the data into ciphertext, while its companion (the private key) is used to decrypt the ciphertext into the original plaintext. It is called asymmetric because the public key, although it was used to encrypt, cannot be used to decrypt. The private key is required to decrypt. In encrypted communication that uses asymmetric cryptography, two keys are generated locally—the public key and the private key. The public key is available for the whole world to see. If someone wants to send a message that only one person (Bob) could read, they would encrypt the message using Bob’s public key. Because his public key cannot be used to decrypt the message, it is completely safe. Using his offline, private key, Bob would be able to decrypt the message and see the original text. Here are a few visualizations to help you understand. This is the same method used by ransomware authors for file encryption. A basic run-through of an encryption process goes as follows: An array of random numbers is generated. This series of bytes will be used before the first round of the file encryption. Typically, a mathematical series of operations is performed on the public key using this random initialization to, in effect, create a sub-key derived from the initial key. This sub-key will now be used in order to actually encrypt the file data. During the first phase of the encryption process, a second random array is generated, which will be used as an initialization vector (IV). Every stage following will use the output of the previous stage as its new IV. A random number is first used as the IV, then the generated ciphertext is used for the next round of encryption. The generation of the keys themselves also relies on a random number generator. So as you can imagine, having a solid and “as random as possible” generator is extremely important. Ransomware file encryption Modern ransomware typically does one of a couple of things: Either it dynamically generates the keys locally and sends them up to the C2 server attached to the client ID, or the keys are generated by the author and are preloaded into the ransomware itself. The latter, although arguably more secure, requires a lot of overhead in generating a completely new binary for each victim, or at least distributing a batch of identical malware using the same key in each campaign version. 
The trade-off here is that although the key generation cannot be observed directly and inspected for weaknesses by an analyst, two victims will actually have the same encryption keys. So if one person pays the ransom and shares his keys, everyone else impacted by that campaign version can decrypt their files for free. This is a weakness via leaked keys. Now, if the keys were dynamically generated, it allows the slim potential usage of a memory dump for file recovery, and also the slim possibility that an analyst can find a hole in the encryption. (The memory dump is not that big of a problem for the malware author because, as we said earlier, it is rare when a user knows that he should create a dump). However, the benefit to this local key generation method is that the malware is fully dynamic, and no two people will share the same key. Both methods have their share of weaknesses and strengths. Modern ransomware authors typically use one of these forms of standardized encryption—AES, RSA, Blowfish, etc.—to try and make the victim’s files unrecoverable without the author providing the decryption key. The reason I say “try” is because there are many cases where these good algorithms are being misused (which will allow the same key to be generated twice). In addition, the transfer and generation of keys can be intercepted. Asymmetric cryptography encryption may be near impossible to crack, but that doesn’t mean it can’t be broken. To learn how, tune in for our next blog, which uses the ShiOne ransomware to demonstrate how malware analysts can look for weaknesses in encryption. Sursa: https://blog.malwarebytes.com/threat-analysis/2018/02/encryption-101-malware-analysts-primer/ part II: https://blog.malwarebytes.com/threat-analysis/2018/02/encryption-101-shione-ransomware-case-study/
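To tie the asymmetric-cryptography discussion above together, here is a minimal, self-contained sketch of the hybrid approach described in the post: an AES key generated locally on the victim machine, the file data encrypted with AES in CBC mode (where each block’s ciphertext feeds the next stage, seeded by a random IV), and only the AES key encrypted under an RSA public key that would ship inside the binary. It is written in Scala against the JDK’s standard javax.crypto / java.security classes purely as a concept illustration; it is not code from ShiOne or any other ransomware, and all names are made up:

        import java.security.{KeyPairGenerator, SecureRandom}
        import javax.crypto.{Cipher, KeyGenerator}
        import javax.crypto.spec.IvParameterSpec

        object HybridEncryptionSketch {
          def main(args: Array[String]): Unit = {
            // Stand-in for the attacker's embedded public key / offline private key.
            val rsaGen = KeyPairGenerator.getInstance("RSA")
            rsaGen.initialize(2048)
            val keyPair = rsaGen.generateKeyPair()

            // Locally generated AES key and random IV (the "array of random numbers").
            val aesGen = KeyGenerator.getInstance("AES")
            aesGen.init(256)
            val aesKey = aesGen.generateKey()
            val iv = new Array[Byte](16)
            new SecureRandom().nextBytes(iv)

            // Encrypt the "file" with AES-CBC; each block's output becomes the next IV.
            val aes = Cipher.getInstance("AES/CBC/PKCS5Padding")
            aes.init(Cipher.ENCRYPT_MODE, aesKey, new IvParameterSpec(iv))
            val ciphertext = aes.doFinal("victim file contents".getBytes("UTF-8"))

            // Encrypt only the AES key with the RSA public key; without the offline
            // private key, neither the ciphertext nor the wrapped key helps the victim.
            val rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding")
            rsa.init(Cipher.ENCRYPT_MODE, keyPair.getPublic)
            val wrappedKey = rsa.doFinal(aesKey.getEncoded)

            println(s"ciphertext: ${ciphertext.length} bytes, wrapped AES key: ${wrappedKey.length} bytes")
          }
        }

In a real attack the private key never touches the victim machine; it stays with the attacker, which is precisely why observing the AES key in memory before it is wrapped and discarded is the only realistic recovery window.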
  8. Balbuzard is another Python tool that you can use for analyzing malware, extracting file pattern information such as IP addresses, URLs, executable files and headers. The idea of the tool is that when you need to analyze a malicious or suspicious file, it lets you open it in a hex-editor-like view to identify the file type, and then find interesting information such as URLs, IP addresses, and other embedded files. So it provides much of the information required to understand the behavior of the malware and to track what the malicious application will do on your system. Some of the features of this tool are:
        search for string or regular expression patterns
        default set of patterns for malware analysis: IP addresses, e-mail addresses, URLs, typical EXE strings, common file headers, various malware strings
        optional use of the Yara engine and Yara rules as patterns
        provided with a large number of obfuscation transforms such as XOR, ROL, ADD (including combined transforms)
        easily extensible with new patterns in Python scripts and Yara rules, and new obfuscation transforms
        can open malware in password-protected zip files without writing to disk
        batch analysis of multiple files/folders on disk or within zips
        CSV output
        pure Python 2.x, no dependencies or compilation
You can download the tool over this link: https://bitbucket.org/decalage/balbuzard/downloads Sursa: http://www.sectechno.com/balbuzard-malware-analysis-tool/
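The core idea that a tool like this automates, running a set of named patterns over a file's raw bytes and reporting offsets and matches, can be sketched in a few lines. The snippet below is a toy illustration in Scala, not Balbuzard's actual code or command-line interface, and the regexes are deliberately naive compared to the tool's real pattern set:

        import scala.io.Source

        // Toy pattern scanner: report offset, pattern name and match for each hit.
        object PatternScan {
          val patterns = Map(
            "IPv4 address" -> """\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b""".r,
            "URL"          -> """https?://[^\s"']+""".r,
            "MZ header"    -> """MZ""".r
          )

          def main(args: Array[String]): Unit = {
            // Read the file as ISO-8859-1 so every byte maps to exactly one char.
            val data = Source.fromFile(args(0), "ISO-8859-1").mkString
            for ((name, re) <- patterns; m <- re.findAllIn(data).matchData)
              println(f"${m.start}%08x  $name%-14s ${m.matched.take(60)}")
          }
        }

The real tool layers the obfuscation transforms listed above (XOR, ROL, ADD and combinations) on top of this idea, re-running its pattern set against each transformed copy of the file.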
  9. I recently wrote a blog post giving an introduction to reverse engineering and assembly language on the Purism blog. Considering that my last blog post on my own website is from 3 years ago and this post is useful beyond the needs of just Purism, I thought it might have a nice home in my own personal blog as well, so here’s a copy paste of the entire blog post, as is. Recently, I’ve finished reverse engineering the Intel FSP-S “entry” code, that is from the entry point (FspSiliconInit) all the way to the end of the function and all the subfunctions that it calls. This is only some initial foray into reverse engineering the FSP as a whole, but reverse engineering is something that takes a lot of time and effort. Today’s blog post is here to illustrate that, and to lay the foundations for understanding what I’ve done with the FSP code (in a future blog post). Over the years, many people asked me to teach them what I do, or to explain to them how to reverse engineer assembly code in general. Sometimes I hear the infamous “How hard can it be?” catchphrase. Last week someone I was discussing with thought that the assembly language is just like a regular programming language, but in binary form—it’s easy to make that mistake if you’ve never seen what assembly is or looks like. Historically, I’ve always said that reverse engineering and ASM is “too complicated to explain” or that “If you need help to get started, then you won’t be able to finish it on your own” and various other vague responses—I often wanted to explain to others why I said things like that but I never found a way to do it. You see, when something is complex, it’s easy to say that it’s complex, but it’s much harder to explain to people why it’s complex. I was lucky to recently stumble onto a little function while reverse engineering the Intel FSP, a function that was both simple and complex, where figuring out what it does was an interesting challenge that I can easily walk you through. This function wasn’t a difficult thing to understand, and by far, it’s not one of the hard or complex things to reverse engineer, but this one is “small and complex enough” that it’s a perfect example to explain, without writing an entire book or getting into the more complex aspects of reverse engineering. So today’s post serves as a “primer” guide to reverse engineering for all of those interested in the subject. It is a required read in order to understand the next blog posts I would be writing about the Intel FSP. Ready? Strap on your geek helmet and let’s get started! DISCLAIMER: I might make false statements in the blog post below, some by mistake, some intentionally for the purpose of vulgarizing the explanations. For example, when I say below that there are 9 registers in X86, I know there are more (SSE, FPU, or even just the DS or EFLAGS registers, or purposefully not mentioning EAX instead of RAX, etc.), but I just don’t want to complicate matters by going too wide in my explanations. A prelude First things first, you need to understand some basic concepts, such as “what is ASM exactly”. I will explain some basic concepts but not all the basic concepts you might need. I will assume that you know at least what a programming language is and know how to write a simple “hello world” in at least one language, otherwise you’ll be completely lost. So, ASM is the Assembly language, but it’s not the actual binary code that executes on the machine. It is however, very similar to it. 
To be more exact, the assembly language is a textual representation of the binary instructions given to the microprocessor. You see, when you compile your regular C program into an executable, the compiler will transform all your code into some very, very, very basic instructions. Those instructions are what the CPU will understand and execute. By combining a lot of small, simple and specific instructions, you can do more complex things. That’s the basis of any programming language, of course, but with assembly, the building blocks that you get are very limited. Before I’ll talk about instructions, I want to explain two concepts first which you’ll need to follow the rest of the story. The stack First I’ll explain what “the stack” is. You may have heard of it before, or maybe you didn’t, but the important thing to know is that when you write code, you have two types of memory: The first one is your “dynamic memory”, that’s when you call ‘malloc’ or ‘new’ to allocate new memory, this goes from your RAM upward (or left-to-right), in the sense that if you allocate 10 bytes, you’ll first get address 0x1000 for example, then when you allocate another 30 bytes, you’ll get address 0x100A, then if you allocate another 16 bytes, you’ll get 0x1028, etc. The second type of memory that you have access to is the stack, which is different, instead it grows downward (or right-to-left), and it’s used to store local variables in a function. So if you start with the stack at address 0x8000, then when you enter a function with 16 bytes worth of local variables, your stack now points to address 0x7FF0, then you enter another function with 64 bytes worth of local variables, and your stack now points to address 0x7FB0, etc. The way the stack works is by “stacking” data into it, you “push” data in the stack, which puts the variable/data into the stack and moves the stack pointer down, you can’t remove an item from anywhere in the stack, you can always only remove (pop) the last item you added (pushed). A stack is actually an abstract type of data, like a list, an array, a dictionary, etc. You can read more about what a stack is on wikipedia and it shows you how you can add and remove items on a stack with this image: The image shows you what we call a LIFO (Last-In-First-Out) and that’s what a stack is. In the case of the computer’s stack, it grows downward in the RAM (as opposed to upward in the above image) and is used to store local variables as well as the return address for your function (the instruction that comes after the call to your function in the parent function). So when you look at a stack, you will see multiple “frames”, you’ll see your current function’s stack with all its variables, then the return address of the function that called it, and above it, you’ll see the previous function’s frame with its own variables and the address of the function that called it, and above, etc. all the way to the main function which resides at the top of the stack. Here is another image that exemplifies this: The registers The second thing I want you to understand is that the processor has multiple “registers”. You can think of a register as a variable, but there are only 9 total registers on x86, with only 7 of them usable. So, on the x86 processor, the various registers are: EAX, EBX, ECX, EDX, EDI, ESI, EBP, ESP, EIP. There are two registers in there that are special: The EIP (Instruction Pointer) contains the address of the current instruction being executed. 
The ESP (Stack Pointer) contains the address of the top of the stack. Access to the registers is extremely fast when compared to accessing the data in the RAM (the stack also resides in the RAM, but towards the end of it) and most operations (instructions) have to happen on registers. You’ll understand more when you read below about instructions, but basically, you can’t use an instruction to say “add value A to value B and store it into address C”, you’d need to say “move value A into register EAX, then move value B into register EBX, then add register EAX to register EBX and store the result in register ECX, then store the value of register ECX into the address C”. The instructions Let’s go back to explaining instructions now. As I explained before, the instructions are the basic building blocks of the programs, and they are very simple; they take the form of:

        INS OP1, OP2, OP3

where “INS” is the instruction, and OP1, OP2, OP3 are what we call the “operands”. Most instructions will only take 2 operands, some will take no operands, some will take one operand and others will take 3 operands. The operands are usually registers. Sometimes, the operand can be an actual value (what we call an “immediate value”) like “1”, “2” or “3”, etc. and sometimes, the operand is a relative position from a register, like for example “[%eax + 4]” meaning the address pointed to by the %eax register + 4 bytes. We’ll see more of that shortly. For now, let’s give you the list of the most common and used instructions:
        “MOV”: move data from one operand into another
        “ADD/SUB/MUL/DIV”: Add, Subtract, Multiply, Divide one operand with another and store the result in a register
        “AND/OR/XOR/NOT/NEG”: Perform logical and/or/xor/not/negate operations on the operand
        “SHL/SHR”: Shift Left/Shift Right the bits in the operand
        “CMP/TEST”: Compare one register with an operand
        “JMP/JZ/JNZ/JB/JS/etc.”: Jump to another instruction (Jump unconditionally, Jump if Zero, Jump if Not Zero, Jump if Below, Jump if Sign, etc.)
        “PUSH/POP”: Push an operand into the stack, or pop a value from the stack into a register
        “CALL”: Call a function. This is the equivalent of doing a “PUSH %EIP+4” + “JMP”. I’ll get into calling conventions later.
        “RET”: Return from a function. This is the equivalent of doing a “POP %EIP”
That’s about it, that’s what most programs are doing. Of course, there are a lot more instructions, you can see a full list here, but you’ll see that most of the other instructions are very obscure or very specific or variations on the above instructions, so really, this represents most of the instructions you’ll ever encounter. I want to explain one thing before we go further down: there is an additional register I didn’t mention before called the FLAGS register, which is basically just a status register that contains “flags” that indicate when some arithmetic condition happened on the last arithmetic operation. For example, if you add 1 to 0xFFFFFFFF, it will give you ‘0’ but the “Carry flag” will be set in the FLAGS register. If you subtract 5 from 0, it will give you 0xFFFFFFFB and the “Sign flag” will be set because the result is negative, and if you subtract 3 from 3, the result will be zero and the “Zero flag” will be set. 
I’ve shown you the “CMP” instruction which is used to compare a register with an operand, but you might be wondering, “What does it mean exactly to ‘compare’?” Well, it’s simple: the CMP instruction is the same thing as the SUB instruction, in that it subtracts one operand from another, but the difference is that it doesn’t store the result anywhere. However, it does get your flags updated in the FLAGS register. For example, if I wanted to compare the %EAX register with the value ‘2’, and %EAX contains the value 3, this is what’s going to happen: you will subtract 2 from the value, the result will be 1, but you don’t care about that; what you care about is that the ZF (Zero flag) is not set, and the SF (Sign flag) is not set, which means that %eax and ‘2’ are not equal (otherwise, ZF would be set), and that the value in %eax is greater than 2 (because SF is not set), so you know that “%eax > 2” and that’s what the CMP does. The TEST instruction is very similar but it does a logical AND on the two operands for testing, so it’s used for comparing logical values instead of arithmetic values (“TEST %eax, 1” can be used to check if %eax contains an odd or even number for example). This is useful because the next bunch of instructions I explained in the list above is conditional Jump instructions, like “JZ” (jump if zero) or “JB” (jump if below), or “JS” (jump if sign), etc. This is what is used to implement “if, for, while, switch/case, etc.”; it’s as simple as doing a “CMP” followed by a “JZ” or “JNZ” or “JB”, “JA”, “JS”, etc. And if you’re wondering what’s the difference between a “Jump if below” and “Jump if sign” and “Jump if lower”, since they all mean that the comparison gave a negative result, right? Well, the “jump if below” is used for unsigned integers, while “jump if lower” is used for signed integers, while “jump if sign” can be misleading. An unsigned 3 – 4 would give us a very high positive result… something like that. In practice, JB checks the Carry Flag, while JS checks the Sign Flag and JL checks whether the Sign Flag differs from the Overflow Flag. See the Conditional Jump page for more details. A practical example Here’s a very small and simple practical example, if you have a simple C program like this:

        int main() {
            return add_a_and_b(2, 3);
        }

        int add_a_and_b(int a, int b) {
            return a + b;
        }

It would compile into something like this:

        _main:
            push 3              ; Push the second argument '3' into the stack
            push 2              ; Push the first argument '2' into the stack
            call _add_a_and_b   ; Call the _add_a_and_b function. This will put the address of the next
                                ; instruction (add) into the stack, then it will jump into the _add_a_and_b
                                ; function by putting the address of the first instruction in the _add_a_and_b
                                ; label (push %ebx) into the EIP register
            add %esp, 8         ; Add 8 to the esp, which effectively pops out the two values we just pushed into it
            ret                 ; Return to the parent function....

        _add_a_and_b:
            push %ebx           ; We're going to modify %ebx, so we need to push it to the stack
                                ; so we can restore its value when we're done
            mov %eax, [%esp+8]  ; Move the first argument (8 bytes above the stack pointer) into EAX
            mov %ebx, [%esp+12] ; Move the second argument (12 bytes above the stack pointer) into EBX
            add %eax, %ebx      ; Add EAX and EBX and store the result into EAX
            pop %ebx            ; Pop EBX to restore its previous value
            ret                 ; Return back into the main.
                                ; This will pop the value on the stack (which was the address of the next
                                ; instruction in the main function that was pushed into the stack when the
                                ; 'call' instruction was executed) into the EIP register

Yep, something as simple as that can be quite complicated in assembly. Well, it’s not really that complicated actually, but a couple of things can be confusing. You have only 7 usable registers, and one stack. Every function gets its arguments passed through the stack, and can return its return value through the %eax register. If every function modified every register, then your code would break, so every function has to ensure that the other registers are unmodified when it returns (other than %eax). You pass the arguments on the stack and your return value through %eax, so what should you do if you need to use a register in your function? Easy: you keep a copy on the stack of any registers you’re going to modify so you can restore them at the end of your function. In the _add_a_and_b function, I did that for the %ebx register as you can see. For more complex functions, it can get a lot more complicated than that, but let’s not get into that for now (for the curious: compilers will create what we call a “prologue” and an “epilogue” in each function. In the prologue, you store the registers you’re going to modify, set up the %ebp (base pointer) register to point to the base of the stack when your function was entered, which allows you to access things without keeping track of the pushes/pops you do throughout the function, then in the epilogue, you pop the registers back, restore %esp to the value that was saved in %ebp, before you return). The second thing you might be wondering about is these lines:

        mov %eax, [%esp+8]
        mov %ebx, [%esp+12]

And to explain it, I will simply show you this drawing of the stack’s contents when we call those two instructions above: For the purposes of this exercise, we’re going to assume that the _main function is located in memory at the address 0xFFFF0000, and that each instruction is 4 bytes long (the size of each instruction can vary depending on the instruction and on its operands). So you can see, we first pushed 3 into the stack, %esp was lowered, then we pushed 2 into the stack, %esp was lowered, then we did a ‘call _add_a_and_b’, which stored the address of the next instruction (4 instructions into the main, so ‘_main+16’) into the stack and esp was lowered, then we pushed %ebx, which I assumed here contained a value of 0, and the %esp was lowered again. If we now wanted to access the first argument to the function (2), we need to access %esp+8, which will let us skip the saved %ebx and the ‘Return address’ that are in the stack (since we’re working with 32 bits, each value is 4 bytes). And in order to access the second argument (3), we need to access %esp+12. Binary or assembly? One question that may (or may not) be popping into your mind now is “wait, isn’t this supposed to be the ‘computer language’, so why isn’t this binary?” Well, it is… in a way. As I explained earlier, “the assembly language is a textual representation of the binary instructions given to the microprocessor”; what it means is that those instructions are given to the processor as is, there is no transformation of the instructions or operands or anything like that. However, the instructions are given to the microprocessor in binary form, and the text you see above is just the textual representation of it, 
kind of like how “68 65 6c 6c 6f” is the hexadecimal representation of the ASCII text “hello”. What this means is that each instruction in assembly language, which we call a ‘mnemonic’, represents a binary instruction, which we call an ‘opcode’, and you can see the opcodes and mnemonics in the list of x86 instructions I gave you above. Let’s take the CALL instruction for example. The opcode/mnemonic list is shown as:

        Opcode   Mnemonic        Description
        E8 cw    CALL rel16      Call near, relative, displacement relative to next instruction
        E8 cd    CALL rel32      Call near, relative, displacement relative to next instruction
        FF /2    CALL r/m16      Call near, absolute indirect, address given in r/m16
        FF /2    CALL r/m32      Call near, absolute indirect, address given in r/m32
        9A cd    CALL ptr16:16   Call far, absolute, address given in operand
        9A cp    CALL ptr16:32   Call far, absolute, address given in operand
        FF /3    CALL m16:16     Call far, absolute indirect, address given in m16:16
        FF /3    CALL m16:32     Call far, absolute indirect, address given in m16:32

This means that this same “CALL” mnemonic can have multiple addresses to call. Actually, there are four different possibilities, each having a 16-bit and a 32-bit variant. The first possibility is to call a function with a relative displacement (Call the function 100 bytes below this current position), or an absolute address given in a register (Call the function whose address is stored in %eax) or an absolute address given as a pointer (Call the function at address 0xFFFF0100), or an absolute address given as an offset to a segment (I won’t explain segments now). In our example above, the “call _add_a_and_b” was probably stored as a call relative to the current position with 12 bytes below the current instruction (4 bytes per instruction, and we have the CALL, ADD, RET instructions to skip). This means that the instruction in the binary file was encoded as “E8 0C 00 00 00” (the E8 opcode to mean a “CALL near, relative”, and the “0C 00 00 00” to mean 12 bytes relative to the current instruction, stored in little-endian byte order). Now, the most observant of you have probably noticed that this CALL instruction takes 5 bytes total, not 4, but as I said above, we will assume it’s 4 bytes per instruction just for the sake of keeping things simple, but yes, the CALL (in this case) is 5 bytes, and other instructions will sometimes have more or fewer bytes as well. I chose the CALL instruction above as an example because I think it’s the least complicated to explain; other instructions have even more complicated opcodes and operands (see the ADD and ADC (Add with Carry) instructions for example: you’ll notice that some opcodes are even shared between them, so they are the same instruction, but it’s easy to give them separate mnemonics to differentiate their behaviors). Here’s a screenshot showing a side by side view of the Assembly of a function with the hexadecimal view of the binary: As you can see, I have my cursor on address 0xFFF6E1D6 on the assembly view on the left, which is also highlighted on the hex view on the right. That address is a CALL instruction, and you can see the equivalent hex of “E8 B4 00 00 00”, which means it’s a CALL near, relative (E8 being the opcode for it) and the function is 0xB4 (180) bytes below our current position of 0xFFF6E1D6. 
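If you want to double-check that arithmetic yourself, decoding a CALL rel32 is just a little-endian read of the 4 bytes after the E8 opcode, plus an addition. The little helper below (a throwaway illustration, not part of the original post) reproduces the screenshot example:

        import java.nio.{ByteBuffer, ByteOrder}

        object CallDecode {
          // The 4 bytes after E8 are a little-endian signed displacement, relative
          // to the next instruction (current address + 5, since CALL rel32 is 5 bytes).
          def callTarget(instrAddr: Long, bytes: Array[Byte]): Long = {
            require(bytes(0) == 0xE8.toByte, "not a CALL rel32")
            val rel = ByteBuffer.wrap(bytes, 1, 4).order(ByteOrder.LITTLE_ENDIAN).getInt
            instrAddr + 5 + rel
          }

          def main(args: Array[String]): Unit = {
            val bytes = Array(0xE8, 0xB4, 0x00, 0x00, 0x00).map(_.toByte)
            println(f"0x${callTarget(0xFFF6E1D6L, bytes)}%X")  // prints 0xFFF6E28F
          }
        }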
If you open the file with a hexadecimal editor, you’ll only see the hex view on the right, but you need to put the file into a disassembler (such as the IDA disassembler which I’m using here, but there are cheaper alternatives as well, the list can be long), and the disassembler will interpret those binary opcodes to show you the textual assembly representation which is much, much easier to read. Some actual reverse engineering Now that you have the basics, let’s do a quick reverse engineering exercise… This is a very simple function that I’ve reversed recently; it comes from the SiliconInit part of the FSP, and it’s used to validate the UPD configuration structure (used to tell it what to do). Here is the Assembly code for that function: This was disassembled using IDA 7.0 (The Interactive DisAssembler) which is an incredible (but expensive) piece of software. There are other disassemblers which can do similar jobs, but I prefer IDA personally. Let’s first explain what you see on the screen. On the left side, you see “seg000:FFF40xxx”; this means that we are in the segment “seg000” at the address 0xFFF40xxx. I won’t explain what a segment is, because you don’t need to know it. The validate_upd_config function starts at address 0xFFF40311 in the RAM, and there’s not much else to understand. You can see how the address increases from one instruction to the next; it can help you calculate the size in bytes that each instruction takes in RAM for example, if you’re curious of course… (the XOR is 2 bytes, the CMP is 2 bytes, etc.). As you’ve seen in my previous example, anything after a semicolon (“;”) is considered a comment and can be ignored. The “CODE XREF” comments are added by IDA to tell us that this code has a cross-reference (is being called by some other code). So when you see “CODE XREF: validate_upd_config+9” (at 0xFFF40363, the RETN instruction), it means this instruction is being referenced (called) from the function validate_upd_config, and the “+9” means 9 bytes into the function (so since the function starts at 0xFFF40311, it means it’s being called from the instruction at address 0xFFF4031A). The little “up” arrow next to it means that it comes from above the current position in the code, and if you follow the grey lines on the left side of the screen, you can follow that call up to the address 0xFFF4031A which contains the instruction “jnz short locret_FFF40363”. I assume the “j” letter right after the up arrow is to tell us that the reference comes from a “jump” instruction. As you can see in the left side of the screen, there are a lot of arrows; that means that there’s a lot of jumping around in the code, even though it’s not immediately obvious. The awesome IDA software has a “layout view” which gives us a much nicer view of the code, and it looks like this: Now you can see each block of code separately in their own little boxes, with arrows linking all of the boxes together whenever a jump happens. The green arrows mean that it’s a conditional jump when the condition is successful, while the red arrows mean the condition was not successful. This means that a “JZ” will show a green arrow towards the code it would jump to if the result is indeed zero, and a red arrow towards the block where the result is not zero. A blue arrow means that it’s an unconditional jump. 
I usually do my reverse engineering using the layout view, I find it much easier to read/follow, but for the purpose of this exercise, I will use the regular linear view instead, as I think it will be easier for you to follow with that. The reason is mostly because the layout view doesn’t display the address of each instruction, and it’s easier to have you follow along if I can point out exactly which instruction I’m looking at by mentioning its address. Now that you know how to read the assembly code and you understand the various instructions, I feel you should be ready to reverse engineer this very simple assembly code (even though it might seem complex at first). I just need to give you the following hints first: Because I’ve already reverse engineered it, you get the beautiful name “validate_upd_config” for the function, but technically, it was simply called “sub_FFF40311”. I had already reverse engineered the function that called it, so I know that this function is receiving its arguments in an unusual way: the arguments aren’t pushed to the stack; instead, the first argument is stored in %ecx, and the second argument is stored in %edx. The first argument (%ecx, remember?) is an enum to indicate what type of UPD structure to validate; let me help you out and say that type ‘3’ is the FSPM_UPD (the configuration structure for the FSPM, the MemoryInit function), and that type ‘5’ is the FSPS_UPD (the configuration structure for the FSPS, the SiliconInit function). Reverse engineering is really about reading one line at a time, in a sequential manner, keeping track of which blocks you’ve reversed, and being patient. You can’t look at it and expect to understand the function by viewing the big picture. It is very, very useful in this case to have a dual monitor, so you can have one monitor for the assembly, and the other monitor for your C code editor. In my case, I actually recently bought an ultra-wide monitor and I split screen between my IDA window and my emacs window and it’s great. It’s hard otherwise to keep going back and forth between the assembly and the C code. That being said, I would suggest you do the same thing here and have a window on the side showing you the assembly image above (not the layout view) while you read the explanation on how to reverse engineer it below. Got it? All done? No? Stop sweating and hyperventilating… I’ll explain exactly how to reverse engineer this function in the next paragraph, and you will see how simple it turns out to be! Let’s get started! The first thing I do is write the function in C. Since I know the name and its arguments already, I’ll do that:

        void validate_upd_config (uint8_t action, void *config)
        {
        }

Yeah, there’s not much to it yet, and I set it to return “void” because I don’t know if it returns anything else, and I gave the first argument “action” as a “uint8_t” because in the parent function it’s used as a single-byte register (I won’t explain for now how to differentiate 1-byte, 2-byte, 4-byte and 8-byte registers). The second argument is a pointer, but I don’t know exactly what kind of structure it points to, so I just set it as a void *. The first instruction is a “xor eax, eax”. What does this do? It XORs the eax register with the eax register and stores the result in the eax register itself, which is the same thing as “mov eax, 0”, because 1 XOR 1 = 0 and 0 XOR 0 = 0, so if every bit in the eax register is logically XORed with itself, it will give 0 for the result. 
If you’re asking yourself “Why did the compiler decide to do ‘xor eax, eax’ instead of ‘mov eax, 0’?” then the answer is simple: “Because it takes fewer CPU clock cycles to do a XOR than to do a move”, which means it’s more optimized and it will run faster. Besides, the XOR takes 2 bytes as you can see above (the address of the instructions jumped from FFF40311 to FFF40313), while a “mov eax, 0” would have taken 5 bytes. So it also helps keep the code smaller. Alright, so now we know that eax is equal to 0, let’s keep that in mind and move along (I like to keep track of things like that as comments in my C code). The next instruction does a “cmp ecx, 3”, so it’s comparing ecx, which we already know is our first argument (uint8_t action). OK, it’s a comparison, not much to do here, again let’s keep that in mind and continue… the next instruction does a “jnz short loc_FFF40344”, which is more interesting, so if the previous comparison is NOT ZERO, then jump to the label loc_FFF40344 (for now ignore the “short”, it just helps us differentiate between the various mnemonics, and it means that the jump is a relative offset that fits in a “short word” which means 2 bytes, and you can confirm that the jnz instruction does indeed take only 2 bytes of code). Great, so there’s a jump if the result is NOT ZERO, which means that if the result is zero, the code will just continue, which means if the ecx register (action variable) is EQUAL (the subtraction is zero) to 3, the code will continue down to the next instruction instead of jumping… let’s do that, and in the meantime we’ll update our C code:

        void validate_upd_config (uint8_t action, void *config)
        {
            // eax = 0
            if (action == 3) {
                // 0xFFF40318
            } else {
                // loc_FFF40344
            }
        }

The next instruction is “test edx, edx”. We know that the edx register is our second argument which is the pointer to the configuration structure. As I explained above, the “test” is just like a comparison, but it does an AND instead of a subtraction, so basically, you AND edx with itself.. well, of course, that has no consequence, 1 AND 1 = 1, and 0 AND 0 = 0, so why is it useful to test a register against itself? Simply because the TEST will update our FLAGS register… so when the next instruction is “JZ” it basically means “Jump if the edx register was zero”… And yes, doing a “TEST edx, edx” is more optimized than doing a “CMP edx, 0”, you’re starting to catch on, yeay! And indeed, the next instruction is “jz locret_FFF40363”, so if the edx register is ZERO, then jump to locret_FFF40363, and if we look at that locret_FFF40363, it’s a very simple “retn” instruction. So our code becomes:

        void validate_upd_config (uint8_t action, void *config)
        {
            // eax = 0
            if (action == 3) {
                if (config == NULL)
                    return;
            } else {
                // loc_FFF40344
            }
        }

Next! Now it gets slightly more complicated… the instruction is: “cmp dword ptr [edx], 554C424Bh”, which means we do a comparison of a dword (4 bytes) of the data pointed to by the pointer edx, with no offset (“[edx]” is the same as saying “edx[0]” if it was a C array for example), and we compare it to the value 554C424Bh… the “h” at the end means it’s a hexadecimal value, and with experience you can quickly notice that the hexadecimal value is all within the ASCII range, so using a Hex to ASCII converter, we realize that those 4 bytes represent the ASCII letters “KBLU” (which is why I manually added them as a comment to that instruction, so I won’t forget). 
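As an aside, if you would rather do that hex-to-ASCII check programmatically than with an online converter, the little helper below (an illustration, not part of the original post) does it; remember that x86 is little-endian, so the least significant byte of the immediate, 0x4B = ‘K’, comes first in memory:

        object HexToAscii {
          // Turn a little-endian dword into the 4 ASCII characters it spells.
          def dwordToAscii(value: Long): String =
            (0 until 4).map(i => ((value >> (8 * i)) & 0xFF).toChar).mkString

          def main(args: Array[String]): Unit =
            println(dwordToAscii(0x554C424BL))  // prints KBLU
        }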
So basically the instruction compares the first 4 bytes of the structure (the content pointed to by the edx pointer) to the string “KBLU”. The next instruction does a “jnz loc_FFF4035E” which means that if the comparison result is NOT ZERO (so, if they are not equal) we jump to loc_FFF4035E. Instead of continuing sequentially, I will see what that loc_FFF4035E contains (of course, I did the same thing in all the previous jumps, and had to decide if I wanted to continue reverse engineering the jump or the next instruction; in this case, it seems better for me to jump, you’ll see why soon). The loc_FFF4035E label contains the following instruction: “mov eax, 80000002h”, which means it stores the value 0x80000002 into the eax register, and then it jumps to (not really, it just naturally flows to the next instruction which happens to be the label) locret_FFF40363, which is just a “retn”. This makes our code into this:

        uint32_t validate_upd_config (uint8_t action, void *config)
        {
            // eax = 0
            if (action == 3) {
                if (config == NULL)
                    return 0;
                if (((uint32_t *)config)[0] != 0x554C424B)
                    return 0x80000002;
            } else {
                // loc_FFF40344
            }
        }

The observant here will notice that I’ve changed the function prototype to return a uint32_t instead of “void” and my previous “return” has become “return 0” and the new code has a “return 0x80000002”. That’s because I realized at this point that the “eax” register is used to return a uint32_t value. And since the first instruction was “xor eax, eax”, and we kept in the back of our mind that “eax is initialized to 0”, it means that the use case with the (config == NULL) will return 0. That’s why I made all these changes… Very well, let’s go back to where we were; since we’ve exhausted this jump, we’ll jump back in reverse to go back to the address FFF40322 and continue from there to the next instruction. It’s a “cmp dword ptr [edx+4], 4D5F4450h”, which compares the dword at edx+4 to 0x4D5F4450, which I know to be the ASCII for “PD_M”; this means that the last 3 instructions are used to compare the first 8 bytes of our pointer to “KBLUPD_M”… ohhh, light bulb above our heads, it’s comparing the pointer to the Signature of the FSPM_UPD structure (don’t forget, you weren’t supposed to know that the function is called validate_upd_config, or that the argument is a config pointer… just that it’s a pointer)! OK, now it makes sense, and while we’re at it—and since we are, of course, reading the FSP integration guide PDF, we then also realize what the 0x80000002 actually means. At this point, our code now becomes:

        EFI_STATUS validate_upd_config (uint8_t action, void *config)
        {
            if (action == 3) {
                FSPM_UPD *upd = (FSPM_UPD *) config;
                if (upd == NULL)
                    return EFI_SUCCESS;
                if (upd->FspUpdHeader.Signature != 0x4D5F4450554C424B /* 'KBLUPD_M' */)
                    return EFI_INVALID_PARAMETERS;
            } else {
                // loc_FFF40344
            }
        }

Yay, this is starting to look like something… Now you probably got the hang of it, so let’s do things a little faster now. The next line “cmp [edx+28h], eax” compares edx+0x28 to eax. Thankfully, we know now that edx points to the FSPM_UPD structure, and we can calculate that at offset 0x28 inside that structure, it’s the field StackBase within the FspmArchUpd field… and also, we still have in the back of our minds that ‘eax’ is initialized to zero, so we know that the next 2 instructions are just checking if upd->FspmArchUpd.StackBase == NULL. 
Then we compare the StackSize with 0x26000, but the comparison is using "jb" for the jump, which is "jump if below", so it checks if StackSize < 0x26000. Finally it does a "test" with "edx+30h" (which is the BootloaderTolumSize field) and 0xFFF, then it does an unconditional jump to loc_FFF4035C, which itself does a "jz" to the return… which means that if ((BootloaderTolumSize & 0xFFF) == 0) it will return whatever EAX contained (which is zero), but if it doesn't, then it will continue to the next instruction which is the "mov eax, 80000002h". So, we end up with this code: EFI_STATUS validate_upd_config (uint8_t action, void *config) { // eax = 0 if (action == 3) { FSPM_UPD *upd = (FSPM_UPD *) config; if (upd == NULL) return 0; if (upd->FspUpdHeader.Signature != 0x4D5F4450554C424B /* 'KBLUPD_M' */) return EFI_INVALID_PARAMETERS; if (upd->FspmArchUpd.StackBase == NULL) return EFI_INVALID_PARAMETERS; if (upd->FspmArchUpd.StackSize < 0x26000) return EFI_INVALID_PARAMETERS; if (upd->FspmArchUpd.BootloaderTolumSize & 0xFFF) return EFI_INVALID_PARAMETERS; } else { // loc_FFF40344 } return EFI_SUCCESS; } Great, we just solved half of our code! Don't forget, we jumped one way instead of another at the start of the function; now we need to go back up and explore the second branch of the code (at offset 0xFFF40344). The code is very similar, but it checks for the "KBLUPD_S" Signature, and nothing else. Now we can also remove any comments/notes we have (such as the note that eax is initialized to 0), clean up, and simplify the code if there is a need. So our function ends up being (this is the final version of the function): EFI_STATUS validate_upd_config (uint8_t action, void *config) { if (action == 3) { FSPM_UPD *upd = (FSPM_UPD *) config; if (upd == NULL) return EFI_SUCCESS; if (upd->FspUpdHeader.Signature != 0x4D5F4450554C424B /* 'KBLUPD_M' */) return EFI_INVALID_PARAMETERS; if (upd->FspmArchUpd.StackBase == NULL) return EFI_INVALID_PARAMETERS; if (upd->FspmArchUpd.StackSize < 0x26000) return EFI_INVALID_PARAMETERS; if (upd->FspmArchUpd.BootloaderTolumSize & 0xFFF) return EFI_INVALID_PARAMETERS; } else { FSPS_UPD *upd = (FSPS_UPD *) config; if (upd == NULL) return EFI_SUCCESS; if (upd->FspUpdHeader.Signature != 0x535F4450554C424B /* 'KBLUPD_S' */) return EFI_INVALID_PARAMETERS; } return EFI_SUCCESS; } Now this wasn't so bad, was it? I mean, it's time consuming, sure, it can be a little disorienting if you're not used to it, and you have to keep track of which branches (which blocks in the layout view) you've already gone through, etc., but the function turned out to be quite small and simple. After all, it was mostly only doing CMP/TEST and JZ/JNZ. That's pretty much all I do when I do my reverse engineering: I go line by line, I understand what it does, I try to figure out how it fits into the bigger picture, and I write equivalent C code to keep track of what I'm doing and to be able to understand what happens, so that I can later figure out what the function does exactly… Now try to imagine doing that for hundreds of functions, some of which look like this (random function taken from the FSPM module): You can see on the right the graph overview which shows the entirety of the function layout diagram. The part on the left (the assembly) is represented by the dotted square on the graph overview (near the middle). You will notice some arrows that are thicker than the others; that's used in IDA to represent loops.
On the left side, you can notice one such thick green line coming from the bottom, with the arrow pointing to a block inside our view. This means that there's a jump condition below that can jump back to a block that is above the current block, and this is basically how you do a for/while loop in assembly: it's just a normal jump that points backwards instead of forwards. Finally, the challenge! At the beginning of this post, I mentioned a challenging function to reverse engineer. It's not extremely challenging; it's complex enough that you can understand the kind of things I have to deal with sometimes, but simple enough that anyone who was able to follow up until now should be able to understand it (and maybe even be able to reverse engineer it on their own). So, without further ado, here's this very simple function: Since I'm a very nice person, I renamed the function so you won't know what it does, and I removed my comments so it's as virgin as it was when I first saw it. Try to reverse engineer it. Take your time, I'll wait: Alright, so, the first instruction is a "call $+5", what does that even mean? When I looked at the hex dump, the instruction was simply "E8 00 00 00 00" which according to our previous CALL opcode table means "Call near, relative, displacement relative to next instruction", so it wants to call the instruction 0 bytes from the next instruction. Since the call opcode itself takes 5 bytes, that means it's doing a call into its own function but skipping the call itself, so it's basically jumping to the "pop eax", right? Yes… but it's not actually jumping to it, it's "calling" it, which means that it just pushed onto the stack the return address of the function… which means that our stack contains the address 0xFFF40244 and our next instruction to be executed is the one at the address 0xFFF40244. That's because, if you remember, when we do a "ret", it will pop the return address from the stack into the EIP (instruction pointer) register; that's how it knows where to go back when the function finishes. So, then the instruction does a "pop eax" which will pop that return address into EAX, thus removing it from the stack and making the call above into a regular jump (since there is no return address in the stack anymore). Then it does a "sub eax, 0FFF40244h", which means it's subtracting 0xFFF40244 from eax (which should contain 0xFFF40244), so eax now contains the value "0", right? You bet! Then it adds to eax the value "0xFFF4023F", which is the address of our function itself. So, eax now contains the value 0xFFF4023F. It will then subtract from EAX the value pointed to by [eax-15], which means the dword (4 bytes) value at the offset 0xFFF4023F - 0xF, so the value at 0xFFF40230, right… that value is 0x1AB (yep, I know, you didn't have this information)… so, 0xFFF4023F - 0x1AB = 0xFFF40094! And then the function returns… with the value 0xFFF40094 in EAX, so it returns 0xFFF40094, which happens to be the pointer to the FSP_INFO_HEADER structure in the binary. So, the function just returns 0xFFF40094, but why did it do it in such a convoluted way? The reason is simple: because the FSP-S code is technically meant to be loaded in RAM at the address 0xFFF40000, but it can actually reside anywhere in the RAM when it gets executed.
Coreboot for example doesn't load it at the right memory address when it executes it, so instead of returning the wrong address for the structure and crashing (remember, most of the jumps and calls use relative addresses, so the code should work regardless of where you put it in memory, but in this case returning the wrong address for a structure in memory wouldn't work), the code tries to dynamically verify if it has been relocated, and if it has, it will calculate how far away it is from where it's supposed to be and where in memory the FSP_INFO_HEADER structure ended up being. Here's the explanation why: If the FSP was loaded at a different memory address, then the "call $+5" would put the exact memory address of the next instruction onto the stack, so when you pop it into eax and then subtract from it the expected address 0xFFF40244, eax will contain the offset from where it was supposed to be. Above, we said eax would be equal to zero; yes, that's true, but only in the use case where the FSP is at the right memory address, as expected; otherwise, eax would simply contain the offset. Then you add to it 0xFFF4023F, which is the address of our function, and with the offset, that means eax now contains the exact memory address of the current function, wherever it was actually placed in RAM! Then when it grabs the value 0x1AB (because that value is stored in RAM 15 bytes before the start of the function, that will work just fine) and subtracts it from our current position, it gives us the address in RAM of the FSP_INFO_HEADER (because the compiler knows that the structure is located exactly 0x1AB bytes before the current function). This just makes everything relative. Isn't that great!? It's so simple, but it does require some thinking to figure out what it does and some thinking to understand why it does it that way… but then you end up with the problem of "How do I write this in C?" Honestly, I don't know how, I just wrote this in my C file: // Use Position-independent code to make this relocatable void *get_fsp_info_header() { return (void *) 0xFFF40094; } I think the compiler takes care of doing all that magic on its own when you use the -fPIC compiler option (for gcc), which means "Position-Independent Code". What this means for Purism On my side, I've finished reverse engineering the FSP-S entry code, from the entry point (FspSiliconInit) all the way to the end of the function and all the subfunctions that it calls. This only represents 9 functions however, and about 115 lines of C code; I haven't yet fully figured out where exactly it's going in order to execute the rest of the code. What happens is that the last function it calls (it actually jumps into it) grabs a variable from some area in memory, and from within that variable, it copies a value into ESP, thus replacing our stack pointer, and then it does a "RETN"… which means that it's not actually returning to the function that called it (coreboot), it's returning… "somewhere", depending on what the new stack contains, but I don't know where (or how) this new stack is created, so I need to track it down in order to find what the return address is and where the "retn" is returning us into, so I can unlock plenty of new functions and continue reverse engineering this. I've already made some progress on that front (I know where the new stack tells us to return into) but you will have to wait until my next blog post before I can explain it all to you.
It's long and complicated enough that it needs its own post, and this one is long enough already. Other stories from strange lands You never really know what to expect when you start reverse engineering assembly. Here are some other stories from my past experiences. I once spent a few days reverse engineering a function until, about 30% of the way in, I finally realized that the function was… the C++ "+ operator" of the std::string class (which, by the way, with the use of C++ templates, made it excruciatingly hard to understand)! I once had to reverse engineer over 5000 lines of assembly code that all resolved into… 7 lines of C code. The code was for creating a hash and it was doing a lot of manipulation on data, with different values on every iteration. There was a LOT of xor, or, and, shifting left and right of data, etc., which took maybe a hundred or so lines of assembly, and it was all inside a loop. To optimize it, the compiler decided to unroll the loop (this means that instead of doing a jmp, it just copy-pastes the same code again), so instead of having to reverse engineer the code once and then see that it's a loop that runs 64 times, I had to reverse engineer the same code 64 times, because it was basically getting copy-pasted by the compiler into a single block. The compiler was "nice" enough to use completely different registers for every repetition of the loop, the data was getting shifted in a weird way and using different constants and different variables at every iteration, and, as if that wasn't enough, every 1/4th of the loop it changed the algorithm, making it very difficult to predict the pattern. That forced me to completely reverse engineer the 5000+ assembly lines into C, then slowly refactor and optimize the C code until it became that loop with 7 lines of code inside it… If you're curious you can see the code here at line 39, where there is some operation common to all iterations, then 4 different operations depending on which iteration we are doing, and the variables used for each operation change after each iteration (P, PP, PPP and PPPP get swapped every time), and the constant values and the indices used are different for each iteration as well (see constants.h). It was complicated and took a long while to reverse engineer. Below is the calling graph of the PS3 firmware I worked on some years ago. All of these functions have been entirely reverse engineered (each black rectangle is actually an entire function, and the arrows show which function calls which other function), and the result was the ps3xport tool. As you can see, sometimes a function can be challenging to reverse, and sometimes a single function can call so many nested functions that it can get pretty complicated to keep track of what is doing what and how everything fits together. That function at the top of the graph was probably very simple, but it brought with it so much complexity because of a single "call": Perseverance prevails In conclusion: reverse engineering isn't just about learning a new language, it's a very different experience from "learning Java/Python/Rust after you've mastered C", because of the way it works; it can sometimes be very easy and boring, and sometimes it will be very challenging for a very simple piece of code. It's all about perseverance, being very careful (it's easy to get lost or make a mistake, and very hard to track down and fix a mistake/typo if you make one), and being very patient. We're talking days, weeks, months.
That’s why reverse engineering is something that very few people do (compared to the number of people who do general software development). Remember also that our first example was 82 bytes of code, and the second one was only 19 bytes long, and most of the time, when you need to reverse engineer something, it’s many hundreds of KBs of code. All that being said, the satisfaction you get when you finish reverse engineering some piece of code, when you finally understand how it works and can reproduce its functionality with open source software of your own, cannot be described with words. The feeling of achievement that you get makes all the efforts worth it! I hope this write-up helps everyone get a fresh perspective on what it means to “reverse engineer the code”, why it takes so long, and why it’s rare to find someone with the skills, experience and patience to do this kind of stuff for months—as it can be frustrating, and we sometimes need to take a break from it and do something else in order to renew our brain cells. Sursa: http://kakaroto.homelinux.net/2017/11/introduction-to-reverse-engineering-and-assembly/
  10. This guide is designed to explain why you need to hide information and how you can do this when you do not trust the channel through which messages are conveyed. We will discuss cryptographic systems, encryption, decryption, one-way functions, asymmetric keys and more. You may think of cryptography as the thing that keeps you untouchable inside of a soap bubble travelling by air around the world. Do you think it is safer by plane? Terminology plaintext or cleartext: intelligible message that a sender wants to transmit to a receiver ciphertext: unintelligible message resulting from plaintext encryption using a cryptosystem encryption: the process of converting a plaintext into a ciphertext decryption: the process of converting a ciphertext into a plaintext (reverse of encryption) Conventional cryptography It is also called symmetric-key or shared-key encryption. The same key is used to encrypt and decrypt a message. Consider this example of conventional cryptography: you and your roommate both use the same key to lock/unlock the door of your house. Thus, you share the same key to secure the house. It is true that your roommate could have a copy of your key so he can enter the house when you are at work, or vice-versa. Examples of conventional cryptosystems that use a symmetric key: Data Encryption Standard (DES), Advanced Encryption Standard (AES). Advantages: Fast. Disadvantages: Not safe! The sender and receiver must agree upon a secret key and prevent others from getting access to it. There is also a big problem if they are not in the same physical location, because of key distribution. How could you give your home key to your roommate, who is in America while you are in China? Practical advice: the symmetric key should be changed with every message, so that only one message is leaked in case of disaster (cryptanalysis, theft, etc.). Key distribution In the previous paragraph we were talking about cryptosystems using symmetric keys and the lack of an efficient method to securely share your key with your roommate. Key distribution comes to help solve this shortcoming. Next we are going to explain how key exchange becomes possible over an untrusted communication channel. Diffie-Hellman key exchange This key exchange is based on the fact that discrete logarithms of large numbers cannot be easily computed in a reasonable amount of time. We will offer an overview of the algorithm using colours before we go straight into numbers and abstract formulas. Step 1: Alice and Bob come to an agreement on a common colour. Step 2: Alice chooses her secret colour, which she will not tell Bob. Bob will do the same thing. Step 3: Alice will mix the common colour with the secret one, and the result is a mixture. Bob will also mix his secret colour with the common one and will obtain a different mixture from Alice's. Step 4: Alice and Bob exchange the mixtures. This is the most critical step for the communication, because a man-in-the-middle could get access to those two mixtures. Is it a problem if the man-in-the-middle has both mixtures? Colour decomposition is irreversible, so the only chance to find the two secret colours is mixing all possible colours with the common colour from step one. Also, remember that a secret colour can itself be a mixture of many other colours. Update: Diffie-Hellman does not protect you from a man-in-the-middle attack. To see why, imagine an active attacker who intercepts the exchange, runs one key exchange with Alice and another with Bob, and relays messages between them.
Step 5: Alice will again add her secret colour to the mixture that Bob sent to her. Bob will follow the same steps. Finally, Alice and Bob will obtain a common secret colour. Now, Alice and Bob can safely exchange the symmetric key we were talking about in a previous chapter, because they can encrypt and decrypt any message (sent through a communication channel) using the above secret colour. And here comes the math. It is always about math when we do not have enough colours. Step 1: Alice and Bob come to an agreement on two large numbers: one prime p (recommended at least 512 bits) and a base g (a primitive root of p). p > 2 g < p Step 2: Alice chooses a secret integer a. Bob chooses a secret integer b. a < p-1 b < p-1 Step 3: Alice computes the public value x = g^a mod p. Bob computes the public value y = g^b mod p, where mod is the modulo operator. Step 4: Alice and Bob exchange x and y. Step 5: Alice computes her secret key k_a = y^a mod p. Bob computes his secret key k_b = x^b mod p. Mathematically it can be proved that k_a = k_b. Alice and Bob now have a common secret key used for encryption and decryption of any plaintext they exchange to safely communicate. Example: p = 23, g = 5 a = 6 b = 15 x = 5^6 mod 23 = 15625 mod 23 = 8 = x y = 5^15 mod 23 = 30517578125 mod 23 = 19 = y key exchange: k_a = 19^6 mod 23 = 47045881 mod 23 = 2 k_b = 8^15 mod 23 = 35184372088832 mod 23 = 2 If a man-in-the-middle knew both secret integers a = 6 and b = 15, he could find the secret key used for communication. Here is how: k_a = k_b = g^(a*b) mod p = 5^90 mod 23 = 2 Advantages: A shared secret can be established over an untrusted channel. Disadvantages: As noted in the update above, it does not by itself stop an active man-in-the-middle, and you can not be sure of the actual identity of the real 'Bob'. The shared key obtained from Diffie-Hellman can then be used with a simple XOR (exclusive or) cipher: Suppose Alice wants to transmit the message M = Hello to Bob. The binary representation of the message M is B(M) = 0100100001100101011011000110110001101111. Alice encrypts the message with a secret key K = 1010101000101110100101010001110010101010. B(M) xor K = 0100100001100101011011000110110001101111 ^ 1010101000101110100101010001110010101010 = 1110001001001011111110010111000011000101 = L (encrypted M) The equivalent message as plaintext for message L is âKùpÅ. Bob receives âKùpÅ and uses the same secret key K that he has already exchanged with Alice to decrypt the message. L xor K = 1110001001001011111110010111000011000101 ^ 1010101000101110100101010001110010101010 = 0100100001100101011011000110110001101111 = M (original message) Why is this algorithm important? Because protocols like SSL, TLS, SSH, PKI or IPSec all use Diffie-Hellman. Public key cryptography Safe key distribution is resolved by public-key cryptography because it does not require a secure initial key exchange between you and your roommate. This cryptosystem is asymmetric-key encryption (in contrast to symmetric-key) that uses a pair of keys (two separate keys): a public key for encoding and a private key, also called the secret key, for decoding. The public key should not compromise the private key even though both are linked. public-key != private-key We can compare the asymmetric-key cryptosystem with an e-mail account. Your e-mail address is accessible to the wide public (anyone can send you an e-mail at your@email.com, for example) but you are the only one who has the password to log in (which means only you can read the content of the e-mails). The public key is your e-mail address and the private key is the password linked with your e-mail address.
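To make the Diffie-Hellman numbers above concrete, here is a minimal Python sketch (my own illustration, not part of the original guide) that reproduces the exchange with p = 23, g = 5, a = 6, b = 15:

# Toy Diffie-Hellman with the same numbers as the example above.
# Real deployments use primes of at least 2048 bits.
p, g = 23, 5          # public parameters agreed on by Alice and Bob
a, b = 6, 15          # Alice's and Bob's secret integers

x = pow(g, a, p)      # Alice's public value: 5^6  mod 23 = 8
y = pow(g, b, p)      # Bob's public value:   5^15 mod 23 = 19

k_a = pow(y, a, p)    # Alice computes 19^6 mod 23 = 2
k_b = pow(x, b, p)    # Bob computes   8^15 mod 23 = 2

assert k_a == k_b == pow(g, a * b, p)   # both sides now share the secret key 2
print(x, y, k_a)      # 8 19 2

The three-argument form of pow() does the modular exponentiation without ever building the huge intermediate numbers, which is how this stays fast even for realistic key sizes.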
How it works: Step 1: Create a pair of private-public keys (we will discuss generating pairs of keys later). Step 2: Share your public key with your friends. Step 3: The sender uses your public key to encrypt the plaintext (original message + encryption = ciphertext). Step 4: The sender sends you the ciphertext. Step 5: Use your private key to decrypt the ciphertext (ciphertext + decryption = original message). Advantages: Convenience and security are increased. Disadvantages: Slow encryption speed. All public-private keys are susceptible to brute-force attack (this can be avoided by choosing a large key size). You can not verify the partner's identity (vulnerable to impersonation). Usage: Since a large key size produces a large encrypted output, encrypting and transmitting messages takes longer. In practice, public keys are preferred for encrypting short messages, such as transmitting secret keys or digital certificates, rather than encrypting long messages. The inconvenience is that a shorter key length offers lower security, but you win when it comes to encrypted message length and transfer time. Because of that, keys should be frequently replaced with new ones. RSA RSA, named for Rivest, Shamir and Adleman, is an implementation of a public-key cryptosystem, complementing the Diffie-Hellman method described in a previous paragraph. This algorithm is based on the fact that large integers are difficult to factorize. I will explain the RSA algorithm step by step, but first I assume you love math. First of all you should have some knowledge about mod (the modulo operation) and coprime integers. Euler's theorem: x^phi(z) mod z = 1, where phi(z) is the totient function, z is a positive integer and x is coprime to z. Briefly, the totient function counts the numbers coprime to z. If z is prime, then phi(z) = z-1 (*). Example: consider z = 7; the numbers 1, 2, 3, 4, 5 and 6 are all relatively prime to 7, so phi(z) = phi(7) = z-1 = 6. Let's continue with Euler's theorem: x^phi(z) mod z = 1; squaring both sides, (x^phi(z) mod z) * (x^phi(z) mod z) = 1 * 1, so x^(2*phi(z)) mod z = 1. Using mathematical induction we can prove that x^(K*phi(z)) mod z = 1; multiplying by x, x^(K*phi(z)+1) mod z = x (**). That means a number x raised to one more than an integer multiple of phi(z) returns itself. For z prime, from (*) and Euler's theorem, we have: x^(z-1) mod z = 1 and x^z mod z = x. So far we have proved nothing about RSA. Now it is time to link all those equations together. Let's think of two prime numbers p, q. Replace z with p*q. phi(p*q) = phi(p) * phi(q) = (p-1)*(q-1), from equation (*). x^phi(p*q) mod p*q = 1, i.e. x^((p-1)*(q-1)) mod p*q = 1 (***). From equation (**) with K = 1 and equation (***) we have: x^(phi(z)+1) mod z = x, i.e. x^((p-1)*(q-1)+1) mod p*q = x. That means we can find (p-1)*(q-1)+1 only if we can factorize the number p*q. Consider x as a message. We can pick a random prime number E (the encoding key) that must be coprime to (p-1)*(q-1). Then we calculate D (the decoding key) as D = E^(-1) mod (p-1)*(q-1), i.e. D is the modular inverse of E. Now we can use the RSA algorithm, as we have the public key (E) and the private key (D): ciphertext = plaintext^E mod p*q plaintext = ciphertext^D mod p*q Attacks against RSA are based on weak (small) exponents E and small plaintexts: if plaintext^E < p*q, the modulus never comes into play and the plaintext can be recovered by taking an ordinary E-th root of the ciphertext. It is recommended to use a large key size for encryption.
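Here is a minimal Python sketch (my own toy example, not from the guide, with primes far too small for real security) that walks through these formulas:

from math import gcd

# Toy RSA with tiny primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                    # modulus: 3233
phi = (p - 1) * (q - 1)      # (p-1)*(q-1) = 3120

E = 17                       # public (encoding) exponent, coprime to phi
assert gcd(E, phi) == 1
D = pow(E, -1, phi)          # private (decoding) exponent: modular inverse of E, 2753 (Python 3.8+)

x = 65                       # the message, must be smaller than n
ciphertext = pow(x, E, n)            # x^E mod n = 2790
recovered = pow(ciphertext, D, n)    # (x^E)^D mod n = x
print(ciphertext, recovered)         # 2790 65
assert recovered == x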
Hash functions So far we are glad that we can protect the content of the messages we exchange over an untrusted connection, but we have never addressed the problem of content integrity. How can we be sure that the content of the message (even encrypted) has not suffered unauthorized alteration? A hash function, or as we call it a 'one-way function', 'irreversible function' or 'non-bijective function', is a function that takes as input a message of variable length and produces a fixed-length output. For example, calculate the checksum of the following string using different hash functions: Input string: hello World MD5 : 39d11ab1c3c6c9eab3f5b3675f438dbf SHA1 : 22c219648f00c61e5b3b1bd81ffa8e7767e2e3c5 SHA256 : 1ca107777d9d999bdd8099875438919b5dca244104e393685f... What if we modify only a SINGLE letter of the original message? For example 'E': Input string: hEllo World MD5 : b31981417dcc9209db702566127ce717 SHA1 : b7afc9fde8ebac31b6bc482de96622482c38315c SHA256 : 98fe983aad94110b31539310de222d6a962aeec73c0865f616... As you can see, the result is completely different. The big problem of hash functions is that they are susceptible to collisions: tibi@tbarbu-pc:~/hash_collision$ ls -lH message* -rw-r--r-- 1 tibi tibi 128 2012-09-12 17:20 message1 -rw-r--r-- 1 tibi tibi 128 2012-09-12 17:21 message2 tibi@tbarbu-pc:~/hash_collision$ diff -y -W10 --suppress-common-lines \ <(hexdump -e '/1 "%02X\n"' message1)\ <(hexdump -e '/1 "%02X\n"' message2) E7 | 67 0F | 8F 23 | A3 44 | C4 B4 | 34 7F | FF tibi@tbarbu-pc:~/hash_collision$ md5sum message1 message2 1e934ac2f323a9158b43922500ca7040 message1 1e934ac2f323a9158b43922500ca7040 message2 As you can see, two files with different content (only 6 bytes in this case had to be changed) have the same MD5 checksum. We call this a hash collision. Digital certificate We have been talking for a long time about encryption and decryption, but what if our cryptosystem is secure enough, yet we can not be sure about the real identity of the person who claims to be our correspondent? Well, the Diffie-Hellman key exchange did not address the shortcoming of being sure of the real identity. Information security is a fundamental objective of cryptography and consists not only of confidentiality and data integrity, but also of non-repudiation and authentication. Before talking about certificates, let's see how a digital signature works. At the end we will see there is a big difference regarding authentication and non-repudiation. As we have discussed asymmetric keys and hash functions, we will explain why those are important for digital signatures. An analogue of the digital signature is the handwritten signature. Though the latter is easy to counterfeit, the digital signature provides a lot more security (it is almost impossible to counterfeit). Let's see how it works: Step 1: First of all you have to generate a pair of keys: a public and a private key. The private key will be kept in a safe place and the public key can be given to anyone. Suppose you want to compose a document containing the message M. Step 2: Compute the digest. You will use a hash function to compute a digest of your message. Step 3: Compute the digital signature. Using your private key you will sign the hash result (digest). Now you can send your message M attached with the SIGNED hash result to your friend. Step 4: Verify the digital signature. Your friend uses the same hash function to calculate the digest of the message M and compares the result with your SIGNED digest.
If they are identical, it means that the message M has not been altered (this is called data integrity). Now, your friend has one more step to verify that the message M comes from you. He will use your public key to verify that the SIGNED digest was actually signed with your private key. Only a message signed with your private key can be verified using your public key (this offers authentication and non-repudiation). You may wonder why we run the message M through a hash function (step 2) instead of signing only the message. Oh, well, this would certainly be possible, but the reason is that signing the whole message with a private key and verifying its authenticity with the public key is very slow. Moreover, it produces a big volume of data. Hash functions produce a fixed length of data and also provide data integrity. There is one problem: how can your friend be sure which is your public key? He can't, but a digital certificate CAN! The only difference between a digital signature and a digital certificate is that the public key is certified by a trusted international Certificate Authority (CA). When registering with a CA you have to provide your real identification documents (ID card, passport, etc). Thus, your friend can verify, using your public key (registered with a CA), whether the attached hash result was signed using your private key. GnuPG (GPG) GNU Privacy Guard is an alternative option to PGP. What exactly is GPG, and why and how do we use it? It is hybrid encryption software that utilizes public-key encryption algorithms. Unlike PGP, which makes use of IDEA (a patented encryption algorithm), GnuPG relies on unpatented algorithms: asymmetric keys, hash functions, symmetric keys and digital signatures. Let's see GnuPG in action. Install GnuPG: sudo apt-get install gnupg2 or you can visit http://gnupg.org/download/index.en.html and download the latest version of GPG. wget -q ftp://ftp.gnupg.org/gcrypt/gnupg/gnupg-2.0.19.tar.bz2 tar xjvf gnupg-2.0.19.tar.bz2 cd gnupg-2.0.19 sudo ./configure sudo make install Generate your keys tibi@tbarbu-pc:~$ gpg --gen-key gpg (GnuPG) 1.4.10; Copyright (C) 2008 Free Software Foundation, Inc. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Please select what kind of key you want: (1) RSA and RSA (default) (2) DSA and Elgamal (3) DSA (sign only) (4) RSA (sign only) Your selection? Options (1) and (2) generate two key pairs, for encryption and for making signatures. Options (3) and (4) generate key pairs usable only for making signatures. I choose (1). RSA keys may be between 1024 and 4096 bits long. What keysize do you want? (2048) Pick your key size. I choose 1024. Requested keysize is 1024 bits Please specify how long the key should be valid. 0 = key does not expire <n> = key expires in n days <n>w = key expires in n weeks <n>m = key expires in n months <n>y = key expires in n years Key is valid for? (0) For most of us, a key that does not expire is fine. You can choose what fits best for you. Key does not expire at all Is this correct? (y/N) y You need a user ID to identify your key; the software constructs the user ID from the Real Name, Comment and Email Address in this form: "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>" Real name: Email address: Comment: Complete the above fields with your information. You selected this USER-ID: "Tiberiu Barbu (This is my GPG key) <email@host.com>" Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? Confirm your information with (O)kay.
You need a Passphrase to protect your secret key. Enter passphrase: GnuPG needs a passphrase to protect your secret key and subordinate secret keys. You can pick any length for your passphrase, and you can also skip the passphrase step. We need to generate a lot of random bytes. It is a good idea to perform some other action (type on the keyboard, move the mouse, utilize the disks) during the prime generation; this gives the random number generator a better chance to gain enough entropy. ....+++++ ....+++++ gpg: key 03384551 marked as ultimately trusted public and secret key created and signed. gpg: checking the trustdb gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u pub 1024R/03384551 2012-09-13 Key fingerprint = 9DD6 5465 FF09 3B8B AF51 CAAA 5BD8 7B92 0338 4551 uid Tiberiu Barbu (This is my GPG key) <email@host.com> sub 1024R/E4EFB2B4 2012-09-13 Congratulations. Now you have a public and a secret key. Keep your secret key in a safe place. You can view your key list: tibi@tbarbu-pc:~$ gpg --list-keys /space/home/tibi/.gnupg/pubring.gpg ----------------------------------- pub 1024R/03384551 2012-09-13 uid Tiberiu Barbu (This is my GPG key) <email@host.com> sub 1024R/E4EFB2B4 2012-09-13 The first line is the path to your public keyring file (here you can import other public keys - from your friends - and use them when you want to encrypt a message for one of your friends). You also have a secret keyring file where your secret key is stored. You can view it with: tibi@tbarbu-pc:~$ gpg --list-secret-keys /space/home/tibi/.gnupg/secring.gpg ----------------------------------- sec 1024R/03384551 2012-09-13 uid Tiberiu Barbu (This is my GPG key) <email@host.com> ssb 1024R/E4EFB2B4 2012-09-13 The third line contains the number of bits in the key, 1024R, and the unique key ID 03384551, followed by the creation date. The fourth line contains information about the person who owns that key. All keys have a fingerprint. This fingerprint confirms that the key is from the person you expect. tibi@tbarbu-pc:~$ gpg --fingerprint /space/home/tibi/.gnupg/pubring.gpg ----------------------------------- pub 1024R/03384551 2012-09-13 **Key fingerprint = 9DD6 5465 FF09 3B8B AF51 CAAA 5BD8 7B92 0338 4551** uid Tiberiu Barbu (This is my GPG key) <email@host.com> sub 1024R/E4EFB2B4 2012-09-13 Now I can export my key and freely distribute this file by sending it to friends, posting it on a website or whatever. I can also register my key with any public key server so that friends can retrieve it without having to contact me. The option --armor produces ASCII output instead of a binary file, so it can easily be copied and pasted into an email; otherwise the binary file can not be opened in an editor. tibi@tbarbu-pc:~$ gpg --armor --output tibi.asc --export 03384551 Suppose Alice wants to send me the message Hello Tiberiu. Alice needs my public key, which is used to encrypt the plaintext message M. First, Alice must import my public key into her keyring: alice@home:~$ gpg --import tibi.asc gpg: key 03384551: public key "Tiberiu Barbu (This is my GPG key) <email@host.com>" imported gpg: Total number processed: 1 gpg: imported: 1 (RSA: 1) Now Alice composes the message, then encrypts it with my public key: alice@home:~$ echo "Hello Tiberiu" > message.txt alice@home:~$ gpg --armor --encrypt --output message.asc --recipient 'Tiberiu' message.txt A new file named message.asc is now created.
Alice can send me this file. alice@home:~$ cat message.asc -----BEGIN PGP MESSAGE----- Version: GnuPG v1.4.10 (GNU/Linux) hIwDKyvxP+TvsrQBA/9F+PmSWDC1g8W3QXbs7EcmQs7s5ogfoowBlnTBT7m1oa51 nlsYlXjb5oW1mUzv57YSYbzlZ04i1CAQ70U6TF5bKfMRlk7djS/dGLMbQ1HQ5KIZ awuCAqHgtSJfbDWR7Xkn1rOXf4yBpfQslVA985pIRAVgj4YDe2c3jKFAEVx1k9JU AUwL9KI4xDLuqlcw46AMGi4kaVkMAupMyJvprzi8gJIV03dYAQkqxmTsWNF9v6G3 b24kv0jSyAQFMkNarjZiuCf30J8eWaeGzhessqghSC7Vo35T =Iasq -----END PGP MESSAGE----- The above is the encrypted message. Alice wants to assure me that she is the author of the message. Thus, she signs the message with her private key. This is because anyone could use my public key to send me any message. alice@home:~$ gpg --armor --output message.sig --detach-sign message.txt You need a passphrase to unlock the secret key for user: "Alice <alice@home.com>" 1024-bit RSA key, ID BD806C61, created 2012-09-13 Enter passphrase: ***** This is the detached signature of the message, made with Alice's private key. alice@home:~$ cat message.sig -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) iJwEAAECAAYFAlBR8D4ACgkQBukbhL2AbGHLcAQAs4ou17+K9X1SS3P19PlO8OLO jLLPEWq3+I8cU0gAXtB4U5SoTs66ZhlHBUtwMCwnLv7HBSQVnkdiRoRrxS7wtw5E DhDWoioc4ZpGsoRsohCsGATSftUv5JHOXEEKsuOZ1pU8Icv2YLcSs9x+mLhxkbCm 6worbXhtndC4Xm3YsWc= =12ip -----END PGP SIGNATURE----- Alice now sends me the two files: message.asc (the encrypted message) and message.sig (the signature proving her identity). Decrypt the message from Alice: tibi@tbarbu-pc:~$ gpg --output message_from_alice.txt --decrypt message.asc gpg: encrypted with 1024-bit RSA key, ID 4255F703, created 2012-09-13 "Tiberiu (This is my PGP key) <email@host.com>" tibi@tbarbu-pc:~$ cat message_from_alice.txt Hello Tiberiu How can I be sure this message comes from Alice? I have to import Alice's public key, which she previously sent me in an e-mail. tibi@tbarbu-pc:~$ gpg --import alice.asc gpg: key BD806C61: public key "Alice <alice@home.com>" imported gpg: Total number processed: 1 gpg: imported: 1 (RSA: 1) I can now verify the authenticity of Alice's message: tibi@tbarbu-pc:~$ gpg --verify message.sig message_from_alice.txt gpg: Signature made Thu 13 Sep 2012 05:48:55 PM EEST using RSA key ID BD806C61 gpg: Good signature from "Alice <alice@home.com>" If the verification fails, here is how it looks: tibi@tbarbu-pc:~$ gpg --verify message.sig message_from_alice.txt gpg: Signature made Thu 13 Sep 2012 05:39:58 PM EEST using RSA key ID BD806C61 gpg: BAD signature from "Alice <alice@home.com>" So what makes GnuPG differ from the digital certificates discussed earlier, if both of them use the same algorithms and the same hash functions? And how can I be sure that Alice's public key is the real one? Web of trust is the concept used in GnuPG. Here we do not need a centralized Certificate Authority (CA), because the web of trust is a decentralized model where people trust each other (and their keys). You sign your own keys; you are your own CA. You will be able to trust people you have met, and they have friends too, so you can trust their friends, and so on. Think of a big community where people trust each other. The following picture shows how this works. How can you trust people, and how can people trust you? If I want to trust Bob because yesterday I went out to a party and interacted with new friends, then I ask Bob to share his public key with me.
I import his key and check the fingerprint and UID, then I trust him by signing his key: tibi@tbarbu-pc:~/.gnupg$ gpg --import bob.asc gpg: key 8FA52AD1: public key "Bob Michael <bob@michael.com>" imported gpg: Total number processed: 1 gpg: imported: 1 (RSA: 1) tibi@tbarbu-pc:~$ gpg --edit-key bob@michael.com gpg (GnuPG) 1.4.10; Copyright (C) 2008 Free Software Foundation, Inc. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. pub 1024R/8FA52AD1 created: 2012-09-13 expires: never usage: SC trust: unknown validity: unknown sub 1024R/2786E92D created: 2012-09-13 expires: never usage: E [ unknown] (1). Bob Michael <bob@michael.com> Command> sign pub 1024R/8FA52AD1 created: 2012-09-13 expires: never usage: SC trust: unknown validity: unknown Primary key fingerprint: A2F8 0339 479B 6978 0516 9214 10AE FD14 8FA5 2AD1 Bob Michael <bob@michael.com> Are you sure that you want to sign this key with your key "Tiberiu (This is my GPG key) <email@host.com>" (03384551) Really sign? (y/N) y Command> quit Save changes? (y/N) y tibi@tbarbu-pc:~/.gnupg$ gpg --list-sigs /space/home/tibi/.gnupg/pubring.gpg ----------------------------------- pub 1024R/03384551 2012-09-13 uid Tiberiu (This is my GPG key) <email@host.com> sig 3 E4EFB2B4 2012-09-13 Tiberiu (This is my GPG key) <email@host.com> sub 1024R/28847259 2012-09-13 sig E4EFB2B4 2012-09-13 Tiberiu (This is my GPG key) <email@host.com> pub 1024R/8FA52AD1 2012-09-13 uid Bob Michael <bob@michael.com> sig 3 8FA52AD1 2012-09-13 Bob Michael <bob@michael.com> sig ECB916DC 2012-09-13 Tiberiu (This is my GPG key) <email@host.com> sub 1024R/2786E92D 2012-09-13 sig 8FA52AD1 2012-09-13 Bob Michael <bob@michael.com> After signing, he only has to send his newly signed key to all his friends or to a public key server. GnuPG also offers the possibility to send our friends messages that are not encrypted (because sometimes it is not a must to secure the communication) but only signed. Though the message is sent in the clear, it can be signed to use the authentication feature provided by GPG: the receiver can trust the content and be sure it comes from a reliable source. We can do this as follows: tibi@tbarbu-pc:~$ echo "Hello world. This is a plaintext" > clear_message.txt gpg --clearsign clear_message.txt A new file clear_message.txt.asc is created, containing the following: tibi@tbarbu-pc:~$ cat clear_message.txt.asc -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hello world. This is a plaintext -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) iJwEAQECAAYFAlBW5u8ACgkQo0DCbuy5FtxmiAQApRWX9/D48NnX8OEVzf4rrCFw agE5U/0MUyp5zLTU6o1pM3Oj5qDrJCeUjmHfworLFw/rGy5wcfU0S6plgWmvrZMZ roT/qVfAyNwDijRZb/INy8UEBd9am+8LyCjC1pJgKv5HqBbvyDNYTcB/EBa2YjUU 5iP5s3AbfsA0Gb5by30= =Mrjv -----END PGP SIGNATURE----- As you can see, the message is signed and its authenticity can be verified: alice@home:~$ gpg --verify clear_message.txt.asc gpg: Signature made Mon 17 Sep 2012 12:01:35 PM EEST using RSA key ID ECB916DC gpg: Good signature from "Tiberiu (This is my GPG key) <email@home.com>" That's all folks. Thank you and I hope you find this guide useful. Sursa: http://techblog.rosedu.org/from-0-to-cryptography.html
  11. Optimization by natural selection Evolutionary algorithms are a heuristic-based approach to solving problems that cannot be easily solved in polynomial time, such as classically NP-Hard problems, and anything else that would take far too long to exhaustively process. When used on their own, they are typically applied to combinatorial problems; however, genetic algorithms are often used in tandem with other methods, acting as a quick way to find a somewhat optimal starting place for another algorithm to work off of. The premise of an evolutionary algorithm (to be further known as an EA) is quite simple given that you are familiar with the process of natural selection. An EA contains four overall steps: initialization, selection, genetic operators, and termination. These steps each correspond, roughly, to a particular facet of natural selection, and provide easy ways to modularize implementations of this algorithm category. Simply put, in an EA, fitter members will survive and proliferate, while unfit members will die off and not contribute to the gene pool of further generations, much like in natural selection. Context In the scope of this article, we will generally define the problem as such: we wish to find the best combination of elements that maximizes some fitness function, and we will accept a final solution once we have either run the algorithm for some maximum number of iterations or reached some fitness threshold. This scenario is clearly not the only way to use an EA, but it does encompass many common applications in the discrete case. Initialization In order to begin our algorithm, we must first create an initial population of solutions. The population will contain an arbitrary number of possible solutions to the problem, oftentimes called members. It will often be created randomly (within the constraints of the problem) or, if some prior knowledge of the task is known, roughly centered around what is believed to be ideal. It is important that the population encompasses a wide range of solutions, because it essentially represents a gene pool; ergo, if we wish to explore many different possibilities over the course of the algorithm, we should aim to have many different genes present. Selection Once a population is created, members of the population must now be evaluated according to a fitness function. A fitness function is a function that takes in the characteristics of a member, and outputs a numerical representation of how viable of a solution it is. Creating the fitness function can often be very difficult, and it is important to find a good function that accurately represents the data; it is very problem-specific. Now, we calculate the fitness of all members, and select a portion of the top-scoring members. Multiple objective functions EAs can also be extended to use multiple fitness functions. This complicates the process somewhat, because instead of being able to identify a single optimal point, we instead end up with a set of optimal points when using multiple fitness functions. The set of optimal solutions is called the Pareto frontier, and contains elements that are equally optimal in the sense that no solution dominates any other solution in the frontier. A decider is then used to narrow the set down to a single solution, based on the context of the problem or some other metric. Genetic Operators This step really includes two sub-steps: crossover and mutation.
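Before going into the two sub-steps in detail, here is a minimal Python sketch (my own illustration, not from the source article) that ties the four phases together by evolving bit strings toward a simple fitness function, the count of ones; the crossover and mutation it uses are described in the next paragraphs:

import random

TARGET_LEN = 20          # length of each candidate bit string
POP_SIZE = 30
MUTATION_P = 0.05        # per-bit mutation probability
MAX_GENS = 200           # termination: maximum number of iterations

def fitness(member):
    # Toy fitness function: maximize the number of 1 bits.
    return sum(member)

def crossover(a, b):
    # Single-point crossover: the child takes a prefix from a and a suffix from b.
    point = random.randint(1, TARGET_LEN - 1)
    return a[:point] + b[point:]

def mutate(member):
    # Flip each bit with small probability to introduce new genetic material.
    return [bit ^ 1 if random.random() < MUTATION_P else bit for bit in member]

# Initialization: a random population of members.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]

for gen in range(MAX_GENS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:   # termination: fitness threshold reached
        break
    parents = population[:2]                   # selection: keep the top 2 members
    # Genetic operators: build the next generation from the selected parents.
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(POP_SIZE - 2)]

print(gen, fitness(population[0]), population[0])

Keeping the two parents in the next generation (a form of elitism) is one of many possible design choices; real EAs often select a larger pool of parents and use tournament or roulette-wheel selection instead.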
After selecting the top members (typically top 2, but this number can vary), these members are now used to create the next generation in the algorithm. Using the characteristics of the selected parents, new children are created that are a mixture of the parents’ qualities. Doing this can often be difficult depending on the type of data, but typically in combinatorial problems, it is possible to mix combinations and output valid combinations from these inputs. Now, we must introduce new genetic material into the generation. If we do not do this crucial step, we will become stuck in local extrema very quickly, and will not obtain optimal results. This step is mutation, and we do this, quite simply, by changing a small portion of the children such that they no longer perfectly mirror subsets of the parents’ genes. Mutation typically occurs probabilistically, in that the chance of a child receiving a mutation as well as the severity of the mutation are governed by a probability distribution. Termination Eventually, the algorithm must end. There are two cases in which this usually occurs: either the algorithm has reached some maximum runtime, or the algorithm has reached some threshold of performance. At this point a final solution is selected and returned. Example Now, just to illustrate the result of this process I will show an example of an EA in action. The following gif shows several generations of dinosaurs learning to walk by optimizing their body structure and applied muscular forces. From left to right the generation increases, so the further right, the more optimized the walking process is. Despite the fact that the early generation dinosaurs were unable to walk, the EA was able to evolve the dinosaurs over time through mutation and crossover into a form that was able to walk. Sursa: https://towardsdatascience.com/introduction-to-evolutionary-algorithms-a8594b484ac
  12. "One must acknowledge with cryptography no amount of violence will ever solve a math problem." Cryptocurrencies like Bitcoin and Ethereum use a peer-to-peer decentralized system to conduct transactions. Since the entire process is online, there are fears that the transactions may be volatile and hackable, but they are not, because cryptocurrencies use cryptography to make their transactions extremely secure. Digital Signatures One of the most important cryptographic tools used in cryptocurrency is the concept of signatures. What is a signature in real life and what are its properties? Imagine a paper that you have signed with your signature; what should a good signature do? It should provide verification. The signature should be able to verify that it is you who actually signed the paper. It should be non-forgeable. No one else should be able to forge and copy your signature. Non-repudiation. If you have signed something with your signature, then you should not be able to take it back or claim that someone else has done it instead of you. In the real world, however, no matter how intricate the signature, there are always chances of forgery, and you cannot really verify signatures using simple visual aids; it is very inefficient and unreliable. Cryptography gives us a solution to this by means of "digital signatures", which are done via the use of "keys". So, what are keys? And how are they used in the blockchain? Before we explore those, it is important to know more about basic cryptography. What is Cryptocurrency Cryptography? Cryptography is a method of using advanced mathematical principles in storing and transmitting data in a particular form so that only those for whom it is intended can read and process it. Cryptography has been used for thousands and thousands of years by people to relay messages without detection. In fact, the earliest use of cryptography was seen in a tomb from the Old Kingdom in Egypt, circa 1900 BCE. Cryptography has existed in modern society in one way or another. Encryption is one of the most critical tools used in cryptography. It is a means by which a message can be made unreadable for an unintended reader and can be read only by the sender and the recipient. In modern technology, there are three forms of encryption that are widely used: symmetric cryptography, asymmetric cryptography, and hashing. Symmetric Cryptography Symmetric cryptography is the earliest cryptographic method known to man. The concept is very simple, and if we were to break it down into steps, this is what it would look like: You have a message M that you want to send over to your friend. You encrypt the message with a Key and get a cipher text C. Your friend gets your cipher text C. She then decrypts the cipher text using the same Key to retrieve message M. If we were to show a visual representation of the process, this is what it would look like. There are two types of symmetric cryptography: Stream Ciphers. Block Ciphers. What are stream ciphers? A stream cipher basically means using a fixed key which replaces the message with a pseudorandom string of characters; it is basically the encryption of each letter one at a time. We are going to discuss 3 kinds of stream ciphers to give you an idea of how stream ciphers work: One-time pad with alphabets. One-time pad with XOR gate. Linear feedback shift register.
One-time pad with alphabets For doing this encryption we need to have a key which has the same number of characters as the message, and it must be used one time only (hence the term "one-time pad"). Suppose for this example we are going to send the message "MEET ME OUTSIDE" to our friend Bob. But we don't want anyone intercepting our message. This is why Bob and I have decided to use a one-time pad which goes like this: "B D U F G H W E I U F G W" As you can see, the pad has the same number of characters as the message, i.e. 13. Now, this is a very simple example of the one-time pad; we are using it because we feel it is the best example to use to understand this tactic. Now, one more thing you need to take note of: every letter will be replaced by its numerical equivalent during the process. The numerical mapping goes like this: each letter is mapped to its position in the alphabet, counting from A = 0 up to Z = 25. Ok, now that we have built the foundations, let's move on to the actual process. During the process, there will be 6 pieces of data that we need, which are: Original Message (OM): The original message that we are passing through. In this case "MEET ME OUTSIDE". Numerical Original Message (NOM): The numerical equivalent of the original message. OTP: The One-time Pad. Numerical OTP (NOTP): The numerical equivalent of the OTP. NCT: The numerical cipher text, which is NOM + NOTP mod 26. CT: The cipher text, which is the alphabetical equivalent of the numbers in the NCT. So, we need to send the message "MEET ME OUTSIDE" and we need to use the one-time pad to encrypt it. The encryption process So, let's start off by putting the message "MEET ME OUTSIDE" in the OM row. Ok, so what happened here? Next, we used the numerical mapping to get the numerical equivalent of each letter; it's just simple substitution, we take these values and put them in the NOM row. In the OTP row we put in the key that we were already given, which is, in case you have forgotten, "B D U F G H W E I U F G W". Now, in the NOTP row we used the same number mapping and found the equivalent numerical values of the key, which are: "1, 3, 20, 5, 6, 7, 22, 4, 8, 20, 5, 6, 22". In the new row, for the Numerical cipher text (NCT), we add the NOTP and NOM and mod the result by 26 to get our NCT. That's how you find the values for NCT, and then you use the mapping again to find the corresponding letters, which are: "N H Y Y S L K Y B M N J A". So, finally, the message "MEET ME OUTSIDE" turns into the pseudo-random series of characters "N H Y Y S L K Y B M N J A". That is how the encryption process works. The decryption process Now we will see how we can decrypt the message using the exact same key. Let's see the data that Bob has with him: He has the encrypted message that he has gotten from me. He has the key that both of us share. He has the mapping table to find the numerical equivalents. So, how will he decrypt the message using this data? He will map the numerical values of both the key and the encrypted message to get NCT and NOTP. He will then calculate the NOM (numerical value of the original message) by doing this calculation: NOM = (NCT - NOTP) mod 26. He will use the mapping table to retrieve the corresponding letters. So, let's see how the NOM calculation works. Now, if we map the NOM to its alphabetical equivalent using the mapping table, then we get "MEET ME OUTSIDE". And just like that, the message is encrypted and decrypted using the same key.
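Here is a minimal Python sketch (my own illustration, not from the article) that reproduces the encryption and decryption above with the A = 0 … Z = 25 mapping:

# Toy one-time pad over the alphabet (A=0 ... Z=25), reproducing the example above.
def to_nums(text):
    return [ord(c) - ord('A') for c in text]

def to_text(nums):
    return ''.join(chr(n + ord('A')) for n in nums)

message = "MEETMEOUTSIDE"
pad     = "BDUFGHWEIUFGW"      # must be as long as the message, and used only once

# Encryption: NCT = (NOM + NOTP) mod 26
cipher = to_text([(m + k) % 26 for m, k in zip(to_nums(message), to_nums(pad))])
print(cipher)                  # NHYYSLKYBMNJA

# Decryption: NOM = (NCT - NOTP) mod 26
plain = to_text([(c - k) % 26 for c, k in zip(to_nums(cipher), to_nums(pad))])
print(plain)                   # MEETMEOUTSIDE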
One-time pad with XOR gate XOR or "Exclusive OR" is a logic gate. What is a logic gate? A logic gate usually takes in 2 inputs and gives out 1 output. The inputs and outputs are binary values, meaning they can be 1 or 0. A XOR logic gate takes in 2 binary inputs and gives out a high output ONLY when the inputs are different. Meaning, if A and B are input to a XOR gate, then the output C will be 1 ONLY when A is not equal to B. The XOR gate looks like this: This is what the XOR truth table looks like: The encryption process Suppose you have some plain text data which you want to send to your friend Alice. First, you'll convert it to its binary form. Suppose the message that you have is this: 00011110 Now you have the key, the key that you share with your recipient, and suppose you have passed the key through an algorithm which gives you the equivalent binary result: 01001010. So now that you have the key, you are going to XOR each pair of corresponding individual bits to get the resulting cipher text output. Cipher Text = Plain Text XOR Key So if you XOR the two, the result you get is: "01010100" This is the cipher text that Alice will get from you. The decryption process So now, how will Alice decrypt your message and retrieve the original one? This is the data that she has: So what is she going to do? It is simple. She will simply XOR the key and the cipher text and she will retrieve the original message! See for yourself: And just like that, she retrieves the original message. Linear feedback shift register What is a linear feedback shift register? It is a function whose future output completely depends on its earlier (or current) state. This will become clearer as you keep reading, so don't get scared off! The idea of this style of stream cipher is to predetermine with your recipient a key which will be a linear feedback shift register function, which will then be used to generate the keystream. Suppose you spoke to your friend Bob and determined that this is the formula that you both want to go with (credit to Daniel Rees from Youtube for this formula): E(i+3) = E(i+1) + 2E(i+2) mod 26. And let's also assume that prior to sending this message you and Bob determined that E(1) = 2 and E(2) = 4. Now you can see that in this equation, all future outputs are dependent upon the previous outputs. So, suppose the message that you want to send to Bob is "MEET ME". Since there are 6 characters, we need to determine 6 values of E() to act as the key. We have already predetermined the values of E(1) and E(2). Now we need to calculate E(3) to E(6). E(3) = E(1) + 2E(2) mod 26 = 10. E(4) = E(2) + 2E(3) mod 26 = 24. E(5) = E(3) + 2E(4) mod 26 = 6. E(6) = E(4) + 2E(5) mod 26 = 10. So, now that we have the keys, let's start the encryption. The encryption process So now that we have the key and the message, let's create the table. To get the numerical cipher text, you add the key to the corresponding numerical value of each letter (mapped from the same table that we have already seen before) and mod the result by 26. So you get: Now use the mapping table again to find the corresponding letters and you get "OIORSO". That's the encrypted message. The decryption of this message is really hard, especially if you don't have the key. An expert might spot a pattern, though. You will need computers to break this code. Examples of Stream Ciphers used in the real world.
The Rivest Cipher 4, aka RC4: Used in WEP (Wired Equivalent Privacy) for wireless network security. It was also an option in TLS/HTTPS for encrypting web traffic, but since it has been broken so many times it is not recommended for use anymore.

A5/1: Used for encrypting GSM (Global System for Mobile communication) phone data and communication. Edward Snowden's leaks revealed that the NSA routinely breaks GSM for surveillance purposes, so it is not a secure mode of encryption anymore.

So, that is pretty much it about stream ciphers, time to move on to block ciphers.

What are block ciphers?

Block ciphers are a form of symmetric cryptography which use a key of a fixed length to encrypt a block of fixed length. Let's start by checking out a very common substitution cipher that you must have seen before. If someone were to tell you that they got a message which says "EFBD" and wanted you to decrypt it and get the original message, how would you do it? You would simply look at the table, see which letters correspond to which, and substitute, right? So "EFBD" is the cipher text for "FACE". Now let's compare the plain text and the cipher text alphabets:

Plain: A B C D E F
Cipher: F A B C D E

As you can see, the cipher alphabet is basically the plain alphabet shifted to the right by one position. That, in essence, is what a block cipher is: given an input plain text and a key, it generates a unique cipher text. One more thing that is extremely important and should be noted: given the key, anyone can derive the cipher text from the plain text and vice versa. The examples we are giving here are all extremely simplistic; real block ciphers work on HUGE chunks of data.

Another interesting property of a block cipher is that if the key changes then the output cipher text changes pretty drastically. Let's do a test with the data we have right now. We have 3 keys for 3 different cipher texts: with key 1 we shift to the right once, with key 2 we shift to the right twice, with key 3 we shift to the right thrice. So, let's see what happens when we pass the input "FACE" through all these different ciphers:

When key = 1, FACE becomes EFBD
When key = 2, FACE becomes DEAC
When key = 3, FACE becomes CDFB

As you can see, the output cipher text changes every time you change the key. In this example we have very little data; imagine doing this with HUGE amounts of data, the output will change drastically every single time.

There are two rules for a block cipher to be considered valid: you must be able to derive the plain text from the cipher text and vice versa given a key, and the function must be efficiently computable. There is one more important thing that you need to take note of when it comes to block ciphers: the block sizes are fixed, so the input plain text needs to be the same size as the block. If the input is bigger than the block then it needs to be broken down into blocks of the correct size; if it is smaller, then it needs to be padded with some junk data to fit the block size.

Examples of block ciphers

Data Encryption Standard (DES): block size of 64 bits, key size of 56 bits. Was the US government standard till 2001.
Advanced Encryption Standard (AES): 128-bit block size, 128, 192 or 256 bit key size.
Considered very secure and widely used nowadays.

The advantage of symmetric cryptography

Even though symmetric cryptography has some major problems (which we will discuss in a bit), the biggest advantage of symmetric cryptography is that it requires very little overhead. You just need to share one single key with your recipient to go forward with this method. Even now, a lot of software uses this method in conjunction with asymmetric cryptography to provide fast and efficient encryption/decryption services.

The problems with symmetric cryptography

Even though the overhead is significantly lower, there are a lot of problems with symmetric cryptography.

Problem #1: The shared key. The fact that the encryption and decryption are done with one single key is a huge problem. First and foremost, the sharing of the key needs to be done in a very secure manner; if anyone gets hold of the key then all your data will be compromised.

Problem #2: It is not scalable. Another huge problem with symmetric cryptography is that it is not scalable at all. Suppose Alice runs an information center and sends data via symmetric key cryptography. It's ok if she is only dealing with 3-4 clients, but the more clients she gets, the more unique secret keys she will have to handle and take care of. Eventually, it will become too much to handle.

Because of these vulnerabilities of symmetric key cryptography, a solution was needed, and in the 1970s it finally came.

James Ellis's breakthrough

In 1970, British mathematician and engineer James Ellis thought of an idea which was based on a simple concept: what if encryption and decryption were inverse operations based on 2 different keys? In traditional cryptography, i.e. symmetric cryptography, the key had to be shared with the intended recipient for them to decrypt the message, but this presented the very real possibility of an attacker getting their hands on the key. Ellis envisaged that the receiver of the message couldn't be a passive party; they had to have a "padlock" and a "key" for themselves. The padlock could be sent to anyone in the world, but the key had to be kept private. So, anyone can send a message to the receiver by locking it with their padlock, and since only the receiver has the key, only they can open it. Now, this was the theory; there needed to be a practical form of it, and that came because of two brilliant principles: the trapdoor function and the Diffie-Hellman key exchange.

What is the trapdoor function?

A trapdoor function, aka a one-way function, is a function where it is easy to go from one state (the domain) to the other state (the range), but it is hard to go back from the range to the domain unless you have knowledge of a secret, which is called the trapdoor. Trapdoor functions are based on the idea of keys: the public key (K) is used to go from the domain to the range, and in order to come back to the domain from the range we have to use the trapdoor, also known as the private key (k). It is also implied that the private key and public key are mathematically related to each other; they have to be related via another one-way function f() such that K = f(k), so that it is infeasible to determine the private key from the public key.

A simple example of this is the multiplication of large numbers. Suppose you have two numbers 171 and 118; then it is simple to determine that 171 * 118 = 20178.
However, if you just know 20178 then it is hard for you to determine what the initial numbers were unless you have a key with you, in this case the knowledge of just one of the two numbers, to determine the second one. What is the Diffie-Hellman key exchange? Suppose, there are two people Alice and Bob and they want to attack a bank. However, they are on either sides of the bank and they can only communicate with each other via a shared line which is being tapped by the bank. Something like this. Keep in mind, everything that Alice and Bob say to each other will be eavesdropped upon by the bank. So, how can they both decide on a date to attack the bank without the bank getting to know about it and without Alice and Bob explicitly exchanging that information? This conundrum can be answered by the Diffie-Hellman key exchange; it is a concept by which two parties can get hold of secret information without sharing it. To understand how the Diffie-Hellman works, we need to use one of the most famous applications of this theory, the secret colour exchange. For this there are 3 things that you need to keep in mind: Alice and Bob both publicly agree that yellow is going to be the common paint that they are both going to use. Alice then secretly keeps to herself that she is also going to use orange along with yellow. Bob secretly decides that he is going to use aqua along with yellow. Stage One Since it was publicly declared that yellow is going to be the colour of choice: Bank: Has Yellow Alice: Has Yellow Bob: Has Yellow Stage Two Now Alice mixes in her private colour aka orange with yellow and gets a composite colour which we will call CA. At the same time, Bob mixes his private colour aqua with yellow and creates composite colour CB. So, at the end of stage two this is what things look like: Bank: Yellow Alice: CA Bob: CB Stage Three Now, Alice and Bob will send each other their respective colours, which will promptly get tapped by the bank. However, the bank now faces a problem. Colour combinations are a trapdoor function. While it is easy for someone to combine two colours and generate a third colour, it is infeasible for them to determine the first two colours from the given third colour. So, the bank will get hold of CA and CB but will have no idea which are the colours that has gone into its creation. So, this is what things are looking like right now: Bank: Yellow, CA, CB. Alice: CB Bob: CA. Stage Four Now, Alice and Bob are once again going to mix their secret colours into the mix that they have received from the other person, so now both of them are going to have a mix of yellow, orange and aqua which is brown. The bank, however, will only have CA and CB because they have no idea what the secret colours are. So, this is what things look like now: Bank: Yellow, CA and CB. Alice: Brown. Bob: Brown. And this is where the trick lies, by not revealing their secret colours, both Bob and Alice have, in their possession, the colour brown, even though they never explicitly exchanged brown with each other. This is what the diagram of this entire exchange looks like: This is the representation of the Diffie-Hellman exchange, but a mathematical means was needed to make sure that there could be practical applications of this as well. For this reason, the modulus function was used. The mathematical form of the Diffie-Hellman exchange Suppose there is a generator g for a finite field of size n. And in that field, we choose two random values a and b. 
It will be hard for an attacker to determine g^ab given only g, g^a and g^b. This is the condition which activates the trapdoor function. Given this condition, two parties can exchange messages and reach the same conclusion without explicitly communicating it with each other. So, mathematically this is what happens. Alice chooses a random value "a" from the field n and determines a message M1 such that: M1 = g^a mod n. Similarly, Bob chooses a random value "b" from the field n and creates message M2 such that: M2 = g^b mod n. Both Alice and Bob can now relay their messages to each other. Alice now determines the special message K by doing the following: K = M2^a mod n = g^ab mod n. Bob now determines the same message K by: K = M1^b mod n = g^ab mod n. So, both Alice and Bob reached the same conclusion without explicitly sharing this information. This Diffie-Hellman key exchange was invaluable in the formation of asymmetric cryptography.

What is asymmetric cryptography?

Asymmetric cryptography utilizes two keys, a public key and a private key, to encrypt and decrypt data. The use of one key cancels out the use of the other. There are two real world uses of asymmetric cryptography that we will look into in this guide, and both are important for their own reasons: the Rivest-Shamir-Adleman algorithm aka RSA, and elliptical curve cryptography.

What is the RSA algorithm?

The RSA algorithm is the most widely used and popular asymmetric cryptographic algorithm in history. It is named after MIT professors Rivest, Shamir and Adleman, who discovered the algorithm. Now, how does it work? The idea is derived from the breakthroughs that Diffie and Hellman had. So, these are the variables that we will work with: suppose you have the secret message "m". "m" raised to the power of a number e, and then the modulus of that with a number N, will give you the cipher text c. Basically: m^e mod N = c. Take note, it is EASY to perform this function to get the output c, BUT given only c, e and N it is difficult to get the message "m" back; it would require a lot of trial and error. This is our one-way function. But now, the idea of the trapdoor function is to have a key which will make the reverse process (the decryption) simple for the recipient. So, for that we will need to find a variable "d" which will make this process possible: c^d mod N = m. Now keep in mind, c = m^e mod N, so on substituting: (m^e)^d mod N = m, OR m^(e*d) mod N = m. So, in the above equations: Public key = e and N. Private key = d.

Now, before we even begin to see the method behind the madness, let's do a simple calculation to see how the entire process works. (Shout out to Anthony Vance's youtube channel for this example.) Suppose the message that you have to send is 42. In other words, m = 42. Along with that: e = 17, N = 3233, d = 2753.

The encryption process: c = m^e mod N. Using simple substitution: c = 42¹⁷ mod 3233 = 2557. So the cipher text is 2557.

The decryption process: let's do c^d mod N. 2557²⁷⁵³ mod 3233 gives us back the value of m, that is 42. Genius, isn't it? (You can sanity-check these toy numbers with the short sketch below.)

Now, remember when we talked about trapdoor functions we came to the conclusion that the private and public key need to be mathematical derivatives of each other, in a way that: F(private key) = public key, where F() is another one-way function. It should be difficult for anyone to determine the private key from the public key.
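The toy numbers above are easy to check. The sketch below is only an illustration of the arithmetic (real RSA uses padding schemes and far larger moduli); Python's built-in three-argument pow() does the modular exponentiation, and the last check anticipates the phi(N) discussion that follows further down:

m, e, d, N = 42, 17, 2753, 3233

# Encryption: c = m^e mod N
c = pow(m, e, N)
print(c)               # 2557

# Decryption: m = c^d mod N
print(pow(c, d, N))    # 42

# The trapdoor: N factors as 61 * 53, so phi(N) = 60 * 52 = 3120,
# and e*d = 17 * 2753 = 46801 = 15 * 3120 + 1, i.e. e*d leaves remainder 1 mod phi(N)
print((e * d) % ((61 - 1) * (53 - 1)))   # 1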
In fact, it should be so difficult that it will take the world’s most powerful computer decades upon decades to derive one from the other. To answer this conundrum, we go back centuries and meet our next genius, Euclid. Euclid and prime factorization Euclid found out centuries ago that any number > 1 can be written as a product of prime numbers. Eg. 15 can be written as 5*3. 255 can be written as 5*17*3. Let’s go back to our two equations: C= m^e mod N. Here, N is the key in the trapdoor function. While N maybe publicly known it should be hard to determine prime factors that make up the number N. If you know the prime factors, then it is child’s play to discover the product N. Eg. You can use your web browser to multiply two huge numbers and find the product in less than a second: It took less than a second, 0.22 seconds, to do the calculation. And the bigger the number gets, it will take a little more time, but still, the calculations will be done super fast. However, if you input a huge number and ask your computer to find its prime factors then it may take days, months and even years to find the prime factors. This is the trapdoor function that cryptographers used to determine the value of N. This is basically, the heart of the trick. This is what you have to do to use RSA algorithm: First, generate a big random prime number P1. Generate a second big random prime number P2. Find N by calculating P1 and P2. Hide the values of P1 and P2 and make N public. N should be a huge number and it will take the most sophisticated machines in the world decades to find the values of P1 and P2. So to summarise, N is the trapdoor and its prime factors P1 and P2 are the keys to the trapdoor. Ok, so now we have determined how N is calculated and the trapdoor that works in it. But we still haven’t determined the value of “e” and “d” and we still haven’t seen how the private key is derived from the public key. In order to generate all these remaining values, we need to find a function that depends on knowing the factorization of N. And for that we need to go and visit our next genius, Leonhard Euler. Euler and breakability In 1760, Swiss mathematician Leonhard Euler did some path breaking studies. He studied the nature of numbers and more specifically the breakability of the numbers which he called the phi function. Basically given phi(N) where N is a random integer, the value of N will be the number of numbers between 1 and N which do not share any common factors with N. So, if N is 8 then: The numbers between 1–8 are: 1,2,3,4,5,6,7 and 8. Among these numbers, only 1,3,5 and 7 don’t share any factors with 8 except 1. Meaning, phi(8) = 4. Now, calculating the phi function is difficult except for one case. To know this, check out the following graph. The graph tracks the distribution of phi values over integers upto 1000. See that straight green line at the top which is conveniently arranged? That is the phi of prime numbers. Since the definition of a prime number is that it is unfactorizable apart from by itself, for any prime number p the phi(p) = p-1. Let’s see this in practice. Suppose you have a prime number 7. The numbers between 1 and 7 are: 1,2,3,4,5,6,7. The only number that shares a factor with 7 in this series is…7! So the phi(7) = 6. Similarly, if you were to find the phi of a large prime number say 541 then: Phi(541) = 541–1 = 540. It becomes very simple to calculate the phi for a prime number. And this gains, even more, significance when you consider the multiplicative nature of phi functions. 
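To make "breakability" concrete, here is a small brute-force sketch of the phi function (my own illustration, not from the article). It reproduces the values quoted above, phi(8) = 4 and phi(7) = 6, as well as phi(541) = 540; brute force like this is only workable for tiny numbers, which is rather the point:

from math import gcd

def phi(n):
    # Euler's totient: count the integers in 1..n that share no factor with n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(phi(8))     # 4  -> the numbers 1, 3, 5, 7
print(phi(7))     # 6  -> every number below a prime is coprime to it
print(phi(541))   # 540, i.e. 541 - 1, since 541 is prime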
What is the multiplicative nature of phi functions? For any two numbers A and B that share no common factor: phi(A*B) = phi(A) * phi(B). Now, let's go back to the algorithm. Alice has determined two large prime numbers P1 and P2 and has determined a number N by doing P1 * P2. So, using the multiplicative property of phi functions: phi(N) = phi(P1) * phi(P2), OR phi(N) = (P1-1)*(P2-1). And just like that, we have discovered the trapdoor function for phi: if we know the prime factors of N then it is easy to calculate phi(N). For eg. the number 77 has prime factors 7 and 11, so phi(77) = (7-1)*(11-1) = 60. It becomes very easy when you know the prime factors of N.

Now, one final bit of mathematical wizardry was required. We have the phi function and we have the modular exponentiation functions that we determined before; we need to bring these two together in one neat equation. And for this, we turn to Euler for help once again.

Euler's theorem

Euler's theorem states that for any two numbers m and n that don't share a factor: m^phi(n) ≡ 1 mod n. Meaning, for any two numbers m and n, as long as they don't share a factor, m raised to the power phi(n), when divided by n, will always leave a remainder of 1. Let's see this in an example. Suppose m = 8 and n = 5. phi(5) = 4, so 8^4 = 4096. Replacing this in the Euler's theorem equation: 4096 ≡ 1 mod 5 holds true, because 4096 divided by 5 leaves a remainder of 1.

Now, the equation m^phi(n) ≡ 1 mod n needs to be modified a little bit before we get our final solution.

Modification #1: 1^k = 1 for all k. So, keeping this in mind, if in m^phi(n) ≡ 1 mod n we multiply the exponent phi(n) by any number k, the right-hand side becomes 1^k, which is still 1. This modifies the equation like this: m^(k*phi(n)) ≡ 1 mod n.

Modification #2: For all m, m*1 = m. So, if we multiply both sides of our modified equation by m we get: m * m^(k*phi(n)) ≡ m*1 mod n, which becomes: m^(k*phi(n)+1) ≡ m mod n. This is the final form of our equation.

Before we proceed, let's bring back the old equations to refresh our memory: c = m^e mod N, m = c^d mod N, and therefore m^(e*d) mod N = m. Now check out that last equation: doesn't it look similar to our new modified equation, m^(k*phi(n)+1) ≡ m mod n? And this is the breakthrough. On comparing the two equations, we get: e*d = k*phi(n) + 1. We FINALLY have an equation where the values of e and d depend on phi(n). Now, since we already know the value of e, it is easy to calculate d, the private key, ONLY if the factorization of N is known (which is a secret that Alice has kept to herself). So, d = (k*phi(n) + 1)/e, for a k that makes the division come out evenly. This d is the trapdoor that will undo the encryption done with the public key, e and N.

Example to see how this all works

Suppose Bob and Alice are exchanging messages. Bob wants to send a message M to Alice where M = 89. Now, Alice needs to generate her keys. She uses two prime numbers P1 and P2 where: P1 = 53, P2 = 59. N = P1 * P2 = 53 * 59 = 3127. phi(N) = phi(P1) * phi(P2) = (P1-1) * (P2-1) = 52 * 58 = 3016. Now, she needs to pick a value e which shares no factors with phi(N). So, she decides e = 3. Now, she will generate her private key d: d = (k*phi(N) + 1)/e. Taking k = 2 we get: d = (2*3016 + 1) / 3 = 2011. Now, she will lock up all the values except N and e, which are her public key, and make the knowledge of these two global.

Bob encrypts the message

Now, Bob needs to send the message M, which is 89, and he needs to calculate the cipher text c such that: c = M^e mod N. Now, we know that: M = 89, e = 3 and N = 3127.
So: c = 89³ mod 3127 = 1394. He then sends it over to Alice. Alice decrypts the message Alice gets the cipher text and all that she has to do is to decrypt it using her private key d which we know to be 2011. So, Alice does this calculation: c^d mod N 1394²⁰¹¹ mod 3127 which is 89 aka the original message M. And this, is the RSA algorithm, the most widely used cryptographic algorithm What is elliptical curve cryptography? Elliptical curve cryptography is what is used by bitcoin, ethereum etc. for their encryption purposes. So what is an elliptical curve? An elliptical curve is any curve that satisfies the following equation: Y² = x³ + ax + b Where (x,y) is a point on the curve and a and b are constants. There are infinite curves that you can make. The following is how one of these curves, in general, look like: What are the properties of an elliptic curve? The curve is symmetric across the x axis. Any line that goes through 2 points on the curve will intersect the curve on a third point. Any tangent on the curve will intersect the curve on one more point. Performing maths on the curve. Addition property of the curve Suppose there are two points on the curve V and A. Let’s trace those on the curve and put a line through them. This will intersect the curve on a third point. We will call this third point X, and we will reflect it on the curve like this: The reflection of X is a point which will incidentally be (V+A). This is the additive property of the elliptical curve. Interesting note. If we add two reflections with each other aka if we were to add X and V+A in the graph above, we will get infinity. The reason for that is that the line through X and (V+A) will intersect the curve at infinity. Multiplication property of the curve Now, what if we want to add a number to itself? Like suppose we have a point V, what do we do to find 2V? We will run a tangent through V and intersect it at a point in the graph and then find the reflection of the point on the curve. That reflection will be 2V. This is also the multiplicative property of the graph because we are finding points which are basically the multiplication of an integer with the point itself. Now suppose we want to find 3V. We will join V and 2V and then reflect the point of intersection, like this: You see how the points cycle across the graph? This is what gives it its security. Mathematical properties of an elliptical curve Property #1: The points on the curve form an Abelian group The properties of the Abelian group are as follows: They have identity. The have inverses aka reflections. The points are associative meaning for three points A, B and C on the curve: (A+B) + C = A + (B+C). The points are closed on the curve. The points are commutative meaning for two points A and B. A+B = B+A. Property #2: Multiplication on the curve is fast All multiplication done on the curve can be done very fast. Now suppose we have a point P and we want to find 100P. Instead of adding the number to itself 100 times we can do the following: Add the point P to itself to get 2P. Add 2P and P to get 3P. Add 3P to itself to get 6P. Add 6P to itself to get 12P. Add 12P to itself to get 24P. Add 24P and P to get 25P. Add 25P to itself to get 50P. Add 50P to itself to get 100P. So, instead of going through 99 steps you cut short the entire thing to just 8 steps. Property #3: Division on the curve is slow Whilst multiplication is fast, the division is very slow. Suppose we have Q = nP and we want to find the value of n by dividing Q by P. 
We can’t really do that. We will have to manually go through the numbers one by one to find a value which satisfies the equation. This makes it very slow. This is called the discrete logarithmic problem and this is what gives the curves its trapdoor function i.e. it is easy to multiply n and P to get Q but given Q and P it is infeasible to get n. The elliptical curve Diffie-Hellman key exchange So, till now we have seen the various properties of the curve and we have also seen that the curve has a trapdoor function. Now how do we determine whether it is usable for cryptography or not? Let’s test it out with the Diffie-Hellman key exchange. Suppose we have Alice and Bob and they want to both come up with a common secret without anyone knowing what it is and without explicitly exchanging its information with one another. How will they do that via elliptical curves? Firstly, they will publicly agree on a curve to use and a point P on the curve. This will be public knowledge and available to everyone. In secret, however, Alice will choose a secret point “a” and Bob will choose a secret point “b”. Alice will compute “aP” and send it over to Bob. Anyone can intercept this message, however, even with the knowledge of P they will never be able to determine the value of “a” because, as we have already determined, there is a trapdoor function which will make division infeasible. Similarly, Bob will come up with the value “bP” and send it over to Alice. Alice will then multiply her secret key to the message that she gets from Bob to get a(bP). Bob will do the same and come up with b(aP). Since all the points on the curve are Abelian: a(bP) = b(aP). And just like that, they have come upon a secret shared information. So as we can see. The curve satisfies the Diffie-Hellman key exchange. So how does signature verification work on the elliptical curves? (Note: This is what specifically happens in bitcoin) Before we see how the process works let’s checkout certain variables and their meaning that we will be using the following equations. Private key = d. Message = z. Public key = Q. G will be a constant point on the graph which will be provided by bitcoin. “k” is a random number which will be generated automatically for every unique signature. “n” is another constant that will be provided by Bitcoin. Ok, so now let’s see how the maths behind the verification work. Signing a message Public key Q = dG. (it is impossible to get the private key from Q and G because division in infeasible). Now we will multiply the G with the random number “k” and plot that point on the graph. The co-ordinates of that point are (x,y). i.e. (x,y) = kG Next, we determine two values r and s such that: r = x mod n. s = (z + rd) k^-1 mod n The reason why we generate r and s is because these are the co-ordinates of our signature. So, we send the point (r,s) for verification. Verifying a message The verifiers will conduct a simple equation: z*s^-1*G + r*s^-1*Q The value of this equation will give us the point (x,y). Now, the verifiers can simply compare the x co-ordinates. They don’t have the x co-ordinate given directly to them from the sender BUT they have the values of r and n. And as we already know that r = x mod n, and then they can simply solve for x. If the values of x match out, then this means that the signature is verified! Bonus: A deeper look into the maths Let’s check out the equation that the verifiers will have to do once again: Step 1: z*s^-1*G + r*s^-1*Q We know that Q = d*G, let’s simply substitute the value. 
Step 2: z*s^-1*g + r*s^-1*d*G We can take (z + r*d) common Now remember, we have already established that s = (z+r*d)*k^-1 mod n ,let’s substitute the values here: Step 4: (z+r*d)*(z+r*d)^-1*k*G The (z+r*d)*(z+r*d)^-1 cancel each other out and we are left with: Step 5: k*G which is the co-ordinate (x,y) that the sender originally sent. What could go wrong in Elliptical curves? While it goes without saying that elliptical curves are the best mode of cryptography out there, the fact remains that it still has few vulnerabilities: What if a wrong curve was chosen? If the curve has a loop in it then there is a possibility that 1001P = P for any point P on the curve. A weak curve maybe is chosen which can be broken into. It has its weaknesses but they are pretty manageable weaknesses. RSA vs EEC. Why did bitcoin and ethereum go with elliptical curves? The reason why EEC was chosen over RSA is because it offers the same level of security as RSA by consuming far less bits. Eg. for a 256-bit key in EEC to offer the same level of security RSA will have to provide a 3072-bit key. Similarly, for a 384-bit key in EEC the RSA will have to provide a 7680- bit key to provide the same level of security! As can be seen, EEC is far more efficient than RSA. Fun Fact: The NSA has declared that a 384-bit key in EEC is strong and secure enough to encrypt top level secret documents. How do the keys work in blockchain? As mentioned above, bitcoin and ethereum use elliptical curve cryptography. So, what happens when someone sends you money on the blockchain? They send you the money to your public address which is basically the hash of your public key and some additional information. As we have seen above, the public key is derived mathematically from your private key. Public and private keys are both large integer values and they are represented, for brevity’s sake, via the Wallet Import Format (WIF) which consists of letters and numbers. A sample private key and public address looks like this in WIF: Obviously, you shouldn’t share your private key with the world like we just did! The private key is used to sign off on the transaction that the user wants to do. So, if someone has access to your private key, they can sign off on transactions using your private key and, in essence, steal from you. Also, as you can see, the private key is longer than the public address. So, how is a public key derived from the private key in the blockchain? Let’s take the example of bitcoin for this specific example. Suppose, Alice wants to generate her keys so that she can conduct transactions on the blockchain. This is what she will do: First, she will generate her 256-bit private key. She can either do so manually OR she will use an auto-generator. This is an example of a private address generator that you can find in a wallet-generator.net: Next, she will have to generate the public address which the algorithm inside that wallet will do automatically by following these steps. First, her private key will be parsed through the SHA 256 hashing algorithm to get a hash. Then hash will be parsed through the RIPE MD 160 function and a new hash will be generated and a copy of it will be kept aside, let’s call this PART A. Then the hash will be hashed through SHA 256 to generate another hash. Then the new hash will be hashed through SHA 256 again to generate another hash. The first 7 bits of this hash will be saved, let’s call it PART B. PART A and PART B will be added up and the result is the public address. 
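For readers who want to see the shape of that hashing pipeline, below is a rough, hedged Python sketch of the steps as described above. It is illustrative only: the function name and the sample input are made up, and a real Bitcoin address derivation additionally prepends a version byte, keeps the first 4 bytes of a double SHA-256 (computed over the version byte plus the RIPEMD-160 hash) as the checksum, and Base58Check-encodes the result. RIPEMD-160 support in hashlib also depends on the underlying OpenSSL build.

import hashlib

def rough_address_sketch(public_key_bytes):
    # Illustrative only: the hash pipeline described above, not a spec-accurate
    # Bitcoin address encoder (which also adds a version byte and Base58Check encoding)
    sha = hashlib.sha256(public_key_bytes).digest()
    # "PART A": RIPEMD-160 of the SHA-256 hash
    part_a = hashlib.new('ripemd160', sha).digest()
    # "PART B": a short checksum taken from a double SHA-256
    # (real Bitcoin computes this over a version byte plus PART A and keeps 4 bytes)
    double_sha = hashlib.sha256(hashlib.sha256(part_a).digest()).digest()
    part_b = double_sha[:4]
    # Concatenate the two parts; Bitcoin would then Base58Check-encode this value
    return (part_a + part_b).hex()

# Hypothetical, made-up "public key" bytes purely to exercise the pipeline
print(rough_address_sketch(b'\x04' + b'\x11' * 64))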
It is infeasible for this process to be reversed in a way that the public address can be used to generate the private key. It would take the world's most powerful computer 40000000000000000000000000000000 years to complete this calculation! Safe to say your address and key are secure.

So how does the signing process work (a simple overview)? Suppose Alice wants to send 500 BTC to Bob. She will follow these steps: She will create the transaction and sign it with her private key. She will then send the transaction to Bob's public address. Bob can then use Alice's public key to verify the signature, confirming that it was indeed Alice who sent him the bitcoins, and the transaction is deemed complete.

Conclusion

So, as can be seen, public key cryptography aka asymmetric cryptography is one of the backbones of cryptocurrency. It is impossible to even imagine how bitcoin and ethereum would have been secure without it. Every time you make a transaction, be thankful to all the mathematicians and cryptographers who have made this wonderful medium possible. If you enjoy my articles, please feel free to nominate me as a LinkedIn Top Voice.

Sursa: https://medium.com/@jna1x3/holy-grail-of-crypto-cryptocurrencies-the-science-of-cryptography-773851a23b3c
  13. Cryptography can be a hard subject to understand. It’s full of mathematical proofs. But unless you are actually developing cryptographic systems, much of that complexity is not necessary to understand what is going on at a high level. If you opened this article hoping to create the next HTTPS protocol, I’m sorry to say that pigeons won’t be enough. Otherwise, brew some coffee and enjoy the article. Alice, Bob and … pigeons? Any activity you do on the Internet (reading this article, buying stuff on Amazon, uploading cat pictures) comes down to sending and receiving messages to and from a server. This can be a bit abstract so let’s imagine that those messages were delivered by carrier pigeons. I know that this may seem very arbitrary, but trust me HTTPS works the same way, albeit a lot faster. Also instead of talking about servers, clients and hackers, we will talk about Alice, Bob and Mallory. If this isn’t your first time trying to understand cryptographic concepts you will recognize those names, because they are widely used in technical literature. A first naive communication If Alice wants to send a message to Bob, she attaches the message on the carrier pigeon’s leg and sends it to Bob. Bob receives the message, reads it and it’s all is good. But what if Mallory intercepted Alice’s pigeon in flight and changed the message? Bob would have no way of knowing that the message that was sent by Alice was modified in transit. This is how HTTP works. Pretty scary right? I wouldn’t send my bank credentials over HTTP and neither should you. A secret code Now what if Alice and Bob are very crafty. They agree that they will write their messages using a secret code. They will shift each letter by 3 positions in the alphabet. For example D → A, E → B, F → C. The plain text message “secret message” would be “pbzobq jbppxdb”. Now if Mallory intercepts the pigeon she won’t be able to change the message into something meaningful nor understand what it says, because she doesn’t know the code. But Bob can simply apply the code in reverse and decrypt the message where A → D, B → E, C → F. The cipher text “pbzobq jbppxdb” would be decrypted back to “secret message”. Success! This is called symmetric key cryptography, because if you know how to encrypt a message you also know how to decrypt it. The code I described above is commonly known as the Caesar cipher. In real life, we use fancier and more complex codes, but the main idea is the same. How do we decide the key? Symmetric key cryptography is very secure if no one apart from the sender and receiver know what key was used. In the Caesar cipher, the key is an offset of how many letters we shift each letter by. In our example we used an offset of 3, but could have also used 4 or 12. The issue is that if Alice and Bob don’t meet before starting to send messages with the pigeon, they would have no way to establish a key securely. If they send the key in the message itself, Mallory would intercept the message and discover the key. This would allow Mallory to then read or change the message as she wishes before and after Alice and Bob start to encrypt their messages. This is the typical example of a Man in the Middle Attack and the only way to avoid it is to change the encryption system all together. Pigeons carrying boxes So Alice and Bob come up with an even better system. When Bob wants to send Alice a message she will follow the procedure below: Bob sends a pigeon to Alice without any message. 
Alice sends the pigeon back carrying a box with an open lock, but keeping the key. Bob puts the message in the box, closes the locks and sends the box to Alice. Alice receives the box, opens it with the key and reads the message. This way Mallory can’t change the message by intercepting the pigeon, because she doesn’t have the key. The same process is followed when Alice wants to send Bob a message. Alice and Bob just used what is commonly known as asymmetric key cryptography. It’s called asymmetric, because even if you can encrypt a message (lock the box) you can’t decrypt it (open a closed box). In technical speech the box is known as the public key and the key to open it is known as the private key. How do I trust the box? If you paid attention you may have noticed that we still have a problem. When Bob receives that open box how can he be sure that it came from Alice and that Mallory didn’t intercept the pigeon and changed the box with one she has the key to? Alice decides that she will sign the box, this way when Bob receives the box he checks the signature and knows that it was Alice who sent the box. Some of you may be thinking, how would Bob identify Alice’s signature in the first place? Good question. Alice and Bob had this problem too, so they decided that, instead of Alice signing the box, Ted will sign the box. Who is Ted? Ted is a very famous, well known and trustworthy guy. Ted gave his signature to everyone and everybody trusts that he will only sign boxes for legitimate people. Ted will only sign an Alice box if he’s sure that the one asking for the signature is Alice. So Mallory cannot get an Alice box signed by Ted on behalf of her as Bob will know that the box is a fraud because Ted only signs boxes for people after verifying their identity. Ted in technical terms is commonly referred to as a Certification Authority and the browser you are reading this article with comes packaged with the signatures of various Certification Authorities. So when you connect to a website for the first time you trust its box because you trust Ted and Ted tells you that the box is legitimate. Boxes are heavy Alice and Bob now have a reliable system to communicate, but they realize that pigeons carrying boxes are slower than the ones carrying only the message. They decide that they will use the box method (asymmetric cryptography) only to choose a key to encrypt the message using symmetric cryptography with (remember the Caesar cipher?). This way they get the best of both worlds. The reliability of asymmetric cryptography and the efficiency of symmetric cryptography. In the real world there aren’t slow pigeons, but nonetheless encrypting messages using asymmetric cryptography is slower than using symmetric cryptography, so we only use it to exchange the encryption keys. Now you know how HTTPS works and your coffee should also be ready. Go drink it you deserved it 😉 Sursa: https://medium.freecodecamp.org/https-explained-with-carrier-pigeons-7029d2193351
  14. Introduction al-khaser is a PoC "malware" application with good intentions that aims to stress your anti-malware system. It performs a bunch of common malware tricks with the goal of seeing if you stay under the radar. Download You can download the latest release here. Possible uses You are making an anti-debug plugin and you want to check its effectiveness. You want to ensure that your sandbox solution is hidden enough. Or you want to ensure that your malware analysis environment is well hidden. Please, if you encounter any of the anti-analysis tricks which you have seen in a malware, don't hesitate to contribute. Features Anti-debugging attacks IsDebuggerPresent CheckRemoteDebuggerPresent Process Environement Block (BeingDebugged) Process Environement Block (NtGlobalFlag) ProcessHeap (Flags) ProcessHeap (ForceFlags) NtQueryInformationProcess (ProcessDebugPort) NtQueryInformationProcess (ProcessDebugFlags) NtQueryInformationProcess (ProcessDebugObject) NtSetInformationThread (HideThreadFromDebugger) NtQueryObject (ObjectTypeInformation) NtQueryObject (ObjectAllTypesInformation) CloseHanlde (NtClose) Invalide Handle SetHandleInformation (Protected Handle) UnhandledExceptionFilter OutputDebugString (GetLastError()) Hardware Breakpoints (SEH / GetThreadContext) Software Breakpoints (INT3 / 0xCC) Memory Breakpoints (PAGE_GUARD) Interrupt 0x2d Interrupt 1 Parent Process (Explorer.exe) SeDebugPrivilege (Csrss.exe) NtYieldExecution / SwitchToThread TLS callbacks Process jobs Memory write watching Anti-Dumping Erase PE header from memory SizeOfImage Timing Attacks [Anti-Sandbox] RDTSC (with CPUID to force a VM Exit) RDTSC (Locky version with GetProcessHeap & CloseHandle) Sleep -> SleepEx -> NtDelayExecution Sleep (in a loop a small delay) Sleep and check if time was accelerated (GetTickCount) SetTimer (Standard Windows Timers) timeSetEvent (Multimedia Timers) WaitForSingleObject -> WaitForSingleObjectEx -> NtWaitForSingleObject WaitForMultipleObjects -> WaitForMultipleObjectsEx -> NtWaitForMultipleObjects (todo) IcmpSendEcho (CCleaner Malware) CreateWaitableTimer (todo) CreateTimerQueueTimer (todo) Big crypto loops (todo) Human Interaction / Generic [Anti-Sandbox] Mouse movement Total Physical memory (GlobalMemoryStatusEx) Disk size using DeviceIoControl (IOCTL_DISK_GET_LENGTH_INFO) Disk size using GetDiskFreeSpaceEx (TotalNumberOfBytes) Mouse (Single click / Double click) (todo) DialogBox (todo) Scrolling (todo) Execution after reboot (todo) Count of processors (Win32/Tinba - Win32/Dyre) Sandbox known product IDs (todo) Color of background pixel (todo) Keyboard layout (Win32/Banload) (todo) Anti-Virtualization / Full-System Emulation Registry key value artifacts HARDWARE\DEVICEMAP\Scsi\Scsi Port 0\Scsi Bus 0\Target Id 0\Logical Unit Id 0 (Identifier) (VBOX) HARDWARE\DEVICEMAP\Scsi\Scsi Port 0\Scsi Bus 0\Target Id 0\Logical Unit Id 0 (Identifier) (QEMU) HARDWARE\Description\System (SystemBiosVersion) (VBOX) HARDWARE\Description\System (SystemBiosVersion) (QEMU) HARDWARE\Description\System (VideoBiosVersion) (VIRTUALBOX) HARDWARE\Description\System (SystemBiosDate) (06/23/99) HARDWARE\DEVICEMAP\Scsi\Scsi Port 0\Scsi Bus 0\Target Id 0\Logical Unit Id 0 (Identifier) (VMWARE) HARDWARE\DEVICEMAP\Scsi\Scsi Port 1\Scsi Bus 0\Target Id 0\Logical Unit Id 0 (Identifier) (VMWARE) HARDWARE\DEVICEMAP\Scsi\Scsi Port 2\Scsi Bus 0\Target Id 0\Logical Unit Id 0 (Identifier) (VMWARE) Registry Keys artifacts "HARDWARE\ACPI\DSDT\VBOX__" "HARDWARE\ACPI\FADT\VBOX__" "HARDWARE\ACPI\RSDT\VBOX__" 
"SOFTWARE\Oracle\VirtualBox Guest Additions" "SYSTEM\ControlSet001\Services\VBoxGuest" "SYSTEM\ControlSet001\Services\VBoxMouse" "SYSTEM\ControlSet001\Services\VBoxService" "SYSTEM\ControlSet001\Services\VBoxSF" "SYSTEM\ControlSet001\Services\VBoxVideo" SOFTWARE\VMware, Inc.\VMware Tools SOFTWARE\Wine File system artifacts "system32\drivers\VBoxMouse.sys" "system32\drivers\VBoxGuest.sys" "system32\drivers\VBoxSF.sys" "system32\drivers\VBoxVideo.sys" "system32\vboxdisp.dll" "system32\vboxhook.dll" "system32\vboxmrxnp.dll" "system32\vboxogl.dll" "system32\vboxoglarrayspu.dll" "system32\vboxoglcrutil.dll" "system32\vboxoglerrorspu.dll" "system32\vboxoglfeedbackspu.dll" "system32\vboxoglpackspu.dll" "system32\vboxoglpassthroughspu.dll" "system32\vboxservice.exe" "system32\vboxtray.exe" "system32\VBoxControl.exe" "system32\drivers\vmmouse.sys" "system32\drivers\vmhgfs.sys" Directories artifacts "%PROGRAMFILES%\oracle\virtualbox guest additions\" "%PROGRAMFILES%\VMWare\" Memory artifacts Interupt Descriptor Table (IDT) location Local Descriptor Table (LDT) location Global Descriptor Table (GDT) location Task state segment trick with STR MAC Address "\x08\x00\x27" (VBOX) "\x00\x05\x69" (VMWARE) "\x00\x0C\x29" (VMWARE) "\x00\x1C\x14" (VMWARE) "\x00\x50\x56" (VMWARE) Virtual devices "\\.\VBoxMiniRdrDN" "\\.\VBoxGuest" "\\.\pipe\VBoxMiniRdDN" "\\.\VBoxTrayIPC" "\\.\pipe\VBoxTrayIPC") "\\.\HGFS" "\\.\vmci" Hardware Device information SetupAPI SetupDiEnumDeviceInfo (GUID_DEVCLASS_DISKDRIVE) QEMU VMWare VBOX VIRTUAL HD System Firmware Tables SMBIOS string checks (VirtualBox) ACPI string checks (VirtualBox) Driver Services VirtualBox VMWare Adapter name VMWare Windows Class VBoxTrayToolWndClass VBoxTrayToolWnd Network shares VirtualBox Shared Folders Processes vboxservice.exe (VBOX) vboxtray.exe (VBOX) vmtoolsd.exe(VMWARE) vmwaretray.exe(VMWARE) vmwareuser(VMWARE) vmsrvc.exe(VirtualPC) vmusrvc.exe(VirtualPC) prl_cc.exe(Parallels) prl_tools.exe(Parallels) xenservice.exe(Citrix Xen) WMI SELECT * FROM Win32_Bios (SerialNumber) (VMWARE) SELECT * FROM Win32_PnPEntity (DeviceId) (VBOX) SELECT * FROM Win32_NetworkAdapterConfiguration (MACAddress) (VBOX) SELECT * FROM Win32_NTEventlogFile (VBOX) SELECT * FROM Win32_Processor (NumberOfCores) (GENERIC) SELECT * FROM Win32_LogicalDisk (Size) (GENERIC) DLL Exports and Loaded DLLs kernel32.dll!wine_get_unix_file_nameWine (Wine) sbiedll.dll (Sandboxie) dbghelp.dll (MS debugging support routines) api_log.dll (iDefense Labs) dir_watch.dll (iDefense Labs) pstorec.dll (SunBelt Sandbox) vmcheck.dll (Virtual PC) wpespy.dll (WPE Pro) CPU Hypervisor presence using (EAX=0x1) Hypervisor vendor using (EAX=0x40000000) "KVMKVMKVM\0\0\0" (KVM) "Microsoft Hv"(Microsoft Hyper-V or Windows Virtual PC) "VMwareVMware"(VMware) "XenVMMXenVMM"(Xen) "prl hyperv "( Parallels) -"VBoxVBoxVBox"( VirtualBox) Anti-Analysis Processes OllyDBG / ImmunityDebugger / WinDbg / IDA Pro SysInternals Suite Tools (Process Explorer / Process Monitor / Regmon / Filemon, TCPView, Autoruns) Wireshark / Dumpcap ProcessHacker / SysAnalyzer / HookExplorer / SysInspector ImportREC / PETools / LordPE JoeBox Sandbox Macro malware attacks Document_Close / Auto_Close. Application.RecentFiles.Count Code/DLL Injections techniques CreateRemoteThread SetWindowsHooksEx NtCreateThreadEx RtlCreateUserThread APC (QueueUserAPC / NtQueueApcThread) RunPE (GetThreadContext / SetThreadContext) Sursa & download: https://github.com/LordNoteworthy/al-khaser
15. Another day, another cryptocurrency exchange under cyber attack: this time Coincheck, a Japanese cryptocurrency exchange, has been hacked and lost $534 million in NEM tokens. One of Japan's and Asia's largest cryptocurrency exchanges, Coincheck, has suffered a breach in which unknown hackers stole 58 billion Yen of the virtual currency "NEM (Nemu)" ($534 million / €429 million) from its digital wallets. According to local media, the Tokyo based exchange has confirmed that it has suffered what appears to be the biggest hack in the history of the cryptocurrency business. In a press conference held earlier today, Coincheck's president Koichi Wada apologized to the customers and said that the law enforcement authorities are already investigating the incident. "Currently, credit card, Pay Easy, and convenience store payments are suspended. We sincerely apologize for these inconveniences and will continue to do our best to be back to normal operations as soon as possible," said Coincheck's blog post. Initially, Coincheck suspended trading of NEM tokens without stating any reason or providing additional information. However, on Friday evening, the company held a press conference and announced that it has suspended trading of almost every cryptocurrency after the incident. According to the president of blockchain company NEM.io, Lon Wong, "This is the biggest theft in the history of the world." "It's unfortunate that CoinCheck got hacked. But we are doing everything we can to help," Wong said. This is not the first time a Japanese cryptocurrency exchange has been in the news for all the wrong reasons. In 2014, the Tokyo-based Mt. Gox Bitcoin exchange suffered a cyber attack in which 850,000 Bitcoins were stolen. The company managed around 80% of the world's Bitcoin trades but ended up filing for bankruptcy. It is unclear what the exact impact of the Coincheck breach will be or how much it will affect the value of other cryptocurrencies. It is advised that investors make sure their wallets are safe from cyber attacks, since there has been a 100% increase in attacks against cryptocurrency wallets and exchanges. At the time of publishing this article, Coincheck's website was online but its operations were suspended.

9th Breach Against A Cryptocurrency Platform In Last 6 Months

This is the 9th major successful security breach against a cryptocurrency platform. Here is a look at eight previous data breaches against cryptocurrency exchanges:

1: July 4th, 2017: Bithumb hacked and 1.2 billion South Korean Won stolen.
2: July 17th, 2017: CoinDash hacked and $7 million in Ethereum stolen.
3: July 24th, 2017: Veritaseum hacked and $8.4 million in Ethereum stolen.
4: July 20, 2017: Parity Technologies hacked and $32 million in Ethereum stolen.
5: August 22nd, 2017: Enigma marketplace hacked and $500,000 in Ethereum stolen.
6: November 19th, 2017: Tether hacked and $30 million worth of tokens stolen.
7: December 7, 2017: NiceHash hacked and $70 million stolen.
8: December 21, 2017: EtherDelta hacked and $266,789 in Ethereum stolen.

Sursa: https://www.hackread.com/coincheck-cryptocurrency-exchange-hacked-530-million-stolen/
  16. Replicate the AlphaZero methodology to teach a machine to learn Connect4 strategy through self-play and deep learning In this article I’ll attempt to cover three things: Two reasons why AlphaZero is a massive step forward for Artificial Intelligence How you can build a replica of the AlphaZero methodology to play the game Connect4 How you can adapt the code to plug in other games AlphaGo → AlphaGo Zero → AlphaZero In March 2016, Deepmind’s AlphaGo beat 18 times world champion Go player Lee Sedol 4–1 in a series watched by over 200 million people. A machine had learnt a super-human strategy for playing Go, a feat previously thought impossible, or at the very least, at least a decade away from being accomplished. Match 3 of AlphaGo vs Lee Sedol This in itself, was a remarkable achievement. However, on 18th October 2017, DeepMind took a giant leap further. The paper ‘Mastering the Game of Go without Human Knowledge’ unveiled a new variant of the algorithm, AlphaGo Zero, that had defeated AlphaGo 100–0. Incredibly, it had done so by learning solely through self-play, starting ‘tabula rasa’ (blank state) and gradually finding strategies that would beat previous incarnations of itself. No longer was a database of human expert games required to build a super-human AI . A graph from ‘Mastering the Game of Go without Human Knowledge’ A mere 48 days later, on 5th December 2017, DeepMind released another paper ‘Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm’ showing how AlphaGo Zero could be adapted to beat the world-champion programs StockFish and Elmo at chess and shogi. The entire learning process, from being shown the games for the first time, to becoming the best computer program in the world, had taken under 24 hours. With this, AlphaZero was born — the general algorithm for getting good at something, quickly, without any prior knowledge of human expert strategy. There are two amazing things about this achievement: 1. AlphaZero requires zero human expertise as input It cannot be understated how important this is. This means that the underlying methodology of AlphaGo Zero can be applied to ANY game with perfect information (the game state is fully known to both players at all times) because no prior expertise is required beyond the rules of the game. This is how it was possible for DeepMind to publish the chess and shogi papers only 48 days after the original AlphaGo Zero paper. Quite literally, all that needed to change was the input file that describes the mechanics of the game and to tweak the hyper-parameters relating to the neural network and Monte Carlo tree search. 2. The algorithm is ridiculously elegant If AlphaZero used super-complex algorithms that only a handful of people in the world understood, it would still be an incredible achievement. What makes it extraordinary is that a lot of the ideas in the paper are actually far less complex than previous versions. At its heart, lies the following beautifully simple mantra for learning: Mentally play through possible future scenarios, giving priority to promising paths, whilst also considering how others are most likely to react to your actions and continuing to explore the unknown. After reaching a state that is unfamiliar, evaluate how favourable you believe the position to be and cascade the score back through previous positions in the mental pathway that led to this point. After you’ve finished thinking about future possibilities, take the action that you’ve explored the most. 
At the end of the game, go back and evaluate where you misjudged the value of the future positions and update your understanding accordingly. Doesn’t that sound a lot like how you learn to play games? When you play a bad move, it’s either because you misjudged the future value of resulting positions, or you misjudged the likelihood that that your opponent would play a certain move, so didn’t think to explore that possibility. These are exactly the two aspect of gameplay that AlphaZero is trained to learn. How to build your own AlphaZero Firstly, check out the AlphaGo Zero cheat sheet for a high level understanding of how AlphaGo Zero works. It’s worth having that to refer to as we walk through each part of AlphaZero. There’s also a great article here that explains how it works in more detail. The code Clone this Git repository, which contains the code I’ll be referencing. To start the learning process, run the top two panels in the run.ipynb Jupyter notebook. Once it’s built up enough game positions to fill its memory the neural network will begin training. Through additional self-play and training, it will gradually get better at predicting the game value and next moves from any position, resulting in better decision making and smarter overall play. We’ll now have a look at the code in more detail, and show some results that demonstrate the AI getting stronger over time. N.B — This is my own understanding of how AlphaZero works based on the information available in the papers referenced above. If any of the below is incorrect, apologies and I’ll endeavour to correct it! Connect4 The game that our algorithm will learn to play is Connect4 (or Four In A Row). Not quite as complex as Go, but there are still 4,531,985,219,092 game positions in total, so not trivial for a laptop to learn how to play well with zero human input. Connect4 The game rules are straightforward. Players take it in turns to enter a piece of their colour in the top of any available column. The first player to get four of their colour in a row — each vertically, horizontally or diagonally, wins. If the entire grid is filled without a four-in-a-row being created, the game is drawn. Here’s a summary of the key files that make up the codebase: game.py This file contains the game rules for Connect4. Each squares is allocated a number from 0 to 41, as follows: Action squares for Connect4 The game.py file gives the logic behind moving from one game state to another, given a chosen action. For example, given the empty board and action 38, the takeAction method return a new game state, with the starting player’s piece at the bottom of the centre column. You can replace the game.py file with any game file that conforms to the same API and the algorithm will in principal, learn strategy through self play, based on the rules you have given it. run.ipynb This contains the code that starts the learning process. It loads the game rules and then iterates through the main loop of the algorithm, which consist of three stages: Self-play Retraining the Neural Network Evaluating the Neural Network There are two agents involved in this loop, the best_player and the current_player. The best_player contains the best performing neural network and is used to generate the self play memories. The current_player then retrains its neural network on these memories and is then pitched against the best_player. If it wins, the neural network inside the best_player is switched for the neural network inside the current_player, and the loop starts again. 
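Before moving on to the individual classes, here is a minimal sketch in plain Python to make the game.py interface described above more concrete. It is not the code from the repository; it only illustrates the board-as-42-squares idea and an immutable takeAction-style move.

ROWS, COLS = 6, 7  # 42 squares, numbered 0..41 from the top-left

def allowed_actions(board):
    # A move is the lowest empty square in a column
    actions = []
    for col in range(COLS):
        column_squares = [col + COLS * row for row in range(ROWS)]
        empties = [sq for sq in column_squares if board[sq] == 0]
        if empties:
            actions.append(empties[-1])   # lowest empty square in that column
        # a full column contributes no action
    return actions

def take_action(board, action, player):
    # Return a new board with player (+1 or -1) placed on square `action`
    new_board = list(board)
    new_board[action] = player
    return new_board

# Example: the starting player drops a piece in the centre column (square 38,
# the bottom of column 3), mirroring the example in the article.
empty = [0] * (ROWS * COLS)
state = take_action(empty, 38, +1)
print(allowed_actions(state))   # [35, 36, 37, 31, 39, 40, 41]: the centre column's next slot is now 31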
agent.py This contains the Agent class (a player in the game). Each player is initialised with its own neural network and Monte Carlo Search Tree. The simulate method runs the Monte Carlo Tree Search process. Specifically, the agent moves to a leaf node of the tree, evaluates the node with its neural network and then backfills the value of the node up through the tree. The act method repeats the simulation multiple times to understand which move from the current position is most favourable. It then returns the chosen action to the game, to enact the move. The replay method retrains the neural network, using memories from previous games. model.py A sample of the residual convolutional network build using Keras This file contains the Residual_CNN class, which defines how to build an instance of the neural network. It uses a condensed version of the neural network architecture in the AlphaGoZero paper — i.e. a convolutional layer, followed by many residual layers, then splitting into a value and policy head. The depth and number of convolutional filters can be specified in the config file. The Keras library is used to build the network, with a backend of Tensorflow. To view individual convolutional filters and densely connected layers in the neural network, run the following inside the the run.ipynb notebook: current_player.model.viewLayers() Convolutional filters from the neural network MCTS.py This contains the Node, Edge and MCTS classes, that constitute a Monte Carlo Search Tree. The MCTS class contains the moveToLeaf and backFill methods previously mentioned, and instances of the Edge class store the statistics about each potential move. config.py This is where you set the key parameters that influence the algorithm. Adjusting these variables will affect that running time, neural network accuracy and overall success of the algorithm. The above parameters produce a high quality Connect4 player, but take a long time to do so. To speed the algorithm up, try the following parameters instead. funcs.py Contains the playMatches and playMatchesBetweenVersions functions that play matches between two agents. To play against your creation, run the following code (it’s also in the run.ipynb notebook) from game import Game from funcs import playMatchesBetweenVersions import loggers as lg env = Game() playMatchesBetweenVersions( env , 1 # the run version number where the computer player is located , -1 # the version number of the first player (-1 for human) , 12 # the version number of the second player (-1 for human) , 10 # how many games to play , lg.logger_tourney # where to log the game to , 0 # which player to go first - 0 for random ) initialise.py When you run the algorithm, all model and memory files are saved in the run folder, in the root directory. To restart the algorithm from this checkpoint later, transfer the run folder to the run_archive folder, attaching a run number to the folder name. Then, enter the run number, model version number and memory version number into the initialise.py file, corresponding to the location of the relevant files in the run_archive folder. Running the algorithm as usual will then start from this checkpoint. memory.py An instance of the Memory class stores the memories of previous games, that the algorithm uses to retrain the neural network of the current_player. loss.py This file contains a custom loss function, that masks predictions from illegal moves before passing to the cross entropy loss function. 
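Since loss.py is only described in passing, here is a hedged sketch of what such a masked policy loss can look like in Keras/TensorFlow. The function name and the zero-means-illegal convention are assumptions on my part rather than something taken from the repository: the idea is simply to push the logits of moves the MCTS target marks as impossible to a very negative value, so they contribute essentially nothing to the softmax cross entropy.

import tensorflow as tf

def masked_policy_loss(y_true, y_pred):
    # y_true: MCTS visit probabilities, with illegal moves encoded as exactly 0
    # y_pred: raw logits from the network's policy head
    illegal = tf.equal(y_true, 0.0)
    very_negative = tf.fill(tf.shape(y_pred), -100.0)
    # Replace the logits of masked entries before taking the cross entropy
    # (this also masks legal moves with zero visits; their target weight is zero anyway)
    logits = tf.where(illegal, very_negative, y_pred)
    return tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)

In the model's compile step this could be wired in as the policy head's loss, for example loss={'policy_head': masked_policy_loss, 'value_head': 'mean_squared_error'}, where the head names here are assumptions rather than something confirmed by the repository.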
settings.py

The locations of the run and run_archive folders.

loggers.py

Log files are saved to the log folder inside the run folder. To turn on logging, set the values of the logger_disabled variables to False inside this file. Viewing the log files will help you to understand how the algorithm works and see inside its 'mind'. For example, here is a sample from the logger.mcts file.

Output from the logger.mcts file

Equally, from the logger.tourney file, you can see the probabilities attached to each move during the evaluation phase:

Output from the logger.tourney file

Results

Training over a couple of days produces the following chart of loss against mini-batch iteration number:

Loss against mini-batch iteration number

The top line is the error in the policy head (the cross entropy of the MCTS move probabilities against the output from the neural network). The bottom line is the error in the value head (the mean squared error between the actual game value and the neural network's prediction of the value). The middle line is an average of the two. Clearly, the neural network is getting better at predicting the value of each game state and the likely next moves. To show how this results in stronger and stronger play, I ran a league between 17 players, ranging from the 1st iteration of the neural network up to the 49th. Each pairing played twice, with both players having a chance to play first. Here are the final standings:

Clearly, the later versions of the neural network are superior to the earlier versions, winning most of their games. It also appears that the learning hasn't yet saturated — with further training time, the players would continue to get stronger, learning more and more intricate strategies. As an example, one clear strategy that the neural network has favoured over time is grabbing the centre column early. Observe the difference between the first version of the algorithm and, say, the 30th version:

1st neural network version
30th neural network version

This is a good strategy as many lines require the centre column — claiming this early ensures your opponent cannot take advantage of this. This has been learnt by the neural network, without any human input.

Learning a different game

There is a game.py file for a game called 'Metasquares' in the games folder. This involves placing X and O markers in a grid to try to form squares of different sizes. Larger squares score more points than smaller squares, and the player with the most points when the grid is full wins. If you switch the Connect4 game.py file for the Metasquares game.py file, the same algorithm will learn how to play Metasquares instead.

Summary

Hopefully you find this article useful — let me know in the comments below if you find any typos or have questions about anything in the codebase or article and I'll get back to you as soon as possible.

Sursa: https://medium.com/applied-data-science/how-to-build-your-own-alphazero-ai-using-python-and-keras-7f664945c188
  17. There’s a video of Gal Gadot having sex with her stepbrother on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation, face-swapped to look like she’s performing in an existing incest-themed porn video. The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together. It's not going to fool anyone who looks closely. Sometimes the face doesn't track correctly and there's an uncanny valley effect at play, but at a glance it seems believable. It's especially striking considering that it's allegedly the work of one person—a Redditor who goes by the name 'deepfakes'—not a big special effects studio that can digitally recreate a young Princess Leia in Rogue One using CGI. Instead, deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning. Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we're on the verge of living in a world where it's trivially easy to fabricate believable videos of people doing and saying things they never did. Even having sex. So far, deepfakes has posted hardcore porn videos featuring the faces of Scarlett Johansson, Maisie Williams, Taylor Swift, Aubrey Plaza, and Gal Gadot on Reddit. I’ve reached out to the management companies and/or publicists who represent each of these actors informing them of the fake videos, and will update if I hear back. Fake celebrity porn, where images are photoshopped to look like famous people are posing nude, is a years-old category of porn with an ardent fan base. People commenting and voting in the subreddit where deepfakes posts are big fans of his work. This is the latest advancement in that genre. “This is no longer rocket science.” According to deepfakes—who declined to give his identity to me to avoid public scrutiny—the software is based on multiple open-source libraries, like Keras with TensorFlow backend. To compile the celebrities’ faces, deepfakes said he used Google image search, stock photos, and YouTube videos. Deep learning consists of networks of interconnected nodes that autonomously run computations on input data. In this case, he trained the algorithm on porn videos and Gal Gadot’s face. After enough of this “training,” the nodes arrange themselves to complete a particular task, like convincingly manipulating video on the fly. Artificial intelligence researcher Alex Champandard told me in an email that a decent, consumer-grade graphics card could process this effect in hours, but a CPU would work just as well, only more slowly, over days. “This is no longer rocket science,” Champandard said. The ease with which someone could do this is frightening. Aside from the technical challenge, all someone would need is enough images of your face, and many of us are already creating sprawling databases of our own faces: People around the world uploaded 24 billion selfies to Google Photos in 2015-2016. It isn’t difficult to imagine an amateur programmer running their own algorithm to create a sex tape of someone they want to harass. Deepfakes told me he’s not a professional researcher, just a programmer with an interest in machine learning. 
“I just found a clever way to do face-swap,” he said, referring to his algorithm. “With hundreds of face images, I can easily generate millions of distorted images to train the network,” he said. “After that if I feed the network someone else's face, the network will think it's just another distorted image and try to make it look like the training face.” In a comment thread on Reddit, deepfakes mentioned that he is using an algorithm similar to one developed by Nvidia researchers that uses deep learning to, for example, instantly turn a video of a summer scene into a winter one. The Nvidia researchers who developed the algorithm declined to comment on this possible application. In almost all of the examples deepfakes has posted, the result isn’t perfect. In the Gadot video, a box occasionally appeared around her face where the original image peeks through, and her mouth and eyes don’t quite line up to the words the actress is saying—but if you squint a little and suspend your belief, it might as well be Gadot; other videos deepfakes have made are even more convincing. Porn performer Grace Evangeline told me over Twitter direct messages that porn stars are used to having their work spread around free to tube sites like SendVid, where the Gal Gadot fake is uploaded, without their permission. But she said that this was different. She’d never seen anything like this. “One important thing that always needs to happen is consent,” Evangeline said. “Consent in private life as well as consent on film. Creating fake sex scenes of celebrities takes away their consent. It’s wrong.” Even for people whose livelihoods involve getting in front of a camera, the violation of personal boundaries is troubling. I showed Alia Janine, a retired porn performer who was in the sex industry for 15 years, the video of Gadot. “It’s really disturbing,” she told me over the phone. “It kind of shows how some men basically only see women as objects that they can manipulate and be forced to do anything they want... It just shows a complete lack of respect for the porn performers in the movie, and also the female actresses.” I asked deepfakes whether he considered the ethical implications of this technology. Did consent, revenge porn, and blackmail enter their mind while developing this algorithm? “Every technology can be used with bad motivations, and it's impossible to stop that,” he said, likening it to the same technology that recreated Paul Walker’s post-mortem performance in Furious 7. “The main difference is how easy [it is] to do that by everyone. I don't think it's a bad thing for more average people [to] engage in machine learning research.” Ethically, the implications are “huge,” Champandard said. Malicious use of technology often can’t be avoided, but it can be countered. “We need to have a very loud and public debate,” he said. ”Everyone needs to know just how easy it is to fake images and videos, to the point where we won't able to distinguish forgeries in a few months from now. Of course, this was possible for a long time but it would have taken a lot of resources and professionals in visual effects to pull this off. Now it can be done by a single programmer with recent computer hardware.” Champandard said researchers can then begin developing technology to detect fake videos and help moderate what’s fake and what isn’t, and internet policy can improve to regulate what happens when these types of forgeries and harassment come up. “In a strange way,” this is a good thing, Champandard said. 
“We need to put our focus on transforming society to be able to deal with this.”

Correction: This story has been updated to clarify that deepfakes' algorithm is similar to the research produced by Nvidia researchers, but that there's no evidence that it's an application of their work.

Sursa: https://motherboard.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn
  18. It all started with a tweet, which seemed to resonate with people: The aim was to list blogs that specifically cover .NET internals at a low-level or to put it another way, blogs that answer the question how does feature ‘X’ work, under-the-hood. The list includes either typical posts for that blog, or just some of my favourites! Note: for a wider list of .NET and performance related blogs see Awesome .NET Performance by Adam Sitnik I wouldn’t recommend reading through the entire list, at least not in one go, your brain will probably melt. Picks some posts/topics that interest you and start with those. Finally, bear in mind that some of the posts are over 10 years old, so there’s a chance that things have changed since then (however, in my experience, the low-levels parts of the CLR are more stable). If you want to double-check the latest behaviour, you’re best option is to read the source! Community or Non-Microsoft Blogs These blogs are all written by non-Microsoft employees (AFAICT), or if they do work for Microsoft, they don’t work directly on the CLR. If I’ve missed any interesting blogs out, please let me know! Special mention goes to Sasha Goldshtein, he’s been blogging about this longer than anyone!! All Your Base Are Belong To Us by Sasha Goldshtein (@goldshtn) Generic Method Dispatch Inspecting Local Root Lifetime Virtual Method Dispatch and Object Layout Changes in CLR 4.0 Runtime Representation of Generics—Part 2 Revisiting Value Types vs. Reference Types Dissecting the code by Sergey Teplyakov (@STeplyakov) (M/S) Garbage collection and variable lifetime tracking Managed object internals, Part 1. The layout (Also part 2, part 3 and part 4) To box or not to Box? That is the question! Dissecting the new() constraint in C#: a perfect example of a leaky abstraction Adam Sitnik - .NET Performance and Reliability by Adam Sitnik (@SitnikAdam) (M/S) Value Types vs Reference Types Span Pooling large arrays with ArrayPool Collecting Hardware Performance Counters with BenchmarkDotNet Disassembling .NET Code with BenchmarkDotNet Andrey Akinshin’s blog by Andrey Akinshin (@andrey_akinshin) Measuring Performance Improvements in .NET Core with BenchmarkDotNet (Part 1) Blittable types DateTime under the hood Stopwatch under the hood TooSlowException by Konrad Kokosa (@konradkokosa) .NET Core – compilation, running, debugging How does Object.GetType() really work? Zero Garbage Collector for .NET Core The Ultimate .NET Experiment – open source project a little bit of programming by Marcin Juraszek (@mmjuraszek) (M/S) String.Split and int[] allocations Adding Matt operator to Roslyn - Syntax, Lexer and Parser (Part 2 - Binder, Part 3 - Emitter) yizhang82’s blog by Yi Zhang (@yizhang82) (M/S) Sharing .NET generic code under the hood C# value type boxing under the hood Embedding CoreCLR in your C/C++ application Timur Guev’s posts on {coding}Sight by Timur Guev (@timyrik200), also appears to have his own blog Math and Programming (in Russian) The origin of GetHashCode in .NET Aspects of Strings in .NET StringBuilder: the Past and the Future The mole is digging by Alexandr Nikitin (@nikitin_a_a) .NET Generics under the hood Hoisting in .NET Explained Hoisting in .NET Examples My Coding Place by Dudi Keleti (@dudi_ke) Object header get complicated IL Call Vs. 
Callvirt Instruction (Part 2) Value type methods – call, callvirt, constrained and hidden boxing Alexandre Mutel’s blog by Alexandre Mutel (@xoofx) A new stackalloc operator for reference types with CoreCLR and Roslyn Struct inheritance in C# with CoreCLR and Roslyn Microsoft Engineers The blogs below are written by the actual engineers who worked on, designed or managed various parts of the CLR, so they give a deep insight (again, if I’ve missed any blogs out, please let me know): Maoni’s WebLog - CLR Garbage Collector by Maoni Stephens Suspending and resuming threads for GC Allocating on the stack or the heap? Large Object Heap cbrumme’s WebLog by Christopher Brumme Memory Model Value Types Virtual and non-virtual A blog on coding, .NET, .NET Compact Framework and life in general.. by Abhinaba Basu .NET Just in Time Compilation and Warming up Your System Trivia: How does CLR create an OutOfMemoryException Back to basic: Series on dynamic memory management Joel Pobar’s CLR weblog - CLR Program Manager: Reflection, LCG, Generics and the type system.. by Joel Pobar CLR Type System notes CLR Generics and code sharing Explanatory notes on Rotor’s Garbage Collector CLR Profiling API Blog - Info about the Common Language Runtime’s Profiling API by David Broman (slightly niche, but still worth a read) Creating an IL-rewriting profiler Type Forwarding Metadata Tokens, Run-Time IDs, and Type Loading Books Finally, if you prefer reading off-line there are some decent books that discuss .NET Internals (Note: all links are Amazon Affiliate links): CLR via C#, 4ed by Jeffrey Richter Shared Source CLI Essentials Paperback by David Stutz, Ted Neward, Geoff Shilling Writing High-Performance .NET Code Paperback by Ben Watson Pro .NET Performance: Optimize Your C# Applications by Sasha Goldshtein All the books listed above I’ve bought copies of and read cover-to-cover, they’re fantastic resources. I’ve also been recently recommend the 2 books below, they look good and certainly the authors know their stuff, but I haven’t read them yet: The Common Language Infrastructure Annotated Standard by James S. Miller, Susann Ragsdale Essential .NET, Volume I: The Common Language Runtime by Don Box, Chris Sells Sursa: http://mattwarren.org/2018/01/22/Resources-for-Learning-about-.NET-Internals/
  19. Today, Facebook AI Research (FAIR) open sourced Detectron — our state-of-the-art platform for object detection research. The Detectron project was started in July 2016 with the goal of creating a fast and flexible object detection system built on Caffe2, which was then in early alpha development. Over the last year and a half, the codebase has matured and supported a large number of our projects, including Mask R-CNN and Focal Loss for Dense Object Detection, which won the Marr Prize and Best Student Paper awards, respectively, at ICCV 2017. These algorithms, powered by Detectron, provide intuitive models for important computer vision tasks, such as instance segmentation, and have played a key role in the unprecedented advancement of visual perception systems that our community has achieved in recent years. Beyond research, a number of Facebook teams use this platform to train custom models for a variety of applications including augmented reality and community integrity. Once trained, these models can be deployed in the cloud and on mobile devices, powered by the highly efficient Caffe2 runtime. Our goal in open sourcing Detectron is to make our research as open as possible and to accelerate research in labs across the world. With its release, the research community will be able to reproduce our results and have access to the same software platform that FAIR uses every day. Detectron is available under the Apache 2.0 license at https://github.com/facebookresearch/Detectron We’re also releasing extensive performance baselines for more than 70 pre-trained models that are available to download from our model zoo. Sursa: https://research.fb.com/facebook-open-sources-detectron/
  20. First, why would you do this? Why not. It's awesome. It's a learning experience. It's cheaper to get 6 pis than six "real computers." It's somewhat portable. While you can certainly quickly and easily build a Kubernetes Cluster in the cloud within your browser using a Cloud Shell, there's something more visceral about learning it this way, IMHO. Additionally, it's a non-trivial little bit of power you've got here. This is also a great little development cluster for experimenting. I'm very happy with the result. By the end of this blog post you'll have not just Hello World but you'll have Cloud Native Distributed Containerized RESTful microservice based on ARMv7 w/ k8s Hello World! as a service. (original Tweet). Not familiar with why Kubernetes is cool? Check out Julia Evans' blog and read her K8s posts and you'll be convinced! Hardware List (scroll down for Software) Here's your shopping list. You may have a bunch of this stuff already. I had the Raspberry Pis and SD Cards already. 6 - Raspberry Pi 3 - I picked 6, but you should have at least 3 or 4. One Boss/Master and n workers. I did 6 because it's perfect for the power supply, perfect for the 8-port hub, AND it's a big but not unruly number. 6 - Samsung 32Gb Micro SDHC cards - Don't be too cheap. Faster SD cards are better. 2x6 - 1ft flat Ethernet cables - Flat is the key here. They are WAY more flexible. If you try to do this with regular 1ft cables you'll find them inflexible and frustrating. Get extras. 1 - Anker PowerPort 6 Port USB Charging Hub - Regardless of this entire blog post, this product is amazing. It's almost the same physical size as a Raspberry Pi, so it fits perfect at the bottom of your stack. It puts out 2.4a per port AND (wait for it) it includes SIX 1ft Micro USB cables...perfect for running 6 Raspberry Pis with a single power adapter. 1 - 7 layer Raspberry Pi Clear Case Enclosure - I only used 6 of these, which is cool. I love this case, and it looks fantastic. 1 - Black Box USB-Powered 8-Port Switch - This is another amazing and AFAIK unique product. An overarching goal for this little stack is that it be easy to move around and set up but also to power. We have power to spare, so I'd like to avoid a bunch of "wall warts" or power adapters. This is an 8 port switch that can be powered over a Raspberry Pi's USB. Because I'm given up to 2.4A to each micro USB, I just plugged this hub into one of the Pis and it worked no problem. It's also...wait for it...the size of a Pi. It also include magnets for mounting. 1 - Some Small Router - This one is a little tricky and somewhat optional. You can just put these Pis on your own Wifi and access them that way, but you need to think about how they get their IP address. Who doles out IPs via DHCP? Static Leases? Static IPs completely? The root question is - How portable do you want this stack to be? I propose you give them their own address space and their own router that you then use to bridge to other places. Easiest way is with another router (you likely have one lying around, as I did. Could be any router...and remember hub/switch != router. Here is a bad network diagram that makes the point, I think. The idea is that I should be able to go to a hotel or another place and just plug the little router into whatever external internet is available and the cluster will just work. Again, not needed unless portability matters to you as it does to me. 
You could ALSO possibly get this to work with a Travel Router but then the external internet it consumed would be just Wifi and your other clients would get on your network subnet via Wifi as well. I wanted the relative predictability of wired. What I WISH existed was a small router - similar to that little 8 port hub - that was powered off USB and had an internal and external Ethernet port. This ZyXEL Travel Router is very close...hm... Optional - Pelican Case if you want portability. I'll see what airport security thinks. O_O Optional - Tiny Keyboard and Mouse - Raspberry Pis can put out about 500mA per port for mice and keyboards. The number one problem I see with Pis is not giving them enough power and/or then having an external device take too much and then destabilize the system. This little keyboard is also a touchpad mouse and can be used to debug your Pi when you can't get remote access to it. You'll also want an HMDI cable occasionally. You're Rich - If you have money to burn, get the 7" Touchscreen Display and a Case for it, just to show off htop in color on one of the Pis. Dodgey Network Diagram Disclaimer OK, first things first, a few disclaimers. The software in this space is moving fast. There's a non-zero chance that some of this software will have a new version out before I finish this blog post. In fact, when I was setting up Kubernetes, I created a few nodes, went to bed for 6 hours, came back and made a few more nodes and a new version had come out. Try to keep track, keep notes, and be aware of what works with what. Next, I'm just learning this stuff. I may get some of this wrong. While I've built (very) large distributed systems before, my experience with large orchestrators (primarily in banks) was with large proprietary ones in Java, C++, COM, and later in C#, .NET 1.x,2.0, and WCF. It's been really fascinating to see how Kubernetes thinks about these things and comparing it to how we thought about these things in the 90s and very early 2000s. A lot of best practices that were HUGE challenges many years ago are now being codified and soon, I hope, will "just work" for a new generation of developer. At least another full page of my resume is being marked [Obsolete] and I'm here for it. Things change and they are getting better. Software Get your Raspberry PIs and SD cards together. Also bookmark and subscribe to Alex Ellis' blog as you're going to find yourself there a lot. He's the author of OpenFaas, which I'll be using today and he's done a LOT of work making this experiment possible. So thank you Alex for being awesome! He has a great post on how Multi-stage Docker files make it possible to effectively use .NET Core on a Raspberry Pi while still building on your main machine. He and I spent a few late nights going around and around to make this easy. Alex has put together a Gist we iterated on and I'll summarize here. You'll do these instructions n times for all machines. You'll do special stuff for the ONE master/boss node and different stuff for the some number of worker nodes. ADVANCED TIP! If you know what you're doing Linux-wise, you should save this excellent prep.sh shell script that Alex made, then SKIP to the node-specific instructions below. If you want to learn more, do it step by step. ALL NODES Burn Jessie to a SD Card You're going to want to get a copy of Raspbian Jesse Lite and burn it to your SD Cards with Etcher, which is the only SD Card Burner you need. It's WAY better than the competition and it's open source. 
You can also try out Hypriot and their "optimized docker image for Raspberry Pi" but I personally tried to get it working reliably for two days and went back to Jessie. No disrespect.

Create an empty file called "ssh" on the SD card's boot partition before you put the card in the Raspberry Pi.

SSH into the new Pi

I'm on Windows so I used WSL (Ubuntu) for Windows that lets me SSH and run Linux natively.

ssh pi@raspberrypi

Login pi, password raspberry.

Change the Hostname

I ran raspi-config, then immediately rebooted with "sudo reboot"

Install Docker

curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker

Disable Swap. This is important - you'll get errors in Kubernetes otherwise.

sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove

Go edit /boot/cmdline.txt with your favorite editor, or use sudo nano /boot/cmdline.txt, and add this at the very end. Don't press enter.

cgroup_enable=cpuset cgroup_enable=memory

Install Kubernetes

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubeadm

MASTER/BOSS NODE

After ssh'ing into my main node, I used ifconfig eth0 to figure out what the IP address was. Ideally you want this to be static (not changing) or at least a static lease. I logged into my router and set it as a static lease, so my main node ended up being 192.168.170.2, and .1 is the router itself.

Then I initialized this main node:

sudo kubeadm init --apiserver-advertise-address=192.168.170.2

This took a WHILE. Like 10-15 min, so be patient. Kubernetes uses this admin.conf for a ton of stuff, so you're going to want a copy in your $HOME folder so you can call "kubectl" easily later; copy it and take ownership.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

When this is done, you'll get a nice printout with a ton of info and a token you have to save. Save it all. I took a screenshot.
A few ways. First, it's listed as nodePort in the function.yml that represents the desired state of the application. We can also run "kubectl get svc" and see the ports for various services. pi@hanselboss1:~ $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE alertmanager NodePort 10.103.43.130 <none> 9093:31113/TCP 1d dotnet-ping ClusterIP 10.100.153.185 <none> 8080/TCP 1d faas-netesd NodePort 10.103.9.25 <none> 8080:31111/TCP 2d gateway NodePort 10.111.130.61 <none> 8080:31112/TCP 2d http-ping ClusterIP 10.102.150.8 <none> 8080/TCP 1d kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d markdownrender NodePort 10.104.121.82 <none> 8080:31118/TCP 1d nodeinfo ClusterIP 10.106.2.233 <none> 8080/TCP 1d prometheus NodePort 10.98.110.232 <none> 9090:31119/TCP 2d See those ports that are outside:insider? You can get to markdownrender directly from 31118 on an internal IP like localhost, or the main/master IP. Those 10.x.x.x are all software networking, you can not worry about them. See? pi@hanselboss1:~ $ curl -4 http://192.168.170.2:31118 -d "# test" <h1>test</h1> pi@hanselboss1:~ $ curl -4 http://10.104.121.82:31118 -d "# test" curl: (7) Failed to connect to 10.104.121.82 port 31118: Network is unreachable Can we access this cluster from another machine? My Windows laptop, perhaps? Access your Raspberry Pi Kubernetes Cluster from your Windows Machine (or elsewhere) I put KubeCtl on my local Windows machine put it in the PATH. I copied the admin.conf over from my Raspberry Pi. You will likely use scp or WinSCP. I made a little local batch file like this. I may end up with multiple clusters and I want it easy to switch between them. SET KUBECONFIG="C:\users\scott\desktop\k8s for pi\admin.conf Once you have Kubectl on another machine that isn't your Pi, try running "kubectl proxy" and see if you can hit your cluster like this. Remember you'll get weird "Connection refused" if kubectl thinks you're talking to a local cluster. Here you can get to localhost:8001/api and move around, then you've successfully punched a hole over to your cluster (proxied) and you can treat localhost:8001 as your cluster. So "kubectl proxy" made that possible. If you have WSL (Windows Subsystem for Linux) - and you should - then you could also do this and TUNNEL to the API. But I'm going to get cert errors and generally get frustrated. However, tunneling like this to other apps from Windows or elsewhere IS super useful. What about the Kubernetes Dashboard? ~ $ sudo ssh -L 8001:10.96.0.1:443 pi@192.168.170.2 I'm going to install the Kubernetes Dashboard like this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard-arm.yaml Pay close attention to that URL! There are several sites out there that may point to older URLs, non ARM dashboard, or use shortened URLs. Make sure you're applying the ARM dashboard. I looked here https://github.com/kubernetes/dashboard/tree/master/src/deploy. Notice I'm using the "alternative" dashboard. That's for development and I'm saying I don't care at all about security when accessing it. Be aware. I can see where my Dashboard is running, the port and the IP address. pi@hanselboss1:~ $ kubectl get svc --namespace kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 2d kubernetes-dashboard ClusterIP 10.98.2.15 <none> 80/TCP 2d NOW I can punch a hole with that nice ssh tunnel... 
~ $ sudo ssh -L 8080:10.98.2.15:80 pi@192.168.170.2 I can access the Kubernetes Dashboard now from my Windows machine at http://localhost:8080 and hit Skip to login. Do note the Namespace dropdown and think about what you're viewing. There's the kube-system stuff that manages the cluster Adding OpenFaas and calling a serverless function Let's go to the next level. We'll install OpenFaas - think Azure Functions or Amazon Lambda, except for your own Docker and Kubernetes cluster. To be clear, OpenFaas is an Application that we will run on Kubernetes, and it will make it easier to run other apps. Then we'll run other stuff on it...just some simple apps like Hello World in Python and .NET Core. OpenFaas is one of several open source "Serverless" solutions. Do you need to use OpenFaas? No. But if your goal is to write a DoIt() function and put it on your little cluster easily and scale it out, it's pretty fabulous. Remember my definition of Serverless...there ARE servers, you just don't think about them. Serverless Computing is like this - Your code, a slider bar, and your credit card. Let's go. .NET Core on OpenFaas on Kubernetes on Raspberry Pi I ssh'ed into my main/master cluster Pi and set up OpenFaas: git clone https://github.com/alexellis/faas-netes && cd faas-netes kubectl apply -f faas.armhf.yml,rbac.yml,monitoring.armhf.yml Once OpenFaas is installed on your cluster, here's Alex's great instructions on how to setup your first OpenFaas Python function, so give that a try first and test it. Once we've installed that Python function, we can also hit http://192.168.170.2:31112/ui/ (where that's your main Boss/Master's IP) and see it the OpenFaas UI. OpenFaas and the "faas-netes" we setup above automates the build and deployment of our apps as Docker Images to Kuberetes. It makes the "Developer's Inner Loop" simpler. I'm going to make my .NET app, build, deploy, then change, build, deploy and I want it to "just work" on my cluster. And later, and I want it to scale. I'm doing .NET Core, and since there is a runtime for .NET Core for Raspberry Pi (and ARM system) but no SDK, I need to do the build on my Windows machine and deploy from there. Quick Aside: There are docker images for ARM/Raspberry PI for running .NET Core. However, you can't build .NET Core apps (yet?) directly ON the ARM machine. You have to build them on an x86/x64 machine and then get them over to the ARM machine. That can be SCP/FTPing them, or it can be making a docker container and then pushing that new docker image up to a container registry, then telling Kubernetes about that image. K8s (cool abbv) will then bring that ARM image down and run it. The technical trick that Alex and I noticed was of course that since you're building the Docker image on your x86/x64 machine, you can't RUN any stuff on it. You can build the image but you can't run stuff within it. It's an unfortunate limitation for now until there's a .NET Core SDK on ARM. What's required on my development machine (not my Raspberry Pis? I installed KubeCtl (see above) in the PATH I installed OpenFaas's Faas-CLI command line and put it in the PATH I installed Docker for Windows. You'll want to make sure your machine has some flavor of Docker if you have a Mac or Linux machine. I ran docker login at least once. I installed .NET Core from http://dot.net/core Here's the gist we came up with, again thanks Alex! I'm going to do it from Windows. I'll use the faas-cli to make a new function with charp. I'm calling mine dotnet-ping. 
faas-cli new --lang csharp dotnet-ping I'll edit the FunctionHandler.cs to add a little more. I'd like to know the machine name so I can see the scaling happen when it does. using System; using System.Text; namespace Function { public class FunctionHandler { public void Handle(string input) { Console.WriteLine("Hi your input was: "+ input + " on " + System.Environment.MachineName); } } } Check out the .yml file for your new OpenFaas function. Note the gateway IP should be your main Pi, and the port is 31112 which is OpenFaas. I also changed the image to include "shanselman/" which is my Docker Hub. You could also use a local Container Registry if you like. provider: name: faas gateway: http://192.168.170.2:31112 functions: dotnet-ping: lang: csharp handler: ./dotnet-ping image: shanselman/dotnet-ping Head over to the ./template/csharp/Dockerfile and we're going to change it. Ordinarily it's fine if you are publishing from x64 to x64 but since we are doing a little dance, we are going to build and publish the .NET apps as linux-arm from our x64 machine, THEN push it, we'll use a multi stage docker file. Change the default Docker file to this: FROM microsoft/dotnet:2.0-sdk as builder ENV DOTNET_CLI_TELEMETRY_OPTOUT 1 # Optimize for Docker builder caching by adding projects first. RUN mkdir -p /root/src/function WORKDIR /root/src/function COPY ./function/Function.csproj . WORKDIR /root/src/ COPY ./root.csproj . RUN dotnet restore ./root.csproj COPY . . RUN dotnet publish -c release -o published -r linux-arm ADD https://github.com/openfaas/faas/releases/download/0.6.1/fwatchdog-armhf /usr/bin/fwatchdog RUN chmod +x /usr/bin/fwatchdog FROM microsoft/dotnet:2.0.0-runtime-stretch-arm32v7 WORKDIR /root/ COPY --from=builder /root/src/published . COPY --from=builder /usr/bin/fwatchdog / ENV fprocess="dotnet ./root.dll" EXPOSE 8080 CMD ["/fwatchdog"] Notice a few things. All the RUN commands are above the second FROM where we take the results of the first container and use its output to build the second ARM-based one. We can't RUN stuff because we aren't on ARM, right? We use the Faas-Cli to build the app, build the docker container, AND publish the result to Kubernetes. faas-cli build -f dotnet-ping.yml --parallel=1 faas-cli push -f dotnet-ping.yml faas-cli deploy -f dotnet-ping.yml --gateway http://192.168.170.2:31112 And here is the dotnet-ping command running on the pi, as seen within the Kubernetes Dashboard. I can then scale them out like this: kubectl scale deploy/dotnet-ping --replicas=6 And if I hit it multiple times - either via curl or via the dashboard, I see it's hitting different pods: If I want to get super fancy, I can install Grafana - a dashboard manager by running locally in my machine on port 3000 docker run -p 3000:3000 -d grafana/grafana Then I can add OpenFaas a datasource by pointing Grafana to http://192.168.170.2/31119 which is where the Prometheus metrics app is already running, then import the OpenFaas dashboard from the grafana.json file that is in the I cloned it from. Super cool. I'm going to keep using this little Raspberry Pi Kubernetes Cluster to learn as I get ready to do real K8s in Azure! Thanks to Alex Ellis for his kindness and patience and to Jessie Frazelle for making me love both Windows AND Linux! * If you like this blog, please do use my Amazon links as they help pay for projects like this! They don't make me rich, but a few dollars here and there can pay for Raspberry Pis! 
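One last aside: if you'd rather hit the scaled-out function from Python instead of curl or the dashboard, a short script like the one below works too. It only assumes what this post already set up — the OpenFaaS gateway at 192.168.170.2:31112 and the dotnet-ping function — plus the standard requests library; /function/<name> is the gateway's generic function endpoint, and you'd adjust the IP/port for your own cluster.

import requests
from collections import Counter

GATEWAY = "http://192.168.170.2:31112"   # OpenFaaS gateway from this post; change for your cluster

hosts = Counter()
for i in range(20):
    # The gateway exposes each function at /function/<name>; the POST body is the input.
    r = requests.post(f"{GATEWAY}/function/dotnet-ping", data=f"hello {i}", timeout=5)
    r.raise_for_status()
    # dotnet-ping echoes the machine name, so count which pod answered each call.
    hosts[r.text.strip()] += 1

for host, count in hosts.most_common():
    print(f"{count:2d} responses from {host}")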
Sursa: https://www.hanselman.com/blog/HowToBuildAKubernetesClusterWithARMRaspberryPiThenRunNETCoreOnOpenFaas.aspx Linkuri utile: https://kubernetes.io/ https://blog.giantswarm.io/understanding-basic-kubernetes-concepts-i-introduction-to-pods-labels-replicas/ https://dev.to/danielkun/kubernetes-its-alive-2ndc https://www.amazon.com/Kubernetes-Running-Dive-Future-Infrastructure/dp/1491935677/ref=sr_1_1?ie=UTF8&qid=1516176454&sr=8-1&keywords=kubernetes+up+and+running https://jvns.ca/categories/kubernetes/
  21. Since many people have gotten stuck at level 3, here is an additional hint for this level: SGFpIHNhIG51bWFyYW06DQoxIGNhcmFjdGVyDQoyIGNhcmFjdGVyZQ0KMyBjYXJhY3RlcmUNCi4uLg== P.S. there are 2 hints
  22. http://books.goalkicker.com/
  23. BitConnect said it’s closing the company’s cryptocurrency exchange and lending operation after receiving two cease-and-desist letters from state authorities for the unauthorized sale of securities and suffering from denial-of-service attacks. The Texas State Securities Board and North Carolina Secretary of State Securities Division warned that the firm isn’t registered to sell securities in those states, the company said on its website Tuesday. BitConnect offered to let people receive interest on their digital coin balance by lending or investing their capital. BitConnect’s token, BCC, was among the world’s top-20 most successful tokens until its price plunged 65 percent since Jan. 3, as the states announced the actions. Launched a year ago, the coin still has a market cap of almost $1 billion. States are becoming more active, and joining federal authorities in going after businesses related to bitcoin and other digital currencies that they allege try to avoid proper registrations. The Securities and Exchange Commission recently halted an initial coin offering -- in which a startup issues shares to raise funds -- of Munchee Inc., after it was found to have offered unregistered securities. BitConnect.com will continue to operate a wallet and a news site, it said. Sursa: https://www.bloomberg.com/technology
  24. his site is dedicated to helping you start your own Internet Service Provider. Specifically, this guide is about building a Wireless ISP (WISP). This guide is focused on the very earliest stages of starting a WISP - determining feasibility up through connecting the first few customers. There are many challenges that will come up at 100, 1,000 or 10,000 customers that are not (yet) covered in this guide. For context, this site is the result of this discussion on Hacker News. Join the discussion! Chat with me (the author) and others interested in this kind of thing here: #startyourownisp:matrix.org. This site is a work in progress! Only some of the content is up so far and there’s still some bugs in the interface. Use the form on the bottom left to be notified of new updates. Getting Started# What is a WISP? And why might you want to build one? Also defines some terminology. Costs What does it cost to build a wireless Internet Service Provider? (Link to a Google Sheet that you can copy and customize.) About Me Who am I? Why am I doing this? Step by Step Guide# Step 1: Evaluate an Area: Make sure your area is a good candidate for a Wireless Internet network. Step 2: Find a Fiber Provider: Find a building where you can purchase a fiber connection and use the rooftop to start your wireless network. Step 3: Find Relay Sites: Extend your network wirelessly toward your customers. Step 4: Pick a Hardware Platform: Evaluate available options for wireless hardware. Step 5: Billing and Customer Management: Make sure you’re able to get paid and support your customers. Step 6: Network Topology: Design your network topology to make your network reliable and scalable. Routers, switches, IP addresses, VLANs, etc. Step 7: Build your Infrastructure: Install hardware for your fiber connection and your relay sites. Step 8: Install a Customer: Get your first customer online! Step 9: Marketing: Let people know about your service so they can experience a better Internet connection! Step 10: Maintenance: Keep your network running smoothly. Miscellaneous# Tools you’ll want to have A list of the tools you’ll need to install relays sites and customers. Aim a Backhaul A guide describing the proper techniques for aiming backhauls. Designed to be printed out and taken to the site for reference. Backhaul Picker If you just need to get a solid wireless connection from Point A to Point B then use this interactive guide to pick the right equipment and get it set up. Channel Planning Avoid self interference by carefully choosing channels for your access points and backhauls. MDUs (Multiple Dwelling Units) Best practices for providing service to apartment buildings, condos, attached townhomes, etc. Guide to Google Earth Some tips and tricks for using Google Earth to plan and build your network. Weather Proof your Network Rain, snow, ice and wind can all cause problems for a wireless network. Channel Planning Roof and Ladder Safety Stay safe out there! Sursa: https://startyourownisp.com/