Everything posted by Nytro
-
Buffer Overflow Study Case in CodeBlocks 16.01
Nytro replied to Tejvil's topic in Reverse engineering & exploit development
Nice, you can post the other blog posts on the forum too! -
It won't really help; you can try Ghidra for free, or a cracked IDA Pro. What do you need this decompilation for?
-
Hacker cop* In the worst case...
-
If we combine Covid with security... https://nypost.com/2020/08/11/john-mcafee-apparently-arrested-for-wearing-thong-instead-of-face-mask/
-
A Covid-19 patient in hospital. Costs reimbursed (by Soros and Bill Gates, of course! That's what they said on Antena3). This is roughly how one would go about obtaining those sums offered for "fake infections" (before or after death).
-
China is now blocking all encrypted HTTPS traffic that uses TLS 1.3 and ESNI
Nytro replied to Kev's topic in Stiri securitate
TLS 1.3 with ESNI. I think all the details are here: https://blog.cloudflare.com/esni/ -
China is now blocking all encrypted HTTPS traffic that uses TLS 1.3 and ESNI
Nytro replied to Kev's topic in Stiri securitate
This isn't about weak traffic; the point is that what they are blocking now cannot be monitored. In TLS packets you can generally see the destination server (e.g. rstforums.com). Well, with this new feature that is no longer visible, and they have no way of knowing which sites people visit... -
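The point in the reply above, that the destination hostname is normally visible in cleartext, can be demonstrated with a short stdlib-only Python sketch: build a TLS ClientHello in memory (no network needed) and look for the hostname inside it. The SNI extension, which carries it, is exactly what ESNI encrypts.

```python
import ssl

# Build a ClientHello in memory, without touching the network
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
conn = ctx.wrap_bio(incoming, outgoing, server_hostname="rstforums.com")
try:
    conn.do_handshake()
except ssl.SSLWantReadError:
    pass  # expected: the "server" side never answers

client_hello = outgoing.read()
# The SNI hostname is plainly visible to anyone on the path:
print(b"rstforums.com" in client_hello)  # True
```

This is what a middlebox sees when it inspects the first packet of a TLS session; with ESNI that byte string no longer contains the hostname.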
China is now blocking all encrypted HTTPS traffic that uses TLS 1.3 and ESNI
Nytro replied to Kev's topic in Stiri securitate
Classic China. Surveillance everywhere. At least that's how it looks from the outside. Does anyone know a Chinese person who could give us their take? -
Hi, you should start with the area you're most comfortable in, for example web security. Keep one thing in mind: many of them aren't exactly "real-life". That is, even though they're "easy", you'll probably still have to figure out what the author had in mind when creating that exercise. It's normal not to solve many at first. But by trying to solve them, you'll learn a great deal. In fact, the solution itself doesn't help you at all; what helps is the road you travel to reach the flag.
-
The truth about masks! https://9gag.com/gag/a6KDq5b
-
The explosion in Beirut (Lebanon) DOES NOT EXIST! Did you see it yourselves? Do you know anyone who died there? It's no worse than a firecracker going off! It's a lie the TV stations use to scare us and control us! There are big interests at stake!
-
There are a ton of tools that do this. Try 2-3 of them and find a good password dictionary. Or generate a password list yourself.
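Generating your own list can be as simple as combining base words with common suffixes; a minimal sketch (the word and suffix choices below are purely illustrative):

```python
from itertools import product

base_words = ["admin", "parola", "rstforums"]      # hypothetical bases
suffixes = ["", "1", "123", "2020", "!", "@123"]   # common mangling suffixes

# Cartesian product: every base word with every suffix
wordlist = [w + s for w, s in product(base_words, suffixes)]
print(len(wordlist))  # 18
```

Real tools (crunch, hashcat rules, CeWL, etc.) apply far richer mangling rules, but the idea is the same.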
-
Windows 10: HOSTS file blocking telemetry is now flagged as a risk
Nytro replied to Kev's topic in Stiri securitate
Interesting, but it makes sense. A lot of junkware blocks access to certain sites by putting 127.0.0.1 entries in hosts; it's not just about that telemetry crap. Strictly regarding telemetry, there are more practical methods, like stopping the services and so on. The 127.0.0.1 trick is garbage. -
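For context on the reply above: hosts-based blockers simply map unwanted hostnames to an unroutable or loopback address. A hypothetical excerpt of such a hosts file (the domains shown are illustrative, not a real blocklist):

```
# C:\Windows\System32\drivers\etc\hosts
127.0.0.1  telemetry.example.com
0.0.0.0    ads.example.net
```

Windows Defender now flags such telemetry-blocking entries as SettingsModifier:Win32/HostsFileHijack, which is what the article discusses.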
At that price, for 9000 BTU? No.
-
Red Hat and CentOS systems aren't booting due to BootHole patches
Well, you can't be vulnerable to BootHole if you can't boot your system. Jim Salter - 7/31/2020, 10:43 PM
Security updates intended to patch the BootHole UEFI vulnerability are rendering some Linux systems unable to boot at all.

Early this morning, an urgent bug showed up at Red Hat's Bugzilla bug tracker: a user discovered that the RHSA-2020:3216 grub2 security update and RHSA-2020:3218 kernel security update rendered an RHEL 8.2 system unbootable. The bug was reported as reproducible on any clean minimal install of Red Hat Enterprise Linux 8.2.

The patches were intended to close a newly discovered vulnerability in the GRUB2 boot manager called BootHole. The vulnerability itself left a method for system attackers to potentially install "bootkit" malware on a Linux system despite that system being protected with UEFI Secure Boot.

RHEL and CentOS
Unfortunately, Red Hat's patches to GRUB2 and the kernel, once applied, are leaving patched systems unbootable. The issue is confirmed to affect RHEL 7.8 and RHEL 8.2, and it may affect RHEL 8.1 and 7.9 as well. The RHEL-derivative distribution CentOS is also affected.

Red Hat is currently advising users not to apply the GRUB2 security patches (RHSA-2020:3216 or RHSA-2020:3217) until these issues have been resolved. If you administer a RHEL or CentOS system and believe you may have installed these patches, do not reboot your system. Downgrade the affected packages using sudo yum downgrade shim\* grub2\* mokutil and configure yum not to upgrade those packages by temporarily adding exclude=grub2* shim* mokutil to /etc/yum.conf.
If you've already applied the patches and attempted (and failed) to reboot, boot from an RHEL or CentOS DVD in Troubleshooting mode, set up the network, then perform the same steps outlined above in order to restore functionality to your system.

Other distributions
Although the bug was first reported in Red Hat Enterprise Linux, apparently related bug reports are rolling in from other distributions from different families as well. Ubuntu and Debian users are reporting systems which cannot boot after installing GRUB2 updates, and Canonical has issued an advisory including instructions for recovery on affected systems. Although the impact of the GRUB2 bug is similar, the scope may be different from distribution to distribution; so far it appears the Debian/Ubuntu GRUB2 bug is only affecting systems which boot in BIOS (not UEFI) mode. A fix has already been committed to Ubuntu's proposed repository, tested, and released to its updates repository. The updated and released packages, grub2 (2.02~beta2-36ubuntu3.27) for xenial and grub2 (2.04-1ubuntu26.2) for focal, should resolve the problem for Ubuntu users. For Debian users, the fix is available in the newly committed package grub2 (2.02+dfsg1-20+deb10u2). We do not have any word at this time about flaws in or impact of GRUB2 BootHole patches on other distributions such as Arch, Gentoo, or Clear Linux.

Source: https://arstechnica.com/gadgets/2020/07/red-hat-and-centos-systems-arent-booting-due-to-boothole-patches/
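The temporary package hold described in the article is a one-line addition to yum's configuration. A sketch of the /etc/yum.conf fragment (remove the line once fixed grub2/shim packages ship):

```
[main]
exclude=grub2* shim* mokutil
```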
-
17-Year-Old 'Mastermind', 2 Others Behind the Biggest Twitter Hack Arrested
July 31, 2020 - Mohit Kumar

A 17-year-old teen and two other individuals, aged 19 and 22, have reportedly been arrested for allegedly masterminding the recent Twitter hack that simultaneously targeted several high-profile accounts within minutes as part of a massive bitcoin scam.

According to the U.S. Department of Justice, Mason Sheppard, aka "Chaewon," 19, from the United Kingdom, Nima Fazeli, aka "Rolex," 22, from Florida, and an unnamed juvenile were charged this week with conspiracy to commit wire fraud, conspiracy to commit money laundering, and the intentional access of a protected computer.

Florida news channel WFLA this week identified a 17-year-old teen named Graham Clark of Tampa Bay in connection with the Twitter hack; he is probably the juvenile the U.S. Department of Justice mentioned in its press release. Graham Clark has reportedly been charged with 30 felony counts of communications and organized fraud for scamming hundreds of people using compromised accounts.

On July 15, Twitter faced the biggest security lapse in its history after an attacker managed to hijack nearly 130 high-profile Twitter accounts, including Barack Obama, Kanye West, Joe Biden, Bill Gates, Elon Musk, Jeff Bezos, Warren Buffett, Uber, and Apple. The broadly targeted hack posted similarly worded messages urging millions of followers of each profile to send money to a specific bitcoin wallet address in return for a larger payback. "Everyone is asking me to give back, and now is the time," a tweet from Mr. Gates' account said. "You send $1,000; I send you back $2,000."

The targeted profiles also included some popular cryptocurrency-focused accounts, such as Bitcoin, Ripple, CoinDesk, Gemini, Coinbase, and Binance. The fraud scheme helped the attackers reap more than $100,000 in Bitcoin from several victims within just a few hours after the tweets were posted.

As suspected on the day of the attack, Twitter later admitted that the attackers compromised its employees' accounts with access to the internal tools and gained unauthorized access to the targeted profiles. In its statement, Twitter also revealed that some of its employees were targeted through a phone spear-phishing attack, misleading "certain employees and exploit[ing] human vulnerabilities to gain access to our internal systems."

Twitter said a total of 130 user accounts were targeted in the latest attack, out of which only 45 verified accounts were exploited to publish scam tweets. It also mentioned that the attackers accessed the Direct Message inboxes of at least 36 accounts, whereas only eight accounts' information was downloaded using the "Your Twitter Data" archive tool.

"There is a false belief within the criminal hacker community that attacks like the Twitter hack can be perpetrated anonymously and without consequence," said U.S. Attorney Anderson. "Today's charging announcement demonstrates that the elation of nefarious hacking into a secure environment for fun or profit will be short-lived. Criminal conduct over the Internet may feel stealthy to the people who perpetrate it, but there is nothing stealthy about it. In particular, I want to say to would-be offenders, break the law, and we will find you."

"We've significantly limited access to our internal tools and systems. Until we can safely resume normal operations, our response times to some support needs and reports will be slower," Twitter added.

This is a developing story and will be updated as additional details become available.

Source: https://thehackernews.com/2020/07/twitter-hacker-arrested.html
-
@andr82 - The classic one: session cookies. That's about all I can think of, I have no other ideas. But in the end it depends on what each developer wants. In the past there were applications that set the username and password as cookies, and that was the authentication...
-
Yes, SAML is pretty RARE anyway. And there are some rules for JWT too:
- Prefer asymmetric signing, or a "strong" secret if using symmetric signing
- Generate the JWT only after the user has actually logged in, not at signup for example
- Always verify the signature, and don't allow the nonsense with the 'none' algorithm
- Don't put sensitive data in the JWT
- Don't reuse the same secret to generate JWTs for different applications, or someone will be able to swap them between apps
- Implement the flows correctly
- Avoid open redirects, especially in redirect_url
- Implement anti-CSRF protections
Although the list looks long, it still seems much simpler, more practical and more efficient to me than SAML.
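Two of these rules (pin the algorithm so 'none' is rejected, and verify the signature with a constant-time compare) can be sketched with a minimal stdlib-only HS256 implementation. This is a toy for illustration; in practice use a maintained library such as PyJWT:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: bytes) -> bytes:
    # Restore stripped padding before decoding
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def sign_jwt(payload: dict, secret: bytes) -> bytes:
    # Rule: only mint a token after the user has actually authenticated
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return signing_input + b"." + sig

def verify_jwt(token: bytes, secret: bytes) -> dict:
    header_b64, body_b64, sig_b64 = token.split(b".")
    header = json.loads(b64url_decode(header_b64))
    # Rule: pin the expected algorithm; never honor 'none' from the token
    if header.get("alg") != "HS256":
        raise ValueError("unexpected algorithm")
    expected = b64url(hmac.new(secret, header_b64 + b"." + body_b64,
                               hashlib.sha256).digest())
    # Rule: constant-time signature comparison, always performed
    if not hmac.compare_digest(expected, sig_b64):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(body_b64))
```

A vulnerable verifier is one that reads `alg` from the attacker-controlled header and obediently skips the signature check when it says "none"; pinning the algorithm server-side closes that hole.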
-
About time, too; it should move to some embedded Linux. Anyway, what they did here is hardcore: "To analyze whether the module exists a chip debug port, we scan the SoC with X-Ray to figure out the pins, which avoided damage caused by disassembling the equipment."
-
Seeing (Sig)Red
Reading time ~13 min - Posted by Felipe Molina de la Torre on 20 July 2020 - Categories: Blue team, Digital forensics, Suricata

After the SigRed (CVE-2020-1350) write-up was published by Check Point, there was enough detailed information for the smart people, like Hector and others of the Twitterverse (careful with the fake PoC!), to swiftly write a proof of concept to crash Windows DNS. CP did not publish enough details about how to convert this into an RCE, so it looks like a PoC to execute code is still going to take some time to surface. In this post I will describe how I created a Suricata rule to detect exploitation attempts of CVE-2020-1350.

As Windows exploitation and debugging is not my strong point, I decided to jump onto the defensive side of the vulnerability to help blue teams and sysadmins detect the attack before their Domain Controllers catch fire. I started by reading the Check Point post, and others, detailing the vulnerability to get as much detail as possible to create a Suricata IDS signature with which to detect exploitation attempts on your network. I chose Suricata because it is a highly popular network IDS solution, it's open source, and it is easy to install and configure.

This post will not describe the details of the vulnerability itself, as the original Check Point post and all the subsequent articles published on the Internet should be sufficient for anyone to understand the inner workings of it. Thus, I will assume you have read the details and understand the exploitation vector.

Suricata Rules Syntax
Before delving into the details, we should first understand the syntax of a Suricata rule.
This is an example extracted from their documentation:

drop tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"ET TROJAN Likely Bot Nick in IRC (USA +..)"; flow:established,to_server; flowbits:isset,is_proto_irc; content:"NICK "; pcre:"/NICK .*USA.*[0-9]{3,}/i"; reference:url,doc.emergingthreats.net/2008124; classtype:trojan-activity; sid:2008124; rev:2;)

In this example, red is the action, green is the header and blue are the options. The action determines what happens when the signature matches and can be "pass", "drop", "reject" or "alert". Check here for more information about the behaviour of each one. The header defines the protocol, IP addresses, ports and direction of the rule. The options section defines the specifics of the rule. To detect SigRed, we are going to work primarily with the last two sections of the rule, the header and the options, leaving the action fixed as an "alert". Bear in mind that Suricata can be configured as an in-line IPS, so you can also specify the "drop" action to protect your corporate Windows DNS servers from SigRed attacks.

Creating the Rules
Reading the original blog post describing the vulnerability, one could infer the following properties that a malicious DNS connection will have:

First suspicious packet:
- Description: As DNS answers over UDP have a 512 byte limit for the length of the payload, the malicious DNS server would need to initiate the conversation over TCP. This is indicated by the DNS flag TC. Therefore, the first suspicious packet will be a SIG answer, coming from the external network (i.e. Internet), directed to your corporate DNS servers with the "answer" and "TC" flags set.
- Protocol/Layer: DNS over UDP.
- Source: Coming from port 53 (DNS answer) of an IP on the external network.
- Destination: To any port of any of our Windows DNS servers.
- Flow: Communications established and flowing from the malicious DNS (server) to the victim DNS (client).
- DNS Flags: ANSWER (bit 1) and TC (bit 6) flags enabled.
- Answer Type: A SIG (0x18) IN (0x01) answer.

Viewing this in Wireshark may look as follows:

Second (more) suspicious packet:
- Description: A DNS SIG IN answer over TCP with an excessive packet length and a compressed signer name pointing to the following byte, coming from the external network (i.e. Internet), directed to your corporate DNS servers.
- Protocol/Layer: DNS over TCP.
- Source: Coming from port 53 (DNS answer) of an IP on the external network.
- Destination: To any port of any of our Windows DNS servers.
- Flow: Communications established and flowing from the malicious DNS (server) to the victim DNS (client).
- DNS Flags: ANSWER (bit 1) enabled.
- Answer Type: A SIG (0x18) IN (0x01) answer.
- Packet Length: Greater than 65280 bytes (0xFF00).
- Compressed Signer Name: Pointer to the first character of the queried domain name, which is usually the byte 0xC00D, or any other value greater than 0x0D pointing to other characters of the queried domain name. Hint: Take a look here to understand how message compression works on DNS, and why it specifically has to be this value.

Again, within Wireshark that may look as follows (example of the second malicious DNS packet observed on the network in a SigRed attack).

Knowing what the suspicious packets may look like, we can translate this into Suricata syntax. (This is a process that took me a while, as it was the first time I was dealing with these kinds of rules and I had to make my way around the Suricata documentation.)
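The compressed-name detail is worth unpacking: in DNS, a name field whose first two bits are 11 is a compression pointer, and the remaining 14 bits are an offset from the start of the message (RFC 1035, section 4.1.4). A small sketch of why 0xC00D, and anything above 0x0C in the second byte, matters:

```python
def pointer_offset(b0: int, b1: int) -> int:
    """Decode a DNS compression pointer (two bytes, top bits 0b11)."""
    assert b0 & 0xC0 == 0xC0, "not a compression pointer"
    return ((b0 & 0x3F) << 8) | b1

# A standard query's name starts right after the 12-byte DNS header,
# so 0xC00C points at the length byte of the queried name, and
# 0xC00D points at its first character -- the value used in the PoCs.
print(pointer_offset(0xC0, 0x0C))  # 12
print(pointer_offset(0xC0, 0x0D))  # 13
```

Any pointer landing inside the queried name (offset 13 or beyond) can trigger the overflow, which is why the final rule checks for a second byte greater than 0x0C instead of matching 0xC00D literally.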
In summary, for the first approach, the rules for the two packets could be translated into the following (note the corresponding colours of the rule to the description above):

First Rule:
alert dns $EXTERNAL_NET 53 -> $DNS_SERVERS any (msg:"Windows DNS Exploit (TC header)"; flow:established,to_client; classtype:denial-of-service; byte_test:2,&,0x82,2; content:"|00 00 18 00 01|"; within:120; reference:cve,2020-1350; sid:666661; rev:1;)

Second Rule:
alert tcp $EXTERNAL_NET 53 -> $DNS_SERVERS any (msg:"Windows DNS Exploit (Compressed SIG record)"; flow:established,to_client; classtype:denial-of-service; byte_test:2,>,0xFF00,0; byte_test:2,&,0x80,4; content:"|00 00 18 00 01|"; within:120; content:"|c0 0d|"; reference:cve,2020-1350; sid:666662; rev:2;)

For these rules to work in your environment, you will need to open your "suricata.yaml" configuration file and add an array with your corporate DNS IP addresses in the variable $DNS_SERVERS (figure: defining our internal DNS IP addresses). After adding your DNS IP addresses there, save these two rules in a text file called "sigred.rules", place it in the default "rules" folder usually located at "/etc/suricata/rules", and enable it in the "suricata.yaml" configuration file under the "rule-files" section (figure: multiple SigRed rules added to our Suricata installation).

To test whether the rules are working or not, one can run the Suricata client against PoC traffic captures. I tested these rules against three different traffic captures. The first one was the PCAP found in the first PoC written by Maxpl0it, the second one was from the PoC written by Hector (thank you for the pcap), and the third one was a traffic capture I grabbed myself from one of our Jumpboxes. To test your rules against a PCAP, execute the following command on your Suricata box:

suricata -r /home/user/sigred-dos-poc.pcapng -l /var/log/suricata/

(Figure: detection of SigRed exploit with Suricata.)

I noticed two limitations in this first approach when writing the Suricata rules.
The first limitation was that the first rule alone will trigger a fair amount of false positives on the network, as transitioning to DNS over TCP with SIG requests is not uncommon. So, the rule is only really useful when triggered together with the second one. If you only have the first rule triggered in your Suricata setup, it may be worth investigating it as a potential incident (especially in the first weeks since the publication of the vulnerability), but if you have both rules triggered you can be almost sure someone is doing nasty things in your corporate network.

The second limitation is that we are making our rule search for a hard-coded compressed Signer's name with the value 0xC00D. This compressed signer's name points to the first character of the queried domain (that is, the "9" in the domain name "9.ibrokethe.net") to trigger the heap overflow (see "DNS Pointer Compression - Less is More" in the CP blog post). This is the value hard-coded in the Check Point blog post and the Maxpl0it PoC, but in reality you can point to other characters in the string and the exploit will still work without being detected by our Suricata rule.

To tackle the first limitation, we can link our two Suricata rules together. To do so, we introduce a new rule option called xbits. With this option, we set a flag (tc_requested) when the first rule is triggered. This flag can later be queried by the second rule with the "isset" operator. If the flag is set, the second rule will trigger an alert; if the flag is not set, it will not. By introducing the xbits option we make our rules tightly coupled with each other, and the degree of certainty that we are facing the SigRed exploit is significantly increased.

To tackle the second limitation, we should make our Suricata rule aware of other valid compressed names that will trigger the vulnerability, such as 0xC00E, 0xC00F, 0xC010, etc.
Thus, we can add a byte_test comparison to look for values greater than 0x0c in the second byte of the compressed name. After addressing these two limitations, we end up with the following rules (note the red colour part of the rule, which addresses the false positive limitation by combining the rules, and the blue colour part, which addresses other valid values of compressed names):

alert dns $EXTERNAL_NET 53 -> $DNS_SERVERS any (msg:"Windows DNS Exploit (TC header)"; flow:established,to_client; classtype:denial-of-service; byte_test:2,&,0x82,2; content:"|00 00 18 00 01|"; within:120; xbits:set,tc_requested,track ip_pair; noalert; reference:cve,2020-1350; sid:666661; rev:2;)

alert tcp $EXTERNAL_NET 53 -> $DNS_SERVERS any (msg:"Windows DNS SigRed Exploit (Compressed SIG record)"; flow:established,to_client; classtype:denial-of-service; byte_test:2,>,0xFF00,0; byte_test:2,&,0x80,4; content:"|00 00 18 00 01|"; within:120; content:"|c0|"; within:31; byte_test:1,>,0x0c,0,relative; xbits:isset,tc_requested,track ip_pair; reference:cve,2020-1350; sid:666662; rev:3;)

With these two rules, you would learn about the victim DNS server being targeted in the attack and the external malicious DNS server sending the payload. Once you have this information, you would be able to take further incident response measures, such as blocking the malicious domain on your perimeter. What these two rules will not tell you is which internal host was targeted by the exploit. Assuming we have already detected the malicious domain name with the two previous rules, e.g. "ibrokethe.net", we can identify the attacker or victim of the attack by tweaking two more Suricata rules as follows:

The first one checks whether an insider is manually trying to attack corporate DNS servers, that is, manually executing a query to the malicious DNS server using something like nslookup or dig.
In this case, one would see a DNS query packet trying to resolve the SIG (0x18) IN (0x01) entry of a malicious domain (ibrokethe.net), directed to our corporate DNS server (figure: DNS SIG IN query to resolve the malicious domain name).

The second one checks whether an employee has been the target of an attack like the one described in the original Check Point post, by smuggling DNS data inside the HTTP protocol (see the "Triggering From the Browser" section of the CP post). In this case, one would see a malformed DNS packet.

After identifying all of the relevant properties that a malicious DNS packet could have, we can create the following two rules to detect the insider or the victim of this attack for a specific domain that you have detected (note the corresponding explanatory colours in the rule):

alert dns $HOME_NET any -> $DNS_SERVERS 53 (msg:"Windows SigRed DNS Exploit (Insider Identification)"; classtype:denial-of-service; flow:to_server; byte_test:1,!&,0xF8,2; content:"|09|ibrokethe|03|net"; content:"|00 00 18 00 01|"; within:5; reference:cve,2020-1350; sid:666663; rev:1;)

alert tcp $HOME_NET any -> $DNS_SERVERS 53 (msg:"Windows SigRed DNS Exploit (Victim Identification)"; classtype:denial-of-service; flow:to_server; byte_test:1,!&,0xF8,4; content:"|50 4f 53 54 20 2f|"; offset:0; depth:6; content:"|09|ibrokethe|03|net"; distance:20567; within:100; content:"|00 00 18 00 01|"; within:5; reference:cve,2020-1350; sid:666664; rev:1;)

With these four rules we would be able to identify a SigRed attack on a network and obtain the following details:
- The source IP address the attack originated from, and potentially whether it was a victim of the attack (rule 666664) or an active part in the attack (rule 666663).
- The target DNS server of the attack (rules 666661 and 666662).
- The malicious domain (rules 666661 and 666662).
- The malicious external DNS server (rules 666661 and 666662).

I hope these rules are useful for someone out there and help protect your corporate networks.
Good luck catching the bad guy!

Bibliography & Resources
https://research.checkpoint.com/2020/resolving-your-way-into-domain-admin-exploiting-a-17-year-old-bug-in-windows-dns-servers/
https://www.ietf.org/rfc/rfc1035.txt
http://www.tcpipguide.com/free/t_DNSNameNotationandMessageCompressionTechnique-2.htm
https://github.com/maxpl0it/CVE-2020-1350-DoS
https://www.immagic.com/eLibrary/ARCHIVES/GENERAL/WIKIPEDI/W120423L.pdf
https://suricata.readthedocs.io/en/suricata-5.0.3/
https://www.securityartwork.es/2013/02/21/snort-byte_test-for-dummies-2/

Source: https://sensepost.com/blog/2020/seeing-sigred/
-
Towards native security defenses for the web ecosystem
July 22, 2020 - Posted by Artur Janc and Lukas Weichselbaum, Information Security Engineers

With the recent launch of Chrome 83, and the upcoming release of Mozilla Firefox 79, web developers are gaining powerful new security mechanisms to protect their applications from common web vulnerabilities. In this post we share how our Information Security Engineering team is deploying Trusted Types, Content Security Policy, Fetch Metadata Request Headers and the Cross-Origin Opener Policy across Google to help guide and inspire other developers to similarly adopt these features to protect their applications.

History
Since the advent of modern web applications, such as email clients or document editors accessible in your browser, developers have been dealing with common web vulnerabilities which may allow user data to fall prey to attackers. While the web platform provides robust isolation for the underlying operating system, the isolation between web applications themselves is a different story. Issues such as XSS, CSRF and cross-site leaks have become unfortunate facets of web development, affecting almost every website at some point in time.

These vulnerabilities are unintended consequences of some of the web's most wonderful characteristics: composability, openness, and ease of development. Simply put, the original vision of the web as a mesh of interconnected documents did not anticipate the creation of a vibrant ecosystem of web applications handling private data for billions of people across the globe. Consequently, the security capabilities of the web platform meant to help developers safeguard their users' data have evolved slowly and provided only partial protections from common flaws.
Web developers have traditionally compensated for the platform's shortcomings by building additional security engineering tools and processes to protect their applications from common flaws; such infrastructure has often proven costly to develop and maintain. As the web continues to change to offer developers more impressive capabilities, and web applications become more critical to our lives, we find ourselves in increasing need of more powerful, all-encompassing security mechanisms built directly into the web platform. Over the past two years, browser makers and security engineers from Google and other companies have collaborated on the design and implementation of several major security features to defend against common web flaws. These mechanisms, which we focus on in this post, protect against injections and offer isolation capabilities, addressing two major, long-standing sources of insecurity on the web.

Injection Vulnerabilities
In the design of systems, mixing code and data is one of the canonical security anti-patterns, causing software vulnerabilities as far back as the 1980s. It is the root cause of vulnerabilities such as SQL injection and command injection, allowing the compromise of databases and application servers. On the web, application code has historically been intertwined with page data. HTML markup such as <script> elements or event handler attributes (onclick or onload) allow JavaScript execution; even the familiar URL can carry code and result in script execution when navigating to a javascript: link. While sometimes convenient, the upshot of this design is that, unless the application takes care to protect itself, data used to compose an HTML page can easily inject unwanted scripts and take control of the application in the user's browser.
Addressing this problem in a principled manner requires allowing the application to separate its data from code; this can be done by enabling two new security features: Trusted Types and Content Security Policy based on script nonces.

Trusted Types
Main article: web.dev/trusted-types by Krzysztof Kotowicz

JavaScript functions used by developers to build web applications often rely on parsing arbitrary structure out of strings. A string which seems to contain data can be turned directly into code when passed to a common API, such as innerHTML. This is the root cause of most DOM-based XSS vulnerabilities. Trusted Types make JavaScript code safe-by-default by restricting risky operations, such as generating HTML or creating scripts, to require a special object: a Trusted Type. The browser will ensure that any use of dangerous DOM functions is allowed only if the right object is provided to the function. As long as an application produces these objects safely in a central Trusted Types policy, it will be free of DOM-based XSS bugs.

You can enable Trusted Types by setting the following response header:

We have recently launched Trusted Types for all users of My Google Activity and are working with dozens of product teams across Google as well as JavaScript framework owners to make their code support this important safety mechanism. Trusted Types are supported in Chrome 83 and other Chromium-based browsers, and a polyfill is available for other user agents.

Content Security Policy based on script nonces
Main article: Reshaping web defenses with strict Content Security Policy

Content Security Policy (CSP) allows developers to require every <script> on the page to contain a secret value unknown to attackers.
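(The Trusted Types response header referenced in the section above appears to have been an image in the original post. Per the web.dev/trusted-types article it links to, enforcement is enabled with a CSP directive along these lines:)

```
Content-Security-Policy: require-trusted-types-for 'script'
```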
The script nonce attribute, set to an unpredictable number for every page load, acts as a guarantee that a given script is under the control of the application: even if part of the page is injected by an attacker, the browser will refuse to execute any injected script which doesn't identify itself with the correct nonce. This mitigates the impact of any server-side injection bugs, such as reflected XSS and stored XSS.

CSP can be enabled by setting the following HTTP response header:

This header requires all scripts in your HTML templating system to include a nonce attribute with a value matching the one in the response header:

Our CSP Evaluator tool can help you configure a strong policy. To help deploy a production-quality CSP in your application, check out this presentation and the documentation on csp.withgoogle.com. Since the initial launch of CSP at Google, we have deployed strong policies on 75% of outgoing traffic from our applications, including in our flagship products such as GMail and Google Docs & Drive. CSP has mitigated the exploitation of over 30 high-risk XSS flaws across Google in the past two years. Nonce-based CSP is supported in Chrome, Firefox, Microsoft Edge and other Chromium-based browsers. Partial support for this variant of CSP is also available in Safari.

Isolation Capabilities
Many kinds of web flaws are exploited by an attacker's site forcing an unwanted interaction with another web application. Preventing these issues requires browsers to offer new mechanisms to allow applications to restrict such behaviors. Fetch Metadata Request Headers enable building server-side restrictions when processing incoming HTTP requests; the Cross-Origin Opener Policy is a client-side mechanism which protects the application's windows from unwanted DOM interactions.
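(Back to the CSP section for a moment: the response header and the matching script attribute were images in the original post. Per the csp.withgoogle.com documentation it references, a strict nonce-based policy has roughly this shape, where {random} is a fresh, unpredictable value generated for every response:)

```
Content-Security-Policy: script-src 'nonce-{random}' 'strict-dynamic'; object-src 'none'; base-uri 'none'

<script nonce="{random}" src="/app.js"></script>
```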
Fetch Metadata Request Headers
Main article: web.dev/fetch-metadata by Lukas Weichselbaum

A common cause of web security problems is that applications don't receive information about the source of a given HTTP request, and thus aren't able to distinguish benign self-initiated web traffic from unwanted requests sent by other websites. This leads to vulnerabilities such as cross-site request forgery (CSRF) and web-based information leaks (XS-leaks). Fetch Metadata headers, which the browser attaches to outgoing HTTP requests, solve this problem by providing the application with trustworthy information about the provenance of requests sent to the server: the source of the request, its type (for example, whether it's a navigation or resource request), and other security-relevant metadata. By checking the values of these new HTTP headers (Sec-Fetch-Site, Sec-Fetch-Mode and Sec-Fetch-Dest), applications can build flexible server-side logic to reject untrusted requests, similar to the following:

We provided a detailed explanation of this logic and adoption considerations at web.dev/fetch-metadata. Importantly, Fetch Metadata can both complement and facilitate the adoption of Cross-Origin Resource Policy which offers client-side protection against unexpected subresource loads; this header is described in detail at resourcepolicy.fyi. At Google, we've enabled restrictions using Fetch Metadata headers in several major products such as Google Photos, and are following up with a large-scale rollout across our application ecosystem. Fetch Metadata headers are currently sent by Chrome and Chromium-based browsers and are available in development versions of Firefox.
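The server-side check referenced above was shown as an image in the original post. A sketch of a resource isolation policy in the spirit of web.dev/fetch-metadata (the header names are the real ones; the helper function and its rules are an illustrative assumption, not Google's exact logic):

```python
def allow_request(headers: dict) -> bool:
    """Decide whether to serve a request based on Fetch Metadata headers."""
    site = headers.get("Sec-Fetch-Site")
    # Browsers without Fetch Metadata support send no header: let it through
    if site is None:
        return True
    # Requests from our own origin/site, or user-initiated (e.g. address bar)
    if site in ("same-origin", "same-site", "none"):
        return True
    # Cross-site requests: allow only simple top-level navigations,
    # and never into <object>/<embed> contexts
    return (headers.get("Sec-Fetch-Mode") == "navigate"
            and headers.get("Sec-Fetch-Dest") not in ("object", "embed"))
```

Everything else (cross-site subresource loads, scripted cross-site fetches) gets rejected, which is what defeats CSRF and many XS-leak techniques.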
Cross-Origin Opener Policy

Main article: web.dev/coop-coep by Eiji Kitamura

By default, the web permits some interactions with browser windows belonging to another application: any site can open a pop-up to your webmail client and send it messages via the postMessage API, navigate it to another URL, or obtain information about its frames. All of these capabilities can lead to information leak vulnerabilities; Cross-Origin Opener Policy (COOP) allows you to lock down your application to prevent such interactions.

COOP is enabled by setting an HTTP response header on your application's responses. If your application opens other sites as pop-ups, you may need to set the header value to same-origin-allow-popups instead; see this document for details.

We are currently testing Cross-Origin Opener Policy in several Google applications, and we're looking forward to enabling it broadly in the coming months. COOP is available starting in Chrome 83 and in Firefox 79.

The Future

Creating a strong and vibrant web requires developers to be able to guarantee the safety of their users' data. Adding security mechanisms to the web platform – building them directly into browsers – is an important step forward for the ecosystem: browsers can help developers understand and control aspects of their sites which affect their security posture. As users update to recent versions of their favorite browsers, they will gain protections from many of the security flaws that have affected web applications in the past.

While the security features described in this post are not a panacea, they offer fundamental building blocks that help developers build secure web applications. We're excited about the continued deployment of these mechanisms across Google, and we're looking forward to collaborating with browser makers and the web standards community to improve them in the future.
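Returning to the COOP header described earlier: the two values the post discusses are same-origin and same-origin-allow-popups. A minimal sketch of emitting the header (the helper name is my own):

```python
def coop_header(opens_cross_origin_popups=False):
    """Return the Cross-Origin-Opener-Policy header as a (name, value) pair."""
    # "same-origin" severs the relationship between this window and any
    # cross-origin opener or openee; the relaxed value keeps a reference to
    # pop-ups the application itself opens.
    value = ("same-origin-allow-popups" if opens_cross_origin_popups
             else "same-origin")
    return ("Cross-Origin-Opener-Policy", value)
```

Attach the returned pair to every HTTP response of the application, the same way as any other security response header.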
For more information about web security mechanisms and the bugs they prevent, see the Securing Web Apps with Modern Platform Features Google I/O talk (video).

Source: https://security.googleblog.com/2020/07/towards-native-security-defenses-for.html
-
Malware Reverse Engineering Handbook

Authors: Ahmet Balci, Dan Ungureanu, Jaromir Vondruška

Files: PDF

Malware is a growing threat which causes considerable cost to individuals, companies and institutions. Since basic signature-based antivirus defences are not very useful against recently emerged malware threats or APT attacks, it is essential for an investigator to have the fundamental skill set needed to analyse and mitigate these threats.

This handbook by CCDCOE Technology Branch researchers gives an overview of how to analyse malware executables that target the Windows platform. The authors present the most common techniques used in malware investigation, including setup of a lab environment, network analysis, behavioural analysis, and static and dynamic code analysis. The reader will become familiar with disassemblers, debuggers, sandboxes, and system and network monitoring tools. Incident response and collaboration tools are also introduced. Advanced techniques are out of the scope of this handbook, which can be considered a first step in investigating and dealing with malware.

This research paper is an independent product of the CCDCOE and does not represent the official policy or position of NATO or any of the CCDCOE's Sponsoring Nations. The NATO Cooperative Cyber Defence Centre of Excellence (NATO CCDCOE) is a NATO-accredited knowledge hub, research institution, and training and exercise facility. The Tallinn-based international military organisation focuses on interdisciplinary applied research, as well as consultations, training and exercises in the field of cyber security.

Keywords: malware, debugger, IDA Pro, static, dynamic, collaboration

Source: https://ccdcoe.org/library/publications/malware-reverse-engineering-handbook/
-
Download: https://skygo.360.cn/archive/Security-Research-Report-on-Mercedes-Benz-Cars-en.pdf