Everything posted by Nytro
Posts: 18772 | Days Won: 729
-
New PetitPotam NTLM Relay Attack Lets Hackers Take Over Windows Domains
July 26, 2021 | Ravie Lakshmanan

A newly uncovered security flaw in the Windows operating system can be exploited to coerce remote Windows servers, including Domain Controllers, into authenticating with a malicious destination, thereby allowing an adversary to stage an NTLM relay attack and completely take over a Windows domain.

The issue, dubbed "PetitPotam," was discovered by security researcher Gilles Lionel, who shared technical details and proof-of-concept (PoC) code last week, noting that the flaw works by forcing "Windows hosts to authenticate to other machines via MS-EFSRPC EfsRpcOpenFileRaw function." MS-EFSRPC is Microsoft's Encrypting File System Remote Protocol, used to perform "maintenance and management operations on encrypted data that is stored remotely and accessed over a network."

Specifically, the attack enables a domain controller to authenticate against a remote NTLM relay under a bad actor's control using the MS-EFSRPC interface and share its authentication information. This is done by connecting to LSARPC, resulting in a scenario where the target server connects to an arbitrary server and performs NTLM authentication.

"An attacker can target a Domain Controller to send its credentials by using the MS-EFSRPC protocol and then relaying the DC NTLM credentials to the Active Directory Certificate Services (AD CS) Web Enrollment pages to enroll a DC certificate," TRUESEC's Hasain Alshakarti said. "This will effectively give the attacker an authentication certificate that can be used to access domain services as a DC and compromise the entire domain."
While disabling support for MS-EFSRPC doesn't stop the attack from functioning, Microsoft has since issued mitigations for the issue, characterizing "PetitPotam" as a "classic NTLM relay attack," which permits attackers with access to a network to intercept legitimate authentication traffic between a client and a server and relay those validated authentication requests in order to access network services.

"To prevent NTLM Relay Attacks on networks with NTLM enabled, domain administrators must ensure that services that permit NTLM authentication make use of protections such as Extended Protection for Authentication (EPA) or signing features such as SMB signing," Microsoft noted. "PetitPotam takes advantage of servers where Active Directory Certificate Services (AD CS) is not configured with protections for NTLM Relay Attacks."

To safeguard against this line of attack, the Windows maker recommends that customers disable NTLM authentication on the domain controller. In the event NTLM cannot be turned off for compatibility reasons, the company is urging users to take one of the two steps below:

- Disable NTLM on any AD CS servers in your domain using the group policy "Network security: Restrict NTLM: Incoming NTLM traffic."
- Disable NTLM for Internet Information Services (IIS) on AD CS servers in the domain running the "Certificate Authority Web Enrollment" or "Certificate Enrollment Web Service" services.

PetitPotam marks the third major Windows security issue disclosed over the past month, after the PrintNightmare and SeriousSAM (aka HiveNightmare) vulnerabilities.

Sursa: https://thehackernews.com/2021/07/new-petitpotam-ntlm-relay-attack-lets.html
-
fail2ban – Remote Code Execution
JAKUB ŻOCZEK | July 26, 2021 | Research

This article is about the recently published security advisory for a pretty popular piece of software – fail2ban (CVE-2021-32749). The vulnerability could be massively exploited and lead to root-level code execution on multiple boxes; however, this is rather hard to achieve for a regular person. It all has its roots in the mailutils package, and I found it by total accident when playing with the mail command.

fail2ban analyses logs (or other data sources) in search of brute-force traces in order to block such attempts based on the IP address. There are plenty of rules for different services (SSH, SMTP, HTTP, etc.). There are also defined actions that can be performed after blocking a client. One of these actions is sending an e-mail. If you search the Internet for how to send an e-mail from the command line, you will often get this solution:

$ echo "test e-mail" | mail -s "subject" user@example.org

That is exactly how one of the fail2ban actions is configured to send e-mails about a client getting blocked (./config/action.d/mail-whois.conf):

actionban = printf %%b "Hi,\n
            The IP <ip> has just been banned by Fail2Ban after
            <failures> attempts against <name>.\n\n
            Here is more information about <ip> :\n
            `%(_whois_command)s`\n
            Regards,\n
            Fail2Ban"|mail -s "[Fail2Ban] <name>: banned <ip> from <fq-hostname>" <dest>

There is nothing suspicious about the above, until you learn about one specific thing that can be found in the mailutils manual. It is the tilde escape sequences:

The '~!' escape executes the specified command and returns you to mail compose mode without altering your message. When used without arguments, it starts your login shell. The '~|' escape pipes the message composed so far through the given shell command and replaces the message with the output the command produced.
If the command produced no output, mail assumes that something went wrong and retains the old contents of your message. This is the way it works in real life:

jz@fail2ban:~$ cat -n pwn.txt
     1  Next line will execute command
     2  ~! uname -a
     3
     4  Best,
     5  JZ
jz@fail2ban:~$ cat pwn.txt | mail -s "whatever" whatever@whatever.com
Linux fail2ban 4.19.0-16-cloud-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux
jz@fail2ban:~$

If you go back to the previously mentioned fail2ban e-mail action, you can notice that a whois output is attached to the e-mail body. So if we could add some tilde escape sequence to the whois output for our IP address – well, it should end up with code execution. As root.

What are our options? As attackers, we need to control the whois output – how do we achieve that? Well, the first thing that came to my mind was to kindly ask my ISP to contact RIPE and make a pretty custom entry for my particular IP address. Unfortunately, it doesn't work like that. RIPE/ARIN/APNIC and the others create entries for whole IP ranges at minimum, not for one particular IP address. Also, I'm more than sure that achieving this formally is extremely hard, plus the fact that putting a malicious payload in a whois entry would make people ask questions.

Is there a way to start my own whois server? Surprisingly, there is, and you can find a couple of them running on the Internet. By digging through the whois-related RFC you can find information about an attribute called ReferralServer. If your whois client finds such an attribute in the response, it will query the server set in its value to get more information about the IP address or domain.
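Since the whois output lands in the mail body verbatim, any line in it that begins with a tilde escape becomes live shell input. As a rough illustration (find_tilde_escapes is a hypothetical helper, not part of fail2ban or mailutils), this is the kind of check a hardened action script could apply before piping text into mail:

```python
# Sketch: flag lines that mailutils' mail(1) would treat as tilde escapes
# in compose mode. '~!' runs a shell command; '~|' pipes the message
# through one. Both are dangerous when the body is attacker-influenced.
DANGEROUS_ESCAPES = ("~!", "~|")

def find_tilde_escapes(body):
    """Return (line_number, line) pairs for lines starting with a shell tilde escape."""
    hits = []
    for i, line in enumerate(body.splitlines(), start=1):
        if line.startswith(DANGEROUS_ESCAPES):
            hits.append((i, line))
    return hits

# Simulated whois output with an injected escape on its own line.
whois_output = "inetnum: 157.0.0.0 - 157.255.255.255\n~! uname -a\nnetname: ERX-NETBLOCK"
print(find_tilde_escapes(whois_output))  # [(2, '~! uname -a')]
```

In practice the cleaner fix is to not run mail in interactive-compose mode at all, but the scan above shows how little it takes for untrusted text to carry an escape.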
Just take a look at what happens when getting whois for the 157.5.7.5 IP address:

$ whois 157.5.7.5

#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/resources/registry/whois/tou/
#
# If you see inaccuracies in the results, please report at
# https://www.arin.net/resources/registry/whois/inaccuracy_reporting/
#
# Copyright 1997-2021, American Registry for Internet Numbers, Ltd.
#

NetRange:       157.1.0.0 - 157.14.255.255
CIDR:           157.4.0.0/14, 157.14.0.0/16, 157.1.0.0/16, 157.12.0.0/15, 157.2.0.0/15, 157.8.0.0/14
NetName:        APNIC-ERX-157-1-0-0
NetHandle:      NET-157-1-0-0-1
Parent:         NET157 (NET-157-0-0-0-0)
NetType:        Early Registrations, Transferred to APNIC
OriginAS:
Organization:   Asia Pacific Network Information Centre (APNIC)

[… cut …]

ReferralServer:  whois://whois.apnic.net
ResourceLink:    http://wq.apnic.net/whois-search/static/search.html

OrgTechHandle: AWC12-ARIN
OrgTechName:   APNIC Whois Contact
OrgTechPhone:  +61 7 3858 3188
OrgTechEmail:  search-apnic-not-arin@apnic.net

[… cut …]

Found a referral to whois.apnic.net.

% [whois.apnic.net]
% Whois data copyright terms    http://www.apnic.net/db/dbcopyright.html

% Information related to '157.0.0.0 - 157.255.255.255'

% Abuse contact for '157.0.0.0 - 157.255.255.255' is 'helpdesk@apnic.net'

inetnum:        157.0.0.0 - 157.255.255.255
netname:        ERX-NETBLOCK
descr:          Early registration addresses

[… cut …]

In theory, with a pretty big network, you could probably ask your Regional Internet Registry to use RWhois for your network. On the other hand, simply imagine black hats breaking into a server running rwhois, putting a malicious entry there and then starting the attack. To be fair, this scenario seems way easier than becoming a big company in order to legally run your own whois server.
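A whois client follows such a referral by extracting the host and re-querying it. A minimal sketch of that extraction step (find_referral is a made-up helper name; the sample text is trimmed from the output above):

```python
import re

def find_referral(whois_text):
    """Extract the host from a 'ReferralServer: whois://host' attribute, if any."""
    m = re.search(r"^ReferralServer:\s*whois://(\S+)", whois_text, re.MULTILINE)
    return m.group(1) if m else None

sample = (
    "NetRange:       157.1.0.0 - 157.14.255.255\n"
    "ReferralServer:  whois://whois.apnic.net\n"
    "ResourceLink:    http://wq.apnic.net/whois-search/static/search.html\n"
)
print(find_referral(sample))  # whois.apnic.net
```

The point for the attack is simply that the client blindly trusts this attribute and will fetch (and ultimately pass along) whatever the referred server returns.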
In case you're a government and you can simply control network traffic, the task is way easier. Taking a closer look at the whois protocol, we can notice a few things: it was designed a really long time ago, it's pretty simple (you ask for an IP or domain name and get the raw output), and it's unencrypted at the network level. By simply performing a MITM attack on an unencrypted protocol (which whois is), attackers could just insert the tilde escape sequence and start an attack over multiple hosts.

It's worth remembering that the root problem here is mailutils, which has this flaw by design. I believe a lot of people are unaware of such a feature, and there's still plenty of software that might use the mail command this way. As has been seen many times in history, security is hard and complex. Sometimes a totally innocent functionality that you would never suspect of being a threat can be the cause of a dangerous vulnerability.

Author: Jakub Żoczek

Sursa: https://research.securitum.com/fail2ban-remote-code-execution/
-
Key-Checker

Go scripts for checking API key / access token validity.

Update V1.0.0
- Added 37 checkers!

How to Install

go get github.com/daffainfo/Key-Checker

Reference: https://github.com/streaak/keyhacks

Sursa: https://github.com/daffainfo/Key-Checker
-
Shadow Credentials: Abusing Key Trust Account Mapping for Account Takeover
Elad Shamir

The techniques for DACL-based attacks against User and Computer objects in Active Directory have been established for years. If we compromise an account that has delegated rights over a user account, we can simply reset their password, or, if we want to be less disruptive, we can set an SPN or disable Kerberos pre-authentication and try to roast the account. For computer accounts, it is a bit more complicated, but RBCD can get the job done. These techniques have their shortcomings:

- Resetting a user's password is disruptive, may be reported, and may not be permitted per the Rules of Engagement (ROE).
- Roasting is time-consuming and depends on the target having a weak password, which may not be the case.
- RBCD is hard to follow because someone (me) failed to write a clear and concise post about it.
- RBCD requires control over an account with an SPN, and creating a new computer account to meet that requirement may lead to detection and cannot be cleaned up until privilege escalation is achieved.

The recent work that Will Schroeder (@harmj0y) and Lee Christensen (@tifkin_) published about AD CS made me think about other technologies that use Public Key Cryptography for Initial Authentication (PKINIT) in Kerberos, and Windows Hello for Business was the obvious candidate, which led me to (re)discover an alternative technique for user and computer object takeover.

Tl;dr: It is possible to add "Key Credentials" to the attribute msDS-KeyCredentialLink of the target user/computer object and then perform Kerberos authentication as that account using PKINIT. In plain English: this is a much easier and more reliable takeover primitive against Users and Computers. A tool to operationalize this technique has been released alongside this post.
Previous Work

When I looked into Key Trust, I found that Michael Grafnetter (@MGrafnetter) had already discovered this abuse technique and presented it at Black Hat Europe 2019. His discovery of this user and computer object takeover technique somewhat flew under the radar, I believe because it was only the primer to the main topic of his talk. Michael clearly demonstrated this abuse in his talk and noted that it affected both users and computers. In his presentation, Michael explained some of the inner workings of WHfB and the Key Trust model, and I highly recommend watching it. Michael has also been maintaining a library called DSInternals that facilitates the abuse of this mechanism, and a lot more. I recently ported some of Michael's code to a new C# tool called Whisker to be used via implants on operations. More on that below.

What is PKINIT?

In Kerberos authentication, clients must perform "pre-authentication" before the KDC (the Domain Controller in an Active Directory environment) provides them with a Ticket Granting Ticket (TGT), which can subsequently be used to obtain Service Tickets. The reason for pre-authentication is that without it, anyone could obtain a blob encrypted with a key derived from the client's password and try to crack it offline, as done in the AS-REP Roasting attack. The client performs pre-authentication by encrypting a timestamp with their credentials to prove to the KDC that they have the credentials for the account. Using a timestamp rather than a static value helps prevent replay attacks.

The symmetric key (secret key) approach, which is the most widely used and known, uses a symmetric key derived from the client's password, a.k.a. the secret key. If using RC4 encryption, this key would be the NT hash of the client's password. The KDC has a copy of the client's secret key and can decrypt the pre-authentication data to authenticate the client.
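Schematically, pre-authentication boils down to proving possession of the shared secret over a fresh timestamp. The toy sketch below uses an HMAC in place of Kerberos' actual encryption types, purely to illustrate the idea of "prove you hold the key" plus "reject stale timestamps":

```python
import hashlib
import hmac
import time

def client_preauth(shared_key, now):
    # Client proves knowledge of the shared secret over the current timestamp.
    proof = hmac.new(shared_key, str(now).encode(), hashlib.sha256).digest()
    return now, proof

def kdc_verify(shared_key, ts, proof, now, skew=300.0):
    # KDC recomputes with its own copy of the key and rejects stale
    # timestamps, which is what defeats simple replay attacks.
    expected = hmac.new(shared_key, str(ts).encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof) and abs(now - ts) <= skew

# Both sides share a key derived from the password (toy derivation here).
key = hashlib.sha256(b"hunter2").digest()
ts, proof = client_preauth(key, time.time())
print(kdc_verify(key, ts, proof, time.time()))   # True: fresh, key matches
```

This is deliberately not Kerberos' wire format or cryptography; it only mirrors the logic the article describes: the KDC holding the same secret, checking the proof, and using the timestamp to prevent replays.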
The KDC uses the same key to encrypt a session key sent to the client along with the TGT.

PKINIT is the less common, asymmetric key (public key) approach. The client has a public-private key pair and encrypts the pre-authentication data with their private key, and the KDC decrypts it with the client's public key. The KDC also has a public-private key pair, allowing for the exchange of a session key using one of two methods:

Diffie-Hellman Key Delivery

Diffie-Hellman Key Delivery allows the KDC and the client to securely establish a shared session key that cannot be intercepted by attackers performing passive man-in-the-middle attacks, even if the attacker has the client's or the KDC's private key, (almost) providing Perfect Forward Secrecy. I say "almost" because the session key is also stored inside the encrypted part of the TGT, which is encrypted with the secret key of the KRBTGT account.

Public Key Encryption Key Delivery

Public Key Encryption Key Delivery uses the KDC's private key and the client's public key to envelop a session key generated by the KDC.

Traditionally, Public Key Infrastructure (PKI) allows the KDC and the client to exchange their public keys using Digital Certificates signed by an entity that both parties have previously established trust with — the Certificate Authority (CA). This is the Certificate Trust model, which is most commonly used for smartcard authentication.

PKINIT is not possible out of the box in every Active Directory environment. The key (pun intended) is that both the KDC and the client need a public-private key pair. However, if the environment has AD CS and a CA available, the Domain Controller will automatically obtain a certificate by default.

No PKI? No Problem!

Microsoft also introduced the concept of Key Trust, to support passwordless authentication in environments that don't support Certificate Trust.
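The Diffie-Hellman step described above is easy to sketch with toy numbers (real PKINIT uses large, standardized groups; the values here are illustrative only). Both sides end up with the same session key without it ever crossing the wire:

```python
# Toy Diffie-Hellman key delivery: the KDC and the client each combine
# their own private value with the peer's public value and arrive at the
# same shared session key. p is 2**32 - 5 (a small prime, demo only).
p, g = 0xFFFFFFFB, 5

kdc_priv, client_priv = 123456789, 987654321   # never transmitted
kdc_pub = pow(g, kdc_priv, p)                  # sent to the client
client_pub = pow(g, client_priv, p)            # sent to the KDC

kdc_session = pow(client_pub, kdc_priv, p)
client_session = pow(kdc_pub, client_priv, p)
print(kdc_session == client_session)  # True
```

A passive eavesdropper sees only g, p and the two public values; recovering the session key from those is the discrete-log problem, which is why even holding one party's long-term private key does not help a purely passive attacker here.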
Under the Key Trust model, PKINIT authentication is established based on the raw key data rather than a certificate. The client's public key is stored in a multi-value attribute called msDS-KeyCredentialLink, introduced in Windows Server 2016. The values of this attribute are Key Credentials, which are serialized objects containing information such as the creation date, the distinguished name of the owner, a GUID that represents a Device ID, and, of course, the public key. It is a multi-value attribute because an account can have several linked devices.

This trust model eliminates the need to issue client certificates for everyone using passwordless authentication. However, the Domain Controller still needs a certificate for the session key exchange. This means that if you can write to the msDS-KeyCredentialLink property of a user, you can obtain a TGT for that user.

Windows Hello for Business Provisioning and Authentication

Windows Hello for Business (WHfB) supports multi-factor passwordless authentication. When the user enrolls, the TPM generates a public-private key pair for the user's account — the private key should never leave the TPM. Next, if the Certificate Trust model is implemented in the organization, the client issues a certificate request to obtain a trusted certificate from the environment's certificate issuing authority for the TPM-generated key pair. However, if the Key Trust model is implemented, the public key is stored in a new Key Credential object in the msDS-KeyCredentialLink attribute of the account. The private key is protected by a PIN code, which Windows Hello allows replacing with a biometric authentication factor, such as fingerprint or face recognition. When a client logs in, Windows attempts to perform PKINIT authentication using their private key.
Under the Key Trust model, the Domain Controller can decrypt their pre-authentication data using the raw public key in the corresponding NGC object stored in the client's msDS-KeyCredentialLink attribute. Under the Certificate Trust model, the Domain Controller will validate the trust chain of the client's certificate and then use the public key inside it. Once pre-authentication is successful, the Domain Controller can exchange a session key via Diffie-Hellman Key Delivery or Public Key Encryption Key Delivery. Note that I intentionally used the term "client" rather than "user" here because this mechanism applies to both users and computers.

What About NTLM?

PKINIT allows WHfB users, or, more traditionally, smartcard users, to perform Kerberos authentication and obtain a TGT. But what if they need to access resources that require NTLM authentication? To address that, the client can obtain a special Service Ticket that contains their NTLM hash inside the Privilege Attribute Certificate (PAC) in an encrypted NTLM_SUPPLEMENTAL_CREDENTIAL entity. The PAC is stored inside the encrypted part of the ticket, and the ticket is encrypted using the key of the service it is issued for. In the case of a TGT, the ticket is encrypted using the key of the KRBTGT account, which the user should not be able to decrypt.

To obtain a ticket that the user can decrypt, the user must perform Kerberos User to User (U2U) authentication to itself. When I first read the title of the RFC for this mechanism, I thought to myself, "Does that mean we can abuse this mechanism to Kerberoast any user account? That must be too good to be true". And it was — the risk of Kerberoasting was taken into consideration, and U2U Service Tickets are encrypted using the target user's session key rather than their secret key. That presented another challenge for the U2U design — every time a client authenticates and obtains a TGT, a new session key is generated.
Also, the KDC does not maintain a repository of active session keys — it extracts the session key from the client's ticket. So, what session key should the KDC use when responding to a U2U TGS-REQ? The solution was sending a TGS-REQ containing the target user's TGT as an "additional ticket". The KDC will extract the session key from the TGT's encrypted part (hence not really perfect forward secrecy) and generate a new Service Ticket. So, if a user requests a U2U Service Ticket from itself to itself, they will be able to decrypt it and access the PAC and the NTLM hash. This means that if you can write to the msDS-KeyCredentialLink property of a user, you can retrieve the NT hash of that user. As per MS-PAC, the NTLM_SUPPLEMENTAL_CREDENTIAL entity is added to the PAC only if PKINIT authentication was performed. Back in 2017, Benjamin Delpy (@gentilkiwi) introduced code to Kekeo to support retrieving the NTLM hash of an account using this technique, and it will be added to Rubeus in an upcoming release.

Abuse

When abusing Key Trust, we are effectively adding alternative credentials to the account, or "Shadow Credentials", allowing for obtaining a TGT and subsequently the NTLM hash for the user/computer. Those Shadow Credentials would persist even if the user/computer changed their password.

Abusing Key Trust for computer objects requires additional steps after obtaining a TGT and the NTLM hash for the account. There are generally two options:

- Forge an RC4 silver ticket to impersonate privileged users to the corresponding host.
- Use the TGT to call S4U2Self to impersonate privileged users to the corresponding host. This option requires modifying the obtained Service Ticket to include a service class in the service name.

Key Trust abuse has the added benefit that it doesn't delegate access to another account that could get compromised — it is restricted to the private key generated by the attacker.
In addition, it doesn't require creating a computer account that may be hard to clean up until privilege escalation is achieved.

Whisker

Alongside this post I am releasing a tool called "Whisker". Based on code from Michael's DSInternals, Whisker provides a C# wrapper for performing this attack on engagements. Whisker updates the target object using LDAP, while DSInternals allows updating objects using both LDAP and RPC with the Directory Replication Service (DRS) Remote Protocol. Whisker has four functions:

- Add — generates a public-private key pair and adds a new key credential to the target object as if the user enrolled in WHfB from a new device.
- List — lists all the entries of the msDS-KeyCredentialLink attribute of the target object.
- Remove — removes a key credential from the target object, specified by a DeviceID GUID.
- Clear — removes all the values from the msDS-KeyCredentialLink attribute of the target object. If the target object is legitimately using WHfB, it will break.

Requirements

This technique requires the following:

- At least one Windows Server 2016 Domain Controller.
- A digital certificate for Server Authentication installed on the Domain Controller.
- Windows Server 2016 Functional Level in Active Directory.
- Control of an account with the delegated rights to write to the msDS-KeyCredentialLink attribute of the target object.
Detection

There are two main opportunities for detecting this technique:

- If PKINIT authentication is not common in the environment or not common for the target account, the "Kerberos authentication ticket (TGT) was requested" event (4768) can indicate anomalous behavior when the Certificate Information attributes are not blank.
- If a SACL is configured to audit Active Directory object modifications for the targeted account, the "Directory service object was modified" event (5136) can indicate anomalous behavior if the subject changing the msDS-KeyCredentialLink is not the Azure AD Connect synchronization account or the ADFS service account, which will typically act as the Key Provisioning Server and legitimately modify this attribute for users.

Prevention

It is generally a good practice to proactively audit all inbound object control for highly privileged accounts. Just as users with lower privileges than Domain Admins shouldn't be able to reset the passwords of members of the Domain Admins group, less secure, or less "trustworthy", users with lower privileges should not be able to modify the msDS-KeyCredentialLink attribute of privileged accounts.

A more specific preventive control is adding an Access Control Entry (ACE) to DENY the principal EVERYONE from modifying the attribute msDS-KeyCredentialLink for any account not meant to be enrolled in Key Trust passwordless authentication, and particularly privileged accounts. However, an attacker with WriteOwner or WriteDACL privileges will be able to override this control, which can be detected with a suitable SACL.

Conclusion

Abusing Key Trust Account Mapping is a simpler way to take over user and computer accounts in Active Directory environments that support PKINIT for Kerberos authentication and have a Windows Server 2016 Domain Controller with the matching functional level.
References

- Whisker by Elad Shamir (@elad_shamir)
- Exploiting Windows Hello for Business (Black Hat Europe 2019) by Michael Grafnetter (@MGrafnetter)
- DSInternals by Michael Grafnetter (@MGrafnetter)

Sursa: https://posts.specterops.io/shadow-credentials-abusing-key-trust-account-mapping-for-takeover-8ee1a53566ab
-
Eviatar Gerzi, Security Researcher, CyberArk

Attackers are increasingly targeting Kubernetes clusters to compromise applications or abuse resources for things like crypto-coin mining. Through live demos, this research-based session will show attendees how. Eviatar Gerzi, who researches DevOps security, will also introduce an open-source tool designed to help blue and red teams discover and eliminate risky permissions. Pre-requisites: basic experience with Kubernetes and familiarity with Docker containers.
-
Dr. Ruby, I think, is better at other things than medicine. Here is her LinkedIn profile; she is NOT a medical doctor: https://www.linkedin.com/in/dr-jane-ruby-49971411/

Well, actually she is a doctor, a doctor in psychology. They mention a doctor in the video, a HOMEOPATHIC doctor (they forgot to mention that part). And that Gigel puts out articles like this every single day. At least look at the comments on Facebook; some of them are pertinent.

Welcome to the Internet, where anything anyone says is true.
-
We need someone for a well-paid job
Nytro replied to ogleaks's topic in Bine ai venit
Ooh, this looks like a fun post!
-
alert() is dead, long live print()

James Kettle, Director of Research @albinowax
Published: 02 July 2021 at 13:27 UTC | Updated: 05 July 2021 at 10:03 UTC

Cross-Site Scripting and the alert() function have gone hand in hand for decades. Want to prove you can execute arbitrary JavaScript? Pop an alert. Want to find an XSS vulnerability the lazy way? Inject alert()-invoking payloads everywhere and see if anything pops up.

However, there's trouble brewing on the horizon. Malicious adverts have been abusing our beloved alert to distract and social-engineer visitors from inside their iframes. Google Chrome has decided to tackle this by disabling alert for cross-domain iframes. Cross-domain iframes are often built into websites deliberately, and are also a near-essential component of certain relatively advanced XSS attacks. Once Chrome 92 lands on 20th July 2021, XSS vulnerabilities inside cross-domain iframes will:

- No longer enable alert-based PoCs.
- Be invisible to anyone using alert-based detection techniques.

What next?

The obvious workaround is to use prompt or confirm, but unfortunately Chrome's mitigation blocks all dialogs. Triggering a DNS pingback to a listener, OAST-style, is another potential approach, but less suitable as a PoC due to the config requirements. We also ruled out console.log(), as console functions are often proxied or disabled by JavaScript obfuscators. It's quite funny that this "protection" against showing dialogs cross-domain blocks alerts and prompts, but as Yosuke Hasegawa pointed out, they forgot about basic authentication. This works in the current version of Canary. It's likely to be blocked in future, though.

We needed an alert alternative that was:

- Simple, setup-free and easy to remember
- Highly visible, even when executed in an invisible iframe

After weeks of intensive research, we're thrilled to bring you... print()

We will be updating our Web Security Academy labs to support print()-based solutions shortly.
The XSS cheat sheet will also be updated to reflect the new print() payloads for use with cross-domain iframes. We'll keep using alert when there are no iframes involved... for now. Long live print!

- Gareth & James

Sursa: https://portswigger.net/research/alert-is-dead-long-live-print
-
-
CVE-2021-22555: Turning \x00\x00 into 10000$

Andy Nguyen (theflow@) - Information Security Engineer

CVE-2021-22555 is a 15-year-old heap out-of-bounds write vulnerability in Linux Netfilter that is powerful enough to bypass all modern security mitigations and achieve kernel code execution. It was used to break the Kubernetes pod isolation of the kCTF cluster and won 10000$ for charity (where Google will match and double the donation to 20000$).

Table of Contents

- Introduction
- Vulnerability
- Exploitation
  - Exploring struct msg_msg
  - Achieving use-after-free
  - Bypassing SMAP
  - Achieving a better use-after-free
  - Finding a victim object
  - Bypassing KASLR/SMEP
  - Escalating privileges
  - Kernel ROP chain
  - Escaping the container and popping a root shell
- Proof-Of-Concept
- Timeline
- Thanks

Introduction

After BleedingTooth, which was the first time I looked into Linux, I wanted to find a privilege escalation vulnerability as well. I started by looking at old vulnerabilities like CVE-2016-3134 and CVE-2016-4997, which inspired me to grep for memcpy() and memset() in the Netfilter code. This led me to some buggy code.

Vulnerability

When IPT_SO_SET_REPLACE or IP6T_SO_SET_REPLACE is called in compatibility mode (which requires the CAP_NET_ADMIN capability that can, however, be obtained in a user+network namespace), structures need to be converted from user to kernel as well as from 32-bit to 64-bit in order to be processed by the native functions. Naturally, this is destined to be error-prone.
Our vulnerability is in xt_compat_target_from_user(), where memset() is called with an offset target->targetsize that is not accounted for during the allocation, leading to a few bytes being written out-of-bounds:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/netfilter/x_tables.c
void xt_compat_target_from_user(struct xt_entry_target *t, void **dstptr,
				unsigned int *size)
{
	const struct xt_target *target = t->u.kernel.target;
	struct compat_xt_entry_target *ct = (struct compat_xt_entry_target *)t;
	int pad, off = xt_compat_target_offset(target);
	u_int16_t tsize = ct->u.user.target_size;
	char name[sizeof(t->u.user.name)];

	t = *dstptr;
	memcpy(t, ct, sizeof(*ct));
	if (target->compat_from_user)
		target->compat_from_user(t->data, ct->data);
	else
		memcpy(t->data, ct->data, tsize - sizeof(*ct));
	pad = XT_ALIGN(target->targetsize) - target->targetsize;
	if (pad > 0)
		memset(t->data + target->targetsize, 0, pad);

	tsize += off;
	t->u.user.target_size = tsize;
	strlcpy(name, target->name, sizeof(name));
	module_put(target->me);
	strncpy(t->u.user.name, name, sizeof(t->u.user.name));

	*size += off;
	*dstptr += tsize;
}

The targetsize is not controllable by the user, but one can choose different targets with different structure sizes by name (like TCPMSS, TTL or NFQUEUE). The bigger targetsize is, the more we can vary the offset. However, the target size must not be 8-byte aligned in order to fulfill pad > 0. The biggest possible target I found is NFLOG, for which we can choose an offset up to 0x4C bytes out-of-bounds (one can influence the offset by adding padding between struct xt_entry_match and struct xt_entry_target):
struct xt_nflog_info {
	/* 'len' will be used iff you set XT_NFLOG_F_COPY_LEN in flags */
	__u32	len;
	__u16	group;
	__u16	threshold;
	__u16	flags;
	__u16	pad;
	char	prefix[64];
};

Note that the destination buffer is allocated with GFP_KERNEL_ACCOUNT and can also vary in size:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/netfilter/x_tables.c
struct xt_table_info *xt_alloc_table_info(unsigned int size)
{
	struct xt_table_info *info = NULL;
	size_t sz = sizeof(*info) + size;

	if (sz < sizeof(*info) || sz >= XT_MAX_TABLE_SIZE)
		return NULL;

	info = kvmalloc(sz, GFP_KERNEL_ACCOUNT);
	if (!info)
		return NULL;

	memset(info, 0, sizeof(*info));
	info->size = size;
	return info;
}

Though, the minimum size is > 0x100, which means that the smallest slab this object can be allocated in is kmalloc-512. In other words, we have to find victims that are allocated between kmalloc-512 and kmalloc-8192 to exploit.

Exploitation

Our primitive is limited to writing four bytes of zero up to 0x4C bytes out-of-bounds. With such a primitive, the usual targets are:

- Reference counter: Unfortunately, I could not find any suitable objects with a reference counter in the first 0x4C bytes.
- Free list pointer: CVE-2016-6187: Exploiting Linux kernel heap off-by-one is a good example of how to exploit the free list pointer. However, that was already 5 years ago, and meanwhile, kernels have the CONFIG_SLAB_FREELIST_HARDENED option enabled, which among other things protects free list pointers.
- Pointer in a struct: This is the most promising approach; however, four bytes of zero is too much to write. For example, a pointer 0xffff91a49cb7f000 could only be turned into 0xffff91a400000000 or 0x9cb7f000, and both of those would likely be invalid pointers. On the other hand, if we used the primitive to write at the very beginning of the adjacent block, we could write fewer bytes, e.g. 2 bytes, and for example turn a pointer from 0xffff91a49cb7f000 into 0xffff91a49cb70000.
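The arithmetic behind the primitive can be sanity-checked directly: XT_ALIGN() rounds up to an 8-byte boundary, and struct xt_nflog_info is 0x4C (76) bytes, so the memset() writes pad = 4 zero bytes past the end of the target data:

```python
def xt_align(n):
    """XT_ALIGN: round up to an 8-byte boundary, as the kernel macro does."""
    return (n + 7) & ~7

# sizeof(struct xt_nflog_info): __u32 + 4 * __u16 + char[64] = 76 bytes,
# assuming no compiler-inserted padding (the fields pack naturally here).
nflog_targetsize = 4 + 2 + 2 + 2 + 2 + 64
pad = xt_align(nflog_targetsize) - nflog_targetsize
print(hex(nflog_targetsize), pad)  # 0x4c 4
```

Because targetsize is not 8-byte aligned, pad > 0 holds and the memset fires; any target whose struct size is already a multiple of 8 would give pad = 0 and no out-of-bounds write at all.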
Playing around with some victim objects, I noticed that I could never reliably allocate them around struct xt_table_info on kernel 5.4. I realized that it had something to do with the GFP_KERNEL_ACCOUNT flag, as other objects allocated with GFP_KERNEL_ACCOUNT did not have this issue. Jann Horn confirmed that before 5.9, separate slabs were used to implement accounting. Therefore, every heap primitive we use in the exploit chain must also use GFP_KERNEL_ACCOUNT.

The syscall msgsnd() is a well-known primitive for heap spraying (which uses GFP_KERNEL_ACCOUNT) and has been utilized in multiple public exploits already. However, its structure msg_msg has surprisingly never been abused. In this write-up, we will demonstrate how this data structure can be abused to gain a use-after-free primitive, which in turn can be used to leak addresses and fake other objects. Coincidentally, in parallel to my research in March 2021, Alexander Popov also explored the very same structure in Four Bytes of Power: exploiting CVE-2021-26708 in the Linux kernel.
Exploring struct msg_msg

When sending data with msgsnd(), the payload is split into multiple segments:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/ipc/msgutil.c
static struct msg_msg *alloc_msg(size_t len)
{
	struct msg_msg *msg;
	struct msg_msgseg **pseg;
	size_t alen;

	alen = min(len, DATALEN_MSG);
	msg = kmalloc(sizeof(*msg) + alen, GFP_KERNEL_ACCOUNT);
	if (msg == NULL)
		return NULL;

	msg->next = NULL;
	msg->security = NULL;

	len -= alen;
	pseg = &msg->next;
	while (len > 0) {
		struct msg_msgseg *seg;

		cond_resched();

		alen = min(len, DATALEN_SEG);
		seg = kmalloc(sizeof(*seg) + alen, GFP_KERNEL_ACCOUNT);
		if (seg == NULL)
			goto out_err;
		*pseg = seg;
		seg->next = NULL;
		pseg = &seg->next;
		len -= alen;
	}

	return msg;

out_err:
	free_msg(msg);
	return NULL;
}

where the headers for struct msg_msg and struct msg_msgseg are:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/msg.h
/* one msg_msg structure for each message */
struct msg_msg {
	struct list_head m_list;
	long m_type;
	size_t m_ts;		/* message text size */
	struct msg_msgseg *next;
	void *security;
	/* the actual message follows immediately */
};

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/types.h
struct list_head {
	struct list_head *next, *prev;
};

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/ipc/msgutil.c
struct msg_msgseg {
	struct msg_msgseg *next;
	/* the next part of the message follows immediately */
};

The first member in struct msg_msg is the m_list.next pointer, which points to another message in the queue (and is different from next, which points to the next segment). This is a perfect candidate to corrupt, as you will learn next.

Achieving use-after-free

First, we initialize a lot of message queues (in our case 4096) using msgget().
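The splitting logic of alloc_msg() above can be sketched numerically (assuming a 4096-byte page and a 48-byte struct msg_msg header on x86-64, with DATALEN_MSG = PAGE_SIZE - sizeof(struct msg_msg)):

```python
PAGE_SIZE = 4096
MSG_MSG_HDR = 48                      # sizeof(struct msg_msg) on x86-64 (assumption)
MSG_MSGSEG_HDR = 8                    # sizeof(struct msg_msgseg)
DATALEN_MSG = PAGE_SIZE - MSG_MSG_HDR
DATALEN_SEG = PAGE_SIZE - MSG_MSGSEG_HDR

def alloc_sizes(length):
    """Return the kmalloc request sizes alloc_msg() makes for a payload."""
    alen = min(length, DATALEN_MSG)
    sizes = [MSG_MSG_HDR + alen]
    length -= alen
    while length > 0:
        alen = min(length, DATALEN_SEG)
        sizes.append(MSG_MSGSEG_HDR + alen)
        length -= alen
    return sizes
```

Under these assumptions, a msgsnd() payload of DATALEN_MSG bytes produces exactly one 4096-byte allocation - the "message of size 4096 including the struct msg_msg header" that the spray below relies on - while anything larger spills into extra segment allocations.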
Then, we send one message of size 4096 (including the struct msg_msg header) for each of the message queues using msgsnd(), which we will call the primary message. Eventually, after a lot of messages, some of them are consecutive:

[Figure 1: A series of blocks of primary messages]

Next, we send a secondary message of size 1024 for each of the message queues using msgsnd():

[Figure 2: A series of blocks of primary messages pointing to secondary messages]

Finally, we create some holes (in our case every 1024th) in the primary messages and trigger the vulnerable setsockopt(IPT_SO_SET_REPLACE) option, which, in the best scenario, will allocate the struct xt_table_info object in one of the holes:

[Figure 3: A xt_table_info allocated in between the blocks which corrupts the next pointer]

We choose to overwrite two bytes of the adjacent object with zeros. Assuming we are adjacent to another primary message, the bytes we overwrite are part of its pointer to the secondary message. Since we allocate the secondary messages with a size of 1024 bytes, we have a 1 - (1024 / 65536) chance to redirect the pointer (the only case in which we fail is when the two least significant bytes of the pointer are already zero).

Now, the best scenario we can hope for is that the manipulated pointer also points to a secondary message, since the consequence will be two different primary messages pointing to the same secondary message, and this can lead to a use-after-free:

[Figure 4: Two primary messages pointing to the same secondary message due to the corrupted pointer]

However, how do we know which two primary messages point to the same secondary message? To answer this question, we tag every (primary and secondary) message with the index of its message queue, which is in [0, 4096). Then, after triggering the corruption, we iterate through all message queues, peek at all messages using msgrcv() with MSG_COPY, and see if the tags still match.
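The success probability quoted above can be reproduced by enumerating the possible low 16 bits of a pointer to a kmalloc-1024 object (a sketch assuming secondary messages land at 1024-byte-aligned slab offsets):

```python
ALLOC = 1024      # size class of the secondary messages
SPAN = 0x10000    # range covered by the two least significant bytes we zero

# Possible values of the pointer's low 16 bits for a 1024-byte-aligned object.
low16_values = list(range(0, SPAN, ALLOC))        # 64 candidates
failures = [v for v in low16_values if v == 0]    # already-zero case: no redirect
success = 1 - len(failures) / len(low16_values)   # == 1 - 1024/65536
```

Only 1 of the 64 possible alignments is already zero, giving roughly a 98.4% chance that zeroing the two low bytes actually moves the pointer.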
If the tag of the primary message is different from that of the secondary message, it means that the pointer has been redirected. In that case, the tag of the primary message represents the index of the fake message queue, i.e. the one containing the wrong secondary message, and the tag of the wrong secondary message represents the index of the real message queue. Knowing these two indices, achieving a use-after-free is now trivial - we simply fetch the secondary message from the real message queue using msgrcv() and thereby free it:

[Figure 5: Freed secondary message with a stale reference]

Note that we still have a reference to the freed message in the fake message queue.

Bypassing SMAP

Using unix sockets (which can be easily set up with socketpair()), we now spray a lot of messages of size 1024 and imitate the struct msg_msg header. Ideally, we are able to reclaim the address of the previously freed message:

[Figure 6: Fake struct msg_msg put in place of the freed secondary message]

Note that m_list.next is 0x41414141, as we do not yet know any kernel addresses (and with SMAP enabled, we cannot specify a userland address). Not having a kernel address there actually prevents us from freeing the block again (you will learn later why that is desired); the reason is that during msgrcv(), the message is unlinked from the message queue, which is a circular list. Luckily, we are in a good position to achieve an information leak, as there are some interesting fields in struct msg_msg. Namely, the field m_ts is used to determine how much data to return to userland:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/ipc/msgutil.c
struct msg_msg *copy_msg(struct msg_msg *src, struct msg_msg *dst)
{
	struct msg_msgseg *dst_pseg, *src_pseg;
	size_t len = src->m_ts;
	size_t alen;

	if (src->m_ts > dst->m_ts)
		return ERR_PTR(-EINVAL);

	alen = min(len, DATALEN_MSG);
	memcpy(dst + 1, src + 1, alen);
	...
	return dst;
}

The original size of the message is only 1024-sizeof(struct msg_msg) bytes, which we can now artificially increase to DATALEN_MSG=4096-sizeof(struct msg_msg). As a consequence, we are able to read past the intended message size and leak the struct msg_msg header of the adjacent message. As said before, the message queue is implemented as a circular list; thus, m_list.next points back to the primary message.

Knowing the address of a primary message, we can re-craft the fake struct msg_msg with that address as next (meaning that it is the next segment). The content of the primary message can then be leaked by reading more than DATALEN_MSG bytes. The leaked m_list.next pointer from the primary message reveals the address of the secondary message that is adjacent to our fake struct msg_msg. Subtracting 1024 from that address, we finally have the address of the fake message.

Achieving a better use-after-free

Now, we can rebuild the fake struct msg_msg object with the leaked address as m_list.next and m_list.prev (meaning that it points to itself), making the fake message free-able through the fake message queue.

[Figure 7: Fake struct msg_msg with a valid next pointer pointing to itself]

Note that when spraying using unix sockets, we actually have a struct sk_buff object which points to the fake message. Obviously, this means that when we free the fake message, we still have a stale reference:

[Figure 8: Freed fake message with a stale reference]

This stale struct sk_buff data buffer is a better use-after-free scenario to exploit, because it does not contain header information, meaning that we can now use it to free any kind of object on the slab. In comparison, freeing a struct msg_msg object is only possible if the first two members are writable pointers (needed to unlink the message).

Finding a victim object

The best victim to attack is one that has a function pointer in its structure.
Remember that the victim must also be allocated with GFP_KERNEL_ACCOUNT. Talking to Jann Horn, he suggested the struct pipe_buffer object, which is allocated in kmalloc-1024 (hence why the secondary message is 1024 bytes). The struct pipe_buffer can be easily allocated with pipe(), which has alloc_pipe_info() as a subroutine:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/pipe.c
struct pipe_inode_info *alloc_pipe_info(void)
{
	...
	unsigned long pipe_bufs = PIPE_DEF_BUFFERS;
	...
	pipe = kzalloc(sizeof(struct pipe_inode_info), GFP_KERNEL_ACCOUNT);
	if (pipe == NULL)
		goto out_free_uid;
	...
	pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer),
			     GFP_KERNEL_ACCOUNT);
	...
}

While it does not contain a function pointer directly, it contains a pointer to struct pipe_buf_operations, which in turn has function pointers:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/pipe_fs_i.h
struct pipe_buffer {
	struct page *page;
	unsigned int offset, len;
	const struct pipe_buf_operations *ops;
	unsigned int flags;
	unsigned long private;
};

struct pipe_buf_operations {
	...
	/*
	 * When the contents of this pipe buffer has been completely
	 * consumed by a reader, ->release() is called.
	 */
	void (*release)(struct pipe_inode_info *, struct pipe_buffer *);
	...
};

Bypassing KASLR/SMEP

When one writes to the pipes, struct pipe_buffer is populated. Most importantly, ops will point to the static structure anon_pipe_buf_ops, which resides in the .data segment:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/pipe.c
static const struct pipe_buf_operations anon_pipe_buf_ops = {
	.release	= anon_pipe_buf_release,
	.try_steal	= anon_pipe_buf_try_steal,
	.get		= generic_pipe_buf_get,
};

Since the offset between the .data segment and the .text segment is always the same, leaking the address of anon_pipe_buf_ops allows us to calculate the kernel base address.
We spray a lot of struct pipe_buffer objects and reclaim the location of the stale struct sk_buff data buffer:

[Figure 9: Freed fake message reclaimed with a struct pipe_buffer]

As we still have a reference from the struct sk_buff, we can read its data buffer, leak the content of struct pipe_buffer and reveal the address of anon_pipe_buf_ops:

[+] anon_pipe_buf_ops: ffffffffa1e78380
[+] kbase_addr: ffffffffa0e00000

With this information, we can now find JOP/ROP gadgets. Note that when reading from the unix socket, we actually free its buffer as well:

[Figure 10: Freed fake message reclaimed with a struct pipe_buffer]

Escalating privileges

We reclaim the stale struct pipe_buffer with a fake one whose ops points to a fake struct pipe_buf_operations. This fake structure is planted at the same location, since we know its address, and it contains a malicious function pointer as release.

[Figure 11: Freed struct pipe_buffer reclaimed with a fake struct pipe_buffer]

The final stage of the exploit is to close all pipes in order to trigger release, which in turn kicks off the JOP chain. Finding JOP gadgets is hard; thus, the goal is to achieve a kernel stack pivot as soon as possible in order to execute a kernel ROP chain.

Kernel ROP chain

We save the value of RBP at some scratchpad address in kernel memory so that we can later resume execution, then we call commit_creds(prepare_kernel_cred(NULL)) to install kernel credentials, and finally we call switch_task_namespaces(find_task_by_vpid(1), init_nsproxy) to switch the namespace of process 1 to that of the init process. After that, we restore the value of RBP and return to resume execution (which will immediately make free_pipe_info() return).

Escaping the container and popping a root shell

Arriving back in userland, we now have root permissions and can change the mnt, pid and net namespaces to escape the container and break out of the Kubernetes pod. Ultimately, we pop a root shell.
setns(open("/proc/1/ns/mnt", O_RDONLY), 0);
setns(open("/proc/1/ns/pid", O_RDONLY), 0);
setns(open("/proc/1/ns/net", O_RDONLY), 0);

char *args[] = {"/bin/bash", "-i", NULL};
execve(args[0], args, NULL);

Proof-Of-Concept

The Proof-Of-Concept is available at https://github.com/google/security-research/tree/master/pocs/linux/cve-2021-22555. Executing it on a vulnerable machine will grant you root:

theflow@theflow:~$ gcc -m32 -static -o exploit exploit.c
theflow@theflow:~$ ./exploit
[+] Linux Privilege Escalation by theflow@ - 2021

[+] STAGE 0: Initialization
[*] Setting up namespace sandbox...
[*] Initializing sockets and message queues...

[+] STAGE 1: Memory corruption
[*] Spraying primary messages...
[*] Spraying secondary messages...
[*] Creating holes in primary messages...
[*] Triggering out-of-bounds write...
[*] Searching for corrupted primary message...
[+] fake_idx: ffc
[+] real_idx: fc4

[+] STAGE 2: SMAP bypass
[*] Freeing real secondary message...
[*] Spraying fake secondary messages...
[*] Leaking adjacent secondary message...
[+] kheap_addr: ffff91a49cb7f000
[*] Freeing fake secondary messages...
[*] Spraying fake secondary messages...
[*] Leaking primary message...
[+] kheap_addr: ffff91a49c7a0000

[+] STAGE 3: KASLR bypass
[*] Freeing fake secondary messages...
[*] Spraying fake secondary messages...
[*] Freeing sk_buff data buffer...
[*] Spraying pipe_buffer objects...
[*] Leaking and freeing pipe_buffer object...
[+] anon_pipe_buf_ops: ffffffffa1e78380
[+] kbase_addr: ffffffffa0e00000

[+] STAGE 4: Kernel code execution
[*] Spraying fake pipe_buffer objects...
[*] Releasing pipe_buffer objects...
[*] Checking for root...
[+] Root privileges gained.

[+] STAGE 5: Post-exploitation
[*] Escaping container...
[*] Cleaning up...
[*] Popping root shell...
root@theflow:/# id
uid=0(root) gid=0(root) groups=0(root)
root@theflow:/#

Timeline

2021-04-06 - Vulnerability reported to security@kernel.org.
2021-04-13 - Patch merged upstream.
2021-07-07 - Public disclosure.

Thanks

Eduardo Vela
Francis Perron
Jann Horn

Sursa: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html
-
Remote code execution in cdnjs of Cloudflare

2021-07-16 · 1891 characters · cdnjs, Vulnerability, Go, Supply Chain, RCE

Preface

(A Japanese version of this article is also available.)

Cloudflare, which runs cdnjs, operates a "Vulnerability Disclosure Program" on HackerOne that allows hackers to perform vulnerability assessments. This article describes a vulnerability reported through this program and published with the permission of the Cloudflare security team. It is not intended to encourage unauthorized vulnerability assessments: if you find any vulnerability in a Cloudflare product, please report it to Cloudflare's vulnerability disclosure program.

TL;DR

There was a vulnerability in the cdnjs library update server that could be used to execute arbitrary commands, and as a result, cdnjs could be completely compromised. This would have allowed an attacker to tamper with 12.7%1 of all websites on the internet once caches expired.

About cdnjs

cdnjs is a JavaScript/CSS library CDN owned by Cloudflare, used by 12.7% of all websites on the internet as of 15 July 2021. It is the second most widely used library CDN after Google Hosted Libraries at 12.8%2, and given the current usage trend, it will be the most used JavaScript library CDN in the near future.

Usage graph of cdnjs from W3Techs, as of 15 July 2021

Reason for investigation

A few weeks before my last investigation into "Remote code execution in Homebrew by compromising the official Cask repository", I was researching supply chain attacks. While looking for a service that a lot of software depends on and that allows users to perform vulnerability assessments, I found cdnjs, so I decided to investigate it.

Initial investigation

While browsing the cdnjs website, I found the following description:

Couldn't find the library you're looking for? You can make a request to have it added on our GitHub repository.
I found out that the library information is managed on a GitHub repository, so I checked the repositories of the GitHub organization used by cdnjs. As a result, it turned out that the repositories are used in the following ways:

cdnjs/packages: Stores library information that is supported in cdnjs
cdnjs/cdnjs: Stores files of libraries
cdnjs/logs: Stores update logs of libraries
cdnjs/SRIs: Stores SRI (Subresource Integrity) hashes of libraries
cdnjs/static-website: Source code of cdnjs.com
cdnjs/origin-worker: Cloudflare Worker for the origin of cdnjs.cloudflare.com
cdnjs/tools: cdnjs management tools
cdnjs/bot-ansible: Ansible repository of the cdnjs library update server

As you can see from these repositories, most of the cdnjs infrastructure is centralized in this GitHub organization. I was interested in cdnjs/bot-ansible and cdnjs/tools because they automate library updates. After reading the code of these two repositories, it turned out that cdnjs/bot-ansible periodically executes the autoupdate command of cdnjs/tools on the cdnjs library update server, which checks for updates of the libraries listed in cdnjs/packages by downloading the npm package / Git repository.

Investigation of automatic update

The automatic update function updates a library by downloading the user-managed Git repository / npm package and copying the target files from it. The npm registry compresses libraries into .tgz files to make them downloadable. Since the tool for this automatic update is written in Go, I guessed that it may use Go's compress/gzip and archive/tar to extract the archive file.

Go's archive/tar returns the file names contained in the archive without sanitizing them3, so if the archive is extracted to disk based on the file names returned from archive/tar, archives containing file names like ../../../../../../../tmp/test may overwrite arbitrary files on the system.4
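The crafted-archive side of such an attack can be sketched in a few lines (an evilarc-style sketch, written here in Python rather than Go for brevity; the member name is hypothetical):

```python
import io
import tarfile

# Build a .tgz whose single member carries a path-traversal name.
buf = io.BytesIO()
payload = b"echo pwned\n"
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="package/../../../../tmp/test")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# A tar reader hands the name back verbatim; a naive extractor that joins
# it onto a destination directory would write outside of that directory.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    names = tar.getnames()
```

The archive format itself imposes no restriction on member names, which is why the burden of sanitization falls on whoever extracts the archive.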
From the information in cdnjs/bot-ansible, I knew that some scripts were running regularly and that the user running the autoupdate command had write permission for them, so I focused on overwriting files via path traversal.

Path traversal

To find a path traversal, I started reading the main function of the autoupdate command.

func main() {
	[...]
	switch *pckg.Autoupdate.Source {
	case "npm":
		{
			util.Debugf(ctx, "running npm update")
			newVersionsToCommit, allVersions = updateNpm(ctx, pckg)
		}
	case "git":
		{
			util.Debugf(ctx, "running git update")
			newVersionsToCommit, allVersions = updateGit(ctx, pckg)
		}
	[...]
}

As you can see from the code snippet above, if npm is specified as the source of the auto-update, it passes the package information to the updateNpm function.

func updateNpm(ctx context.Context, pckg *packages.Package) ([]newVersionToCommit, []version) {
	[...]
	newVersionsToCommit = doUpdateNpm(ctx, pckg, newNpmVersions)
	[...]
}

Then, updateNpm passes information about the new library version to the doUpdateNpm function.

func doUpdateNpm(ctx context.Context, pckg *packages.Package, versions []npm.Version) []newVersionToCommit {
	[...]
	for _, version := range versions {
		[...]
		tarballDir := npm.DownloadTar(ctx, version.Tarball)
		filesToCopy := pckg.NpmFilesFrom(tarballDir)
	[...]
}

And doUpdateNpm passes the URL of the .tgz file into npm.DownloadTar.

func DownloadTar(ctx context.Context, url string) string {
	dest, err := ioutil.TempDir("", "npmtarball")
	util.Check(err)

	util.Debugf(ctx, "download %s in %s", url, dest)

	resp, err := http.Get(url)
	util.Check(err)
	defer resp.Body.Close()

	util.Check(Untar(dest, resp.Body))
	return dest
}

Finally, it passes the .tgz file obtained using http.Get to the Untar function.

func Untar(dst string, r io.Reader) error {
	gzr, err := gzip.NewReader(r)
	if err != nil {
		return err
	}
	defer gzr.Close()

	tr := tar.NewReader(gzr)

	for {
		header, err := tr.Next()
		[...]
		// the target location where the dir/file should be created
		target := filepath.Join(dst, removePackageDir(header.Name))
		[...]

		// check the file type
		switch header.Typeflag {
		[...]
		// if it's a file create it
		case tar.TypeReg:
			{
				[...]
				f, err := os.OpenFile(target, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
				[...]
				// copy over contents
				if _, err := io.Copy(f, tr); err != nil {
					return err
				}
			}
		}
	}
}

As I guessed, compress/gzip and archive/tar were used in the Untar function to extract the .tgz file. At first, I thought that it was sanitizing the path in the removePackageDir function, but when I checked the contents of the function, I noticed that it just removes package/ from the path.

From these code snippets, I confirmed that arbitrary code could be executed by performing a path traversal from a .tgz file published to npm and overwriting a script that is executed regularly on the server.

Demonstration of vulnerability

Because Cloudflare runs its vulnerability disclosure program on HackerOne, it's likely that HackerOne's triage team won't forward a report to Cloudflare unless it shows that the vulnerability is actually exploitable. Therefore, I decided to do a demonstration to show that the vulnerability can actually be exploited.

The attack procedure is as follows:

1. Publish a .tgz file that contains the crafted filename to the npm registry.
2. Wait for the cdnjs library update server to process the crafted .tgz file.
3. The contents of the file published in step 1 are written into a regularly executed script file, and an arbitrary command is executed.

... and after writing the attack procedure into my notepad, for some reason, I started wondering how automatic updates based on the Git repository work. So, I read the code a bit more before demonstrating the vulnerability, and it seemed that symlinks aren't considered when copying files from the Git repository.
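A destination check that Untar lacks can be sketched as follows (illustrative Python, not the cdnjs code): resolve the joined path and refuse anything that escapes the extraction root.

```python
import os

def safe_target(dst, member_name):
    """Join a tar member name onto dst, rejecting path traversal."""
    target = os.path.realpath(os.path.join(dst, member_name))
    root = os.path.realpath(dst)
    if target != root and not target.startswith(root + os.sep):
        raise ValueError("blocked traversal: %r" % member_name)
    return target
```

With this helper, "package/index.js" resolves to a path under the destination, while "package/../../../../tmp/test" normalizes to a path outside it and is rejected instead of being opened with os.OpenFile.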
func MoveFile(sourcePath, destPath string) error {
	inputFile, err := os.Open(sourcePath)
	if err != nil {
		return fmt.Errorf("Couldn't open source file: %s", err)
	}
	outputFile, err := os.Create(destPath)
	if err != nil {
		inputFile.Close()
		return fmt.Errorf("Couldn't open dest file: %s", err)
	}
	defer outputFile.Close()
	_, err = io.Copy(outputFile, inputFile)
	inputFile.Close()
	if err != nil {
		return fmt.Errorf("Writing to output file failed: %s", err)
	}
	// The copy was successful, so now delete the original file
	err = os.Remove(sourcePath)
	if err != nil {
		return fmt.Errorf("Failed removing original file: %s", err)
	}
	return nil
}

As Git supports symbolic links by default, it may be possible to read arbitrary files from the cdnjs library update server by adding a symlink to the Git repository. Since overwriting the regularly executed script file might break the automatic update function, I decided to check the arbitrary file read first.

Along with this, the attack procedure was changed as follows:

1. Add a symbolic link that points to a harmless file (assumed to be /proc/self/maps here) to the Git repository.
2. Publish a new version in the repository.
3. Wait for the cdnjs library update server to process the crafted repository.
4. The specified file is published on cdnjs.

It was around 20:00 at this point, and all that was left to do was create the symlink, so I decided to eat dinner after creating the symbolic link and publishing it.5

ln -s /proc/self/maps test.js

Incident

Once I finished dinner and returned to my desk, I was able to confirm that cdnjs had released a version containing the symbolic link. When I checked the contents of the file before sending the report, I was surprised: clearly sensitive information such as GITHUB_REPO_API_KEY and WORKERS_KV_API_TOKEN was displayed.
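The problem with MoveFile above is that os.Open follows symbolic links, so the copy reads the link's target rather than the link itself. A small Python sketch of the same failure mode (hypothetical file names, run inside a temporary directory):

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
secret = os.path.join(workdir, "environ")  # stands in for /proc/self/environ
with open(secret, "w") as f:
    f.write("GITHUB_REPO_API_KEY=...")

# The "repository" ships a symlink instead of a real file.
link = os.path.join(workdir, "test.js")
os.symlink(secret, link)

# A naive copy (like MoveFile's os.Open + io.Copy) follows the link
# and publishes the target's contents.
published = os.path.join(workdir, "published.js")
shutil.copyfile(link, published)  # copyfile follows symlinks by default
with open(published) as f:
    leaked = f.read()
```

A defensive copier would instead lstat() each source path and refuse (or re-create) anything that is a symlink before opening it.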
I couldn’t understand what happened for a moment, and when I checked the command log, I found that I accidentally put a link to /proc/self/environ instead of /proc/self/maps.6 As mentioned earlier, if cdnjs' GitHub Organization is compromised, it’s possible to compromise most of the cdnjs infrastructure. I needed to take immediate action, so I sent the report that only contains a link that shows the current situation, and requested them to revoke all credentials. At this point, I was very confused and hadn’t confirmed it, but in fact, these tokens were invalidated before I sent the report. It seems that GitHub notified Cloudflare immediately because GITHUB_REPO_API_KEY (API key of GitHub) was included in the repository, and Cloudflare started incident response immediately after the notification. I felt that they’re a great security team because they invalidated all credentials within minutes after cdnjs processed the specially crafted repository. Determinate impact After the incident, I investigated what could be impacted. GITHUB_REPO_API_KEY was an API key for robocdnjs, which belongs to cdnjs organization, and had write permission against each repository. This means it was possible to tamper arbitrary libraries on the cdnjs or tamper the cdnjs.com itself. Also, WORKERS_KV_API_TOKEN had permission against KV of Cloudflare Workers that is used in the cdnjs, it could be used to tamper the libraries on the KV cache. By combining these permissions, the core part of cdnjs, such as the origin data of cdnjs, the KV cache, and even the cdnjs website, could be completely tampered. Conclusion In this article, I described the vulnerability that was existed in cdnjs. While this vulnerability could be exploited without any special skills, it could impact many websites. Given that there are many vulnerabilities in the supply chain, which are easy to exploit but have a large impact, I feel that it’s very scary. 
If you have any questions/comments about this article, please send a message to @ryotkak on Twitter.

Timeline

Date (JST)           | Event
April 6, 2021 19:00  | Found a vulnerability
April 6, 2021 20:00  | Published a crafted symlink
April 6, 2021 20:30  | cdnjs processed the file
At the same time     | GitHub sent an alert to Cloudflare
At the same time     | Cloudflare started an incident response
Within minutes       | Cloudflare finished revocation of credentials
April 6, 2021 20:40  | I sent an initial report
April 6, 2021 21:00  | I sent a detailed report
April 7, 2021-       | Secondary fix was applied
June 3, 2021         | Complete fix was applied
July 16, 2021        | Published this article

1. Quoted from W3Techs as of 15 July 2021. Due to the presence of SRI / caches, fewer websites could be tampered with immediately. ↩︎
2. Quoted from W3Techs as of 15 July 2021. ↩︎
3. https://github.com/golang/go/issues/25849 ↩︎
4. Archives like this can be created by using tools such as evilarc. ↩︎
5. I don't know if this is correct, but I remember that the dinner that day was frozen gyoza (dumplings). (It was yummy!) ↩︎
6. Because I was tired from work and hungry, I ran the command completed by the shell without any confirmation. ↩︎

Sursa: https://blog.ryotak.me/post/cdnjs-remote-code-execution-en/
-
CVE-2021-33742: Internet Explorer out-of-bounds write in MSHTML

Maddie Stone, Google Project Zero & Threat Analysis Group

The Basics

Disclosure or Patch Date: 03 June 2021
Product: Microsoft Internet Explorer
Advisory: https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-33742
Affected Versions: For Windows 10 20H2 x64, KB5003173 and previous
First Patched Version: For Windows 10 20H2 x64, KB5003637
Issue/Bug Report: N/A
Patch CL: N/A
Bug-Introducing CL: N/A
Reporter(s): Clément Lecigne of Google's Threat Analysis Group

The Code

Proof-of-concept: Proof-of-concept by Ivan Fratric of Project Zero

<script>
  var b = document.createElement("html");
  b.innerHTML = Array(40370176).toString();
  b.innerHTML = "";
</script>

Exploit sample: Examples of the Word documents used to distribute this exploit:

656d19186795280a068fcb97e7ef821b55ad3d620771d42ed98d22ee3c635e67
851bf4ab807fc9b29c9f6468c8c89a82b8f94e40474c6669f105bce91f278fdb

Did you have access to the exploit sample when doing the analysis? Yes

The Vulnerability

Bug class: Out-of-bounds write

Vulnerability details: The vulnerability is due to the size of the string of the inner html element being truncated (size & 0x1FFFFFF) in the CTreePos structure while the non-truncated size is still in the text data object. Memory at [1] is allocated based on the size in the CTreePos structure, i.e. the truncated size. The text data returned by MSHTML!Tree::TextData::GetText [2] includes the full non-truncated length of the string. The non-truncated length is then passed as the src length to wmemcpy_s [3] while the allocated destination memory uses the truncated length. While wmemcpy_s protects against the buffer overflow here, the source size is used as the increment even though that was not the number of bytes actually copied: the size of the allocation was. The index (v190) is incremented by the larger number. When that index is then used to access the memory allocated at [1], it leads to the out-of-bounds write at MSHTML!CSpliceTreeEngine::RemoveSplice+0xb1f.

if ( v172 >= 90000 && ((_BYTE)v4[21] & 4) != 0 )
{
  v70 = 1 - CTreePos::GetCp(v4[5]);
  v71 = CTreePos::GetCp(v4[6]);  /*** v71 = Truncated size (orig_sz & 0x1ffffff) ***/
  v72 = v4[6];
  v104 = (*(_BYTE *)v72 & 4) == 0;
  v189 = (CTreeNode *)(v70 + v71);
  if ( !v104 )
  {
    v73 = CTreeDataPos::GetTextLength(v72);
    v189 = (CTreeNode *)(v73 + v74 - 1);
  }
  if ( v184 <= (int)v187 )
  {
    v77 = (struct CMarkup *)operator new[](  /*** [1] allocates based on truncated size ***/
            (unsigned int)newAlloc,
            (const struct MemoryProtection::leaf_t *)newAllocSz);
    v4[23] = v77;
    if ( v77 )
    {
      for ( i = v4[5]; i != *((struct CMarkup **)v4[6] + 5); i = (struct CMarkup *)*((_DWORD *)i + 5) )
      {
        if ( (*(_BYTE *)i & 4) != 0 )
        {
          /*** [2] srcTextSz is non truncated size ***/
          srcText = Tree::TextData::GetText(*((Tree::TextData **)i + 8), 0, &srcTextSz);
          /*** [3] -- srcTextSz > newAllocSz ***/
          wmemcpy_s(srcText, srcTextSz, (const wchar_t *)newAlloc, (rsize_t)newAllocSz);
          /*** memcpy only copied newAllocSz not srcTextSz so v190 is now > max ***/
          v190 += srcTextSz;
        }
        else if ( (*(_BYTE *)i & 3) != 0 && (*(_BYTE *)i & 0x40) != 0 )
        {
          v80 = v190;
          *((_WORD *)v4[23] + (_DWORD)v190) = 0xFDEF;
          v190 = v80 + 1;
        }
      }
    }

Patch analysis: The patch is in Tree::TreeWriter::NewTextPosInternal. The patch will cause a release assert if there is an attempt to add TextData greater than 0x1FFFFFFF to the HTML tree.

Thoughts on how this vuln might have been found (fuzzing, code auditing, variant analysis, etc.): This vulnerability was likely found via fuzzing. A fuzzer may not have found this vulnerability if it runs with a tight timeout, since this vulnerability takes a few seconds to trigger. It still seems more likely that this was found via fuzzing rather than manual review.

(Historical/present/future) context of bug: See this Google TAG blogpost for more info.
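The length truncation at the heart of the bug can be sanity-checked with plain arithmetic (assuming, as the analysis states, that CTreePos keeps only the low 25 bits of the length, and that Array(40370176).toString() in the PoC yields a string of 40370175 separating commas):

```python
TRUNC_MASK = 0x1FFFFFF              # low 25 bits kept in the CTreePos structure

n_elements = 40370176               # Array(40370176) from the proof-of-concept
text_len = n_elements - 1           # toString() joins empty elements with commas

truncated = text_len & TRUNC_MASK
# The allocation is sized from `truncated`, while the copy loop advances
# its index by the full `text_len`:
oob_distance = text_len - truncated
```

Under these assumptions the stored length collapses from 0x267FFFF to 0x67FFFF, so the index can run 0x2000000 units past the undersized buffer.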
Malicious Office documents loaded web content within Internet Explorer. The malicious document would fingerprint the device and then serve this Internet Explorer exploit to users.

The Exploit

(The terms exploit primitive, exploit strategy, exploit technique, and exploit flow are defined here.)

Exploit strategy (or strategies): Still under analysis.

Exploit flow:

Known cases of the same exploit flow:

Part of an exploit chain? This vulnerability was likely paired with a sandbox escape, but that was not collected.

The Next Steps

Variant analysis

Areas/approach for variant analysis (and why): It seems possible that there are more instances of this type throughout the code base if CTreePos structures truncate sizes to 25 bits while other areas, such as TextData, do not. The top 7 bits of the size in the CTreePos struct are used as flags.

Found variants: N/A

Structural improvements

What are structural improvements such as ways to kill the bug class, prevent the introduction of this vulnerability, mitigate the exploit flow, make this type of vulnerability harder to exploit, etc.?

Ideas to kill the bug class: If truncating the size/length of an object, do the bounds checking/input validation of the size at the earliest point and only store the truncated size. Kill the tab process when the size reaches a value that can no longer be properly represented.

Ideas to mitigate the exploit flow: N/A

Other potential improvements: Microsoft has announced that Internet Explorer will be retired in June 2022. However, it also says that the retirement does not affect the MSHTML (Trident) engine. This means that mshtml.dll, where this vulnerability exists, is not planned to be retired. In the future, if a user enables IE mode in Edge, the MSHTML engine will be used, and it seems likely that Office will still have access to MSHTML. Limit access to MSHTML and audit applications that use it.
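The first "kill the bug class" idea - validate before truncating, and store only one consistent length - can be sketched as follows (a hypothetical helper, not Microsoft code):

```python
LEN_BITS = 25
MAX_TREEPOS_LEN = (1 << LEN_BITS) - 1  # 0x1FFFFFF; the top 7 bits of the field are flags

def store_text_length(n):
    """Validate at the earliest point instead of silently masking the length."""
    if n > MAX_TREEPOS_LEN:
        # In the browser this is where the tab process would be killed.
        raise OverflowError("text length does not fit in 25 bits")
    return n
```

Rejecting the oversized length up front means no code path can later see two disagreeing sizes for the same text.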
0-day detection methods

What are potential detection methods for similar 0-days? Meaning, are there any ideas of how this exploit or similar exploits could be detected as a 0-day? Variants of this bug could potentially be detected by looking for JavaScript that tries to create objects with sizes greater than the allowed bounds.

Other References

July 2021: "How We Protect Users From 0-Day Attacks" by Google's Threat Analysis Group gives context about how this exploit was used.

Source: https://googleprojectzero.github.io/0days-in-the-wild/0day-RCAs/2021/CVE-2021-33742.html
-
Mitmproxy 7

16 Jul 2021, Maximilian Hils @maximilianhils

We're delighted to announce the release of mitmproxy 7, a free and open source interactive HTTPS proxy. This release is all about our new proxy core, which brings substantial improvements across the board and represents a massive milestone for the project.

What's in the release?

In this post we'll focus on some of the user-facing improvements coming with mitmproxy 7. If you are interested in the technical details of our new sans-io proxy core, check out our blog post dedicated to that!

Full TCP Support

Mitmproxy now supports proxying raw TCP connections out of the box, including ones that start with a server-side greeting – for example SMTP. Opportunistic TLS (STARTTLS) is not supported yet, but regular TCP-over-TLS just works!

HTTP/1 ⇔ HTTP/2 Interoperability

Mitmproxy can now accept HTTP/2 requests from the client and forward them to an HTTP/1 server. This on-the-wire protocol translation works bi-directionally: all HTTP requests and responses were created equal! This change also makes it possible to change the request destination for HTTP/2 flows, which previously was not possible at all.

WebSocket Message Display

Mitmproxy now displays WebSocket messages not only in the event log, but also in a dedicated UI tab! There are still UX details to be ironed out, but we're excited to ship a first prototype here. While this is only for the console UI via mitmproxy, the web UI via mitmweb is still looking for amazing contributors to reach feature parity!

Secure Web Proxy (TLS-over-TLS)

Clients usually talk in plaintext to HTTP proxies – telling them where to connect – before they ultimately establish a secure TLS connection through the proxy with the destination server. With mitmproxy 7, clients can now establish TLS with the proxy right from the start (before issuing an HTTP CONNECT request), which can add a significant layer of defense in public networks.
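A secure web proxy wraps the usual CONNECT handshake inside an outer TLS session. The client side can be sketched as follows (stdlib only; the proxy address and target host are hypothetical, and actually opening the tunnel assumes a running TLS-capable proxy such as mitmproxy 7 – only the request formatting below is exercised without one):

```python
import socket
import ssl

def build_connect_request(host: str, port: int) -> bytes:
    """Format the plain HTTP CONNECT request sent through the outer TLS tunnel."""
    return (f"CONNECT {host}:{port} HTTP/1.1\r\n"
            f"Host: {host}:{port}\r\n\r\n").encode("ascii")

def open_tls_over_tls(proxy_host: str, proxy_port: int, target_host: str) -> ssl.SSLSocket:
    """Hypothetical helper: TLS to the proxy first, then CONNECT, then TLS to the target."""
    ctx = ssl.create_default_context()
    raw = socket.create_connection((proxy_host, proxy_port))
    # Outer TLS session: unlike with a classic http:// proxy, the CONNECT
    # request itself is now encrypted on the wire.
    outer = ctx.wrap_socket(raw, server_hostname=proxy_host)
    outer.sendall(build_connect_request(target_host, 443))
    outer.recv(4096)  # expect an "HTTP/1.1 200 ..." reply from the proxy
    # Inner TLS session with the destination, tunnelled through the proxy.
    return ctx.wrap_socket(outer, server_hostname=target_host)

# The request line is ordinary HTTP, but travels inside the outer TLS session:
print(build_connect_request("example.com", 443).decode())
```

The only difference from a classic plaintext proxy is that extra outer `wrap_socket` step; everything after the CONNECT exchange is unchanged.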
So instead of simply specifying http://127.0.0.1:8080 you can now also use HTTPS via https://127.0.0.1:8080 (or any other listen host and port).

Windows Support for Console UI

Thanks to an experimental urwid patch, mitmproxy's console UI is now natively available on Windows. While the Windows Subsystem for Linux (WSL) has been a viable alternative for a while, we're very happy to provide the same tools across all platforms now.

API Reference Documentation

Having recently adopted the pdoc project, which generates awesome Python API documentation, we have built a completely new API reference documentation for mitmproxy's addon API. Paired with our existing examples on GitHub, this makes it much simpler to write new mitmproxy addons.

What's next?

While this release focuses heavily on our backend, the next mitmproxy release will come with lots of mitmweb improvements by our current GSoC 2021 student @gorogoroumaru. Stay tuned!

Release Changelog

Since the release of mitmproxy 6 about seven months ago, the project has had 527 commits by 28 contributors, resulting in 234 closed issues and 173 closed pull requests.

New Proxy Core (@mhils)

- Secure Web Proxy: Mitmproxy now supports TLS-over-TLS to already encrypt the connection to the proxy.
- Server-Side Greetings: Mitmproxy now supports proxying raw TCP connections, including ones that start with a server-side greeting (e.g. SMTP).
- HTTP/1 – HTTP/2 Interoperability: mitmproxy can now accept an HTTP/2 connection from the client, and forward it to an HTTP/1 server.
- HTTP/2 Redirects: The request destination can now be changed on HTTP/2 flows.
- Connection Strategy: Users can now specify if they want mitmproxy to eagerly connect upstream or wait as long as possible. Eager connections are required to detect protocols with server-side greetings; lazy connections enable the replay of responses without connecting to an upstream server.
- Timeout Handling: Mitmproxy will now clean up idle connections and also abort requests if the client disconnects in the meantime.
- Host Header-based Proxying: If the request destination is unknown, mitmproxy now falls back to proxying based on the Host header. This means that requests can often be redirected to mitmproxy using DNS spoofing only.
- Internals: All protocol logic is now separated from I/O ("sans-io"). This greatly improves testing capabilities, prevents a wide array of race conditions, and increases proper isolation between layers.

Additional Changes

- mitmproxy's command line interface now supports Windows (@mhils)
- The clientconnect, clientdisconnect, serverconnect, serverdisconnect, and log events have been replaced with new events, see addon documentation for details (@mhils)
- Contentviews now implement render_priority instead of should_render, allowing more specialization (@mhils)
- Addition of block_list option to block requests with a set status code (@ericbeland)
- Make mitmweb columns configurable and customizable (@gorogoroumaru)
- Automatic JSON view mode when +json suffix in content type (@kam800)
- Use pyca/cryptography to generate certificates, not pyOpenSSL (@mhils)
- Remove the legacy protocol stack (@Kriechi)
- Remove all deprecated pathod and pathoc tools and modules (@Kriechi)
- In reverse proxy mode, mitmproxy now does not assume TLS if no scheme is given but a custom port is provided (@mhils)
- Remove the following options: http2_priority, relax_http_form_validation, upstream_bind_address, spoof_source_address, and stream_websockets. If you depended on one of them please let us know. mitmproxy never phones home, which means we don't know how prominently these options were used. (@mhils)
- Fix IDNA host 'Bad HTTP request line' error (@grahamrobbins)
- Pressing ? now exits console help view (@abitrolly)
- --modify-headers now works correctly when modifying a header that is also part of the filter expression (@Prinzhorn)
- Fix SNI-related reproducibility issues when exporting to curl/httpie commands. (@dkasak)
- Add option export_preserve_original_ip to force exported command to connect to IP from original request. Only supports curl at the moment. (@dkasak)
- Major proxy protocol testing (@r00t-)
- Switch Docker image release to be based on Debian (@PeterDaveHello)
- Multiple Browsers: The browser.start command may be executed more than once to start additional browser sessions. (@rbdixon)
- Improve readability of SHA256 fingerprint. (@wrekone)
- Metadata and Replay Flow Filters: Flows may be filtered based on metadata and replay status. (@rbdixon)
- Flow control: don't read connection data faster than it can be forwarded. (@hazcod)
- Docker images for ARM64 architecture (@hazcod, @mhils)
- Fix parsing of certificate issuer/subject with escaped special characters (@Prinzhorn)
- Customize markers with emoji, and filters: The flow.mark command may be used to mark a flow with either the default "red ball" marker, a single character, or an emoji like ?. Use the ~marker filter to filter on marker characters. (@rbdixon)
- New flow.comment command to add a comment to the flow. Add ~comment <regex> filter syntax to search flow comments. (@rbdixon)
- Fix multipart forms losing boundary values on edit. (@roytu)
- Transfer-Encoding: chunked HTTP message bodies are now retained if they are below the stream_large_bodies limit. (@mhils)
- json() method for HTTP Request and Response instances will return decoded JSON body. (@rbdixon)
- Support for HTTP/2 Push Promises has been dropped. (@mhils)
- Make it possible to set sequence options from the command line. (@Yopi)

Source: https://mitmproxy.org/posts/releases/mitmproxy7/
-
Security Analysis of Telegram (Symmetric Part)

Overview

We performed a detailed security analysis of the encryption offered by the popular Telegram messaging platform. As a result of our analysis, we found several cryptographic weaknesses in the protocol, ranging from technically trivial and easy to exploit to more advanced and of theoretical interest. For most users, the immediate risk is low, but these vulnerabilities highlight that Telegram fell short of the cryptographic guarantees enjoyed by other widely deployed cryptographic protocols such as TLS. We made several suggestions to the Telegram developers that enable providing formal assurances that rule out a large class of cryptographic attacks, similarly to other, more established, cryptographic protocols.

By default, Telegram uses its bespoke MTProto protocol to secure communication between clients and its servers as a replacement for the industry-standard Transport Layer Security (TLS) protocol. While Telegram is often referred to as an "encrypted messenger", this level of protection is the only protection offered by default: MTProto-based end-to-end encryption, which would protect communication from Telegram employees or anyone breaking into Telegram's servers, is only optional and not available for group chats. We thus focused our efforts on analysing whether Telegram's MTProto offers comparable privacy to surfing the web with HTTPS.

Vulnerabilities

We disclosed the following vulnerabilities to the Telegram development team on 16 April 2021 and agreed with them on a disclosure on 16 July 2021:

An attacker on the network can reorder messages coming from a client to the server. This allows an attacker, for example, to alter the order of "pizza" and "crime" in the sequence of messages: "I say yes to", "all the pizzas", "I say no to", "all the crimes". This attack is trivial to carry out.
Telegram confirmed the behaviour we observed and addressed this issue in version 7.8.1 for Android, 7.8.3 for iOS and 2.8.8 for Telegram Desktop.

An attacker can detect which of two special messages was encrypted by a client or a server under some special conditions. In particular, Telegram encrypts acknowledgement messages, i.e. messages that encode that a previous message was indeed received, but the way it handles the re-sending of unacknowledged messages leaks whether such an acknowledgement was sent and received. This attack is mostly of theoretical interest. However, cryptographic protocols are expected to rule out even such attacks. Telegram confirmed the behaviour we observed and addressed this issue in version 7.8.1 for Android, 7.8.3 for iOS and 2.8.8 for Telegram Desktop.

We also studied the implementation of Telegram clients and found that three of them (Android, iOS, Desktop) contained code which – in principle – permitted the recovery of some plaintext from encrypted messages. For this, an attacker must send many carefully crafted messages to a target, on the order of millions of messages. This attack, if executed successfully, could be devastating for the confidentiality of Telegram messages. Luckily, it is almost impossible to carry out in practice. In particular, it is mostly mitigated by the coincidence that certain metadata in Telegram is chosen randomly and kept secret. The presence of these implementation weaknesses, however, highlights the brittleness of the MTProto protocol: it mandates that certain steps are done in a problematic order (see discussion below), which puts a significant burden on developers (including developers of third-party clients) who have to avoid accidental leakage. The three official Telegram clients which exhibit non-ideal behaviour are evidence that this is a high burden. Telegram confirmed the attacks and rolled out fixes to all three affected clients in June. Telegram also awarded a "bug bounty" for these vulnerabilities.
We also show how an attacker can mount an "attacker-in-the-middle" attack on the initial key negotiation between the client and the server. This allows an attacker to impersonate the server to the client, breaking the confidentiality and integrity of the communication. Luckily, this attack is also quite difficult to carry out, as it requires sending billions of messages to a Telegram server within minutes. However, it highlights that while users are required to trust Telegram's servers, the security of those servers and their implementations cannot be taken for granted. Telegram confirmed the behaviour and implemented some server-side mitigations. In addition, from version 7.8.1 for Android, 7.8.3 for iOS and 2.8.8 for Telegram Desktop, client apps support an RSA-OAEP+ variant.

We were informed by the Telegram developers that they do not do security or bugfix releases except for immediate post-release crash fixes. The development team also informed us that they did not wish to issue security advisories at the time of patching, nor commit to release dates for specific fixes. As a consequence, the fixes were rolled out as part of regular Telegram updates.

Formal Security Analysis

The central result of our investigation, however, is that Telegram's MTProto can provide a confidential and integrity-protected channel when the changes we suggested are adopted by the Telegram developers. As mentioned above, the Telegram developers communicated to us that they did adopt these changes. Telegram awarded a cash prize for this analysis to stimulate future analysis.

However, this result comes with significant caveats. Cryptographic protocols like MTProto are built from cryptographic building blocks such as hash functions, block ciphers and public-key encryption. In a formal security analysis, the security of the protocol is reduced to the security of its building blocks.
This is no different to arguing that a car is road safe if its tires, brakes and indicator lights are fully functional. In the case of Telegram, the security requirements on the building blocks are unusual. Because of this, these requirements have not been studied in previous research. This is somewhat analogous to making assumptions about a car's brakes that have not been lab-tested. Other cryptographic protocols such as TLS do not have to rely on these sorts of special assumptions.

A further caveat of these findings is that we only studied three official Telegram clients and no third-party clients. However, some of these third-party clients have substantial user bases. Here, the brittleness of the MTProto protocol is a cause for concern, as the developers of these third-party clients may well make mistakes in implementing the protocol in a way that avoids, e.g. the timing leaks mentioned above. Alternative design choices for MTProto would have made the task significantly easier for the developers.

Paper

Martin R. Albrecht, Lenka Mareková, Kenneth G. Paterson, Igors Stepanovs: Four Attacks and a Proof for Telegram. To appear at IEEE Symposium on Security and Privacy 2022.

Team

Martin R. Albrecht (Information Security Group, Royal Holloway, University of London)
Lenka Mareková (Information Security Group, Royal Holloway, University of London)
Kenneth G. Paterson (Applied Cryptography Group, ETH Zurich)
Igors Stepanovs (Applied Cryptography Group, ETH Zurich)

A Somewhat Opinionated Discussion

"Don't roll your own crypto" is a common mantra issued when a cryptographic vulnerability is found in some protocol. Indeed, Telegram has been the recipient of unsolicited advice of this nature. The problem with this mantra is, of course, that it sounds like little more than gatekeeping. Clearly, some people need to roll "their own crypto" for cryptography to be rolled at all. However, despite the gatekeeping flavour, there is a rationale behind this advice.
Standard cryptographic protocols have received attention from analysts and new protocols are developed in parallel with a proof that roughly says: “No adversary with these capabilities can break the given well-defined security goals unless one of the underlying primitives – a block cipher, a hash function etc – has a weakness.“ Of course, proofs can have bugs too, but this process significantly reduces the risk of catastrophic failure. Two of our attacks described above serve to illustrate that some behaviours exhibited by Telegram clients and servers are undesirable (permitting reordering of some messages, encrypting twice under the same state in some corner case). The apparent need to make non-standard assumptions on the underlying building blocks in our proofs (which we do not know how to avoid) further illustrates that some design choices made in MTProto are more risky than they need to be. In other words, this part of our paper – “Two Attacks and a Proof” so to speak – illustrates the implied rationale of the above mentioned mantra: proofs help to reduce the attack surface. But there is another leg of the “don’t roll your own crypto” mantra: it can be surprisingly tricky to implement cryptographic algorithms in a way that they leak no secret information, e.g. through timing side channels. Proofs only cover what is in their model. Two of our attacks are timing attacks and thus “outside the model”. Our proof essentially states that it is, in principle, possible to implement MTProto in a way that is secure but does not cover how easy or hard it is or how to do it at all. Here, two recurring “anti-patterns” in MTProto’s design make it tricky to implement the protocol securely. First, MTProto opts to protect the integrity of plaintexts rather than ciphertexts. This is the difference between Encrypt-and-MAC and Encrypt-then-MAC. 
It might seem natural to protect the integrity of the part that you care about – the plaintext – but doing it this way around means that a receiver must first process an incoming ciphertext with their secret key (i.e. decrypt it) before being able to verify that the ciphertext has not been tampered with. In other words, the receiver must perform a computation involving untrusted data – the received ciphertext – and their decryption key. This can be done in a secure manner, but Encrypt-then-MAC completely sidesteps the issue by first checking whether the ciphertext was tampered with (i.e. checking the MAC on the ciphertext), and only then decrypting. Second, but related to the first point, block ciphers process data in blocks of e.g. 16 bytes. Since data may have an arbitrary byte length, there will be some bytes left over that MTProto fills with random garbage (which is good). Now, since Telegram protects the integrity of plaintexts instead of ciphertexts, the question arises: compute the MAC over the plaintext with or without the padding? The original design decision was “without padding”, presumably because the designers did not see a need to protect useless random padding. The remaining two of our attacks exploit this behaviour. As mentioned above, we break – in a completely impractical way! – the initial key exchange between the client and the server. Here, we exploit that MTProto attempts to add integrity inside RSA encryption by including a hash of the payload but excluding the added random padding. This is like a homegrown variant of RSA-OAEP. The problem with this approach is that the receiver must – after decryption – figure out where the payload ends and where the padding starts. This means parsing the payload before being able to check its integrity. 
Furthermore, depending on the result of this parsing, more or less data may be fed into the hash function for integrity checking, which in turn produces slightly shorter or longer running times (our actual attack proceeds differently, we are merely illustrating the principle here while avoiding many details). Our second attack goes for the one place where MTProto does indeed also protect useless random padding, but the processing in some clients behaves as if this was not the case. In 2015 Jakob Jakobsen and Claudio Orlandi gave an attack on the IND-CCA security of the previous version of MTProto. As a result of this, MTProto 2.0, the current version, now also protects the integrity of padding bytes. Thus, the logic now could be: (a) decrypt and then immediately (b) check integrity. (This, too, isn’t without its pitfalls. For example, when do you know that you have enough data to run the integrity check? Parsing some length field first, for example, to establish this could again lead to attacks.) However, we found that three official Telegram clients do additional processing on the decrypted data before step (b), processing that is necessary in MTProto 1.0 (where padding and plaintext data needed to be separated before checking integrity) but not in MTProto 2.0 (where the integrity of everything is protected). We exploit – again in a completely impractical way! – this behaviour in our attacks (but we also need to combine it with the previous attack to make it all come together). So, again, the original decision not to protect some useless random bytes in MTProto 1.0 required the receiver to decide which bytes are useless and which aren’t before checking their integrity, and three official Telegram clients have carried this behaviour forward into MTProto 2.0. 
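The Encrypt-then-MAC ordering discussed above can be sketched in a few lines (a toy illustration using only stdlib primitives: a SHA-256-derived keystream stands in for a real cipher such as AES, and HMAC-SHA-256 plays the MAC; this is not MTProto's actual construction):

```python
import hmac
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream for illustration only; a real design would use AES etc.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, len(plaintext))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()  # MAC over the CIPHERTEXT
    return ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    # Integrity is checked BEFORE any processing with the decryption key:
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("tampered ciphertext rejected before decryption")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, len(ct))))

blob = encrypt_then_mac(b"k1" * 16, b"k2" * 16, b"attack at dawn")
assert decrypt(b"k1" * 16, b"k2" * 16, blob) == b"attack at dawn"

# Flipping a ciphertext bit is caught by the MAC check, so the decryption
# key never operates on attacker-controlled data:
tampered = bytes([blob[0] ^ 1]) + blob[1:]
try:
    decrypt(b"k1" * 16, b"k2" * 16, tampered)
except ValueError as e:
    print(e)
```

Because the tag covers the ciphertext (padding and all), no parsing of decrypted data is ever needed before the integrity check, which is exactly the property MTProto's plaintext-MAC design gives up.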
As an aside, Jakobsen and Orlandi wrote: "We stress that this is a theoretical attack on the definition of security and we do not see any way of turning the attack into a full plaintext-recovery attack." Similarly, the Telegram "FAQ for the Technically Inclined (MTProto v.1.0)" provides the following analogy: "A postal worker could write 'Haha' (using invisible ink!) on the outside of a sealed package that he delivers to you. It didn't stop the package from being delivered, it doesn't allow them to change the contents of the package, and it doesn't allow them to see what was inside." In hindsight, we think that this is incorrect. As explained above, our timing side channels essentially exploit this behaviour in order to do message recovery (but we need to "chain" two "exploits" to make it work, even ignoring practicality concerns).

In summary, MTProto protects the integrity of plaintexts rather than ciphertexts, which necessitates operating with a decryption key on untrusted data. Moreover, MTProto in several places opted (or at least used to opt) to require additional parsing of decrypted data before its integrity could be checked, by only protecting the payload without padding. This produces an opportunity for timing side-channel attacks; an opportunity that could be completely removed by using a standard authenticated encryption scheme (roughly speaking, Encrypt-then-MAC with key separation has been shown to be a decent such scheme, but faster dedicated schemes exist). Finally, given that Telegram's ecosystem is serviced also by many third-party clients, the "brittleness" of the design or the presence of "footguns" means that even if the developers of the official clients manage to take great care to avoid timing leaks, those are difficult to rule out for third-party clients.

Q & A

What about IGE?

Telegram uses the little-known Infinite Garble Extension (IGE) block cipher mode in place of more standard alternatives.
While claims about its infinite error propagation have been disproven, our proofs show that its use in the symmetric part of MTProto is no more problematic than if CBC mode was used. However, its similarity with CBC also means it is vulnerable to manipulation if some bits of plaintext are known. Indeed, we use this property in combination with the timing side channel described earlier.

What about length extension attacks?

MTProto makes heavy use of plain SHA-256, both in deriving keys and calculating the MAC, which at first glance appears to be the kind of use that would lead to length extension attacks. However, as our proofs show, MTProto manages to sidestep this particular issue because of its plaintext encoding format, which mandates the presence of certain metadata in the first block.

Did we really break IND-CPA?

Above, we wrote:

An attacker can detect which of two special messages was encrypted by a client or a server under some special conditions. In particular, Telegram encrypts acknowledgement messages, i.e. messages that encode that a previous message was indeed received, but the way it handles the re-sending of unacknowledged messages leaks whether such an acknowledgement was sent and received. This attack is mostly of theoretical interest. However, cryptographic protocols are expected to rule out even such attacks. Telegram confirmed the behaviour we observed and addressed this issue in version 7.8.1 for Android, 7.8.3 for iOS and 2.8.8 for Telegram Desktop.

Telegram wrote:

MTProto never produces the same ciphertext, even for messages with identical content, because MTProto is stateful and msg_id is changed on every encryption. If one message is re-sent on the order of 2^64 times, MTProto can transmit the same ciphertext for the same message on two of these re-sendings.
However, it would not be correct to claim that retransmission over the network of the same ciphertext for a message that was previously sent is a violation of IND-CPA security because otherwise any protocol over TCP wouldn’t be IND-CPA secure due to TCP retransmissions. To facilitate future research, each message that is re-sent by Telegram apps is now either wrapped in a new container or re-sent with a new msg_id. We have already addressed this in the latest version of our write-up (which we shared with the Telegram developers on 15 July 2021). We reproduce that part below, slightly edited for readability. If a message is not acknowledged within a certain time in MTProto, it is re-encrypted using the same msg_id and with fresh random padding. While this appears to be a useful feature and a mitigation against message deletion, it enables attacks in the IND-CPA setting, as we explain next. As a motivation, consider a local passive adversary that tries to establish whether R responded to I when looking at a transcript of three ciphertexts (c_{I, 0}, c_{R}, c_{I, 1}), where c_{u} represents a ciphertext sent from u. In particular, it aims to establish whether c_{R} encrypts an automatically generated acknowledgement, we will use “ACK” below to denote this, or a new message from R. If c_{I, 1} is a re-encryption of the same message as c_{I, 0}, re-using the state, this leaks that bit of information about c_{R}. Note that here we are breaking the confidentiality of the ciphertext carrying “ACK”. In addition to these encrypted acknowledgement messages, the underlying transport layer, e.g. TCP, may also issue unencrypted ACK messages or may resend ciphertexts as is. The difference between these two cases is that in the former case the acknowledgement message is encrypted, in the latter it is not. 
For completeness, note that Telegram clients do not resend cached ciphertext blobs when unacknowledged, but instead re-encrypt the underlying message under the same state with fresh random padding. These paragraphs are then followed by a semi-formal write-up of the attack.

Source: https://mtpsym.github.io/
-
Hooking Candiru: Another Mercenary Spyware Vendor Comes into Focus

By Bill Marczak, John Scott-Railton, Kristin Berdan, Bahr Abdul Razzak, and Ron Deibert
July 15, 2021

Summary

- Candiru is a secretive Israel-based company that sells spyware exclusively to governments. Reportedly, their spyware can infect and monitor iPhones, Androids, Macs, PCs, and cloud accounts.
- Using Internet scanning we identified more than 750 websites linked to Candiru's spyware infrastructure. We found many domains masquerading as advocacy organizations such as Amnesty International, the Black Lives Matter movement, as well as media companies, and other civil-society themed entities.
- We identified a politically active victim in Western Europe and recovered a copy of Candiru's Windows spyware.
- Working with Microsoft Threat Intelligence Center (MSTIC) we analyzed the spyware, resulting in the discovery of CVE-2021-31979 and CVE-2021-33771 by Microsoft, two privilege escalation vulnerabilities exploited by Candiru. Microsoft patched both vulnerabilities on July 13th, 2021.
- As part of their investigation, Microsoft observed at least 100 victims in Palestine, Israel, Iran, Lebanon, Yemen, Spain, United Kingdom, Turkey, Armenia, and Singapore. Victims include human rights defenders, dissidents, journalists, activists, and politicians.
- We provide a brief technical overview of the Candiru spyware's persistence mechanism and some details about the spyware's functionality.
- Candiru has made efforts to obscure its ownership structure, staffing, and investment partners. Nevertheless, we have been able to shed some light on those areas in this report.

1. Who is Candiru?

The company known as "Candiru," based in Tel Aviv, Israel, is a mercenary spyware firm that markets "untraceable" spyware to government customers. Their product offering includes solutions for spying on computers, mobile devices, and cloud accounts.
Figure 1: A distinctive mural of five men with empty heads wearing suits and bowler hats is displayed in this "Happy Hour" photo of a previous Candiru office, posted on Facebook by a catering company.

A Deliberately Opaque Corporate Structure

Candiru makes efforts to keep its operations, infrastructure, and staff identities opaque to public scrutiny. Candiru Ltd. was founded in 2014 and has undergone several name changes (see: Table 1). Like many mercenary spyware corporations, the company reportedly recruits from the ranks of Unit 8200, the signals intelligence unit of the Israeli Defence Forces. While the company's current name is Saito Tech Ltd, we will refer to them as "Candiru" as they are most well known by that name. The firm's corporate logo appears to be a silhouette of the reputedly-gruesome Candiru fish in the shape of the letter "C."

Company name | Date of registration | Possible meaning
Saito Tech Ltd. (סאייטו טק בעיימ) | 2020 | "Saito" is a town in Japan
Taveta Ltd. (טאבטה בעיימ) | 2019 | "Taveta" is a town in Kenya
Grindavik Solutions Ltd. (גרינדוויק פתרונות בעיימ) | 2018 | "Grindavik" is a town in Iceland
DF Associates Ltd. (ד. אפ אסוסיאייטס בעיימ) | 2017 | ?
Candiru Ltd. (קנדירו בעיימ) | 2014 | A parasitic freshwater fish

Table 1: Candiru's corporate registrations over time

Candiru has at least one subsidiary: Sokoto Ltd. Section 5 provides further documentation of Candiru's corporate structure and ownership.

Reported Sales and Investments

According to a lawsuit brought by a former employee, Candiru had sales of "nearly $30 million" within two years of its founding. The firm's reported clients are located in "Europe, the former Soviet Union, the Persian Gulf, Asia and Latin America." Additionally, reports of possible deals with several countries have been published:

Uzbekistan: In a 2019 presentation at the Virus Bulletin security conference, a Kaspersky Lab researcher stated that Candiru likely sold its spyware to Uzbekistan's National Security Service.
Saudi Arabia & the UAE: The same presentation also mentioned Saudi Arabia and the UAE as likely Candiru customers.

Singapore: A 2019 Intelligence Online report mentions that Candiru was active in soliciting business from Singapore's intelligence services.

Qatar: A 2020 Intelligence Online report notes that Candiru "has become closer to Qatar." A company linked to Qatar's sovereign wealth fund has invested in Candiru. No information on Qatar-based customers has yet emerged.

Candiru's Spyware Offerings

A leaked Candiru project proposal published by TheMarker shows that Candiru's spyware can be installed using a number of different vectors, including malicious links, man-in-the-middle attacks, and physical attacks. A vector named "Sherlock" is also offered, which they claim works on Windows, iOS, and Android. This may be a browser-based zero-click vector.

Figure 2: Infection vectors offered by Candiru.

Like many of its peers, Candiru appears to license its spyware by number of concurrent infections, which reflects the number of targets that can be under active surveillance at any one instant in time. Like NSO Group, Candiru also appears to restrict the customer to a set of approved countries. The €16 million project proposal allows for an unlimited number of spyware infection attempts, but the monitoring of only 10 devices simultaneously. For an additional €1.5M, the customer can purchase the ability to monitor 15 additional devices simultaneously, and to infect devices in a single additional country. For an additional €5.5M, the customer can monitor 25 additional devices simultaneously, and conduct espionage in five more countries.

Figure 3: Proposal for a Candiru customer indicating the number of concurrent infections under a given contract.

The fine print in the proposal states that the product will operate in "all agreed upon territories," then mentions a list of restricted countries including the US, Russia, China, Israel and Iran.
This same list of restricted countries has previously been mentioned by NSO Group. Nevertheless, Microsoft observed Candiru victims in Iran, suggesting that in some situations, products from Candiru do operate in restricted territories. In addition, targeting infrastructure disclosed in this report includes domains masquerading as the Russian postal service.

The proposal states that the spyware can exfiltrate private data from a number of apps and accounts, including Gmail, Skype, Telegram, and Facebook. The spyware can also capture browsing history and passwords, turn on the target's webcam and microphone, and take pictures of the screen. Capturing data from additional apps, such as Signal Private Messenger, is sold as an add-on.

Figure 4: Customers can pay additional money to capture data from Signal.

For a further €1.5 million fee, customers can purchase a remote shell capability, which gives them full access to run any command or program on the target's computer. This kind of capability is especially concerning, given that it could also be used to plant files, such as incriminating materials, on an infected device.

2. Finding Candiru's Malware In The Wild

Using telemetry data from Team Cymru, along with assistance from civil society partners, the Citizen Lab was able to identify a computer that we suspected contained a persistent Candiru infection. We contacted the owner of the computer, a politically active individual in Western Europe, and arranged for the computer's hard drive to be imaged. We ultimately extracted a copy of Candiru's spyware from the disk image.
While analysis of the extracted spyware is ongoing, this section outlines initial findings about the spyware's persistence.

Persistence

Candiru's spyware was persistently installed on the computer via COM hijacking of the following registry key:

HKEY_LOCAL_MACHINE\Software\Classes\CLSID\{CF4CC405-E2C5-4DDD-B3CE-5E7582D8C9FA}\InprocServer32

Normally, this registry key's value points to the benign Windows Management Instrumentation wmiutils.dll file, but the value on the infected computer had been modified to point to a malicious DLL file that had been dropped inside the Windows system folder associated with the Japanese input method (IMEJP): C:\WINDOWS\system32\ime\IMEJP\IMJPUEXP.DLL. This folder is benign and included in a default install of Windows 10, but IMJPUEXP.DLL is not the name of a legitimate Windows component. When Windows boots, it automatically loads the Windows Management Instrumentation service, which looks up the DLL path in the registry key and then invokes the DLL.

Loading the Spyware's Configuration

The IMJPUEXP DLL file has eight blobs in the PE resources section, with identifiers 102, 103, 105, 106, 107, 108, 109, and 110. The DLL decrypts these using an AES key and IV that are hardcoded in the DLL. Decryption is performed via the Windows CryptoAPI, using AES-256-CBC. Of particular note is resource 102, which contains the path to the legitimate wmiutils.dll; this library is loaded after the spyware, ensuring that the COM hijack does not disrupt normal Windows functionality. Resource 103 points to a file, AgentService.dat, in a folder created by the spyware: C:\WINDOWS\system32\config\spp\Licenses\curv\config\tracing\. Resource 105 points to a second file in the same directory, KBDMAORI.dat. IMJPUEXP.DLL decrypts and loads the AgentService.dat file whose path is in resource 103, using the same AES key and IV, and decompresses it via zlib.
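The COM-hijack persistence described above can be approximated with a short triage script. This is an illustrative sketch, not code from the Citizen Lab report: the helper function is my own, and on a live Windows system you would read the actual value with the winreg module (shown in a comment). The idea is simply that the InprocServer32 default value for this CLSID should resolve to wmiutils.dll, and anything else is worth a closer look.

```python
import ntpath

# The COM server registration abused for persistence (from the report).
HIJACKED_KEY = (r"HKEY_LOCAL_MACHINE\Software\Classes\CLSID"
                r"\{CF4CC405-E2C5-4DDD-B3CE-5E7582D8C9FA}\InprocServer32")

# This value normally points at the benign WMI helper DLL.
EXPECTED_DLL = "wmiutils.dll"

def is_suspicious_inproc_server(value: str) -> bool:
    """Return True if the InprocServer32 default value does not
    resolve to the expected wmiutils.dll filename."""
    filename = ntpath.basename(value.strip().strip('"')).lower()
    return filename != EXPECTED_DLL

# On Windows, the value could be read like this (requires winreg):
#   import winreg
#   with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
#           r"Software\Classes\CLSID"
#           r"\{CF4CC405-E2C5-4DDD-B3CE-5E7582D8C9FA}\InprocServer32") as key:
#       value, _ = winreg.QueryValueEx(key, None)

benign = r"%systemroot%\system32\wbem\wmiutils.dll"
hijacked = r"C:\WINDOWS\system32\ime\IMEJP\IMJPUEXP.DLL"
print(is_suspicious_inproc_server(benign))    # False
print(is_suspicious_inproc_server(hijacked))  # True
```

A real detection would also verify the digital signature of whatever DLL the value points to, since an attacker could reuse the wmiutils.dll name in a different directory.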
The AgentService.dat file then loads the file referenced in resource 105, KBDMAORI.dat, using a second AES key and IV hardcoded in AgentService.dat, and performs the decryption using a statically linked OpenSSL. Decrypting KBDMAORI.dat yields a file containing a series of nine encrypted blobs, each prefixed with an 8-byte little-endian length field. Each blob is encrypted with the same AES key and IV used to decrypt KBDMAORI.dat, and is additionally zlib compressed.

The first four encrypted blobs appear to be DLLs from the Microsoft Visual C++ redistributable: vcruntime140.dll, msvcp140.dll, ucrtbase.dll, and concrt140.dll. The subsequent blobs are part of the spyware, including components apparently called Internals.dll and Help.dll. Both the Microsoft DLLs and the spyware DLLs in KBDMAORI.dat are lightly obfuscated. Reverting the following modifications makes the files valid DLLs:

- The first two bytes of the file (MZ) have been zeroed.
- The first 4 bytes of the NT header (\x50\x45\x00\x00) have been zeroed.
- The first 2 bytes of the optional header (\x0b\x02) have been zeroed.
- The strings in the import directory have been XOR obfuscated, using a 48-byte XOR key hardcoded in AgentService.dat: 6604F922F90B65F2B10CE372555C0A0C0C5258B6842A83C7DC2EE4E58B363349F496E6B6A587A88D0164B74DAB9E6B58

The final blob in KBDMAORI.dat is the spyware's configuration, in JSON format. The configuration is somewhat obfuscated, but clearly contains Base64-encoded UTF-16 URLs for command and control.

Figure 5: The obfuscated spyware C&C configuration in JSON format.

The C&C servers in the configuration are:

- https://msstore[.]io
- https://adtracker[.]link
- https://cdnmobile[.]io

All three domain names pointed to 185.181.8[.]155. This IP address was connected to three other IPs that matched our Candiru fingerprint CF1 (Section 3).
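The container format described for KBDMAORI.dat (8-byte little-endian length prefixes, zlib-compressed payloads) and the repeating-key XOR applied to the import strings are both easy to model. The sketch below is illustrative only: it parses a synthetic buffer built in the same layout and applies the 48-byte key quoted above; the AES-256-CBC layer that sits between the length prefix and the zlib stream in the real sample is deliberately omitted, and the function names are my own.

```python
import struct
import zlib

# The 48-byte XOR key hardcoded in AgentService.dat (from the report).
XOR_KEY = bytes.fromhex(
    "6604F922F90B65F2B10CE372555C0A0C0C5258B6842A83C7"
    "DC2EE4E58B363349F496E6B6A587A88D0164B74DAB9E6B58")

def split_blobs(buf: bytes) -> list:
    """Walk a series of blobs, each prefixed with an 8-byte
    little-endian length, and zlib-decompress each payload.
    (In the real sample an AES-256-CBC layer sits in between.)"""
    blobs, offset = [], 0
    while offset < len(buf):
        (length,) = struct.unpack_from("<Q", buf, offset)
        offset += 8
        blobs.append(zlib.decompress(buf[offset:offset + length]))
        offset += length
    return blobs

def xor_deobfuscate(data: bytes, key: bytes = XOR_KEY) -> bytes:
    """Undo the repeating-key XOR applied to import-directory strings."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Build a synthetic two-blob container in the same layout and round-trip it.
payloads = [b"vcruntime140.dll", b"Internals.dll"]
container = b""
for p in payloads:
    comp = zlib.compress(p)
    container += struct.pack("<Q", len(comp)) + comp
print(split_blobs(container))  # [b'vcruntime140.dll', b'Internals.dll']

# XOR with a repeating key is its own inverse: applying it twice
# restores the original bytes.
name = b"KERNEL32.dll"
print(xor_deobfuscate(xor_deobfuscate(name)) == name)  # True
```

Restoring the zeroed MZ, NT header, and optional header bytes listed above is a matter of writing the fixed values (`MZ`, `\x50\x45\x00\x00`, `\x0b\x02`) back at their usual offsets before parsing the file as a PE.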
Spyware Functionality

We are still reversing most of the spyware's functionality, but Candiru's Windows payload appears to include features for exfiltrating files, exporting all messages saved in the Windows version of the popular encrypted messaging app Signal, and stealing cookies and passwords from the Chrome, Internet Explorer, Firefox, Safari, and Opera browsers. The spyware also makes use of a legitimate signed third-party driver, physmem.sys: c299063e3eae8ddc15839767e83b9808fd43418dc5a1af7e4f44b97ba53fbd3d

Microsoft's analysis also established that the spyware could send messages from logged-in email and social media accounts directly on the victim's computer. This could allow malicious links or other messages to be sent directly from a compromised user's computer. Proving that the compromised user did not send the message could be quite challenging.

3. Mapping Candiru's Command & Control Infrastructure

To identify the websites used by Candiru's spyware, we developed four fingerprints and a new Internet scanning technique. We searched historical data from Censys and conducted our own scans in 2021. This led us to identify at least 764 domain names that we assess with moderate-high confidence to be used by Candiru and its customers. Examination of the domain names indicates a likely interest in targets in Asia, Europe, the Middle East, and North America. Additionally, based on our analysis of Internet scanning data, we believe that there are Candiru systems operated from Saudi Arabia, Israel, the UAE, Hungary, and Indonesia, among other countries.

OPSEC Mistake by Candiru Leads to Their Infrastructure

Using Censys, we found a self-signed TLS certificate that included the email address "amitn@candirusecurity.com".
We attributed the candirusecurity[.]com domain name to Candiru Ltd. because a second domain name (verification[.]center) was registered in 2015 with a candirusecurity[.]com email address and a phone number (+972-54-2552428) listed by Dun & Bradstreet as the fax number for Candiru Ltd., also known as Saito Tech Ltd.

Figure 6: This Candiru certificate we found on Censys was the starting point of our analysis.

Censys data records that a total of six IP addresses returned this certificate: 151.236.23[.]93, 69.28.67[.]162, 176.123.26[.]67, 52.8.109[.]170, 5.135.115[.]40, and 185.56.89[.]66. The latter four of these IP addresses subsequently returned another certificate, which we fingerprinted (fingerprint CF1) based on distinctive features. We searched Censys data for this fingerprint:

SELECT parsed.fingerprint_sha256
FROM `censys-io.certificates_public.certificates`
WHERE parsed.issuer_dn IS NULL
  AND parsed.subject_dn IS NULL
  AND parsed.validity.length = 8639913600
  AND parsed.extensions.basic_constraints.is_ca

Table 2: Fingerprint CF1

We found 42 certificates on Censys matching CF1. We observed that six IPs matching CF1 certificates later returned certificates that matched a second fingerprint we devised, CF2. The CF2 fingerprint is based on certificates that match those generated by a "fake name" generator. We first ran an SQL query on Censys data for the fingerprint, and then filtered by a list of fake names:

SELECT parsed.fingerprint_sha256, parsed.subject_dn
FROM `censys-io.certificates_public.certificates`
WHERE (parsed.subject_dn = parsed.issuer_dn
  AND REGEXP_CONTAINS(parsed.subject_dn, r"^O=[A-Z][a-z]+,,? CN=[a-z]+\.(com|net|org)+$")
  AND parsed.extensions.basic_constraints.is_ca)

Table 3: Fingerprint CF2 SQL query

The SQL query yielded 572 results. We filtered the results, requiring the TLS certificate's organization in the parsed.subject_dn field to contain an entry from the list of 475 last names in the Perl Data-Faker module.
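The second filtering step can be reproduced with a few lines of code. The sketch below is illustrative: the surname set is a tiny stand-in for the 475-name Data-Faker word list, the regex mirrors the REGEXP_CONTAINS pattern from the SQL query above, and the function name is my own.

```python
import re

# Stand-in for the 475 last names shipped with Perl's Data-Faker
# module; the actual filter used the full word list.
FAKER_SURNAMES = {"Abbott", "Barton", "Collins", "Hermann", "Wolf"}

# Mirrors the REGEXP_CONTAINS pattern from the Censys SQL query,
# with a capture group added around the organization name.
SUBJECT_RE = re.compile(r"^O=([A-Z][a-z]+),,? CN=[a-z]+\.(com|net|org)+$")

def matches_cf2(subject_dn: str) -> bool:
    """Return True if a certificate subject DN looks auto-generated:
    it matches the CF2 shape AND the organization is a Faker surname."""
    m = SUBJECT_RE.match(subject_dn)
    return bool(m) and m.group(1) in FAKER_SURNAMES

print(matches_cf2("O=Hermann, CN=example.com"))    # True
print(matches_cf2("O=Mozilla, CN=example.com"))    # False (not a Faker name)
print(matches_cf2("O=ACME Corp, CN=example.com"))  # False (shape mismatch)
```

Splitting the hunt into a cheap structural query followed by a word-list filter keeps the expensive regex pass inside the database and the precise matching in local code.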
We suspect that Candiru is using either this Perl module, or another module that uses the same word list, to generate fake names for TLS certificates. Neither the Perl Data-Faker module nor other similar modules (e.g., the Ruby Faker gem, or the PHP Faker module) appear to have built-in functionality for generating fake TLS certificates. Thus, we suspect that the TLS certificate generation code is custom code written by Candiru. After filtering, we found 542 matching certificates. We then developed an HTTP fingerprint, called BRIDGE, with which we scanned the Internet, and built a third TLS fingerprint, CF3. We are keeping the BRIDGE and CF3 fingerprints confidential for now in order to maintain visibility into Candiru's infrastructure.

Overlap with CHAINSHOT

One of the IPs that matched our CF1 fingerprint, 185.25.50[.]194, was pointed to by dl.nmcyclingexperience[.]com, which is mentioned as a final URL of a spyware payload delivered by the CHAINSHOT exploit kit in a 2018 report. CHAINSHOT is believed to be linked to Candiru, though no public reports had outlined the basis for this attribution until now. Kaspersky has observed the UAE hacking group Stealth Falcon using CHAINSHOT, as well as an Uzbekistan-based customer that they call SandCat. While numerous analyses have focused on various CHAINSHOT exploitation techniques, we have not seen any public work that examines Candiru's final Windows payload.

Overlap with Google TAG Research

On 14 July 2021, Google's Threat Analysis Group (TAG) published a report that mentions two Chrome zero-day exploits that TAG observed used against targets (CVE-2021-21166 and CVE-2021-30551). The report mentions nine websites that Google determined were used to distribute the exploits. Eight of these websites pointed to IP addresses that matched our CF3 Candiru fingerprint. We thus believe that the attacks Google observed involving these Chrome exploits were linked to Candiru.
Google also linked a further Microsoft Office exploit it observed (CVE-2021-33742) to the same operator.

Targeting Themes

Examination of Candiru's targeting infrastructure permits us to make guesses about the location of potential targets, and about the topics and themes that Candiru operators believed targets would find relevant and enticing. Some of the themes strongly suggest that the targeting likely concerned civil society and political activity. This troubling indicator matches Microsoft's observation of the extensive targeting of members of civil society, academics, and the media with Candiru's spyware.

We observed evidence of targeting infrastructure masquerading as media outlets, advocacy organizations, international organizations, and others (see Table 4). We found many aspects of this targeting concerning, such as the domain blacklivesmatters[.]info, which may be used to target individuals interested in or affiliated with this movement. Similarly, infrastructure masquerading as Amnesty International and Refugees International is troubling, as are lookalike domains for the United Nations, World Health Organization, and other international organizations. We also found the targeting theme of gender studies (e.g., womanstudies[.]co and genderconference[.]org) to be particularly interesting and warranting further investigation.
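One simple way to surface lookalike domains of the kind discussed here is string similarity against a list of legitimate brand names. This heuristic is my own illustration, not a method from the report: real typosquat detection also handles homoglyphs, inserted hyphens, and TLD swaps, and the threshold below is an arbitrary choice.

```python
import difflib

# A small set of legitimate names to compare against (illustrative).
LEGITIMATE = ["facebook", "instagram", "twitter", "youtube", "wikipedia"]

def closest_brand(domain_label: str, threshold: float = 0.55):
    """Return (brand, score) for the most similar legitimate name,
    or None if nothing clears the similarity threshold."""
    best = max(LEGITIMATE,
               key=lambda b: difflib.SequenceMatcher(None, domain_label, b).ratio())
    score = difflib.SequenceMatcher(None, domain_label, best).ratio()
    return (best, round(score, 2)) if score >= threshold else None

print(closest_brand("faceb00k-live"))  # flags facebook as the nearest brand
print(closest_brand("minstagram"))     # flags instagram
print(closest_brand("cuturl"))         # None: no brand is close enough
```

A production version would normalize the label first (strip digits-for-letters substitutions and hyphens) before scoring, which makes deliberate typos like "faceb00k" score much higher.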
International Media:
- cnn24-7[.]online: CNN
- dw-arabic[.]com: Deutsche Welle
- euro-news[.]online: Euronews
- rasef22[.]com: Raseef22
- france-24[.]news: France 24

Advocacy Organizations:
- amnestyreports[.]com: Amnesty International
- blacklivesmatters[.]info: Black Lives Matter movement
- refugeeinternational[.]org: Refugees International

Gender Studies:
- womanstudies[.]co: Academic theme
- genderconference[.]org: Academic conference

Tech Companies:
- cortanaupdates[.]com: Microsoft
- googlplay[.]store: Google
- apple-updates[.]online: Apple
- amazon-cz[.]eu: Amazon
- drpbx-update[.]net: Dropbox
- lenovo-setup[.]tk: Lenovo
- konferenciya-zoom[.]com: Zoom
- zcombinator[.]co: Y Combinator

Social Media:
- linkedin-jobs[.]com: LinkedIn
- faceb00k-live[.]com: Facebook
- minstagram[.]net: Instagram
- twitt-live[.]com: Twitter
- youtubee[.]life: YouTube

Popular Internet Websites:
- wikipediaathome[.]net: Wikipedia

International Organizations:
- osesgy-unmissions[.]org: Office of the Special Envoy of the Secretary-General for Yemen
- un-asia[.]co: United Nations
- whoint[.]co: World Health Organization

Government Contractors:
- vesteldefnce[.]io: Turkish defense contractor
- vfsglobal[.]fr: Visa services provider

Table 4: Some targeting themes observed in Candiru domains.

A range of targeting domains appears to be reasonably country-specific (see Table 5). We believe these domain themes indicate the likely countries of targets, and not necessarily the countries of the operators themselves.

Country-specific domains and what they likely impersonate:
- Indonesia: indoprogress[.]co: Left-leaning Indonesian publication
- Russia: pochtarossiy[.]info: Russian postal service
- Czechia: kupony-rohlik[.]cz: Czech grocery
- Armenia: armenpress[.]net: State news agency of Armenia
- Iran: tehrantimes[.]org: English-language daily newspaper in Iran
- Turkey: yeni-safak[.]com: Turkish newspaper
- Cyprus: cyprusnet[.]tk: A portal providing information on Cypriot businesses
- Austria: oiip[.]org: Austrian Institute for International Affairs
- Palestine: lwaeh-iteham-alasra[.]com: Website that publishes Israeli court indictments of Palestinian prisoners
- Saudi Arabia: mbsmetoo[.]com: Website for "an international campaign to support the case of Jamal Khashoggi" and other cases against Saudi Crown Prince Mohammed bin Salman
- Slovenia: total-slovenia-news[.]net: English-language Slovenian news site

Table 5: Some country themes observed in Candiru domains.

4. A Saudi-Linked Cluster?

A document uploaded from Iran to VirusTotal used an AutoOpen macro to launch a web browser and navigate it to the URL https://cuturl[.]space/lty7uw, which VirusTotal recorded as redirecting to https://useproof[.]cc/1tUAE7A2Jn8WMmq/api, a URL on a domain we linked to Candiru, useproof[.]cc. The domain useproof[.]cc pointed to 109.70.236[.]107, which matched our fingerprint CF3. The document was blank, except for a graphic containing the text "Minister of Foreign Affairs of the Islamic Republic of Iran."

Figure 7: A document that loads a Candiru URL was uploaded to VirusTotal from Iran, and includes a header image referencing the Minister of Foreign Affairs.

We fingerprinted the behaviour of cuturl[.]space and traced it to five other URL shorteners: llink[.]link, instagrarn[.]co, cuturl[.]app, url-tiny[.]co, and bitly[.]tel. Interestingly, several of these domains were flagged by a researcher at ThreatConnect in two tweets, based on suspicious characteristics of their registration. We suspect that the AutoOpen format and the URL shorteners may be unique to a particular Candiru client.

A Saudi Twitter user contacted us and reported that Saudi users active on Twitter were receiving messages with suspicious short URLs, including links to the domain name bitly[.]tel. Given this, we suspect that the URL shorteners may be linked to Saudi Arabia.

5.
Additional Corporate Details for Candiru

Ya'acov Weitzman (ויצמן יעקב) and Eran Shorer (שורר ערן) founded Candiru in 2014. Isaac Zack (זק יעקב), also reportedly an early investor in NSO Group, became the largest shareholder of Candiru less than two months after its founding and took a seat on its board of directors. In January 2019, Tomer Israeli (ישראלי תומר) first appeared in corporate records as Candiru's "director of finance," and Eitan Achlow (אחלאו איתן) was named CEO.

A number of independent investors appear to have funded Candiru's operations over the years. As of Candiru's notice of allotment of shares, filed in February 2021 with the Israeli Corporations Authority, Zack, Shorer, and Weitzman are still the largest shareholders. Three organizations are the next largest shareholders: Universal Motors Israel Ltd. (corporate registration 511809071), ESOP Management and Trust Services (איסופ שירותי ניהול, corporate registration 513699538), and Optas Industry Ltd.

ESOP (corporate registration no. 513699538) is an Israeli company that provides employee stock program administrative services to corporate clients. We do not know whether ESOP holds its stock in trust for certain Candiru employees. Optas Industry Ltd. is a Malta-based private equity firm (registration number C91267; shareholder Leonard Joseph O'Brien; directors O'Brien and Michael Ellul; incorporated 28 March 2019). It has been reported that for a decade O'Brien has served as head of investment and a board member of the Gulf Investment Fund, and that the sovereign Qatar Investment Authority has a 12% stake in the Gulf Investment Fund (through a subsidiary, Qatar Holding). The presence of Universal Motors Israel (company registration no. 511809071) as an investor, including a seat on Candiru's board, is curious considering that its primary business is the distribution of new and used automobiles.
Besides Amit Ron (רון עמית), the Universal Motors Israel representative, Candiru's board as of December 2020 includes Isaac Zack, Ya'acov Weitzman, and Eran Shorer. In addition to the involvement of Zack, Candiru shares other points of commonality with NSO Group, including representation by the same law firm and use of the same employee equity and trust administration services company.

6. Conclusion

Candiru's apparent widespread presence, and the use of its surveillance technology against global civil society, is a potent reminder that the mercenary spyware industry contains many players and is prone to widespread abuse. This case demonstrates, yet again, that in the absence of any international safeguards or strong government export controls, spyware vendors will sell to government clients who will routinely abuse their services.

Many governments that are eager to acquire sophisticated surveillance technologies lack robust safeguards over their domestic and foreign security agencies. Many are characterized by poor human rights track records. It is not surprising that, in the absence of strong legal restraints, these types of government clients will misuse spyware services to track journalists, political opposition, human rights defenders, and other members of global civil society.

Civil Society in the Crosshairs…Again

The apparent targeting of an individual because of political beliefs and activities that are neither terrorist nor criminal in nature is a troubling example of this dangerous situation.
Microsoft's independent analysis is also disconcerting: it identified at least 100 victims of Candiru's malware operations, including "politicians, human rights activists, journalists, academics, embassy workers and political dissidents." Equally disturbing in this regard is Candiru's registration of domains impersonating human rights NGOs (Amnesty International), legitimate social movements (Black Lives Matter), international health organizations (WHO), women's rights themes, and news organizations. Although we lack context around the specific use cases connected to these domains, their mere presence as part of Candiru's infrastructure, in light of the widespread harms against civil society associated with the global spyware industry, is highly concerning and an area that merits further investigation.

Rectifying Harms around the Commercial Spyware Market

Ultimately, tackling the malpractices of the spyware industry will require a robust, comprehensive approach that goes beyond efforts focused on a single high-profile company or country. Unfortunately, Israel's Ministry of Defense, from which Israeli-based companies like Candiru must receive an export license before selling abroad, has so far proven itself unwilling to subject surveillance companies to the type of rigorous scrutiny that would be required to prevent abuses of the sort we and other organizations have identified. The export licensing process in that country is almost entirely opaque, lacking even the most basic measures of public accountability or transparency.

It is our hope that reports such as this one will help spur policymakers and legislators in Israel and elsewhere to do more to prevent the mounting harms associated with an unregulated spyware marketplace. It is also worth noting the growing risks that spyware vendors and their ownership groups themselves face as a result of their own reckless sales.
Mercenary spyware vendors like Candiru market their services to government clients as "untraceable" tools that evade detection and thus prevent their clients' operations from being exposed. However, our research shows once again how specious these claims are. Although sometimes challenging, it is possible for researchers to detect and uncover targeted espionage using a variety of network monitoring and other investigative techniques, as we have demonstrated in this report (and others like it). Even the most well-resourced surveillance companies make operational mistakes and leave digital traces, making their marketing claims about being stealthy and undetectable highly questionable. To the extent that their products are implicated in significant harms or cases of unlawful targeting, the negative exposure that comes from public interest research may create significant liabilities for ownership, shareholders, and others associated with these spyware companies.

Finally, this case shows the value of a community-wide approach to investigations into targeted espionage. In order to remedy the harms generated by this industry for innocent members of global civil society, cooperation among academic researchers, network defenders, threat intelligence teams, and technology platforms is critical. Our research drew upon multiple data sources curated by other groups and entities with whom we cooperated, and ultimately helped identify software vulnerabilities in a widely used product that were reported to, and then patched by, its vendor.

Acknowledgements

Thanks to Microsoft and the Microsoft Threat Intelligence Center (MSTIC) for their collaboration, and for working to quickly address the security issues identified through their research. We are especially grateful to the targets who chose to work with us to help identify and expose the entities involved in targeting them. Without their participation, this report would not have been possible.
Thanks to Team Cymru for providing access to their Pure Signal Recon product. Their tool's ability to show Internet traffic telemetry from the past three months provided the breakthrough we needed to identify the initial victim from Candiru's infrastructure.

Funding for this project was provided by generous grants from the John D. and Catherine T. MacArthur Foundation, the Ford Foundation, the Oak Foundation, the Sigrid Rausing Trust, and the Open Society Foundations. Thanks to Miles Kenyon, Mari Zhou, and Adam Senft for communications, graphics, and organizational support.

Sursa: https://citizenlab.ca/2021/07/hooking-candiru-another-mercenary-spyware-vendor-comes-into-focus/
-
In other words, you'd be "attacking" the few people who actually work at state institutions. If you want to do something genuinely useful, in my opinion, build a crawler that pulls the documents from SEAP - Sistemul Electronic de Achizitii Publice (the Electronic Public Procurement System) and shows roughly where the money we pay to the state ends up.
-
Say hi to Microsoft's own Linux: CBL-Mariner

Microsoft has its own Linux distribution and, yes, you can download, install and run it. In fact, you may want to do just that.

By Steven J. Vaughan-Nichols for Linux and Open Source | July 16, 2021 -- 12:27 GMT (13:27 BST) | Topic: Edge Computing

Ok, so it's not named MS-Linux or Lindows, but Microsoft now has its very own, honest-to-goodness general-purpose Linux distribution: Common Base Linux (CBL)-Mariner. And, just like any Linux distro, you can download it and run it yourself. Amazing, isn't it? Why, the next thing you know, Microsoft will let you run Windows applications on Linux! Oh, wait, it has! One more time with feeling, listen to yours truly and Linus Torvalds: Microsoft is no longer Linux's enemy. The enemy of AWS and Google? You bet. But Linux? No.

Take, for example, CBL-Mariner. Microsoft didn't make a big fuss about releasing CBL-Mariner. It quietly released the code on GitHub and anyone can use it. Indeed, Juan Manuel Rey, a Microsoft Senior Program Manager for Azure VMware, recently published a guide on how to build an ISO CBL-Mariner image. Before this, if you were a Linux expert, with a spot of work you could run it, but now, thanks to Rey, anyone with a bit of Linux skill can do it.

CBL-Mariner is not a Linux desktop. Like Azure Sphere, Microsoft's first specialized Linux distro, which is used for securing edge computing services, it's a server-side Linux. This Microsoft-branded Linux is an internal Linux distribution, meant for Microsoft's cloud infrastructure and edge products and services. Its main job is to provide a consistent Linux platform for these devices and services. Just as Fedora does for Red Hat, it keeps Microsoft on Linux's cutting edge.

CBL-Mariner is built around the idea that you only need a small common core set of packages to address the needs of cloud and edge services. If you need more, CBL-Mariner also makes it easy to layer additional packages on top of its common core.
Once that's done, its simple build system easily enables you to create RPM packages from SPEC and source files. Or you can use it to create ISOs or virtual hard disk (VHD) images.

As you'd expect, the basic CBL-Mariner is a very lightweight Linux. You can use it as a container or a container host. With its limited size also comes a minimal attack surface. This also makes it easy to deploy security patches to it via RPM. Its designers make a particular point of delivering the latest security patches and fixes to its users. For more about its security features, see CBL-Mariner's GitHub security features list.

Like any other Linux distro, CBL-Mariner is built on the shoulders of giants. Microsoft credits VMware's Photon OS Project, a secure Linux; The Fedora Project; Linux From Scratch, a guide to building Linux from source; the OpenMamba distro; and, yes, even GNU and the Free Software Foundation (FSF). I know it galls some of you that Microsoft acknowledges the FSF, but this is not the '90s, and Steve "Linux is a cancer" Ballmer hasn't been Microsoft's CEO since 2014.

To try it for yourself, you'll build it on Ubuntu 18.04. Frankly, I'd be surprised if you couldn't build it on any Ubuntu Linux distro from 18.04 on up. I did it on my Ubuntu 20.04.2 desktop. You'll also need the latest version of the Go language and Docker. Even though the default build system is Ubuntu, CBL-Mariner itself owes a large debt to Fedora. For example, it uses Tiny DNF as its DNF RPM package manager. For its atomic, image-based update mechanism, it uses RPM-OSTree.

So, if you want a secure, stable Linux for your edge computing or container needs, I suggest -- in all seriousness -- you give CBL-Mariner a try. While I continue to have my doubts about Windows as a serious operating system, Microsoft did a fine job of creating a solid Linux. Who would have guessed!

Sursa: https://www.zdnet.com/article/say-hi-to-microsofts-own-linux-cbl-mariner/
-
- 1
-
-
Google patches 8th Chrome zero-day exploited in the wild this year

By Sergiu Gatlan
July 16, 2021

Google has released Chrome 91.0.4472.164 for Windows, Mac, and Linux to fix seven security vulnerabilities, one of them a high-severity zero-day vulnerability exploited in the wild. "Google is aware of reports that an exploit for CVE-2021-30563 exists in the wild," the company revealed.

The new Chrome release has started rolling out worldwide to the Stable desktop channel and will become available to all users over the following days. Google Chrome will automatically update itself on the next launch, but you can also manually update it by checking for the newly released version from Settings > Help > 'About Google Chrome.'

Eighth exploited zero-day patched this year

The zero-day patched on Thursday and reported by Google Project Zero's Sergei Glazunov is described as a type confusion bug in V8, Google's open-source, high-performance C++ JavaScript and WebAssembly engine. While type confusion weaknesses generally lead to browser crashes after successful exploitation, by reading or writing memory beyond the bounds of a buffer, they can also be exploited by threat actors to execute arbitrary code on devices running vulnerable software.

While Google said it is aware of in-the-wild exploitation of CVE-2021-30563, it did not share information about these attacks, to allow the security update to deploy on as many systems as possible before more threat actors start actively abusing the flaw. "Access to bug details and links may be kept restricted until a majority of users are updated with a fix," Google said. "We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven't yet fixed."

In all, Google has patched eight Chrome zero-day bugs exploited by attackers in the wild since the start of 2021.
Besides CVE-2021-30563, the company previously addressed:

- CVE-2021-21148 - February 4th, 2021
- CVE-2021-21166 - March 2nd, 2021
- CVE-2021-21193 - March 12th, 2021
- CVE-2021-21220 - April 13th, 2021
- CVE-2021-21224 - April 20th, 2021
- CVE-2021-30551 - June 9th, 2021
- CVE-2021-30554 - June 17th, 2021

More details on previously patched Chrome zero-days

The Google Threat Analysis Group (TAG) shared additional details earlier this week regarding the in-the-wild exploitation of the CVE-2021-21166 and CVE-2021-30551 Chrome zero-days. "Based on our analysis, we assess that the Chrome and Internet Explorer exploits described here were developed and sold by the same vendor providing surveillance capabilities to customers around the world," Google said.

On Thursday, Microsoft and Citizen Lab linked the vendor mentioned in Google TAG's report to Israeli spyware vendor Candiru. Threat actors deployed the surveillance vendor's spyware to infect iOS, Android, macOS, and Windows devices using Chrome zero-days and unpatched Windows flaws.

Microsoft researchers found that Candiru's malware was used to compromise the systems of "politicians, human rights activists, journalists, academics, embassy workers, and political dissidents." In all, Microsoft said it discovered "at least 100 victims in Palestine, Israel, Iran, Lebanon, Yemen, Spain, United Kingdom, Turkey, Armenia, and Singapore."

Sursa: https://www.bleepingcomputer.com/news/security/google-patches-8th-chrome-zero-day-exploited-in-the-wild-this-year/
-
- 1
-
-
Microsoft: Several governments used spyware made in Israel. Targets included politicians, dissidents, and journalists

16.07.2021 11:03

Spyware made by an Israeli company was used by several governments to monitor politicians, dissidents, journalists, and human rights activists. Photo: Profimedia Images

Several governments have used surveillance tools developed by an Israeli company to monitor political figures, dissidents, journalists, academics, and human rights activists, according to experts at Microsoft and Citizen Lab, quoted Thursday by AFP. These powerful "weapons" were used against more than 100 people around the world, say a Microsoft security official and Citizen Lab, an organization based at the University of Toronto. Microsoft states that it has modified its Windows operating system to fix the flaws exploited by the Israeli group, according to Agerpres.

The company, headquartered in Tel Aviv, is notably secretive and sells exclusively to governments spyware that can infect smartphones, computers, and cloud computing services (the on-demand delivery of hardware and software resources over the Internet), according to Citizen Lab. Its official name is currently Saito Tech Ltd, but it is better known as Candiru.

Citizen Lab researchers found evidence that the spyware managed to extract information from several applications used by victims, including Gmail, Skype, Telegram, and Facebook. The software can also access targets' Internet browsing history and passwords, and can activate the camera and microphone of their devices. For its part, Microsoft notes that it identified victims of this software in the Palestinian territories, Israel, Lebanon, Yemen, Spain, the United Kingdom, Turkey, Armenia, and Singapore.
According to the IT giant, which named the spyware "DevilsTongue", it managed to infiltrate popular sites such as Facebook, Twitter, Gmail and Yahoo to collect information, read victims' messages and retrieve photos. The software was also able to send messages on behalf of the victims. The American company created "protections" to defend its products against incursions by this software, developed by the Israeli group it calls Sourgum. "We have shared these protections with the security community so that we can collectively counter and mitigate this threat," Microsoft said.

Editor: A.C.

Sursa: https://www.digi24.ro/stiri/externe/microsoft-mai-multe-guverne-au-folosit-un-program-de-spionaj-facut-in-israel-printre-tinte-au-fost-politicieni-disidenti-jurnalisti-1600429
-
Hi, I have no details about firms like that. I would expect you to actually learn things there; I do not know how much the diploma you get matters, but from what I remember the courses stretch over a huge amount of time. If a course takes 6 months, for example... in 6 months you can learn to design rockets (unless you only put in 2 hours a week).
-
Critical SQL Injection Vulnerability Patched in WooCommerce
Nytro posted a topic in Stiri securitate
This entry was posted in Vulnerabilities, WordPress Security on July 15, 2021 by Ram Gall. 3 Replies

Update: The article originally credited Tommy DeVoss (dawgyg) for the discovery. We've since been contacted by Tommy, who let us know that the credit should go to another researcher, Josh from DOS (Development Operations Security).

On July 14, 2021, WooCommerce released an emergency patch for a SQL Injection vulnerability reported by a security researcher, Josh from DOS (Development Operations Security), based in Richmond, Virginia. This vulnerability allowed unauthenticated attackers to access arbitrary data in an online store's database.

WooCommerce is the leading e-Commerce platform for WordPress and is installed on over 5 million websites. Additionally, the WooCommerce Blocks feature plugin, installed on over 200,000 sites, was affected by the vulnerability and was patched at the same time.

The Wordfence Threat Intelligence team was able to develop proofs of concept for time-based and boolean-based blind injections and released an initial firewall rule to our Premium customers within hours of the patch. We released an additional firewall rule to cover a separate variant of the same attack the next day, on July 15, 2021. Sites still running the free version of Wordfence will receive the same protection after 30 days, on August 13 and August 14, 2021.

We strongly recommend updating to a patched version of WooCommerce immediately if you have not been updated automatically, as this will provide the best possible protection. The vulnerability affects versions 3.3 to 5.5 of the WooCommerce plugin and versions 2.5 to 5.5 of the WooCommerce Blocks plugin.
WooCommerce Responded Immediately

In the announcement by WooCommerce, Beau Lebens, the Head of Engineering for WooCommerce, stated: "Upon learning about the issue, our team immediately conducted a thorough investigation, audited all related codebases, and created a patch fix for every impacted version (90+ releases) which was deployed automatically to vulnerable stores."

Due to the critical nature of the vulnerability, the WordPress.org team is pushing forced automatic updates to vulnerable WordPress installations using these plugins. Store owners using older versions can update to the latest version in their branch. For example, if your storefront is using WooCommerce version 5.3, you can update to version 5.3.1 to minimize the risk of compatibility issues. Within the security announcement from WooCommerce, there is a table detailing the 90 patched versions of WooCommerce. Additionally, WooCommerce has a helpful guide for WooCommerce updates.

Has This Been Exploited in the Wild?

While the original researcher has indicated that this vulnerability has been exploited in the wild, Wordfence Threat Intelligence has found extremely limited evidence of these attempts, and it is likely that such attempts were highly targeted. If you think you have been exploited due to this vulnerability, the WooCommerce team is recommending administrative password resets after updating to provide additional protection.

If you do believe that your site may have been affected, a review of your log files may show indications. Look for a large number of repeated requests to /wp-json/wc/store/products/collection-data or ?rest_route=/wc/store/products/collection-data in your log files. Query strings which include %2525 are an indicator that this vulnerability may have been exploited on your site.
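The log review described above can be scripted. Below is a minimal sketch (not Wordfence's tooling; the function name and the assumed combined-log layout, with the client IP as the first field, are illustrative) that counts hits on the vulnerable REST route per client IP and flags lines carrying the %2525 indicator:

```python
import re
from collections import Counter

# Indicators from the advisory: repeated requests to the
# products/collection-data REST route, and query strings containing
# the double-URL-encoded sequence %2525.
ROUTE_RE = re.compile(
    r"/wp-json/wc/store/products/collection-data"
    r"|\?rest_route=/wc/store/products/collection-data"
)
INDICATOR_RE = re.compile(r"%2525", re.IGNORECASE)

def scan_access_log(lines):
    """Count route hits per client IP (first field of a combined-log
    line) and collect lines carrying the %2525 indicator."""
    hits = Counter()
    suspicious = []
    for line in lines:
        if ROUTE_RE.search(line):
            hits[line.split(" ", 1)[0]] += 1
            if INDICATOR_RE.search(line):
                suspicious.append(line.rstrip())
    return hits, suspicious
```

A flood of route hits from a single IP, or any line landing in `suspicious`, would justify the password resets recommended above.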
Improving Security of the WordPress Ecosystem

Sites with e-Commerce functionality are a high-value target for many attackers, so it is critical that vulnerabilities in e-Commerce platforms are addressed promptly to minimize the potential damage that can be caused. With the growth of both WordPress and WooCommerce, more security researchers have turned their attention to WordPress-related products. The rapid and deep response that the WooCommerce team performed in protecting WooCommerce users is a great sign for the ongoing security of e-Commerce in the open source WordPress ecosystem.

Sursa: https://www.wordfence.com/blog/2021/07/critical-sql-injection-vulnerability-patched-in-woocommerce/
-
#CHATCONTROL: EU PARLIAMENT APPROVES MASS SURVEILLANCE OF PRIVATE COMMUNICATIONS
JULY 6, 2021

Brussels, 06/07/2021 – Today, the European Parliament approved the ePrivacy Derogation, allowing providers of e-mail and messaging services to automatically search all personal messages of each citizen for presumed suspect content and report suspected cases to the police. The European Pirates Delegation in the Greens/EFA group strongly condemns this automated mass surveillance, which effectively means the end of privacy in digital correspondence. Pirate Party MEPs plan to take legal action.

In today's vote, 537 Members of the European Parliament approved Chatcontrol, with 133 voting against and 20 abstentions.[1] According to police data, in the vast majority of cases, innocent citizens come under suspicion of having committed an offence due to unreliable processes. In a recent representative poll, 72% of EU citizens opposed general monitoring of their messages.[2] While providers will initially have a choice to search or not to search communications, follow-up legislation, expected in autumn, is to oblige all communications service providers to screen communications indiscriminately.

Breyer: "This harms children rather than protecting them"

German Pirate Party Member of the European Parliament Patrick Breyer, shadow rapporteur on the legislative proposal, comments: "The adoption of the first ever EU regulation on mass surveillance is a sad day for all those who rely on free and confidential communications and advice, including abuse victims and press sources. The regulation deals a death blow to the confidentiality of digital correspondence. It is a general breach of the dam to permit indiscriminate surveillance of private spaces by corporations – by this totalitarian logic, our post, our smartphones or our bedrooms could also be generally monitored. Unleashing such denunciation machines on us is ineffective, illegal and irresponsible.

Indiscriminate searches will not protect children and even endanger them by exposing their private photos to unknown persons, and by criminalising children themselves. Already overburdened investigators are kept busy with having to sort out thousands of criminally irrelevant messages. The victims of such a terrible crime as child sexual abuse deserve measures that prevent abuse in the first place. The right approach would be, for example, to intensify undercover investigations into child porn rings and to reduce the years-long processing backlogs in searches and evaluations of seized data."

Marcel Kolaja, Czech Pirate Party MEP and Vice-President of the European Parliament, comments: "Post officers also do not open your private letters to see if you're sending anything objectionable. The same rule should apply online. However, what this exception will do is irrevocable damage to our fundamental right to privacy. Moreover, monitoring across large platforms will only lead to criminals moving to platforms where chat control will be technically impossible. As a result, innocent people will be snooped on daily while tracking down criminals will fail."

Pirates plan legal action against the regulation

The EU's plans for chat control have been confirmed to violate fundamental rights by a former judge of the European Court of Justice.[3] Patrick Breyer plans to take legal action against the regulation and is looking for victims of abuse who would file such a complaint. "Abuse victims are particularly harmed by this mass surveillance," explains Breyer. "To be able to speak freely about the abuse they have suffered and to seek help in a safe space is critical for victims of sexualised violence, who depend on the possibility to communicate safely and confidentially.

These safe spaces are now being taken away from them, which will prevent victims from seeking help and support."

The European Commission has already announced a follow-up regulation to make chat control mandatory for all email and messaging providers. Previously secure end-to-end encrypted messenger services such as Whatsapp or Signal would be forced to install a backdoor. There is considerable backlash against these plans: a public consultation carried out by the EU Commission revealed that 51% of all respondents oppose chat control for e-mail and messaging providers. 80% of respondents do not want chat control to be applied to encrypted messages.[4] Due to the resistance, EU Commissioner for Home Affairs Ylva Johansson has postponed the proposal until September 2021.

More Information on Chatcontrol: www.chatcontrol.eu

[1]
[2] https://www.patrick-breyer.de/en/poll-72-of-citizens-oppose-eu-plans-to-search-all-private-messages-for-allegedly-illegal-material-and-report-to-the-police/
[3] https://www.patrick-breyer.de/wp-content/uploads/2021/03/Legal-Opinion-Screening-for-child-pornography-2021-03-04.pdf
[4] https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12726-Child-sexual-abuse-online-detection-removal-and-reporting-/public-consultation_de

Sursa: https://european-pirateparty.eu/parliament-approves-chatcontrol/
-
It looks like there are a lot of requests to port 27016 containing "....TSource Engine Query..". I picked 3 random IPs and it seems a single such packet arrives from each IP, not more. My guess is the source IPs are spoofed, but it is possible the packet itself is legitimate and needed. A quick test would be to block these packets, BUT something could break (or everything could, and the server stops responding).
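Those "....TSource Engine Query.." payloads match the Source engine A2S_INFO server probe, whose request begins with four 0xFF header bytes followed by the string "TSource Engine Query" and a terminating NUL. A minimal Python sketch (illustrative only, not a tested mitigation) for classifying a captured UDP payload before deciding whether to filter such traffic:

```python
# Standard Source engine A2S_INFO request: 4 x 0xFF header bytes,
# then "TSource Engine Query" and a terminating NUL byte.
A2S_INFO = b"\xff\xff\xff\xffTSource Engine Query\x00"

def is_a2s_info(payload: bytes) -> bool:
    """Return True if a raw UDP payload is an A2S_INFO server query."""
    return payload.startswith(A2S_INFO)
```

Since legitimate clients send the same query when browsing servers, rate-limiting these packets is usually safer than dropping them outright.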
-
A theoretical solution, though it would take some time:

1. Monitor performance and catch an attack while it is happening
2. Start tcpdump and capture packets for a set period, say 2-5 minutes
3. Analyze the capture and see which packets are flooding in
4. Block them (but check that nothing breaks)

Another possible approach is logs from the CS servers, if they exist. If you can switch them to a more "verbose" mode, even better. Maybe certain entries will show up in large numbers.

Have you tried SYN cookies? https://tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/chap5sec56.html

Actually, as far as I know, CS uses UDP, so SYN cookies will not help. It also means the DDoS can in theory be "amplified" through various vulnerable servers on the internet, even though the packets themselves are not valid. Better make a tcpdump capture and, if you think it contains nothing "sensitive", share it with us to look over. No guarantee we will find anything, but we can try.

PS: If CS runs only on port 27001 you could capture just the traffic on that port, but it is worth looking at the others too; who knows what else is in there.
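Step 3 above (seeing which packets flood in) can be sketched as a quick "top talkers" count over the text output of `tcpdump -nn`. A minimal Python example; the function name and the sample IPv4 line format are assumptions, not taken from an actual capture:

```python
import re
from collections import Counter

# Matches IPv4 lines printed by `tcpdump -nn`, e.g.
# "12:00:01.000 IP 198.51.100.7.40000 > 203.0.113.1.27016: UDP, length 25"
LINE_RE = re.compile(r"IP (\d+\.\d+\.\d+\.\d+)\.\d+ > \d+\.\d+\.\d+\.\d+\.(\d+)")

def top_sources(dump_lines, n=10):
    """Return the n most frequent (source IP, packet count) pairs."""
    counts = Counter()
    for line in dump_lines:
        m = LINE_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common(n)
```

If the top entries each account for only one packet, that supports the spoofed-source theory; a handful of heavy hitters would instead point at a few hosts worth blocking.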
-
Hi, have you tried talking to Voxility? They may already know what this is and how to stop it. If not, you will have to figure out how that junk works, so you know what to block and how; a few iptables rules should probably be enough. I would not expect it to be anything particularly sophisticated.
-
I am not a hardware person, but when I compared 2 CPUs I used this: https://cpu.userbenchmark.com/Compare/Intel-Core-i7-3610QM-vs-Intel-Core-i7-2600/2730vs620 and I went by the "Speed rank". A real comparison, though, depends on many factors, and in a laptop more than just the CPU matters.