Everything posted by Aerosol
-
@AdIntended you made my day, why js? Personally I'd say C#. On:// Nobody has anything to say about ASM?
-
SubDomain Scanner v1.0 - Pastebin.com You didn't post the source, and why did you change it to RST? Respect the author. Read the rules: [code]2. Give credit and provide the original sources. If you copy a link or a tutorial from another site/forum/blog, give credit to the original author.[/code] https://rstforums.com/forum/59818-regulamentul-forumului-ro.rst
-
In the first part of this series, we covered the top 5 OWASP ProActive Controls and learned how they can prove to be of great use in securing applications. In this part, we will look at the last 5 OWASP ProActive Controls and learn more about them.

Protect Data and Privacy

This control helps to protect our data inside a database. Sensitive data like passwords, credit card details, bank account details, etc. should be stored in encrypted or hashed format inside a database or other chosen data storage. One should not use encryption and hashing interchangeably, as they are entirely different from each other. Encryption is used to convert readable text (plain text) into unreadable text (cipher text). Encryption is a two-way data conversion technique, meaning data which is encrypted can also be decrypted (if you have the decryption key). Encryption can be done in two main ways: the symmetric method and the asymmetric method. Symmetric encryption, or Secret Key Cryptography (SKC), uses a single secret key for encryption and decryption; the receiver uses the same key that was used for encryption to decrypt. The asymmetric method, or Public Key Cryptography (PKC), uses a pair of keys to perform encryption and decryption: a public key and a private key. The public key is used for data encryption and the private key is used for data decryption. Depending upon the application requirement, developers can choose between the two encryption methods. Hashing is different from encryption; unlike encryption, it is a one-way process, meaning data that has been converted into hashed format can never be converted back into plain text. An application cannot choose hashing or encryption arbitrarily: the secure storage technique is chosen depending upon the data that has to be stored securely. If at some time in the future the sensitive data is to be shown to the user in plaintext, then encryption is the right option (plaintext <-> ciphertext). If the sensitive data is to be stored only for validation, authentication or verification, then hashing should be used (plaintext -> hash). For example, sensitive information in transit between the client and server should also be in encrypted form. Hyper Text Transfer Protocol Secure (HTTPS) should be used instead of Hyper Text Transfer Protocol (HTTP) whenever any sensitive information is to be transmitted. When HTTPS is used, client-server communication is encrypted using a supported technology such as SSLv2, SSLv3, TLS 1.0 or TLS 1.2. It is especially used to protect highly confidential data like online banking. The port number for HTTP is 80 and for HTTPS it is 443.

Implement Logging and Intrusion Detection

In an application, most requests are received using the GET, POST, PUT and DELETE methods. A request can be either malicious or clean. Malicious requests are those which contain attack vectors like SQL injection, XSS, unauthorized data access, etc. Whether there is public user activity or intranet employee access, the application should always keep track of all the activities taking place. Logging is very important in every application, and it is one of the areas most neglected during development and deployment. Logging means storing log data about every request that is sent and received, such as the time, IP address, requested page, and GET and POST data of a request. If a user is authenticated, the log should also record who the user is and when he logged in and out.
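To make the idea concrete, here is a minimal Java sketch of the kind of request metadata worth recording, using the standard java.util.logging API inside a servlet environment; the logger name and helper method are hypothetical and only illustrate the principle.

import java.util.logging.Logger;
import javax.servlet.http.HttpServletRequest;

public class RequestAuditLogger {

    private static final Logger AUDIT = Logger.getLogger("audit");

    // Record when, from where and by whom a request was made -- but never passwords or card numbers.
    public static void logRequest(HttpServletRequest request, String authenticatedUser) {
        AUDIT.info(String.format("time=%d ip=%s method=%s uri=%s user=%s",
                System.currentTimeMillis(),
                request.getRemoteAddr(),
                request.getMethod(),
                request.getRequestURI(),
                authenticatedUser == null ? "anonymous" : authenticatedUser));
    }
}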
Since all user activity is being logged, it should also be noted that sensitive user data like passwords and financial details should NEVER be logged. Intrusion detection means determining whether a malicious request containing an attack vector has been received by the application. If such a request has been received, then suitable actions like logging the event and dropping the request should be performed. For example, if a SQL injection vulnerability exists on a login page, the application should have a feature to detect when SQL injection is attempted, log the time and the IP address the attack originated from, and then perform a suitable action on it. ModSecurity and the OWASP ModSecurity Core Rule Set Project can prove to be of great use when you want to detect and/or prevent any malicious activity. Logging and intrusion detection are necessary to keep a record of every activity that takes place in an application. Intrusion detection is implemented along with logging to keep a check on when an attack or malicious data is received, so that it can be handled properly.

Leverage Security Features of Frameworks and Security Libraries

When developers start developing an application, they often either don't implement secure coding practices or they rely on third-party libraries for implementing security features. But most programming languages and development frameworks have built-in security functions and libraries which can be leveraged to implement security features in applications. Developers should use those built-in features instead of third-party libraries. Recall OWASP Top 10 vulnerability "A9 - Using Components with Known Vulnerabilities": if third-party components or libraries are used and a vulnerability is discovered in those components, then our application automatically becomes vulnerable. It is recommended that developers use security features provided by the platform, like escapeHtml() of StringEscapeUtils provided by Apache Commons Lang in Java and htmlentities() in PHP, which can be used to mitigate Cross-Site Scripting (XSS) vulnerabilities. However, industry-tested security features are not always readily available in a programming language. In such a case, where useful and required security features or libraries are not available in the programming language you are using, industry trusted and tested security libraries should be used. One of the well-known OWASP projects for this purpose is the OWASP ESAPI Project, which helps developers implement security controls in their applications. For example, in Java we have security functions like escapeHtml() which can be used to mitigate XSS: String name = StringEscapeUtils.escapeHtml(request.getParameter("name")); PreparedStatement is used to mitigate SQL injection: PreparedStatement ps = (PreparedStatement) con.prepareStatement("select * from users where username=? and password=? limit 0,1"); Using built-in security features ensures that you don't have to use unnecessary libraries you are not confident in or have not security tested.

Include Security-Specific Requirements

When development of a software product or web application is about to start, software requirements are laid out; this takes place in the early stage of the SDLC. As software requirements are specified at the beginning of any project, security requirements should also be specified. Security requirements, if made part of the SDLC, can help in implementing security inside the application and also in identifying the key areas which can be exploited.
According to the OWASP ProActive Controls, three kinds of security requirements are important: security features and functions; business logic abuse cases; and data classification and privacy requirements.

Security features and functions

All security details, such as application features, modules, database details, the functioning of modules and the security implementation in modules, should be specified for an application. It should be defined that all secure coding practices are to be implemented in the application at the time of development.

Business logic abuse cases

When any application is designed, there is a defined way to access data and to perform operations. For example, when a user is performing an online banking transaction, some details are required within a well-defined process:

1. Log in to the bank account.
2. Choose the account to transfer from.
3. Choose the amount and the destination account to transfer to.
4. Enter the profile password.
5. Enter the OTP password received on the registered phone number.
6. Confirm the transaction.
7. Wait for the success message.

All these steps define a data flow diagram or business logic. These details can have weaknesses which make them vulnerable. Once the business logic has been listed, key areas of weakness can be identified, as can the areas where security can be beefed up. For example: the user should not be able to choose someone else's bank account as the source account of a transfer; the user should not be able to bypass the profile password requirement; the OTP should be valid only once and for that account only.

Data classification and privacy requirements

Data classification and privacy requirements should be decided at the time of development. When any application interacts with the user, user data is received and stored. The answers to these questions should be decided in advance: Which data is to be accepted from the user? Is that data sensitive or not? Is that data to be stored? If the data is sensitive, should it be stored in encrypted or hashed format? If bank details are stored, then those details should be verified and validated by the application. Data authorization should also be decided at an initial stage: who can access, delete and modify data. Since the application will be dealing with users and operations on user data, it is critical to maintain logs for all activities. Logging of activity was discussed above in the "Implement Logging and Intrusion Detection" section.

Security Design and Architecture

In ProActive Controls one through nine, we saw how to implement security in our code, which areas to secure, how to secure them and which components can be used to help you implement better security in your application. In the last ProActive Control, we discuss the other areas of application security which can prove to be of great use and should not be neglected. OWASP has defined three key areas to take care of when developing any application: Know Your Tools; Tiering, Trust and Dependencies; Manage the Attack Surface.

Know Your Tools

Every application is built using some server-side language, client-side language, a database or no database, etc. Each component used could be the source of a security vulnerability in your application and server. For example, using an outdated version of the Struts framework can let a user exploit remote code execution on it, and an older version of PHP can lead to the same consequence. The same applies to databases and every other component which is used to build an application.
So before starting any application development, it should be made clear which components can or may lead to a vulnerable application now or in the near future.

Tiering, Trust and Dependencies

Each layer of the whole application is called a tier. With each tier there is an associated level of risk and vulnerabilities that can creep in. For every tier, be it client side, server side, database or anything else, the associated risk should be assessed and the necessary mitigations should be implemented. When an application is interacting with user input and user data, trust is the factor which decides which operation should be performed, when to perform it, and on what to perform it. An authentication page that is not implemented properly will have a poor trust level and will allow malicious users to access other users' data. In the worst case, it will result in a user transferring funds or accessing confidential company data without proper authorization. Application development involves using several components together and making sure that each component works with the others. This is the case of dependency, where component X depends upon component Y for its proper functioning. It is very common to use older components to maintain reliability and proper functioning, but each dependency should be thoroughly checked, or else it can create an unwanted weakness inside the application.

Manage the Attack Surface

The attack surface is the whole combined application, including software, hardware, logic, client controls and server controls. Everything from the physical to the digital to the logical makes up the attack surface. Any part of the setup, if and when found to be vulnerable, can act as an open entry gate for a malicious user. Developers are usually not concerned about the web server software version the application will be deployed on, but older server software like Apache, or an older framework like Struts, can let an attacker successfully exploit it and work his or her way into the application and user data.

Conclusion

From the OWASP ProActive Controls we learned how an application can be secured and how to identify the key areas of every application that together help in strengthening our application and stored data. The OWASP ProActive Controls are a good place to start training developers to implement secure coding practices and to beef up the security of key areas of an application like authentication, authorization, and user data access and storage. But the ProActive Controls should not be looked upon as the only set of controls for application security. They are a good place to start developing skills and knowledge, leading to continuous learning and habitual secure coding practices.

Reference
https://www.owasp.org/index.php/OWASP_Proactive_Controls
Source
-
What is OWASP ProActive Controls? In one line, this project can be explained as "Secure Coding Practices by Developers for Developers". OWASP ProActive Controls is a document prepared for developers who are developing software, or are new to developing software, with secure software development in mind. This OWASP project lists 10 controls that can help a developer implement secure coding and better security inside the application while it is being developed. Following these secure application development controls ensures that the key areas of the development cycle have secure coding along with traditional coding practices. The strength of this project is not just in the listed 10 controls but in the key references associated with them. Every control extends the knowledge and capabilities by mentioning existing OWASP or other open source projects that can be used to strengthen the security of an application. The ten controls defined by this project are:

1. Parameterize Queries
2. Encode Data
3. Validate All Inputs
4. Implement Appropriate Access Controls
5. Establish Identity and Access Controls
6. Protect Data and Privacy
7. Implement Logging, Error Handling and Intrusion Detection
8. Leverage Security Features of Frameworks and Security Libraries
9. Include Security-Specific Requirements
10. Design and Architect Security In

Let us go deeper into each ProActive Control and see what it takes for us to implement it in the real world.

PARAMETERIZE QUERIES

One of the most dangerous attacks on a web application and its backend data storage is SQL injection. It occurs when a user sends malicious data to an interpreter as part of an SQL query, which then manipulates the backend SQL statement. It is easy for an attacker to find a SQLi vulnerability using automated tools like SQLMap or by manual testing. The simplest and most popular attack vector used is: 1' or '1'='1 Submitting it as a username and password, or in any other field, can lead to an authentication bypass in many cases. Here is an example of typical SQL injection in a user authentication module:

String username = request.getParameter("username");
String password = request.getParameter("password");
Class.forName("com.mysql.jdbc.Driver");
Connection con = (Connection) DriverManager.getConnection("jdbc:mysql://database-server:3306/securitydb", "root", "root");
Statement st = con.createStatement();
ResultSet rs = st.executeQuery("select * from users where username='"+username+"' and password='"+password+"' limit 0,1");

In this vulnerable code, the Statement class is used to create a SQL statement, and at the same time the statement is modified by directly adding user input to it; it is then executed to fetch results from the database. Performing a simple SQLi attack in the username field will manipulate the SQL query, and an authentication bypass can take place. To stop a SQLi vulnerability, developers must prevent untrusted input from being interpreted as part of a SQL query, so that an attacker is not able to manipulate the SQL logic implemented on the server side. The OWASP ProActive Controls recommend that developers use parameterized queries, in combination with input validation, when dealing with database operations.
Here is an example of SQL query parameterization:

String username = request.getParameter("username");
String password = request.getParameter("password");
Class.forName("com.mysql.jdbc.Driver");
Connection con = (Connection) DriverManager.getConnection("jdbc:mysql://database-server:3306/securitydb", "root", "root");
PreparedStatement ps = (PreparedStatement) con.prepareStatement("select * from users where username=? and password=? limit 0,1");
ps.setString(1, username);
ps.setString(2, password);
ResultSet rs = ps.executeQuery();
if(rs.next())
out.println("Login success");
else
out.println("Login failed");

Using a parameterized query makes sure that the SQL logic is defined first and locked. The user input is then added where it is needed, but it is treated as a whole value of a particular data type (string, integer, etc.). With a parameterized query behind a database operation, an attacker has no way to manipulate the SQL logic, so there is no SQL injection and no database compromise. SQL injection vulnerabilities have been found and exploited in the applications of very popular vendors like Yahoo! too.

ENCODE DATA

Data encoding helps to protect a user from different types of attacks like injection and XSS. Cross-Site Scripting (XSS) is the most popular and common vulnerability in web applications, from the smallest to the biggest vendors with a web presence or in their products. Web applications take user input and use it for further processing and for storing in the database whenever needed. User input can also become part of the HTTP response sent back to the user. Developers should always treat user input data as untrusted. If user input will at any point in time be part of the response to the user, then it should be encoded. If proper output encoding has been implemented, then even if malicious input was sent, it will not be executed and will be shown as plain text on the client side. This helps to solve a major web application vulnerability like XSS. Here is an example of an XSS vulnerability:

if(request.getMethod().equalsIgnoreCase("post"))
{
String name = request.getParameter("name");
if(!name.isEmpty())
{
out.println("<br>Hi "+name+". How are you?");
}
}

In the above code, user input is not filtered and is used as part of the message displayed to the user without implementing any sort of output encoding. The most common XSS vulnerabilities that affect users and are found in applications are of two types: stored XSS and reflected XSS. Stored XSS is XSS that gets stored on a server, for example in a SQL database. Some part of the application fetches that information from the database and sends it to the user without properly encoding it, which leads to malicious code being executed by the browser on the client side. Stored XSS can be carried out in public forums to conduct mass user exploitation. In reflected XSS, the XSS script does not get stored on the server but is still executed by the browser. These attacks are delivered to victims via common communication mediums like e-mail or some other public website. By converting input data into its encoded form, this problem can be solved, and client-side code execution can be prevented. Here is an example of output encoding of user input:

if(request.getMethod().equalsIgnoreCase("post"))
{
String name = StringEscapeUtils.escapeHtml(request.getParameter("name"));
if(!name.isEmpty())
{
out.println("<br>Hi "+name+". How are you?");
}
}

In the next section you will see how input validation can secure an application.
Combining input validation with data encoding can solve many problems of malicious input and safeguard the application and its users from attackers. OWASP has a project named OWASP ESAPI, which allows users to handle data in a secure manner using industry-tested libraries and security functions.

VALIDATE ALL INPUTS

One of the most important ways to build a secure web application is to restrict what type of input a user is allowed to submit. This can be done by implementing input validation. Input validation means validating what type of input is acceptable and what is not. Input validation is important because it restricts the user to submitting data in a particular format only; no other format is acceptable. This is beneficial to an application, because valid input cannot contain malicious data and can be processed easily. Important and common fields in a web application which require input validation are: first name, last name, phone number, email address, city, country and gender. These fields have a particular format which has to be followed, especially email and phone number. It is a known fact that a first name and last name cannot have numbers in them; you cannot have a name like John39 *Bri@n. Such user input is treated as malicious and thus requires input validation. Input validation can be implemented on the client side using JavaScript and on the server side using any server-side language like Java, PHP, etc. Implementing server-side input validation is compulsory, whereas client-side validation is optional but good to have. Input validation comes in two flavours: blacklisting and whitelisting. The simplest example to explain the two is this: a security guard stops all guys wearing a red t-shirt from entering a mall, but anyone else can enter. This is a blacklist, because we are saying that the colour red is blocked. A whitelist, on the other hand, says that guys wearing white, black and yellow t-shirts are allowed, and everyone else is denied entry. Similarly in programming, when we define for a field exactly what type of input and format it can have, and everything else is invalid, it is called whitelisting. Blacklisting is invalidating an input by looking for specific bad things only. For example, specifying that a phone number should be 10 digits containing only numbers is a whitelist. Searching the input for A-Z and declaring it valid or invalid on that basis alone is blacklisting, because we are invalidating using alphabetic characters only. Blacklisting has been proven to be weaker than whitelisting: in the above case, if a user enters 123456+890, a blacklist will say it is valid because it does not contain A-Z, which is wrong, whereas a whitelist will say it contains a character that is not a number, and only numbers are allowed, so it is invalid. Input validation can be implemented in a web application using regular expressions. A regular expression is an object that describes a pattern of characters; it is used to perform pattern-based matching on input data. Here is an example of a regular expression for a first name: ^[a-zA-Z ]{3,30}$ This regular expression ensures that the first name includes only the characters A-Z, a-z and spaces, and that its length is limited to 3-30 characters. Let's take another example of a regular expression, this time for a username: ^[a-z0-9_]{3,16}$ This expression says that a username may include only the letters 'a-z', the numbers '0-9' and the underscore '_', and that the input length is limited to 3-16 characters.
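As a rough illustration of applying such a whitelist pattern on the server side, the following Java sketch validates a username with java.util.regex before any further processing; the class and method names are made up for this example.

import java.util.regex.Pattern;

public class InputValidator {

    // Whitelist pattern: lowercase letters, digits and underscore, 3 to 16 characters long.
    private static final Pattern USERNAME = Pattern.compile("^[a-z0-9_]{3,16}$");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }
}

A request whose username does not match the pattern can simply be rejected before it ever reaches the database or the HTML response.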
Email address validation can be performed using the following regular expression: ^[a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+@[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)*$ Whatever programming language a developer uses to build an application, regular expressions can easily be implemented in it. Another advantage of regular expressions is that there are many industry-tested regular expressions for all popular input types, so you don't have to write one from scratch and then get it security tested. It is better to use industry-tested regular expressions than to write one on your own (which in most cases will be flawed). OWASP has an Input Validation Cheat Sheet to help you implement proper input validation in your application.

IMPLEMENT APPROPRIATE ACCESS CONTROLS

Before we begin, it should be crystal clear that authentication is not the same as authorization. Authentication takes care of your identity, whereas authorization makes sure that you have the authority or privilege to access a resource like data or some sensitive information. A simple real-world example: Alice visits Bob's home. Her identity is known to Bob, so he allows her to enter his home (if she were not known to Bob, entry would have been denied, i.e. authentication failure). Alice is now inside Bob's home, but she cannot open Bob's family safe, because she is not authorized to do so. On the other hand, Bob's sister Eve is known to him, so successful authentication occurs, and she is a family member, so she is authorized to access the family safe, i.e. successful authorization. Implementing authorization is one of the key components of application development. It has to be ensured at all times that certain parts of the application are accessible only to users with certain privileges. Authorization is the process of giving someone permission to do or have something. It is to be noted again that authentication is not equivalent to authorization. Many developers have a tough time handling authorization, and at some point leave a gap that gets exploited, leading to unauthorized data access. To solve this problem, access control or authorization checks should always be centralized: all user requests to access some page or database or any information should pass through the central access control check only. Access control checks should not be implemented at different locations in different pieces of application code; if at any point in time you have to modify an access control check, you would have to change it at multiple locations, which is not feasible for large applications. Access control should by default deny all requests from a user for a resource for which either access is restricted or an authorized entry has not been made. Layered authorization checks should be implemented, meaning that the user's request should be checked for authorization in a layered manner instead of a haphazard one. Below is an example:

1. The user requests access to the "/protected" file.
2. Is the user logged in?
3. Is the user a normal user or a privileged user?
4. Is the user allowed access to the resource?
5. Is the resource marked as locked?

If the access control check at any point in 1-5 fails, the user is denied access to the requested resource. The OWASP Access Control Cheat Sheet can prove to be a good resource for implementing access control in an application.
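A minimal sketch of such a centralized, deny-by-default check, walking through the layered questions listed above, might look like the following; the User and Resource interfaces are assumptions made purely for illustration and are not part of the original article.

public class AccessController {

    // Hypothetical domain interfaces used only for this illustration.
    public interface User {
        boolean isLoggedIn();
        boolean isPrivileged();
    }

    public interface Resource {
        boolean requiresPrivilegedRole();
        boolean isAccessibleBy(User user);
        boolean isLocked();
    }

    // Single, central decision point: deny by default, allow only when every layered check passes.
    public static boolean isAllowed(User user, Resource resource) {
        if (user == null || !user.isLoggedIn()) return false;                        // is the user logged in?
        if (resource.requiresPrivilegedRole() && !user.isPrivileged()) return false; // normal or privileged user?
        if (!resource.isAccessibleBy(user)) return false;                            // is this user allowed this resource?
        if (resource.isLocked()) return false;                                       // is the resource locked?
        return true;
    }
}

Because every request goes through this single method, changing an access rule means changing it in exactly one place.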
ESTABLISH IDENTITY AND AUTHENTICATION CONTROLS

Authentication is the process by which it is verified that someone is who they claim to be; in other words, it is the process of identifying individuals. Authentication is performed by entering a username and password or some other piece of sensitive information known only to the user. Authentication and identity are two components of accessing any kind of information that go hand in hand. For example, if you want to access your bank account details or perform a transaction, you need to log in to your bank's website. Successfully authenticating to your bank account proves that you are the owner of the account. From this discussion, it is clear that username and password are the elements of authentication that prove your identity. The OWASP ProActive Control "Establish Identity and Authentication Controls" says that all the modules of an application which are related to authentication and identity management should have proper security in place and should secure all sensitive information. Also, an application should request and store only the information which is absolutely needed, and nothing else. Sensitive information like passwords and account numbers should be stored in either encrypted or hashed format inside a database, so that it cannot be misused by a malicious user who gains unauthorized access to it. Below is an example of an application that stores the user's password in plaintext inside a MySQL database:

String username = request.getParameter("username");
String password = request.getParameter("password");
PreparedStatement ps = (PreparedStatement) con.prepareStatement("insert into login_users values(?,?)");
ps.setString(1, username);
ps.setString(2, password);

Here the password is stored in plain text. If the database is compromised, the attacker will be able to access user accounts easily: he or she can simply log in to a user's account using the username and password read from the database. This weakness can be mitigated by converting sensitive information into a hashed format, such as a salted MD5 or SHA-2 hash, or into an encrypted form. Here is an example of hashing sensitive information before storing it in a SQL database:

String username = request.getParameter("username");
String password = request.getParameter("password");
// Generate a random salt for every new registration
byte[] saltBytes = new byte[16];
new SecureRandom().nextBytes(saltBytes);
String salt = new BigInteger(1, saltBytes).toString(16);
// Hash the salt and password together instead of storing the password itself
MessageDigest m = MessageDigest.getInstance("MD5");
m.update((salt + password).getBytes());
String calc_hash = new BigInteger(1, m.digest()).toString(16);
while(calc_hash.length() < 32)
{
calc_hash = "0" + calc_hash;
}
PreparedStatement ps = (PreparedStatement) con.prepareStatement("insert into login_users values(?,?,?)");
ps.setString(1, username);
ps.setString(2, salt);
ps.setString(3, calc_hash);

The above code stores the sensitive information (i.e. the password) as a salted MD5 hash, and the salt is different for every new registration. If the database is compromised, the attacker will have to find the clear text for the hashed passwords, or else they will be of no use. Broken session management is also a type of vulnerability that exists in a web application which does not implement session management properly. For example, a user logs out of his or her account and is redirected to some page, but the session is not invalidated properly, so a post-login page can still be opened without asking for re-authentication. Another example is the session cookie being the same before and after login.
Vulnerable code:

String username = request.getParameter("username");
String password = request.getParameter("password");
PreparedStatement ps = (PreparedStatement) con.prepareStatement("select * from users where username=? and password=? limit 0,1");
ps.setString(1, username);
ps.setString(2, password);
ResultSet rs = ps.executeQuery();
if(rs.next())
{
session.setAttribute("useracc", rs.getString("username"));
out.println("Login success");
}
else
{
out.println("Login failed");
}

Observe that in the above code the session cookie JSESSIONID remains the same before and after login. This can be exploited by an attacker who has physical access to the machine and notes the value of the session cookie before authentication. The attack is known as Session Fixation. The patched code below invalidates the session when authentication is successful and creates a new session with a new cookie value. Because the post-login session cookie value changes, the Session Fixation vulnerability can no longer be exploited.

String username = request.getParameter("username");
String password = request.getParameter("password");
PreparedStatement ps = (PreparedStatement) con.prepareStatement("select * from users where username=? and password=? limit 0,1");
ps.setString(1, username);
ps.setString(2, password);
ResultSet rs = ps.executeQuery();
if(rs.next())
{
// Invalidate the pre-login session and start a new one so the JSESSIONID value changes
session.invalidate();
HttpSession newSession = request.getSession(true);
newSession.setAttribute("useracc", rs.getString("username"));
out.println("Login success");
}
else
{
out.println("Login failed");
}

The session cookie value should never be predictable, and it should comply with strong complexity requirements for better security. Authentication and secure storage are not just limited to the username-password module of an application; other key modules like "forgot password" and "change password" are also part of authentication. Financial data and personal information like an SSN are some of the most important details a person is concerned with, so an application storing such data should make sure it is encrypted securely. OWASP has some key resources here, such as the Authentication Cheat Sheet and the Session Management Cheat Sheet.

In this part of the OWASP ProActive Controls series, we discussed in depth how ProActive Controls 1-5 can be used in an application as secure coding practices to safeguard it from well-known attacks. The controls discussed do not modify the application development lifecycle, but they ensure that application security is given the same priority as other tasks and can be addressed easily by developers. We will see the last 5 ProActive Controls in the next and final part.

Reference:
https://www.owasp.org/index.php/OWASP_Proactive_Controls
Source
-
The options currently available for user authentication fall within three categories: authentication through something that the user knows, such as a PIN or a password; something the user has, such as a token with random codes, a flash drive or a proximity card; and something the user is, identified through the use of biometrics, i.e. something physically unique to the individual. Today's system security professionals speak of passwords being too weak; this means that authentication, which for years has been the most widely used tool to protect data and systems, has often proven too easy to break or too impractical to use when systems administrators enforce long, complex and unmemorable alphanumeric passwords. Tokens and other devices have also proved not always effective, due to the cost of production and distribution and the possibility of their being stolen and used fraudulently. So what are the alternatives? Biometrics, for one, can be used for password replacement. It is as much an ideal solution for identity-based authentication of computer users as it is for securing a computer facility. This article focuses on understanding why so many people and businesses depend on biometrics to provide the highest level of security; it will address some of the new developments in biometric science that may just help boost its acceptance and offset some of its shortcomings, as well as address where the future lies for this type of technology. The uncertainty today is whether biometrics will play an important role in the future.

Biometrics Exposed: How it Works for User Authentication

Biometrics is the science and technology of analyzing human body characteristics. It is based on measuring and analyzing biological and behavioral data. Biometric recognition simply draws on patterns and measurements (characteristics that are unique to individuals) for authentication. Many security experts agree that user authentication by means of linking a person to his or her body part(s) to establish an identity is a preferred method to enhance security. In many cases, in fact, biometric-based personal identification and verification technology even eliminates the need for usernames or passwords. As a logical control, biometric systems can provide entry into systems; for physical security, they come in handy to control access to secure areas. The biometric process requires two stages: "enrollment" and "authentication." The first phase comprises a capturing stage and an extraction stage. A user is enrolled by having biometric data collected through a device that records distinctive physical characteristics and/or behavioral traits. Video-optical images or thermal imaging scans are examples of what can be used for this purpose. Data are extracted from the sample and a template is created. Data are then stored in a database where each template is linked to a person for future identity matching. The second part of the process is authentication, when newly extracted data are compared with the stored template so the individual can be identified or verified. This phase also comprises two stages: comparison (the template is compared to the sample) and the match/non-match decision. Fundamentally, the course of action is detection, recognition, verification, and then validation. Examples of biometric data that can be used for identification and authentication are fingerprints, facial recognition, iris scans and even vein scanning. These biometric traits are seen as especially "unique" identifiers for recognizing humans.
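To make the comparison and match/non-match stage a little more concrete, here is a deliberately simplified Java sketch that compares a freshly captured feature vector against an enrolled template using a plain distance threshold; real biometric matchers are far more sophisticated, so treat this only as a toy model of the decision step.

public class TemplateMatcher {

    // Toy match/non-match decision: accept the sample only if it is close enough to the enrolled template.
    public static boolean matches(double[] storedTemplate, double[] capturedSample, double threshold) {
        if (storedTemplate.length != capturedSample.length) {
            return false;
        }
        double sum = 0.0;
        for (int i = 0; i < storedTemplate.length; i++) {
            double diff = storedTemplate[i] - capturedSample[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum) <= threshold; // Euclidean distance compared against a tolerance threshold
    }
}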
Most biometric techniques are implemented using a sensor, which is used to scan, identify and authenticate someone to a system or entry point only after the extracted physical or behavioral feature-set has been compared against stored templates residing in a database. In general, biometric methods are exceptionally reliable for a positive identity match. A false positive or false negative is rare, although possible, depending on the accuracy of the biometric system and the sensor characteristics. Although the hardware needed to implement biometric verification can be quite expensive, this type of technology has proven worth the price. As with all electronic technologies, biometric devices can be fooled by impostors, but they are still becoming more commonly used at business locations and in work centers as trusted recognition systems that are sustainable in the long term to control access to high-security areas and, more importantly, to prevent identity theft.

Types of Biometrics: Physical and Behavioral Traits

There are two main types of biometric traits used for verification: physical traits, more commonly used so far, and behavioral traits, based solely on measurements and data derived from an action or series of actions performed by users. Physical biometrics uses "biological properties" that can uniquely determine an identity. Behavioral biometrics is based on "characteristic traits" exhibited by a person that can lead to his or her identification. Physiological biometrics includes face recognition technology and finger- and hand-scans, in addition to the measurements and data derived from patterns of the iris or from retinal scans that read the blood vessels in the back of the eyes for identification. Physiological biometrics (in particular fingerprints and DNA) is already widely used in forensics for criminal identification. Fingerprints, for one, have been used for years to prove an individual's identity electronically based on unique biological characteristics. The method has long been used to distinguish one individual from another, as no two people have the same fingerprints. Fingerprint scanners can capture the user's finger imprint to compare the person's identity with a previously created, unique biometric template. A person's fingertip has come to be the most widely used source of biometric data. Behavioral biometrics includes voice-scans, signature-scans and keystroke-scans. The human voice was found to be a viable authentication factor thanks to the possibility of being recognized through unique voiceprints. Although effective, it is less secure than other behavioral traits like a keystroke-scan, for instance, which requires no user interference. Signature and keystroke scans can help recognize individuals by analyzing the way they write or by patterns in their keystrokes.

Privacy, Concerns and Security Issues

The biometric authentication technique, based on "something users are," is considered more secure than a PIN, passwords or smart card technology for physical and logical access control. Every so often, an uncovered password has led to a compromised system, while the use of cards has made information vulnerable when lost or stolen. Biometric traits are normally unique, permanent and hard to reproduce, especially in view of advances in technology, data communication security and biometric extraction devices. According to Biometrics.gov, the central source of information on biometrics-related activities of the U.S.
federal government, "most biometric systems have a high accuracy (over 95 percent and many approach 100 percent) when matching biometrics against a large database of biometrics and when matching a biometric against the originally enrolled biometric." The advantage of biometric security over more conventional systems is that it is easier to use in authentication situations, and yet it offers improved reliability and strengthened information delivery capabilities. Despite these advantages, there are, however, open issues involved with these systems, some technical and some privacy-related. Much of the skepticism that surrounds biometric technology has to do with privacy concerns about the storage, transmission and utilization of data that are perceived as extremely personal. Users are mostly concerned, especially now that the technology has been introduced in the mobile device world, about the safety of their unique identifiers and about the efficacy, or lack, of laws that govern the use and misuse of personal bio data. Another source of concern is the increased use of biometrics in health service facilities and government, especially when mobile biometrics technology is used to verify identities anywhere, on the go; the concern regards the storage of data and their transmission to mobile devices. For the most part, the fact that information on people's body features and behavioral traits is recorded has always been a concern for many people worried about their privacy. Many see the storing of such data and records as an infringement of privacy and personal rights. Biometric factors that are unique to a subject could lead to the tracking or monitoring of somebody's movements from that point on. Some fear biometric data may be accessed and misused. Users have expressed concerns over a number of biometric-related issues and possible forgeries. Authentication based on a signature-scan that analyzes handwritten text is often seen as simple to spoof, as forgeries are possible with a simple optical scanner or a camera. That may be why digitized electronic signature generation, even if considered legally binding on documents, is not widely used, and other behavioral biometric technologies are now used in its place. A fingerprint reader that is embedded in the laptop or keyboard, or added through a USB port, is a good alternative. However, fingerprints could also be compromised, as fingerprints can be lifted from touched items by an imposter looking to gain fraudulent access to resources. Voice biometric systems, unfortunately, are sometimes prone to loud ambient sounds or low-quality inputs that tend to interfere with the ability to successfully record a usable sample. A voice biometric system could also be tampered with by someone able to record another's voice and play it back later to gain entry. Other difficulties come from input sensors being too sensitive, for example, to aging or facial expressions. These are all valid concerns related to the use of biometric technology. It is true that biometric traits have been spoofed; however, they are definitely more secure than many other systems of authentication because they are natural, physically or behaviorally linked to a person. Reproducing them requires sophisticated techniques and advanced technological knowledge that is not required to spoof or crack other methods (getting hold of a token or stealing passwords is a much simpler feat in comparison).
In biometrics, what is stored is not an exact image of what has been scanned (the fingerprint, the retina, etc.) but a collection of binary numbers created during scanning; this extra step is devised to prevent malicious hackers from reproducing the exact image from which the numbers were extrapolated. Knowing that humans are often the weakest link in the security chain, password-based security mechanisms (which can be cracked, reset, and socially engineered) might be substituted by biometrics, which can be a natural, effortless, and much more accurate way to authenticate.

The Future

Biometrics is often seen today as an additional layer of protection to add to other, more traditional, authentication systems like passwords and PINs. Using a second (or even a third) authentication mechanism may provide a much higher level of assurance when verifying the identity of a user. What the future might hold is a shift from multi-type secure authentication to simply using synergistic multiple biometric systems. Unimodal biometric systems are based on identification through only one trait. This is obviously not as accurate as we could wish and might not be adequate for all applications and uses. Also, if the collection of that single trait is affected in any way (for example by cream on the hands being fingerprinted, or by noise when collecting a voice sample), accuracy would be limited. In addition, collecting only one type of data could exclude part of the user population when particular disabilities are present. The possibility of spoofing a single biometric trait is also higher than that of compromising several at once. This is why a multimodal biometric system, which uses more than one trait for identification, can be more reliable and can resolve ambiguities and accuracy concerns. Advances in behavioral-based (dynamic) biometrics are also giving new life to this technology and are providing better and more accurate ways to authenticate users. Finger writing is a good example. This is a recognition and verification system based on gesture movement, able to learn a user's unique way of writing by collecting data through subsequent logins. The user is asked to handwrite four characters using their fingertip or a pointing device, and the software is able to extrapolate the unique way these letters and numbers are written (length, speed, angle, height). Tests on this system have shown it is actually one of the most accurate systems of recognition available. Research by the Tolly Group, a testing and third-party verification provider, for example, has found a confidence rating of 99.97% and 27 times greater accuracy than keystroke analysis. In terms of use, the future of biometrics could be in mobile devices and applications for eGovernment, eHealth and eBanking. Through biometric mobile scanning devices, authentication and identification can be brought to the field. It is easy to imagine the possible uses of such systems for other professions, like law enforcement, border control, medical and emergency services, or even to secure access to government or financial services. The trend (in order to reduce the possibility of spoofing, the replication of physical traits and privacy concerns) is to base biometric systems on the collection of non-physical, dynamic traits. For example, the US military is developing a "cognitive fingerprints" system that might be able to replace the use of faces, fingers and irises as identification traits.
At West Point, in fact, an algorithm is being developed that allows identification through the way individuals interact with their computers; it considers behavioral-based information such as typing speed, writing rhythm and even common spelling mistakes. The algorithm is able to create a unique fingerprint for each user by putting together multiple pieces of behavioral and stylometric information that, collectively, are very difficult to reproduce. Once fully implemented, this solution could transfer from military use to civilian, more mundane applications in e-banking, access to services and securing devices. Will the privacy concern be solved? Not really, as many believe that the collection of this type of data could easily be embedded in commonly used applications and create concerns about the widespread classification of users. Privacy vs. security will be the battle to be fought over these systems' implementation. Nevertheless, biometric technology could soon become mainstream thanks to the growth of the mobile devices market. Biometrics Research Group, Inc. estimates that the sale of smartphones, in the U.S. alone, will grow to 121 million in 2018. Due to this proliferation and to the increased functionality they offer their users, its analysts believe there will be a strong push toward the integration of biometric technology to replace traditional authentication via PIN and password. Biometrics Research Group, Inc. predicted that already in 2014 over 90 million smartphones would be shipped with biometric technology, while Goode Intelligence has forecast that by 2019 the number of mobile and wearable biometric technology users in the world will reach 5.5 billion.

Conclusion

Today, biometrics matters more than ever before. In this digital-driven era, more users will come to rely on biometrics as an answer to problems concerning systems security and authorization. Although privacy, security and accuracy concerns are still valid, biometrics is a technology that promises the security and ease of use necessary for modern users needing access (even on the go) to sensitive data. Biometric traits are already hard to forge or spoof, and new advances in technology and new trends like multimodal systems can really provide the high security that sophisticated authentication can give to facilities and computer networks. As scanning devices are made less prone to mistakes and less subject to sensor error, it will become even easier and faster to implement a biometric security system on a larger scale. This, coupled with its use on mobile devices, will ensure the technology is used for a wide variety of new purposes, including border and law enforcement controls. Although biometrics may be susceptible to false matches, possibly due to scanning and sensor errors, there are ways to minimize this at present, by utilizing multi-factor options like a password or smartcard combined with biometrics to add an extra layer of security to authentication. If used together, and not alternatively, the systems are significantly stronger than when used individually. Two-factor authentication is not a new concept. The newest trends, however, see multi-biometrics (the use of different sets of biometric data simultaneously) as a good alternative for increasing matching accuracy for identification and verification. Multimodal biometric systems, which use multiple sensors for data acquisition, offer multiple recognition algorithms and take advantage of each biometric technology while overcoming the limitations of a single technology.
Advances in algorithms for dynamic biometrics, which are less linked to physical characteristics and more to behavioral traits, are where civilian and military researchers are concentrating their efforts in trying to devise a security system that is, at the same time, foolproof, reliable and quick to use. The call for quicker and more secure authentication systems for mobile devices will also boost the adoption of biometric technology. As biometric devices become more secure and error-free, as well as more affordable, the extra security that they can provide will ultimately outweigh any shortcoming of this technology as well as the problems and concerns about privacy and safety. We might be closer to the end of passwords.

References

Brecht, D. (2011, January 4). Biometric Devices: They Provide IT Security. Retrieved from Biometrics in IT Security: Questions, Options and Solutions
Duncan, G. (2013, March 9). Why haven't biometrics replaced passwords yet? Retrieved from Digital Trends
FRMC. (2014, September 11). Biometric Signature Authentication: The New Modality of Choice for Safe Guarding EMR Access. Retrieved from First Report Managed Care
ID Control. (n.d.). Biometric Authentication Method Pros and Cons. Retrieved from ID Control
Mayhew, S. (2014, August). Special Report: Mobile Biometric Authentication. Retrieved from BiometricUpdate
Memon, S. (2014, February 28). Use of Mobile Biometrics Systems for ID Management in eServices. Retrieved from http://www.researchgate.net/profile/Sander_Khowaja/publication/260079452_Use_of_Mobile_Biometrics_Systems_for_ID_Management_in_eServices/links/00b7d5348eed55220b000000.pdf
PYMNTS. (2015, January 29). Next in ID Verification: Behavioral Biometrics. Retrieved from http://www.pymnts.com/news/2015/next-in-id-verification-behavioral-biometrics/#.VO8RT010yUl
Seals, T. (2015, January 29). US Military to Replace Passwords with "Cognitive Fingerprints". Retrieved from http://www.infosecurity-magazine.com/news/us-military-passwords-with/
Shahnewaz, M. (2014, December 14). How Mobile Biometrics is Fundamentally Changing Human Identification. Retrieved from http://www.infosecurity-magazine.com/opinions/how-mobile-biometrics-is-changing/
Trader, J. (2014, August 1). The Top 5 Reasons to Deploy Multimodal Biometrics. Retrieved from http://blog.m2sys.com/important-biometric-terms-to-know/top-5-reasons-deploy-multimodal-biometrics/

Source
-
1. Introduction

Electronic signatures were used for the first time in 1861, when agreements were signed by telegraphy using Morse code. In 1869, the New Hampshire Court confirmed the legality of such agreements by stating that: "It makes no difference whether [the telegraph] operator writes the offer or the acceptance in the presence of his principal and by his express direction, with a steel pen an inch long attached to an ordinary penholder, or whether his pen be a copper wire a thousand miles long. In either case the thought is communicated to the paper by the use of the finger resting upon the pen; nor does it make any difference that in one case common record ink is used, while in the other case a more subtle fluid, known as electricity, performs the same office." In the past, electronic signatures were accepted with mixed feelings. Nowadays, they are considered a secure means of authentication and are often used for signing legal documents, such as contracts and tax declarations. The European Union (EU) and the United States (US), the two largest financial markets, have adopted legislation recognizing the enforceability of electronic signatures. This article provides an overview of the laws concerning electronic signatures in the EU (Section 2) and the US (Section 3). Afterward, it examines the similarity and difference between the EU and the US laws (Section 4). Next, it analyses the validity of EU electronic signatures in the US and vice versa (Section 5). Finally, a conclusion is drawn (Section 6). Before proceeding with Section 2, it is necessary to clarify the difference between the electronic signature and the digital signature. Any signature in electronic form can generally be defined as an electronic signature. The digital signature is a type of electronic signature that is created by using cryptographic techniques. Such cryptographic techniques are typically based on Public Key Infrastructure (PKI) systems. The term "PKI" refers to the set of computer systems, individuals, policies, and procedures necessary to provide encryption, integrity, non-repudiation, and authentication services by way of public and private key cryptography.

2. EU electronic signature laws

The EU Electronic Signatures Directive 1999/93/EC (the "Directive") currently regulates electronic signatures in the EU. However, on July 1st, 2016, the Directive will be replaced by a new European Regulation which will ensure the cross-border operability of electronic signatures within the EU. The Directive defines three types of electronic signature, namely, the basic electronic signature (Section 2.1), the advanced electronic signature (Section 2.2), and the qualified electronic signature (Section 2.3). These three types of electronic signature are discussed below.

2.1 Basic electronic signature

The term "basic electronic signature" refers to "data in electronic form which are attached to or logically associated with other electronic data and which serve as a method of authentication." This type of electronic signature is considered weak in terms of reliability and security of authentication. For example, a scanned signature which is attached to a document will be regarded as a basic electronic signature. Basic electronic signatures can be easily faked. In fact, there are numerous malware programs that use fake electronic signatures, including basic electronic signatures. A 2012 McAfee report stated that, at that time, there were 200,000 malware programs that used valid electronic signatures.
A large number of those signatures were faked or based on stolen certificates. Some of the faked signatures indicate that the signature was made by Microsoft, whereas it was actually made by a hacker.

2.2 Advanced electronic signature

An advanced electronic signature allows the unique identification and authentication of the signer of a document. Moreover, the advanced electronic signature enables a check of the integrity of the signed data. In most cases, asymmetric cryptographic technologies (e.g., PKI) are used for advanced electronic signatures. There is no difference between the legal value of the basic electronic signature and the advanced electronic signature. Both types of electronic signature can have a legal effect if they offer sufficient guarantees with respect to authenticity and integrity.

According to the Directive, an advanced electronic signature should meet four requirements, namely: (1) it is uniquely linked to the signatory; (2) it is capable of identifying the signatory; (3) it is created using means that the signatory can maintain under their sole control; and (4) it is linked to the data to which it relates in such a manner that any subsequent change in the data is detectable.

Pertaining to the first requirement, the uniqueness of an electronic signature depends on how unique a signature key is to an individual. Signature keys should be unique if they are generated properly. For instance, the recommended parameters for RSA (a widely used digital signature algorithm) should provide at least the equivalent security of a 128-bit symmetric key, which means that there are roughly 10^40 possibilities for a signature key. Because this number far exceeds the number of people in the world, it is very unlikely that two individuals will generate the same signature key.

Concerning the second requirement, a signatory can be "identified" by verifying an electronic signature created by the signatory. Such a verification can be done, for example, by a PKI system.

With regard to the third requirement, the confidence that an electronic signature could only be produced by the designated signatory requires confidence in: (1) the processes that surround the generation of signature keys; (2) the ongoing management of signature keys; and (3) the secure operation of the computing device that was used to compute the electronic signature.

In relation to the fourth requirement, the only form of electronic signature capable of complying with it is a digital signature created with the signatory's private key, since any change to the signed data invalidates the signature.

2.3 Qualified electronic signature

According to the Directive, the qualified electronic signature is an advanced electronic signature which is based on a qualified certificate and which is created by a secure-signature-creation device. In practice, the qualified electronic signature is a PKI-based electronic signature for which the signature certificate and the device used to create the signature meet certain quality requirements. The qualified electronic signature benefits from an automatic legal equivalence to a handwritten signature within the territory of the European Union.

If a non-qualified signature is used, it will be necessary to assess the following two factors before accepting it for the specific context in which it is used: (1) the characteristics of this electronic signature; and (2) whether it offers sufficient guarantees regarding authenticity and integrity. For a qualified signature, such an assessment is not necessary.
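Before moving on to the US framework, it may help to see how the advanced-signature requirements of Section 2.2 are typically met in practice with a PKI-based digital signature. The following Python sketch is an illustrative aside, not part of the original article: it uses the third-party "cryptography" package, and the contract text and 2048-bit key size are arbitrary examples. A private key kept under the signatory's sole control signs a document, anyone holding the matching public key can verify who signed it, and any later change to the document makes verification fail.

[code]
# Illustrative sketch only: PKI-style signing with the "cryptography" package
# (pip install cryptography). The document text and key size are made-up examples.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Key pair generation: the private key stays under the signatory's sole control.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"I agree to the terms of this contract."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Signing: the signature is uniquely linked to the signatory's private key.
signature = private_key.sign(document, pss, hashes.SHA256())

# Verification identifies the signatory (requirement 2)...
public_key.verify(signature, document, pss, hashes.SHA256())  # no exception: valid

# ...and detects any subsequent change to the data (requirement 4).
try:
    public_key.verify(signature, b"I agree to DIFFERENT terms.", pss, hashes.SHA256())
except InvalidSignature:
    print("Tampering detected: the signature no longer matches the data.")
[/code]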
3. US electronic signature laws

The US Electronic Signatures in Global and National Commerce Act (E-Sign Act) allows the use of electronic signatures to "satisfy any statute, regulation, or rule of law requiring that such information be provided in writing, if the consumer has affirmatively consented to such use and has not withdrawn such consent." According to the E-Sign Act, an electronic signature means "an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record." Consequently, the electronic signature as defined by the E-Sign Act may include, but is not limited to, encryption-based signatures, signatures created by electronic signing pads, and scanned signatures.

The E-Sign Act does not apply to every type of documentation. Certain types of records and documents are not covered by the E-Sign Act. These documents include, without limitation, adoption paperwork, divorce decrees, court documents, documentation accompanying the transportation of hazardous materials, foreclosures, prenuptial agreements, and wills.

It should be noted that 48 US states have adopted the Uniform Electronic Transactions Act (UETA) with the aim of creating more uniformity in relation to electronic signatures. The UETA and the E-Sign Act overlap significantly; however, the UETA is more comprehensive than the E-Sign Act. Like the E-Sign Act, the UETA does not distinguish between different types of electronic signatures.

4. Similarity and difference between the EU and the US laws

The similarity between the E-Sign Act and the Directive is that both laws recognize the enforceability of electronic signatures. The difference between the two laws is that, whereas the Directive distinguishes three types of electronic signatures, the E-Sign Act provides a broad definition of electronic signature that encompasses signatures made through various technologies.

5. The validity of EU electronic signatures in the US and vice versa

In most cases, electronic signatures meeting the requirements of the Directive would also comply with the E-Sign Act, because the E-Sign Act defines the electronic signature broadly. However, electronic signatures complying with the E-Sign Act would need to meet additional requirements in order to comply with the requirements of the Directive in relation to advanced electronic signatures and qualified electronic signatures.

6. Conclusions

This article has shown that electronic signatures are legally enforceable in both the EU and the US. However, the EU and the US have adopted different legislative approaches with regard to electronic signatures. While the US provides a broad definition of electronic signature, the EU distinguishes three types of electronic signatures, namely, (1) the basic electronic signature, (2) the advanced electronic signature, and (3) the qualified electronic signature. Each of these three types allows the authentication of electronic communications. The advanced electronic signature and the qualified electronic signature ensure greater security as to the authenticity of electronic communications than the basic electronic signature. The qualified electronic signature benefits from an automatic legal equivalence to handwritten signatures. Although the EU has a comprehensive legal framework regarding electronic signatures, the framework does not ensure the cross-border interoperability of electronic signatures throughout the entire EU.
The new EU Regulation, which enters into force on 1st July 2016, will address this issue by ensuring that electronic trust services (e.g., electronic signatures, electronic seals, time stamps, electronic delivery services, and website authentication) work across all EU countries. The EU Commissioner Neelie Kroes justified the new Regulation as follows: "People and businesses should be able to transact within a borderless Digital Single Market, that is the value of Internet. Legal certainty and trust is also essential, so a more comprehensive eSignatures and eIdentification Regulation is needed."

* The author would like to thank Rasa Juzenaite for her invaluable contribution to this article.

References

1. Abelson, H., Ledeen, K., Lewis, H., 'Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion', Addison-Wesley Professional, 2012.
2. 'Community framework for electronic signatures', a webpage published by the European Commission, last updated on 6 July 2011.
3. Chander, H., 'Cyber Laws and IT Protection', PHI Learning Pvt. Ltd., 3 April 2012.
4. De Andrade, N., 'Electronic Identity', Springer, 2014.
5. Howley v. Whipple, 48 N.H. 487 (1869).
6. Liard, B., Lyannaz, C., 'Adoption of a new European legal framework applicable to cross-border electronic identification and e-signatures', September 2014.
7. Mason, S., 'Electronic Signatures in Law', Cambridge University Press, 2012.
8. Menna, M., 'From Jamestown to the Silicon Valley, Pioneering A Lawless Frontier: The Electronic Signatures in Global and National Commerce Act', 6 VA. J.L. & TECH 12, 2001.
9. Miller, R., 'Cengage Advantage Books: Fundamentals of Business Law: Excerpted Cases', Cengage Learning, 2012.
10. Orijano, S., 'Cryptography InfoSec Pro Guide', McGraw Hill Professional, 16 August 2013.
11. Savin, A., 'EU Internet Law', Edward Elgar Publishing, 2013.
12. Savin, A., Trzaskowski, J., 'Research Handbook on EU Internet Law', Edward Elgar Publishing, 2014.
13. Schmugar, C., 'Signed Malware: You Can Run, But You Can't Hide', 23 March 2012. Available at https://blogs.mcafee.com/mcafee-labs/signed-malware-you-can-runbut-you-cant-hide .
14. Srivastava, A., 'Electronic Signatures for B2B Contracts: Evidence from Australia', Springer India, 2014.
15. Wang, F., 'Law of Electronic Commercial Transactions: Contemporary Issues in the EU, US and China', Routledge, 2014.

Source
-
Introduction

In this mini-course, we will learn about various aspects of cryptography. We'll start with cryptography objectives, the need for it, various types of cryptography, PKI, and we'll look at some practical usage in our daily digital communication. In this mini-course, I will explain every detail with an example which end users can perform on their own machines.

What is cryptography and why is it required?

Today, digital communication has become far more important than it was a decade ago. We use internet banking, social networking sites, online shopping, and online business activities. Everything is online these days, but the internet is not the most secure means to conduct all those activities. Nobody would want to do an online transaction in which the communication between their machine and their bank travels over an open channel. With cryptography, the channel between different entities can be secured, which helps conduct business activity in a more secure fashion. Cryptography is a method of storing and transmitting data in a particular form so that only those for whom it is intended can read it. Cryptography is a broad term which includes sub-disciplines and very important concepts such as encryption. Let's get into the main objectives of cryptography.

Cryptography Objectives

C - Confidentiality: Ensuring the information exchanged between two parties is confidential between them and is not visible to anyone else.
I - Integrity: Ensuring that a message is not changed while in transit.
A - Availability: Ensuring systems are available to fulfill requests all the time.

Here are some additional concepts:

Authentication: Confirming someone's identity with the supplied parameters, such as usernames, passwords, and biometrics.
Authorization: The process of granting access to a resource to the confirmed identity, based on their permissions.
Non-repudiation: Making sure that the endpoint that sent a message cannot later deny having sent it.

Cryptography key definitions

Here is some cryptographic key terminology:

Plaintext: The original raw text document onto which encryption needs to be applied.
Ciphertext: When we apply encryption to a plaintext document, the output is ciphertext.
Encryption: Encryption is the process of converting plaintext to ciphertext using an encryption algorithm. We have different types of encryption available today, such as symmetric, asymmetric and hybrid encryption. We will discuss them in depth later in the course.
Encryption algorithm: An encryption algorithm is a mathematical procedure for converting plaintext into ciphertext with a key. Examples of encryption algorithms include RSA, AES, DES, and 3DES.
Key length: Choosing an encryption algorithm with an appropriate key size is an important decision. The strength of a key is usually determined by its size, i.e., the number of bits: the larger the bit size of a key, the more difficult it is to break. For example, a key with a bit length of 5 has only 2^5, or 32, possible combinations, which is easy to break with today's computation methods. That's why older algorithms like WEP (40 bits) and DES (56 bits) are considered obsolete, and much more powerful algorithms with larger key sizes, such as AES (128 bits), are now used.
Hash: A hash value, also called a message digest, is a number generated from a string of text. By definition, no two different texts should produce the same hash value.
If an algorithm can produce the same hash for two different strings of text, then that algorithm is not collision free and can be cracked. Examples of hash algorithms are MD2, MD5 and SHA-1.

Digital signature: A digital signature is the mechanism by which two entities communicating with each other can establish a trust relationship, by proving who created a message and that it was not altered in transit. We will take a look at a practical demonstration later in this document.

Source Part2 Part3 Part4 Part5
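To make the hash and key-length points above concrete, here is a small Python sketch that anyone can run with the standard library only; the messages are made-up examples and are not part of the original course material.

[code]
# Minimal sketch of the hashing and key-length concepts (standard library only).
import hashlib

msg1 = b"transfer $100 to Alice"
msg2 = b"transfer $900 to Alice"   # a one-character change...

print(hashlib.sha256(msg1).hexdigest())
print(hashlib.sha256(msg2).hexdigest())
# ...produces a completely different fixed-length digest, and the digest is
# one-way: the original text cannot be recovered from it.

# Key-length arithmetic from the text above:
print(2 ** 5)     # a 5-bit key has only 32 possible values
print(2 ** 128)   # a 128-bit key (e.g., AES) has about 3.4e38 possible values
[/code]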
-
Tagged with: algorithm, cryptography (and 3 more)
-
Thank you, very good! (after those 24 hours: VPN + another email = +1 VPS)
-
unindexed

A website that irrevocably deletes itself once indexed by Google. The site is constantly searching for itself in Google, over and over and over, 24 hours a day. The instant it finds itself in Google search results, the site will instantaneously and irrevocably securely delete itself. Visitors can contribute to the public content of the site; these contributions will also be destroyed when the site deletes itself.

Why would you do such a thing? The full explanation is in the content of the site (which is not linked anywhere here).

UPDATE: The experiment lasted 22 days before it was indexed by Google on 24 February 2015 at 21:01:14 and instantaneously destroyed. It was primarily shared via physical means in the real world, word of mouth, etc. If you didn't find it before it went away. If you want to conduct your own similar experiment, the source code is here.

info

Nothing has been done to prevent the site from being indexed; however, the NOARCHIVE meta tag is specified, which prevents the Googles from caching their own copy of the content. The content for this site is stored in memory only (via Redis) and is loaded in via a file from an encrypted partition on my personal laptop. This partition is then destroyed immediately after launching the site. Redis backups are disabled. The content is flushed from memory once the site detects that it has been indexed.

The URL of the site can be algorithmically generated and is configured via environment variable, so this source code can be made public without disclosing the location of the site to bots. Visitors can leave comments on the site while it is active. These comments are similarly flushed along with the rest of the content upon the index event, making them equally ephemeral.

other

Sample configuration notes for running on Heroku:

$ heroku create `pwgen -AnB 6 1` # generates a random hostname
$ heroku addons:add rediscloud # default free tier disables backups
$ heroku config:set REDIS_URL=`heroku config:get REDISCLOUD_URL`
$ heroku config:set SITE_URL=`heroku domains | sed -ne "2,2p;2q"`
$ git push heroku master
$ heroku run npm run reset
$ heroku addons:add scheduler:standard
$ heroku addons:open scheduler

Schedule a task every N minutes for npm run-script query (unfortunately it seems this can only be done via the web interface). Use scripts/load_content.js to load content piped from STDIN. You can configure monitoring to check the /status endpoint for "OK" if you trust an external service with your URL.

Link: https://github.com/mroth/unindexed
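The project itself is written in Node.js; purely as an illustration of the idea described above (periodically search for your own URL and flush the Redis-backed content the moment it appears), here is a rough Python sketch. The URL, the Redis location, and the naive result-page check are placeholder assumptions, not the project's actual code.

[code]
# Rough illustration only (not the project's code): search Google for the
# site's own URL and irrevocably flush the Redis-backed content if it ever
# appears in the results. SITE_URL and the Redis address are placeholders.
import urllib.parse
import urllib.request

import redis  # pip install redis

SITE_URL = "http://example-hidden-site.example"   # placeholder
store = redis.Redis(host="localhost", port=6379)

def is_indexed(url: str) -> bool:
    query = urllib.parse.quote_plus(url)
    req = urllib.request.Request(
        "https://www.google.com/search?q=" + query,
        headers={"User-Agent": "Mozilla/5.0"},
    )
    with urllib.request.urlopen(req) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    # Naive check: does a result link point at the site? A real implementation
    # would parse results properly and handle rate limiting and captchas.
    return 'href="' + url in page

def self_destruct() -> None:
    store.flushdb()  # the content lives only in Redis, so this erases it all

if __name__ == "__main__":
    if is_indexed(SITE_URL):
        self_destruct()
[/code]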
-
A VPS does not necessarily run Linux; it can also run Windows ( Virtual private server - Wikipedia, the free encyclopedia ), you can find more details there. On:// Ordered; waiting to receive the login details.
-
Google Earth Pro is a 3D interactive globe that can be used to aid planning, analysis and decision making. Businesses, governments and professional users from around the world use Google Earth Pro's data visualization, site planning and information sharing tools. Google Earth Pro includes the same easy-to-use features and imagery as Google Earth, but with additional professional tools designed specifically for people who need it for more than entertainment purposes.
Link: Free Google Earth Pro (100% discount)
-
WINner Tweak 3 Pro is an all-in-one suite for tweaking, optimizing and tuning Windows. It includes over ten tools and hundreds of tweaks, settings and optimizations, and you can browse the five main sections of the Tune-Up Center (Windows, Hardware, Security, Network and Software) to get access to the needed tweak or setting. Try it now.
Link: Free WINner Tweak 3 Pro (100% discount)
-
Have you been wondering how to speed up your computer? Cacheman (short for Cache-manager), the award-winning Windows optimizer, offers you a multitude of ways to speed up your computer. Cacheman has been developed with novice, intermediate, and expert users in mind. Immediately after installation, Cacheman examines your computer and automatically tweaks a vast number of cache settings, Registry values, system service options, and PC memory parameters. But this is only the start. Cacheman then continues to work quietly in the background in order to speed up your computer even more by managing computer memory (RAM), program processes and system services. Cacheman makes sure that the active application gets the maximum possible processing power and available system memory. Cacheman also includes a special optimization for computer games, to prevent slowdowns, lag, and stuttering caused by system tools like anti-virus programs. This giveaway has no free updates or free tech support and is for home/personal use only. Get Cacheman with free lifetime upgrades to get free updates, free tech support, and business or home use.
Link: Free Cacheman (100% discount)
-
Microsoft OneNote needs no introduction. It is a feature-filled and highly coveted digital note-taking app by Microsoft that works on and syncs with all your devices: Windows, Mac, iPhone, iPad, Android, online (via any modern web browser), and Windows Phone. Previously, Microsoft OneNote was available for purchase as a standalone program. Then it was bundled with Microsoft Office. Later, OneNote was made a freemium app: some features were available for free while others had to be paid for. Now, however, all features of Microsoft OneNote are available for free, forever, for everyone, and on all platforms. Get it now!
Link: Free Microsoft OneNote (100% discount)
-
As you may be aware, you need an Android smartphone to use an Android Wear smartwatch, but if you carry an Apple iPhone or iPad, you'll soon be able to use the same Android Wear smartwatch without relying on unofficial third-party app support. Google is reportedly going to release a new iOS app to the App Store that will allow iPhone and iPad users to pair Android Wear devices such as the Moto 360 and LG G Watch with their Apple products, French outlet 01net claimed.

OFFICIAL ANDROID WEAR APP FOR iOS

Google's move to go cross-platform with an iOS app would expand support for the wearable platform beyond Android devices and target the potential market of tens of millions of Apple users who may not be interested in purchasing an Apple Watch. With lower prices and strong design, a fair amount of Android Wear smartwatch demand would likely be there. The search engine giant is possibly planning to launch the Android Wear app for iOS at Google's annual developer conference in late May 2015, although the company may push the agenda depending upon the sales of the Apple Watch, which will be launched in the coming weeks.

UNOFFICIAL iOS APP SUPPORTS ANDROID WEAR

Recently, iOS app developer Ali Almahdi has also made an app that connects iOS to an Android Wear device, the same thing Google is planning to officially launch. In a video submitted to The Hacker News, Almahdi demonstrated how his custom-developed iOS app allows his Moto 360 Android Wear smartwatch to sync directly with his iPhone, without jailbreak or root access.

GOOGLE TO CLUB WITH APPLE?

Right now, I can't say whether Google will really be able to convince Apple to approve an Android Wear app for iOS, or convince Apple users to use it, but if this happens, it would be highly profitable for both Google and its Android partners. However, many details aren't available yet, and it would definitely require additional effort, as it would no longer be an Android-to-Android connection, but rather an Android-to-iOS connection. In case Apple declines to approve Google's proposal, the search engine giant could partner up with Microsoft to broaden its Android wearable market. Another Gadget report suggests that Microsoft's upcoming rumored smartwatch might be compatible with both iOS and Android devices. Google has not yet commented on the matter, but if the rumors turn out to be true, iPhone users would be welcomed to the world of Android Wear for the very first time.

Source
-
Do you own a Facebook Business page? If yes, then you will notice a drop in the number of "likes" on your Facebook Page by next week, which could be quite disappointing but, Facebook believes, will help businesses know their actual followers.

FACEBOOK'S OFFICIAL MASS AUTO-UNLIKE

The social network giant is giving its Pages a little spring cleaning, purging them of memorialized and voluntarily deactivated inactive Facebook accounts in an attempt to make its user data more meaningful for businesses and brands. The purge will begin on March 12, Facebook said, and should continue over the next few weeks.

FACEBOOK TO DETECT FAKE FOLLOWERS

Facebook is also taking steps to improve how it detects fake profiles. We all know that a number of businesses and brands buy fake Facebook likes and Twitter followers in order to make their brand look popular. Social media giants Facebook, Twitter and Google emerged as major players in the recent general elections in India, where political parties spent millions of dollars to buy followers and advertise their promotional campaigns to influence election results.

BENEFITS OF REMOVING INACTIVE USERS FROM LIKES

According to Facebook, there are two main reasons to remove inactive Facebook accounts from Page audiences: accurate like counts, and keeping actual followers on top. With more accurate "like" counts, businesses and brands can better understand how many followers are actually interested in their content and products. Facebook wants to give businesses "up-to-date insights" on their pages' active followers. The move will give businesses more precise information about the Facebook users who are actively following their Facebook Page, and let them make better use of Facebook's Custom Audiences tool, which lets businesses create lookalike audiences by finding people on Facebook who are similar to those who already follow the company's page.

The company also wants to make business results consistent with individual users' experiences. Facebook already filters out "likes and comments generated by deactivated or memorialized accounts from individual Page posts." While the decrease in the number of followers may disappoint you at first, it will also give you a more accurate way to track your customers and grow your audience with an authentic number of likes, which will be more beneficial to your business.

Source
-
Tagged with: businesses (and 3 more)
-
The Angler Exploit Kit continues to evolve at an alarming rate, seamlessly adding not only zero-day exploits as they become available, but also a host of evasion techniques that have elevated it to the ranks of the more formidable hacker toolkits available. Researchers at Cisco's Talos intelligence team today reported on a technique used in a recent Angler campaign in which attackers are using stolen domain registrant credentials to create massive lists of subdomains that are used in rapid-fire fashion to either redirect victims to attack sites, or serve as hosts for malicious payloads.

The technique has been called domain shadowing, and it is considered the next evolution of fast flux; so far it has enabled attackers to have thousands of subdomains at their disposal. In this case, the attackers are taking advantage of the fact that domain owners rarely monitor their domain registration credentials, which are being stolen in phishing attacks. They're then able to create a seemingly endless supply of subdomains to be used in additional compromises.

"It's one thing that people just don't do," said Craig Williams, security outreach manager for Cisco Talos. "No one logs back into their registrant account unless they are going to change something, or renew it."

Researchers Nick Biasini and Joel Esler wrote that Cisco has found hundreds of compromised accounts, most of them GoDaddy accounts, which control up to 10,000 unique domains. "This behavior has shown to be an effective way to avoid typical detection techniques like blacklisting of sites or IP addresses," Biasini and Esler said. "Additionally, these subdomains are being rotated quickly, minimizing the time the exploits are active, further hindering analysis. This is all done with the users already registered domains. No additional domain registration was found."

Cisco said the campaign began in earnest in December, though some early samples date back to September 2011; more than 75 percent of subdomain activity, however, has occurred since December. There are multiple tiers to the attack, with different subdomains being created for different stages. The attacks start with a malicious ad redirecting users to the first tier of subdomains, which send the user to a page serving an Adobe Flash or Microsoft Silverlight exploit. The final page is rotated heavily, and sometimes those pages are live only for a few minutes, Cisco said.

"The same IP is utilized across multiple subdomains for a single domain and multiple domains from a single domain account," Biasini and Esler wrote. "There are also multiple accounts with subdomains pointed to the same IP. The addresses are being rotated periodically with new addresses being used regularly. Currently more than 75 unique IPs have been seen utilizing malicious subdomains."

Domain shadowing may soon supersede fast flux, a technique that allows hackers to stay one step ahead of detection and blocking technology. Unlike fast flux, which is the rapid rotation of a large list of IP addresses to which a single domain or DNS entry points, domain shadowing rotates in new subdomains and points those at a single domain or small group of IP addresses.

"When you think about it, this is likely the next evolution of fast flux. It allows attackers an easy way to come up with domains they can use in a short amount of time and move on," Williams said. "It doesn't cost them anything and it's tough to detect because it's difficult to use blocklisting technology to defend against it.
It’s not something we’ve observed before.” The attackers have zeroed in almost exclusively on GoDaddy accounts since the registrar is by far the biggest on the Internet; for now, that is the only commonality to the attacks carried out in this Angler campaign, Cisco said. “The accounts are largely random so there is no way to track which domains will be used next. Additionally, the subdomains are very high volume, short lived, and random, with no discernible patterns,” Biasini and Esler wrote. “This makes blocking increasingly difficult. Finally, it has also hindered research. It has become progressively more difficult to get active samples from an exploit kit landing page that is active for less than an hour. This helps increase the attack window for threat actors since researchers have to increase the level of effort to gather and analyze the samples.” Williams, meanwhile, warns that as security technologies catch up to domain shadowing, there is a risk that mitigations could impact legitimate traffic. “If the block list is made incorrectly, it could block both bad and legitimate traffic and harm an innocent victim,” Williams said. “If you know an attacker has credentials, you could make the case to block everything associated with a domain. That could also block the legitimate domain.” Source
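Cisco has not published detection code in this article, but the pattern described above, in which a legitimate registered domain suddenly sprouts large numbers of short-lived subdomains that all resolve to a handful of IPs, lends itself to a simple passive-DNS heuristic. The following Python sketch is a hypothetical illustration; the domain names, IP addresses, and thresholds are made up.

[code]
# Hypothetical heuristic (not Cisco's tooling): flag registered domains whose
# DNS logs show many distinct subdomains resolving to very few IP addresses.
from collections import defaultdict

# (queried FQDN, resolved IP) pairs, e.g. taken from passive-DNS logs (made up)
observations = [
    ("a1x9kq.example-victim.com", "203.0.113.10"),
    ("zr77pm.example-victim.com", "203.0.113.10"),
    ("q0b3nd.example-victim.com", "203.0.113.11"),
    ("mail.legit-company.com",    "198.51.100.7"),
]

def registered_domain(fqdn: str) -> str:
    # Naive: keep the last two labels. A real tool would use the Public Suffix List.
    return ".".join(fqdn.split(".")[-2:])

subdomains = defaultdict(set)
ips = defaultdict(set)
for fqdn, ip in observations:
    base = registered_domain(fqdn)
    subdomains[base].add(fqdn)
    ips[base].add(ip)

SUBDOMAIN_THRESHOLD = 50   # arbitrary example values
MAX_IPS = 3
for base, names in subdomains.items():
    # Many unique subdomains funnelling into very few IPs is suspicious.
    if len(names) > SUBDOMAIN_THRESHOLD and len(ips[base]) <= MAX_IPS:
        print("possible domain shadowing:", base)
[/code]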
-
OpenDNS has gone public with a new tool that uses a blend of analytics principles found outside information security to create a threat model for detecting domains used in criminal and state-sponsored hacking campaigns. NLPRank is not ready for production, said OpenDNS director of security research Andrew Hay, but the threat model has been proven out and false positives kept in check to the point where Hay and NLPRank’s developer Jeremiah O’Connor were satisfied that it could be shared publicly. What separates NLPRank from other analytics software that searches, for example, for typo-squatting domains used in phishing attacks, is that the OpenDNS tool also relies on natural language processing, ASN mappings, WHOIS domain registration information, and HTML tag analysis to weed out legitimate domains from the bad ones. The data comes from OpenDNS’ massive storehouses of DNS traffic (70 billion DNS queries daily), as well as from other sources provided by researchers investigating APT campaigns, for example. The spark for NLPRank’s development was a repeating pattern of evidence from a number of phishing attacks used to gain a foothold for APT groups. Certain themes such as fraudulent social media accounts or password reset requests purporting to be from popular services such as Facebook or PayPal were used to add urgency for the potential victim, enticing them to follow the link to trouble. “Using this malicious language and applying analysis to the domains, we can start picking them off prior to a campaign launching,” Hay said. O’Connor shared details in a blog post on the science behind the analytics, including algorithms used in bioinformatics and data mining, natural language processing techniques that allow him to develop a dictionary of malicious language used in these campaigns that helps the tool predict malicious domain activity. “NLPRank is designed to detect these fraudulent branded domains that often serve as C2 domains for targeted attacks,” O’Connor wrote, adding that the tool uses a minimum edit-distance algorithm used in spell-checkers and other applications to whittle down words used for typo-squatting domains and legitimate domains. “The intuition behind using this algorithm is that essentially we’re trying to define a language used by malicious domains vs. a language of benign domains in DNS traffic,” O’Connor said. Hay added that the domains used in the recently unveiled Carbanak APT bank heist, with losses anywhere between $300 million and $1 billion, were identified as malicious by NLPRank prior to the campaign going public during the recent Security Analyst Summit. Data from Carbanak, DarkHotel and other APT groups uncovered by Kaspersky Lab are among the data sets used to put NLPRank through its paces. “This has been incredibly successful in looking at phishing kits that, at face value, are identical to the parent company’s site,” Hay said, stressing that the tool looks at various low-level code, JavaScript hosted on the site, redirects and more in its analysis. 
"The model picks them off and starts analyzing the data, making sure it's associated with the parent company, that it was registered by someone associated with the parent domain through the WHOIS information, looking at how embedded HTML may be different versus the parent company and determining how much it deviates from the parent site."

Eventually the tool will be folded into OpenDNS offerings, but Hay said more analysis capabilities, such as expanded HTML and embedded script analysis, need to be added to further keep false positives at bay. "The false positive rate is low, but it's not at a point where we are comfortable putting it into production or turning on automated blocking," Hay said. "We want additional inputs to the model, but so far it's looking great."

Source
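The full NLPRank model is not described in detail here, but the minimum edit-distance component it borrows from spell-checkers is easy to sketch. In the Python example below, the brand list, candidate domains, and distance threshold are invented examples, not OpenDNS data.

[code]
# Sketch of the minimum edit-distance (Levenshtein) idea behind typo-squat
# detection. Brands, candidate domains, and the threshold are invented examples.
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance with two rolling rows.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

BRANDS = ["paypal", "facebook", "google"]          # example protected brands
candidates = ["paypa1-secure.com", "faceb00k-login.net", "weather-report.org"]

for domain in candidates:
    label = domain.split(".")[0]
    for brand in BRANDS:
        # Slide a brand-sized window across the domain label and keep the
        # smallest distance found.
        best = min(edit_distance(brand, label[i:i + len(brand)])
                   for i in range(max(1, len(label) - len(brand) + 1)))
        if 0 < best <= 2:   # close to, but not exactly, the brand name
            print(f"{domain}: suspiciously close to '{brand}' (distance {best})")
[/code]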
-
Microsoft today issued an advisory warning Windows users that Secure Channel, or Schannel, the Windows implementation of SSL/TLS, is vulnerable to the FREAK attack. Disclosed this week, FREAK (CVE-2015-1637) is the latest big Internet bug. It affects a number of SSL clients, including OpenSSL, and enables attackers to force clients to downgrade to weakened ciphers that can be broken, so that supposedly encrypted traffic can be sniffed via man-in-the-middle attacks.

Microsoft warned that Schannel is not immune to FREAK exploits, though it said it has not received any reports of public attacks. Windows users can expect either a security bulletin released on a regularly scheduled Patch Tuesday update, or an out-of-band patch. Microsoft said that Windows servers are not impacted if they are in their default configuration, in which export ciphers, such as the RSA cipher at issue in FREAK, are disabled. Microsoft suggested a few workarounds, which include disabling RSA key exchange ciphers via the registry on Windows Server 2003 systems. For later versions of Windows, Microsoft said RSA key exchange ciphers may be disabled using the Group Policy Object Editor.

The export ciphers are a remnant of the crypto wars of the 1980s and 1990s; SSL clients will accept the weaker RSA keys without asking for them. The RSA keys in question are 512 bits long and were approved by the U.S. government for overseas export; it was assumed that most servers no longer supported them.

"The export-grade RSA ciphers are the remains of a 1980s-vintage effort to weaken cryptography so that intelligence agencies would be able to monitor. This was done badly. So badly, that while the policies were ultimately scrapped, they're still hurting us today," cryptographer Matthew Green of Johns Hopkins University wrote in a blog post explaining the vulnerability and its consequences. "The 512-bit export grade encryption was a compromise between dumb and dumber. In theory it was designed to ensure that the NSA would have the ability to 'access' communications, while allegedly providing crypto that was still 'good enough' for commercial use. Or if you prefer modern terms, think of it as the original 'golden master key.'"

Given today's computing power, an attacker could crack the weaker keys in a matter of hours using processing power available from providers such as Amazon. "What this means is that you can obtain that RSA key once, factor it, and break every session you can get your 'man in the middle' mitts on until the server goes down. And that's the ballgame," Green said.

Source
-
Not long ago, criminals pushing the Dridex banking Trojan were using Microsoft Excel documents spiked with a malicious macro as a phishing lure to entice victims to load the malware onto their machines. Even though macros are disabled by default inside most organizations, the persistent hackers are still at it, this time using XML files as a lure.

Researchers at Trustwave today said that over the past few days, several hundred messages have been corralled that try to exploit users' trust in Office documents, with some clever social engineering thrown into the mix in an attempt to convince users to enable macros and thus download the banking malware onto their machines. The XML files are passed off as "remittance advice," or payment notifications, in the hope that some users will believe it's an innocent text file and execute the malicious code.

"XML files are the old binary format for Office docs and once you double click them to open, the file is associated with Microsoft Word and opens," said Karl Sigler, Trustwave threat intelligence manager. The malicious macro is compressed and Base64 encoded in order to slide through detection technology, Sigler said, adding that the attackers have also included a pop-up with instructions for the user on how to enable macros, with language that stresses macros must be enabled for the invoice to be viewed properly or to ensure proper security. "Which is the exact opposite of what this does," Sigler said. "It doesn't seem to be all that sophisticated. They're either trying to capitalize on a user's trust in XML files, or the fact that a user may not be that familiar with what that extension is."

If the user does follow through and execute the malware, Dridex behaves like most banking Trojans. It sits waiting for the user to visit an online banking site and then injects code into the bank's site in order to capture the user's credentials for their online account.

Sigler said this is the first time they've spotted XML docs used as a lure. As for macros, they've been disabled by default since Office 2007 was released. "Sometimes in large organizations, local administrators have the ability to enable macros," Sigler said. "Some organizations use them quite a bit, but it's not common. Most people leave the default settings. It's hard to say why these guys moved to XML. It could be that they're looking for a new attack vector and they weren't getting good click-through rates with the Excel documents. Maybe they were not getting people to enable macros the way they hoped and they're looking for a way to better their success rate."

Dridex is a descendant of Cridex and is in the GameOver Zeus family. GameOver Zeus has been used for years to great profit, particularly through wire fraud. It used a peer-to-peer architecture to spread and send stolen goods, opting to forgo a centralized command-and-control. P2P and domain generation algorithm techniques make botnet takedowns difficult and extend the lifespan of such malware schemes.

The previous Dridex campaign targeted U.K. banking customers with spam messages spoofing popular companies either based or active in the U.K. Separate spam spikes using macros started in October and continued right through mid-December; the messages contained malicious attachments claiming to be invoices from a number of sources, including shipping companies, retailers, software companies, financial institutions and others.

Source
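Trustwave's detection logic is not included in the article. As a rough, hypothetical illustration of how one might triage the kind of attachment described above, an .xml Office document carrying a compressed, Base64-encoded macro, here is a small Python sketch; the size threshold and magic-byte checks are assumptions chosen for the example, not a published signature.

[code]
# Hypothetical triage sketch (not Trustwave's detection): flag .xml attachments
# that contain an unusually large Base64 blob, the way the campaign above hides
# its compressed macro. Threshold and magic bytes are illustrative assumptions.
import base64
import binascii
import re
import sys

BASE64_RUN = re.compile(rb"[A-Za-z0-9+/=\r\n]{4000,}")  # long Base64-looking run

def looks_suspicious(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    for match in BASE64_RUN.finditer(data):
        blob = b"".join(match.group(0).split())      # strip line breaks
        try:
            decoded = base64.b64decode(blob, validate=True)
        except (binascii.Error, ValueError):
            continue
        # Embedded macro containers commonly decode to an ActiveMime (.mso)
        # wrapper or an OLE2 compound file; either is worth a closer look.
        if decoded.startswith(b"ActiveMime") or decoded.startswith(b"\xd0\xcf\x11\xe0"):
            return True
    return False

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "SUSPICIOUS" if looks_suspicious(path) else "ok")
[/code]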
-
Mandarin Oriental Hotel Group is investigating a credit card breach, according to a statement emailed to SCMagazine.com on Wednesday. “We can confirm that Mandarin Oriental has been alerted to a potential credit card breach and is currently conducting a thorough investigation to identify and resolve the issue,” according to the statement. “Unfortunately incidents of this nature are increasingly becoming an industry-wide concern. The Group takes the protection of customer information very seriously and is coordinating with credit card agencies and the necessary forensic specialists to ensure our guests are protected.” In a different statement posted to the Mandarin Oriental website on Thursday, the international hotel investment and management group said it has removed the “offending malware.” The company said it has top security systems in place, but the malware was undetectable by all anti-virus systems. The incident affected “an isolated number” of hotels in the U.S. and Europe, but none were impacted in Asia, according to the statement, which adds that further details cannot be disclosed due to the ongoing investigation. “Moreover, from the information we have to date, the breach has only affected credit card data and not any other personal guest data, and credit card security codes have not been compromised,” according to the statement. Mandarin Oriental is testing its security protocols and is taking additional steps to prevent a similar incident from occurring. Technology journalist Brian Krebs reported on Wednesday that he contacted the hotel group after financial industry sources identified a pattern of fraudulent charges on payment cards, all of which had been used recently at Mandarin hotels. Source