Posts: 3972 | Days Won: 22
Everything posted by begood
-
[WARNING] 0-day in Adobe Flash Player, Adobe Reader and Acrobat
begood replied to begood's topic in Exploituri
contagio: Jun 7 Adobe 0 day CVE-2010-1297 11d2f8d754f3e52893c631f0.pdf -
According to a survey conducted by Tufin Technologies of 242 IT professionals, mainly from organizations employing 1,000 to 5,000+ employees, 1 in 10 admitted that either they or a colleague have cheated to get an IT audit passed. However, it isn't all bad news: compared with a similar survey conducted in 2009, the number of people admitting to cheating has halved. Those who have cheated cite lack of time and resources as the main reasons, underlining the ever-increasing pressure on today's IT departments. With 25% responding that firewall audits take a week to conduct, attempting to avoid this painful process is understandable, if not excusable. What's more, 30% of respondents audit their firewalls only once every 5 years, and, even more worrying, 7% never conduct an audit at all. With this in mind, it's less surprising to find that 36% of IT professionals admit their firewall rule bases are a mess, increasing their susceptibility to hackers, network crashes and compliance violations.

The survey also found that:

- 31% only audit their firewalls once a year
- 22% don't know how long it takes to audit their firewalls
- Of those who admit their firewall rule base is a mess, 25% believe this makes their network susceptible to crashes and 38% susceptible to compliance violations
- 56% responded that automation tools would save them a lot of time

While companies pay a lot of attention to the firewall selection process, and invest millions in acquisition, much less attention and fewer resources are invested in making sure the firewalls are optimized at all times against potential security risks and compliance breaches. Despite our gloomy economic environment, it is encouraging to see that IT has remained high among budget priorities, with 59% of companies revealing that they have not been forced to focus on cost savings at the expense of their company's security. With malware at record highs and more and more compliance legislation, businesses are clear that it is not in their interests to cut IT spend. To view the complete survey statistics, go here.
-
The explosion in internet usage over the last 10 years has ensured that, from the biggest Fortune 500 companies to small one-man startups, almost every company now has a vital IT component (whether they know it or not). Every business, including yours, has valuable IT assets such as computers, networks, and data, and protecting those assets requires that companies big and small conduct their own IT security audits in order to get a clear picture of the security risks they face and how best to deal with those threats.

Related articles: IT Security's Security Audit Buyers Guide; Pinpointing Your Security Risks; The Application Audit Process; Vulnerability Scanning for Business.

The following are 10 steps to conducting your own basic IT security audit. While these steps won't be as extensive as audits provided by professional consultants, this DIY version will get you started on the road to protecting your own company.

1. Defining the Scope of Your Audit: Creating Asset Lists and a Security Perimeter

The first step in conducting an audit is to create a master list of the assets your company has, in order to decide later what needs to be protected through the audit. While it is easy to list your tangible assets (things like computers, servers, and files), it becomes more difficult to list intangible assets. To ensure consistency in deciding which intangible company assets are included, it is helpful to draw a "security perimeter" for your audit.

What is the Security Perimeter?

The security perimeter is both a conceptual and physical boundary within which your security audit will focus, and outside of which your audit will ignore. You ultimately decide for yourself what your security perimeter is, but a general rule of thumb is that it should be the smallest boundary that contains the assets you own and/or need to control for your own company's security.

Assets to Consider

Once you have drawn up your security perimeter, it is time to complete your asset list. That involves considering every potential company asset and deciding whether or not it fits within the "security perimeter" you have drawn. To get you started, here is a list of common sensitive assets:

- Computers and laptops
- Routers and networking equipment
- Printers
- Cameras, digital or analog, with company-sensitive photographs
- Data: sales, customer information, employee information
- Company smartphones/PDAs
- VoIP phones, IP PBXs (digital versions of phone exchange boxes), and related servers
- VoIP or regular phone call recordings and records
- Email
- Logs of employees' daily schedules and activities
- Web pages, especially those that ask for customer details and those backed by web scripts that query a database
- Web server computers
- Security cameras
- Employee access cards
- Access points (i.e., any scanners that control room entry)

This is by no means an exhaustive list, and you should at this point spend some time considering what other sensitive assets your company has. The more detail you use in listing your company's assets (e.g., "25 Dell Laptops Model D420, 2006" instead of "25 Computers") the better, because this will help you recognize more clearly the specific threats that face each particular company asset.
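As a minimal illustration of such an asset register, here is a sketch in Python; the assets, counts and the in_perimeter flag are invented for the example:

```python
import csv

# Hypothetical asset register: each entry records the detail the article
# recommends (specific model/version, not just "25 Computers") plus
# whether the asset falls inside the security perimeter you have drawn.
assets = [
    {"name": "Dell Laptop D420 (2006)",   "count": 25, "type": "hardware", "in_perimeter": True},
    {"name": "Customer database (sales)", "count": 1,  "type": "data",     "in_perimeter": True},
    {"name": "Office webcam (personal)",  "count": 2,  "type": "hardware", "in_perimeter": False},
]

# Keep only the assets the audit will actually cover.
audit_scope = [a for a in assets if a["in_perimeter"]]

with open("asset_register.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["name", "count", "type", "in_perimeter"])
    writer.writeheader()
    writer.writerows(audit_scope)

print(f"{len(audit_scope)} of {len(assets)} assets are in scope for the audit")
```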
2. Creating a 'Threats List'

You can't protect assets simply by knowing what they are; you also have to understand how each individual asset is threatened. So in this stage you will compile an overall list of the threats that currently face your assets.

What Threats to Include?

If your threat list is too broad, your security audit will end up focused on threats that are extremely small or remote. When deciding whether to include a particular threat on your threat list, keep in mind that your test should follow a sliding scale. For example, if you are considering the possibility of a hurricane flooding out your servers, you should weigh both how remote the threat is and how devastating the harm would be if it occurred. A moderately remote threat can still reasonably be included in your threat list if the potential harm to your company is large enough.

Common Threats to Get You Started

Here are some relatively common security threats to help you get started in creating your company's threat list:

- Computer and network passwords. Is there a log of all people with passwords (and of what type)? How secure is this ACL, and how strong are the passwords currently in use?
- Physical assets. Can computers or laptops be picked up and removed from the premises by visitors or even employees?
- Records of physical assets. Do they exist? Are they backed up?
- Data backups. What backups of virtual assets exist, how are they backed up, where are the backups kept, and who conducts the backups?
- Logging of data access. Each time someone accesses some data, is this logged, along with who, what, when, where, etc.?
- Access to sensitive customer data, e.g., credit card info. Who has access? How can access be controlled? Can this information be accessed from outside the company premises?
- Access to client lists. Does the website allow backdoor access into the client database? Can it be hacked?
- Long-distance calling. Are long-distance calls restricted, or is it a free-for-all? Should they be restricted?
- Emails. Are spam filters in place? Do employees need to be educated on how to spot potential spam and phishing emails? Is there a company policy that outgoing emails to clients must not contain certain types of hyperlinks?

3. Past Due Diligence & Predicting the Future

At this point you have compiled a list of current threats, but what about security threats that have not yet come onto your radar, or haven't even been developed? A good security audit should account not just for the security threats that face your company today, but for those that will arise in the future.

Examining Your Threat History

The first step towards predicting future threats is to examine your company's records and speak with long-time employees about past security threats the company has faced. Most threats repeat themselves, so by cataloging your company's past experiences and including the relevant threats on your threat list, you'll get a more complete picture of your company's vulnerabilities.

Checking Security Trends

In addition to checking for security threats specific to your particular industry, ITSecurity.com's recent white paper covers trends for 2007, and the site offers a regularly updated blog that will keep you abreast of new security threat developments. Spend some time looking through these resources and consider how these trends are likely to affect your business in particular. If you're stumped, you may want to Ask the IT Security Experts directly.

Checking with Your Competition

When it comes to outside security threats, companies that are ordinarily rivals often turn into one another's greatest asset.
By developing a relationship with your competition, you can build a clearer picture of the future threats your company will face by sharing information about security threats with one another.

4. Prioritizing Your Assets & Vulnerabilities

You have now developed a complete list of all the assets and security threats your company faces. But not every asset or threat has the same priority level. In this step, you will prioritize your assets and vulnerabilities so that you know your company's greatest security risks and can allocate your company's resources accordingly.

Perform a Risk/Probability Calculation

The bigger the risk, the higher the priority of dealing with the underlying threat. The formula for calculating risk is:

Risk = Probability x Harm

The risk formula just means that you multiply the likelihood of a security threat actually occurring (probability) by the damage that would occur to your company if the threat did occur (harm). The number that comes out of that equation is the risk that threat poses to your company.

Calculating Probability

Probability is simply the chance that a particular threat will actually occur. Unfortunately, there isn't a book that lists the probability that your website will be hacked this year, so you have to come up with those figures yourself. Your first step in calculating probability should be to do some research into your company's history with this threat, your competitors' history, and any empirical studies on how often most companies face this threat. Any probability figure you ultimately come up with is an estimate, but the more accurate the estimate, the better your risk calculation will be.

Calculating Harm

How much damage would a particular threat cause if it occurred? Calculating the potential harm of a threat can be done in a number of different ways. You might count up the cost in dollars of replacing the lost revenue or asset, or you might instead calculate the harm as the number of man-hours that would be lost trying to remedy the damage once it has occurred. Whatever method you use, it is important that you stay consistent throughout the audit in order to get an accurate priority list.
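To make the arithmetic concrete, here is a minimal Python sketch of the Risk = Probability x Harm calculation; the threats, probabilities and harm figures are invented for illustration, with harm expressed consistently in dollars as the article advises:

```python
threats = [
    # (threat, annual probability estimate, harm in dollars if it occurs)
    ("Website defaced by attacker",      0.10, 50_000),
    ("Laptop stolen from premises",      0.30, 15_000),
    ("Server room flooded by hurricane", 0.01, 200_000),
]

# Risk = Probability x Harm; sort so the biggest risks come first.
prioritized = sorted(
    ((name, prob * harm) for name, prob, harm in threats),
    key=lambda item: item[1],
    reverse=True,
)

for name, risk in prioritized:
    print(f"{name}: expected annual risk ${risk:,.0f}")
```

Even with rough estimates, ranking by the product rather than by probability or harm alone keeps a rare-but-catastrophic threat from being drowned out by frequent nuisances.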
Developing Your Security Threat Response Plan

Working down your newly developed priority list, there will be a number of potential responses you could make to any particular threat. The remaining six points in this article cover the primary responses a company can make to a particular threat. While these security responses are by no means the only appropriate ways to deal with a security threat, they will cover the vast majority of the threats your company faces, so you should go through this list of potential responses before considering any alternatives.

5. Implementing Network Access Controls

Network Access Controls, or NACs, check the security of any user trying to access a network. So, for example, if you are trying to come up with a solution for the security threat of your competition stealing company information from private parts of the company's website, applying network access controls is an excellent solution. Part of implementing effective NAC is to have an ACL (Access Control List), which records user permissions to various assets and resources. Your NAC might also include steps such as encryption, digital signatures, verifying IP addresses and user names, and checking cookies for web pages.

6. Implementing Intrusion Prevention

While Network Access Control deals with threats of unauthorized people accessing the network, by taking steps like password-protecting sensitive data, an Intrusion Prevention System (IPS) prevents more malicious attacks from the likes of hackers. The most common form of IPS is a second-generation firewall. Unlike first-generation firewalls, which were merely content-based filters, a second-generation firewall adds a rate-based filter to the content filter:

- Content-based. The firewall does deep packet inspection, a thorough look at actual application content, to determine whether there are any risks.
- Rate-based. Second-generation firewalls perform advanced analyses of either web or network traffic patterns or of application content, flagging unusual situations in either case.

7. Implementing Identity & Access Management

Identity and Access Management (IAM) simply means controlling users' access to specific assets. Under an IAM, users have to identify themselves, manually or automatically, and be authenticated. Once authenticated, they are given access to those assets for which they are authorized. An IAM is a good solution when trying to keep employees from accessing information they are not authorized to access. So, for instance, if the threat is that employees will steal customers' credit card information, an IAM solution is your best bet.

8. Creating Backups

When we think of IT security threats, the first thing that comes to mind is hacking. But a far more common threat to most companies is the accidental loss of information. Although it's not sexy, the most common way to deal with threats of information loss is to develop a plan for regular backups. These are a few of the most common backup options and questions you should consider when developing your own backup plan (a minimal scheduling sketch follows the list):

- Onsite storage. Onsite storage can come in several forms, including removable hard drives or tape backups stored in a fireproofed, secured-access room. The same data can be stored on hard drives that are networked internally but separated by a DMZ (demilitarized zone) from the outside world.
- Offsite storage. Mission-critical data could be stored offsite, as an extra backup to onsite versions. Consider worst-case scenarios: if a fire occurred, would your hard drives or digital tapes be safe? What about in the event of a hurricane or earthquake? Data can be moved offsite manually on removable media, or through a VPN (Virtual Private Network) over the internet.
- Secured access to backups. Occasionally the need to access data backups will arise. Access to such backups, whether to a fireproofed room or vault or to an offsite data center, physically or through a VPN, must be secure. This could mean issuing keys, RFID-enabled "smart pass cards", VPN passwords, safe combinations, etc.
- Scheduling backups. Backups should be automated as much as possible, and scheduled to cause minimum disruption to your company. When deciding on the frequency of backups, be aware that if your backups aren't frequent enough to be relevant when called upon, they are not worth conducting at all.
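Here is the minimal scheduling sketch mentioned above, in Python; the source and destination paths are hypothetical, and a real deployment would run something like this automatically from a scheduler such as cron:

```python
import tarfile
import time
from pathlib import Path

SOURCE = Path("/srv/company-data")       # hypothetical data to protect
DESTINATION = Path("/mnt/backup-drive")  # hypothetical onsite storage

def run_backup() -> Path:
    # Timestamped archives make it obvious how fresh each backup is,
    # which matters when deciding whether the backup frequency is
    # still relevant.
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = DESTINATION / f"company-data-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    return archive

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")
```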
9. Email Protection & Filtering

Each day, 55 billion spam messages are sent by email throughout the world. To limit the security risk that unwanted emails pose, spam filters and an educated workforce are a necessary part of every company's security efforts. So, if the threat you are confronting is spam email, the obvious (and correct) response is to implement an email security and filtering system for your company. While the specific email security threats confronting your company will determine the appropriate email protections you choose, here are a few common measures:

- Encrypt emails. When sending sensitive emails to employees at other locations, or to clients, emails should be encrypted. If you have international clients, make sure you use encryption that is allowed outside of the United States and Canada.
- Try steganography. Steganography is a technique for hiding information discreetly in the open, such as within a digital image. However, unless combined with something like encryption, it is not secure and can be detected.
- Don't open unexpected attachments. Even if you know the sender, if you are not expecting an email attachment, don't open it, and teach your employees to do the same.
- Don't open unusual email. No spam filter is perfect, but if your employees are educated about common spam techniques, you can help keep your company assets free of viruses.

10. Preventing Physical Intrusions

Despite the rise of new-generation threats like hacking and email spam, old threats still imperil company assets, and one of the most common is physical intrusion. If, for example, you are trying to deal with the threat of a person breaking into the office and stealing company laptops, and with them valuable company information, then a plan for dealing with physical intrusions is necessary. Here are some common physical threats along with appropriate solutions for dealing with them (a small encryption sketch follows the list):

- Breaking into the office: install a detection system. Companies like ADT have a variety of solutions for intrusion detection and prevention, including video surveillance systems.
- Stolen laptop: encrypt the hard drive. Microsoft offers the Encrypting File System (EFS), which can be used to encrypt sensitive files on a laptop.
- Stolen "screaming" smartphones. A service from Synchronica protects smartphones and PDAs should they be stolen. Once protected, a stolen phone cannot be used without an authorization code. If the code is not given correctly, all data is wiped from the phone and a high-pitched "scream" is emitted. Once your phone is recovered, the data can be restored from remote servers. Currently this particular service is limited to the UK, but comparable services are available throughout the world.
- Kids + pets = destruction: prevent unauthorized access. For many small-business owners, the opportunity to work from home is an important perk. But having children and/or pets invading office space and assets can often be a greater risk than that posed by hackers. By creating an appropriate-use policy and sticking to it, small-business owners can quickly deal with one of their most significant threats.
- Internal click fraud: education and blocks. Many web-based businesses run advertising such as Google AdSense or Chitika to add an extra revenue stream. However, inappropriate clicking of the ads by employees or family can cause your account to be suspended. Make employees aware of this, and prevent the company's live website from being viewed internally.
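And the small encryption sketch mentioned above: a Python example of file-level encryption using the third-party cryptography package, shown as an alternative illustration rather than the EFS feature named in the list; the data is invented, and in practice the key must be stored somewhere other than the laptop itself:

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

# Hypothetical sensitive data that would live on the laptop.
plaintext = b"customer list: ..."

key = Fernet.generate_key()  # in practice, store this away from the laptop
cipher = Fernet(key)

# Write only the ciphertext to disk; a thief who takes the laptop
# gets an unreadable blob.
token = cipher.encrypt(plaintext)
with open("customer-list.enc", "wb") as fh:
    fh.write(token)

# With the key, the data comes back intact.
assert cipher.decrypt(token) == plaintext
```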
Conclusion

These 10 steps to conducting your own IT security audit will take you a long way towards becoming more aware of the security threats facing your company, and will help you begin to develop a plan for confronting those threats. But it is important to remember that security threats are always changing, and keeping your company safe will require that you continually assess new threats and revisit your responses to old ones. For further research, visit IT Security's Security Audit Resource Center.
-
you reminded me of a sunny day back in 5th grade )))))))))
-
It's quicker to log onto 100 hubs and search for the files that store passwords, then use a tool to decrypt them. If I remember right, for Opera it was wand.dat, and for Firefox you need key3.db and signons.sqlite (correct me if I'm wrong, I'm going from long-faded memories).
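For the Firefox case, a minimal sketch of where such a tool starts, assuming the Firefox 3.x-era layout in which signons.sqlite holds a moz_logins table (the profile path is hypothetical); since the credentials themselves are encrypted with the key material in key3.db, this only lists which sites have stored logins:

```python
import sqlite3

# Hypothetical profile path; the real one lives under the Firefox profile dir.
conn = sqlite3.connect("/path/to/profile/signons.sqlite")
try:
    # The username/password columns are encrypted; the hostname is not.
    for (hostname,) in conn.execute("SELECT hostname FROM moz_logins"):
        print(hostname)
finally:
    conn.close()
```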
-
https://twitter.com/tnagareshwar/statuses/15612631414 Home Page - www.SecurityXploded.com
-
I can remember as a child the PSAs (see below for an example) about keeping your kids safe from predators. Times have surely changed in recent years. There are plenty of laws that are supposed to keep our kids safe, yet it seems that those who desire to hurt our children are coming up with more and more malicious ways of using social engineering to lure children into the dark corners of their depravity.

When the stories never cease to amaze you and you think you have seen it all, there comes a story that seems to defy all logic. Enter our present story. Prosecutors in New Jersey, USA, say that Jonathan Prime, a 20-year-old man, convinced 13- and 14-year-old boys to send him pictures of their genitals. How? The two young boys were frequent players of the game Call of Duty: World at War on Microsoft Live, and it seems that Jonathan was able to convince them that this was a condition of the clan he was starting. This wasn't a lone incident; he did this to many children. Many rejected him, but he was able to convince at least four of them by grooming them, getting them to comply, and even getting one to call him and have phone sex.

Despite the inherent WTH factor here (how could these kids fall for this? how could they believe that this really was a term of the contract?), those questions are beyond the scope of our site. What we will cover is what parents can do to keep their kids safe. How is it possible to keep your children safe without having to unplug the television and disconnect the internet? There are certain things that can be done, but the reason many fall short is that these steps don't involve a plug-in or a device. There are two steps that can keep your family safe:

Communication: Nothing beats just sitting your kids down and talking with them: telling them what is going on in the world and how malicious people think, telling them what signs to look for, and being involved in their lives. This can keep them safe. If kids are going to play online, consider muting all the other players; it is normally possible to talk only to people who are known friends, instead of random strangers. Gaming can be a social event, but it's best to keep it social with those you know. Parents can also use gaming as a chance to do something with their kids. If parents sit down and play games with the kids, they will better understand the potential issues that could be encountered, which puts them in a better position to provide guidance that is truly helpful.

Education: Right along with communication, teach your kids about the world and what is going on. If they are aware of malicious attacks and how these people think, they can be alert to their tactics. This doesn't mean you need to tell them all the gory details, but keeping them aware can go a long way in a good protection plan.

We always strive to learn something from the attacks we analyze, but truly in this one there are no redeeming qualities. All we can say is that it is one of those attacks that is pure evil and malicious, and there is not much to learn except: keep your kids safe. It's 10 p.m. Do you know where your children are?

Social Engineering being used by Child Predators | Social Engineering - Security Through Education
-
(PhysOrg.com) -- Researchers at MIT's McGovern Institute for Brain Research have developed a new mathematical model to describe how the human brain visually identifies objects. The model accurately predicts human performance on certain visual-perception tasks, which suggests that it’s a good indication of what actually happens in the brain, and it could also help improve computer object-recognition systems. How the brain recognizes objects
-
These are my views on careers in information security, based on my own experience, and your mileage may vary. The information below will be most appropriate if you live in New York City, you're interested in application security, pentesting, or reversing, and you are early in your infosec career.

Employers

In my opinion, there are five major employers in the infosec industry (not counting academia). I ranked them below according to the proportion of talent I think they employ:

- (25%) Government (including consultants)
- (20%) Finance
- (15%) Software Vendors
- (15%) Consulting (non-gov)
- (24%) Other (retail, healthcare, etc)
- (1%) Academia

The industry you work in will determine the major problems you have to solve. For example, the emphasis in finance is on reducing risk at the lowest cost to the business (opportunities for large-scale automation). Consulting, on the other hand, often means selling people on the idea that X is actually a vulnerability, and researching to find new ones.

Roles

I primarily split infosec jobs into internal network security, product security, and consulting. I further break down these classes of jobs into the following roles:

- Application Security (code audits/app assessments)
- Attacker (offensive) <- try not to do this one
- Compliance
- Forensics
- Incident Handler
- Manager
- Network Security Engineer
- Penetration Tester
- Policy
- Researcher
- Reverse Engineer
- Security Architect

Each of the roles above requires a different, highly specialized body of knowledge. This website is a great resource for application security and penetration testing, but you should find other resources if you are interested in a different role.

Learn

Fortunately, there are dozens of good books written about each topic inside information security. Dino Dai Zovi has an excellent reading list, as does Tom Ptacek, and Richard Bejtlich has recommendations from another perspective (bonus: Richard's book reviews are usually spot-on). I would personally recommend looking at:

- Gray Hat Hacking (the textbook for this course)
- The Myths of Security (a quick read that covers larger issues)
- Hacking: The Next Generation (a quick read that covers the latest in web security and then some)
- any book from O'Reilly on a scripting language of your choice

If you're not sure what you're looking for, browse the selection offered by O'Reilly; they are probably the most consistent, high-quality book publisher in this industry. Don't forget that reading a book alone won't impart any skills beyond the conversational. You need to practice or create something based on what you read to really gain value and understanding from it.

University

The easiest shortcut to finding a university with a dedicated security program is to look through the NSA Centers of Academic Excellence (NSA-COE) institution list. This certification has become watered down as more universities have obtained it, so you should focus your search on those that have obtained the newer COE-R certification. There are a number of universities with special programs in security that are not on this list, so I listed some of the best courses I know about below:

- Secure Software Principles at RPI (listed first for a reason)
- Computer Security (Graduate), taught by Hovav Shacham at UCSD
- Advanced Vulnerability Assessment, taught by Chris Eagle at NPS
- Intro to Web Application Security, taught by Edward Z. Yang at MIT IAP 2009
- Intro to Software Exploitation, taught by Nathan Rittenhouse at MIT IAP 2009
- Web Programming and Security at Stanford
- Computer and Network Security at Stanford
- Malware Analysis and Antivirus Technologies at the University of Helsinki (alternate)
- Binary Auditing and Reverse Code Engineering at the University of Bielefeld
- Software Security Assessment, taught by Gregory Ose at DePaul

Once in university, take classes that force you to write code in large volumes to solve hard problems. IMHO, courses that focus mainly on theoretical or simulated problems provide limited value. Ask upper-level students for recommendations if you can't tell the CS courses with programming apart from the CS courses done entirely on paper. Another way to frame this is to go to school for software development rather than computer science.

Capture the Flag and War Games

If you want to acquire and maintain technical skills, and you want to do it fast, play in a CTF or jump into a wargame. One thing to note is that many of these challenges attach themselves to conferences (of all sizes), and by playing in them you will likely miss the entire rest of the conference. Try not to overdo it; conferences are useful in their own way (see below).

- NYU:Poly CSAW CTF (here are the reversing challenges from 2009)
- UCSB iCTF
- Defcon CTF Pre-qualifications
- wargames at smashthestack.org
- wargames at intruded.net
- calendar of upcoming CTF competitions

There are some defense-only competitions that disguise themselves as normal CTF competitions, mainly the Collegiate Cyber Defense Challenge (CCDC) and its regional variations, and you should avoid them. They are exercises in system administration and frustration, and they will teach you little about security or anything else. They are incredibly fun to play as a Red Team, though.

Communication

In any role, the majority of your time will be spent communicating with others, primarily through email and meetings and less by phone and IM. Your role and employer will determine whether you speak more with internal infosec teams, non-security technologists, or business users. For example, expect to communicate more with external technologists if you do network security for a financial firm. Tips for communicating well in a large organization:

- Learn to write clear, concise, and professional email.
- Learn to get things done and stay organized. Do not drop the ball.
- Learn the business that your company or client is in. If you can speak in terms of the business, your arguments a) to not do things, b) to fix things, and c) to do things that involve time and money will be much more persuasive.
- Learn how your company or client works, i.e. the key individuals, processes, and other motivators that factor into getting things done.
- If you are still attending a university, as with CS courses, take humanities courses that force you to write.

Meet People

- CitySec: informal meetups without presentations, once monthly, in most cities (NYSEC; google for others)
- OWASP: formal meetups with presentations about web security, usually quarterly (OWASP NY/NJ)
- ISSA and ISC2 focus on policy, compliance and other issues that will be of uncertain use to a new student in this field. Similarly, InfraGard mainly focuses on law enforcement-related issues.

Conferences

If you've never been to an infosec conference before, use the Google calendar below to find a low-cost local one and go.
There have been students of mine who think that attending a conference will be some kind of test and put off going to one for as long as possible. I promise I won't pop out of the bushes with a final exam and publish your scores afterward. (Information Security Conferences Calendar.)

If you go to a conference, don't obsess over attending a talk during every time slot. The talks are just bait to lure all the smart hackers to one location for a weekend: you should meet the other attendees! If a particular talk was interesting and useful, then you can and should talk to the speaker. If you're working somewhere and are having trouble justifying conference attendance to your company, the Infosec Leaders blog has some helpful advice.

Certifications

This industry requires specialized knowledge and skills, and studying for a certification exam will not help you gain them. In fact, in many cases it can be harmful, because the time you spend studying for a test will distract you from doing anything else in this guide. That said, there are inexpensive, vendor-neutral certifications that you can reasonably obtain with your current level of experience to help set your resume apart, like the Network+ and Security+ or even a NOP, but I would worry about certifications least of all in your job search or professional development. In general, the two best reasons to get certifications are:

- You are being paid to get certified, through paid training and exams or sometimes through an automatic pay raise after you get the certification (common in the government).
- Your company or your client is forcing you to get certified. This is usually to help with a sales pitch, i.e. "You should hire us because all of our staff are XYZ certified!"

Random Links (some better than others)

- Information Security Leaders Blog
- Advice for Computer Science College Students
- Organizing and Participating in Computer Network Attack and Defense Exercises
- vf on how she got started in security

Friends of the Class

- Attack Research
- Gotham Digital Science
- Intrepidus Group
- iSEC Partners
- MANDIANT
- Matasano Security
- McAfee
- TippingPoint DVLabs
- Vulnerability Research Labs
- zero(day)solutions

There are a number of internal security and product security teams that I've worked with in the past who I'm not sure would appreciate being called out like this. Needless to say, there are dozens of financial, healthcare, and technology companies in NYC that require information security to run their businesses, and they shouldn't be hard to find.

Penetration Testing and Vulnerability Analysis - Careers - Information Security Careers Cheatsheet
-
Automatically steal SAM and SYSTEM Files using HashGrab
begood replied to begood's topic in Tutoriale video
Do you have LM and NTLM hashes, or just NTLM? In the first case, all you need are the rainbow tables from freerainbowtables.com + rcracki_mt from SourceForge + lm2ntlm from blog.distracted.nl. In the second case, you need the NTLM tables (frt) + rcracki_mt. -
update: the poor guy made himself another account on trilulilu to show off his skill (1) and I think this is the loser's forum: n-IT.Info • Index page
-
He would modify them, sign them, and then bind them with a RAT. He's probably dreaming of a big, beautiful botnet. Ain't gonna happen on RST while I'm here!
-
2010-1992=? Real name: Gabriel. Nick: tr0ja3n. Age: 18. Skills: VB6, VB.NET, HTML/PHP. I'm a beginner at all of them. I hope to learn more from you, and that you won't give me grief if I post something silly. I hope... hope? (continued)
-
Have you ever wondered what exactly DCOP is, or where your drivers are hidden?

The main problem you face when you're attempting to lift the lid on what makes Linux tick is knowing where to start. It's a complicated stack of software that's been developed by thousands of people. Following the boot sequence would be a reasonable approach, explaining what Grub actually does before jumping into the initiation of a RAM disk and the loading of the kernel. But the problem with this is obvious: mention Grub too early in any article and you're likely to scare many readers away. We'd have the same problem explaining the kernel if we took a chronological approach.

Instead, we've opted for a top-down view, tackling each stratum of Linux technology from the desktop to the kernel as it appears to the average user. This way, you can descend from your desktop comfort zone into the underworld of Linux archaeology, where we'll find plenty of relics from the bygone era of multi-user systems, dumb terminals, remote connections and geeks gone by. This is one of the things that makes Linux so interesting: you can see exactly what has happened, why and when. It enables us to dissect the operating system in a way we couldn't attempt with some alternatives, while at the same time teaching you something about why things work the way they do on the surface.

Level 1: Userspace

Before we delve into the Linux underworld, there's one idea that's important to understand. It's a concept that links userspace, privileges and groups, and it governs how the whole Linux system works and how you, as a user, interact with it. It's based on the premise that a normal desktop user shouldn't be able to make important system changes without proving that they have the correct administrator's privileges to do so. This is why you're asked for a password when you install new packages or open your distribution's configuration panels, and it's why a normal user can't see the contents of the /root directory or make changes to specific files.

Your distribution will use either sudo or an administrator account to grant access to the system-wide configurable parts of your system. The former typically works only for a single session or command, and is used as an ad-hoc solution for normal day-to-day use, much like the way both Windows 7 and OS X handle privileges.

USER CONTROL: Groups make it possible to enable and disable certain services on a per-user basis

With a full-blown system administrator's account, on the other hand, it's sometimes far too easy to stay logged in for too long (and thus more likely that you'll make an irreversible mistake or change). But the reason for both methods is security. Linux uses a system of users, groups and privileges to keep your system as secure as possible. The idea is that you can mess around with your own files as much as you like, but you can't mess with the integrity of the whole system without at least entering a password. It might seem slightly redundant when you are the only user of your system, but as we'll see with many other parts of Linux, this concept is a throwback to a time when the average system had many users and only an administrator or two. Linux is a variant of the Unix operating system, which has been one of the most common multi-user systems for decades. This means that multi-user functionality is difficult to avoid in Linux, but it's also one of the reasons why Linux is so popular: multi-user systems have to be secure, and Linux has inherited many of the advantages of these early systems.

A user account on Linux is still self-contained, for example. All of your personal files are held within your own home directory, and it's the same for other users of the system. You can usually see their names by looking at the contents of /home with your file manager and, depending on their permissions, even look inside other people's home folders. But who can and can't read their contents is governed by the user who owns the files, and that's down to permissions.
Permissions

Every file and directory on the Linux filesystem has nine attributes that define how it can be accessed. These attributes correspond to whether the user, the group, and anyone else can read, write and execute the file. You might want to share a collection of photos with other users of your system, for example: if you create a group called 'photos', add all the users you'd like to have access to the group, and set the group permissions on the photos folder, you can limit who has access to your images. Any modern file manager can perform this task, usually by selecting a file and choosing its properties to change its permissions.

This is also how your desktop stores configuration information for your applications, tools and utilities. Hidden directories (those that start with a full stop) are often created within your home directory, and within these you'll find text files that your desktop and applications use to store your setup. No one else can see them, and this is one of the reasons why porting your current home directory to a new distribution can be such a good idea: you'll keep all your settings, despite the entire operating system changing.
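A minimal Python sketch of reading and changing those nine permission bits for the photos example; the path is hypothetical, and the os.chmod call is the programmatic equivalent of chmod 750:

```python
import os
import stat

path = "/home/user/photos"  # hypothetical shared directory

# Show the current permission bits in the familiar rwx notation.
mode = os.stat(path).st_mode
print(stat.filemode(mode))  # e.g. 'drwxr-x---': user rwx, group r-x, others nothing

# Let the owner do everything and the 'photos' group read and enter
# the directory, while everyone else gets nothing.
os.chmod(path, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)
```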
Level 2: Desktop

If you come to Linux from Windows or OS X rather than through the server room, the idea that there's something called a desktop is quite a strange one. It's like trying to explain that Microsoft Windows is an operating system to someone who just thinks it's 'the computer'. The desktop is really just a special kind of application designed to aid communication between the user and any other applications you may run. This communication part is important, because the desktop always needs to know what's happening and where; only then can it do clever things like offer virtual desktops, minimise applications, or divide windows into different activities.

There are two ways that a desktop helps this to happen. The first is through something called its API, the Application Programming Interface. When a programmer develops an application using a desktop's API, they're able to take advantage of lots of things the desktop offers. It could be spell checking, for example, or the list of contacts you keep in another app that uses the same API.

MOBLIN: Moblin and UNR make good use of the Clutter framework to offer accelerated and smooth scrolling graphics

When lots of applications use the same API, it creates a much more homogeneous and refined experience, and that's exactly what we've come to expect of both the Gnome and KDE desktops. The reason why K3b works so well with your music files is that it uses the same KDE API your music player uses, and it's the same with many Gnome apps too.

Toolkits

But applications designed for a specific desktop environment don't have to use any one API exclusively. There are probably more APIs than there are Linux distributions, and they can do anything from complex mathematics to hardware interfacing. This is where you'll hear terms like Clutter and Cairo bandied around, as these are additional toolkits that help a programmer build more unified-looking applications. Clutter, for example, is used by both Ubuntu Netbook Remix and Moblin to create hardware-accelerated, smoothly animated GUIs for low-power devices. It's Clutter that scrolls the top bar down in Moblin, for instance, and provides the fade-in effects of the launch menu in UNR. Cairo helps programmers create vector graphics easily, and is the default rendering engine in GTK, the toolkit behind Gnome, for many of its icons. Rather than locking an image to a specific resolution, vector-based images can be infinitely scaled, making them perfect for images that will be used at a variety of resolutions.

Inter-process communication

The second way the desktop helps is through something called 'inter-process communication'. As you might expect from its name, this helps one process talk to another, which in the case of a desktop usually means one application talking to another. This is important because it helps a desktop feel cohesive: your music player might want to know when an MP3 player has been connected, for example, or your wireless networking software may want to use the system-wide notification system to let you know it has found an open network. In general terms, inter-process communication is the reason why GTK apps perform better on the Gnome desktop and KDE apps work well with KDE, but the great thing about both desktops is that they use the same compatible method for inter-process communication: a system called D-BUS.
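As a small illustration of D-BUS in action, here is a hedged Python sketch (using the dbus-python package) that asks the desktop's notification service to display a message, the same mechanism the wireless-network example above relies on; it assumes a running desktop session:

```python
import dbus  # third-party 'dbus-python' package

# Connect to the per-login session bus and locate the desktop's
# notification service by its well-known name and object path.
bus = dbus.SessionBus()
proxy = bus.get_object("org.freedesktop.Notifications",
                       "/org/freedesktop/Notifications")
notify = dbus.Interface(proxy, "org.freedesktop.Notifications")

# Notify(app_name, replaces_id, icon, summary, body, actions, hints, timeout_ms)
notify.Notify("demo", 0, "", "Open network found",
              "Connect to 'coffee-shop-wifi'?", [], {}, 5000)
```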
So why do Gnome and KDE feel so different from one another? It's because they use different window managers. The idea of a window manager stretches right back to the time when Unix systems first crawled out of the primordial soup of the command line and started to display a terminal within a window. You could drag this single window across the cross-hatched background, and open other terminals that you could also manipulate, thanks to something called TWM, an acronym that reputedly stood for Tom's Window Manager. It didn't do much, but it did free the user from pages of text. You could move windows freely around the display, resize them, maximize them and let them overlap one another. And this is exactly what Gnome and KDE's window managers are still doing today.

KDE's window manager, dubbed KWin, augments the moving and management components of TWM with some advanced features, such as its new-found abilities to embed any window within a tabbed border, snap applications to an area of the screen, or move specific applications to preset virtual activities on their own desktops. KWin also recreates plenty of compositing effects, such as window wobble, drop shadows and reflections, an idea pioneered by Compiz. Compiz is yet another window manager, but rather than adding functionality, it was created specifically to add eye-candy to the previously static world of window management. It is still the default replacement for Gnome's window manager (Metacity), and you can get it on your Gnome machine by enabling the advanced effects in the Visual Effects panel. You'll find that it seamlessly replaces the default drawing routines with hardware-accelerated compositing.

Dependencies

One of the biggest hurdles for people when they switch to Linux is the idea that you can't simply download an executable from the internet and expect it to run. When a new version of Firefox is released, for example, you can't just grab a file from mozilla.org, save it to your desktop and double-click on it to install the new version. A few distributions are getting close to this ideal, but that's the problem: it's distribution-dependent, and we're no closer to a single solution for application installation than we were 10 years ago.

The problem comes down to dependencies and the different ways distributions try to tame them. A dependency is simply a package that an application needs in order to work properly. These are normally the APIs the developers have used to help them build the application, and they need to be included because the application uses parts of their functionality. When they're bundled in this way they're known as libraries, because an app will borrow one or two components from a library to add to its own functionality. Clutter is a dependency for both Moblin and UNR, for instance, and it would need to be installed for either desktop to work. And while Firefox may seem relatively self-contained on the surface, it has a considerable list of dependencies, including Cairo, a selection of TrueType fonts and even an audio engine.

Other operating systems solve this problem by statically linking applications to the resources they require. This means that they bundle everything an app needs into one file. All dependencies are hidden within the setup.msi file on Windows, for example, or the DMG file on OS X, giving the application or utility everything it needs to run without any further additions. The main disadvantage of this approach is that you typically end up with several different versions of the same library on your system. This takes up more space, and if a security flaw is found, you'll have to update all the applications rather than just the single library.

Level 3: Beneath the surface

X is a stupid name for the system responsible for drawing the windows on your screen and for managing your mouse and keyboard, but that's the name we're stuck with. As with the glut of programming languages called B, C, C++ and C#, X got its name because it's the successor to a windowing system called W, which at least makes a little more sense. X has been one of the most important components in the Linux operating system almost from its inception. It's often criticised for its complexity and size, but there can't be many pieces of software that have lasted almost 20 years, especially when graphics and GUIs have changed so much.

But there's something even more confusing about X than its name, and that is its use of the terms 'client' and 'server'. This relationship hails back to a time before Linux, when X was developed to work on dumb, cheap screens and keyboards connected to a powerful Unix mainframe system.

XTERM: The original XTerm is still the default failsafe terminal for many distributions, including Ubuntu

The mainframe would do all the hard work, calculating the contents of windows and the shape of the GUI, while all the screen had to do was handle the interaction and display the data.
To ensure that this connectivity wasn't tied to any single vendor, an open protocol was created to shuffle the data between the various devices, and the result was X.

Client-server confusion

What is counter-intuitive is that the server in this equation is the terminal, the bit with the screen and keyboard; the client is the machine with all the CPU horsepower. Normally, in client-server environments, it's the other way around, with the more powerful machine being called the server. X swaps this around because it's the terminal that serves resources to the user, while the applications use these resources as clients. Now that both the client and the server run on the same machine, these complications aren't an issue. Configuration is almost automatic these days, but you can still exploit X's client-server architecture. It's the reason why you can have more than one graphical session on one machine, for example, and why Linux is so good for remote desktops.

The system that handles authentication when you log into your system is called PAM (Pluggable Authentication Modules), which, as its name suggests, can implement many different types of security systems through the use of modules. Authentication, in this sense, is a way of securing your login details and making sure they match those in your configuration files without the data being snooped or copied in the process. If a PAM module fails the authentication process, then it can't be trusted. Installed modules can be found in the /etc/pam.d/ directory on most distributions. If you use Gnome, there's one to authenticate your login at the Gdm screen, as well as to enable the auto-login feature. There are common modules for handling the standard login prompt for the command line, as well as for popular commands like passwd, cvs and sudo. Each uses PAM to make sure you are who you say you are, and because it's pluggable, the authentication modules don't always have to be password-based. There are modules you can configure to use biometric information, like a fingerprint, or an encrypted key held on a USB thumb drive. The great thing about PAM is that these methods are disconnected from whatever it is you're authenticating, which means you can freely configure your system to mix and match.

Command-line shells

The thing that controls the inner workings of your computer is known as a shell, and shells can be either graphical or text-based. Before graphical displays were used to provide interactive environments over a network, text-based displays were the norm, and this layer is still a vitally important part of Linux. Shells hide beneath your GUI, and often protrude through the GUI level when you need to accomplish a specific task that no GUI design has yet been able to contain. There are many graphical applications that can open a window onto the world of the command line, with Gnome's Terminal and KDE's Konsole being two of the most common. But the best thing about the shell is that you don't need a GUI at all. You may have seen what are known as virtual consoles, for example: the login prompts that appear when you hold the Alt key and press F1-F6. If you log in with your username and password through one of these, you'll find a fully functional terminal, which can be particularly handy if your X session has crashed and you need to restart it. Consoles like these are still used by many system administrators and normal desktop users today.
It takes less bandwidth to send text over a network, and text is easier to reconstruct than its graphical counterpart, which makes it ideal for remote administration. This also means that the command-line interface is more capable than a graphical environment, if you can cope with the learning curve. By default, if you don't install the X Window System, most distributions will fall back to what's known as the Bourne Again Shell, Bash for short.

THE TERMINAL: Most Linux installations offer more than one way of accessing a terminal, and more than one terminal

Bash is the command line that most of us use, and it enables you to execute scripts and applications from anywhere on your system. If you don't mind the terse user interface of text-based systems like this, you can accomplish almost anything with the command line. There are many different shells, and each is tailored for a specific type of user. You might want a programming-like interface (C Shell), for example, or a super-powerful do-everything shell (Z Shell), but they all offer the same basic functionality, and to get the best out of them you need to understand something about the Linux filesystem.

Level 4: The kernel and friends

We're now moving into the lower levels of the Linux operating system, leaving behind the realm of user interaction, GUIs, command lines and relative simplicity. The best way of explaining what goes on at this level is to walk through the booting process up to the point where you can choose either a graphical session or the command line, starting with the first thing you see when you turn your machine on.

The init process is used by many distributions, including Debian and Fedora, to launch everything your operating system needs to function from the moment it leaves the safety of Grub. It has a long history: the version used by Linux is often written as sysvinit, which shows its Unix System V heritage. Everything from Samba to SSH needs to be started at some point, and init does this by trawling through a script for each process in a specific order, defined by a number at the beginning of each script's name. Which scripts are executed depends on something called the runlevel of your system, and this differs from one distribution to another, especially between distros based on Fedora and those based on Debian.

GUFW: You don't have to mess around with Iptables manually if you don't want to. There are many GUIs, like GUFW, that make the job much easier to manage

You can see this in action by using the init command to switch runlevels manually. On Debian-based systems, type init 1 for single-user mode, and init 5 for a full graphical environment. Older versions of Fedora, on the other hand, offer a non-networking console login at runlevel 2, network functionality at level 3, and a full-blown GUI at level 5, and each process is run in turn as your system boots. This can create a bottleneck, especially when one process is waiting for network services to be enabled. Each script needs to wait for the previous one to complete before it can run, regardless of how many other system resources are being under-utilised. If you think the init system seems fairly antiquated, you're not alone. Many people feel the same way, and several distributions are considering a switch from init to an alternative called upstart.
Most notably, the distribution that currently sponsors upstart's development, Ubuntu, now uses it as the default booting daemon, as does Fedora, and the Debian maintainers have announced their intention to switch for the next release of their distribution. Upstart's great advantage is that it can run scripts asynchronously. This means that while one script is waiting for a network connection to appear, another can be configuring hardware or initiating X. It can even use the same scripts as init, making the boot process quicker and more efficient, which is one of the main reasons why the latest versions of Ubuntu and Fedora boot so quickly in comparison with their older counterparts.

The kernel

We've now covered almost everything, with one large exception: the kernel itself. As we've already discussed, the kernel is responsible for managing and maintaining all system resources. It's at the heart of a running Linux system, and it's what makes Linux, Linux. The kernel handles the filesystem, manages processes, loads drivers, and implements networking, userspace, memory and storage. Surprisingly, for the normal user there isn't that much to see. Other than the elements exposed through the /proc and /sys filesystems, and the various processes that happen to be running in the background, most of these management systems are transparent. But some elements are visible, and the most notable of these is the driver framework used to control your hardware.

Most distributions choose to package drivers as modules rather than as part of the monolithic kernel, which means they can be loaded and unloaded as and when you need them. Which kernel modules are included and which aren't depends on your distribution, but if you've installed the kernel source code, you can usually build your own modules without too much difficulty, or install them through your distribution's package manager. To see what modules are running, type lsmod as a system administrator to list all the modules currently plugged into the kernel. Next to each module you'll see listed any dependencies. Like the software variety, these are a requirement for the module to work correctly.

Modules are kernel-specific, which is why your Nvidia driver might sometimes break if your distribution automatically updates the kernel. Nvidia's GLX module needs to be built against the current version of the kernel, which is what it attempts to do when you run the installer. Fortunately, you can install more than one version of a module, and each will be automatically detected when you choose a new kernel from the Grub menu. This is because all the various modules are hidden within the /lib/modules directory, which itself contains further directories named after kernel versions. You can find out which version of the kernel you're running by typing uname -a. Depending on your distribution, you can find many kernel driver modules in the /lib/modules/kernel_name/kernel/drivers directory, and this is sometimes useful if your hardware hasn't been detected properly. If you know exactly which module your hardware should use, you can load it with modprobe followed by the module name. You may find that your hardware then works without any further configuration, but it is also wise to check your system logs to make sure the hardware is being used as expected. You can remove modules from memory with the rmmod command, which is useful if Nvidia's driver installer complains that a driver is already running.
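lsmod itself just formats the contents of the /proc/modules file; here is a minimal Python sketch of the same idea:

```python
# /proc/modules lists each loaded kernel module with its size, reference
# count and the modules that currently use it (a '-' means none).
with open("/proc/modules") as fh:
    for line in fh:
        name, size, refcount, users = line.split()[:4]
        users = "" if users == "-" else users.rstrip(",")
        print(f"{name:<24} used by: {users or '(nothing)'}")
```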
Iptables

One of the more unusual modules you'll find listed by lsmod is ip_tables. This is part of one of the most powerful aspects of Linux: its online security. Iptables is the system used by the kernel to implement the Linux firewall. It can govern all packets coming into and out of your system using a complex series of rules. You can change the configuration in real time using the iptables command, but unless you're an expert this can be difficult to understand, especially when your computer's security is at stake. This is a reflection of the complexity of the networking stack rather than of Iptables itself, and is a necessary side effect of trying to handle several different layers of network data at the same time. If you're used to other systems and you don't want to configure Iptables manually, we'd recommend a GUI application like Firestarter, or Ubuntu's ufw, which was developed specifically to make Iptables easier to use. When it's installed, you can quickly enable the firewall by typing ufw enable as root, for instance. You can allow or block specific ports with the ufw allow and ufw deny commands, or substitute the port number with the name of the service you want to block. You can find a list of service names in the /etc/services file, and if you're really stuck, you can install an even more user-friendly front-end to Iptables by installing the gufw package.

It's not the end

We've uncovered all the essential aspects of the Linux operating system, and we hope you've now got a much better understanding of how it all hangs together. One of the best things about Linux is that you're free to experiment and change things freely. This is one of the best ways of learning about the system and what it's capable of, as long as you don't try it in a production environment! Try a virtual machine running your favourite distribution instead, and if you need any help or clarification, try the LXF Forums at www.linuxformat.co.uk/forums.

Read more: How Linux works | News | TechRadar UK
-
rep+ you nailed it ) razorica, you're out of here. Ban + all posts deleted.
-
WARNING! Social network websites can be hazardous if you don't change the default settings! Instructions: start with the "5 Tips" below, then configure your Facebook account with the suggested privacy and security settings in this guide. These settings should be your baseline; adjust them based on your own needs and level of risk. Please read this guide and pass it on to friends and family members! http://socialmediasecurity.com/downloads/Facebook_Privacy_and_Security_Guide.pdf
-
The last few posts could have been handled via private messages.
-
Traffic, traffic, but if we're not talking about making money off the blog, what do you need traffic for? Ego?
-
I've noticed that lately almost all of you are moving to a purchased domain, most of them .ro. Why? What are the advantages? Honestly, I don't see any, unless you want to make money off the blog, and that won't work out for very many of you, no offence.
-
pink + happy fonts = gay.
-
He probably has a 0-day exploit for Linux; all of his attacks point in that direction. Run nmap against a few ex-targets so we can see what they all have in common: which Linux kernel, which ports, etc. // he has only attacked Linux and Win2003
-
Disable Flash until the next update! The problem is also present in Yahoo Messenger, in the ads. Try to disable everything, otherwise you risk getting burned. Adobe - Security Advisories: Security Advisory for Flash Player, Adobe Reader and Acrobat. Later edit: Adobe Reader and Acrobat 8.x are confirmed not vulnerable. Update: install the updates from Adobe!! Flash Player 10.1.53.64 (IE), Flash Player 10.1.53.64 (Non-IE)